feat: Centralize AI prompts into JSON templates (#882)
* centralize prompt management
* add changeset
* add variant key to determine prompt version
* update tests and add prompt manager test
* determine internal path, don't use projectRoot
* add promptManager mock
* detailed prompt docs
* add schemas and validator packages
* add validate prompts command
* add schema validation
* update tests
* move schemas to src/prompts/schemas
* use this.promptsDir for better semantics
* add prompt schemas
* version schema files & update links
* remove validate command
* expect dependencies
* update docs
* fix test
* remove suggestmode to ensure clean keys
* remove default variant from research and update schema
* now handled by prompt manager
* add manual test to verify prompts
* remove incorrect batch variant
* consolidate variants
* consolidate analyze-complexity to just default variant
* consolidate parse-prd variants
* add eq handler for handlebars
* consolidate research prompt variants
* use brevity
* consolidate variants for update subtask
* add not handler
* consolidate variants for update-task
* consolidate update-tasks variants
* add conditional content to prompt when research used
* update prompt tests
* show correct research variant
* make variant names link to below
* remove changeset
* restore gitignore
* Merge branch 'next' of https://github.com/eyaltoledano/claude-task-master into joedanz/centralize-prompts
  # Conflicts:
  #   package-lock.json
  #   scripts/modules/task-manager/expand-task.js
  #   scripts/modules/task-manager/parse-prd.js
* remove unused
* add else
* update tests
* update biome optional dependencies
* responsive html output for mobile
255
tests/manual/prompts/README.md
Normal file
@@ -0,0 +1,255 @@
# Task Master Prompt Template Testing

This directory contains comprehensive testing tools for Task Master's centralized prompt template system.

## Interactive Menu System (Recommended)

The test script includes an interactive menu system for easy testing and exploration:

```bash
node prompt-test.js
```

### Menu Features

**Main Menu Options:**

1. **Test specific prompt template** - Choose individual templates and variants
2. **Run all tests** - Execute the full test suite
3. **Toggle full prompt display** - Switch between preview and full prompt output (default: ON)
4. **Generate HTML report** - Create a professional HTML report and open it in the browser
5. **Exit** - Close the application

**Template Selection:**

- Choose from 8 available prompt templates
- See available variants for each template
- Test individual variants or all variants at once

**Interactive Flow:**

- Select template → Select variant → View results → Choose next action
- Easy navigation back to previous menus
- Color-coded output for better readability
## Batch Mode Options

### Run All Tests (Batch)

```bash
node prompt-test.js --batch
```

Runs all tests non-interactively and exits with an appropriate status code.

### Generate HTML Report

```bash
node prompt-test.js --html
```

Generates a professional HTML report with all test results and full prompt content. The report includes:

- **Test summary dashboard** with pass/fail statistics at the top
- **Compact single-line format** - Each template shows: `template: [variant ✓] [variant ✗] - x/y passed`
- **Individual pass/fail badges** - Visual ✓/✗ indicators for each variant test result
- **Template status summary** - Shows the x/y passed count at the end of each line
- **Separate error condition section** - Tests for missing parameters, invalid variants, and nonexistent templates
- **Alphabetically sorted** - Templates and variants are sorted for predictable ordering
- **Space-efficient layout** - Optimized for developer review with minimal vertical space
- **Three-section layout**:
  1. **Prompt Templates** - Real template variant testing
  2. **Error Condition Tests** - Error handling validation (empty-prompt, missing-parameters, invalid-variant, etc.)
  3. **Detailed Content** - Full system and user prompts below
- **Full prompt content** displayed without scrolling (no truncation)
- **Professional styling** with a clear visual hierarchy and responsive design
- **Automatic browser opening** (cross-platform; sketched below)

Reports are saved to `tests/manual/prompts/output/` with timestamps.
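
Cross-platform opening usually reduces to picking the right per-OS shell command. A minimal sketch of the idea — this is an assumption for illustration, and `prompt-test.js` may implement it differently:

```javascript
// Sketch only — assumes the usual per-OS "open" commands.
import { execSync } from 'child_process';

function openInBrowser(reportPath) {
	const opener =
		process.platform === 'darwin'
			? 'open'
			: process.platform === 'win32'
				? 'start ""'
				: 'xdg-open';
	execSync(`${opener} "${reportPath}"`);
}
```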

### Legacy Full Test Mode

```bash
node prompt-test.js --full
```

Runs all tests and shows sample full prompts for verification.

### Help

```bash
node prompt-test.js --help
```

Shows usage information and examples.

## Test Coverage

**Total Test Cases: 27** (20 functional + 7 error condition tests)

### Templates with Research Conditional Content

These templates have `useResearch` or `research` parameters that modify prompt content:

- **add-task** (default, research variants)
- **analyze-complexity** (default variant with research conditional content)
- **parse-prd** (default variant with research conditional content)
- **update-subtask** (default variant with `useResearch` conditional content)
- **update-task** (default, append variants; research via `useResearch` conditional content)

### Templates with Legitimate Separate Variants

These templates have genuinely different prompts for different use cases:

- **expand-task** (default, research, complexity-report variants) - Three sophisticated strategies with advanced parameter support
- **research** (low, medium, high detail level variants)

### Single-Variant Templates

These templates have only one variant; research mode is handled through conditional content rather than a separate variant:

- **update-tasks** (default variant only)

### Prompt Templates (8 total)

- **add-task** (default, research variants)
- **expand-task** (default, research, complexity-report variants) - Enhanced with sophisticated parameter support and context handling
- **analyze-complexity** (default variant)
- **research** (low, medium, high detail variants)
- **parse-prd** (default variant) - Enhanced with sophisticated numTasks conditional logic
- **update-subtask** (default variant with `useResearch` conditional content)
- **update-task** (default, append variants; research uses `useResearch` conditional content)
- **update-tasks** (default variant with `useResearch` conditional content)
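
For orientation, a template pairs an `id` with a `prompts` map of variants, each holding `system` and `user` strings and an optional `condition`. The shape below mirrors the `mockTemplate` objects in `tests/unit/prompt-manager.test.js`; the field values are illustrative only:

```javascript
// Illustrative only — values are invented; the structure follows the
// unit-test mocks. Real templates live as JSON files under src/prompts/.
const template = {
	id: 'add-task',
	prompts: {
		default: {
			system: 'You are a helpful assistant',
			user: 'Create a task for: {{prompt}}'
		},
		research: {
			condition: 'useResearch === true',
			system: 'You are a research-focused assistant',
			user: 'Research and create a task for: {{prompt}}'
		}
	}
};
```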

### Test Scenarios (27 total)

- 16 valid template/variant combinations (including enhanced expand-task with new parameter support)
- 4 conditional logic validation tests (testing new gt/gte helper functions)
- 7 error condition tests (nonexistent variants, templates, missing params, invalid detail levels)

### Validation

- Parameter schema compliance
- Template loading success/failure
- Error handling for invalid inputs
- Realistic test data for each template type
- **Output content validation** for conditional logic (NEW)

#### Conditional Logic Testing (NEW)

The test suite now includes specific validation for the new `gt` (greater than) and `gte` (greater than or equal) helper functions:

**Helper Function Tests:**

- `conditional-zero-tasks`: Validates `numTasks = 0` produces "an appropriate number of" text
- `conditional-positive-tasks`: Validates `numTasks = 5` produces "approximately 5" text
- `conditional-zero-subtasks`: Validates `subtaskCount = 0` produces "an appropriate number of" text
- `conditional-positive-subtasks`: Validates `subtaskCount = 3` produces "exactly 3" text

These tests use the new `validateOutput` function to verify that conditional template logic produces the expected rendered content, ensuring our helper functions work correctly beyond just successful template loading.
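
As a rough sketch of what these helpers do — assuming standard Handlebars helper registration, which the commit's "add eq handler for handlebars" suggests but the prompt manager's internal wiring may differ:

```javascript
// Sketch, not the actual prompt-manager code: register gt/gte as
// Handlebars helpers and check the same strings the tests above assert on.
import Handlebars from 'handlebars';

Handlebars.registerHelper('gt', (a, b) => a > b);
Handlebars.registerHelper('gte', (a, b) => a >= b);

const render = Handlebars.compile(
	'Generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} tasks'
);

console.assert(render({ numTasks: 0 }).includes('an appropriate number of'));
console.assert(render({ numTasks: 5 }).includes('approximately 5'));
```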

## Output Modes

### Preview Mode (Default)

Shows truncated prompts (200 characters) for a quick overview:

```
System Prompt Preview:
You are an AI assistant helping with task management...

User Prompt Preview:
Create a new task based on the following description...

Tip: Use option 3 in the main menu to toggle full prompt display
```

### Full Mode

Shows complete system and user prompts for detailed verification:

```
System Prompt:
[Complete system prompt content]

User Prompt:
[Complete user prompt content]
```

## Test Data

Each template uses realistic test data (see the sketch after this list):

- **Tasks**: Complete task objects with proper IDs, titles, and descriptions
- **Context**: Simulated project context and gathered information
- **Parameters**: Properly formatted parameters matching each template's schema
- **Research**: Sample queries and detail levels for research prompts
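
A hypothetical `sampleData` entry, to illustrate the shape — the parameter names here are assumptions based on the template descriptions above, not the script's actual fields:

```javascript
// Hypothetical entry — field names are illustrative assumptions.
const sampleData = {
	'add-task': {
		prompt: 'Add JWT-based authentication to the REST API',
		newTaskId: 12,
		existingTasks: [{ id: 1, title: 'Set up Express server' }],
		useResearch: false
	}
};
```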

## Error Testing

The test suite includes error condition validation:

- Nonexistent template variants
- Invalid template names
- Missing required parameters
- Malformed parameter data

## Exit Codes (Batch Mode)

- **0**: All tests passed
- **1**: One or more tests failed

## Use Cases

### Development Workflow

1. **Template Development**: Test new templates interactively
2. **Variant Testing**: Verify all variants work correctly
3. **Parameter Validation**: Ensure parameter schemas are working
4. **Regression Testing**: Run batch tests after changes

### Manual Verification

1. **Prompt Review**: Human verification of generated prompts
2. **Parameter Exploration**: See how different parameters affect output
3. **Context Testing**: Verify context inclusion and formatting

### CI/CD Integration

```bash
# In CI pipeline
node tests/manual/prompts/prompt-test.js --batch
```

The interactive menu makes it easy to explore and verify prompt templates during development, while batch mode enables automated testing in CI/CD pipelines.

## 🎯 Purpose

- **Verify all 8 prompt templates** work correctly with the prompt manager
- **Test multiple variants** for each prompt (default, research, complexity-report, etc.)
- **Show full generated prompts** for human verification and debugging
- **Test error conditions** and parameter validation
- **Provide realistic sample data** for each prompt type

## 📁 Files

- `prompt-test.js` - Main test script
- `output/` - Generated HTML reports (when using the --html flag or menu option)

## 🎯 Use Cases

### For Developers

- **Verify prompt changes** don't break existing functionality
- **Test new prompt variants** before deployment
- **Debug prompt generation** issues with full output
- **Validate parameter schemas** work correctly

### For QA

- **Regression testing** after prompt template changes
- **Verification that prompt outputs** match expectations
- **Parameter validation testing** for robustness
- **Cross-variant consistency** checking

### For Documentation

- **Reference for prompt usage** with realistic examples
- **Parameter requirements** demonstration
- **Variant differences** visualization
- **Expected output formats** examples

## ⚠️ Important Notes

1. **Real Prompt Manager**: This test uses the actual prompt manager, not mocks
2. **Parameter Accuracy**: All parameters match the exact schema requirements of each prompt template
3. **Variant Coverage**: Tests all documented variants for each prompt type
4. **Sample Data**: Uses realistic project scenarios, not dummy data
5. **Exit Codes**: Returns exit code 1 if any tests fail, 0 if all pass

## 🔄 Maintenance

When adding new prompt templates or variants:

1. Add sample data to the `sampleData` object
2. Include realistic parameters matching the prompt's schema
3. Test all documented variants
4. Verify with the `--full` flag that prompts generate correctly
5. Update this README with new coverage information

This test suite should be run whenever:

- Prompt templates are modified
- New variants are added
- Parameter schemas change
- Prompt manager logic is updated
- A major release is being prepared
1874
tests/manual/prompts/prompt-test.js
Normal file

File diff suppressed because it is too large
@@ -2,12 +2,27 @@ import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import os from 'os';

// Mock the schema integration functions to avoid chalk issues
const mockSetupSchemaIntegration = jest.fn();

import { vscodeProfile } from '../../../src/profiles/vscode.js';

// Mock external modules
jest.mock('child_process', () => ({
	execSync: jest.fn()
}));

// Mock fs/promises
const mockFsPromises = {
	mkdir: jest.fn(),
	access: jest.fn(),
	copyFile: jest.fn(),
	readFile: jest.fn(),
	writeFile: jest.fn()
};

jest.mock('fs/promises', () => mockFsPromises);

// Mock console methods
jest.mock('console', () => ({
	log: jest.fn(),
@@ -288,4 +303,41 @@ Task Master specific VS Code instruction.`;

		expect(content).toContain('alwaysApply:');
		expect(content).toContain('**/*.ts'); // File patterns in quotes
	});

	describe('Schema Integration', () => {
		beforeEach(() => {
			jest.clearAllMocks();
			// Replace the onAddRulesProfile function with our mock
			vscodeProfile.onAddRulesProfile = mockSetupSchemaIntegration;
		});

		test('setupSchemaIntegration is called with project root', async () => {
			// Arrange
			mockSetupSchemaIntegration.mockResolvedValue();

			// Act
			await vscodeProfile.onAddRulesProfile(tempDir);

			// Assert
			expect(mockSetupSchemaIntegration).toHaveBeenCalledWith(tempDir);
		});

		test('schema integration function exists and is callable', () => {
			// Assert that the VS Code profile has the schema integration function
			expect(vscodeProfile.onAddRulesProfile).toBeDefined();
			expect(typeof vscodeProfile.onAddRulesProfile).toBe('function');
		});

		test('schema integration handles errors gracefully', async () => {
			// Arrange
			mockSetupSchemaIntegration.mockRejectedValue(
				new Error('Schema setup failed')
			);

			// Act & Assert - Should propagate the error
			await expect(vscodeProfile.onAddRulesProfile(tempDir)).rejects.toThrow(
				'Schema setup failed'
			);
		});
	});
});
406
tests/unit/prompt-manager.test.js
Normal file
@@ -0,0 +1,406 @@
import {
	jest,
	beforeEach,
	afterEach,
	describe,
	it,
	expect
} from '@jest/globals';
import path from 'path';
import { fileURLToPath } from 'url';

// Create mock functions
const mockReadFileSync = jest.fn();
const mockReaddirSync = jest.fn();
const mockExistsSync = jest.fn();

// Set up default mock for supported-models.json to prevent config-manager from failing
mockReadFileSync.mockImplementation((filePath) => {
	if (filePath.includes('supported-models.json')) {
		return JSON.stringify({
			anthropic: [{ id: 'claude-3-5-sonnet', max_tokens: 8192 }],
			openai: [{ id: 'gpt-4', max_tokens: 8192 }]
		});
	}
	// Default return for other files
	return '{}';
});

// Mock fs before importing modules that use it
jest.unstable_mockModule('fs', () => ({
	default: {
		readFileSync: mockReadFileSync,
		readdirSync: mockReaddirSync,
		existsSync: mockExistsSync
	},
	readFileSync: mockReadFileSync,
	readdirSync: mockReaddirSync,
	existsSync: mockExistsSync
}));

// Mock process.exit to prevent tests from exiting
const mockExit = jest.fn();
jest.unstable_mockModule('process', () => ({
	default: {
		exit: mockExit,
		env: {}
	},
	exit: mockExit
}));

// Import after mocking
const { getPromptManager } = await import(
	'../../scripts/modules/prompt-manager.js'
);

describe('PromptManager', () => {
	let promptManager;
	// Calculate expected templates directory
	const __filename = fileURLToPath(import.meta.url);
	const __dirname = path.dirname(__filename);
	const expectedTemplatesDir = path.join(
		__dirname,
		'..',
		'..',
		'src',
		'prompts'
	);

	beforeEach(() => {
		// Clear all mocks
		jest.clearAllMocks();

		// Re-setup the default mock after clearing
		mockReadFileSync.mockImplementation((filePath) => {
			if (filePath.includes('supported-models.json')) {
				return JSON.stringify({
					anthropic: [{ id: 'claude-3-5-sonnet', max_tokens: 8192 }],
					openai: [{ id: 'gpt-4', max_tokens: 8192 }]
				});
			}
			// Default return for other files
			return '{}';
		});

		// Get the singleton instance
		promptManager = getPromptManager();
	});

	afterEach(() => {
		jest.restoreAllMocks();
	});

	describe('loadPrompt', () => {
		it('should load and render a simple prompt template', () => {
			const mockTemplate = {
				id: 'test-prompt',
				prompts: {
					default: {
						system: 'You are a helpful assistant',
						user: 'Hello {{name}}, please {{action}}'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			const result = promptManager.loadPrompt('test-prompt', {
				name: 'Alice',
				action: 'help me'
			});

			expect(result.systemPrompt).toBe('You are a helpful assistant');
			expect(result.userPrompt).toBe('Hello Alice, please help me');
			expect(mockReadFileSync).toHaveBeenCalledWith(
				path.join(expectedTemplatesDir, 'test-prompt.json'),
				'utf-8'
			);
		});

		it('should handle conditional content', () => {
			const mockTemplate = {
				id: 'conditional-prompt',
				prompts: {
					default: {
						system: 'System prompt',
						user: '{{#if useResearch}}Research and {{/if}}analyze the task'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			// Test with useResearch = true
			let result = promptManager.loadPrompt('conditional-prompt', {
				useResearch: true
			});
			expect(result.userPrompt).toBe('Research and analyze the task');

			// Test with useResearch = false
			result = promptManager.loadPrompt('conditional-prompt', {
				useResearch: false
			});
			expect(result.userPrompt).toBe('analyze the task');
		});

		it('should handle array iteration with {{#each}}', () => {
			const mockTemplate = {
				id: 'loop-prompt',
				prompts: {
					default: {
						system: 'System prompt',
						user: 'Tasks:\n{{#each tasks}}- {{id}}: {{title}}\n{{/each}}'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			const result = promptManager.loadPrompt('loop-prompt', {
				tasks: [
					{ id: 1, title: 'First task' },
					{ id: 2, title: 'Second task' }
				]
			});

			expect(result.userPrompt).toBe(
				'Tasks:\n- 1: First task\n- 2: Second task\n'
			);
		});

		it('should handle JSON serialization with triple braces', () => {
			const mockTemplate = {
				id: 'json-prompt',
				prompts: {
					default: {
						system: 'System prompt',
						user: 'Analyze these tasks: {{{json tasks}}}'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			const tasks = [
				{ id: 1, title: 'Task 1' },
				{ id: 2, title: 'Task 2' }
			];

			const result = promptManager.loadPrompt('json-prompt', { tasks });

			expect(result.userPrompt).toBe(
				`Analyze these tasks: ${JSON.stringify(tasks, null, 2)}`
			);
		});

		it('should select variants based on conditions', () => {
			const mockTemplate = {
				id: 'variant-prompt',
				prompts: {
					default: {
						system: 'Default system',
						user: 'Default user'
					},
					research: {
						condition: 'useResearch === true',
						system: 'Research system',
						user: 'Research user'
					},
					highComplexity: {
						condition: 'complexity >= 8',
						system: 'Complex system',
						user: 'Complex user'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			// Test default variant
			let result = promptManager.loadPrompt('variant-prompt', {
				useResearch: false,
				complexity: 5
			});
			expect(result.systemPrompt).toBe('Default system');

			// Test research variant
			result = promptManager.loadPrompt('variant-prompt', {
				useResearch: true,
				complexity: 5
			});
			expect(result.systemPrompt).toBe('Research system');

			// Test high complexity variant
			result = promptManager.loadPrompt('variant-prompt', {
				useResearch: false,
				complexity: 9
			});
			expect(result.systemPrompt).toBe('Complex system');
		});

		it('should use specified variant key over conditions', () => {
			const mockTemplate = {
				id: 'variant-prompt',
				prompts: {
					default: {
						system: 'Default system',
						user: 'Default user'
					},
					research: {
						condition: 'useResearch === true',
						system: 'Research system',
						user: 'Research user'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			// Force research variant even though useResearch is false
			const result = promptManager.loadPrompt(
				'variant-prompt',
				{ useResearch: false },
				'research'
			);

			expect(result.systemPrompt).toBe('Research system');
		});

		it('should handle nested properties with dot notation', () => {
			const mockTemplate = {
				id: 'nested-prompt',
				prompts: {
					default: {
						system: 'System',
						user: 'Project: {{project.name}}, Version: {{project.version}}'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			const result = promptManager.loadPrompt('nested-prompt', {
				project: {
					name: 'TaskMaster',
					version: '1.0.0'
				}
			});

			expect(result.userPrompt).toBe('Project: TaskMaster, Version: 1.0.0');
		});

		it('should handle complex nested structures', () => {
			const mockTemplate = {
				id: 'complex-prompt',
				prompts: {
					default: {
						system: 'System',
						user: '{{#if hasSubtasks}}Task has subtasks:\n{{#each subtasks}}- {{title}} ({{status}})\n{{/each}}{{/if}}'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			const result = promptManager.loadPrompt('complex-prompt', {
				hasSubtasks: true,
				subtasks: [
					{ title: 'Subtask 1', status: 'pending' },
					{ title: 'Subtask 2', status: 'done' }
				]
			});

			expect(result.userPrompt).toBe(
				'Task has subtasks:\n- Subtask 1 (pending)\n- Subtask 2 (done)\n'
			);
		});

		it('should cache loaded templates', () => {
			const mockTemplate = {
				id: 'cached-prompt',
				prompts: {
					default: {
						system: 'System',
						user: 'User {{value}}'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			// First load
			promptManager.loadPrompt('cached-prompt', { value: 'test1' });
			expect(mockReadFileSync).toHaveBeenCalledTimes(1);

			// Second load with same params should use cache
			promptManager.loadPrompt('cached-prompt', { value: 'test1' });
			expect(mockReadFileSync).toHaveBeenCalledTimes(1);

			// Third load with different params should NOT use cache
			promptManager.loadPrompt('cached-prompt', { value: 'test2' });
			expect(mockReadFileSync).toHaveBeenCalledTimes(2);
		});

		it('should throw error for non-existent template', () => {
			const error = new Error('File not found');
			error.code = 'ENOENT';
			mockReadFileSync.mockImplementation(() => {
				throw error;
			});

			expect(() => {
				promptManager.loadPrompt('non-existent', {});
			}).toThrow();
		});

		it('should throw error for invalid JSON', () => {
			mockReadFileSync.mockReturnValue('{ invalid json');

			expect(() => {
				promptManager.loadPrompt('invalid-json', {});
			}).toThrow();
		});

		it('should handle missing prompts section', () => {
			const mockTemplate = {
				id: 'no-prompts'
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			expect(() => {
				promptManager.loadPrompt('no-prompts', {});
			}).toThrow();
		});

		it('should handle special characters in templates', () => {
			const mockTemplate = {
				id: 'special-chars',
				prompts: {
					default: {
						system: 'System with "quotes" and \'apostrophes\'',
						user: 'User with newlines\nand\ttabs'
					}
				}
			};

			mockReadFileSync.mockReturnValue(JSON.stringify(mockTemplate));

			const result = promptManager.loadPrompt('special-chars', {});

			expect(result.systemPrompt).toBe(
				'System with "quotes" and \'apostrophes\''
			);
			expect(result.userPrompt).toBe('User with newlines\nand\ttabs');
		});
	});

	describe('singleton behavior', () => {
		it('should return the same instance on multiple calls', () => {
			const instance1 = getPromptManager();
			const instance2 = getPromptManager();

			expect(instance1).toBe(instance2);
		});
	});
});
@@ -123,6 +123,18 @@ jest.unstable_mockModule(
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/prompt-manager.js',
	() => ({
		getPromptManager: jest.fn().mockReturnValue({
			loadPrompt: jest.fn().mockResolvedValue({
				systemPrompt: 'Mocked system prompt',
				userPrompt: 'Mocked user prompt'
			})
		})
	})
);

// Mock external UI libraries
jest.unstable_mockModule('chalk', () => ({
	default: {
@@ -171,6 +171,18 @@ jest.unstable_mockModule('fs', () => ({
	writeFileSync: mockWriteFileSync
}));

jest.unstable_mockModule(
	'../../../../../scripts/modules/prompt-manager.js',
	() => ({
		getPromptManager: jest.fn().mockReturnValue({
			loadPrompt: jest.fn().mockResolvedValue({
				systemPrompt: 'Mocked system prompt',
				userPrompt: 'Mocked user prompt'
			})
		})
	})
);

// Import the mocked modules
const { readJSON, writeJSON, log, CONFIG } = await import(
	'../../../../../scripts/modules/utils.js'
@@ -262,11 +274,13 @@ describe('analyzeTaskComplexity', () => {
			file: 'tasks/tasks.json',
			output: 'scripts/task-complexity-report.json',
			threshold: '5',
-			research: false
+			research: false,
+			projectRoot: '/mock/project/root'
		};

		// Act
		await analyzeTaskComplexity(options, {
+			projectRoot: '/mock/project/root',
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
@@ -279,7 +293,7 @@ describe('analyzeTaskComplexity', () => {
		// Assert
		expect(readJSON).toHaveBeenCalledWith(
			'tasks/tasks.json',
			undefined,
			'/mock/project/root',
			undefined
		);
		expect(generateTextService).toHaveBeenCalledWith(expect.any(Object));
@@ -296,11 +310,13 @@ describe('analyzeTaskComplexity', () => {
			file: 'tasks/tasks.json',
			output: 'scripts/task-complexity-report.json',
			threshold: '5',
-			research: true
+			research: true,
+			projectRoot: '/mock/project/root'
		};

		// Act
		await analyzeTaskComplexity(researchOptions, {
+			projectRoot: '/mock/project/root',
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
@@ -323,10 +339,12 @@ describe('analyzeTaskComplexity', () => {
		let options = {
			file: 'tasks/tasks.json',
			output: 'scripts/task-complexity-report.json',
-			threshold: '7'
+			threshold: '7',
+			projectRoot: '/mock/project/root'
		};

		await analyzeTaskComplexity(options, {
+			projectRoot: '/mock/project/root',
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
@@ -349,10 +367,12 @@ describe('analyzeTaskComplexity', () => {
		options = {
			file: 'tasks/tasks.json',
			output: 'scripts/task-complexity-report.json',
-			threshold: 8
+			threshold: 8,
+			projectRoot: '/mock/project/root'
		};

		await analyzeTaskComplexity(options, {
+			projectRoot: '/mock/project/root',
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
@@ -374,11 +394,13 @@ describe('analyzeTaskComplexity', () => {
		const options = {
			file: 'tasks/tasks.json',
			output: 'scripts/task-complexity-report.json',
-			threshold: '5'
+			threshold: '5',
+			projectRoot: '/mock/project/root'
		};

		// Act
		await analyzeTaskComplexity(options, {
+			projectRoot: '/mock/project/root',
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
@@ -402,7 +424,8 @@ describe('analyzeTaskComplexity', () => {
		const options = {
			file: 'tasks/tasks.json',
			output: 'scripts/task-complexity-report.json',
-			threshold: '5'
+			threshold: '5',
+			projectRoot: '/mock/project/root'
		};

		// Force API error
@@ -419,6 +442,7 @@ describe('analyzeTaskComplexity', () => {
		// Act & Assert
		await expect(
			analyzeTaskComplexity(options, {
+				projectRoot: '/mock/project/root',
				mcpLog: mockMcpLog
			})
		).rejects.toThrow('API Error');
@@ -131,11 +131,7 @@ jest.unstable_mockModule(
	'../../../../../scripts/modules/utils/contextGatherer.js',
	() => ({
		ContextGatherer: jest.fn().mockImplementation(() => ({
-			gather: jest.fn().mockResolvedValue({
-				contextSummary: 'Mock context summary',
-				allRelatedTaskIds: [],
-				graphVisualization: 'Mock graph'
-			})
+			gather: jest.fn().mockResolvedValue('Mock project context from files')
		}))
	})
);
@@ -147,6 +143,18 @@ jest.unstable_mockModule(
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/prompt-manager.js',
	() => ({
		getPromptManager: jest.fn().mockReturnValue({
			loadPrompt: jest.fn().mockResolvedValue({
				systemPrompt: 'Mocked system prompt',
				userPrompt: 'Mocked user prompt'
			})
		})
	})
);

// Mock external UI libraries
jest.unstable_mockModule('chalk', () => ({
	default: {
@@ -663,6 +671,18 @@ describe('expandTask', () => {
	describe('Complexity Report Integration (Tag-Specific)', () => {
		test('should use tag-specific complexity report when available', async () => {
			// Arrange
			const { getPromptManager } = await import(
				'../../../../../scripts/modules/prompt-manager.js'
			);
			const mockLoadPrompt = jest.fn().mockResolvedValue({
				systemPrompt: 'Generate exactly 5 subtasks for complexity report',
				userPrompt:
					'Please break this task into 5 parts\n\nUser provided context'
			});
			getPromptManager.mockReturnValue({
				loadPrompt: mockLoadPrompt
			});

			const tasksPath = 'tasks/tasks.json';
			const taskId = '1'; // Task in feature-branch
			const context = {
@@ -710,6 +730,16 @@ describe('expandTask', () => {
			const callArg = generateTextService.mock.calls[0][0];
			expect(callArg.systemPrompt).toContain('Generate exactly 5 subtasks');

			// Assert - Should use complexity-report variant with expansion prompt
			expect(mockLoadPrompt).toHaveBeenCalledWith(
				'expand-task',
				expect.objectContaining({
					subtaskCount: 5,
					expansionPrompt: 'Please break this task into 5 parts'
				}),
				'complexity-report'
			);

			// Clean up stub
			existsSpy.mockRestore();
		});
@@ -903,6 +933,17 @@ describe('expandTask', () => {

		test('should handle additional context correctly', async () => {
			// Arrange
			const { getPromptManager } = await import(
				'../../../../../scripts/modules/prompt-manager.js'
			);
			const mockLoadPrompt = jest.fn().mockResolvedValue({
				systemPrompt: 'Mocked system prompt',
				userPrompt: 'Mocked user prompt with context'
			});
			getPromptManager.mockReturnValue({
				loadPrompt: mockLoadPrompt
			});

			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const additionalContext = 'Use React hooks and TypeScript';
@@ -922,11 +963,28 @@ describe('expandTask', () => {
				false
			);

-			// Assert - Should include additional context in prompt
-			expect(generateTextService).toHaveBeenCalledWith(
-				expect.objectContaining({
-					prompt: expect.stringContaining('Use React hooks and TypeScript')
-				})
-			);
+			// Assert - Should pass separate context parameters to prompt manager
+			expect(mockLoadPrompt).toHaveBeenCalledWith(
+				'expand-task',
+				expect.objectContaining({
+					additionalContext: expect.stringContaining(
+						'Use React hooks and TypeScript'
+					),
+					gatheredContext: expect.stringContaining(
+						'Mock project context from files'
+					)
+				}),
+				expect.any(String)
+			);
+
+			// Additional assertion to verify the context parameters are passed separately
+			const call = mockLoadPrompt.mock.calls[0];
+			const parameters = call[1];
+			expect(parameters.additionalContext).toContain(
+				'Use React hooks and TypeScript'
+			);
+			expect(parameters.gatheredContext).toContain(
+				'Mock project context from files'
+			);
		});
@@ -1003,6 +1061,20 @@ describe('expandTask', () => {
		});

		test('should use dynamic prompting when numSubtasks is 0', async () => {
			// Mock getPromptManager to return realistic prompt with dynamic content
			const { getPromptManager } = await import(
				'../../../../../scripts/modules/prompt-manager.js'
			);
			const mockLoadPrompt = jest.fn().mockResolvedValue({
				systemPrompt:
					'You are an AI assistant helping with task breakdown for software development. You need to break down a high-level task into an appropriate number of specific subtasks that can be implemented one by one.',
				userPrompt:
					'Break down this task into an appropriate number of specific subtasks'
			});
			getPromptManager.mockReturnValue({
				loadPrompt: mockLoadPrompt
			});

			// Act
			await expandTask(tasksPath, taskId, 0, false, '', context, false);
@@ -1017,6 +1089,19 @@ describe('expandTask', () => {
		});

		test('should use specific count prompting when numSubtasks is positive', async () => {
			// Mock getPromptManager to return realistic prompt with specific count
			const { getPromptManager } = await import(
				'../../../../../scripts/modules/prompt-manager.js'
			);
			const mockLoadPrompt = jest.fn().mockResolvedValue({
				systemPrompt:
					'You are an AI assistant helping with task breakdown for software development. You need to break down a high-level task into 5 specific subtasks that can be implemented one by one.',
				userPrompt: 'Break down this task into exactly 5 specific subtasks'
			});
			getPromptManager.mockReturnValue({
				loadPrompt: mockLoadPrompt
			});

			// Act
			await expandTask(tasksPath, taskId, 5, false, '', context, false);
@@ -1032,6 +1117,19 @@ describe('expandTask', () => {
			// Mock getDefaultSubtasks to return a specific value
			getDefaultSubtasks.mockReturnValue(4);

			// Mock getPromptManager to return realistic prompt with default count
			const { getPromptManager } = await import(
				'../../../../../scripts/modules/prompt-manager.js'
			);
			const mockLoadPrompt = jest.fn().mockResolvedValue({
				systemPrompt:
					'You are an AI assistant helping with task breakdown for software development. You need to break down a high-level task into 4 specific subtasks that can be implemented one by one.',
				userPrompt: 'Break down this task into exactly 4 specific subtasks'
			});
			getPromptManager.mockReturnValue({
				loadPrompt: mockLoadPrompt
			});

			// Act
			await expandTask(tasksPath, taskId, -3, false, '', context, false);
@@ -1045,6 +1143,19 @@ describe('expandTask', () => {
			// Mock getDefaultSubtasks to return a specific value
			getDefaultSubtasks.mockReturnValue(6);

			// Mock getPromptManager to return realistic prompt with default count
			const { getPromptManager } = await import(
				'../../../../../scripts/modules/prompt-manager.js'
			);
			const mockLoadPrompt = jest.fn().mockResolvedValue({
				systemPrompt:
					'You are an AI assistant helping with task breakdown for software development. You need to break down a high-level task into 6 specific subtasks that can be implemented one by one.',
				userPrompt: 'Break down this task into exactly 6 specific subtasks'
			});
			getPromptManager.mockReturnValue({
				loadPrompt: mockLoadPrompt
			});

			// Act - Call without specifying numSubtasks (undefined)
			await expandTask(tasksPath, taskId, undefined, false, '', context, false);
@@ -1058,6 +1169,19 @@ describe('expandTask', () => {
			// Mock getDefaultSubtasks to return a specific value
			getDefaultSubtasks.mockReturnValue(7);

			// Mock getPromptManager to return realistic prompt with default count
			const { getPromptManager } = await import(
				'../../../../../scripts/modules/prompt-manager.js'
			);
			const mockLoadPrompt = jest.fn().mockResolvedValue({
				systemPrompt:
					'You are an AI assistant helping with task breakdown for software development. You need to break down a high-level task into 7 specific subtasks that can be implemented one by one.',
				userPrompt: 'Break down this task into exactly 7 specific subtasks'
			});
			getPromptManager.mockReturnValue({
				loadPrompt: mockLoadPrompt
			});

			// Act - Call with null numSubtasks
			await expandTask(tasksPath, taskId, null, false, '', context, false);
@@ -48,7 +48,8 @@ jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => ({
		getDebugFlag: jest.fn(() => false),
-		getDefaultNumTasks: jest.fn(() => 10)
+		getDefaultNumTasks: jest.fn(() => 10),
+		getDefaultPriority: jest.fn(() => 'medium')
	})
);
@@ -70,6 +71,30 @@ jest.unstable_mockModule(
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/prompt-manager.js',
	() => ({
		getPromptManager: jest.fn().mockReturnValue({
			loadPrompt: jest.fn().mockImplementation((templateName, params) => {
				// Create dynamic mock prompts based on the parameters
				const { numTasks } = params || {};
				let numTasksText = '';

				if (numTasks > 0) {
					numTasksText = `approximately ${numTasks}`;
				} else {
					numTasksText = 'an appropriate number of';
				}

				return Promise.resolve({
					systemPrompt: 'Mocked system prompt for parse-prd',
					userPrompt: `Generate ${numTasksText} top-level development tasks from the PRD content.`
				});
			})
		})
	})
);

// Mock fs module
jest.unstable_mockModule('fs', () => ({
	default: {
@@ -348,33 +373,23 @@ describe('parsePRD', () => {
		expect(fs.default.writeFileSync).not.toHaveBeenCalled();
	});

-	test('should call process.exit when tasks in tag exist without force flag in CLI mode', async () => {
+	test('should throw error when tasks in tag exist without force flag in CLI mode', async () => {
		// Setup mocks to simulate tasks.json already exists with tasks in the target tag
		fs.default.existsSync.mockReturnValue(true);
		fs.default.readFileSync.mockReturnValueOnce(
			JSON.stringify(existingTasksData)
		);

-		// Mock process.exit for this specific test
-		const mockProcessExit = jest
-			.spyOn(process, 'exit')
-			.mockImplementation((code) => {
-				throw new Error(`process.exit: ${code}`);
-			});
-
-		// Call the function without mcpLog (CLI mode) and expect it to throw due to mocked process.exit
+		// Call the function without mcpLog (CLI mode) and expect it to throw an error
+		// In test environment, process.exit is prevented and error is thrown instead
		await expect(
			parsePRD('path/to/prd.txt', 'tasks/tasks.json', 3)
-		).rejects.toThrow('process.exit: 1');
-
-		// Verify process.exit was called with code 1
-		expect(mockProcessExit).toHaveBeenCalledWith(1);
+		).rejects.toThrow(
+			"Tag 'master' already contains 2 tasks. Use --force to overwrite or --append to add to existing tasks."
+		);

		// Verify the file was NOT written
		expect(fs.default.writeFileSync).not.toHaveBeenCalled();
-
-		// Restore the mock
-		mockProcessExit.mockRestore();
	});

	test('should append new tasks when append option is true', async () => {
@@ -55,6 +55,18 @@ jest.unstable_mockModule(
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/prompt-manager.js',
	() => ({
		getPromptManager: jest.fn().mockReturnValue({
			loadPrompt: jest.fn().mockResolvedValue({
				systemPrompt: 'Mocked system prompt',
				userPrompt: 'Mocked user prompt'
			})
		})
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/task-manager/models.js',
	() => ({