feat: CLI & MCP progress tracking for parse-prd command (#1048)
* initial cutover
* update log to debug
* update tracker to pass units
* update test to match new base tracker format
* add streamTextService mocks
* remove unused imports
* Ensure the CLI waits for async main() completion
* refactor to reduce code duplication
* update comment
* reuse function
* ensure targetTag is defined in streaming mode
* avoid throwing inside process.exit spy
* check for null
* remove reference to generate
* fix formatting
* fix textStream assignment
* ensure no division by 0
* fix jest chalk mocks
* refactor for maintainability
* Improve bar chart calculation logic for consistent visual representation
* use custom streaming error types; fix mocks
* Update streamText extraction in parse-prd.js to match actual service response
* remove check - doesn't belong here
* update mocks
* remove streaming test that wasn't really doing anything
* add comment
* make parsing logic more DRY
* fix formatting
* Fix textStream extraction to match actual service response
* fix mock
* Add a cleanup method to ensure proper resource disposal and prevent memory leaks
* debounce progress updates to reduce UI flicker during rapid updates
* Implement timeout protection for streaming operations (60-second timeout) with automatic fallback to non-streaming mode
* clear timeout properly
* Add a maximum buffer size limit (1MB) to prevent unbounded memory growth with very large streaming responses
* fix formatting
* remove duplicate mock
* better docs
* fix formatting
* sanitize the dynamic property name
* Fix incorrect remaining progress calculation
* Use onError callback instead of console.warn
* Remove unused chalk import
* Add missing custom validator in fallback parsing configuration
* add custom validator parameter in fallback parsing
* chore: fix package-lock.json
* chore: large code refactor
* chore: increase timeout from 1 minute to 3 minutes
* fix: refactor and fix streaming
* Merge remote-tracking branch 'origin/next' into joedanz/parse-prd-progress
* fix: cleanup and fix unit tests
* chore: fix unit tests
* chore: fix format
* chore: run format
* chore: fix weird CI unit test error
* chore: fix format

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
97  tests/manual/progress/TESTING_GUIDE.md  (Normal file)
@@ -0,0 +1,97 @@
# Task Master Progress Testing Guide

Quick reference for testing streaming/non-streaming functionality with token tracking.

## 🎯 Test Modes

1. **MCP Streaming** - Has `reportProgress` + `mcpLog`, shows emoji indicators (🔴🟠🟢)
2. **CLI Streaming** - No `reportProgress`, shows terminal progress bars
3. **Non-Streaming** - No progress reporting, single response

## 🚀 Quick Commands

```bash
# Test Scripts (accept: mcp-streaming, cli-streaming, non-streaming, both, all)
node test-parse-prd.js [mode]
node test-analyze-complexity.js [mode]
node test-expand.js [mode] [num_subtasks]
node test-expand-all.js [mode] [num_subtasks]
node parse-prd-analysis.js [accuracy|complexity|all]

# CLI Commands
node scripts/dev.js parse-prd test.txt   # Local dev (streaming)
node scripts/dev.js analyze-complexity --research
node scripts/dev.js expand --id=1 --force
node scripts/dev.js expand --all --force

task-master [command]   # Global CLI (non-streaming)
```

## ✅ Success Indicators

### Indicators

- **Priority**: 🔴🔴🔴 (high), 🟠🟠⚪ (medium), 🟢⚪⚪ (low)
- **Complexity**: ●●● (7-10), ●●○ (4-6), ●○○ (1-3)

### Token Format

`Tokens (I/O): 2,150/1,847 ($0.0423)` (~4 chars per token)
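
The input/output counts come from provider telemetry once a call finishes; while a response is still streaming, only raw text is available, so a rough character-based estimate is shown. A minimal sketch of that heuristic (the helper name is illustrative, not an API from the codebase):

```javascript
// Rough streaming-time token estimate: ~4 characters per token.
// This is a display heuristic, not a real tokenizer.
const estimateTokens = (text) => Math.ceil(text.length / 4);

estimateTokens('{"id":1,"title":"Test Task","priority":"high"}'); // ≈ 12 tokens
```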

### Progress Bars

```
Single: Generating subtasks... |████████░░| 80% (4/5)
Dual:   Expanding 3 tasks | Task 2/3 |████████░░| 66%
        Generating 5 subtasks... |██████░░░░| 60%
```

### Fractional Progress

`(completedTasks + currentSubtask/totalSubtasks) / totalTasks`

Example: 33% → 46% → 60% → 66% → 80% → 93% → 100%
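
A minimal sketch of that calculation, reproducing the example sequence for 3 tasks with 5 subtasks each (the helper name is illustrative; the arithmetic is the formula above written over a common denominator so the rounded-down percentages are stable):

```javascript
// (completedTasks + currentSubtask / totalSubtasks) / totalTasks, as a whole percent.
function fractionalPercent(completedTasks, currentSubtask, totalSubtasks, totalTasks) {
	const units = completedTasks * totalSubtasks + currentSubtask;
	return Math.floor((units * 100) / (totalSubtasks * totalTasks));
}

// 3 tasks, 5 subtasks each, sampled after every 2 subtasks:
// yields 33, 46, 60, 66, 80, 93, 100
const samples = [
	[1, 0], [1, 2], [1, 4],
	[2, 0], [2, 2], [2, 4],
	[3, 0]
].map(([doneTasks, subtask]) => fractionalPercent(doneTasks, subtask, 5, 3));
```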

## 🐛 Quick Fixes

| Issue | Fix |
|-------|-----|
| No streaming | Check `reportProgress` is passed |
| NaN% progress | Filter duplicate `subtask_progress` events |
| Missing tokens | Check `.env` has API keys |
| Broken bars | Terminal width > 80 |
| projectRoot.split | Use `projectRoot` not `session` |

```bash
# Debug
TASKMASTER_DEBUG=true node test-expand.js
npm run lint
```

## 📊 Benchmarks

- Single task: 10-20s (5 subtasks)
- Expand all: 30-45s (3 tasks)
- Streaming: ~10-20% faster
- Updates: Every 2-5s

## 🔄 Test Workflow

```bash
# Quick check
node test-parse-prd.js both && npm test

# Full suite (before release)
for test in parse-prd analyze-complexity expand expand-all; do
  node test-$test.js all
done
node parse-prd-analysis.js all
npm test
```

## 🎯 MCP Tool Example

```javascript
{
	"tool": "parse_prd",
	"args": {
		"input": "prd.txt",
		"numTasks": "8",
		"force": true,
		"projectRoot": "/path/to/project"
	}
}
```
334  tests/manual/progress/parse-prd-analysis.js  (Normal file)
@@ -0,0 +1,334 @@
#!/usr/bin/env node

/**
 * parse-prd-analysis.js
 *
 * Detailed timing and accuracy analysis for parse-prd progress reporting.
 * Tests different task generation complexities using the sample PRD from fixtures.
 * Validates real-time characteristics and focuses on progress behavior and performance metrics.
 * Uses tests/fixtures/sample-prd.txt for consistent testing across all scenarios.
 */

import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

import parsePRD from '../../../scripts/modules/task-manager/parse-prd/index.js';

// Use the same project root as the main test file
const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..');

/**
 * Get the path to the sample PRD file
 */
function getSamplePRDPath() {
	return path.resolve(PROJECT_ROOT, 'tests', 'fixtures', 'sample-prd.txt');
}

/**
 * Detailed Progress Reporter for timing analysis
 */
class DetailedProgressReporter {
	constructor() {
		this.progressHistory = [];
		this.startTime = Date.now();
		this.lastProgress = 0;
	}

	async reportProgress(data) {
		const timestamp = Date.now() - this.startTime;
		const timeSinceLastProgress =
			this.progressHistory.length > 0
				? timestamp -
					this.progressHistory[this.progressHistory.length - 1].timestamp
				: timestamp;

		const entry = {
			timestamp,
			timeSinceLastProgress,
			...data
		};

		this.progressHistory.push(entry);

		const percentage = data.total
			? Math.round((data.progress / data.total) * 100)
			: 0;
		console.log(
			chalk.blue(`[${timestamp}ms] (+${timeSinceLastProgress}ms)`),
			chalk.green(`${percentage}%`),
			`(${data.progress}/${data.total})`,
			chalk.yellow(data.message)
		);
	}

	getAnalysis() {
		if (this.progressHistory.length === 0) return null;

		const totalDuration =
			this.progressHistory[this.progressHistory.length - 1].timestamp;
		const intervals = this.progressHistory
			.slice(1)
			.map((entry) => entry.timeSinceLastProgress);
		const avgInterval =
			intervals.length > 0
				? intervals.reduce((a, b) => a + b, 0) / intervals.length
				: 0;
		const minInterval = intervals.length > 0 ? Math.min(...intervals) : 0;
		const maxInterval = intervals.length > 0 ? Math.max(...intervals) : 0;

		return {
			totalReports: this.progressHistory.length,
			totalDuration,
			avgInterval: Math.round(avgInterval),
			minInterval,
			maxInterval,
			intervals
		};
	}

	printDetailedAnalysis() {
		const analysis = this.getAnalysis();
		if (!analysis) {
			console.log(chalk.red('No progress data to analyze'));
			return;
		}

		console.log(chalk.cyan('\n=== Detailed Progress Analysis ==='));
		console.log(`Total Progress Reports: ${analysis.totalReports}`);
		console.log(`Total Duration: ${analysis.totalDuration}ms`);
		console.log(`Average Interval: ${analysis.avgInterval}ms`);
		console.log(`Min Interval: ${analysis.minInterval}ms`);
		console.log(`Max Interval: ${analysis.maxInterval}ms`);

		console.log(chalk.cyan('\n=== Progress Timeline ==='));
		this.progressHistory.forEach((entry, index) => {
			const percentage = entry.total
				? Math.round((entry.progress / entry.total) * 100)
				: 0;
			const intervalText =
				index > 0 ? ` (+${entry.timeSinceLastProgress}ms)` : '';
			console.log(
				`${index + 1}. [${entry.timestamp}ms]${intervalText} ${percentage}% - ${entry.message}`
			);
		});

		// Check for real-time characteristics
		console.log(chalk.cyan('\n=== Real-time Characteristics ==='));
		const hasRealTimeUpdates = analysis.intervals.some(
			(interval) => interval < 10000
		); // Less than 10s
		const hasConsistentUpdates = analysis.intervals.length > 3;
		const hasProgressiveUpdates = this.progressHistory.every(
			(entry, index) =>
				index === 0 ||
				entry.progress >= this.progressHistory[index - 1].progress
		);

		console.log(`✅ Real-time updates: ${hasRealTimeUpdates ? 'YES' : 'NO'}`);
		console.log(
			`✅ Consistent updates: ${hasConsistentUpdates ? 'YES' : 'NO'}`
		);
		console.log(
			`✅ Progressive updates: ${hasProgressiveUpdates ? 'YES' : 'NO'}`
		);
	}
}

/**
 * Get PRD path for complexity testing
 * For complexity testing, we'll use the same sample PRD but request different numbers of tasks
 * This provides more realistic testing since the AI will generate different complexity based on task count
 */
function getPRDPathForComplexity(complexity = 'medium') {
	// Always use the same sample PRD file - complexity will be controlled by task count
	return getSamplePRDPath();
}

/**
 * Test streaming with different task generation complexities
 * Uses the same sample PRD but requests different numbers of tasks to test complexity scaling
 */
async function testStreamingComplexity() {
	console.log(
		chalk.cyan(
			'🧪 Testing Streaming with Different Task Generation Complexities\n'
		)
	);

	const complexities = ['simple', 'medium', 'complex'];
	const results = [];

	for (const complexity of complexities) {
		console.log(
			chalk.yellow(`\n--- Testing ${complexity.toUpperCase()} Complexity ---`)
		);

		const testPRDPath = getPRDPathForComplexity(complexity);
		const testTasksPath = path.join(__dirname, `test-tasks-${complexity}.json`);

		// Clean up existing file
		if (fs.existsSync(testTasksPath)) {
			fs.unlinkSync(testTasksPath);
		}

		const progressReporter = new DetailedProgressReporter();
		const expectedTasks =
			complexity === 'simple' ? 3 : complexity === 'medium' ? 6 : 10;

		try {
			const startTime = Date.now();

			await parsePRD(testPRDPath, testTasksPath, expectedTasks, {
				force: true,
				append: false,
				research: false,
				reportProgress: progressReporter.reportProgress.bind(progressReporter),
				projectRoot: PROJECT_ROOT
			});

			const endTime = Date.now();
			const duration = endTime - startTime;

			console.log(
				chalk.green(`✅ ${complexity} complexity completed in ${duration}ms`)
			);

			progressReporter.printDetailedAnalysis();

			results.push({
				complexity,
				duration,
				analysis: progressReporter.getAnalysis()
			});
		} catch (error) {
			console.error(
				chalk.red(`❌ ${complexity} complexity failed: ${error.message}`)
			);
			results.push({
				complexity,
				error: error.message
			});
		} finally {
			// Clean up (only the tasks file, not the PRD since we're using the fixture)
			if (fs.existsSync(testTasksPath)) fs.unlinkSync(testTasksPath);
		}
	}

	// Summary
	console.log(chalk.cyan('\n=== Complexity Test Summary ==='));
	results.forEach((result) => {
		if (result.error) {
			console.log(`${result.complexity}: ❌ FAILED - ${result.error}`);
		} else {
			console.log(
				`${result.complexity}: ✅ ${result.duration}ms (${result.analysis.totalReports} reports)`
			);
		}
	});

	return results;
}

/**
 * Test progress accuracy
 */
async function testProgressAccuracy() {
	console.log(chalk.cyan('🧪 Testing Progress Accuracy\n'));

	const testPRDPath = getSamplePRDPath();
	const testTasksPath = path.join(__dirname, 'test-accuracy-tasks.json');

	// Clean up existing file
	if (fs.existsSync(testTasksPath)) {
		fs.unlinkSync(testTasksPath);
	}

	const progressReporter = new DetailedProgressReporter();

	try {
		await parsePRD(testPRDPath, testTasksPath, 8, {
			force: true,
			append: false,
			research: false,
			reportProgress: progressReporter.reportProgress.bind(progressReporter),
			projectRoot: PROJECT_ROOT
		});

		console.log(chalk.green('✅ Progress accuracy test completed'));
		progressReporter.printDetailedAnalysis();

		// Additional accuracy checks
		const analysis = progressReporter.getAnalysis();
		console.log(chalk.cyan('\n=== Accuracy Metrics ==='));
		console.log(
			`Progress consistency: ${analysis.intervals.every((i) => i > 0) ? 'PASS' : 'FAIL'}`
		);
		console.log(
			`Reasonable intervals: ${analysis.intervals.every((i) => i < 30000) ? 'PASS' : 'FAIL'}`
		);
		console.log(
			`Expected report count: ${analysis.totalReports >= 8 ? 'PASS' : 'FAIL'}`
		);
	} catch (error) {
		console.error(
			chalk.red(`❌ Progress accuracy test failed: ${error.message}`)
		);
	} finally {
		// Clean up (only the tasks file, not the PRD since we're using the fixture)
		if (fs.existsSync(testTasksPath)) fs.unlinkSync(testTasksPath);
	}
}

/**
 * Main test runner
 */
async function main() {
	const args = process.argv.slice(2);
	const testType = args[0] || 'accuracy';

	console.log(chalk.bold.cyan('🚀 Task Master Detailed Progress Tests\n'));
	console.log(chalk.blue(`Test type: ${testType}\n`));

	try {
		switch (testType.toLowerCase()) {
			case 'accuracy':
				await testProgressAccuracy();
				break;

			case 'complexity':
				await testStreamingComplexity();
				break;

			case 'all':
				console.log(chalk.yellow('Running all detailed tests...\n'));
				await testProgressAccuracy();
				console.log('\n' + '='.repeat(60) + '\n');
				await testStreamingComplexity();
				break;

			default:
				console.log(chalk.red(`Unknown test type: ${testType}`));
				console.log(
					chalk.yellow('Available options: accuracy, complexity, all')
				);
				process.exit(1);
		}

		console.log(chalk.green('\n🎉 Detailed tests completed successfully!'));
	} catch (error) {
		console.error(chalk.red(`\n❌ Test failed: ${error.message}`));
		console.error(chalk.red(error.stack));
		process.exit(1);
	}
}

// Run if called directly
if (import.meta.url === `file://${process.argv[1]}`) {
	// Top-level await is available in ESM; keep compatibility with Node ≥14
	await main();
}
577  tests/manual/progress/test-parse-prd.js  (Normal file)
@@ -0,0 +1,577 @@
#!/usr/bin/env node

/**
 * test-parse-prd.js
 *
 * Comprehensive integration test for parse-prd functionality.
 * Tests MCP streaming, CLI streaming, and non-streaming modes.
 * Validates token tracking, message formats, and priority indicators across all contexts.
 */

import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';

// Get current directory
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Get project root (three levels up from tests/manual/progress/)
const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..');

// Import the parse-prd function
import parsePRD from '../../../scripts/modules/task-manager/parse-prd/index.js';

/**
 * Mock Progress Reporter for testing
 */
class MockProgressReporter {
	constructor(enableDebug = true) {
		this.enableDebug = enableDebug;
		this.progressHistory = [];
		this.startTime = Date.now();
	}

	async reportProgress(data) {
		const timestamp = Date.now() - this.startTime;

		const entry = {
			timestamp,
			...data
		};

		this.progressHistory.push(entry);

		if (this.enableDebug) {
			const percentage = data.total
				? Math.round((data.progress / data.total) * 100)
				: 0;
			console.log(
				chalk.blue(`[${timestamp}ms]`),
				chalk.green(`${percentage}%`),
				chalk.yellow(data.message)
			);
		}
	}

	getProgressHistory() {
		return this.progressHistory;
	}

	printSummary() {
		console.log(chalk.green('\n=== Progress Summary ==='));
		console.log(`Total progress reports: ${this.progressHistory.length}`);
		console.log(
			`Duration: ${this.progressHistory[this.progressHistory.length - 1]?.timestamp || 0}ms`
		);

		this.progressHistory.forEach((entry, index) => {
			const percentage = entry.total
				? Math.round((entry.progress / entry.total) * 100)
				: 0;
			console.log(
				`${index + 1}. [${entry.timestamp}ms] ${percentage}% - ${entry.message}`
			);
		});

		// Check for expected message formats
		const hasInitialMessage = this.progressHistory.some(
			(entry) =>
				entry.message.includes('Starting PRD analysis') &&
				entry.message.includes('Input:') &&
				entry.message.includes('tokens')
		);
		// Make regex more flexible to handle potential whitespace variations
		const hasTaskMessages = this.progressHistory.some((entry) =>
			/^[🔴🟠🟢⚪]{3} Task \d+\/\d+ - .+ \| ~Output: \d+ tokens/u.test(
				entry.message.trim()
			)
		);

		const hasCompletionMessage = this.progressHistory.some(
			(entry) =>
				entry.message.includes('✅ Task Generation Completed') &&
				entry.message.includes('Tokens (I/O):')
		);

		console.log(chalk.cyan('\n=== Message Format Validation ==='));
		console.log(
			`✅ Initial message format: ${hasInitialMessage ? 'PASS' : 'FAIL'}`
		);
		console.log(`✅ Task message format: ${hasTaskMessages ? 'PASS' : 'FAIL'}`);
		console.log(
			`✅ Completion message format: ${hasCompletionMessage ? 'PASS' : 'FAIL'}`
		);
	}
}

/**
 * Mock MCP Logger for testing
 */
class MockMCPLogger {
	constructor(enableDebug = true) {
		this.enableDebug = enableDebug;
		this.logs = [];
	}

	_log(level, ...args) {
		const entry = {
			level,
			timestamp: Date.now(),
			message: args.join(' ')
		};
		this.logs.push(entry);

		if (this.enableDebug) {
			const color =
				{
					info: chalk.blue,
					warn: chalk.yellow,
					error: chalk.red,
					debug: chalk.gray,
					success: chalk.green
				}[level] || chalk.white;

			console.log(color(`[${level.toUpperCase()}]`), ...args);
		}
	}

	info(...args) {
		this._log('info', ...args);
	}
	warn(...args) {
		this._log('warn', ...args);
	}
	error(...args) {
		this._log('error', ...args);
	}
	debug(...args) {
		this._log('debug', ...args);
	}
	success(...args) {
		this._log('success', ...args);
	}

	getLogs() {
		return this.logs;
	}
}

/**
 * Get the path to the sample PRD file
 */
function getSamplePRDPath() {
	return path.resolve(PROJECT_ROOT, 'tests', 'fixtures', 'sample-prd.txt');
}

/**
 * Create a basic test config file
 */
function createTestConfig() {
	const testConfig = {
		models: {
			main: {
				provider: 'anthropic',
				modelId: 'claude-3-5-sonnet',
				maxTokens: 64000,
				temperature: 0.2
			},
			research: {
				provider: 'perplexity',
				modelId: 'sonar-pro',
				maxTokens: 8700,
				temperature: 0.1
			},
			fallback: {
				provider: 'anthropic',
				modelId: 'claude-3-5-sonnet',
				maxTokens: 64000,
				temperature: 0.2
			}
		},
		global: {
			logLevel: 'info',
			debug: false,
			defaultSubtasks: 5,
			defaultPriority: 'medium',
			projectName: 'Task Master Test',
			ollamaBaseURL: 'http://localhost:11434/api',
			bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
		}
	};

	const taskmasterDir = path.join(__dirname, '.taskmaster');
	const configPath = path.join(taskmasterDir, 'config.json');

	// Create .taskmaster directory if it doesn't exist
	if (!fs.existsSync(taskmasterDir)) {
		fs.mkdirSync(taskmasterDir, { recursive: true });
	}

	fs.writeFileSync(configPath, JSON.stringify(testConfig, null, 2));
	return configPath;
}

/**
 * Setup test files and configuration
 */
function setupTestFiles(testName) {
	const testPRDPath = getSamplePRDPath();
	const testTasksPath = path.join(__dirname, `test-${testName}-tasks.json`);
	const configPath = createTestConfig();

	// Clean up existing files
	if (fs.existsSync(testTasksPath)) {
		fs.unlinkSync(testTasksPath);
	}

	return { testPRDPath, testTasksPath, configPath };
}

/**
 * Clean up test files
 */
function cleanupTestFiles(testTasksPath, configPath) {
	if (fs.existsSync(testTasksPath)) fs.unlinkSync(testTasksPath);
	if (fs.existsSync(configPath)) fs.unlinkSync(configPath);
}

/**
 * Run parsePRD with configurable options
 */
async function runParsePRD(testPRDPath, testTasksPath, numTasks, options = {}) {
	const startTime = Date.now();

	const result = await parsePRD(testPRDPath, testTasksPath, numTasks, {
		force: true,
		append: false,
		research: false,
		projectRoot: PROJECT_ROOT,
		...options
	});

	const endTime = Date.now();
	const duration = endTime - startTime;

	return { result, duration };
}

/**
 * Verify task file existence and structure
 */
function verifyTaskResults(testTasksPath) {
	if (fs.existsSync(testTasksPath)) {
		const tasksData = JSON.parse(fs.readFileSync(testTasksPath, 'utf8'));
		console.log(
			chalk.green(
				`\n✅ Tasks file created with ${tasksData.tasks.length} tasks`
			)
		);

		// Verify task structure
		const firstTask = tasksData.tasks[0];
		if (firstTask && firstTask.id && firstTask.title && firstTask.description) {
			console.log(chalk.green('✅ Task structure is valid'));
			return true;
		} else {
			console.log(chalk.red('❌ Task structure is invalid'));
			return false;
		}
	} else {
		console.log(chalk.red('❌ Tasks file was not created'));
		return false;
	}
}

/**
 * Print MCP-specific logs and validation
 */
function printMCPResults(mcpLogger, progressReporter) {
	// Print progress summary
	progressReporter.printSummary();

	// Print MCP logs
	console.log(chalk.cyan('\n=== MCP Logs ==='));
	const logs = mcpLogger.getLogs();
	logs.forEach((log, index) => {
		const color =
			{
				info: chalk.blue,
				warn: chalk.yellow,
				error: chalk.red,
				debug: chalk.gray,
				success: chalk.green
			}[log.level] || chalk.white;
		console.log(
			`${index + 1}. ${color(`[${log.level.toUpperCase()}]`)} ${log.message}`
		);
	});

	// Verify MCP-specific message formats (should use emoji indicators)
	const hasEmojiIndicators = progressReporter
		.getProgressHistory()
		.some((entry) => /[🔴🟠🟢]/u.test(entry.message));

	console.log(chalk.cyan('\n=== MCP-Specific Validation ==='));
	console.log(
		`✅ Emoji priority indicators: ${hasEmojiIndicators ? 'PASS' : 'FAIL'}`
	);

	return { hasEmojiIndicators, logs };
}

/**
 * Test MCP streaming with proper MCP context
 */
async function testMCPStreaming(numTasks = 10) {
	console.log(chalk.cyan('🧪 Testing MCP Streaming Functionality\n'));

	const { testPRDPath, testTasksPath, configPath } = setupTestFiles('mcp');
	const progressReporter = new MockProgressReporter(true);
	const mcpLogger = new MockMCPLogger(true); // Enable debug for MCP context

	try {
		console.log(chalk.yellow('Starting MCP streaming test...'));

		const { result, duration } = await runParsePRD(
			testPRDPath,
			testTasksPath,
			numTasks,
			{
				reportProgress: progressReporter.reportProgress.bind(progressReporter),
				mcpLog: mcpLogger // Add MCP context - this is the key difference
			}
		);

		console.log(
			chalk.green(`\n✅ MCP streaming test completed in ${duration}ms`)
		);

		const { hasEmojiIndicators, logs } = printMCPResults(
			mcpLogger,
			progressReporter
		);
		const isValidStructure = verifyTaskResults(testTasksPath);

		return {
			success: true,
			duration,
			progressHistory: progressReporter.getProgressHistory(),
			mcpLogs: logs,
			hasEmojiIndicators,
			result
		};
	} catch (error) {
		console.error(chalk.red(`❌ MCP streaming test failed: ${error.message}`));
		return {
			success: false,
			error: error.message
		};
	} finally {
		cleanupTestFiles(testTasksPath, configPath);
	}
}

/**
 * Test CLI streaming (no reportProgress)
 */
async function testCLIStreaming(numTasks = 10) {
	console.log(chalk.cyan('🧪 Testing CLI Streaming (No Progress Reporter)\n'));

	const { testPRDPath, testTasksPath, configPath } = setupTestFiles('cli');

	try {
		console.log(chalk.yellow('Starting CLI streaming test...'));

		// No reportProgress provided; CLI text mode uses the default streaming reporter
		const { result, duration } = await runParsePRD(
			testPRDPath,
			testTasksPath,
			numTasks
		);

		console.log(
			chalk.green(`\n✅ CLI streaming test completed in ${duration}ms`)
		);

		const isValidStructure = verifyTaskResults(testTasksPath);

		return {
			success: true,
			duration,
			result
		};
	} catch (error) {
		console.error(chalk.red(`❌ CLI streaming test failed: ${error.message}`));
		return {
			success: false,
			error: error.message
		};
	} finally {
		cleanupTestFiles(testTasksPath, configPath);
	}
}

/**
 * Test non-streaming functionality
 */
async function testNonStreaming(numTasks = 10) {
	console.log(chalk.cyan('🧪 Testing Non-Streaming Functionality\n'));

	const { testPRDPath, testTasksPath, configPath } =
		setupTestFiles('non-streaming');

	try {
		console.log(chalk.yellow('Starting non-streaming test...'));

		// Force non-streaming by not providing reportProgress
		const { result, duration } = await runParsePRD(
			testPRDPath,
			testTasksPath,
			numTasks
		);

		console.log(
			chalk.green(`\n✅ Non-streaming test completed in ${duration}ms`)
		);

		const isValidStructure = verifyTaskResults(testTasksPath);

		return {
			success: true,
			duration,
			result
		};
	} catch (error) {
		console.error(chalk.red(`❌ Non-streaming test failed: ${error.message}`));
		return {
			success: false,
			error: error.message
		};
	} finally {
		cleanupTestFiles(testTasksPath, configPath);
	}
}

/**
 * Compare results between streaming and non-streaming
 */
function compareResults(streamingResult, nonStreamingResult) {
	console.log(chalk.cyan('\n=== Results Comparison ==='));

	if (!streamingResult.success || !nonStreamingResult.success) {
		console.log(chalk.red('❌ Cannot compare - one or both tests failed'));
		return;
	}

	console.log(`Streaming duration: ${streamingResult.duration}ms`);
	console.log(`Non-streaming duration: ${nonStreamingResult.duration}ms`);

	const durationDiff = Math.abs(
		streamingResult.duration - nonStreamingResult.duration
	);
	const durationDiffPercent = Math.round(
		(durationDiff /
			Math.max(streamingResult.duration, nonStreamingResult.duration)) *
			100
	);

	console.log(
		`Duration difference: ${durationDiff}ms (${durationDiffPercent}%)`
	);

	if (streamingResult.progressHistory) {
		console.log(
			`Streaming progress reports: ${streamingResult.progressHistory.length}`
		);
	}

	console.log(chalk.green('✅ Both methods completed successfully'));
}

/**
 * Main test runner
 */
async function main() {
	const args = process.argv.slice(2);
	const testType = args[0] || 'streaming';
	const numTasks = parseInt(args[1]) || 8;

	console.log(chalk.bold.cyan('🚀 Task Master PRD Streaming Tests\n'));
	console.log(chalk.blue(`Test type: ${testType}`));
	console.log(chalk.blue(`Number of tasks: ${numTasks}\n`));

	try {
		switch (testType.toLowerCase()) {
			case 'mcp':
			case 'mcp-streaming':
				await testMCPStreaming(numTasks);
				break;

			case 'cli':
			case 'cli-streaming':
				await testCLIStreaming(numTasks);
				break;

			case 'non-streaming':
			case 'non':
				await testNonStreaming(numTasks);
				break;

			case 'both': {
				console.log(
					chalk.yellow(
						'Running both MCP streaming and non-streaming tests...\n'
					)
				);
				const mcpStreamingResult = await testMCPStreaming(numTasks);
				console.log('\n' + '='.repeat(60) + '\n');
				const nonStreamingResult = await testNonStreaming(numTasks);
				compareResults(mcpStreamingResult, nonStreamingResult);
				break;
			}

			case 'all': {
				console.log(chalk.yellow('Running all test types...\n'));
				const mcpResult = await testMCPStreaming(numTasks);
				console.log('\n' + '='.repeat(60) + '\n');
				const cliResult = await testCLIStreaming(numTasks);
				console.log('\n' + '='.repeat(60) + '\n');
				const nonStreamResult = await testNonStreaming(numTasks);

				console.log(chalk.cyan('\n=== All Tests Summary ==='));
				console.log(
					`MCP Streaming: ${mcpResult.success ? '✅ PASS' : '❌ FAIL'} ${mcpResult.hasEmojiIndicators ? '(✅ Emojis)' : '(❌ No Emojis)'}`
				);
				console.log(
					`CLI Streaming: ${cliResult.success ? '✅ PASS' : '❌ FAIL'}`
				);
				console.log(
					`Non-streaming: ${nonStreamResult.success ? '✅ PASS' : '❌ FAIL'}`
				);
				break;
			}

			default:
				console.log(chalk.red(`Unknown test type: ${testType}`));
				console.log(
					chalk.yellow(
						'Available options: mcp-streaming, cli-streaming, non-streaming, both, all'
					)
				);
				process.exit(1);
		}

		console.log(chalk.green('\n🎉 Tests completed successfully!'));
	} catch (error) {
		console.error(chalk.red(`\n❌ Test failed: ${error.message}`));
		console.error(chalk.red(error.stack));
		process.exit(1);
	}
}

// Run if called directly
if (import.meta.url === `file://${process.argv[1]}`) {
	main();
}
@@ -391,7 +391,7 @@ describe('Unified AI Services', () => {
				expect.stringContaining('Service call failed for role main')
			);
			expect(mockLog).toHaveBeenCalledWith(
-				'info',
+				'debug',
				expect.stringContaining('New AI service call with role: fallback')
			);
		});
@@ -435,7 +435,7 @@ describe('Unified AI Services', () => {
				expect.stringContaining('Service call failed for role fallback')
			);
			expect(mockLog).toHaveBeenCalledWith(
-				'info',
+				'debug',
				expect.stringContaining('New AI service call with role: research')
			);
		});

@@ -1,68 +1,470 @@
// In tests/unit/parse-prd.test.js
// Testing that parse-prd.js handles both .txt and .md files the same way
// Testing parse-prd.js file extension compatibility with real files

import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import os from 'os';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Mock the AI services to avoid real API calls
jest.unstable_mockModule(
	'../../scripts/modules/ai-services-unified.js',
	() => ({
		streamTextService: jest.fn(),
		generateObjectService: jest.fn(),
		streamObjectService: jest.fn().mockImplementation(async () => {
			return {
				get partialObjectStream() {
					return (async function* () {
						yield { tasks: [] };
						yield { tasks: [{ id: 1, title: 'Test Task', priority: 'high' }] };
					})();
				},
				object: Promise.resolve({
					tasks: [{ id: 1, title: 'Test Task', priority: 'high' }]
				})
			};
		})
	})
);

// Mock all config-manager exports comprehensively
jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
	getDebugFlag: jest.fn(() => false),
	getDefaultPriority: jest.fn(() => 'medium'),
	getMainModelId: jest.fn(() => 'test-model'),
	getResearchModelId: jest.fn(() => 'test-research-model'),
	getParametersForRole: jest.fn(() => ({ maxTokens: 1000, temperature: 0.7 })),
	getMainProvider: jest.fn(() => 'anthropic'),
	getResearchProvider: jest.fn(() => 'perplexity'),
	getFallbackProvider: jest.fn(() => 'anthropic'),
	getResponseLanguage: jest.fn(() => 'English'),
	getDefaultNumTasks: jest.fn(() => 10),
	getDefaultSubtasks: jest.fn(() => 5),
	getLogLevel: jest.fn(() => 'info'),
	getConfig: jest.fn(() => ({})),
	getAllProviders: jest.fn(() => ['anthropic', 'perplexity']),
	MODEL_MAP: {},
	VALID_PROVIDERS: ['anthropic', 'perplexity'],
	validateProvider: jest.fn(() => true),
	validateProviderModelCombination: jest.fn(() => true),
	isApiKeySet: jest.fn(() => true)
}));

// Mock utils comprehensively to prevent CLI behavior
jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
	log: jest.fn(),
	writeJSON: jest.fn(),
	enableSilentMode: jest.fn(),
	disableSilentMode: jest.fn(),
	isSilentMode: jest.fn(() => false),
	getCurrentTag: jest.fn(() => 'master'),
	ensureTagMetadata: jest.fn(),
	readJSON: jest.fn(() => ({ master: { tasks: [] } })),
	findProjectRoot: jest.fn(() => '/tmp/test'),
	resolveEnvVariable: jest.fn(() => 'mock-key'),
	findTaskById: jest.fn(() => null),
	findTaskByPattern: jest.fn(() => []),
	validateTaskId: jest.fn(() => true),
	createTask: jest.fn(() => ({ id: 1, title: 'Mock Task' })),
	sortByDependencies: jest.fn((tasks) => tasks),
	isEmpty: jest.fn(() => false),
	truncate: jest.fn((text) => text),
	slugify: jest.fn((text) => text.toLowerCase()),
	getTagFromPath: jest.fn(() => 'master'),
	isValidTag: jest.fn(() => true),
	migrateToTaggedFormat: jest.fn(() => ({ master: { tasks: [] } })),
	performCompleteTagMigration: jest.fn(),
	resolveCurrentTag: jest.fn(() => 'master'),
	getDefaultTag: jest.fn(() => 'master'),
	performMigrationIfNeeded: jest.fn()
}));

// Mock prompt manager
jest.unstable_mockModule('../../scripts/modules/prompt-manager.js', () => ({
	getPromptManager: jest.fn(() => ({
		loadPrompt: jest.fn(() => ({
			systemPrompt: 'Test system prompt',
			userPrompt: 'Test user prompt'
		}))
	}))
}));

// Mock progress/UI components to prevent real CLI UI
jest.unstable_mockModule('../../src/progress/parse-prd-tracker.js', () => ({
	createParsePrdTracker: jest.fn(() => ({
		start: jest.fn(),
		stop: jest.fn(),
		cleanup: jest.fn(),
		addTaskLine: jest.fn(),
		updateTokens: jest.fn(),
		complete: jest.fn(),
		getSummary: jest.fn().mockReturnValue({
			taskPriorities: { high: 0, medium: 0, low: 0 },
			elapsedTime: 0,
			actionVerb: 'generated'
		})
	}))
}));

jest.unstable_mockModule('../../src/ui/parse-prd.js', () => ({
	displayParsePrdStart: jest.fn(),
	displayParsePrdSummary: jest.fn()
}));

jest.unstable_mockModule('../../scripts/modules/ui.js', () => ({
	displayAiUsageSummary: jest.fn()
}));

// Mock task generation to prevent file operations
jest.unstable_mockModule(
	'../../scripts/modules/task-manager/generate-task-files.js',
	() => ({
		default: jest.fn()
	})
);

// Mock stream parser
jest.unstable_mockModule('../../src/utils/stream-parser.js', () => {
	// Define mock StreamingError class
	class StreamingError extends Error {
		constructor(message, code) {
			super(message);
			this.name = 'StreamingError';
			this.code = code;
		}
	}

	// Define mock error codes
	const STREAMING_ERROR_CODES = {
		NOT_ASYNC_ITERABLE: 'STREAMING_NOT_SUPPORTED',
		STREAM_PROCESSING_FAILED: 'STREAM_PROCESSING_FAILED',
		STREAM_NOT_ITERABLE: 'STREAM_NOT_ITERABLE'
	};

	return {
		parseStream: jest.fn(),
		StreamingError,
		STREAMING_ERROR_CODES
	};
});

// Mock other potential UI elements
jest.unstable_mockModule('ora', () => ({
	default: jest.fn(() => ({
		start: jest.fn(),
		stop: jest.fn(),
		succeed: jest.fn(),
		fail: jest.fn()
	}))
}));

jest.unstable_mockModule('chalk', () => ({
	default: {
		red: jest.fn((text) => text),
		green: jest.fn((text) => text),
		blue: jest.fn((text) => text),
		yellow: jest.fn((text) => text),
		cyan: jest.fn((text) => text),
		white: {
			bold: jest.fn((text) => text)
		}
	},
	red: jest.fn((text) => text),
	green: jest.fn((text) => text),
	blue: jest.fn((text) => text),
	yellow: jest.fn((text) => text),
	cyan: jest.fn((text) => text),
	white: {
		bold: jest.fn((text) => text)
	}
}));

// Mock boxen
jest.unstable_mockModule('boxen', () => ({
	default: jest.fn((content) => content)
}));

// Mock constants
jest.unstable_mockModule('../../src/constants/task-priority.js', () => ({
	DEFAULT_TASK_PRIORITY: 'medium',
	TASK_PRIORITY_OPTIONS: ['low', 'medium', 'high']
}));

// Mock UI indicators
jest.unstable_mockModule('../../src/ui/indicators.js', () => ({
	getPriorityIndicators: jest.fn(() => ({
		high: '🔴',
		medium: '🟡',
		low: '🟢'
	}))
}));

// Import modules after mocking
const { generateObjectService } = await import(
	'../../scripts/modules/ai-services-unified.js'
);
const parsePRD = (
	await import('../../scripts/modules/task-manager/parse-prd/parse-prd.js')
).default;

describe('parse-prd file extension compatibility', () => {
	// Test directly that the parse-prd functionality works with different extensions
	// by examining the parameter handling in mcp-server/src/tools/parse-prd.js
	let tempDir;
	let testFiles;

	test('Parameter description mentions support for .md files', () => {
		// The parameter description for 'input' in parse-prd.js includes .md files
		const description =
			'Absolute path to the PRD document file (.txt, .md, etc.)';

		// Verify the description explicitly mentions .md files
		expect(description).toContain('.md');
	});

	test('File extension validation is not restricted to .txt files', () => {
		// Check for absence of extension validation
		const fileValidator = (filePath) => {
			// Return a boolean value to ensure the test passes
			if (!filePath || filePath.length === 0) {
				return false;
	const mockTasksResponse = {
		tasks: [
			{
				id: 1,
				title: 'Test Task 1',
				description: 'First test task',
				status: 'pending',
				dependencies: [],
				priority: 'high',
				details: 'Implementation details for task 1',
				testStrategy: 'Unit tests for task 1'
			},
			{
				id: 2,
				title: 'Test Task 2',
				description: 'Second test task',
				status: 'pending',
				dependencies: [1],
				priority: 'medium',
				details: 'Implementation details for task 2',
				testStrategy: 'Integration tests for task 2'
			}
			return true;
		],
		metadata: {
			projectName: 'Test Project',
			totalTasks: 2,
			sourceFile: 'test-prd',
			generatedAt: new Date().toISOString()
		}
	};

	const samplePRDContent = `# Test Project PRD

## Overview
Build a simple task management application.

## Features
1. Create and manage tasks
2. Set task priorities
3. Track task dependencies

## Technical Requirements
- React frontend
- Node.js backend
- PostgreSQL database

## Success Criteria
- Users can create tasks successfully
- Task dependencies work correctly`;

	beforeAll(() => {
		// Create temporary directory for test files
		tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'parse-prd-test-'));

		// Create test files with different extensions
		testFiles = {
			txt: path.join(tempDir, 'test-prd.txt'),
			md: path.join(tempDir, 'test-prd.md'),
			rst: path.join(tempDir, 'test-prd.rst'),
			noExt: path.join(tempDir, 'test-prd')
		};

		// Test with different extensions
		expect(fileValidator('/path/to/prd.txt')).toBe(true);
		expect(fileValidator('/path/to/prd.md')).toBe(true);
		// Write the same content to all test files
		Object.values(testFiles).forEach((filePath) => {
			fs.writeFileSync(filePath, samplePRDContent);
		});

		// Invalid cases should still fail regardless of extension
		expect(fileValidator('')).toBe(false);
		// Mock process.exit to prevent actual exit
		jest.spyOn(process, 'exit').mockImplementation(() => undefined);

		// Mock console methods to prevent output
		jest.spyOn(console, 'log').mockImplementation(() => {});
		jest.spyOn(console, 'error').mockImplementation(() => {});
	});

	test('Implementation handles all file types the same way', () => {
		// This test confirms that the implementation treats all file types equally
		// by simulating the core functionality
	afterAll(() => {
		// Clean up temporary directory
		fs.rmSync(tempDir, { recursive: true, force: true });

		const mockImplementation = (filePath) => {
			// The parse-prd.js implementation only checks file existence,
			// not the file extension, which is what we want to verify
		// Restore mocks
		jest.restoreAllMocks();
	});

			if (!filePath) {
				return { success: false, error: { code: 'MISSING_INPUT_FILE' } };
	beforeEach(() => {
		jest.clearAllMocks();

		// Mock successful AI response
		generateObjectService.mockResolvedValue({
			mainResult: { object: mockTasksResponse },
			telemetryData: {
				timestamp: new Date().toISOString(),
				userId: 'test-user',
				commandName: 'parse-prd',
				modelUsed: 'test-model',
				providerName: 'test-provider',
				inputTokens: 100,
				outputTokens: 200,
				totalTokens: 300,
				totalCost: 0.01,
				currency: 'USD'
			}
		});
	});

			// In the real implementation, this would check if the file exists
			// But for our test, we're verifying that the same logic applies
			// regardless of file extension
	test('should accept and parse .txt files', async () => {
		const outputPath = path.join(tempDir, 'tasks-txt.json');

			// No special handling for different extensions
			return { success: true };
		};
		const result = await parsePRD(testFiles.txt, outputPath, 2, {
			force: true,
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
				error: jest.fn(),
				debug: jest.fn(),
				success: jest.fn()
			},
			projectRoot: tempDir
		});

		// Verify same behavior for different extensions
		const txtResult = mockImplementation('/path/to/prd.txt');
		const mdResult = mockImplementation('/path/to/prd.md');
		expect(result.success).toBe(true);
		expect(result.tasksPath).toBe(outputPath);
		expect(fs.existsSync(outputPath)).toBe(true);

		// Both should succeed since there's no extension-specific logic
		expect(txtResult.success).toBe(true);
		expect(mdResult.success).toBe(true);
		// Verify the content was parsed correctly
		const tasksData = JSON.parse(fs.readFileSync(outputPath, 'utf8'));
		expect(tasksData.master.tasks).toHaveLength(2);
		expect(tasksData.master.tasks[0].title).toBe('Test Task 1');
	});

		// Both should have the same structure
		expect(Object.keys(txtResult)).toEqual(Object.keys(mdResult));
	test('should accept and parse .md files', async () => {
		const outputPath = path.join(tempDir, 'tasks-md.json');

		const result = await parsePRD(testFiles.md, outputPath, 2, {
			force: true,
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
				error: jest.fn(),
				debug: jest.fn(),
				success: jest.fn()
			},
			projectRoot: tempDir
		});

		expect(result.success).toBe(true);
		expect(result.tasksPath).toBe(outputPath);
		expect(fs.existsSync(outputPath)).toBe(true);

		// Verify the content was parsed correctly
		const tasksData = JSON.parse(fs.readFileSync(outputPath, 'utf8'));
		expect(tasksData.master.tasks).toHaveLength(2);
	});

	test('should accept and parse files with other text extensions', async () => {
		const outputPath = path.join(tempDir, 'tasks-rst.json');

		const result = await parsePRD(testFiles.rst, outputPath, 2, {
			force: true,
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
				error: jest.fn(),
				debug: jest.fn(),
				success: jest.fn()
			},
			projectRoot: tempDir
		});

		expect(result.success).toBe(true);
		expect(result.tasksPath).toBe(outputPath);
		expect(fs.existsSync(outputPath)).toBe(true);
	});

	test('should accept and parse files with no extension', async () => {
		const outputPath = path.join(tempDir, 'tasks-noext.json');

		const result = await parsePRD(testFiles.noExt, outputPath, 2, {
			force: true,
			mcpLog: {
				info: jest.fn(),
				warn: jest.fn(),
				error: jest.fn(),
				debug: jest.fn(),
				success: jest.fn()
			},
			projectRoot: tempDir
		});

		expect(result.success).toBe(true);
		expect(result.tasksPath).toBe(outputPath);
		expect(fs.existsSync(outputPath)).toBe(true);
	});

	test('should produce identical results regardless of file extension', async () => {
		const outputs = {};

		// Parse each file type with a unique project root to avoid ID conflicts
		for (const [ext, filePath] of Object.entries(testFiles)) {
			// Create a unique subdirectory for each test to isolate them
			const testSubDir = path.join(tempDir, `test-${ext}`);
			fs.mkdirSync(testSubDir, { recursive: true });

			const outputPath = path.join(testSubDir, `tasks.json`);

			await parsePRD(filePath, outputPath, 2, {
				force: true,
				mcpLog: {
					info: jest.fn(),
					warn: jest.fn(),
					error: jest.fn(),
					debug: jest.fn(),
					success: jest.fn()
				},
				projectRoot: testSubDir
			});

			const tasksData = JSON.parse(fs.readFileSync(outputPath, 'utf8'));
			outputs[ext] = tasksData;
		}

		// Compare all outputs - they should be identical (except metadata timestamps)
		const baseOutput = outputs.txt;
		Object.values(outputs).forEach((output) => {
			expect(output.master.tasks).toEqual(baseOutput.master.tasks);
			expect(output.master.metadata.projectName).toEqual(
				baseOutput.master.metadata.projectName
			);
			expect(output.master.metadata.totalTasks).toEqual(
				baseOutput.master.metadata.totalTasks
			);
		});
	});

	test('should handle non-existent files gracefully', async () => {
		const nonExistentFile = path.join(tempDir, 'does-not-exist.txt');
		const outputPath = path.join(tempDir, 'tasks-error.json');

		await expect(
			parsePRD(nonExistentFile, outputPath, 2, {
				force: true,
				mcpLog: {
					info: jest.fn(),
					warn: jest.fn(),
					error: jest.fn(),
					debug: jest.fn(),
					success: jest.fn()
				},
				projectRoot: tempDir
			})
		).rejects.toThrow();
	});
});
134  tests/unit/progress/base-progress-tracker.test.js  (Normal file)
@@ -0,0 +1,134 @@
import { jest } from '@jest/globals';

// Mock cli-progress factory before importing BaseProgressTracker
jest.unstable_mockModule(
	'../../../src/progress/cli-progress-factory.js',
	() => ({
		newMultiBar: jest.fn(() => ({
			create: jest.fn(() => ({
				update: jest.fn()
			})),
			stop: jest.fn()
		}))
	})
);

const { newMultiBar } = await import(
	'../../../src/progress/cli-progress-factory.js'
);
const { BaseProgressTracker } = await import(
	'../../../src/progress/base-progress-tracker.js'
);

describe('BaseProgressTracker', () => {
	let tracker;
	let mockMultiBar;
	let mockProgressBar;
	let mockTimeTokensBar;

	beforeEach(() => {
		jest.clearAllMocks();
		jest.useFakeTimers();

		// Setup mocks
		mockProgressBar = { update: jest.fn() };
		mockTimeTokensBar = { update: jest.fn() };
		mockMultiBar = {
			create: jest
				.fn()
				.mockReturnValueOnce(mockTimeTokensBar)
				.mockReturnValueOnce(mockProgressBar),
			stop: jest.fn()
		};
		newMultiBar.mockReturnValue(mockMultiBar);

		tracker = new BaseProgressTracker({ numUnits: 10, unitName: 'task' });
	});

	afterEach(() => {
		jest.useRealTimers();
	});

	describe('cleanup', () => {
		it('should stop and clear timer interval', () => {
			tracker.start();
			expect(tracker._timerInterval).toBeTruthy();

			tracker.cleanup();
			expect(tracker._timerInterval).toBeNull();
		});

		it('should stop and null multibar reference', () => {
			tracker.start();
			expect(tracker.multibar).toBeTruthy();

			tracker.cleanup();
			expect(mockMultiBar.stop).toHaveBeenCalled();
			expect(tracker.multibar).toBeNull();
		});

		it('should null progress bar references', () => {
			tracker.start();
			expect(tracker.timeTokensBar).toBeTruthy();
			expect(tracker.progressBar).toBeTruthy();

			tracker.cleanup();
			expect(tracker.timeTokensBar).toBeNull();
			expect(tracker.progressBar).toBeNull();
		});

		it('should set finished state', () => {
			tracker.start();
			expect(tracker.isStarted).toBe(true);
			expect(tracker.isFinished).toBe(false);

			tracker.cleanup();
			expect(tracker.isStarted).toBe(false);
			expect(tracker.isFinished).toBe(true);
		});

		it('should handle cleanup when multibar.stop throws error', () => {
			tracker.start();
			mockMultiBar.stop.mockImplementation(() => {
				throw new Error('Stop failed');
			});

			expect(() => tracker.cleanup()).not.toThrow();
			expect(tracker.multibar).toBeNull();
		});

		it('should be safe to call multiple times', () => {
			tracker.start();

			tracker.cleanup();
			tracker.cleanup();
			tracker.cleanup();

			expect(mockMultiBar.stop).toHaveBeenCalledTimes(1);
		});

		it('should be safe to call without starting', () => {
			expect(() => tracker.cleanup()).not.toThrow();
			expect(tracker.multibar).toBeNull();
		});
	});

	describe('stop vs cleanup', () => {
		it('stop should call cleanup and null multibar reference', () => {
			tracker.start();
			tracker.stop();

			// stop() now calls cleanup() which nulls the multibar
			expect(tracker.multibar).toBeNull();
			expect(tracker.isFinished).toBe(true);
		});

		it('cleanup should null multibar preventing getSummary', () => {
			tracker.start();
			tracker.cleanup();

			expect(tracker.multibar).toBeNull();
			expect(tracker.isFinished).toBe(true);
		});
	});
});
@@ -79,6 +79,38 @@ jest.unstable_mockModule(
				totalCost: 0.012414,
				currency: 'USD'
			}
		}),
		streamTextService: jest.fn().mockResolvedValue({
			mainResult: async function* () {
				yield '{"tasks":[';
				yield '{"id":1,"title":"Test Task","priority":"high"}';
				yield ']}';
			},
			telemetryData: {
				timestamp: new Date().toISOString(),
				userId: '1234567890',
				commandName: 'analyze-complexity',
				modelUsed: 'claude-3-5-sonnet',
				providerName: 'anthropic',
				inputTokens: 1000,
				outputTokens: 500,
				totalTokens: 1500,
				totalCost: 0.012414,
				currency: 'USD'
			}
		}),
		streamObjectService: jest.fn().mockImplementation(async () => {
			return {
				get partialObjectStream() {
					return (async function* () {
						yield { tasks: [] };
						yield { tasks: [{ id: 1, title: 'Test Task', priority: 'high' }] };
					})();
				},
				object: Promise.resolve({
					tasks: [{ id: 1, title: 'Test Task', priority: 'high' }]
				})
			};
		})
	})
);
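The mocked `streamTextService` resolves to an object whose `mainResult` is an async generator function yielding JSON fragments, alongside `telemetryData`. A hedged consumer sketch of how code under test could drain such a stream is shown below; the function name and the parameter object are hypothetical, and the real service's response shape may differ from this mock.

```javascript
// Hypothetical consumer sketch — mirrors the mock above, not the real service contract.
async function collectStreamedTasks(streamTextService) {
	// Parameters here are placeholders; only the response handling matters.
	const { mainResult, telemetryData } = await streamTextService({ prompt: '...' });

	// mainResult is an async generator function in the mock, so call it and iterate.
	let buffer = '';
	for await (const chunk of mainResult()) {
		buffer += chunk; // '{"tasks":[' + '{"id":1,...}' + ']}'
	}

	// The concatenated chunks form one JSON document.
	return { tasks: JSON.parse(buffer).tasks, telemetryData };
}
```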
@@ -189,9 +221,8 @@ const { readJSON, writeJSON, log, CONFIG, findTaskById } = await import(
	'../../../../../scripts/modules/utils.js'
);

-const { generateObjectService, generateTextService } = await import(
-	'../../../../../scripts/modules/ai-services-unified.js'
-);
+const { generateObjectService, generateTextService, streamTextService } =
+	await import('../../../../../scripts/modules/ai-services-unified.js');

const fs = await import('fs');

@@ -178,6 +178,24 @@ jest.unstable_mockModule(
			});
		}
	}),
	streamTextService: jest.fn().mockResolvedValue({
		mainResult: async function* () {
			yield '{"tasks":[';
			yield '{"id":1,"title":"Test Task","priority":"high"}';
			yield ']}';
		},
		telemetryData: {
			timestamp: new Date().toISOString(),
			commandName: 'analyze-complexity',
			modelUsed: 'claude-3-5-sonnet',
			providerName: 'anthropic',
			inputTokens: 1000,
			outputTokens: 500,
			totalTokens: 1500,
			totalCost: 0.012414,
			currency: 'USD'
		}
	}),
	generateObjectService: jest.fn().mockResolvedValue({
		mainResult: {
			object: {
@@ -402,7 +420,7 @@ const { readJSON, writeJSON, getTagAwareFilePath } = await import(
	'../../../../../scripts/modules/utils.js'
);

-const { generateTextService } = await import(
+const { generateTextService, streamTextService } = await import(
	'../../../../../scripts/modules/ai-services-unified.js'
);

File diff suppressed because it is too large
169
tests/unit/ui/indicators.test.js
Normal file
@@ -0,0 +1,169 @@
/**
 * Unit tests for indicators module (priority and complexity indicators)
 */
import { jest } from '@jest/globals';

// Mock chalk using unstable_mockModule for ESM compatibility
jest.unstable_mockModule('chalk', () => ({
	default: {
		red: jest.fn((str) => str),
		yellow: jest.fn((str) => str),
		green: jest.fn((str) => str),
		white: jest.fn((str) => str),
		hex: jest.fn(() => jest.fn((str) => str))
	}
}));

// Import after mocking
const {
	getMcpPriorityIndicators,
	getCliPriorityIndicators,
	getPriorityIndicators,
	getPriorityIndicator,
	getStatusBarPriorityIndicators,
	getPriorityColors,
	getCliComplexityIndicators,
	getStatusBarComplexityIndicators,
	getComplexityColors,
	getComplexityIndicator
} = await import('../../../src/ui/indicators.js');

describe('Priority Indicators', () => {
	describe('getMcpPriorityIndicators', () => {
		it('should return emoji indicators for MCP context', () => {
			const indicators = getMcpPriorityIndicators();
			expect(indicators).toEqual({
				high: '🔴',
				medium: '🟠',
				low: '🟢'
			});
		});
	});

	describe('getCliPriorityIndicators', () => {
		it('should return colored dot indicators for CLI context', () => {
			const indicators = getCliPriorityIndicators();
			expect(indicators).toHaveProperty('high');
			expect(indicators).toHaveProperty('medium');
			expect(indicators).toHaveProperty('low');
			// Since chalk is mocked, we're just verifying structure
			expect(indicators.high).toContain('●');
		});
	});

	describe('getPriorityIndicators', () => {
		it('should return MCP indicators when isMcp is true', () => {
			const indicators = getPriorityIndicators(true);
			expect(indicators).toEqual({
				high: '🔴',
				medium: '🟠',
				low: '🟢'
			});
		});

		it('should return CLI indicators when isMcp is false', () => {
			const indicators = getPriorityIndicators(false);
			expect(indicators).toHaveProperty('high');
			expect(indicators).toHaveProperty('medium');
			expect(indicators).toHaveProperty('low');
		});

		it('should default to CLI indicators when no parameter provided', () => {
			const indicators = getPriorityIndicators();
			expect(indicators).toHaveProperty('high');
			expect(indicators.high).toContain('●');
		});
	});

	describe('getPriorityIndicator', () => {
		it('should return correct MCP indicator for valid priority', () => {
			expect(getPriorityIndicator('high', true)).toBe('🔴');
			expect(getPriorityIndicator('medium', true)).toBe('🟠');
			expect(getPriorityIndicator('low', true)).toBe('🟢');
		});

		it('should return correct CLI indicator for valid priority', () => {
			const highIndicator = getPriorityIndicator('high', false);
			const mediumIndicator = getPriorityIndicator('medium', false);
			const lowIndicator = getPriorityIndicator('low', false);

			expect(highIndicator).toContain('●');
			expect(mediumIndicator).toContain('●');
			expect(lowIndicator).toContain('●');
		});

		it('should return medium indicator for invalid priority', () => {
			expect(getPriorityIndicator('invalid', true)).toBe('🟠');
			expect(getPriorityIndicator(null, true)).toBe('🟠');
			expect(getPriorityIndicator(undefined, true)).toBe('🟠');
		});

		it('should default to CLI context when isMcp not provided', () => {
			const indicator = getPriorityIndicator('high');
			expect(indicator).toContain('●');
		});
	});
});

describe('Complexity Indicators', () => {
	describe('getCliComplexityIndicators', () => {
		it('should return colored dot indicators for complexity levels', () => {
			const indicators = getCliComplexityIndicators();
			expect(indicators).toHaveProperty('high');
			expect(indicators).toHaveProperty('medium');
			expect(indicators).toHaveProperty('low');
			expect(indicators.high).toContain('●');
		});
	});

	describe('getStatusBarComplexityIndicators', () => {
		it('should return single character indicators for status bars', () => {
			const indicators = getStatusBarComplexityIndicators();
			// Since chalk is mocked, we need to check for the actual characters
			expect(indicators.high).toContain('⋮');
			expect(indicators.medium).toContain(':');
			expect(indicators.low).toContain('.');
		});
	});

	describe('getComplexityColors', () => {
		it('should return complexity color functions', () => {
			const colors = getComplexityColors();
			expect(colors).toHaveProperty('high');
			expect(colors).toHaveProperty('medium');
			expect(colors).toHaveProperty('low');
			// Verify they are functions (mocked chalk functions)
			expect(typeof colors.high).toBe('function');
		});
	});

	describe('getComplexityIndicator', () => {
		it('should return high indicator for scores >= 7', () => {
			const cliIndicators = getCliComplexityIndicators();
			expect(getComplexityIndicator(7)).toBe(cliIndicators.high);
			expect(getComplexityIndicator(8)).toBe(cliIndicators.high);
			expect(getComplexityIndicator(10)).toBe(cliIndicators.high);
		});

		it('should return low indicator for scores <= 3', () => {
			const cliIndicators = getCliComplexityIndicators();
			expect(getComplexityIndicator(1)).toBe(cliIndicators.low);
			expect(getComplexityIndicator(2)).toBe(cliIndicators.low);
			expect(getComplexityIndicator(3)).toBe(cliIndicators.low);
		});

		it('should return medium indicator for scores 4-6', () => {
			const cliIndicators = getCliComplexityIndicators();
			expect(getComplexityIndicator(4)).toBe(cliIndicators.medium);
			expect(getComplexityIndicator(5)).toBe(cliIndicators.medium);
			expect(getComplexityIndicator(6)).toBe(cliIndicators.medium);
		});

		it('should return status bar indicators when statusBar is true', () => {
			const statusBarIndicators = getStatusBarComplexityIndicators();
			expect(getComplexityIndicator(8, true)).toBe(statusBarIndicators.high);
			expect(getComplexityIndicator(5, true)).toBe(statusBarIndicators.medium);
			expect(getComplexityIndicator(2, true)).toBe(statusBarIndicators.low);
		});
	});
});
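The threshold cases above (>= 7 high, <= 3 low, 4-6 medium, with a status-bar variant) imply a small mapping function. Below is a sketch consistent with those tests, assuming the indicator getters exercised above; the actual src/ui/indicators.js implementation may be structured differently.

```javascript
// Hypothetical sketch — one implementation that would satisfy the threshold tests above.
function getComplexityIndicatorSketch(score, statusBar = false) {
	// Pick the indicator set based on the rendering context.
	const indicators = statusBar
		? getStatusBarComplexityIndicators()
		: getCliComplexityIndicators();

	// Map the numeric complexity score onto the three buckets used by the tests.
	if (score >= 7) return indicators.high;
	if (score <= 3) return indicators.low;
	return indicators.medium;
}
```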