Compare commits
1 commit: task-maste...chore/pimp

| Author | SHA1 | Date |
|---|---|---|
|  | 74a1abc3b2 |  |
@@ -1,38 +0,0 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check whether the GitHub issue (a) is closed, (b) does not need to be deduped (e.g. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view the GitHub issue, and ask the agent to return a summary of it.
3. Then, launch 5 parallel agents to search GitHub for duplicates of this issue, using diverse keywords and search approaches based on the summary from #2.
4. Next, feed the results from #2 and #3 into another agent so that it can filter out false positives that are likely not actual duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates).

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with GitHub, rather than web fetch
- Do not use other tools beyond `gh` (e.g. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow this format precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with [Claude Code](https://claude.ai/code)

---
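Step 3's parallel searches map naturally onto `gh search issues`. A minimal sketch of one such query — the repository and keywords here are placeholders, not part of the command definition above:

```bash
# Illustrative only: one of several keyword variations a search agent might try.
gh search issues --repo owner/repo "login fails with OAuth" \
  --state open --limit 10 --json number,title,url
```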
.claude/commands/tm/clear-subtasks/clear-all-subtasks.md (new file, 93 lines)
@@ -0,0 +1,93 @@
Clear all subtasks from all tasks globally.

## Global Subtask Clearing

Remove all subtasks across the entire project. Use with extreme caution.

## Execution

```bash
task-master clear-subtasks --all
```

## Pre-Clear Analysis

1. **Project-Wide Summary**

   ```
   Global Subtask Summary
   ━━━━━━━━━━━━━━━━━━━━
   Total parent tasks: 12
   Total subtasks: 47
   - Completed: 15
   - In-progress: 8
   - Pending: 24

   Work at risk: ~120 hours
   ```

2. **Critical Warnings**
   - In-progress subtasks that will lose work
   - Completed subtasks with valuable history
   - Complex dependency chains
   - Integration test results

## Double Confirmation

```
⚠️ DESTRUCTIVE OPERATION WARNING ⚠️
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This will remove ALL 47 subtasks from your project
Including 8 in-progress and 15 completed subtasks

This action CANNOT be undone

Type 'CLEAR ALL SUBTASKS' to confirm:
```

## Smart Safeguards

- Require explicit confirmation phrase
- Create automatic backup
- Log all removed data
- Option to export first

## Use Cases

Valid reasons for global clear:
- Project restructuring
- Major pivot in approach
- Starting fresh breakdown
- Switching to different task organization

## Process

1. Full project analysis
2. Create backup file
3. Show detailed impact
4. Require confirmation
5. Execute removal
6. Generate summary report

## Alternative Suggestions

Before clearing all:
- Export subtasks to file
- Clear only pending subtasks
- Clear by task category
- Archive instead of delete

## Post-Clear Report

```
Global Subtask Clear Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Removed: 47 subtasks from 12 tasks
Backup saved: .taskmaster/backup/subtasks-20240115.json
Parent tasks updated: 12
Time estimates adjusted: Yes

Next steps:
- Review updated task list
- Re-expand complex tasks as needed
- Check project timeline
```
.claude/commands/tm/clear-subtasks/clear-subtasks.md (new file, 86 lines)
@@ -0,0 +1,86 @@
Clear all subtasks from a specific task.

Arguments: $ARGUMENTS (task ID)

Remove all subtasks from a parent task at once.

## Clearing Subtasks

Bulk removal of all subtasks from a parent task.

## Execution

```bash
task-master clear-subtasks --id=<task-id>
```

## Pre-Clear Analysis

1. **Subtask Summary**
   - Number of subtasks
   - Completion status of each
   - Work already done
   - Dependencies affected

2. **Impact Assessment**
   - Data that will be lost
   - Dependencies to be removed
   - Effect on project timeline
   - Parent task implications

## Confirmation Required

```
Clear Subtasks Confirmation
━━━━━━━━━━━━━━━━━━━━━━━━━
Parent Task: #5 "Implement user authentication"
Subtasks to remove: 4
- #5.1 "Setup auth framework" (done)
- #5.2 "Create login form" (in-progress)
- #5.3 "Add validation" (pending)
- #5.4 "Write tests" (pending)

⚠️ This will permanently delete all subtask data
Continue? (y/n)
```

## Smart Features

- Option to convert to standalone tasks
- Backup task data before clearing
- Preserve completed work history
- Update parent task appropriately

## Process

1. List all subtasks for confirmation
2. Check for in-progress work
3. Remove all subtasks
4. Update parent task
5. Clean up dependencies

## Alternative Options

Suggest alternatives:
- Convert important subtasks to tasks
- Keep completed subtasks
- Archive instead of delete
- Export subtask data first

## Post-Clear

- Show updated parent task
- Recalculate time estimates
- Update task complexity
- Suggest next steps

## Example

```
/project:tm/clear-subtasks 5
→ Found 4 subtasks to remove
→ Warning: Subtask #5.2 is in-progress
→ Cleared all subtasks from task #5
→ Updated parent task estimates
→ Suggestion: Consider re-expanding with better breakdown
```
.github/scripts/auto-close-duplicates.mjs (vendored, 259 lines)
@@ -1,259 +0,0 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
  const response = await fetch(`https://api.github.com${endpoint}`, {
    method,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github.v3+json',
      'User-Agent': 'auto-close-duplicates-script',
      ...(body && { 'Content-Type': 'application/json' })
    },
    ...(body && { body: JSON.stringify(body) })
  });

  if (!response.ok) {
    throw new Error(
      `GitHub API request failed: ${response.status} ${response.statusText}`
    );
  }

  return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
  const match = commentBody.match(/#(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}

async function closeIssueAsDuplicate(
  owner,
  repo,
  issueNumber,
  duplicateOfNumber,
  token
) {
  await githubRequest(
    `/repos/${owner}/${repo}/issues/${issueNumber}`,
    token,
    'PATCH',
    {
      state: 'closed',
      state_reason: 'not_planned',
      labels: ['duplicate']
    }
  );

  await githubRequest(
    `/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
    token,
    'POST',
    {
      body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
    }
  );
}

async function autoCloseDuplicates() {
  console.log('[DEBUG] Starting auto-close duplicates script');

  const token = process.env.GITHUB_TOKEN;
  if (!token) {
    throw new Error('GITHUB_TOKEN environment variable is required');
  }
  console.log('[DEBUG] GitHub token found');

  const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
  const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
  console.log(`[DEBUG] Repository: ${owner}/${repo}`);

  const threeDaysAgo = new Date();
  threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
  console.log(
    `[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
  );

  console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
  const allIssues = [];
  let page = 1;
  const perPage = 100;

  const MAX_PAGES = 50; // Increase limit for larger repos
  let foundRecentIssue = false;

  while (true) {
    const pageIssues = await githubRequest(
      `/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
      token
    );

    if (pageIssues.length === 0) break;

    // Filter for issues created more than 3 days ago
    const oldEnoughIssues = pageIssues.filter(
      (issue) => new Date(issue.created_at) <= threeDaysAgo
    );

    allIssues.push(...oldEnoughIssues);

    // If all issues on this page are newer than 3 days, we can stop
    if (oldEnoughIssues.length === 0 && page === 1) {
      foundRecentIssue = true;
      break;
    }

    // If we found some old issues but not all, continue to the next page,
    // as there might be more old issues
    page++;

    // Safety limit to avoid infinite loops
    if (page > MAX_PAGES) {
      console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
      break;
    }
  }

  const issues = allIssues;
  console.log(`[DEBUG] Found ${issues.length} open issues`);

  let processedCount = 0;
  let candidateCount = 0;

  for (const issue of issues) {
    processedCount++;
    console.log(
      `[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
    );

    console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
    const comments = await githubRequest(
      `/repos/${owner}/${repo}/issues/${issue.number}/comments`,
      token
    );
    console.log(
      `[DEBUG] Issue #${issue.number} has ${comments.length} comments`
    );

    const dupeComments = comments.filter(
      (comment) =>
        comment.body.includes('Found') &&
        comment.body.includes('possible duplicate') &&
        comment.user.type === 'Bot'
    );
    console.log(
      `[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
    );

    if (dupeComments.length === 0) {
      console.log(
        `[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
      );
      continue;
    }

    const lastDupeComment = dupeComments[dupeComments.length - 1];
    const dupeCommentDate = new Date(lastDupeComment.created_at);
    console.log(
      `[DEBUG] Issue #${issue.number} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
    );

    if (dupeCommentDate > threeDaysAgo) {
      console.log(
        `[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
      );
      continue;
    }
    console.log(
      `[DEBUG] Issue #${issue.number} - duplicate comment is old enough (${Math.floor(
        (Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
      )} days)`
    );

    const commentsAfterDupe = comments.filter(
      (comment) => new Date(comment.created_at) > dupeCommentDate
    );
    console.log(
      `[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
    );

    if (commentsAfterDupe.length > 0) {
      console.log(
        `[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
      );
      continue;
    }

    console.log(
      `[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
    );
    const reactions = await githubRequest(
      `/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
      token
    );
    console.log(
      `[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
    );

    const authorThumbsDown = reactions.some(
      (reaction) =>
        reaction.user.id === issue.user.id && reaction.content === '-1'
    );
    console.log(
      `[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
    );

    if (authorThumbsDown) {
      console.log(
        `[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
      );
      continue;
    }

    const duplicateIssueNumber = extractDuplicateIssueNumber(
      lastDupeComment.body
    );
    if (!duplicateIssueNumber) {
      console.log(
        `[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
      );
      continue;
    }

    candidateCount++;
    const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

    try {
      console.log(
        `[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
      );
      await closeIssueAsDuplicate(
        owner,
        repo,
        issue.number,
        duplicateIssueNumber,
        token
      );
      console.log(
        `[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
      );
    } catch (error) {
      console.error(
        `[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
      );
    }
  }

  console.log(
    `[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
  );
}

autoCloseDuplicates().catch(console.error);
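For reference, the deleted script was also runnable locally; a minimal sketch, assuming a token with permission to close issues (the owner/name variables are optional and default to the upstream repository):

```bash
# Sketch: local invocation; all env vars appear in the script itself.
GITHUB_TOKEN=<token> \
GITHUB_REPOSITORY_OWNER=eyaltoledano \
GITHUB_REPOSITORY_NAME=claude-task-master \
node .github/scripts/auto-close-duplicates.mjs
```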
.github/scripts/backfill-duplicate-comments.mjs (vendored, 178 lines)
@@ -1,178 +0,0 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
  const response = await fetch(`https://api.github.com${endpoint}`, {
    method,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github.v3+json',
      'User-Agent': 'backfill-duplicate-comments-script',
      ...(body && { 'Content-Type': 'application/json' })
    },
    ...(body && { body: JSON.stringify(body) })
  });

  if (!response.ok) {
    throw new Error(
      `GitHub API request failed: ${response.status} ${response.statusText}`
    );
  }

  return response.json();
}

async function triggerDedupeWorkflow(
  owner,
  repo,
  issueNumber,
  token,
  dryRun = true
) {
  if (dryRun) {
    console.log(
      `[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
    );
    return;
  }

  await githubRequest(
    `/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
    token,
    'POST',
    {
      ref: 'main',
      inputs: {
        issue_number: issueNumber.toString()
      }
    }
  );
}

async function backfillDuplicateComments() {
  console.log('[DEBUG] Starting backfill duplicate comments script');

  const token = process.env.GITHUB_TOKEN;
  if (!token) {
    throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
  node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
  GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
  DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
  DAYS_BACK - How many days back to look for old issues (default: 90)`);
  }
  console.log('[DEBUG] GitHub token found');

  const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
  const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
  const dryRun = process.env.DRY_RUN !== 'false';
  const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);

  console.log(`[DEBUG] Repository: ${owner}/${repo}`);
  console.log(`[DEBUG] Dry run mode: ${dryRun}`);
  console.log(`[DEBUG] Looking back ${daysBack} days`);

  const cutoffDate = new Date();
  cutoffDate.setDate(cutoffDate.getDate() - daysBack);

  console.log(
    `[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
  );
  const allIssues = [];
  let page = 1;
  const perPage = 100;

  while (true) {
    const pageIssues = await githubRequest(
      `/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
      token
    );

    if (pageIssues.length === 0) break;

    allIssues.push(...pageIssues);
    page++;

    // Safety limit to avoid infinite loops
    if (page > 100) {
      console.log('[DEBUG] Reached page limit, stopping pagination');
      break;
    }
  }

  console.log(
    `[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
  );

  let processedCount = 0;
  let candidateCount = 0;
  let triggeredCount = 0;

  for (const issue of allIssues) {
    processedCount++;
    console.log(
      `[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
    );

    console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
    const comments = await githubRequest(
      `/repos/${owner}/${repo}/issues/${issue.number}/comments`,
      token
    );
    console.log(
      `[DEBUG] Issue #${issue.number} has ${comments.length} comments`
    );

    // Look for existing duplicate detection comments (from the dedupe bot)
    const dupeDetectionComments = comments.filter(
      (comment) =>
        comment.body.includes('Found') &&
        comment.body.includes('possible duplicate') &&
        comment.user.type === 'Bot'
    );

    console.log(
      `[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
    );

    // Skip if there's already a duplicate detection comment
    if (dupeDetectionComments.length > 0) {
      console.log(
        `[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
      );
      continue;
    }

    candidateCount++;
    const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

    try {
      console.log(
        `[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
      );
      await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);

      if (!dryRun) {
        console.log(
          `[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
        );
      }
      triggeredCount++;
    } catch (error) {
      console.error(
        `[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
      );
    }

    // Add a delay between workflow triggers to avoid overwhelming the system
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }

  console.log(
    `[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
  );
}

backfillDuplicateComments().catch(console.error);
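The script's own usage text implies this local flow; a sketch (the token value is a placeholder):

```bash
# Dry run first (DRY_RUN defaults to true), then dispatch workflows for real.
GITHUB_TOKEN=<token> DAYS_BACK=30 node .github/scripts/backfill-duplicate-comments.mjs
GITHUB_TOKEN=<token> DAYS_BACK=30 DRY_RUN=false node .github/scripts/backfill-duplicate-comments.mjs
```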
.github/scripts/pre-release.mjs (vendored, executable file, 54 lines)
@@ -0,0 +1,54 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import {
  findRootDir,
  runCommand,
  getPackageVersion,
  createAndPushTag
} from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);
const extensionPkgPath = join(rootDir, 'apps', 'extension', 'package.json');

console.log('🚀 Starting pre-release process...');

// Check if we're in RC mode
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (!existsSync(preJsonPath)) {
  console.error('⚠️ Not in RC mode. Run "npx changeset pre enter rc" first.');
  process.exit(1);
}

try {
  const preJson = JSON.parse(readFileSync(preJsonPath, 'utf8'));
  if (preJson.tag !== 'rc') {
    console.error(`⚠️ Not in RC mode. Current tag: ${preJson.tag}`);
    process.exit(1);
  }
} catch (error) {
  console.error('Failed to read pre.json:', error.message);
  process.exit(1);
}

// Get current extension version
const extensionVersion = getPackageVersion(extensionPkgPath);
console.log(`Extension version: ${extensionVersion}`);

// Run changeset publish for npm packages
console.log('📦 Publishing npm packages...');
runCommand('npx', ['changeset', 'publish']);

// Create tag for extension pre-release if it doesn't exist
const extensionTag = `extension-rc@${extensionVersion}`;
const tagCreated = createAndPushTag(extensionTag);

if (tagCreated) {
  console.log('This will trigger the extension-pre-release workflow...');
}

console.log('✅ Pre-release process completed!');
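Per the script's own check, the expected flow looks roughly like this:

```bash
# Enter changesets pre-release mode (writes .changeset/pre.json with tag "rc"),
# then run the script; in CI it is invoked via changesets/action instead.
npx changeset pre enter rc
node .github/scripts/pre-release.mjs
```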
.github/workflows/auto-close-duplicates.yml (vendored, 31 lines)
@@ -1,31 +0,0 @@
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues

on:
  schedule:
    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
  workflow_dispatch:

jobs:
  auto-close-duplicates:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write # Need write permission to close issues and add comments

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Auto-close duplicate issues
        run: node .github/scripts/auto-close-duplicates.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
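Because the workflow also declared `workflow_dispatch`, it could be kicked off manually outside the daily schedule; a sketch:

```bash
# Manual dispatch of the scheduled auto-close run.
gh workflow run auto-close-duplicates.yml
```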
@@ -1,46 +0,0 @@
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments

on:
  workflow_dispatch:
    inputs:
      days_back:
        description: "How many days back to look for old issues"
        required: false
        default: "90"
        type: string
      dry_run:
        description: "Dry run mode (true to only log what would be done)"
        required: false
        default: "true"
        type: choice
        options:
          - "true"
          - "false"

jobs:
  backfill-duplicate-comments:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: read
      issues: read
      actions: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Backfill duplicate comments
        run: node .github/scripts/backfill-duplicate-comments.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
          DAYS_BACK: ${{ inputs.days_back }}
          DRY_RUN: ${{ inputs.dry_run }}
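A manual dispatch with both inputs might look like this (the workflow's filename is not shown in this capture, so it is referenced by its display name):

```bash
# Dry-run backfill over the last 30 days; the inputs match the workflow definition above.
gh workflow run "Backfill Duplicate Comments" -f days_back=30 -f dry_run=true
```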
.github/workflows/claude-dedupe-issues.yml (vendored, 81 lines)
@@ -1,81 +0,0 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code

on:
  issues:
    types: [opened]
  workflow_dispatch:
    inputs:
      issue_number:
        description: "Issue number to process for duplicate detection"
        required: true
        type: string

jobs:
  claude-dedupe-issues:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code slash command
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Log duplicate comment event to Statsig
        if: always()
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
          REPO=${{ github.repository }}

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg triggered_by "${{ github.event_name }}" \
            '{
              events: [{
                eventName: "github_duplicate_comment_added",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  triggered_by: $triggered_by,
                  workflow_run_id: "${{ github.run_id }}"
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
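The `workflow_dispatch` input made one-off reprocessing possible; a sketch with a placeholder issue number:

```bash
# Re-run duplicate detection for a single issue.
gh workflow run claude-dedupe-issues.yml -f issue_number=123
```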
.github/workflows/claude-docs-updater.yml (vendored, 156 lines)
@@ -1,156 +0,0 @@
name: Claude Documentation Updater

on:
  push:
    branches:
      - next
    paths-ignore:
      - "apps/docs/**"
      - "*.md"
      - ".github/workflows/**"

jobs:
  update-docs:
    # Only run if changes were merged (not direct pushes from bots)
    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need previous commit for comparison

      - name: Get changed files
        id: changed-files
        run: |
          echo "Changed files in this push:"
          git diff --name-only HEAD^ HEAD | tee changed_files.txt

          # Store changed files for Claude to analyze
          echo "changed_files<<EOF" >> $GITHUB_OUTPUT
          git diff --name-only HEAD^ HEAD >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

          # Get the commit message and changes summary
          echo "commit_message<<EOF" >> $GITHUB_OUTPUT
          git log -1 --pretty=%B >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

          # Get diff for documentation context
          echo "commit_diff<<EOF" >> $GITHUB_OUTPUT
          git diff HEAD^ HEAD --stat >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Create docs update branch
        id: create-branch
        run: |
          BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
          git checkout -b $BRANCH_NAME
          echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT

      - name: Run Claude Code to Update Documentation
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          timeout_minutes: "30"
          mode: "agent"
          github_token: ${{ secrets.GITHUB_TOKEN }}
          experimental_allowed_domains: |
            .anthropic.com
            .github.com
            api.github.com
            .githubusercontent.com
            registry.npmjs.org
            .task-master.dev
          base_branch: "next"
          direct_prompt: |
            You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.

            Recent changes:
            - Commit: ${{ steps.changed-files.outputs.commit_message }}
            - Changed files:
            ${{ steps.changed-files.outputs.changed_files }}

            - Changes summary:
            ${{ steps.changed-files.outputs.commit_diff }}

            Your task:
            1. Analyze the changes to understand what functionality was added, modified, or removed
            2. Check if these changes require documentation updates in apps/docs/
            3. If documentation updates are needed:
               - Update relevant documentation files in apps/docs/
               - Ensure examples are updated if APIs changed
               - Update any configuration documentation if config options changed
               - Add new documentation pages if new features were added
               - Update the changelog or release notes if applicable
            4. If no documentation updates are needed, skip creating changes

            Guidelines:
            - Focus only on user-facing changes that need documentation
            - Keep documentation clear, concise, and helpful
            - Include code examples where appropriate
            - Maintain consistent documentation style with existing docs
            - Don't document internal implementation details unless they affect users
            - Update navigation/menu files if new pages are added

            Only make changes if the documentation truly needs updating based on the code changes.

      - name: Check if changes were made
        id: check-changes
        run: |
          if git diff --quiet; then
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "has_changes=true" >> $GITHUB_OUTPUT
            git add -A
            git config --local user.email "github-actions[bot]@users.noreply.github.com"
            git config --local user.name "github-actions[bot]"
            git commit -m "docs: auto-update documentation based on changes in next branch

          This PR was automatically generated to update documentation based on recent changes.

          Original commit: ${{ steps.changed-files.outputs.commit_message }}

          Co-authored-by: Claude <claude-assistant@anthropic.com>"
          fi

      - name: Push changes and create PR
        if: steps.check-changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          git push origin ${{ steps.create-branch.outputs.branch_name }}

          # Create PR using GitHub CLI
          gh pr create \
            --title "docs: update documentation for recent changes" \
            --body "## 📚 Documentation Update

          This PR automatically updates documentation based on recent changes merged to the \`next\` branch.

          ### Original Changes
          **Commit:** ${{ github.sha }}
          **Message:** ${{ steps.changed-files.outputs.commit_message }}

          ### Changed Files in Original Commit
          \`\`\`
          ${{ steps.changed-files.outputs.changed_files }}
          \`\`\`

          ### Documentation Updates
          This PR includes documentation updates to reflect the changes above. Please review to ensure:
          - [ ] Documentation accurately reflects the changes
          - [ ] Examples are correct and working
          - [ ] No important details are missing
          - [ ] Style is consistent with existing documentation

          ---
          *This PR was automatically generated by Claude Code GitHub Action*" \
            --base next \
            --head ${{ steps.create-branch.outputs.branch_name }} \
            --label "documentation" \
            --label "automated"
.github/workflows/claude-issue-triage.yml (vendored, 107 lines)
@@ -1,107 +0,0 @@
name: Claude Issue Triage
# description: Automatically triage GitHub issues using Claude Code

on:
  issues:
    types: [opened]

jobs:
  triage-issue:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Create triage prompt
        run: |
          mkdir -p /tmp/claude-prompts
          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

          Issue Information:
          - REPO: ${{ github.repository }}
          - ISSUE_NUMBER: ${{ github.event.issue.number }}

          TASK OVERVIEW:

          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

          2. Next, use the GitHub tools to get context about the issue:
             - You have access to these tools:
               - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
               - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
               - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
               - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
               - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
             - Start by using mcp__github__get_issue to get the issue details

          3. Analyze the issue content, considering:
             - The issue title and description
             - The type of issue (bug report, feature request, question, etc.)
             - Technical areas mentioned
             - Severity or priority indicators
             - User impact
             - Components affected

          4. Select appropriate labels from the available labels list provided above:
             - Choose labels that accurately reflect the issue's nature
             - Be specific but comprehensive
             - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
             - Consider platform labels (android, ios) if applicable
             - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

          5. Apply the selected labels:
             - Use mcp__github__update_issue to apply your selected labels
             - DO NOT post any comments explaining your decision
             - DO NOT communicate directly with users
             - If no labels are clearly applicable, do not apply any labels

          IMPORTANT GUIDELINES:
          - Be thorough in your analysis
          - Only select labels from the provided list above
          - DO NOT post any comments to the issue
          - Your ONLY action should be to apply labels using mcp__github__update_issue
          - It's okay to not add any labels if none are clearly applicable
          EOF

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt_file: /tmp/claude-prompts/triage-prompt.txt
          allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
          timeout_minutes: "5"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          mcp_config: /tmp/mcp-config/mcp-servers.json
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/claude.yml (vendored, 36 lines)
@@ -1,36 +0,0 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
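Given the trigger conditions above, any `@claude` mention on an issue or PR started a run; for example (the issue number is a placeholder):

```bash
# Mentioning @claude in an issue comment triggers the workflow.
gh issue comment 123 --body "@claude can you investigate this stack trace?"
```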
.github/workflows/extension-pre-release.yml (vendored, new file, 110 lines)
@@ -0,0 +1,110 @@
name: Extension Pre-Release

on:
  push:
    tags:
      - "extension-rc@*"

permissions:
  contents: write

concurrency: extension-pre-release-${{ github.ref }}

jobs:
  publish-extension-rc:
    runs-on: ubuntu-latest
    environment: extension-release
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Extension Dependencies
        working-directory: apps/extension
        run: npm ci
        timeout-minutes: 5

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Create VSIX Package (Pre-Release)
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies --pre-release
        env:
          FORCE_COLOR: 1

      - name: Get VSIX filename
        id: vsix-info
        working-directory: apps/extension/vsix-build
        run: |
          VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
          if [ -z "$VSIX_FILE" ]; then
            echo "Error: No VSIX file found"
            exit 1
          fi
          echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
          echo "Found VSIX: $VSIX_FILE"

      - name: Publish to VS Code Marketplace (Pre-Release)
        working-directory: apps/extension/vsix-build
        run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
        env:
          VSCE_PAT: ${{ secrets.VSCE_PAT }}
          FORCE_COLOR: 1

      - name: Install Open VSX CLI
        run: npm install -g ovsx

      - name: Publish to Open VSX Registry (Pre-Release)
        working-directory: apps/extension/vsix-build
        run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
        env:
          OVSX_PAT: ${{ secrets.OVSX_PAT }}
          FORCE_COLOR: 1

      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: extension-pre-release-${{ github.ref_name }}
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 30

  notify-success:
    needs: publish-extension-rc
    if: success()
    runs-on: ubuntu-latest
    steps:
      - name: Success Notification
        run: |
          echo "🚀 Extension ${{ github.ref_name }} successfully published as pre-release!"
          echo "📦 Available on VS Code Marketplace (Pre-Release)"
          echo "🌍 Available on Open VSX Registry (Pre-Release)"
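The workflow fires on `extension-rc@*` tag pushes. Normally `pre-release.mjs` creates and pushes the tag, but a manual equivalent would be (the version is a placeholder):

```bash
git tag extension-rc@0.25.0-rc.0
git push origin extension-rc@0.25.0-rc.0
```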
.github/workflows/log-issue-events.yml (vendored, 176 lines)
@@ -1,176 +0,0 @@
name: Log GitHub Issue Events

on:
  issues:
    types: [opened, closed]

jobs:
  log-issue-created:
    if: github.event.action == 'opened'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue creation to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          AUTHOR="${{ github.event.issue.user.login }}"
          CREATED_AT="${{ github.event.issue.created_at }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg author "$AUTHOR" \
            --arg created_at "$CREATED_AT" \
            '{
              events: [{
                eventName: "github_issue_created",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  issue_author: $author,
                  created_at: $created_at
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi

  log-issue-closed:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue closure to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          CLOSED_BY="${{ github.event.issue.closed_by.login }}"
          CLOSED_AT="${{ github.event.issue.closed_at }}"
          STATE_REASON="${{ github.event.issue.state_reason }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Get additional issue data via GitHub API
          echo "Fetching additional issue data for #${ISSUE_NUMBER}"
          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")

          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')

          # Get reactions data
          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")

          REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')

          # Check if issue was closed automatically (by checking if closed_by is a bot)
          CLOSED_AUTOMATICALLY="false"
          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
            CLOSED_AUTOMATICALLY="true"
          fi

          # Check if closed as duplicate by state_reason
          CLOSED_AS_DUPLICATE="false"
          if [ "$STATE_REASON" = "duplicate" ]; then
            CLOSED_AS_DUPLICATE="true"
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg closed_by "$CLOSED_BY" \
            --arg closed_at "$CLOSED_AT" \
            --arg state_reason "$STATE_REASON" \
            --arg comments_count "$COMMENTS_COUNT" \
            --arg reactions_count "$REACTIONS_COUNT" \
            --arg closed_automatically "$CLOSED_AUTOMATICALLY" \
            --arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
            '{
              events: [{
                eventName: "github_issue_closed",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  closed_by: $closed_by,
                  closed_at: $closed_at,
                  state_reason: $state_reason,
                  comments_count: ($comments_count | tonumber),
                  reactions_count: ($reactions_count | tonumber),
                  closed_automatically: ($closed_automatically | test("true")),
                  closed_as_duplicate: ($closed_as_duplicate | test("true"))
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
            echo "Closed by: $CLOSED_BY"
            echo "Comments: $COMMENTS_COUNT"
            echo "Reactions: $REACTIONS_COUNT"
            echo "Closed automatically: $CLOSED_AUTOMATICALLY"
            echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
          else
            echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
.github/workflows/pre-release.yml (vendored, 4 changes)
@@ -68,10 +68,12 @@ jobs:
       - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
         uses: changesets/action@v1
         with:
-          publish: npx changeset publish
+          publish: node ./.github/scripts/pre-release.mjs
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
+          VSCE_PAT: ${{ secrets.VSCE_PAT }}
+          OVSX_PAT: ${{ secrets.OVSX_PAT }}

       - name: Commit & Push changes
         uses: actions-js/push@master
.github/workflows/weekly-metrics-discord.yml (vendored, 96 lines)
@@ -1,96 +0,0 @@
name: Weekly Metrics to Discord
# description: Sends weekly metrics summary to Discord channel

on:
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9 AM
  workflow_dispatch:

permissions:
  contents: read
  issues: write
  pull-requests: read

jobs:
  weekly-metrics:
    runs-on: ubuntu-latest
    env:
      DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
    steps:
      - name: Get dates for last week
        run: |
          # Last 7 days
          first_day=$(date -d "7 days ago" +%Y-%m-%d)
          last_day=$(date +%Y-%m-%d)

          echo "first_day=$first_day" >> $GITHUB_ENV
          echo "last_day=$last_day" >> $GITHUB_ENV
          echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV

      - name: Generate issue metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
          HIDE_TIME_TO_ANSWER: true
          HIDE_LABEL_METRICS: false

      - name: Generate PR metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_metrics.md

      - name: Parse metrics
        id: metrics
        run: |
          # Parse the metrics from the generated markdown files
          if [ -f "issue_metrics.md" ]; then
            # Extract key metrics using grep/awk
            AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
            AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
            NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
            NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
          fi

          if [ -f "pr_metrics.md" ]; then
            PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
            NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
            NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
          fi

          # Set outputs for Discord action
          echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
          echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
          echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
          echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
          echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
          echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
          echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT

      - name: Send to Discord
        uses: sarisia/actions-status-discord@v1
        if: env.DISCORD_WEBHOOK != ''
        with:
          webhook: ${{ env.DISCORD_WEBHOOK }}
          status: Success
          title: "📊 Weekly Metrics Report"
          description: |
            **${{ env.week_of }}**

            **🎯 Issues**
            • Created: ${{ steps.metrics.outputs.issues_created }}
            • Closed: ${{ steps.metrics.outputs.issues_closed }}

            **🔀 Pull Requests**
            • Created: ${{ steps.metrics.outputs.prs_created }}
            • Merged: ${{ steps.metrics.outputs.prs_merged }}

            **⏱️ Response Times**
            • First Response: ${{ steps.metrics.outputs.avg_first_response }}
            • Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
            • PR Merge Time: ${{ steps.metrics.outputs.pr_avg_merge_time }}
          color: 0x58AFFF
          username: Task Master Metrics Bot
          avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
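Besides the Monday cron, the report could be produced on demand via `workflow_dispatch`; a sketch:

```bash
# Trigger the weekly metrics report manually.
gh workflow run weekly-metrics-discord.yml
```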
@@ -1,8 +0,0 @@
Simple Todo App PRD

Create a basic todo list application with the following features:
1. Add new todos
2. Mark todos as complete
3. Delete todos

That's it. Keep it simple.
108
CHANGELOG.md
108
CHANGELOG.md
@@ -1,113 +1,5 @@
# task-master-ai

## 0.25.0

### Minor Changes

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.

  This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.

  ## CLI Usage Examples

  Move a single task from one tag to another:

  ```bash
  # Move task 5 from the backlog tag to the feature-1 tag
  task-master move --from=5 --from-tag=backlog --to-tag=feature-1

  # Move task with its dependencies
  task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

  # Move task without checking dependencies
  task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
  ```

  Move multiple tasks at once:

  ```bash
  # Move multiple tasks between tags
  task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
  ```

- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"

- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode (`--compact` / `-c` flag) to the `tm list` CLI command. It outputs tasks in a minimal, git-style one-line format, reducing verbose output from ~30+ lines of dashboards and tables to just one line per task and making it much easier to quickly scan available tasks (an illustrative sketch follows below).
  - Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
  - Color-coded status, priority, and dependencies
  - Smart title truncation and dependency abbreviation
  - Subtask support with indentation
  - Full backward compatibility with existing list options
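  A hypothetical sketch of the compact, git-style format described above (illustrative only; the exact columns and colors come from the implementation):

  ```bash
  # Hypothetical rendering of `tm list --compact` (not captured output):
  #   5  in-progress  Implement JWT authentication (high) → 2,3
  #   6  pending      Add password reset flow (medium) → 5
  tm list --compact
  ```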
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.

- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`

- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command

### Patch Changes

- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced

  The command was failing with a "projectRoot is not defined" error because the code referenced `projectRoot` directly instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
## 0.25.0-rc.0

### Minor Changes

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.

  This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.

  ## CLI Usage Examples

  Move a single task from one tag to another:

  ```bash
  # Move task 5 from the backlog tag to the feature-1 tag
  task-master move --from=5 --from-tag=backlog --to-tag=feature-1

  # Move task with its dependencies
  task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

  # Move task without checking dependencies
  task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
  ```

  Move multiple tasks at once:

  ```bash
  # Move multiple tasks between tags
  task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
  ```

- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"

- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode (`--compact` / `-c` flag) to the `tm list` CLI command. It outputs tasks in a minimal, git-style one-line format, reducing verbose output from ~30+ lines of dashboards and tables to just one line per task and making it much easier to quickly scan available tasks.
  - Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
  - Color-coded status, priority, and dependencies
  - Smart title truncation and dependency abbreviation
  - Subtask support with indentation
  - Full backward compatibility with existing list options

- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.

- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`

- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command

### Patch Changes

- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced

  The command was failing with a "projectRoot is not defined" error because the code referenced `projectRoot` directly instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.

## 0.24.0

### Minor Changes

@@ -56,7 +56,7 @@ The following documentation is also available in the `docs` directory:

#### Quick Install for Cursor 1.0+ (One-Click)

[](https://cursor.com/en/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
[](https://cursor.com/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)

> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
@@ -255,11 +255,6 @@ task-master show 1,3,5

# Research fresh information with project context
task-master research "What are the latest best practices for JWT authentication?"

# Move tasks between tags (cross-tag movement)
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies

# Generate task files
task-master generate

@@ -1,3 +0,0 @@
# docs

## 0.0.1
@@ -1,6 +1,6 @@
{
	"name": "docs",
	"version": "0.0.1",
	"version": "0.0.0",
	"private": true,
	"description": "Task Master documentation powered by Mintlify",
	"scripts": {
@@ -1,29 +1,5 @@
# Change Log

## 0.24.0

### Minor Changes

- [#1100](https://github.com/eyaltoledano/claude-task-master/pull/1100) [`30ca144`](https://github.com/eyaltoledano/claude-task-master/commit/30ca144231c36a6c63911f20adc225d38fb15a2f) Thanks [@vedovelli](https://github.com/vedovelli)! - Display current task ID on task details page

### Patch Changes

- Updated dependencies [[`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1), [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573), [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e), [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d), [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397), [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5)]:
  - task-master-ai@0.25.0

## 0.24.0-rc.0

### Minor Changes

- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"

- [#1100](https://github.com/eyaltoledano/claude-task-master/pull/1100) [`30ca144`](https://github.com/eyaltoledano/claude-task-master/commit/30ca144231c36a6c63911f20adc225d38fb15a2f) Thanks [@vedovelli](https://github.com/vedovelli)! - Display current task ID on task details page

### Patch Changes

- Updated dependencies [[`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1), [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573), [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e), [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d), [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397), [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5)]:
  - task-master-ai@0.25.0-rc.0

## 0.23.1

### Patch Changes

@@ -3,23 +3,15 @@
	"private": true,
	"displayName": "TaskMaster",
	"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
	"version": "0.24.0",
	"version": "0.23.1",
	"publisher": "Hamster",
	"icon": "assets/icon.png",
	"engines": {
		"vscode": "^1.93.0"
	},
	"categories": [
		"AI",
		"Visualization",
		"Education",
		"Other"
	],
	"categories": ["AI", "Visualization", "Education", "Other"],
	"main": "./dist/extension.js",
	"activationEvents": [
		"onStartupFinished",
		"workspaceContains:.taskmaster/**"
	],
	"activationEvents": ["onStartupFinished", "workspaceContains:.taskmaster/**"],
	"contributes": {
		"viewsContainers": {
			"activitybar": [
@@ -147,11 +139,7 @@
			},
			"taskmaster.ui.theme": {
				"type": "string",
				"enum": [
					"auto",
					"light",
					"dark"
				],
				"enum": ["auto", "light", "dark"],
				"default": "auto",
				"description": "UI theme preference"
			},
@@ -212,12 +200,7 @@
			},
			"taskmaster.debug.logLevel": {
				"type": "string",
				"enum": [
					"error",
					"warn",
					"info",
					"debug"
				],
				"enum": ["error", "warn", "info", "debug"],
				"default": "info",
				"description": "Logging level"
			},
@@ -256,7 +239,7 @@
		"check-types": "tsc --noEmit"
	},
	"dependencies": {
		"task-master-ai": "0.25.0"
		"task-master-ai": "0.24.0"
	},
	"devDependencies": {
		"@dnd-kit/core": "^6.3.1",

@@ -53,11 +53,6 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
		refreshComplexityAfterAI
	} = useTaskDetails({ taskId, sendMessage, tasks: allTasks });

	const displayId =
		isSubtask && parentTask
			? `${parentTask.id}.${currentTask?.id}`
			: currentTask?.id;

	const handleStatusChange = async (newStatus: TaskMasterTask['status']) => {
		if (!currentTask) return;

@@ -65,7 +60,10 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
		await sendMessage({
			type: 'updateTaskStatus',
			data: {
				taskId: displayId,
				taskId:
					isSubtask && parentTask
						? `${parentTask.id}.${currentTask.id}`
						: currentTask.id,
				newStatus: newStatus
			}
		});
@@ -137,7 +135,7 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
					<BreadcrumbSeparator />
					<BreadcrumbItem>
						<span className="text-vscode-foreground">
							#{displayId} {currentTask.title}
							{currentTask.title}
						</span>
					</BreadcrumbItem>
				</BreadcrumbList>
@@ -154,9 +152,9 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
				</button>
			</div>

			{/* Task ID and title */}
			{/* Task title */}
			<h1 className="text-2xl font-bold tracking-tight text-vscode-foreground">
				#{displayId} {currentTask.title}
				{currentTask.title}
			</h1>

			{/* Description */}

@@ -1,282 +0,0 @@
# Cross-Tag Task Movement

Task Master now supports moving tasks between different tag contexts, allowing you to organize your work across multiple project contexts, feature branches, or development phases.

## Overview

Cross-tag task movement enables you to:

- Move tasks between different tag contexts (e.g., from "backlog" to "in-progress")
- Handle cross-tag dependencies intelligently
- Maintain task relationships across different contexts
- Organize work across multiple project phases

## Basic Usage

### Within-Tag Moves

Move tasks within the same tag context:

```bash
# Move a single task
task-master move --from=5 --to=7

# Move a subtask
task-master move --from=5.2 --to=7.3

# Move multiple tasks
task-master move --from=5,6,7 --to=10,11,12
```

### Cross-Tag Moves

Move tasks between different tag contexts:

```bash
# Basic cross-tag move
task-master move --from=5 --from-tag=backlog --to-tag=in-progress

# Move multiple tasks
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done
```

## Dependency Resolution

When moving tasks between tags, you may encounter cross-tag dependencies. Task Master provides several options to handle these:

### Move with Dependencies

Move the main task along with all its dependent tasks:

```bash
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```

This ensures that all dependent tasks are moved together, maintaining the task relationships.

### Break Dependencies

Break cross-tag dependencies and move only the specified task:

```bash
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
```

This removes the dependency relationships and moves only the specified task.

### Force Move

Force the move even with dependency conflicts:

```bash
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --force
```

⚠️ **Warning**: This may break dependency relationships and should be used with caution.

## Error Handling

Task Master provides enhanced error messages with specific resolution suggestions:

### Cross-Tag Dependency Conflicts

When you encounter dependency conflicts, you'll see:

```text
❌ Cannot move tasks from "backlog" to "in-progress"

Cross-tag dependency conflicts detected:
• Task 5 depends on 2 (in backlog)
• Task 6 depends on 3 (in done)

Resolution options:
1. Move with dependencies: task-master move --from=5,6 --from-tag=backlog --to-tag=in-progress --with-dependencies
2. Break dependencies: task-master move --from=5,6 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
3. Validate and fix dependencies: task-master validate-dependencies && task-master fix-dependencies
4. Move dependencies first: task-master move --from=2,3 --from-tag=backlog --to-tag=in-progress
5. Force move (may break dependencies): task-master move --from=5,6 --from-tag=backlog --to-tag=in-progress --force
```

### Subtask Movement Restrictions

Subtasks cannot be moved directly between tags:

```text
❌ Cannot move subtask 5.2 directly between tags

Subtask movement restriction:
• Subtasks cannot be moved directly between tags
• They must be promoted to full tasks first

Resolution options:
1. Promote subtask to full task: task-master remove-subtask --id=5.2 --convert
2. Then move the promoted task: task-master move --from=5 --from-tag=backlog --to-tag=in-progress
3. Or move the parent task with all subtasks: task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```

### Invalid Tag Combinations

When source and target tags are the same:

```text
❌ Invalid tag combination

Error details:
• Source tag: "backlog"
• Target tag: "backlog"
• Reason: Source and target tags are identical

Resolution options:
1. Use different tags for cross-tag moves
2. Use within-tag move: task-master move --from=<id> --to=<id> --tag=backlog
3. Check available tags: task-master tags
```

## Best Practices

### 1. Check Dependencies First

Before moving tasks, validate your dependencies:

```bash
# Check for dependency issues
task-master validate-dependencies

# Fix common dependency problems
task-master fix-dependencies
```

### 2. Use Appropriate Flags

Pick the flag that matches your intent (a comparative sketch follows this list):

- **`--with-dependencies`**: When you want to maintain task relationships
- **`--ignore-dependencies`**: When you want to break cross-tag dependencies
- **`--force`**: Only when you understand the consequences
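A minimal comparative sketch of the three flags, assuming a hypothetical task 5 in `backlog` that depends on task 2 in the same tag:

```bash
# Keeps the relationship: tasks 5 and 2 both end up in "in-progress"
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies

# Severs the relationship: only task 5 moves; its link to task 2 is dropped
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies

# Moves despite conflicts and may leave a broken dependency behind
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --force
```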
### 3. Organize by Context

Use tags to organize work by:

- **Development phases**: `backlog`, `in-progress`, `review`, `done`
- **Feature branches**: `feature-auth`, `feature-dashboard`
- **Team members**: `alice-tasks`, `bob-tasks`
- **Project versions**: `v1.0`, `v2.0`

### 4. Handle Subtasks Properly

For subtasks, either:

1. Promote the subtask to a full task first
2. Move the parent task with all subtasks using `--with-dependencies`

## Advanced Usage

### Multiple Task Movement

Move multiple tasks at once:

```bash
# Move multiple tasks with dependencies
task-master move --from=5,6,7 --from-tag=backlog --to-tag=in-progress --with-dependencies

# Move multiple tasks, breaking dependencies
task-master move --from=5,6,7 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
```

### Tag Creation

Target tags are created automatically if they don't exist:

```bash
# This will create the "new-feature" tag if it doesn't exist
task-master move --from=5 --from-tag=backlog --to-tag=new-feature
```

### Current Tag Fallback

If `--from-tag` is not provided, the current tag is used:

```bash
# Uses current tag as source
task-master move --from=5 --to-tag=in-progress
```

## MCP Integration

The cross-tag move functionality is also available through MCP tools:

```javascript
// Move task with dependencies
await moveTask({
	from: "5",
	fromTag: "backlog",
	toTag: "in-progress",
	withDependencies: true
});

// Break dependencies
await moveTask({
	from: "5",
	fromTag: "backlog",
	toTag: "in-progress",
	ignoreDependencies: true
});
```

## Troubleshooting

### Common Issues

1. **"Source tag not found"**: Check available tags with `task-master tags`
2. **"Task not found"**: Verify task IDs with `task-master list`
3. **"Cross-tag dependency conflicts"**: Use dependency resolution flags
4. **"Cannot move subtask"**: Promote subtask first or move parent task

### Getting Help

```bash
# Show move command help
task-master move --help

# Check available tags
task-master tags

# Validate dependencies
task-master validate-dependencies

# Fix dependency issues
task-master fix-dependencies
```

## Examples

### Scenario 1: Moving from Backlog to In-Progress

```bash
# Check for dependencies first
task-master validate-dependencies

# Move with dependencies
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```

### Scenario 2: Breaking Dependencies

```bash
# Move task, breaking cross-tag dependencies
task-master move --from=5 --from-tag=backlog --to-tag=done --ignore-dependencies
```

### Scenario 3: Force Move

```bash
# Force move despite conflicts
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --force
```

### Scenario 4: Moving Subtasks

```bash
# Option 1: Promote subtask first
task-master remove-subtask --id=5.2 --convert
task-master move --from=5 --from-tag=backlog --to-tag=in-progress

# Option 2: Move parent with all subtasks
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```
@@ -1,4 +1,4 @@
# Available Models as of August 12, 2025
# Available Models as of August 8, 2025

## Main Models

@@ -68,9 +68,6 @@
| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
| openrouter | mistralai/devstral-small | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
| ollama | gpt-oss:latest | 0.607 | 0 | 0 |
| ollama | gpt-oss:20b | 0.607 | 0 | 0 |
| ollama | gpt-oss:120b | 0.624 | 0 | 0 |
| ollama | devstral:latest | — | 0 | 0 |
| ollama | qwen3:latest | — | 0 | 0 |
| ollama | qwen3:14b | — | 0 | 0 |
@@ -177,9 +174,6 @@
| openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
| ollama | gpt-oss:latest | 0.607 | 0 | 0 |
| ollama | gpt-oss:20b | 0.607 | 0 | 0 |
| ollama | gpt-oss:120b | 0.624 | 0 | 0 |
| ollama | devstral:latest | — | 0 | 0 |
| ollama | qwen3:latest | — | 0 | 0 |
| ollama | qwen3:14b | — | 0 | 0 |

@@ -1,203 +0,0 @@
/**
 * Direct function wrapper for cross-tag task moves
 */

import { moveTasksBetweenTags } from '../../../../scripts/modules/task-manager/move-task.js';
import { findTasksPath } from '../utils/path-utils.js';
import {
	enableSilentMode,
	disableSilentMode
} from '../../../../scripts/modules/utils.js';

/**
 * Move tasks between tags
 * @param {Object} args - Function arguments
 * @param {string} args.tasksJsonPath - Explicit path to the tasks.json file
 * @param {string} args.sourceIds - Comma-separated IDs of tasks to move
 * @param {string} args.sourceTag - Source tag name
 * @param {string} args.targetTag - Target tag name
 * @param {boolean} args.withDependencies - Move dependent tasks along with main task
 * @param {boolean} args.ignoreDependencies - Break cross-tag dependencies during move
 * @param {string} args.file - Alternative path to the tasks.json file
 * @param {string} args.projectRoot - Project root directory
 * @param {Object} log - Logger object
 * @param {Object} [context={}] - Execution context (may carry the MCP session)
 * @returns {Promise<{success: boolean, data?: Object, error?: Object}>}
 */
export async function moveTaskCrossTagDirect(args, log, context = {}) {
	const { session } = context;
	const { projectRoot } = args;

	log.info(`moveTaskCrossTagDirect called with args: ${JSON.stringify(args)}`);

	// Validate required parameters
	if (!args.sourceIds) {
		return {
			success: false,
			error: {
				message: 'Source IDs are required',
				code: 'MISSING_SOURCE_IDS'
			}
		};
	}

	if (!args.sourceTag) {
		return {
			success: false,
			error: {
				message: 'Source tag is required for cross-tag moves',
				code: 'MISSING_SOURCE_TAG'
			}
		};
	}

	if (!args.targetTag) {
		return {
			success: false,
			error: {
				message: 'Target tag is required for cross-tag moves',
				code: 'MISSING_TARGET_TAG'
			}
		};
	}

	// Validate that source and target tags are different
	if (args.sourceTag === args.targetTag) {
		return {
			success: false,
			error: {
				message: `Source and target tags are the same ("${args.sourceTag}")`,
				code: 'SAME_SOURCE_TARGET_TAG',
				suggestions: [
					'Use different tags for cross-tag moves',
					'Use within-tag move: task-master move --from=<id> --to=<id> --tag=<tag>',
					'Check available tags: task-master tags'
				]
			}
		};
	}

	try {
		// Find tasks.json path if not provided
		let tasksPath = args.tasksJsonPath || args.file;
		if (!tasksPath) {
			if (!args.projectRoot) {
				return {
					success: false,
					error: {
						message:
							'Project root is required if tasksJsonPath is not provided',
						code: 'MISSING_PROJECT_ROOT'
					}
				};
			}
			tasksPath = findTasksPath(args, log);
		}

		// Enable silent mode to prevent console output during MCP operation
		enableSilentMode();

		try {
			// Parse source IDs
			const sourceIds = args.sourceIds.split(',').map((id) => id.trim());

			// Prepare move options
			const moveOptions = {
				withDependencies: args.withDependencies || false,
				ignoreDependencies: args.ignoreDependencies || false
			};

			// Call the core moveTasksBetweenTags function
			const result = await moveTasksBetweenTags(
				tasksPath,
				sourceIds,
				args.sourceTag,
				args.targetTag,
				moveOptions,
				{ projectRoot }
			);

			return {
				success: true,
				data: {
					...result,
					message: `Successfully moved ${sourceIds.length} task(s) from "${args.sourceTag}" to "${args.targetTag}"`,
					moveOptions,
					sourceTag: args.sourceTag,
					targetTag: args.targetTag
				}
			};
		} finally {
			// Restore console output - always executed regardless of success or error
			disableSilentMode();
		}
	} catch (error) {
		log.error(`Failed to move tasks between tags: ${error.message}`);
		log.error(`Error code: ${error.code}, Error name: ${error.name}`);

		// Enhanced error handling with structured error objects
		let errorCode = 'MOVE_TASK_CROSS_TAG_ERROR';
		let suggestions = [];

		// Handle structured errors first
		if (error.code === 'CROSS_TAG_DEPENDENCY_CONFLICTS') {
			errorCode = 'CROSS_TAG_DEPENDENCY_CONFLICT';
			suggestions = [
				'Use --with-dependencies to move dependent tasks together',
				'Use --ignore-dependencies to break cross-tag dependencies',
				'Run task-master validate-dependencies to check for issues',
				'Move dependencies first, then move the main task'
			];
		} else if (error.code === 'CANNOT_MOVE_SUBTASK') {
			errorCode = 'SUBTASK_MOVE_RESTRICTION';
			suggestions = [
				'Promote subtask to full task first: task-master remove-subtask --id=<subtaskId> --convert',
				'Move the parent task with all subtasks using --with-dependencies'
			];
		} else if (
			error.code === 'TASK_NOT_FOUND' ||
			error.code === 'INVALID_SOURCE_TAG' ||
			error.code === 'INVALID_TARGET_TAG'
		) {
			errorCode = 'TAG_OR_TASK_NOT_FOUND';
			suggestions = [
				'Check available tags: task-master tags',
				'Verify task IDs exist: task-master list',
				'Check task details: task-master show <id>'
			];
		} else if (error.message.includes('cross-tag dependency conflicts')) {
			// Fallback for legacy error messages
			errorCode = 'CROSS_TAG_DEPENDENCY_CONFLICT';
			suggestions = [
				'Use --with-dependencies to move dependent tasks together',
				'Use --ignore-dependencies to break cross-tag dependencies',
				'Run task-master validate-dependencies to check for issues',
				'Move dependencies first, then move the main task'
			];
		} else if (error.message.includes('Cannot move subtask')) {
			// Fallback for legacy error messages
			errorCode = 'SUBTASK_MOVE_RESTRICTION';
			suggestions = [
				'Promote subtask to full task first: task-master remove-subtask --id=<subtaskId> --convert',
				'Move the parent task with all subtasks using --with-dependencies'
			];
		} else if (error.message.includes('not found')) {
			// Fallback for legacy error messages
			errorCode = 'TAG_OR_TASK_NOT_FOUND';
			suggestions = [
				'Check available tags: task-master tags',
				'Verify task IDs exist: task-master list',
				'Check task details: task-master show <id>'
			];
		}

		return {
			success: false,
			error: {
				message: error.message,
				code: errorCode,
				suggestions
			}
		};
	}
}
@@ -32,7 +32,7 @@ import { TASKMASTER_TASKS_FILE } from '../../../../src/constants/paths.js';
 * @returns {Promise<Object>} - Result object with success status and data/error information.
 */
export async function parsePRDDirect(args, log, context = {}) {
	const { session, reportProgress } = context;
	const { session } = context;
	// Extract projectRoot from args
	const {
		input: inputArg,
@@ -164,7 +164,6 @@ export async function parsePRDDirect(args, log, context = {}) {
			force,
			append,
			research,
			reportProgress,
			commandName: 'parse-prd',
			outputType: 'mcp'
		},

@@ -31,7 +31,6 @@ import { removeTaskDirect } from './direct-functions/remove-task.js';
import { initializeProjectDirect } from './direct-functions/initialize-project.js';
import { modelsDirect } from './direct-functions/models.js';
import { moveTaskDirect } from './direct-functions/move-task.js';
import { moveTaskCrossTagDirect } from './direct-functions/move-task-cross-tag.js';
import { researchDirect } from './direct-functions/research.js';
import { addTagDirect } from './direct-functions/add-tag.js';
import { deleteTagDirect } from './direct-functions/delete-tag.js';
@@ -73,7 +72,6 @@ export const directFunctions = new Map([
	['initializeProjectDirect', initializeProjectDirect],
	['modelsDirect', modelsDirect],
	['moveTaskDirect', moveTaskDirect],
	['moveTaskCrossTagDirect', moveTaskCrossTagDirect],
	['researchDirect', researchDirect],
	['addTagDirect', addTagDirect],
	['deleteTagDirect', deleteTagDirect],
@@ -113,7 +111,6 @@ export {
	initializeProjectDirect,
	modelsDirect,
	moveTaskDirect,
	moveTaskCrossTagDirect,
	researchDirect,
	addTagDirect,
	deleteTagDirect,

@@ -9,10 +9,7 @@ import {
	createErrorResponse,
	withNormalizedProjectRoot
} from './utils.js';
import {
	moveTaskDirect,
	moveTaskCrossTagDirect
} from '../core/task-master-core.js';
import { moveTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { resolveTag } from '../../../scripts/modules/utils.js';

@@ -32,9 +29,8 @@ export function registerMoveTaskTool(server) {
				),
			to: z
				.string()
				.optional()
				.describe(
					'ID of the destination (e.g., "7" or "7.3"). Required for within-tag moves. For cross-tag moves, if omitted, task will be moved to the target tag maintaining its ID'
					'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
				),
			file: z.string().optional().describe('Custom path to tasks.json file'),
			projectRoot: z
@@ -42,82 +38,17 @@ export function registerMoveTaskTool(server) {
				.describe(
					'Root directory of the project (typically derived from session)'
				),
			tag: z.string().optional().describe('Tag context to operate on'),
			fromTag: z.string().optional().describe('Source tag for cross-tag moves'),
			toTag: z.string().optional().describe('Target tag for cross-tag moves'),
			withDependencies: z
				.boolean()
				.optional()
				.describe('Move dependent tasks along with main task'),
			ignoreDependencies: z
				.boolean()
				.optional()
				.describe('Break cross-tag dependencies during move')
			tag: z.string().optional().describe('Tag context to operate on')
		}),
		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
			try {
				// Check if this is a cross-tag move
				const isCrossTagMove =
					args.fromTag && args.toTag && args.fromTag !== args.toTag;

				if (isCrossTagMove) {
					// Cross-tag move logic
					if (!args.from) {
						return createErrorResponse(
							'Source IDs are required for cross-tag moves',
							'MISSING_SOURCE_IDS'
						);
					}

					// Warn if 'to' parameter is provided for cross-tag moves
					if (args.to) {
						log.warn(
							'The "to" parameter is not used for cross-tag moves and will be ignored. Tasks retain their original IDs in the target tag.'
						);
					}

					// Find tasks.json path if not provided
					let tasksJsonPath = args.file;
					if (!tasksJsonPath) {
						tasksJsonPath = findTasksPath(args, log);
					}

					// Use cross-tag move function
					return handleApiResult(
						await moveTaskCrossTagDirect(
							{
								sourceIds: args.from,
								sourceTag: args.fromTag,
								targetTag: args.toTag,
								withDependencies: args.withDependencies || false,
								ignoreDependencies: args.ignoreDependencies || false,
								tasksJsonPath,
								projectRoot: args.projectRoot
							},
							log,
							{ session }
						),
						log,
						'Error moving tasks between tags',
						undefined,
						args.projectRoot
					);
				} else {
					// Within-tag move logic (existing functionality)
					if (!args.to) {
						return createErrorResponse(
							'Destination ID is required for within-tag moves',
							'MISSING_DESTINATION_ID'
						);
					}

					const resolvedTag = resolveTag({
						projectRoot: args.projectRoot,
						tag: args.tag
					});

					// Find tasks.json path if not provided
					let tasksJsonPath = args.file;

					if (!tasksJsonPath) {
						tasksJsonPath = findTasksPath(args, log);
					}
@@ -128,9 +59,15 @@ export function registerMoveTaskTool(server) {

				// Validate matching IDs count
				if (fromIds.length !== toIds.length) {
					return createErrorResponse(
						'The number of source and destination IDs must match',
						'MISMATCHED_ID_COUNT'
					);
				}

				// If moving multiple tasks
				if (fromIds.length > 1) {
					const results = [];
					const skipped = [];
					// Move tasks one by one, only generate files on the last move
					for (let i = 0; i < fromIds.length; i++) {
						const fromId = fromIds[i];
@@ -139,7 +76,6 @@ export function registerMoveTaskTool(server) {
						// Skip if source and destination are the same
						if (fromId === toId) {
							log.info(`Skipping ${fromId} -> ${toId} (same ID)`);
							skipped.push({ fromId, toId, reason: 'same ID' });
							continue;
						}

@@ -150,8 +86,7 @@ export function registerMoveTaskTool(server) {
								destinationId: toId,
								tasksJsonPath,
								projectRoot: args.projectRoot,
								tag: resolvedTag,
								generateFiles: shouldGenerateFiles
								tag: resolvedTag
							},
							log,
							{ session }
@@ -171,23 +106,7 @@ export function registerMoveTaskTool(server) {
							success: true,
							data: {
								moves: results,
								skipped: skipped.length > 0 ? skipped : undefined,
								message: `Successfully moved ${results.length} tasks${skipped.length > 0 ? `, skipped ${skipped.length}` : ''}`
							}
						},
						log,
						'Error moving multiple tasks',
						undefined,
						args.projectRoot
					);
				}
				return handleApiResult(
					{
						success: true,
						data: {
							moves: results,
							skippedMoves: skippedMoves,
							message: `Successfully moved ${results.length} tasks${skippedMoves.length > 0 ? `, skipped ${skippedMoves.length} moves` : ''}`
							message: `Successfully moved ${results.length} tasks`
						}
					},
					log,
@@ -204,8 +123,7 @@ export function registerMoveTaskTool(server) {
						destinationId: args.to,
						tasksJsonPath,
						projectRoot: args.projectRoot,
						tag: resolvedTag,
						generateFiles: true
						tag: resolvedTag
					},
					log,
					{ session }
@@ -216,7 +134,6 @@ export function registerMoveTaskTool(server) {
					args.projectRoot
				);
			}
		}
			} catch (error) {
				return createErrorResponse(
					`Failed to move task: ${error.message}`,

@@ -7,8 +7,7 @@ import { z } from 'zod';
import {
	handleApiResult,
	withNormalizedProjectRoot,
	createErrorResponse,
	checkProgressCapability
	createErrorResponse
} from './utils.js';
import { parsePRDDirect } from '../core/task-master-core.js';
import {
@@ -65,24 +64,19 @@ export function registerParsePRDTool(server) {
				.optional()
				.describe('Append generated tasks to existing file.')
		}),
		execute: withNormalizedProjectRoot(
			async (args, { log, session, reportProgress }) => {
		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
			try {
				const resolvedTag = resolveTag({
					projectRoot: args.projectRoot,
					tag: args.tag
				});
				const progressCapability = checkProgressCapability(
					reportProgress,
					log
				);
				const result = await parsePRDDirect(
					{
						...args,
						tag: resolvedTag
					},
					log,
					{ session, reportProgress: progressCapability }
					{ session }
				);
				return handleApiResult(
					result,
@@ -95,7 +89,6 @@ export function registerParsePRDTool(server) {
				log.error(`Error in parse_prd: ${error.message}`);
				return createErrorResponse(`Failed to parse PRD: ${error.message}`);
			}
		}
		)
	})
		});
}

@@ -778,77 +778,6 @@ function withNormalizedProjectRoot(executeFn) {
	};
}

/**
 * Checks progress reporting capability and returns the validated function or undefined.
 *
 * STANDARD PATTERN for AI-powered, long-running operations (parse-prd, expand-task, expand-all, analyze):
 *
 * This helper should be used as the first step in any MCP tool that performs long-running
 * AI operations. It validates the availability of progress reporting and provides consistent
 * logging about the capability status.
 *
 * Operations that should use this pattern:
 * - parse-prd: Parsing PRD documents with AI
 * - expand-task: Expanding tasks into subtasks
 * - expand-all: Expanding all tasks in batch
 * - analyze-complexity: Analyzing task complexity
 * - update-task: Updating tasks with AI assistance
 * - add-task: Creating new tasks with AI
 * - Any operation that makes AI service calls
 *
 * @example Basic usage in a tool's execute function:
 * ```javascript
 * import { checkProgressCapability } from './utils.js';
 *
 * async execute(args, context) {
 *   const { log, reportProgress, session } = context;
 *
 *   // Always validate progress capability first
 *   const progressCapability = checkProgressCapability(reportProgress, log);
 *
 *   // Pass to direct function - it handles undefined gracefully
 *   const result = await expandTask(taskId, numSubtasks, {
 *     session,
 *     reportProgress: progressCapability,
 *     mcpLog: log
 *   });
 * }
 * ```
 *
 * @example With progress reporting available:
 * ```javascript
 * // When reportProgress is available, users see real-time updates:
 * // "Starting PRD analysis (Input: 5432 tokens)..."
 * // "Task 1/10 - Implement user authentication"
 * // "Task 2/10 - Create database schema"
 * // "Task Generation Completed | Tokens: 5432/1234"
 * ```
 *
 * @example Without progress reporting (graceful degradation):
 * ```javascript
 * // When reportProgress is not available:
 * // - Operation runs normally without progress updates
 * // - Debug log: "reportProgress not available - operation will run without progress updates"
 * // - User gets final result after completion
 * ```
 *
 * @param {Function|undefined} reportProgress - The reportProgress function from MCP context.
 *   Expected signature: async (progress: {progress: number, total: number, message: string}) => void
 * @param {Object} log - Logger instance with debug, info, warn, error methods
 * @returns {Function|undefined} The validated reportProgress function or undefined if not available
 */
function checkProgressCapability(reportProgress, log) {
	// Validate that reportProgress is available for long-running operations
	if (typeof reportProgress !== 'function') {
		log.debug(
			'reportProgress not available - operation will run without progress updates'
		);
		return undefined;
	}

	return reportProgress;
}

// Ensure all functions are exported
export {
	getProjectRoot,
@@ -863,6 +792,5 @@ export {
	createLogWrapper,
	normalizeProjectRoot,
	getRawProjectRootFromSession,
	withNormalizedProjectRoot,
	checkProgressCapability
	withNormalizedProjectRoot
};

562 package-lock.json generated
@@ -27,14 +27,12 @@
        "@aws-sdk/credential-providers": "^3.817.0",
        "@inquirer/search": "^3.0.15",
        "@openrouter/ai-sdk-provider": "^0.4.5",
        "@streamparser/json": "^0.0.22",
        "ai": "^4.3.10",
        "ajv": "^8.17.1",
        "ajv-formats": "^3.0.1",
        "boxen": "^8.0.1",
        "chalk": "^5.4.1",
        "cli-highlight": "^2.1.11",
        "cli-progress": "^3.12.0",
        "cli-table3": "^0.6.5",
        "commander": "^11.1.0",
        "cors": "^2.8.5",
@@ -4155,9 +4153,9 @@
      }
    },
    "node_modules/@google/gemini-cli-core/node_modules/dotenv": {
      "version": "17.2.1",
      "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-17.2.1.tgz",
      "integrity": "sha512-kQhDYKZecqnM0fCnzI5eIv5L4cAe/iRI+HqMbO/hbRdTAeXDG+M9FjipUxNfbARuEg4iHIbhnhs78BCHNbSxEQ==",
      "version": "17.2.0",
      "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-17.2.0.tgz",
      "integrity": "sha512-Q4sgBT60gzd0BB0lSyYD3xM4YxrXA9y4uBDof1JNYGzOXrQdQ6yX+7XIAqoFOGQFOTK1D3Hts5OllpxMDZFONQ==",
      "license": "BSD-2-Clause",
      "optional": true,
      "engines": {
@@ -4231,9 +4229,9 @@
      }
    },
    "node_modules/@google/genai": {
      "version": "1.11.0",
      "resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.11.0.tgz",
      "integrity": "sha512-4XFAHCvU91ewdWOU3RUdSeXpDuZRJHNYLqT9LKw7WqPjRQcEJvVU+VOU49ocruaSp8VuLKMecl0iadlQK+Zgfw==",
      "version": "1.10.0",
      "resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.10.0.tgz",
      "integrity": "sha512-PR4tLuiIFMrpAiiCko2Z16ydikFsPF1c5TBfI64hlZcv3xBEApSCceLuDYu1pNMq2SkNh4r66J4AG+ZexBnMLw==",
      "license": "Apache-2.0",
      "optional": true,
      "dependencies": {
@@ -9624,12 +9622,6 @@
        "node": "^12.20 || >=14.13"
      }
    },
    "node_modules/@streamparser/json": {
      "version": "0.0.22",
      "resolved": "https://registry.npmjs.org/@streamparser/json/-/json-0.0.22.tgz",
      "integrity": "sha512-b6gTSBjJ8G8SuO3Gbbj+zXbVx8NSs1EbpbMKpzGLWMdkR+98McH9bEjSz3+0mPJf68c5nxa3CrJHp5EQNXM6zQ==",
      "license": "MIT"
    },
    "node_modules/@szmarczak/http-timer": {
      "version": "5.0.1",
      "resolved": "https://registry.npmjs.org/@szmarczak/http-timer/-/http-timer-5.0.1.tgz",
@@ -9688,6 +9680,23 @@
        "@tailwindcss/oxide-win32-x64-msvc": "4.1.11"
      }
    },
    "node_modules/@tailwindcss/oxide-android-arm64": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-android-arm64/-/oxide-android-arm64-4.1.11.tgz",
      "integrity": "sha512-3IfFuATVRUMZZprEIx9OGDjG3Ou3jG4xQzNTvjDoKmU9JdmoCohQJ83MYd0GPnQIu89YoJqvMM0G3uqLRFtetg==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "android"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-darwin-arm64": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-darwin-arm64/-/oxide-darwin-arm64-4.1.11.tgz",
@@ -9705,6 +9714,189 @@
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-darwin-x64": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-darwin-x64/-/oxide-darwin-x64-4.1.11.tgz",
      "integrity": "sha512-EgnK8kRchgmgzG6jE10UQNaH9Mwi2n+yw1jWmof9Vyg2lpKNX2ioe7CJdf9M5f8V9uaQxInenZkOxnTVL3fhAw==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "darwin"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-freebsd-x64": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-freebsd-x64/-/oxide-freebsd-x64-4.1.11.tgz",
      "integrity": "sha512-xdqKtbpHs7pQhIKmqVpxStnY1skuNh4CtbcyOHeX1YBE0hArj2romsFGb6yUmzkq/6M24nkxDqU8GYrKrz+UcA==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "freebsd"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-linux-arm-gnueabihf": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-arm-gnueabihf/-/oxide-linux-arm-gnueabihf-4.1.11.tgz",
      "integrity": "sha512-ryHQK2eyDYYMwB5wZL46uoxz2zzDZsFBwfjssgB7pzytAeCCa6glsiJGjhTEddq/4OsIjsLNMAiMlHNYnkEEeg==",
      "cpu": [
        "arm"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-linux-arm64-gnu": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-arm64-gnu/-/oxide-linux-arm64-gnu-4.1.11.tgz",
      "integrity": "sha512-mYwqheq4BXF83j/w75ewkPJmPZIqqP1nhoghS9D57CLjsh3Nfq0m4ftTotRYtGnZd3eCztgbSPJ9QhfC91gDZQ==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-linux-arm64-musl": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-arm64-musl/-/oxide-linux-arm64-musl-4.1.11.tgz",
      "integrity": "sha512-m/NVRFNGlEHJrNVk3O6I9ggVuNjXHIPoD6bqay/pubtYC9QIdAMpS+cswZQPBLvVvEF6GtSNONbDkZrjWZXYNQ==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-linux-x64-gnu": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-x64-gnu/-/oxide-linux-x64-gnu-4.1.11.tgz",
      "integrity": "sha512-YW6sblI7xukSD2TdbbaeQVDysIm/UPJtObHJHKxDEcW2exAtY47j52f8jZXkqE1krdnkhCMGqP3dbniu1Te2Fg==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-linux-x64-musl": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-x64-musl/-/oxide-linux-x64-musl-4.1.11.tgz",
      "integrity": "sha512-e3C/RRhGunWYNC3aSF7exsQkdXzQ/M+aYuZHKnw4U7KQwTJotnWsGOIVih0s2qQzmEzOFIJ3+xt7iq67K/p56Q==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "linux"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-wasm32-wasi": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-wasm32-wasi/-/oxide-wasm32-wasi-4.1.11.tgz",
      "integrity": "sha512-Xo1+/GU0JEN/C/dvcammKHzeM6NqKovG+6921MR6oadee5XPBaKOumrJCXvopJ/Qb5TH7LX/UAywbqrP4lax0g==",
      "bundleDependencies": [
        "@napi-rs/wasm-runtime",
        "@emnapi/core",
        "@emnapi/runtime",
        "@tybys/wasm-util",
        "@emnapi/wasi-threads",
        "tslib"
      ],
      "cpu": [
        "wasm32"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "dependencies": {
        "@emnapi/core": "^1.4.3",
        "@emnapi/runtime": "^1.4.3",
        "@emnapi/wasi-threads": "^1.0.2",
        "@napi-rs/wasm-runtime": "^0.2.11",
        "@tybys/wasm-util": "^0.9.0",
        "tslib": "^2.8.0"
      },
      "engines": {
        "node": ">=14.0.0"
      }
    },
    "node_modules/@tailwindcss/oxide-win32-arm64-msvc": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-arm64-msvc/-/oxide-win32-arm64-msvc-4.1.11.tgz",
      "integrity": "sha512-UgKYx5PwEKrac3GPNPf6HVMNhUIGuUh4wlDFR2jYYdkX6pL/rn73zTq/4pzUm8fOjAn5L8zDeHp9iXmUGOXZ+w==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "win32"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/oxide-win32-x64-msvc": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-x64-msvc/-/oxide-win32-x64-msvc-4.1.11.tgz",
      "integrity": "sha512-YfHoggn1j0LK7wR82TOucWc5LDCguHnoS879idHekmmiR7g9HUtMw9MI0NHatS28u/Xlkfi9w5RJWgz2Dl+5Qg==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "win32"
      ],
      "engines": {
        "node": ">= 10"
      }
    },
    "node_modules/@tailwindcss/postcss": {
      "version": "4.1.11",
      "resolved": "https://registry.npmjs.org/@tailwindcss/postcss/-/postcss-4.1.11.tgz",
@@ -10446,6 +10638,34 @@
        "@vscode/vsce-sign-win32-x64": "2.0.5"
      }
    },
    "node_modules/@vscode/vsce-sign-alpine-arm64": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-alpine-arm64/-/vsce-sign-alpine-arm64-2.0.5.tgz",
      "integrity": "sha512-XVmnF40APwRPXSLYA28Ye+qWxB25KhSVpF2eZVtVOs6g7fkpOxsVnpRU1Bz2xG4ySI79IRuapDJoAQFkoOgfdQ==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "SEE LICENSE IN LICENSE.txt",
      "optional": true,
      "os": [
        "alpine"
      ]
    },
    "node_modules/@vscode/vsce-sign-alpine-x64": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-alpine-x64/-/vsce-sign-alpine-x64-2.0.5.tgz",
      "integrity": "sha512-JuxY3xcquRsOezKq6PEHwCgd1rh1GnhyH6urVEWUzWn1c1PC4EOoyffMD+zLZtFuZF5qR1I0+cqDRNKyPvpK7Q==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "SEE LICENSE IN LICENSE.txt",
      "optional": true,
      "os": [
        "alpine"
      ]
    },
    "node_modules/@vscode/vsce-sign-darwin-arm64": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-darwin-arm64/-/vsce-sign-darwin-arm64-2.0.5.tgz",
@@ -10460,6 +10680,90 @@
        "darwin"
      ]
    },
    "node_modules/@vscode/vsce-sign-darwin-x64": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-darwin-x64/-/vsce-sign-darwin-x64-2.0.5.tgz",
      "integrity": "sha512-ma9JDC7FJ16SuPXlLKkvOD2qLsmW/cKfqK4zzM2iJE1PbckF3BlR08lYqHV89gmuoTpYB55+z8Y5Fz4wEJBVDA==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "SEE LICENSE IN LICENSE.txt",
      "optional": true,
      "os": [
        "darwin"
      ]
    },
    "node_modules/@vscode/vsce-sign-linux-arm": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-arm/-/vsce-sign-linux-arm-2.0.5.tgz",
      "integrity": "sha512-cdCwtLGmvC1QVrkIsyzv01+o9eR+wodMJUZ9Ak3owhcGxPRB53/WvrDHAFYA6i8Oy232nuen1YqWeEohqBuSzA==",
      "cpu": [
        "arm"
      ],
      "dev": true,
      "license": "SEE LICENSE IN LICENSE.txt",
      "optional": true,
      "os": [
        "linux"
      ]
    },
    "node_modules/@vscode/vsce-sign-linux-arm64": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-arm64/-/vsce-sign-linux-arm64-2.0.5.tgz",
      "integrity": "sha512-Hr1o0veBymg9SmkCqYnfaiUnes5YK6k/lKFA5MhNmiEN5fNqxyPUCdRZMFs3Ajtx2OFW4q3KuYVRwGA7jdLo7Q==",
      "cpu": [
        "arm64"
      ],
      "dev": true,
      "license": "SEE LICENSE IN LICENSE.txt",
      "optional": true,
      "os": [
        "linux"
      ]
    },
    "node_modules/@vscode/vsce-sign-linux-x64": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-x64/-/vsce-sign-linux-x64-2.0.5.tgz",
      "integrity": "sha512-XLT0gfGMcxk6CMRLDkgqEPTyG8Oa0OFe1tPv2RVbphSOjFWJwZgK3TYWx39i/7gqpDHlax0AP6cgMygNJrA6zg==",
      "cpu": [
        "x64"
      ],
      "dev": true,
      "license": "SEE LICENSE IN LICENSE.txt",
      "optional": true,
      "os": [
        "linux"
      ]
    },
    "node_modules/@vscode/vsce-sign-win32-arm64": {
      "version": "2.0.5",
      "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-win32-arm64/-/vsce-sign-win32-arm64-2.0.5.tgz",
|
||||
"integrity": "sha512-hco8eaoTcvtmuPhavyCZhrk5QIcLiyAUhEso87ApAWDllG7djIrWiOCtqn48k4pHz+L8oCQlE0nwNHfcYcxOPw==",
|
||||
"cpu": [
|
||||
"arm64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "SEE LICENSE IN LICENSE.txt",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"win32"
|
||||
]
|
||||
},
|
||||
"node_modules/@vscode/vsce-sign-win32-x64": {
|
||||
"version": "2.0.5",
|
||||
"resolved": "https://registry.npmjs.org/@vscode/vsce-sign-win32-x64/-/vsce-sign-win32-x64-2.0.5.tgz",
|
||||
"integrity": "sha512-1ixKFGM2FwM+6kQS2ojfY3aAelICxjiCzeg4nTHpkeU1Tfs4RC+lVLrgq5NwcBC7ZLr6UfY3Ct3D6suPeOf7BQ==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "SEE LICENSE IN LICENSE.txt",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"win32"
|
||||
]
|
||||
},
|
||||
"node_modules/@vscode/vsce/node_modules/ansi-styles": {
|
||||
"version": "3.2.1",
|
||||
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz",
|
||||
@@ -12579,47 +12883,6 @@
|
||||
"url": "https://github.com/chalk/chalk?sponsor=1"
|
||||
}
|
||||
},
|
||||
"node_modules/cli-progress": {
|
||||
"version": "3.12.0",
|
||||
"resolved": "https://registry.npmjs.org/cli-progress/-/cli-progress-3.12.0.tgz",
|
||||
"integrity": "sha512-tRkV3HJ1ASwm19THiiLIXLO7Im7wlTuKnvkYaTkyoAPefqjNg7W7DHKUlGRxy9vxDvbyCYQkQozvptuMkGCg8A==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"string-width": "^4.2.3"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=4"
|
||||
}
|
||||
},
|
||||
"node_modules/cli-progress/node_modules/emoji-regex": {
|
||||
"version": "8.0.0",
|
||||
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
|
||||
"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/cli-progress/node_modules/is-fullwidth-code-point": {
|
||||
"version": "3.0.0",
|
||||
"resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
|
||||
"integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/cli-progress/node_modules/string-width": {
|
||||
"version": "4.2.3",
|
||||
"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
|
||||
"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"emoji-regex": "^8.0.0",
|
||||
"is-fullwidth-code-point": "^3.0.0",
|
||||
"strip-ansi": "^6.0.1"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/cli-spinners": {
|
||||
"version": "2.9.2",
|
||||
"resolved": "https://registry.npmjs.org/cli-spinners/-/cli-spinners-2.9.2.tgz",
|
||||
@@ -19693,6 +19956,195 @@
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-darwin-x64": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-darwin-x64/-/lightningcss-darwin-x64-1.30.1.tgz",
|
||||
"integrity": "sha512-k1EvjakfumAQoTfcXUcHQZhSpLlkAuEkdMBsI/ivWw9hL+7FtilQc0Cy3hrx0AAQrVtQAbMI7YjCgYgvn37PzA==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"darwin"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-freebsd-x64": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-freebsd-x64/-/lightningcss-freebsd-x64-1.30.1.tgz",
|
||||
"integrity": "sha512-kmW6UGCGg2PcyUE59K5r0kWfKPAVy4SltVeut+umLCFoJ53RdCUWxcRDzO1eTaxf/7Q2H7LTquFHPL5R+Gjyig==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"freebsd"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-linux-arm-gnueabihf": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-linux-arm-gnueabihf/-/lightningcss-linux-arm-gnueabihf-1.30.1.tgz",
|
||||
"integrity": "sha512-MjxUShl1v8pit+6D/zSPq9S9dQ2NPFSQwGvxBCYaBYLPlCWuPh9/t1MRS8iUaR8i+a6w7aps+B4N0S1TYP/R+Q==",
|
||||
"cpu": [
|
||||
"arm"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-linux-arm64-gnu": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-linux-arm64-gnu/-/lightningcss-linux-arm64-gnu-1.30.1.tgz",
|
||||
"integrity": "sha512-gB72maP8rmrKsnKYy8XUuXi/4OctJiuQjcuqWNlJQ6jZiWqtPvqFziskH3hnajfvKB27ynbVCucKSm2rkQp4Bw==",
|
||||
"cpu": [
|
||||
"arm64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-linux-arm64-musl": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-linux-arm64-musl/-/lightningcss-linux-arm64-musl-1.30.1.tgz",
|
||||
"integrity": "sha512-jmUQVx4331m6LIX+0wUhBbmMX7TCfjF5FoOH6SD1CttzuYlGNVpA7QnrmLxrsub43ClTINfGSYyHe2HWeLl5CQ==",
|
||||
"cpu": [
|
||||
"arm64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-linux-x64-gnu": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-linux-x64-gnu/-/lightningcss-linux-x64-gnu-1.30.1.tgz",
|
||||
"integrity": "sha512-piWx3z4wN8J8z3+O5kO74+yr6ze/dKmPnI7vLqfSqI8bccaTGY5xiSGVIJBDd5K5BHlvVLpUB3S2YCfelyJ1bw==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-linux-x64-musl": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-linux-x64-musl/-/lightningcss-linux-x64-musl-1.30.1.tgz",
|
||||
"integrity": "sha512-rRomAK7eIkL+tHY0YPxbc5Dra2gXlI63HL+v1Pdi1a3sC+tJTcFrHX+E86sulgAXeI7rSzDYhPSeHHjqFhqfeQ==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"linux"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-win32-arm64-msvc": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-win32-arm64-msvc/-/lightningcss-win32-arm64-msvc-1.30.1.tgz",
|
||||
"integrity": "sha512-mSL4rqPi4iXq5YVqzSsJgMVFENoa4nGTT/GjO2c0Yl9OuQfPsIfncvLrEW6RbbB24WtZ3xP/2CCmI3tNkNV4oA==",
|
||||
"cpu": [
|
||||
"arm64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"win32"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lightningcss-win32-x64-msvc": {
|
||||
"version": "1.30.1",
|
||||
"resolved": "https://registry.npmjs.org/lightningcss-win32-x64-msvc/-/lightningcss-win32-x64-msvc-1.30.1.tgz",
|
||||
"integrity": "sha512-PVqXh48wh4T53F/1CCu8PIPCxLzWyCnn/9T5W1Jpmdy5h9Cwd+0YQS6/LwhHXSafuc61/xg9Lv5OrCby6a++jg==",
|
||||
"cpu": [
|
||||
"x64"
|
||||
],
|
||||
"dev": true,
|
||||
"license": "MPL-2.0",
|
||||
"optional": true,
|
||||
"os": [
|
||||
"win32"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/parcel"
|
||||
}
|
||||
},
|
||||
"node_modules/lilconfig": {
|
||||
"version": "2.1.0",
|
||||
"resolved": "https://registry.npmjs.org/lilconfig/-/lilconfig-2.1.0.tgz",
|
||||
|
||||
@@ -1,6 +1,6 @@
{
  "name": "task-master-ai",
  "version": "0.25.0",
  "version": "0.24.0",
  "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
  "main": "index.js",
  "type": "module",
@@ -9,10 +9,7 @@
    "task-master-mcp": "mcp-server/server.js",
    "task-master-ai": "mcp-server/server.js"
  },
  "workspaces": [
    "apps/*",
    "."
  ],
  "workspaces": ["apps/*", "."],
  "scripts": {
    "test": "node --experimental-vm-modules node_modules/.bin/jest",
    "test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
@@ -57,14 +54,12 @@
    "@aws-sdk/credential-providers": "^3.817.0",
    "@inquirer/search": "^3.0.15",
    "@openrouter/ai-sdk-provider": "^0.4.5",
    "@streamparser/json": "^0.0.22",
    "ai": "^4.3.10",
    "ajv": "^8.17.1",
    "ajv-formats": "^3.0.1",
    "boxen": "^8.0.1",
    "chalk": "^5.4.1",
    "cli-highlight": "^2.1.11",
    "cli-progress": "^3.12.0",
    "cli-table3": "^0.6.5",
    "commander": "^11.1.0",
    "cors": "^2.8.5",

@@ -91,74 +91,45 @@ function _getProvider(providerName) {

// Helper function to get cost for a specific model
function _getCostForModel(providerName, modelId) {
  const DEFAULT_COST = { inputCost: 0, outputCost: 0, currency: 'USD' };

  if (!MODEL_MAP || !MODEL_MAP[providerName]) {
    log(
      'warn',
      `Provider "${providerName}" not found in MODEL_MAP. Cannot determine cost for model ${modelId}.`
    );
    return DEFAULT_COST;
    return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
  }

  const modelData = MODEL_MAP[providerName].find((m) => m.id === modelId);

  if (!modelData?.cost_per_1m_tokens) {
  if (!modelData || !modelData.cost_per_1m_tokens) {
    log(
      'debug',
      `Cost data not found for model "${modelId}" under provider "${providerName}". Assuming zero cost.`
    );
    return DEFAULT_COST;
    return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
  }

  const costs = modelData.cost_per_1m_tokens;
  // Ensure currency is part of the returned object, defaulting if not present
  const currency = modelData.cost_per_1m_tokens.currency || 'USD';

  return {
    inputCost: costs.input || 0,
    outputCost: costs.output || 0,
    currency: costs.currency || 'USD'
    inputCost: modelData.cost_per_1m_tokens.input || 0,
    outputCost: modelData.cost_per_1m_tokens.output || 0,
    currency: currency
  };
}

/**
 * Calculate cost from token counts and cost per million
 * @param {number} inputTokens - Number of input tokens
 * @param {number} outputTokens - Number of output tokens
 * @param {number} inputCost - Cost per million input tokens
 * @param {number} outputCost - Cost per million output tokens
 * @returns {number} Total calculated cost
 */
function _calculateCost(inputTokens, outputTokens, inputCost, outputCost) {
  const calculatedCost =
    ((inputTokens || 0) / 1_000_000) * inputCost +
    ((outputTokens || 0) / 1_000_000) * outputCost;
  return parseFloat(calculatedCost.toFixed(6));
}

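// Illustrative sketch (not part of the diff above): how _calculateCost
// behaves, using invented token counts and per-million rates.
const exampleCost = _calculateCost(12_000, 3_500, 3, 15);
// (12000 / 1e6) * 3 + (3500 / 1e6) * 15 = 0.036 + 0.0525
console.log(exampleCost); // 0.0885
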
// Helper function to get tag information for responses
function _getTagInfo(projectRoot) {
  const DEFAULT_TAG_INFO = { currentTag: 'master', availableTags: ['master'] };

  try {
    if (!projectRoot) {
      return DEFAULT_TAG_INFO;
      return { currentTag: 'master', availableTags: ['master'] };
    }

    const currentTag = getCurrentTag(projectRoot) || 'master';
    const availableTags = _readAvailableTags(projectRoot);

    return { currentTag, availableTags };
  } catch (error) {
    if (getDebugFlag()) {
      log('debug', `Error getting tag information: ${error.message}`);
    }
    return DEFAULT_TAG_INFO;
  }
}

// Extract method for reading available tags
function _readAvailableTags(projectRoot) {
  const DEFAULT_TAGS = ['master'];
  const currentTag = getCurrentTag(projectRoot);

  // Read available tags from tasks.json
  let availableTags = ['master']; // Default fallback
  try {
    const path = require('path');
    const fs = require('fs');
@@ -169,37 +140,42 @@ function _readAvailableTags(projectRoot) {
      'tasks.json'
    );

    if (!fs.existsSync(tasksPath)) {
      return DEFAULT_TAGS;
    }

    if (fs.existsSync(tasksPath)) {
      const tasksData = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
    if (!tasksData || typeof tasksData !== 'object') {
      return DEFAULT_TAGS;
    }

      if (tasksData && typeof tasksData === 'object') {
        // Check if it's tagged format (has tag-like keys with tasks arrays)
    const potentialTags = Object.keys(tasksData).filter((key) =>
      _isValidTaggedTask(tasksData[key])
        const potentialTags = Object.keys(tasksData).filter(
          (key) =>
            tasksData[key] &&
            typeof tasksData[key] === 'object' &&
            Array.isArray(tasksData[key].tasks)
        );

    return potentialTags.length > 0 ? potentialTags : DEFAULT_TAGS;
        if (potentialTags.length > 0) {
          availableTags = potentialTags;
        }
      }
    }
  } catch (readError) {
    // Silently fall back to default if we can't read tasks file
    if (getDebugFlag()) {
      log(
        'debug',
        `Could not read tasks file for available tags: ${readError.message}`
      );
    }
    return DEFAULT_TAGS;
  }
}

// Helper to validate tagged task structure
function _isValidTaggedTask(taskData) {
  return (
    taskData && typeof taskData === 'object' && Array.isArray(taskData.tasks)
  );
  return {
    currentTag: currentTag || 'master',
    availableTags: availableTags
  };
} catch (error) {
  if (getDebugFlag()) {
    log('debug', `Error getting tag information: ${error.message}`);
  }
  return { currentTag: 'master', availableTags: ['master'] };
}
}

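// Illustrative sketch (not part of the diff above): the tagged tasks.json
// shape the code probes for. Tag names and tasks are invented.
const exampleTasksData = {
  master: { tasks: [{ id: 1, title: 'Set up project' }] },
  'feature-auth': { tasks: [{ id: 1, title: 'Add login form' }] }
};
const exampleTags = Object.keys(exampleTasksData).filter(
  (key) =>
    exampleTasksData[key] &&
    typeof exampleTasksData[key] === 'object' &&
    Array.isArray(exampleTasksData[key].tasks)
);
console.log(exampleTags); // ['master', 'feature-auth']
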
// --- Configuration for Retries ---
@@ -268,65 +244,6 @@ function _extractErrorMessage(error) {
  }
}

/**
 * Get role configuration (provider and model) based on role type
 * @param {string} role - The role ('main', 'research', 'fallback')
 * @param {string} projectRoot - Project root path
 * @returns {Object|null} Configuration object with provider and modelId
 */
function _getRoleConfiguration(role, projectRoot) {
  const roleConfigs = {
    main: {
      provider: getMainProvider(projectRoot),
      modelId: getMainModelId(projectRoot)
    },
    research: {
      provider: getResearchProvider(projectRoot),
      modelId: getResearchModelId(projectRoot)
    },
    fallback: {
      provider: getFallbackProvider(projectRoot),
      modelId: getFallbackModelId(projectRoot)
    }
  };

  return roleConfigs[role] || null;
}

/**
 * Get Vertex AI specific configuration
 * @param {string} projectRoot - Project root path
 * @param {Object} session - Session object
 * @returns {Object} Vertex AI configuration parameters
 */
function _getVertexConfiguration(projectRoot, session) {
  const projectId =
    getVertexProjectId(projectRoot) ||
    resolveEnvVariable('VERTEX_PROJECT_ID', session, projectRoot);

  const location =
    getVertexLocation(projectRoot) ||
    resolveEnvVariable('VERTEX_LOCATION', session, projectRoot) ||
    'us-central1';

  const credentialsPath = resolveEnvVariable(
    'GOOGLE_APPLICATION_CREDENTIALS',
    session,
    projectRoot
  );

  log(
    'debug',
    `Using Vertex AI configuration: Project ID=${projectId}, Location=${location}`
  );

  return {
    projectId,
    location,
    ...(credentialsPath && { credentials: { credentialsFromEnv: true } })
  };
}

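// Illustrative sketch (not part of the diff above): _getRoleConfiguration
// maps a role name to its configured provider/model pair; the path is
// invented.
const roleCfg = _getRoleConfiguration('research', '/path/to/project');
// roleCfg => { provider: ..., modelId: ... } from the project configuration
console.log(_getRoleConfiguration('unknown-role', '/path/to/project')); // null
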
/**
 * Internal helper to resolve the API key for a given provider.
 * @param {string} providerName - The name of the provider (lowercase).
@@ -507,13 +424,18 @@ async function _unifiedServiceRunner(serviceType, params) {
  let telemetryData = null;

  try {
    log('debug', `New AI service call with role: ${currentRole}`);
    log('info', `New AI service call with role: ${currentRole}`);

    const roleConfig = _getRoleConfiguration(
      currentRole,
      effectiveProjectRoot
    );
    if (!roleConfig) {
    if (currentRole === 'main') {
      providerName = getMainProvider(effectiveProjectRoot);
      modelId = getMainModelId(effectiveProjectRoot);
    } else if (currentRole === 'research') {
      providerName = getResearchProvider(effectiveProjectRoot);
      modelId = getResearchModelId(effectiveProjectRoot);
    } else if (currentRole === 'fallback') {
      providerName = getFallbackProvider(effectiveProjectRoot);
      modelId = getFallbackModelId(effectiveProjectRoot);
    } else {
      log(
        'error',
        `Unknown role encountered in _unifiedServiceRunner: ${currentRole}`
@@ -522,8 +444,6 @@ async function _unifiedServiceRunner(serviceType, params) {
        lastError || new Error(`Unknown AI role specified: ${currentRole}`);
      continue;
    }
    providerName = roleConfig.provider;
    modelId = roleConfig.modelId;

    if (!providerName || !modelId) {
      log(
@@ -597,9 +517,41 @@ async function _unifiedServiceRunner(serviceType, params) {

    // Handle Vertex AI specific configuration
    if (providerName?.toLowerCase() === 'vertex') {
      providerSpecificParams = _getVertexConfiguration(
        effectiveProjectRoot,
        session
      // Get Vertex project ID and location
      const projectId =
        getVertexProjectId(effectiveProjectRoot) ||
        resolveEnvVariable(
          'VERTEX_PROJECT_ID',
          session,
          effectiveProjectRoot
        );

      const location =
        getVertexLocation(effectiveProjectRoot) ||
        resolveEnvVariable(
          'VERTEX_LOCATION',
          session,
          effectiveProjectRoot
        ) ||
        'us-central1';

      // Get credentials path if available
      const credentialsPath = resolveEnvVariable(
        'GOOGLE_APPLICATION_CREDENTIALS',
        session,
        effectiveProjectRoot
      );

      // Add Vertex-specific parameters
      providerSpecificParams = {
        projectId,
        location,
        ...(credentialsPath && { credentials: { credentialsFromEnv: true } })
      };

      log(
        'debug',
        `Using Vertex AI configuration: Project ID=${projectId}, Location=${location}`
      );
    }

@@ -642,8 +594,7 @@ async function _unifiedServiceRunner(serviceType, params) {
      temperature: roleParams.temperature,
      messages,
      ...(baseURL && { baseURL }),
      ...((serviceType === 'generateObject' ||
        serviceType === 'streamObject') && { schema, objectName }),
      ...(serviceType === 'generateObject' && { schema, objectName }),
      ...providerSpecificParams,
      ...restApiParams
    };
@@ -684,10 +635,7 @@ async function _unifiedServiceRunner(serviceType, params) {
      finalMainResult = providerResponse.text;
    } else if (serviceType === 'generateObject') {
      finalMainResult = providerResponse.object;
    } else if (
      serviceType === 'streamText' ||
      serviceType === 'streamObject'
    ) {
    } else if (serviceType === 'streamText') {
      finalMainResult = providerResponse;
    } else {
      log(
@@ -703,9 +651,7 @@ async function _unifiedServiceRunner(serviceType, params) {
    return {
      mainResult: finalMainResult,
      telemetryData: telemetryData,
      tagInfo: tagInfo,
      providerName: providerName,
      modelId: modelId
      tagInfo: tagInfo
    };
  } catch (error) {
    const cleanMessage = _extractErrorMessage(error);
@@ -786,31 +732,6 @@ async function streamTextService(params) {
  return _unifiedServiceRunner('streamText', combinedParams);
}

/**
 * Unified service function for streaming structured objects.
 * Uses Vercel AI SDK's streamObject for proper JSON streaming.
 *
 * @param {object} params - Parameters for the service call.
 * @param {string} params.role - The initial client role ('main', 'research', 'fallback').
 * @param {object} [params.session=null] - Optional MCP session object.
 * @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
 * @param {import('zod').ZodSchema} params.schema - The Zod schema for the expected object.
 * @param {string} params.prompt - The prompt for the AI.
 * @param {string} [params.systemPrompt] - Optional system prompt.
 * @param {string} params.commandName - Name of the command invoking the service.
 * @param {string} [params.outputType='cli'] - 'cli' or 'mcp'.
 * @returns {Promise<object>} Result object containing the stream and usage data.
 */
async function streamObjectService(params) {
  const defaults = { outputType: 'cli' };
  const combinedParams = { ...defaults, ...params };
  // Stream object requires a schema
  if (!combinedParams.schema) {
    throw new Error('streamObjectService requires a schema parameter');
  }
  return _unifiedServiceRunner('streamObject', combinedParams);
}

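// Illustrative sketch (not part of the diff above): a call shape for
// streamObjectService based only on its JSDoc; the schema, prompt, and path
// are invented.
import { z } from 'zod';

const { mainResult, telemetryData } = await streamObjectService({
  role: 'main',
  projectRoot: '/path/to/project',
  schema: z.object({ subtasks: z.array(z.string()) }),
  prompt: 'Break this task into subtasks',
  commandName: 'expand-task',
  outputType: 'cli'
});
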
/**
 * Unified service function for generating structured objects.
 * Handles client retrieval, retries, and fallback sequence.
@@ -871,12 +792,9 @@ async function logAiUsage({
    modelId
  );

  const totalCost = _calculateCost(
    inputTokens,
    outputTokens,
    inputCost,
    outputCost
  );
  const totalCost =
    ((inputTokens || 0) / 1_000_000) * inputCost +
    ((outputTokens || 0) / 1_000_000) * outputCost;

  const telemetryData = {
    timestamp,
@@ -887,7 +805,7 @@ async function logAiUsage({
    inputTokens: inputTokens || 0,
    outputTokens: outputTokens || 0,
    totalTokens,
    totalCost,
    totalCost: parseFloat(totalCost.toFixed(6)),
    currency // Add currency to the telemetry data
  };

@@ -910,7 +828,6 @@ async function logAiUsage({
export {
  generateTextService,
  streamTextService,
  streamObjectService,
  generateObjectService,
  logAiUsage
};

@@ -48,12 +48,6 @@ import {
  validateStrength
} from './task-manager.js';

import {
  moveTasksBetweenTags,
  MoveTaskError,
  MOVE_ERROR_CODES
} from './task-manager/move-task.js';

import {
  createTag,
  deleteTag,
@@ -67,9 +61,7 @@ import {
  addDependency,
  removeDependency,
  validateDependenciesCommand,
  fixDependenciesCommand,
  DependencyError,
  DEPENDENCY_ERROR_CODES
  fixDependenciesCommand
} from './dependency-manager.js';

import {
@@ -111,11 +103,7 @@ import {
  displayAiUsageSummary,
  displayMultipleTasksSummary,
  displayTaggedTasksFYI,
  displayCurrentTagIndicator,
  displayCrossTagDependencyError,
  displaySubtaskMoveError,
  displayInvalidTagCombinationError,
  displayDependencyValidationHints
  displayCurrentTagIndicator
} from './ui.js';
import {
  confirmProfilesRemove,
@@ -912,6 +900,8 @@ function registerCommands(programInstance) {
        return true;
      }

      let spinner;

      try {
        if (!(await confirmOverwriteIfNeeded())) return;

@@ -928,6 +918,7 @@ function registerCommands(programInstance) {
          );
        }

        spinner = ora('Parsing PRD and generating tasks...\n').start();
        // Handle case where getTasksPath() returns null
        const outputPath =
          taskMaster.getTasksPath() ||
@@ -939,8 +930,13 @@ function registerCommands(programInstance) {
          projectRoot: taskMaster.getProjectRoot(),
          tag: tag
        });
        spinner.succeed('Tasks generated successfully!');
      } catch (error) {
        if (spinner) {
          spinner.fail(`Error parsing PRD: ${error.message}`);
        } else {
          console.error(chalk.red(`Error parsing PRD: ${error.message}`));
        }
        process.exit(1);
      }
    });
@@ -1757,7 +1753,6 @@ function registerCommands(programInstance) {
    )
    .option('-s, --status <status>', 'Filter by status')
    .option('--with-subtasks', 'Show subtasks for each task')
    .option('-c, --compact', 'Display tasks in compact one-line format')
    .option('--tag <tag>', 'Specify tag context for task operations')
    .action(async (options) => {
      // Initialize TaskMaster
@@ -1775,12 +1770,10 @@ function registerCommands(programInstance) {

      const statusFilter = options.status;
      const withSubtasks = options.withSubtasks || false;
      const compact = options.compact || false;
      const tag = taskMaster.getCurrentTag();
      // Show current tag context
      displayCurrentTagIndicator(tag);

      if (!compact) {
        console.log(
          chalk.blue(`Listing tasks from: ${taskMaster.getTasksPath()}`)
        );
@@ -1790,14 +1783,13 @@ function registerCommands(programInstance) {
        if (withSubtasks) {
          console.log(chalk.blue('Including subtasks in listing'));
        }
      }

      await listTasks(
        taskMaster.getTasksPath(),
        statusFilter,
        taskMaster.getComplexityReportPath(),
        withSubtasks,
        compact ? 'compact' : 'text',
        'text',
        { projectRoot: taskMaster.getProjectRoot(), tag }
      );
    });
@@ -4042,9 +4034,7 @@ Examples:
  // move-task command
  programInstance
    .command('move')
    .description(
      'Move tasks between tags or reorder within tags. Supports cross-tag moves with dependency resolution options.'
    )
    .description('Move a task or subtask to a new position')
    .option(
      '-f, --file <file>',
      'Path to the tasks file',
@@ -4059,168 +4049,20 @@ Examples:
      'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
    )
    .option('--tag <tag>', 'Specify tag context for task operations')
    .option('--from-tag <tag>', 'Source tag for cross-tag moves')
    .option('--to-tag <tag>', 'Target tag for cross-tag moves')
    .option('--with-dependencies', 'Move dependent tasks along with main task')
    .option('--ignore-dependencies', 'Break cross-tag dependencies during move')
    .action(async (options) => {
      // Helper function to show move command help - defined in scope for proper encapsulation
      function showMoveHelp() {
        console.log(
          chalk.white.bold('Move Command Help') +
            '\n\n' +
            chalk.cyan('Move tasks between tags or reorder within tags.') +
            '\n\n' +
            chalk.yellow.bold('Within-Tag Moves:') +
            '\n' +
            chalk.white(' task-master move --from=5 --to=7') +
            '\n' +
            chalk.white(' task-master move --from=5.2 --to=7.3') +
            '\n' +
            chalk.white(' task-master move --from=5,6,7 --to=10,11,12') +
            '\n\n' +
            chalk.yellow.bold('Cross-Tag Moves:') +
            '\n' +
            chalk.white(
              ' task-master move --from=5 --from-tag=backlog --to-tag=in-progress'
            ) +
            '\n' +
            chalk.white(
              ' task-master move --from=5,6 --from-tag=backlog --to-tag=done'
            ) +
            '\n\n' +
            chalk.yellow.bold('Dependency Resolution:') +
            '\n' +
            chalk.white(' # Move with dependencies') +
            '\n' +
            chalk.white(
              ' task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies'
            ) +
            '\n\n' +
            chalk.white(' # Break dependencies') +
            '\n' +
            chalk.white(
              ' task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies'
            ) +
            '\n\n' +
            chalk.white(' # Force move (may break dependencies)') +
            '\n' +
            chalk.white(
              ' task-master move --from=5 --from-tag=backlog --to-tag=in-progress --force'
            ) +
            '\n\n' +
            chalk.yellow.bold('Best Practices:') +
            '\n' +
            chalk.white(
              ' • Use --with-dependencies to move dependent tasks together'
            ) +
            '\n' +
            chalk.white(
              ' • Use --ignore-dependencies to break cross-tag dependencies'
            ) +
            '\n' +
            chalk.white(
              ' • Use --force only when you understand the consequences'
            ) +
            '\n' +
            chalk.white(
              ' • Check dependencies first: task-master validate-dependencies'
            ) +
            '\n' +
            chalk.white(
              ' • Fix dependency issues: task-master fix-dependencies'
            ) +
            '\n\n' +
            chalk.yellow.bold('Error Resolution:') +
            '\n' +
            chalk.white(
              ' • Cross-tag dependency conflicts: Use --with-dependencies or --ignore-dependencies'
            ) +
            '\n' +
            chalk.white(
              ' • Subtask movement: Promote subtask first with remove-subtask --convert'
            ) +
            '\n' +
            chalk.white(
              ' • Invalid tags: Check available tags with task-master tags'
            ) +
            '\n\n' +
            chalk.gray('For more help, run: task-master move --help')
        );
      }
      // Initialize TaskMaster
      const taskMaster = initTaskMaster({
        tasksPath: options.file || true,
        tag: options.tag
      });

      // Helper function to handle cross-tag move logic
      async function handleCrossTagMove(moveContext, options) {
        const { sourceId, sourceTag, toTag, taskMaster } = moveContext;

        if (!sourceId) {
          console.error(
            chalk.red('Error: --from parameter is required for cross-tag moves')
          );
          showMoveHelp();
          process.exit(1);
        }

        const sourceIds = sourceId.split(',').map((id) => id.trim());
        const moveOptions = {
          withDependencies: options.withDependencies || false,
          ignoreDependencies: options.ignoreDependencies || false
        };

        console.log(
          chalk.blue(
            `Moving tasks ${sourceIds.join(', ')} from "${sourceTag}" to "${toTag}"...`
          )
        );

        const result = await moveTasksBetweenTags(
          taskMaster.getTasksPath(),
          sourceIds,
          sourceTag,
          toTag,
          moveOptions,
          { projectRoot: taskMaster.getProjectRoot() }
        );

        console.log(chalk.green(`✓ ${result.message}`));

        // Check if source tag still contains tasks before regenerating files
        const tasksData = readJSON(
          taskMaster.getTasksPath(),
          taskMaster.getProjectRoot(),
          sourceTag
        );
        const sourceTagHasTasks =
          tasksData &&
          Array.isArray(tasksData.tasks) &&
          tasksData.tasks.length > 0;

        // Generate task files for the affected tags
        await generateTaskFiles(
          taskMaster.getTasksPath(),
          path.dirname(taskMaster.getTasksPath()),
          { tag: toTag, projectRoot: taskMaster.getProjectRoot() }
        );

        // Only regenerate source tag files if it still contains tasks
        if (sourceTagHasTasks) {
          await generateTaskFiles(
            taskMaster.getTasksPath(),
            path.dirname(taskMaster.getTasksPath()),
            { tag: sourceTag, projectRoot: taskMaster.getProjectRoot() }
          );
        }
      }

      // Helper function to handle within-tag move logic
      async function handleWithinTagMove(moveContext) {
        const { sourceId, destinationId, tag, taskMaster } = moveContext;
      const sourceId = options.from;
      const destinationId = options.to;
      const tag = taskMaster.getCurrentTag();

        if (!sourceId || !destinationId) {
          console.error(
            chalk.red(
              'Error: Both --from and --to parameters are required for within-tag moves'
            )
            chalk.red('Error: Both --from and --to parameters are required')
          );
          console.log(
            chalk.yellow(
@@ -4255,6 +4097,7 @@ Examples:
          )
        );

        try {
          // Read tasks data once to validate destination IDs
          const tasksData = readJSON(
            taskMaster.getTasksPath(),
@@ -4263,17 +4106,11 @@ Examples:
          );
          if (!tasksData || !tasksData.tasks) {
            console.error(
              chalk.red(
                `Error: Invalid or missing tasks file at ${taskMaster.getTasksPath()}`
              )
              chalk.red(`Error: Invalid or missing tasks file at ${tasksPath}`)
            );
            process.exit(1);
          }

          // Collect errors during move attempts
          const moveErrors = [];
          const successfulMoves = [];

          // Move tasks one by one
          for (let i = 0; i < sourceIds.length; i++) {
            const fromId = sourceIds[i];
@@ -4303,52 +4140,16 @@ Examples:
                `✓ Successfully moved task/subtask ${fromId} to ${toId}`
              )
            );
            successfulMoves.push({ fromId, toId });
          } catch (error) {
            const errorInfo = {
              fromId,
              toId,
              error: error.message
            };
            moveErrors.push(errorInfo);
            console.error(
              chalk.red(`Error moving ${fromId} to ${toId}: ${error.message}`)
            );
            // Continue with the next task rather than exiting
          }
        }

        // Display summary after all moves are attempted
        if (moveErrors.length > 0) {
          console.log(chalk.yellow('\n--- Move Operation Summary ---'));
          console.log(
            chalk.green(
              `✓ Successfully moved: ${successfulMoves.length} tasks`
            )
          );
          console.log(
            chalk.red(`✗ Failed to move: ${moveErrors.length} tasks`)
          );

          if (successfulMoves.length > 0) {
            console.log(chalk.cyan('\nSuccessful moves:'));
            successfulMoves.forEach(({ fromId, toId }) => {
              console.log(chalk.cyan(` ${fromId} → ${toId}`));
            });
          }

          console.log(chalk.red('\nFailed moves:'));
          moveErrors.forEach(({ fromId, toId, error }) => {
            console.log(chalk.red(` ${fromId} → ${toId}: ${error}`));
          });

          console.log(
            chalk.yellow(
              '\nNote: Some tasks were moved successfully. Check the errors above for failed moves.'
            )
          );
        } else {
          console.log(chalk.green('\n✓ All tasks moved successfully!'));
        } catch (error) {
          console.error(chalk.red(`Error: ${error.message}`));
          process.exit(1);
        }
      } else {
        // Moving a single task (existing logic)
@@ -4356,6 +4157,7 @@ Examples:
          chalk.blue(`Moving task/subtask ${sourceId} to ${destinationId}...`)
        );

        try {
          const result = await moveTask(
            taskMaster.getTasksPath(),
            sourceId,
@@ -4368,89 +4170,10 @@ Examples:
              `✓ Successfully moved task/subtask ${sourceId} to ${destinationId}`
            )
          );
        }
      }

      // Helper function to handle move errors
      function handleMoveError(error, moveContext) {
        } catch (error) {
          console.error(chalk.red(`Error: ${error.message}`));

        // Enhanced error handling with structured error objects
        if (error.code === 'CROSS_TAG_DEPENDENCY_CONFLICTS') {
          // Use structured error data
          const conflicts = error.data.conflicts || [];
          const taskIds = error.data.taskIds || [];
          displayCrossTagDependencyError(
            conflicts,
            moveContext.sourceTag,
            moveContext.toTag,
            taskIds.join(', ')
          );
        } else if (error.code === 'CANNOT_MOVE_SUBTASK') {
          // Use structured error data
          const taskId =
            error.data.taskId || moveContext.sourceId?.split(',')[0];
          displaySubtaskMoveError(
            taskId,
            moveContext.sourceTag,
            moveContext.toTag
          );
        } else if (
          error.code === 'SOURCE_TARGET_TAGS_SAME' ||
          error.code === 'SAME_SOURCE_TARGET_TAG'
        ) {
          displayInvalidTagCombinationError(
            moveContext.sourceTag,
            moveContext.toTag,
            'Source and target tags are identical'
          );
        } else {
          // General error - show dependency validation hints
          displayDependencyValidationHints('after-error');
        }

        process.exit(1);
      }

      // Initialize TaskMaster
      const taskMaster = initTaskMaster({
        tasksPath: options.file || true,
        tag: options.tag
      });

      const sourceId = options.from;
      const destinationId = options.to;
      const fromTag = options.fromTag;
      const toTag = options.toTag;

      const tag = taskMaster.getCurrentTag();

      // Get the source tag - fallback to current tag if not provided
      const sourceTag = fromTag || taskMaster.getCurrentTag();

      // Check if this is a cross-tag move (different tags)
      const isCrossTagMove = sourceTag && toTag && sourceTag !== toTag;

      // Initialize move context with all relevant data
      const moveContext = {
        sourceId,
        destinationId,
        sourceTag,
        toTag,
        tag,
        taskMaster
      };

      try {
        if (isCrossTagMove) {
          // Cross-tag move logic
          await handleCrossTagMove(moveContext, options);
        } else {
          // Within-tag move logic
          await handleWithinTagMove(moveContext);
        }
      } catch (error) {
        handleMoveError(error, moveContext);
      }
    });
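// Illustrative sketch (not part of the diff above): the programmatic shape of
// a cross-tag move, mirroring handleCrossTagMove; the path and tag names are
// invented.
await moveTasksBetweenTags(
  '/path/to/tasks.json',
  ['5', '6'],
  'backlog',
  'in-progress',
  { withDependencies: true, ignoreDependencies: false },
  { projectRoot: '/path/to/project' }
);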

@@ -4871,7 +4594,7 @@ Examples:
  const gitUtils = await import('./utils/git-utils.js');

  // Check if we're in a git repository
  if (!(await gitUtils.isGitRepository(context.projectRoot))) {
  if (!(await gitUtils.isGitRepository(projectRoot))) {
    console.error(
      chalk.red(
        'Error: Not in a git repository. Cannot use --from-branch option.'
@@ -4881,9 +4604,7 @@ Examples:
  }

  // Get current git branch
  const currentBranch = await gitUtils.getCurrentBranch(
    context.projectRoot
  );
  const currentBranch = await gitUtils.getCurrentBranch(projectRoot);
  if (!currentBranch) {
    console.error(
      chalk.red('Error: Could not determine current git branch.')

@@ -14,35 +14,12 @@ import {
  taskExists,
  formatTaskId,
  findCycles,
  traverseDependencies,
  isSilentMode
} from './utils.js';

import { displayBanner } from './ui.js';

import generateTaskFiles from './task-manager/generate-task-files.js';

/**
 * Structured error class for dependency operations
 */
class DependencyError extends Error {
  constructor(code, message, data = {}) {
    super(message);
    this.name = 'DependencyError';
    this.code = code;
    this.data = data;
  }
}

/**
 * Error codes for dependency operations
 */
const DEPENDENCY_ERROR_CODES = {
  CANNOT_MOVE_SUBTASK: 'CANNOT_MOVE_SUBTASK',
  INVALID_TASK_ID: 'INVALID_TASK_ID',
  INVALID_SOURCE_TAG: 'INVALID_SOURCE_TAG',
  INVALID_TARGET_TAG: 'INVALID_TARGET_TAG'
};
import { generateTaskFiles } from './task-manager.js';

/**
 * Add a dependency to a task
@@ -1258,580 +1235,6 @@ function validateAndFixDependencies(
  return changesDetected;
}

/**
 * Recursively find all dependencies for a set of tasks with depth limiting
 *
 * @note This function depends on the traverseDependencies utility from utils.js
 * for the actual dependency traversal logic.
 *
 * @param {Array} sourceTasks - Array of source tasks to find dependencies for
 * @param {Array} allTasks - Array of all available tasks
 * @param {Object} options - Options object
 * @param {number} options.maxDepth - Maximum recursion depth (default: 50)
 * @param {boolean} options.includeSelf - Whether to include self-references (default: false)
 * @returns {Array} Array of all dependency task IDs
 */
function findAllDependenciesRecursively(sourceTasks, allTasks, options = {}) {
  if (!Array.isArray(sourceTasks)) {
    throw new Error('Source tasks parameter must be an array');
  }
  if (!Array.isArray(allTasks)) {
    throw new Error('All tasks parameter must be an array');
  }
  return traverseDependencies(sourceTasks, allTasks, {
    ...options,
    direction: 'forward',
    logger: { warn: log.warn || console.warn }
  });
}

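// Illustrative sketch (not part of the diff above): with an invented chain
// where 3 depends on 2 and 2 depends on 1, a forward traversal from task 3
// should surface both upstream IDs (assuming traverseDependencies returns
// plain task IDs).
const exampleAllTasks = [
  { id: 1, dependencies: [] },
  { id: 2, dependencies: [1] },
  { id: 3, dependencies: [2] }
];
const exampleDeps = findAllDependenciesRecursively(
  exampleAllTasks.filter((t) => t.id === 3),
  exampleAllTasks
);
console.log(exampleDeps); // expected: [2, 1] (order may vary)
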
/**
 * Find dependency task by ID, handling various ID formats
 * @param {string|number} depId - Dependency ID to find
 * @param {string} taskId - ID of the task that has this dependency
 * @param {Array} allTasks - Array of all tasks to search
 * @returns {Object|null} Found dependency task or null
 */
/**
 * Find a subtask within a parent task's subtasks array
 * @param {string} parentId - The parent task ID
 * @param {string|number} subtaskId - The subtask ID to find
 * @param {Array} allTasks - Array of all tasks to search in
 * @param {boolean} useStringComparison - Whether to use string comparison for subtaskId
 * @returns {Object|null} The found subtask with full ID or null if not found
 */
function findSubtaskInParent(
  parentId,
  subtaskId,
  allTasks,
  useStringComparison = false
) {
  // Convert parentId to numeric for proper comparison with top-level task IDs
  const numericParentId = parseInt(parentId, 10);
  const parentTask = allTasks.find((t) => t.id === numericParentId);

  if (parentTask && parentTask.subtasks && Array.isArray(parentTask.subtasks)) {
    const foundSubtask = parentTask.subtasks.find((subtask) =>
      useStringComparison
        ? String(subtask.id) === String(subtaskId)
        : subtask.id === subtaskId
    );
    if (foundSubtask) {
      // Return a task-like object that represents the subtask with full ID
      return {
        ...foundSubtask,
        id: `${parentId}.${foundSubtask.id}`
      };
    }
  }

  return null;
}

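// Illustrative sketch (not part of the diff above): invented parent task 7
// with subtask 3; the helper returns the subtask with its id rewritten into
// dotted "parent.subtask" form.
const exampleTasks = [{ id: 7, subtasks: [{ id: 3, title: 'Write tests' }] }];
console.log(findSubtaskInParent('7', 3, exampleTasks));
// => { id: '7.3', title: 'Write tests' }
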
function findDependencyTask(depId, taskId, allTasks) {
  if (!depId) {
    return null;
  }

  // Convert depId to string for consistent comparison
  const depIdStr = String(depId);

  // Find the dependency task - handle both top-level and subtask IDs
  let depTask = null;

  // First try exact match (for top-level tasks)
  depTask = allTasks.find((t) => String(t.id) === depIdStr);

  // If not found and it's a subtask reference (contains dot), find the parent task first
  if (!depTask && depIdStr.includes('.')) {
    const [parentId, subtaskId] = depIdStr.split('.');
    depTask = findSubtaskInParent(parentId, subtaskId, allTasks, true);
  }

  // If still not found, try numeric comparison for relative subtask references
  if (!depTask && !isNaN(depId)) {
    const numericId = parseInt(depId, 10);
    // For subtasks, this might be a relative reference within the same parent
    if (taskId && typeof taskId === 'string' && taskId.includes('.')) {
      const [parentId] = taskId.split('.');
      depTask = findSubtaskInParent(parentId, numericId, allTasks, false);
    }
  }

  return depTask;
}

/**
 * Check if a task has cross-tag dependencies
 * @param {Object} task - Task to check
 * @param {string} targetTag - Target tag name
 * @param {Array} allTasks - Array of all tasks from all tags
 * @returns {Array} Array of cross-tag dependency conflicts
 */
function findTaskCrossTagConflicts(task, targetTag, allTasks) {
  const conflicts = [];

  // Validate task.dependencies is an array before processing
  if (!Array.isArray(task.dependencies) || task.dependencies.length === 0) {
    return conflicts;
  }

  // Filter out null/undefined dependencies and check each valid dependency
  const validDependencies = task.dependencies.filter((depId) => depId != null);

  validDependencies.forEach((depId) => {
    const depTask = findDependencyTask(depId, task.id, allTasks);

    if (depTask && depTask.tag !== targetTag) {
      conflicts.push({
        taskId: task.id,
        dependencyId: depId,
        dependencyTag: depTask.tag,
        message: `Task ${task.id} depends on ${depId} (in ${depTask.tag})`
      });
    }
  });

  return conflicts;
}

function validateCrossTagMove(task, sourceTag, targetTag, allTasks) {
  // Parameter validation
  if (!task || typeof task !== 'object') {
    throw new Error('Task parameter must be a valid object');
  }

  if (!sourceTag || typeof sourceTag !== 'string') {
    throw new Error('Source tag must be a valid string');
  }

  if (!targetTag || typeof targetTag !== 'string') {
    throw new Error('Target tag must be a valid string');
  }

  if (!Array.isArray(allTasks)) {
    throw new Error('All tasks parameter must be an array');
  }

  const conflicts = findTaskCrossTagConflicts(task, targetTag, allTasks);

  return {
    canMove: conflicts.length === 0,
    conflicts
  };
}

/**
 * Find all cross-tag dependencies for a set of tasks
 * @param {Array} sourceTasks - Array of tasks to check
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @param {Array} allTasks - Array of all tasks from all tags
 * @returns {Array} Array of cross-tag dependency conflicts
 */
function findCrossTagDependencies(sourceTasks, sourceTag, targetTag, allTasks) {
  // Parameter validation
  if (!Array.isArray(sourceTasks)) {
    throw new Error('Source tasks parameter must be an array');
  }

  if (!sourceTag || typeof sourceTag !== 'string') {
    throw new Error('Source tag must be a valid string');
  }

  if (!targetTag || typeof targetTag !== 'string') {
    throw new Error('Target tag must be a valid string');
  }

  if (!Array.isArray(allTasks)) {
    throw new Error('All tasks parameter must be an array');
  }

  const conflicts = [];

  sourceTasks.forEach((task) => {
    // Validate task object and dependencies array
    if (
      !task ||
      typeof task !== 'object' ||
      !Array.isArray(task.dependencies) ||
      task.dependencies.length === 0
    ) {
      return;
    }

    // Use the shared helper function to find conflicts for this task
    const taskConflicts = findTaskCrossTagConflicts(task, targetTag, allTasks);
    conflicts.push(...taskConflicts);
  });

  return conflicts;
}

/**
 * Helper function to find all tasks that depend on a given task (reverse dependencies)
 * @param {string|number} taskId - The task ID to find dependencies for
 * @param {Array} allTasks - Array of all tasks to search
 * @param {Set} dependentTaskIds - Set to add found dependencies to
 */
function findTasksThatDependOn(taskId, allTasks, dependentTaskIds) {
  // Find the task object for the given ID
  const sourceTask = allTasks.find((t) => t.id === taskId);
  if (!sourceTask) {
    return;
  }

  // Use the shared utility for reverse dependency traversal
  const reverseDeps = traverseDependencies([sourceTask], allTasks, {
    direction: 'reverse',
    includeSelf: false,
    logger: { warn: log.warn || console.warn }
  });

  // Add all found reverse dependencies to the dependentTaskIds set
  reverseDeps.forEach((depId) => dependentTaskIds.add(depId));
}

/**
 * Helper function to check if a task depends on a source task
 * @param {Object} task - Task to check for dependencies
 * @param {Object} sourceTask - Source task to check dependency against
 * @returns {boolean} True if task depends on source task
 */
function taskDependsOnSource(task, sourceTask) {
  if (!task || !Array.isArray(task.dependencies)) {
    return false;
  }

  const sourceTaskIdStr = String(sourceTask.id);

  return task.dependencies.some((depId) => {
    if (!depId) return false;

    const depIdStr = String(depId);

    // Exact match
    if (depIdStr === sourceTaskIdStr) {
      return true;
    }

    // Handle subtask references
    if (
      sourceTaskIdStr &&
      typeof sourceTaskIdStr === 'string' &&
      sourceTaskIdStr.includes('.')
    ) {
      // If source is a subtask, check if dependency references the parent
      const [parentId] = sourceTaskIdStr.split('.');
      if (depIdStr === parentId) {
        return true;
      }
    }

    // Handle relative subtask references
    if (
      depIdStr &&
      typeof depIdStr === 'string' &&
      depIdStr.includes('.') &&
      sourceTaskIdStr &&
      typeof sourceTaskIdStr === 'string' &&
      sourceTaskIdStr.includes('.')
    ) {
      const [depParentId] = depIdStr.split('.');
      const [sourceParentId] = sourceTaskIdStr.split('.');
      if (depParentId === sourceParentId) {
        // Both are subtasks of the same parent, check if they reference each other
        const depSubtaskNum = parseInt(depIdStr.split('.')[1], 10);
        const sourceSubtaskNum = parseInt(sourceTaskIdStr.split('.')[1], 10);
        if (depSubtaskNum === sourceSubtaskNum) {
          return true;
        }
      }
    }

    return false;
  });
}

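// Illustrative sketch (not part of the diff above): invented source subtask
// '4.2'. A dependency on its parent ('4') counts as a match, per the subtask
// handling above; an unrelated id does not.
const exampleSource = { id: '4.2' };
console.log(taskDependsOnSource({ dependencies: ['4'] }, exampleSource)); // true
console.log(taskDependsOnSource({ dependencies: [9] }, exampleSource)); // false
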
/**
|
||||
* Helper function to check if any subtasks of a task depend on source tasks
|
||||
* @param {Object} task - Task to check subtasks of
|
||||
* @param {Array} sourceTasks - Array of source tasks to check dependencies against
|
||||
* @returns {boolean} True if any subtasks depend on source tasks
|
||||
*/
|
||||
function subtasksDependOnSource(task, sourceTasks) {
|
||||
if (!task.subtasks || !Array.isArray(task.subtasks)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
return task.subtasks.some((subtask) => {
|
||||
// Check if this subtask depends on any source task
|
||||
const subtaskDependsOnSource = sourceTasks.some((sourceTask) =>
|
||||
taskDependsOnSource(subtask, sourceTask)
|
||||
);
|
||||
|
||||
if (subtaskDependsOnSource) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// Recursively check if any nested subtasks depend on source tasks
|
||||
if (subtask.subtasks && Array.isArray(subtask.subtasks)) {
|
||||
return subtasksDependOnSource(subtask, sourceTasks);
|
||||
}
|
||||
|
||||
return false;
|
||||
});
|
||||
}
|
||||

/**
 * Get all dependent task IDs for a set of cross-tag dependencies
 * @param {Array} sourceTasks - Array of source tasks
 * @param {Array} crossTagDependencies - Array of cross-tag dependency conflicts
 * @param {Array} allTasks - Array of all tasks from all tags
 * @returns {Array} Array of dependent task IDs to move
 */
function getDependentTaskIds(sourceTasks, crossTagDependencies, allTasks) {
	// Enhanced parameter validation
	if (!Array.isArray(sourceTasks)) {
		throw new Error('Source tasks parameter must be an array');
	}

	if (!Array.isArray(crossTagDependencies)) {
		throw new Error('Cross tag dependencies parameter must be an array');
	}

	if (!Array.isArray(allTasks)) {
		throw new Error('All tasks parameter must be an array');
	}

	// Use the shared recursive dependency finder
	const dependentTaskIds = new Set(
		findAllDependenciesRecursively(sourceTasks, allTasks, {
			includeSelf: false
		})
	);

	// Add immediate dependency IDs from conflicts and find their dependencies recursively
	const conflictTasksToProcess = [];
	crossTagDependencies.forEach((conflict) => {
		if (conflict && conflict.dependencyId) {
			const depId =
				typeof conflict.dependencyId === 'string'
					? parseInt(conflict.dependencyId, 10)
					: conflict.dependencyId;
			if (!isNaN(depId)) {
				dependentTaskIds.add(depId);
				// Find the task object for recursive dependency finding
				const depTask = allTasks.find((t) => t.id === depId);
				if (depTask) {
					conflictTasksToProcess.push(depTask);
				}
			}
		}
	});

	// Find dependencies of conflict tasks
	if (conflictTasksToProcess.length > 0) {
		const conflictDependencies = findAllDependenciesRecursively(
			conflictTasksToProcess,
			allTasks,
			{ includeSelf: false }
		);
		conflictDependencies.forEach((depId) => dependentTaskIds.add(depId));
	}

	// For --with-dependencies, we also need to find all dependencies of the source tasks
	sourceTasks.forEach((sourceTask) => {
		if (sourceTask && sourceTask.id) {
			// Find all tasks that this source task depends on (forward dependencies) - already handled above

			// Find all tasks that depend on this source task (reverse dependencies)
			findTasksThatDependOn(sourceTask.id, allTasks, dependentTaskIds);
		}
	});

	// Also include any tasks that depend on the source tasks
	sourceTasks.forEach((sourceTask) => {
		if (!sourceTask || typeof sourceTask !== 'object' || !sourceTask.id) {
			return; // Skip invalid source tasks
		}

		allTasks.forEach((task) => {
			// Validate task and dependencies array
			if (
				!task ||
				typeof task !== 'object' ||
				!Array.isArray(task.dependencies)
			) {
				return;
			}

			// Check if this task depends on the source task
			const hasDependency = taskDependsOnSource(task, sourceTask);

			// Check if any subtasks of this task depend on the source task
			const subtasksHaveDependency = subtasksDependOnSource(task, [sourceTask]);

			if (hasDependency || subtasksHaveDependency) {
				dependentTaskIds.add(task.id);
			}
		});
	});

	return Array.from(dependentTaskIds);
}
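
// Usage sketch (hypothetical data): gather every task that must travel with task 1.
// const idsToMove = getDependentTaskIds(
// 	[task1],          // source tasks selected for the move
// 	conflicts,        // conflicts reported by findCrossTagDependencies
// 	allTasksWithTags  // flattened list from getAllTasksWithTags
// );
// -> e.g. [4, 9, 12]: forward deps, conflict deps, and reverse deps of task 1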

/**
 * Validate subtask movement - block direct cross-tag subtask moves
 * @param {string} taskId - Task ID to validate
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @throws {Error} If subtask movement is attempted
 */
function validateSubtaskMove(taskId, sourceTag, targetTag) {
	// Parameter validation
	if (!taskId || typeof taskId !== 'string') {
		throw new DependencyError(
			DEPENDENCY_ERROR_CODES.INVALID_TASK_ID,
			'Task ID must be a valid string'
		);
	}

	if (!sourceTag || typeof sourceTag !== 'string') {
		throw new DependencyError(
			DEPENDENCY_ERROR_CODES.INVALID_SOURCE_TAG,
			'Source tag must be a valid string'
		);
	}

	if (!targetTag || typeof targetTag !== 'string') {
		throw new DependencyError(
			DEPENDENCY_ERROR_CODES.INVALID_TARGET_TAG,
			'Target tag must be a valid string'
		);
	}

	if (taskId.includes('.')) {
		throw new DependencyError(
			DEPENDENCY_ERROR_CODES.CANNOT_MOVE_SUBTASK,
			`Cannot move subtask ${taskId} directly between tags.

First promote it to a full task using:
task-master remove-subtask --id=${taskId} --convert`,
			{
				taskId,
				sourceTag,
				targetTag
			}
		);
	}
}
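
// Behaviour sketch: dotted subtask IDs are rejected with a typed error, plain IDs pass.
// try {
// 	validateSubtaskMove('5.2', 'backlog', 'in-progress');
// } catch (err) {
// 	// err instanceof DependencyError,
// 	// err.code === DEPENDENCY_ERROR_CODES.CANNOT_MOVE_SUBTASK
// }
// validateSubtaskMove('5', 'backlog', 'in-progress'); // no throw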

/**
 * Check if a task can be moved with its dependencies
 * @param {string} taskId - Task ID to check
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @param {Array} allTasks - Array of all tasks from all tags
 * @returns {Object} Object with canMove boolean and dependentTaskIds array
 */
function canMoveWithDependencies(taskId, sourceTag, targetTag, allTasks) {
	// Parameter validation
	if (!taskId || typeof taskId !== 'string') {
		throw new Error('Task ID must be a valid string');
	}

	if (!sourceTag || typeof sourceTag !== 'string') {
		throw new Error('Source tag must be a valid string');
	}

	if (!targetTag || typeof targetTag !== 'string') {
		throw new Error('Target tag must be a valid string');
	}

	if (!Array.isArray(allTasks)) {
		throw new Error('All tasks parameter must be an array');
	}

	// Enhanced task lookup to handle subtasks properly
	let sourceTask = null;

	// Check if it's a subtask ID (e.g., "1.2")
	if (taskId.includes('.')) {
		const [parentId, subtaskId] = taskId
			.split('.')
			.map((id) => parseInt(id, 10));
		const parentTask = allTasks.find(
			(t) => t.id === parentId && t.tag === sourceTag
		);

		if (
			parentTask &&
			parentTask.subtasks &&
			Array.isArray(parentTask.subtasks)
		) {
			const subtask = parentTask.subtasks.find((st) => st.id === subtaskId);
			if (subtask) {
				// Create a copy of the subtask with parent context
				sourceTask = {
					...subtask,
					parentTask: {
						id: parentTask.id,
						title: parentTask.title,
						status: parentTask.status
					},
					isSubtask: true
				};
			}
		}
	} else {
		// Regular task lookup - handle both string and numeric IDs
		sourceTask = allTasks.find((t) => {
			const taskIdNum = parseInt(taskId, 10);
			return (t.id === taskIdNum || t.id === taskId) && t.tag === sourceTag;
		});
	}

	if (!sourceTask) {
		return {
			canMove: false,
			dependentTaskIds: [],
			conflicts: [],
			error: 'Task not found'
		};
	}

	const validation = validateCrossTagMove(
		sourceTask,
		sourceTag,
		targetTag,
		allTasks
	);

	// Fix contradictory logic: return canMove: false when conflicts exist
	if (validation.canMove) {
		return {
			canMove: true,
			dependentTaskIds: [],
			conflicts: []
		};
	}

	// When conflicts exist, return canMove: false with conflicts and dependent task IDs
	const dependentTaskIds = getDependentTaskIds(
		[sourceTask],
		validation.conflicts,
		allTasks
	);

	return {
		canMove: false,
		dependentTaskIds,
		conflicts: validation.conflicts
	};
}
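
// Usage sketch (hypothetical tags/IDs): probe a move before committing to it.
// const check = canMoveWithDependencies('3', 'backlog', 'release-1', allTasksWithTags);
// if (!check.canMove) {
// 	// check.conflicts lists the blocking cross-tag dependencies;
// 	// check.dependentTaskIds are the extra tasks a --with-dependencies move would take along.
// }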

export {
	addDependency,
	removeDependency,
@@ -1842,15 +1245,5 @@ export {
	removeDuplicateDependencies,
	cleanupSubtaskDependencies,
	ensureAtLeastOneIndependentSubtask,
	validateAndFixDependencies,
	findDependencyTask,
	findTaskCrossTagConflicts,
	validateCrossTagMove,
	findCrossTagDependencies,
	getDependentTaskIds,
	validateSubtaskMove,
	canMoveWithDependencies,
	findAllDependenciesRecursively,
	DependencyError,
	DEPENDENCY_ERROR_CODES
	validateAndFixDependencies
};

@@ -36,7 +36,7 @@ export class PromptManager {
			const schema = JSON.parse(schemaContent);

			this.validatePrompt = this.ajv.compile(schema);
			log('debug', '✓ JSON schema validation enabled');
			log('info', '✓ JSON schema validation enabled');
		} catch (error) {
			log('warn', `⚠ Schema validation disabled: ${error.message}`);
			this.validatePrompt = () => true; // Fallback to no validation

@@ -786,39 +786,6 @@
		}
	],
	"ollama": [
		{
			"id": "gpt-oss:latest",
			"swe_score": 0.607,
			"cost_per_1m_tokens": {
				"input": 0,
				"output": 0
			},
			"allowed_roles": ["main", "fallback"],
			"max_tokens": 128000,
			"supported": true
		},
		{
			"id": "gpt-oss:20b",
			"swe_score": 0.607,
			"cost_per_1m_tokens": {
				"input": 0,
				"output": 0
			},
			"allowed_roles": ["main", "fallback"],
			"max_tokens": 128000,
			"supported": true
		},
		{
			"id": "gpt-oss:120b",
			"swe_score": 0.624,
			"cost_per_1m_tokens": {
				"input": 0,
				"output": 0
			},
			"allowed_roles": ["main", "fallback"],
			"max_tokens": 128000,
			"supported": true
		},
		{
			"id": "devstral:latest",
			"swe_score": 0,

@@ -4,7 +4,7 @@
 */

import { findTaskById } from './utils.js';
import parsePRD from './task-manager/parse-prd/index.js';
import parsePRD from './task-manager/parse-prd.js';
import updateTasks from './task-manager/update-tasks.js';
import updateTaskById from './task-manager/update-task-by-id.js';
import generateTaskFiles from './task-manager/generate-task-files.js';

@@ -294,11 +294,6 @@ function listTasks(
		});
	}

	// For compact output, return minimal one-line format
	if (outputFormat === 'compact') {
		return renderCompactOutput(filteredTasks, withSubtasks);
	}

	// ... existing code for text output ...

	// Calculate status breakdowns as percentages of total
@@ -967,98 +962,4 @@ function generateMarkdownOutput(data, filteredTasks, stats) {
	return markdown;
}

/**
 * Format dependencies for compact output with truncation and coloring
 * @param {Array} dependencies - Array of dependency IDs
 * @returns {string} - Formatted dependency string with arrow prefix
 */
function formatCompactDependencies(dependencies) {
	if (!dependencies || dependencies.length === 0) {
		return '';
	}

	if (dependencies.length > 5) {
		const visible = dependencies.slice(0, 5).join(',');
		const remaining = dependencies.length - 5;
		return ` → ${chalk.cyan(visible)}${chalk.gray('... (+' + remaining + ' more)')}`;
	} else {
		return ` → ${chalk.cyan(dependencies.join(','))}`;
	}
}

/**
 * Format a single task in compact one-line format
 * @param {Object} task - Task object
 * @param {number} maxTitleLength - Maximum title length before truncation
 * @returns {string} - Formatted task line
 */
function formatCompactTask(task, maxTitleLength = 50) {
	const status = task.status || 'pending';
	const priority = task.priority || 'medium';
	const title = truncate(task.title || 'Untitled', maxTitleLength);

	// Use colored status from existing function
	const coloredStatus = getStatusWithColor(status, true);

	// Color priority based on level
	const priorityColors = {
		high: chalk.red,
		medium: chalk.yellow,
		low: chalk.gray
	};
	const priorityColor = priorityColors[priority] || chalk.white;

	// Format dependencies using shared helper
	const depsText = formatCompactDependencies(task.dependencies);

	return `${chalk.cyan(task.id)} ${coloredStatus} ${chalk.white(title)} ${priorityColor('(' + priority + ')')}${depsText}`;
}

/**
 * Format a subtask in compact format with indentation
 * @param {Object} subtask - Subtask object
 * @param {string|number} parentId - Parent task ID
 * @param {number} maxTitleLength - Maximum title length before truncation
 * @returns {string} - Formatted subtask line
 */
function formatCompactSubtask(subtask, parentId, maxTitleLength = 47) {
	const status = subtask.status || 'pending';
	const title = truncate(subtask.title || 'Untitled', maxTitleLength);

	// Use colored status from existing function
	const coloredStatus = getStatusWithColor(status, true);

	// Format dependencies using shared helper
	const depsText = formatCompactDependencies(subtask.dependencies);

	return `  ${chalk.cyan(parentId + '.' + subtask.id)} ${coloredStatus} ${chalk.dim(title)}${depsText}`;
}

/**
 * Render complete compact output
 * @param {Array} filteredTasks - Tasks to display
 * @param {boolean} withSubtasks - Whether to include subtasks
 * @returns {void} - Outputs directly to console
 */
function renderCompactOutput(filteredTasks, withSubtasks) {
	if (filteredTasks.length === 0) {
		console.log('No tasks found');
		return;
	}

	const output = [];

	filteredTasks.forEach((task) => {
		output.push(formatCompactTask(task));

		if (withSubtasks && task.subtasks && task.subtasks.length > 0) {
			task.subtasks.forEach((subtask) => {
				output.push(formatCompactSubtask(subtask, task.id));
			});
		}
	});

	console.log(output.join('\n'));
}

export default listTasks;

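// Sample compact output (illustrative only; status glyphs come from
// getStatusWithColor and ANSI colors are omitted here):
//   1 done        Set up project scaffolding (high) → 2,3
//     1.1 done    Initialise repository
//   2 in-progress Implement task parser (medium) → 1
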
@@ -1,65 +1,7 @@
import path from 'path';
import {
	log,
	readJSON,
	writeJSON,
	setTasksForTag,
	traverseDependencies
} from '../utils.js';
import { log, readJSON, writeJSON, setTasksForTag } from '../utils.js';
import { isTaskDependentOn } from '../task-manager.js';
import generateTaskFiles from './generate-task-files.js';
import {
	findCrossTagDependencies,
	getDependentTaskIds,
	validateSubtaskMove
} from '../dependency-manager.js';

/**
 * Find all dependencies recursively for a set of source tasks with depth limiting
 * @param {Array} sourceTasks - The source tasks to find dependencies for
 * @param {Array} allTasks - All available tasks from all tags
 * @param {Object} options - Options object
 * @param {number} options.maxDepth - Maximum recursion depth (default: 50)
 * @param {boolean} options.includeSelf - Whether to include self-references (default: false)
 * @returns {Array} Array of all dependency task IDs
 */
function findAllDependenciesRecursively(sourceTasks, allTasks, options = {}) {
	return traverseDependencies(sourceTasks, allTasks, {
		...options,
		direction: 'forward',
		logger: { warn: console.warn }
	});
}

/**
 * Structured error class for move operations
 */
class MoveTaskError extends Error {
	constructor(code, message, data = {}) {
		super(message);
		this.name = 'MoveTaskError';
		this.code = code;
		this.data = data;
	}
}

/**
 * Error codes for move operations
 */
const MOVE_ERROR_CODES = {
	CROSS_TAG_DEPENDENCY_CONFLICTS: 'CROSS_TAG_DEPENDENCY_CONFLICTS',
	CANNOT_MOVE_SUBTASK: 'CANNOT_MOVE_SUBTASK',
	SOURCE_TARGET_TAGS_SAME: 'SOURCE_TARGET_TAGS_SAME',
	TASK_NOT_FOUND: 'TASK_NOT_FOUND',
	SUBTASK_NOT_FOUND: 'SUBTASK_NOT_FOUND',
	PARENT_TASK_NOT_FOUND: 'PARENT_TASK_NOT_FOUND',
	PARENT_TASK_NO_SUBTASKS: 'PARENT_TASK_NO_SUBTASKS',
	DESTINATION_TASK_NOT_FOUND: 'DESTINATION_TASK_NOT_FOUND',
	TASK_ALREADY_EXISTS: 'TASK_ALREADY_EXISTS',
	INVALID_TASKS_FILE: 'INVALID_TASKS_FILE',
	ID_COUNT_MISMATCH: 'ID_COUNT_MISMATCH',
	INVALID_SOURCE_TAG: 'INVALID_SOURCE_TAG',
	INVALID_TARGET_TAG: 'INVALID_TARGET_TAG'
};
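
// Handling sketch (hypothetical call site): callers can branch on the structured
// code instead of parsing error messages.
// try {
// 	await moveTasksBetweenTags(tasksPath, ['3'], 'backlog', 'release-1', {}, { projectRoot });
// } catch (err) {
// 	if (err instanceof MoveTaskError &&
// 		err.code === MOVE_ERROR_CODES.CROSS_TAG_DEPENDENCY_CONFLICTS) {
// 		console.error(err.data.conflicts); // conflict details attached by the thrower
// 	}
// }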

/**
 * Move one or more tasks/subtasks to new positions
@@ -85,8 +27,7 @@ async function moveTask(
	const destinationIds = destinationId.split(',').map((id) => id.trim());

	if (sourceIds.length !== destinationIds.length) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.ID_COUNT_MISMATCH,
		throw new Error(
			`Number of source IDs (${sourceIds.length}) must match number of destination IDs (${destinationIds.length})`
		);
	}
@@ -131,8 +72,7 @@ async function moveTask(

	// Ensure the tag exists in the raw data
	if (!rawData || !rawData[tag] || !Array.isArray(rawData[tag].tasks)) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.INVALID_TASKS_FILE,
		throw new Error(
			`Invalid tasks file or tag "${tag}" not found at ${tasksPath}`
		);
	}
@@ -197,14 +137,10 @@ function moveSubtaskToSubtask(tasks, sourceId, destinationId) {
	const destParentTask = tasks.find((t) => t.id === destParentId);

	if (!sourceParentTask) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
			`Source parent task with ID ${sourceParentId} not found`
		);
		throw new Error(`Source parent task with ID ${sourceParentId} not found`);
	}
	if (!destParentTask) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
		throw new Error(
			`Destination parent task with ID ${destParentId} not found`
		);
	}
@@ -222,10 +158,7 @@ function moveSubtaskToSubtask(tasks, sourceId, destinationId) {
		(st) => st.id === sourceSubtaskId
	);
	if (sourceSubtaskIndex === -1) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.SUBTASK_NOT_FOUND,
			`Source subtask ${sourceId} not found`
		);
		throw new Error(`Source subtask ${sourceId} not found`);
	}

	const sourceSubtask = sourceParentTask.subtasks[sourceSubtaskIndex];
@@ -283,16 +216,10 @@ function moveSubtaskToTask(tasks, sourceId, destinationId) {
	const sourceParentTask = tasks.find((t) => t.id === sourceParentId);

	if (!sourceParentTask) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
			`Source parent task with ID ${sourceParentId} not found`
		);
		throw new Error(`Source parent task with ID ${sourceParentId} not found`);
	}
	if (!sourceParentTask.subtasks) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.PARENT_TASK_NO_SUBTASKS,
			`Source parent task ${sourceParentId} has no subtasks`
		);
		throw new Error(`Source parent task ${sourceParentId} has no subtasks`);
	}

	// Find source subtask
@@ -300,10 +227,7 @@ function moveSubtaskToTask(tasks, sourceId, destinationId) {
		(st) => st.id === sourceSubtaskId
	);
	if (sourceSubtaskIndex === -1) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.SUBTASK_NOT_FOUND,
			`Source subtask ${sourceId} not found`
		);
		throw new Error(`Source subtask ${sourceId} not found`);
	}

	const sourceSubtask = sourceParentTask.subtasks[sourceSubtaskIndex];
@@ -311,8 +235,7 @@ function moveSubtaskToTask(tasks, sourceId, destinationId) {
	// Check if destination task exists
	const existingDestTask = tasks.find((t) => t.id === destTaskId);
	if (existingDestTask) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.TASK_ALREADY_EXISTS,
		throw new Error(
			`Cannot move to existing task ID ${destTaskId}. Choose a different ID or use subtask destination.`
		);
	}
@@ -359,14 +282,10 @@ function moveTaskToSubtask(tasks, sourceId, destinationId) {
	const destParentTask = tasks.find((t) => t.id === destParentId);

	if (sourceTaskIndex === -1) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.TASK_NOT_FOUND,
			`Source task with ID ${sourceTaskId} not found`
		);
		throw new Error(`Source task with ID ${sourceTaskId} not found`);
	}
	if (!destParentTask) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
		throw new Error(
			`Destination parent task with ID ${destParentId} not found`
		);
	}
@@ -421,10 +340,7 @@ function moveTaskToTask(tasks, sourceId, destinationId) {
	// Find source task
	const sourceTaskIndex = tasks.findIndex((t) => t.id === sourceTaskId);
	if (sourceTaskIndex === -1) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.TASK_NOT_FOUND,
			`Source task with ID ${sourceTaskId} not found`
		);
		throw new Error(`Source task with ID ${sourceTaskId} not found`);
	}

	const sourceTask = tasks[sourceTaskIndex];
@@ -437,8 +353,7 @@ function moveTaskToTask(tasks, sourceId, destinationId) {
	const destTask = tasks[destTaskIndex];

	// For now, throw an error to avoid accidental overwrites
	throw new MoveTaskError(
		MOVE_ERROR_CODES.TASK_ALREADY_EXISTS,
	throw new Error(
		`Task with ID ${destTaskId} already exists. Use a different destination ID.`
	);
	} else {
@@ -563,434 +478,4 @@ function moveTaskToNewId(tasks, sourceTaskIndex, sourceTask, destTaskId) {
	};
}

/**
 * Get all tasks from all tags with tag information
 * @param {Object} rawData - The raw tagged data object
 * @returns {Array} A flat array of all task objects with tag property
 */
function getAllTasksWithTags(rawData) {
	let allTasks = [];
	for (const tagName in rawData) {
		if (
			Object.prototype.hasOwnProperty.call(rawData, tagName) &&
			rawData[tagName] &&
			Array.isArray(rawData[tagName].tasks)
		) {
			const tasksWithTag = rawData[tagName].tasks.map((task) => ({
				...task,
				tag: tagName
			}));
			allTasks = allTasks.concat(tasksWithTag);
		}
	}
	return allTasks;
}

/**
 * Validate move operation parameters and data
 * @param {string} tasksPath - Path to tasks.json file
 * @param {Array} taskIds - Array of task IDs to move
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @param {Object} context - Context object
 * @returns {Object} Validation result with rawData and sourceTasks
 */
async function validateMove(tasksPath, taskIds, sourceTag, targetTag, context) {
	const { projectRoot } = context;

	// Read the raw data without tag resolution to preserve tagged structure
	let rawData = readJSON(tasksPath, projectRoot, sourceTag);

	// Handle the case where readJSON returns resolved data with _rawTaggedData
	if (rawData && rawData._rawTaggedData) {
		rawData = rawData._rawTaggedData;
	}

	// Validate source tag exists
	if (
		!rawData ||
		!rawData[sourceTag] ||
		!Array.isArray(rawData[sourceTag].tasks)
	) {
		throw new MoveTaskError(
			MOVE_ERROR_CODES.INVALID_SOURCE_TAG,
			`Source tag "${sourceTag}" not found or invalid`
		);
	}

	// Create target tag if it doesn't exist
	if (!rawData[targetTag]) {
		rawData[targetTag] = { tasks: [] };
		log('info', `Created new tag "${targetTag}"`);
	}

	// Normalize all IDs to strings once for consistent comparison
	const normalizedSearchIds = taskIds.map((id) => String(id));

	const sourceTasks = rawData[sourceTag].tasks.filter((t) => {
		const normalizedTaskId = String(t.id);
		return normalizedSearchIds.includes(normalizedTaskId);
	});

	// Validate subtask movement
	taskIds.forEach((taskId) => {
		validateSubtaskMove(taskId, sourceTag, targetTag);
	});

	return { rawData, sourceTasks };
}

/**
 * Load and prepare task data for move operation
 * @param {Object} validation - Validation result from validateMove
 * @returns {Object} Prepared data with rawData, sourceTasks, and allTasks
 */
async function prepareTaskData(validation) {
	const { rawData, sourceTasks } = validation;

	// Get all tasks for validation
	const allTasks = getAllTasksWithTags(rawData);

	return { rawData, sourceTasks, allTasks };
}

/**
 * Resolve dependencies and determine tasks to move
 * @param {Array} sourceTasks - Source tasks to move
 * @param {Array} allTasks - All available tasks from all tags
 * @param {Object} options - Move options
 * @param {Array} taskIds - Original task IDs
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @returns {Object} Tasks to move and dependency resolution info
 */
async function resolveDependencies(
	sourceTasks,
	allTasks,
	options,
	taskIds,
	sourceTag,
	targetTag
) {
	const { withDependencies = false, ignoreDependencies = false } = options;

	// Handle --with-dependencies flag first (regardless of cross-tag dependencies)
	if (withDependencies) {
		// Move dependent tasks along with main tasks
		// Find ALL dependencies recursively within the same tag
		const allDependentTaskIds = findAllDependenciesRecursively(
			sourceTasks,
			allTasks,
			{ maxDepth: 100, includeSelf: false }
		);
		const allTaskIdsToMove = [...new Set([...taskIds, ...allDependentTaskIds])];

		log(
			'info',
			`Moving ${allTaskIdsToMove.length} tasks (including dependencies): ${allTaskIdsToMove.join(', ')}`
		);

		return {
			tasksToMove: allTaskIdsToMove,
			dependencyResolution: {
				type: 'with-dependencies',
				dependentTasks: allDependentTaskIds
			}
		};
	}

	// Find cross-tag dependencies (these shouldn't exist since dependencies are only within tags)
	const crossTagDependencies = findCrossTagDependencies(
		sourceTasks,
		sourceTag,
		targetTag,
		allTasks
	);

	if (crossTagDependencies.length > 0) {
		if (ignoreDependencies) {
			// Break cross-tag dependencies (edge case - shouldn't normally happen)
			sourceTasks.forEach((task) => {
				task.dependencies = task.dependencies.filter((depId) => {
					// Handle both task IDs and subtask IDs (e.g., "1.2")
					let depTask = null;
					if (typeof depId === 'string' && depId.includes('.')) {
						// It's a subtask ID - extract parent task ID and find the parent task
						const [parentId, subtaskId] = depId
							.split('.')
							.map((id) => parseInt(id, 10));
						depTask = allTasks.find((t) => t.id === parentId);
					} else {
						// It's a regular task ID - normalize to number for comparison
						const normalizedDepId =
							typeof depId === 'string' ? parseInt(depId, 10) : depId;
						depTask = allTasks.find((t) => t.id === normalizedDepId);
					}
					return !depTask || depTask.tag === targetTag;
				});
			});

			log(
				'warn',
				`Removed ${crossTagDependencies.length} cross-tag dependencies`
			);

			return {
				tasksToMove: taskIds,
				dependencyResolution: {
					type: 'ignored-dependencies',
					conflicts: crossTagDependencies
				}
			};
		} else {
			// Block move and show error
			throw new MoveTaskError(
				MOVE_ERROR_CODES.CROSS_TAG_DEPENDENCY_CONFLICTS,
				`Cannot move tasks: ${crossTagDependencies.length} cross-tag dependency conflicts found`,
				{
					conflicts: crossTagDependencies,
					sourceTag,
					targetTag,
					taskIds
				}
			);
		}
	}

	return {
		tasksToMove: taskIds,
		dependencyResolution: { type: 'no-conflicts' }
	};
}

/**
 * Execute the actual move operation
 * @param {Array} tasksToMove - Array of task IDs to move
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @param {Object} rawData - Raw data object
 * @param {Object} context - Context object
 * @param {string} tasksPath - Path to tasks.json file
 * @returns {Object} Move operation result
 */
async function executeMoveOperation(
	tasksToMove,
	sourceTag,
	targetTag,
	rawData,
	context,
	tasksPath
) {
	const { projectRoot } = context;
	const movedTasks = [];

	// Move each task from source to target tag
	for (const taskId of tasksToMove) {
		// Normalize taskId to number for comparison
		const normalizedTaskId =
			typeof taskId === 'string' ? parseInt(taskId, 10) : taskId;

		const sourceTaskIndex = rawData[sourceTag].tasks.findIndex(
			(t) => t.id === normalizedTaskId
		);

		if (sourceTaskIndex === -1) {
			throw new MoveTaskError(
				MOVE_ERROR_CODES.TASK_NOT_FOUND,
				`Task ${taskId} not found in source tag "${sourceTag}"`
			);
		}

		const taskToMove = rawData[sourceTag].tasks[sourceTaskIndex];

		// Check for ID conflicts in target tag
		const existingTaskIndex = rawData[targetTag].tasks.findIndex(
			(t) => t.id === normalizedTaskId
		);
		if (existingTaskIndex !== -1) {
			throw new MoveTaskError(
				MOVE_ERROR_CODES.TASK_ALREADY_EXISTS,
				`Task ${taskId} already exists in target tag "${targetTag}"`
			);
		}

		// Remove from source tag
		rawData[sourceTag].tasks.splice(sourceTaskIndex, 1);

		// Preserve task metadata and add to target tag
		const taskWithPreservedMetadata = preserveTaskMetadata(
			taskToMove,
			sourceTag,
			targetTag
		);
		rawData[targetTag].tasks.push(taskWithPreservedMetadata);

		movedTasks.push({
			id: taskId,
			fromTag: sourceTag,
			toTag: targetTag
		});

		log('info', `Moved task ${taskId} from "${sourceTag}" to "${targetTag}"`);
	}

	return { rawData, movedTasks };
}

/**
 * Finalize the move operation by saving data and returning result
 * @param {Object} moveResult - Result from executeMoveOperation
 * @param {string} tasksPath - Path to tasks.json file
 * @param {Object} context - Context object
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @returns {Object} Final result object
 */
async function finalizeMove(
	moveResult,
	tasksPath,
	context,
	sourceTag,
	targetTag
) {
	const { projectRoot } = context;
	const { rawData, movedTasks } = moveResult;

	// Write the updated data
	writeJSON(tasksPath, rawData, projectRoot, null);

	return {
		message: `Successfully moved ${movedTasks.length} tasks from "${sourceTag}" to "${targetTag}"`,
		movedTasks
	};
}

/**
 * Move tasks between different tags with dependency handling
 * @param {string} tasksPath - Path to tasks.json file
 * @param {Array} taskIds - Array of task IDs to move
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @param {Object} options - Move options
 * @param {boolean} options.withDependencies - Move dependent tasks along with main task
 * @param {boolean} options.ignoreDependencies - Break cross-tag dependencies during move
 * @param {Object} context - Context object containing projectRoot and tag information
 * @returns {Object} Result object with moved task details
 */
async function moveTasksBetweenTags(
	tasksPath,
	taskIds,
	sourceTag,
	targetTag,
	options = {},
	context = {}
) {
	// 1. Validation phase
	const validation = await validateMove(
		tasksPath,
		taskIds,
		sourceTag,
		targetTag,
		context
	);

	// 2. Load and prepare data
	const { rawData, sourceTasks, allTasks } = await prepareTaskData(validation);

	// 3. Handle dependencies
	const { tasksToMove } = await resolveDependencies(
		sourceTasks,
		allTasks,
		options,
		taskIds,
		sourceTag,
		targetTag
	);

	// 4. Execute move
	const moveResult = await executeMoveOperation(
		tasksToMove,
		sourceTag,
		targetTag,
		rawData,
		context,
		tasksPath
	);

	// 5. Save and return
	return await finalizeMove(
		moveResult,
		tasksPath,
		context,
		sourceTag,
		targetTag
	);
}

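// Usage sketch (hypothetical paths/tags): move tasks 3 and 4 plus their
// in-tag dependency chain from "backlog" to "release-1".
// const result = await moveTasksBetweenTags(
// 	'.taskmaster/tasks/tasks.json',
// 	['3', '4'],
// 	'backlog',
// 	'release-1',
// 	{ withDependencies: true },
// 	{ projectRoot: process.cwd() }
// );
// result.message -> 'Successfully moved N tasks from "backlog" to "release-1"'
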
/**
 * Detect ID conflicts in target tag
 * @param {Array} taskIds - Array of task IDs to check
 * @param {string} targetTag - Target tag name
 * @param {Object} rawData - Raw data object
 * @returns {Array} Array of conflicting task IDs
 */
function detectIdConflicts(taskIds, targetTag, rawData) {
	const conflicts = [];

	if (!rawData[targetTag] || !Array.isArray(rawData[targetTag].tasks)) {
		return conflicts;
	}

	taskIds.forEach((taskId) => {
		// Normalize taskId to number for comparison
		const normalizedTaskId =
			typeof taskId === 'string' ? parseInt(taskId, 10) : taskId;
		const existingTask = rawData[targetTag].tasks.find(
			(t) => t.id === normalizedTaskId
		);
		if (existingTask) {
			conflicts.push(taskId);
		}
	});

	return conflicts;
}

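// Usage sketch: check for ID collisions before executing a move.
// const conflicts = detectIdConflicts(['3', '4'], 'release-1', rawData);
// conflicts -> e.g. ['4'] when task 4 already exists in "release-1",
// which would make executeMoveOperation throw TASK_ALREADY_EXISTS.
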
/**
 * Preserve task metadata during cross-tag moves
 * @param {Object} task - Task object
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @returns {Object} Task object with preserved metadata
 */
function preserveTaskMetadata(task, sourceTag, targetTag) {
	// Update the tag property to reflect the new location
	task.tag = targetTag;

	// Add move history to task metadata
	if (!task.metadata) {
		task.metadata = {};
	}

	if (!task.metadata.moveHistory) {
		task.metadata.moveHistory = [];
	}

	task.metadata.moveHistory.push({
		fromTag: sourceTag,
		toTag: targetTag,
		timestamp: new Date().toISOString()
	});

	return task;
}

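// Effect sketch: after a move, the task carries an audit trail of tag changes.
// preserveTaskMetadata({ id: 3 }, 'backlog', 'release-1') yields roughly:
// {
//   id: 3,
//   tag: 'release-1',
//   metadata: { moveHistory: [{ fromTag: 'backlog', toTag: 'release-1', timestamp: '...' }] }
// }
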
export default moveTask;
export {
	moveTasksBetweenTags,
	getAllTasksWithTags,
	detectIdConflicts,
	preserveTaskMetadata,
	MoveTaskError,
	MOVE_ERROR_CODES
};

395
scripts/modules/task-manager/parse-prd.js
Normal file
@@ -0,0 +1,395 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import { z } from 'zod';

import {
	log,
	writeJSON,
	enableSilentMode,
	disableSilentMode,
	isSilentMode,
	readJSON,
	findTaskById,
	ensureTagMetadata,
	getCurrentTag
} from '../utils.js';

import { generateObjectService } from '../ai-services-unified.js';
import {
	getDebugFlag,
	getMainProvider,
	getResearchProvider,
	getDefaultPriority
} from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import { displayAiUsageSummary } from '../ui.js';
import { CUSTOM_PROVIDERS } from '../../../src/constants/providers.js';

// Define the Zod schema for a SINGLE task object
const prdSingleTaskSchema = z.object({
	id: z.number(),
	title: z.string().min(1),
	description: z.string().min(1),
	details: z.string(),
	testStrategy: z.string(),
	priority: z.enum(['high', 'medium', 'low']),
	dependencies: z.array(z.number()),
	status: z.string()
});

// Define the Zod schema for the ENTIRE expected AI response object
const prdResponseSchema = z.object({
	tasks: z.array(prdSingleTaskSchema),
	metadata: z.object({
		projectName: z.string(),
		totalTasks: z.number(),
		sourceFile: z.string(),
		generatedAt: z.string()
	})
});

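// Validation sketch (hypothetical response object): safeParse lets callers
// inspect failures instead of throwing.
// const parsed = prdResponseSchema.safeParse(candidateResponse);
// if (!parsed.success) {
// 	console.error(parsed.error.issues); // e.g. a task with an empty title
// }
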
/**
 * Parse a PRD file and generate tasks
 * @param {string} prdPath - Path to the PRD file
 * @param {string} tasksPath - Path to the tasks.json file
 * @param {number} numTasks - Number of tasks to generate
 * @param {Object} options - Additional options
 * @param {boolean} [options.force=false] - Whether to overwrite existing tasks.json.
 * @param {boolean} [options.append=false] - Append to existing tasks file.
 * @param {boolean} [options.research=false] - Use research model for enhanced PRD analysis.
 * @param {Object} [options.reportProgress] - Function to report progress (optional, likely unused).
 * @param {Object} [options.mcpLog] - MCP logger object (optional).
 * @param {Object} [options.session] - Session object from MCP server (optional).
 * @param {string} [options.projectRoot] - Project root path (for MCP/env fallback).
 * @param {string} [options.tag] - Target tag for task generation.
 * @param {string} [outputFormat='text'] - Output format ('text' or 'json').
 */
async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
	const {
		reportProgress,
		mcpLog,
		session,
		projectRoot,
		force = false,
		append = false,
		research = false,
		tag
	} = options;
	const isMCP = !!mcpLog;
	const outputFormat = isMCP ? 'json' : 'text';

	// Use the provided tag, or the current active tag, or default to 'master'
	const targetTag = tag;

	const logFn = mcpLog
		? mcpLog
		: {
				// Wrapper for CLI
				info: (...args) => log('info', ...args),
				warn: (...args) => log('warn', ...args),
				error: (...args) => log('error', ...args),
				debug: (...args) => log('debug', ...args),
				success: (...args) => log('success', ...args)
			};

	// Create custom reporter using logFn
	const report = (message, level = 'info') => {
		// Check logFn directly
		if (logFn && typeof logFn[level] === 'function') {
			logFn[level](message);
		} else if (!isSilentMode() && outputFormat === 'text') {
			// Fallback to original log only if necessary and in CLI text mode
			log(level, message);
		}
	};

	report(
		`Parsing PRD file: ${prdPath}, Force: ${force}, Append: ${append}, Research: ${research}`
	);

	let existingTasks = [];
	let nextId = 1;
	let aiServiceResponse = null;

	try {
		// Check if there are existing tasks in the target tag
		let hasExistingTasksInTag = false;
		if (fs.existsSync(tasksPath)) {
			try {
				// Read the entire file to check if the tag exists
				const existingFileContent = fs.readFileSync(tasksPath, 'utf8');
				const allData = JSON.parse(existingFileContent);

				// Check if the target tag exists and has tasks
				if (
					allData[targetTag] &&
					Array.isArray(allData[targetTag].tasks) &&
					allData[targetTag].tasks.length > 0
				) {
					hasExistingTasksInTag = true;
					existingTasks = allData[targetTag].tasks;
					nextId = Math.max(...existingTasks.map((t) => t.id || 0)) + 1;
				}
			} catch (error) {
				// If we can't read the file or parse it, assume no existing tasks in this tag
				hasExistingTasksInTag = false;
			}
		}

		// Handle file existence and overwrite/append logic based on target tag
		if (hasExistingTasksInTag) {
			if (append) {
				report(
					`Append mode enabled. Found ${existingTasks.length} existing tasks in tag '${targetTag}'. Next ID will be ${nextId}.`,
					'info'
				);
			} else if (!force) {
				// Not appending and not forcing overwrite, and there are existing tasks in the target tag
				const overwriteError = new Error(
					`Tag '${targetTag}' already contains ${existingTasks.length} tasks. Use --force to overwrite or --append to add to existing tasks.`
				);
				report(overwriteError.message, 'error');
				if (outputFormat === 'text') {
					console.error(chalk.red(overwriteError.message));
				}
				throw overwriteError;
			} else {
				// Force overwrite is true
				report(
					`Force flag enabled. Overwriting existing tasks in tag '${targetTag}'.`,
					'info'
				);
			}
		} else {
			// No existing tasks in target tag, proceed without confirmation
			report(
				`Tag '${targetTag}' is empty or doesn't exist. Creating/updating tag with new tasks.`,
				'info'
			);
		}

		report(`Reading PRD content from ${prdPath}`, 'info');
		const prdContent = fs.readFileSync(prdPath, 'utf8');
		if (!prdContent) {
			throw new Error(`Input file ${prdPath} is empty or could not be read.`);
		}

		// Load prompts using PromptManager
		const promptManager = getPromptManager();

		// Get defaultTaskPriority from config
		const defaultTaskPriority = getDefaultPriority(projectRoot) || 'medium';

		// Check if Claude Code is being used as the provider
		const currentProvider = research
			? getResearchProvider(projectRoot)
			: getMainProvider(projectRoot);
		const isClaudeCode = currentProvider === CUSTOM_PROVIDERS.CLAUDE_CODE;

		const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
			'parse-prd',
			{
				research,
				numTasks,
				nextId,
				prdContent,
				prdPath,
				defaultTaskPriority,
				isClaudeCode,
				projectRoot: projectRoot || ''
			}
		);

		// Call the unified AI service
		report(
			`Calling AI service to generate tasks from PRD${research ? ' with research-backed analysis' : ''}...`,
			'info'
		);

		// Call generateObjectService with the CORRECT schema and additional telemetry params
		aiServiceResponse = await generateObjectService({
			role: research ? 'research' : 'main', // Use research role if flag is set
			session: session,
			projectRoot: projectRoot,
			schema: prdResponseSchema,
			objectName: 'tasks_data',
			systemPrompt: systemPrompt,
			prompt: userPrompt,
			commandName: 'parse-prd',
			outputType: isMCP ? 'mcp' : 'cli'
		});

		// Create the directory if it doesn't exist
		const tasksDir = path.dirname(tasksPath);
		if (!fs.existsSync(tasksDir)) {
			fs.mkdirSync(tasksDir, { recursive: true });
		}
		logFn.success(
			`Successfully parsed PRD via AI service${research ? ' with research-backed analysis' : ''}.`
		);

		// Validate and Process Tasks
		// const generatedData = aiServiceResponse?.mainResult?.object;

		// Robustly get the actual AI-generated object
		let generatedData = null;
		if (aiServiceResponse?.mainResult) {
			if (
				typeof aiServiceResponse.mainResult === 'object' &&
				aiServiceResponse.mainResult !== null &&
				'tasks' in aiServiceResponse.mainResult
			) {
				// If mainResult itself is the object with a 'tasks' property
				generatedData = aiServiceResponse.mainResult;
			} else if (
				typeof aiServiceResponse.mainResult.object === 'object' &&
				aiServiceResponse.mainResult.object !== null &&
				'tasks' in aiServiceResponse.mainResult.object
			) {
				// If mainResult.object is the object with a 'tasks' property
				generatedData = aiServiceResponse.mainResult.object;
			}
		}

		if (!generatedData || !Array.isArray(generatedData.tasks)) {
			logFn.error(
				`Internal Error: generateObjectService returned unexpected data structure: ${JSON.stringify(generatedData)}`
			);
			throw new Error(
				'AI service returned unexpected data structure after validation.'
			);
		}

		let currentId = nextId;
		const taskMap = new Map();
		const processedNewTasks = generatedData.tasks.map((task) => {
			const newId = currentId++;
			taskMap.set(task.id, newId);
			return {
				...task,
				id: newId,
				status: task.status || 'pending',
				priority: task.priority || 'medium',
				dependencies: Array.isArray(task.dependencies) ? task.dependencies : [],
				subtasks: [],
				// Ensure all required fields have values (even if empty strings)
				title: task.title || '',
				description: task.description || '',
				details: task.details || '',
				testStrategy: task.testStrategy || ''
			};
		});

		// Remap dependencies for the NEWLY processed tasks
		processedNewTasks.forEach((task) => {
			task.dependencies = task.dependencies
				.map((depId) => taskMap.get(depId)) // Map old AI ID to new sequential ID
				.filter(
					(newDepId) =>
						newDepId != null && // Must exist
						newDepId < task.id && // Must be a lower ID (could be existing or newly generated)
						(findTaskById(existingTasks, newDepId) || // Check if it exists in old tasks OR
							processedNewTasks.some((t) => t.id === newDepId)) // check if it exists in new tasks
				);
		});

		const finalTasks = append
			? [...existingTasks, ...processedNewTasks]
			: processedNewTasks;

		// Read the existing file to preserve other tags
		let outputData = {};
		if (fs.existsSync(tasksPath)) {
			try {
				const existingFileContent = fs.readFileSync(tasksPath, 'utf8');
				outputData = JSON.parse(existingFileContent);
			} catch (error) {
				// If we can't read the existing file, start with empty object
				outputData = {};
			}
		}

		// Update only the target tag, preserving other tags
		outputData[targetTag] = {
			tasks: finalTasks,
			metadata: {
				created:
					outputData[targetTag]?.metadata?.created || new Date().toISOString(),
				updated: new Date().toISOString(),
				description: `Tasks for ${targetTag} context`
			}
		};

		// Ensure the target tag has proper metadata
		ensureTagMetadata(outputData[targetTag], {
			description: `Tasks for ${targetTag} context`
		});

		// Write the complete data structure back to the file
		fs.writeFileSync(tasksPath, JSON.stringify(outputData, null, 2));
		report(
			`Successfully ${append ? 'appended' : 'generated'} ${processedNewTasks.length} tasks in ${tasksPath}${research ? ' with research-backed analysis' : ''}`,
			'success'
		);

		// Generate markdown task files after writing tasks.json
		// await generateTaskFiles(tasksPath, path.dirname(tasksPath), { mcpLog });

		// Handle CLI output (e.g., success message)
		if (outputFormat === 'text') {
			console.log(
				boxen(
					chalk.green(
						`Successfully generated ${processedNewTasks.length} new tasks${research ? ' with research-backed analysis' : ''}. Total tasks in ${tasksPath}: ${finalTasks.length}`
					),
					{ padding: 1, borderColor: 'green', borderStyle: 'round' }
				)
			);

			console.log(
				boxen(
					chalk.white.bold('Next Steps:') +
						'\n\n' +
						`${chalk.cyan('1.')} Run ${chalk.yellow('task-master list')} to view all tasks\n` +
						`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks`,
					{
						padding: 1,
						borderColor: 'cyan',
						borderStyle: 'round',
						margin: { top: 1 }
					}
				)
			);

			if (aiServiceResponse && aiServiceResponse.telemetryData) {
				displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
			}
		}

		// Return telemetry data
		return {
			success: true,
			tasksPath,
			telemetryData: aiServiceResponse?.telemetryData,
			tagInfo: aiServiceResponse?.tagInfo
		};
	} catch (error) {
		report(`Error parsing PRD: ${error.message}`, 'error');

		// Only show error UI for text output (CLI)
		if (outputFormat === 'text') {
			console.error(chalk.red(`Error: ${error.message}`));

			if (getDebugFlag(projectRoot)) {
				// Use projectRoot for debug flag check
				console.error(error);
			}
		}

		throw error; // Always re-throw for proper error handling
	}
}

export default parsePRD;
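
// Usage sketch (hypothetical paths): generate 10 tasks from a PRD into the "master" tag.
// const { telemetryData } = await parsePRD(
// 	'.taskmaster/docs/prd.txt',
// 	'.taskmaster/tasks/tasks.json',
// 	10,
// 	{ force: true, research: false, projectRoot: process.cwd(), tag: 'master' }
// );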

@@ -1,3 +0,0 @@
// Main entry point for parse-prd module
export { default } from './parse-prd.js';
export { default as parsePRD } from './parse-prd.js';
@@ -1,105 +0,0 @@
/**
 * Configuration classes and schemas for PRD parsing
 */

import { z } from 'zod';
import { TASK_PRIORITY_OPTIONS } from '../../../../src/constants/task-priority.js';
import { getCurrentTag, isSilentMode, log } from '../../utils.js';
import { Duration } from '../../../../src/utils/timeout-manager.js';
import { CUSTOM_PROVIDERS } from '../../../../src/constants/providers.js';
import { getMainProvider, getResearchProvider } from '../../config-manager.js';

// ============================================================================
// SCHEMAS
// ============================================================================

// Define the Zod schema for a SINGLE task object
export const prdSingleTaskSchema = z.object({
	id: z.number(),
	title: z.string().min(1),
	description: z.string().min(1),
	details: z.string(),
	testStrategy: z.string(),
	priority: z.enum(TASK_PRIORITY_OPTIONS),
	dependencies: z.array(z.number()),
	status: z.string()
});

// Define the Zod schema for the ENTIRE expected AI response object
export const prdResponseSchema = z.object({
	tasks: z.array(prdSingleTaskSchema),
	metadata: z.object({
		projectName: z.string(),
		totalTasks: z.number(),
		sourceFile: z.string(),
		generatedAt: z.string()
	})
});

// ============================================================================
// CONFIGURATION CLASSES
// ============================================================================

/**
 * Configuration object for PRD parsing
 */
export class PrdParseConfig {
	constructor(prdPath, tasksPath, numTasks, options = {}) {
		this.prdPath = prdPath;
		this.tasksPath = tasksPath;
		this.numTasks = numTasks;
		this.force = options.force || false;
		this.append = options.append || false;
		this.research = options.research || false;
		this.reportProgress = options.reportProgress;
		this.mcpLog = options.mcpLog;
		this.session = options.session;
		this.projectRoot = options.projectRoot;
		this.tag = options.tag;
		this.streamingTimeout =
			options.streamingTimeout || Duration.seconds(180).milliseconds;

		// Derived values
		this.targetTag = this.tag || getCurrentTag(this.projectRoot) || 'master';
		this.isMCP = !!this.mcpLog;
		this.outputFormat = this.isMCP && !this.reportProgress ? 'json' : 'text';
		this.useStreaming =
			typeof this.reportProgress === 'function' || this.outputFormat === 'text';
	}

	/**
	 * Check if Claude Code is being used
	 */
	isClaudeCode() {
		const currentProvider = this.research
			? getResearchProvider(this.projectRoot)
			: getMainProvider(this.projectRoot);
		return currentProvider === CUSTOM_PROVIDERS.CLAUDE_CODE;
	}
}

/**
 * Logging configuration and utilities
 */
export class LoggingConfig {
	constructor(mcpLog, reportProgress) {
		this.isMCP = !!mcpLog;
		this.outputFormat = this.isMCP && !reportProgress ? 'json' : 'text';

		this.logFn = mcpLog || {
			info: (...args) => log('info', ...args),
			warn: (...args) => log('warn', ...args),
			error: (...args) => log('error', ...args),
			debug: (...args) => log('debug', ...args),
			success: (...args) => log('success', ...args)
		};
	}

	report(message, level = 'info') {
		if (this.logFn && typeof this.logFn[level] === 'function') {
			this.logFn[level](message);
		} else if (!isSilentMode() && this.outputFormat === 'text') {
			log(level, message);
		}
	}
}
@@ -1,384 +0,0 @@
/**
 * Helper functions for PRD parsing
 */

import fs from 'fs';
import path from 'path';
import boxen from 'boxen';
import chalk from 'chalk';
import { ensureTagMetadata, findTaskById } from '../../utils.js';
import { getPriorityIndicators } from '../../../../src/ui/indicators.js';
import { displayParsePrdSummary } from '../../../../src/ui/parse-prd.js';
import { TimeoutManager } from '../../../../src/utils/timeout-manager.js';
import { displayAiUsageSummary } from '../../ui.js';
import { getPromptManager } from '../../prompt-manager.js';
import { getDefaultPriority } from '../../config-manager.js';

/**
 * Estimate token count from text
 * @param {string} text - Text to estimate tokens for
 * @returns {number} Estimated token count
 */
export function estimateTokens(text) {
	// Common approximation: ~4 characters per token for English
	return Math.ceil(text.length / 4);
}
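
// Quick check of the 4-chars-per-token heuristic:
// estimateTokens('Build a CLI task manager') -> Math.ceil(24 / 4) = 6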

/**
 * Read and validate PRD content
 * @param {string} prdPath - Path to PRD file
 * @returns {string} PRD content
 * @throws {Error} If file is empty or cannot be read
 */
export function readPrdContent(prdPath) {
	const prdContent = fs.readFileSync(prdPath, 'utf8');
	if (!prdContent) {
		throw new Error(`Input file ${prdPath} is empty or could not be read.`);
	}
	return prdContent;
}

/**
 * Load existing tasks from file
 * @param {string} tasksPath - Path to tasks file
 * @param {string} targetTag - Target tag to load from
 * @returns {{tasks: Array, nextId: number}} Existing tasks and next ID
 */
export function loadExistingTasks(tasksPath, targetTag) {
	let existingTasks = [];
	let nextId = 1;

	if (!fs.existsSync(tasksPath)) {
		return { existingTasks, nextId };
	}

	try {
		const existingFileContent = fs.readFileSync(tasksPath, 'utf8');
		const allData = JSON.parse(existingFileContent);

		if (allData[targetTag]?.tasks && Array.isArray(allData[targetTag].tasks)) {
			existingTasks = allData[targetTag].tasks;
			if (existingTasks.length > 0) {
				nextId = Math.max(...existingTasks.map((t) => t.id || 0)) + 1;
			}
		}
	} catch (error) {
		// If we can't read the file or parse it, assume no existing tasks
		return { existingTasks: [], nextId: 1 };
	}

	return { existingTasks, nextId };
}

/**
 * Validate overwrite/append operations
 * @param {Object} params
 * @returns {void}
 * @throws {Error} If validation fails
 */
export function validateFileOperations({
	existingTasks,
	targetTag,
	append,
	force,
	isMCP,
	logger
}) {
	const hasExistingTasks = existingTasks.length > 0;

	if (!hasExistingTasks) {
		logger.report(
			`Tag '${targetTag}' is empty or doesn't exist. Creating/updating tag with new tasks.`,
			'info'
		);
		return;
	}

	if (append) {
		logger.report(
			`Append mode enabled. Found ${existingTasks.length} existing tasks in tag '${targetTag}'.`,
			'info'
		);
		return;
	}

	if (!force) {
		const errorMessage = `Tag '${targetTag}' already contains ${existingTasks.length} tasks. Use --force to overwrite or --append to add to existing tasks.`;
		logger.report(errorMessage, 'error');

		if (isMCP) {
			throw new Error(errorMessage);
		} else {
			console.error(chalk.red(errorMessage));
			process.exit(1);
		}
	}

	logger.report(
		`Force flag enabled. Overwriting existing tasks in tag '${targetTag}'.`,
		'debug'
	);
}

/**
 * Process and transform tasks with ID remapping
 * @param {Array} rawTasks - Raw tasks from AI
 * @param {number} startId - Starting ID for new tasks
 * @param {Array} existingTasks - Existing tasks for dependency validation
 * @param {string} defaultPriority - Default priority for tasks
 * @returns {Array} Processed tasks with remapped IDs
 */
export function processTasks(
	rawTasks,
	startId,
	existingTasks,
	defaultPriority
) {
	let currentId = startId;
	const taskMap = new Map();

	// First pass: assign new IDs and create mapping
	const processedTasks = rawTasks.map((task) => {
		const newId = currentId++;
		taskMap.set(task.id, newId);

		return {
			...task,
			id: newId,
			status: task.status || 'pending',
			priority: task.priority || defaultPriority,
			dependencies: Array.isArray(task.dependencies) ? task.dependencies : [],
			subtasks: task.subtasks || [],
			// Ensure all required fields have values
			title: task.title || '',
			description: task.description || '',
			details: task.details || '',
			testStrategy: task.testStrategy || ''
		};
	});

	// Second pass: remap dependencies
	processedTasks.forEach((task) => {
		task.dependencies = task.dependencies
			.map((depId) => taskMap.get(depId))
			.filter(
				(newDepId) =>
					newDepId != null &&
					newDepId < task.id &&
					(findTaskById(existingTasks, newDepId) ||
						processedTasks.some((t) => t.id === newDepId))
			);
	});

	return processedTasks;
}
|
||||
/**
|
||||
* Save tasks to file with tag support
|
||||
* @param {string} tasksPath - Path to save tasks
|
||||
* @param {Array} tasks - Tasks to save
|
||||
* @param {string} targetTag - Target tag
|
||||
* @param {Object} logger - Logger instance
|
||||
*/
|
||||
export function saveTasksToFile(tasksPath, tasks, targetTag, logger) {
|
||||
// Create directory if it doesn't exist
|
||||
const tasksDir = path.dirname(tasksPath);
|
||||
if (!fs.existsSync(tasksDir)) {
|
||||
fs.mkdirSync(tasksDir, { recursive: true });
|
||||
}
|
||||
|
||||
// Read existing file to preserve other tags
|
||||
let outputData = {};
|
||||
if (fs.existsSync(tasksPath)) {
|
||||
try {
|
||||
const existingFileContent = fs.readFileSync(tasksPath, 'utf8');
|
||||
outputData = JSON.parse(existingFileContent);
|
||||
} catch (error) {
|
||||
outputData = {};
|
||||
}
|
||||
}
|
||||
|
||||
// Update only the target tag
|
||||
outputData[targetTag] = {
|
||||
tasks: tasks,
|
||||
metadata: {
|
||||
created:
|
||||
outputData[targetTag]?.metadata?.created || new Date().toISOString(),
|
||||
updated: new Date().toISOString(),
|
||||
description: `Tasks for ${targetTag} context`
|
||||
}
|
||||
};
|
||||
|
||||
// Ensure proper metadata
|
||||
ensureTagMetadata(outputData[targetTag], {
|
||||
description: `Tasks for ${targetTag} context`
|
||||
});
|
||||
|
||||
// Write back to file
|
||||
fs.writeFileSync(tasksPath, JSON.stringify(outputData, null, 2));
|
||||
|
||||
logger.report(
|
||||
`Successfully saved ${tasks.length} tasks to ${tasksPath}`,
|
||||
'debug'
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Build prompts for AI service
|
||||
* @param {Object} config - Configuration object
|
||||
* @param {string} prdContent - PRD content
|
||||
* @param {number} nextId - Next task ID
|
||||
* @returns {Promise<{systemPrompt: string, userPrompt: string}>}
|
||||
*/
|
||||
export async function buildPrompts(config, prdContent, nextId) {
|
||||
const promptManager = getPromptManager();
|
||||
const defaultTaskPriority =
|
||||
getDefaultPriority(config.projectRoot) || 'medium';
|
||||
|
||||
return promptManager.loadPrompt('parse-prd', {
|
||||
research: config.research,
|
||||
numTasks: config.numTasks,
|
||||
nextId,
|
||||
prdContent,
|
||||
prdPath: config.prdPath,
|
||||
defaultTaskPriority,
|
||||
isClaudeCode: config.isClaudeCode(),
|
||||
projectRoot: config.projectRoot || ''
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle progress reporting for both CLI and MCP
|
||||
* @param {Object} params
|
||||
*/
|
||||
export async function reportTaskProgress({
|
||||
task,
|
||||
currentCount,
|
||||
totalTasks,
|
||||
estimatedTokens,
|
||||
progressTracker,
|
||||
reportProgress,
|
||||
priorityMap,
|
||||
defaultPriority,
|
||||
estimatedInputTokens
|
||||
}) {
|
||||
const priority = task.priority || defaultPriority;
|
||||
const priorityIndicator = priorityMap[priority] || priorityMap.medium;
|
||||
|
||||
// CLI progress tracker
|
||||
if (progressTracker) {
|
||||
progressTracker.addTaskLine(currentCount, task.title, priority);
|
||||
if (estimatedTokens) {
|
||||
progressTracker.updateTokens(estimatedInputTokens, estimatedTokens);
|
||||
}
|
||||
}
|
||||
|
||||
// MCP progress reporting
|
||||
if (reportProgress) {
|
||||
try {
|
||||
const outputTokens = estimatedTokens
|
||||
? Math.floor(estimatedTokens / totalTasks)
|
||||
: 0;
|
||||
|
||||
await reportProgress({
|
||||
progress: currentCount,
|
||||
total: totalTasks,
|
||||
message: `${priorityIndicator} Task ${currentCount}/${totalTasks} - ${task.title} | ~Output: ${outputTokens} tokens`
|
||||
});
|
||||
} catch (error) {
|
||||
// Ignore progress reporting errors
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display completion summary for CLI
|
||||
* @param {Object} params
|
||||
*/
|
||||
export async function displayCliSummary({
|
||||
processedTasks,
|
||||
nextId,
|
||||
summary,
|
||||
prdPath,
|
||||
tasksPath,
|
||||
usedFallback,
|
||||
aiServiceResponse
|
||||
}) {
|
||||
// Generate task file names
|
||||
const taskFilesGenerated = (() => {
|
||||
if (!Array.isArray(processedTasks) || processedTasks.length === 0) {
|
||||
return `task_${String(nextId).padStart(3, '0')}.txt`;
|
||||
}
|
||||
const firstNewTaskId = processedTasks[0].id;
|
||||
const lastNewTaskId = processedTasks[processedTasks.length - 1].id;
|
||||
if (processedTasks.length === 1) {
|
||||
return `task_${String(firstNewTaskId).padStart(3, '0')}.txt`;
|
||||
}
|
||||
return `task_${String(firstNewTaskId).padStart(3, '0')}.txt -> task_${String(lastNewTaskId).padStart(3, '0')}.txt`;
|
||||
})();
|
||||
|
||||
displayParsePrdSummary({
|
||||
totalTasks: processedTasks.length,
|
||||
taskPriorities: summary.taskPriorities,
|
||||
prdFilePath: prdPath,
|
||||
outputPath: tasksPath,
|
||||
elapsedTime: summary.elapsedTime,
|
||||
usedFallback,
|
||||
taskFilesGenerated,
|
||||
actionVerb: summary.actionVerb
|
||||
});
|
||||
|
||||
// Display telemetry
|
||||
if (aiServiceResponse?.telemetryData) {
|
||||
// For streaming, wait briefly to allow usage data to be captured
|
||||
if (aiServiceResponse.mainResult?.usage) {
|
||||
// Give the usage promise a short time to resolve
|
||||
await TimeoutManager.withSoftTimeout(
|
||||
aiServiceResponse.mainResult.usage,
|
||||
1000,
|
||||
undefined
|
||||
);
|
||||
}
|
||||
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display non-streaming CLI output
|
||||
* @param {Object} params
|
||||
*/
|
||||
export function displayNonStreamingCliOutput({
|
||||
processedTasks,
|
||||
research,
|
||||
finalTasks,
|
||||
tasksPath,
|
||||
aiServiceResponse
|
||||
}) {
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.green(
|
||||
`Successfully generated ${processedTasks.length} new tasks${research ? ' with research-backed analysis' : ''}. Total tasks in ${tasksPath}: ${finalTasks.length}`
|
||||
),
|
||||
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
|
||||
)
|
||||
);
|
||||
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Next Steps:') +
|
||||
'\n\n' +
|
||||
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master list')} to view all tasks\n` +
|
||||
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks`,
|
||||
{
|
||||
padding: 1,
|
||||
borderColor: 'cyan',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
}
|
||||
)
|
||||
);
|
||||
|
||||
if (aiServiceResponse?.telemetryData) {
|
||||
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
|
||||
}
|
||||
}
|
||||
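Taken together, these helpers form the load → remap → save pipeline for a tag. A rough sketch of how they compose; the path and `rawTasks` array are illustrative, not values from this diff:

```js
const logger = new LoggingConfig(null, undefined);
const tasksPath = '.taskmaster/tasks/tasks.json'; // illustrative path

// nextId continues from the highest existing ID in the 'master' tag
const { existingTasks, nextId } = loadExistingTasks(tasksPath, 'master');

// rawTasks would come from the AI service; IDs are remapped from nextId up
// and dependencies are rewritten through the same ID map
const processed = processTasks(rawTasks, nextId, existingTasks, 'medium');

saveTasksToFile(tasksPath, [...existingTasks, ...processed], 'master', logger);
```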
@@ -1,85 +0,0 @@
/**
 * Non-streaming handler for PRD parsing
 */

import ora from 'ora';
import { generateObjectService } from '../../ai-services-unified.js';
import { LoggingConfig, prdResponseSchema } from './parse-prd-config.js';
import { estimateTokens } from './parse-prd-helpers.js';

/**
 * Handle non-streaming AI service call
 * @param {Object} config - Configuration object
 * @param {Object} prompts - System and user prompts
 * @returns {Promise<Object>} Generated tasks and telemetry
 */
export async function handleNonStreamingService(config, prompts) {
	const logger = new LoggingConfig(config.mcpLog, config.reportProgress);
	const { systemPrompt, userPrompt } = prompts;
	const estimatedInputTokens = estimateTokens(systemPrompt + userPrompt);

	// Initialize spinner for CLI
	let spinner = null;
	if (config.outputFormat === 'text' && !config.isMCP) {
		spinner = ora('Parsing PRD and generating tasks...\n').start();
	}

	try {
		// Call AI service
		logger.report(
			`Calling AI service to generate tasks from PRD${config.research ? ' with research-backed analysis' : ''}...`,
			'info'
		);

		const aiServiceResponse = await generateObjectService({
			role: config.research ? 'research' : 'main',
			session: config.session,
			projectRoot: config.projectRoot,
			schema: prdResponseSchema,
			objectName: 'tasks_data',
			systemPrompt,
			prompt: userPrompt,
			commandName: 'parse-prd',
			outputType: config.isMCP ? 'mcp' : 'cli'
		});

		// Extract generated data
		let generatedData = null;
		if (aiServiceResponse?.mainResult) {
			if (
				typeof aiServiceResponse.mainResult === 'object' &&
				aiServiceResponse.mainResult !== null &&
				'tasks' in aiServiceResponse.mainResult
			) {
				generatedData = aiServiceResponse.mainResult;
			} else if (
				typeof aiServiceResponse.mainResult.object === 'object' &&
				aiServiceResponse.mainResult.object !== null &&
				'tasks' in aiServiceResponse.mainResult.object
			) {
				generatedData = aiServiceResponse.mainResult.object;
			}
		}

		if (!generatedData || !Array.isArray(generatedData.tasks)) {
			throw new Error(
				'AI service returned unexpected data structure after validation.'
			);
		}

		if (spinner) {
			spinner.succeed('Tasks generated successfully!');
		}

		return {
			parsedTasks: generatedData.tasks,
			aiServiceResponse,
			estimatedInputTokens
		};
	} catch (error) {
		if (spinner) {
			spinner.fail(`Error parsing PRD: ${error.message}`);
		}
		throw error;
	}
}
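As a quick sanity check on the 4-characters-per-token heuristic the handler uses for its input estimate (the prompt strings here are placeholders):

```js
const prompts = { systemPrompt: 'a'.repeat(400), userPrompt: 'b'.repeat(100) };
// estimateTokens(systemPrompt + userPrompt): 500 chars / 4 = 125 tokens
console.log(estimateTokens(prompts.systemPrompt + prompts.userPrompt)); // 125
```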
@@ -1,653 +0,0 @@
/**
 * Streaming handler for PRD parsing
 */

import { createParsePrdTracker } from '../../../../src/progress/parse-prd-tracker.js';
import { displayParsePrdStart } from '../../../../src/ui/parse-prd.js';
import { getPriorityIndicators } from '../../../../src/ui/indicators.js';
import { TimeoutManager } from '../../../../src/utils/timeout-manager.js';
import {
	streamObjectService,
	generateObjectService
} from '../../ai-services-unified.js';
import {
	getMainModelId,
	getParametersForRole,
	getResearchModelId,
	getDefaultPriority
} from '../../config-manager.js';
// Needed for the StreamingError/STREAMING_ERROR_CODES references below
// (same import path parse-prd.js uses for these symbols)
import {
	StreamingError,
	STREAMING_ERROR_CODES
} from '../../../../src/utils/stream-parser.js';
import { LoggingConfig, prdResponseSchema } from './parse-prd-config.js';
import { estimateTokens, reportTaskProgress } from './parse-prd-helpers.js';

/**
 * Extract a readable stream from various stream result formats
 * @param {any} streamResult - The stream result object from AI service
 * @returns {AsyncIterable|ReadableStream} The extracted stream
 * @throws {StreamingError} If no valid stream can be extracted
 */
function extractStreamFromResult(streamResult) {
	if (!streamResult) {
		throw new StreamingError(
			'Stream result is null or undefined',
			STREAMING_ERROR_CODES.NOT_ASYNC_ITERABLE
		);
	}

	// Try extraction strategies in priority order
	const stream = tryExtractStream(streamResult);

	if (!stream) {
		throw new StreamingError(
			'Stream object is not async iterable or readable',
			STREAMING_ERROR_CODES.NOT_ASYNC_ITERABLE
		);
	}

	return stream;
}

/**
 * Try to extract stream using various strategies
 */
function tryExtractStream(streamResult) {
	const streamExtractors = [
		{ key: 'partialObjectStream', extractor: (obj) => obj.partialObjectStream },
		{ key: 'textStream', extractor: (obj) => extractCallable(obj.textStream) },
		{ key: 'stream', extractor: (obj) => extractCallable(obj.stream) },
		{ key: 'baseStream', extractor: (obj) => obj.baseStream }
	];

	for (const { key, extractor } of streamExtractors) {
		const stream = extractor(streamResult);
		if (stream && isStreamable(stream)) {
			return stream;
		}
	}

	// Check if already streamable
	return isStreamable(streamResult) ? streamResult : null;
}

/**
 * Extract a property that might be a function or direct value
 */
function extractCallable(property) {
	if (!property) return null;
	return typeof property === 'function' ? property() : property;
}

/**
 * Check if object is streamable (async iterable or readable stream)
 */
function isStreamable(obj) {
	return (
		obj &&
		(typeof obj[Symbol.asyncIterator] === 'function' ||
			(obj.getReader && typeof obj.getReader === 'function'))
	);
}

/**
 * Handle streaming AI service call and parsing
 * @param {Object} config - Configuration object
 * @param {Object} prompts - System and user prompts
 * @param {number} numTasks - Number of tasks to generate
 * @returns {Promise<Object>} Parsed tasks and telemetry
 */
export async function handleStreamingService(config, prompts, numTasks) {
	const context = createStreamingContext(config, prompts, numTasks);

	await initializeProgress(config, numTasks, context.estimatedInputTokens);

	const aiServiceResponse = await callAIServiceWithTimeout(
		config,
		prompts,
		config.streamingTimeout
	);

	const { progressTracker, priorityMap } = await setupProgressTracking(
		config,
		numTasks
	);

	const streamingResult = await processStreamResponse(
		aiServiceResponse.mainResult,
		config,
		prompts,
		numTasks,
		progressTracker,
		priorityMap,
		context.defaultPriority,
		context.estimatedInputTokens,
		context.logger
	);

	validateStreamingResult(streamingResult);

	// If we have usage data from streaming, log telemetry now
	if (streamingResult.usage && config.projectRoot) {
		const { logAiUsage } = await import('../../ai-services-unified.js');
		const { getUserId } = await import('../../config-manager.js');
		const userId = getUserId(config.projectRoot);

		if (userId && aiServiceResponse.providerName && aiServiceResponse.modelId) {
			try {
				const telemetryData = await logAiUsage({
					userId,
					commandName: 'parse-prd',
					providerName: aiServiceResponse.providerName,
					modelId: aiServiceResponse.modelId,
					inputTokens: streamingResult.usage.promptTokens || 0,
					outputTokens: streamingResult.usage.completionTokens || 0,
					outputType: config.isMCP ? 'mcp' : 'cli'
				});

				// Add telemetry to the response
				if (telemetryData) {
					aiServiceResponse.telemetryData = telemetryData;
				}
			} catch (telemetryError) {
				context.logger.report(
					`Failed to log telemetry: ${telemetryError.message}`,
					'debug'
				);
			}
		}
	}

	return prepareFinalResult(
		streamingResult,
		aiServiceResponse,
		context.estimatedInputTokens,
		progressTracker
	);
}

/**
 * Create streaming context with common values
 */
function createStreamingContext(config, prompts, numTasks) {
	const { systemPrompt, userPrompt } = prompts;
	return {
		logger: new LoggingConfig(config.mcpLog, config.reportProgress),
		estimatedInputTokens: estimateTokens(systemPrompt + userPrompt),
		defaultPriority: getDefaultPriority(config.projectRoot) || 'medium'
	};
}

/**
 * Validate streaming result has tasks
 */
function validateStreamingResult(streamingResult) {
	if (streamingResult.parsedTasks.length === 0) {
		throw new Error('No tasks were generated from the PRD');
	}
}

/**
 * Initialize progress reporting
 */
async function initializeProgress(config, numTasks, estimatedInputTokens) {
	if (config.reportProgress) {
		await config.reportProgress({
			progress: 0,
			total: numTasks,
			message: `Starting PRD analysis (Input: ${estimatedInputTokens} tokens)${config.research ? ' with research' : ''}...`
		});
	}
}

/**
 * Call AI service with timeout
 */
async function callAIServiceWithTimeout(config, prompts, timeout) {
	const { systemPrompt, userPrompt } = prompts;

	return await TimeoutManager.withTimeout(
		streamObjectService({
			role: config.research ? 'research' : 'main',
			session: config.session,
			projectRoot: config.projectRoot,
			schema: prdResponseSchema,
			systemPrompt,
			prompt: userPrompt,
			commandName: 'parse-prd',
			outputType: config.isMCP ? 'mcp' : 'cli'
		}),
		timeout,
		'Streaming operation'
	);
}

/**
 * Setup progress tracking for CLI output
 */
async function setupProgressTracking(config, numTasks) {
	const priorityMap = getPriorityIndicators(config.isMCP);
	let progressTracker = null;

	if (config.outputFormat === 'text' && !config.isMCP) {
		progressTracker = createParsePrdTracker({
			numUnits: numTasks,
			unitName: 'task',
			append: config.append
		});

		const modelId = config.research ? getResearchModelId() : getMainModelId();
		const parameters = getParametersForRole(
			config.research ? 'research' : 'main'
		);

		displayParsePrdStart({
			prdFilePath: config.prdPath,
			outputPath: config.tasksPath,
			numTasks,
			append: config.append,
			research: config.research,
			force: config.force,
			existingTasks: [],
			nextId: 1,
			model: modelId || 'Default',
			temperature: parameters?.temperature || 0.7
		});

		progressTracker.start();
	}

	return { progressTracker, priorityMap };
}

/**
 * Process stream response based on stream type
 */
async function processStreamResponse(
	streamResult,
	config,
	prompts,
	numTasks,
	progressTracker,
	priorityMap,
	defaultPriority,
	estimatedInputTokens,
	logger
) {
	const { systemPrompt, userPrompt } = prompts;
	const context = {
		config: {
			...config,
			schema: prdResponseSchema // Add the schema for generateObject fallback
		},
		numTasks,
		progressTracker,
		priorityMap,
		defaultPriority,
		estimatedInputTokens,
		prompt: userPrompt,
		systemPrompt: systemPrompt
	};

	try {
		const streamingState = {
			lastPartialObject: null,
			taskCount: 0,
			estimatedOutputTokens: 0,
			usage: null
		};

		await processPartialStream(
			streamResult.partialObjectStream,
			streamingState,
			context
		);

		// Wait for usage data if available
		if (streamResult.usage) {
			try {
				streamingState.usage = await streamResult.usage;
			} catch (usageError) {
				logger.report(
					`Failed to get usage data: ${usageError.message}`,
					'debug'
				);
			}
		}

		return finalizeStreamingResults(streamingState, context);
	} catch (error) {
		logger.report(
			`StreamObject processing failed: ${error.message}. Falling back to generateObject.`,
			'debug'
		);
		return await processWithGenerateObject(context, logger);
	}
}

/**
 * Process the partial object stream
 */
async function processPartialStream(partialStream, state, context) {
	for await (const partialObject of partialStream) {
		state.lastPartialObject = partialObject;

		if (partialObject) {
			state.estimatedOutputTokens = estimateTokens(
				JSON.stringify(partialObject)
			);
		}

		await processStreamingTasks(partialObject, state, context);
	}
}

/**
 * Process tasks from a streaming partial object
 */
async function processStreamingTasks(partialObject, state, context) {
	if (!partialObject?.tasks || !Array.isArray(partialObject.tasks)) {
		return;
	}

	const newTaskCount = partialObject.tasks.length;

	if (newTaskCount > state.taskCount) {
		await processNewTasks(
			partialObject.tasks,
			state.taskCount,
			newTaskCount,
			state.estimatedOutputTokens,
			context
		);
		state.taskCount = newTaskCount;
	} else if (context.progressTracker && state.estimatedOutputTokens > 0) {
		context.progressTracker.updateTokens(
			context.estimatedInputTokens,
			state.estimatedOutputTokens,
			true
		);
	}
}

/**
 * Process newly appeared tasks in the stream
 */
async function processNewTasks(
	tasks,
	startIndex,
	endIndex,
	estimatedOutputTokens,
	context
) {
	for (let i = startIndex; i < endIndex; i++) {
		const task = tasks[i] || {};

		if (task.title) {
			await reportTaskProgress({
				task,
				currentCount: i + 1,
				totalTasks: context.numTasks,
				estimatedTokens: estimatedOutputTokens,
				progressTracker: context.progressTracker,
				reportProgress: context.config.reportProgress,
				priorityMap: context.priorityMap,
				defaultPriority: context.defaultPriority,
				estimatedInputTokens: context.estimatedInputTokens
			});
		} else {
			await reportPlaceholderTask(i + 1, estimatedOutputTokens, context);
		}
	}
}

/**
 * Report a placeholder task while it's being generated
 */
async function reportPlaceholderTask(
	taskNumber,
	estimatedOutputTokens,
	context
) {
	const {
		progressTracker,
		config,
		numTasks,
		defaultPriority,
		estimatedInputTokens
	} = context;

	if (progressTracker) {
		progressTracker.addTaskLine(
			taskNumber,
			`Generating task ${taskNumber}...`,
			defaultPriority
		);
		progressTracker.updateTokens(
			estimatedInputTokens,
			estimatedOutputTokens,
			true
		);
	}

	if (config.reportProgress && !progressTracker) {
		await config.reportProgress({
			progress: taskNumber,
			total: numTasks,
			message: `Generating task ${taskNumber}/${numTasks}...`
		});
	}
}

/**
 * Finalize streaming results and update progress display
 */
async function finalizeStreamingResults(state, context) {
	const { lastPartialObject, estimatedOutputTokens, taskCount, usage } = state;

	if (!lastPartialObject?.tasks || !Array.isArray(lastPartialObject.tasks)) {
		throw new Error('No tasks generated from streamObject');
	}

	// Use actual token counts if available, otherwise use estimates
	const finalOutputTokens = usage?.completionTokens || estimatedOutputTokens;
	const finalInputTokens = usage?.promptTokens || context.estimatedInputTokens;

	if (context.progressTracker) {
		await updateFinalProgress(
			lastPartialObject.tasks,
			taskCount,
			usage ? finalOutputTokens : estimatedOutputTokens,
			context,
			usage ? finalInputTokens : null
		);
	}

	return {
		parsedTasks: lastPartialObject.tasks,
		estimatedOutputTokens: finalOutputTokens,
		actualInputTokens: finalInputTokens,
		usage,
		usedFallback: false
	};
}

/**
 * Update progress tracker with final task content
 */
async function updateFinalProgress(
	tasks,
	taskCount,
	outputTokens,
	context,
	actualInputTokens = null
) {
	const { progressTracker, defaultPriority, estimatedInputTokens } = context;

	if (taskCount > 0) {
		updateTaskLines(tasks, progressTracker, defaultPriority);
	} else {
		await reportAllTasks(tasks, outputTokens, context);
	}

	progressTracker.updateTokens(
		actualInputTokens || estimatedInputTokens,
		outputTokens,
		false
	);
	progressTracker.stop();
}

/**
 * Update task lines in progress tracker with final content
 */
function updateTaskLines(tasks, progressTracker, defaultPriority) {
	for (let i = 0; i < tasks.length; i++) {
		const task = tasks[i];
		if (task?.title) {
			progressTracker.addTaskLine(
				i + 1,
				task.title,
				task.priority || defaultPriority
			);
		}
	}
}

/**
 * Report all tasks that were not streamed incrementally
 */
async function reportAllTasks(tasks, estimatedOutputTokens, context) {
	for (let i = 0; i < tasks.length; i++) {
		const task = tasks[i];
		if (task?.title) {
			await reportTaskProgress({
				task,
				currentCount: i + 1,
				totalTasks: context.numTasks,
				estimatedTokens: estimatedOutputTokens,
				progressTracker: context.progressTracker,
				reportProgress: context.config.reportProgress,
				priorityMap: context.priorityMap,
				defaultPriority: context.defaultPriority,
				estimatedInputTokens: context.estimatedInputTokens
			});
		}
	}
}

/**
 * Process with generateObject as fallback when streaming fails
 */
async function processWithGenerateObject(context, logger) {
	logger.report('Using generateObject fallback for PRD parsing', 'info');

	// Show placeholder tasks while generating
	if (context.progressTracker) {
		for (let i = 0; i < context.numTasks; i++) {
			context.progressTracker.addTaskLine(
				i + 1,
				`Generating task ${i + 1}...`,
				context.defaultPriority
			);
			context.progressTracker.updateTokens(
				context.estimatedInputTokens,
				0,
				true
			);
		}
	}

	// Use generateObjectService instead of streaming
	const result = await generateObjectService({
		role: context.config.research ? 'research' : 'main',
		commandName: 'parse-prd',
		prompt: context.prompt,
		systemPrompt: context.systemPrompt,
		schema: context.config.schema,
		outputFormat: context.config.outputFormat || 'text',
		projectRoot: context.config.projectRoot,
		session: context.config.session
	});

	// Extract tasks from the result (handle both direct tasks and mainResult.tasks)
	const tasks = result?.mainResult || result;

	// Process the generated tasks
	if (tasks && Array.isArray(tasks.tasks)) {
		// Update progress tracker with final tasks
		if (context.progressTracker) {
			for (let i = 0; i < tasks.tasks.length; i++) {
				const task = tasks.tasks[i];
				if (task && task.title) {
					context.progressTracker.addTaskLine(
						i + 1,
						task.title,
						task.priority || context.defaultPriority
					);
				}
			}

			// Final token update - use actual telemetry if available
			const outputTokens =
				result.telemetryData?.outputTokens ||
				estimateTokens(JSON.stringify(tasks));
			const inputTokens =
				result.telemetryData?.inputTokens || context.estimatedInputTokens;

			context.progressTracker.updateTokens(inputTokens, outputTokens, false);
		}

		return {
			parsedTasks: tasks.tasks,
			estimatedOutputTokens:
				result.telemetryData?.outputTokens ||
				estimateTokens(JSON.stringify(tasks)),
			actualInputTokens: result.telemetryData?.inputTokens,
			telemetryData: result.telemetryData,
			usedFallback: true
		};
	}

	throw new Error('Failed to generate tasks using generateObject fallback');
}

/**
 * Prepare final result with cleanup
 */
function prepareFinalResult(
	streamingResult,
	aiServiceResponse,
	estimatedInputTokens,
	progressTracker
) {
	let summary = null;
	if (progressTracker) {
		summary = progressTracker.getSummary();
		progressTracker.cleanup();
	}

	// If we have actual usage data from streaming, update the AI service response
	if (streamingResult.usage && aiServiceResponse) {
		// Map the Vercel AI SDK usage format to our telemetry format
		const usage = streamingResult.usage;
		if (!aiServiceResponse.usage) {
			aiServiceResponse.usage = {
				promptTokens: usage.promptTokens || 0,
				completionTokens: usage.completionTokens || 0,
				totalTokens: usage.totalTokens || 0
			};
		}

		// The telemetry should have been logged in the unified service runner
		// but if not, the usage is now available for telemetry calculation
	}

	return {
		parsedTasks: streamingResult.parsedTasks,
		aiServiceResponse,
		estimatedInputTokens:
			streamingResult.actualInputTokens || estimatedInputTokens,
		estimatedOutputTokens: streamingResult.estimatedOutputTokens,
		usedFallback: streamingResult.usedFallback,
		progressTracker,
		summary
	};
}
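The extraction helpers above are plain duck-typing checks, so their behavior is easy to verify in isolation (they are module-private, hence this sketch uses equivalent standalone values rather than imports):

```js
// An async generator satisfies Symbol.asyncIterator, so isStreamable(...) would
// accept it; this mirrors what a partialObjectStream looks like shape-wise.
async function* fakePartialStream() {
	yield { tasks: [{ title: 'First task' }] };
	yield { tasks: [{ title: 'First task' }, { title: 'Second task' }] };
}
const candidate = fakePartialStream();
console.log(typeof candidate[Symbol.asyncIterator] === 'function'); // true

// extractCallable unwraps properties that may be either functions or values:
// extractCallable(() => 'stream') and extractCallable('stream') both yield 'stream'.
```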
@@ -1,272 +0,0 @@
import chalk from 'chalk';
import {
	StreamingError,
	STREAMING_ERROR_CODES
} from '../../../../src/utils/stream-parser.js';
import { TimeoutManager } from '../../../../src/utils/timeout-manager.js';
import { getDebugFlag, getDefaultPriority } from '../../config-manager.js';

// Import configuration classes
import { PrdParseConfig, LoggingConfig } from './parse-prd-config.js';

// Import helper functions
import {
	readPrdContent,
	loadExistingTasks,
	validateFileOperations,
	processTasks,
	saveTasksToFile,
	buildPrompts,
	displayCliSummary,
	displayNonStreamingCliOutput
} from './parse-prd-helpers.js';

// Import handlers
import { handleStreamingService } from './parse-prd-streaming.js';
import { handleNonStreamingService } from './parse-prd-non-streaming.js';

// ============================================================================
// MAIN PARSING FUNCTIONS (Simplified after refactoring)
// ============================================================================

/**
 * Shared parsing logic for both streaming and non-streaming
 * @param {PrdParseConfig} config - Configuration object
 * @param {Function} serviceHandler - Handler function for AI service
 * @param {boolean} isStreaming - Whether this is streaming mode
 * @returns {Promise<Object>} Result object with success status and telemetry
 */
async function parsePRDCore(config, serviceHandler, isStreaming) {
	const logger = new LoggingConfig(config.mcpLog, config.reportProgress);

	logger.report(
		`Parsing PRD file: ${config.prdPath}, Force: ${config.force}, Append: ${config.append}, Research: ${config.research}`,
		'debug'
	);

	try {
		// Load existing tasks
		const { existingTasks, nextId } = loadExistingTasks(
			config.tasksPath,
			config.targetTag
		);

		// Validate operations
		validateFileOperations({
			existingTasks,
			targetTag: config.targetTag,
			append: config.append,
			force: config.force,
			isMCP: config.isMCP,
			logger
		});

		// Read PRD content and build prompts
		const prdContent = readPrdContent(config.prdPath);
		const prompts = await buildPrompts(config, prdContent, nextId);

		// Call the appropriate service handler
		const serviceResult = await serviceHandler(
			config,
			prompts,
			config.numTasks
		);

		// Process tasks
		const defaultPriority = getDefaultPriority(config.projectRoot) || 'medium';
		const processedNewTasks = processTasks(
			serviceResult.parsedTasks,
			nextId,
			existingTasks,
			defaultPriority
		);

		// Combine with existing if appending
		const finalTasks = config.append
			? [...existingTasks, ...processedNewTasks]
			: processedNewTasks;

		// Save to file
		saveTasksToFile(config.tasksPath, finalTasks, config.targetTag, logger);

		// Handle completion reporting
		await handleCompletionReporting(
			config,
			serviceResult,
			processedNewTasks,
			finalTasks,
			nextId,
			isStreaming
		);

		return {
			success: true,
			tasksPath: config.tasksPath,
			telemetryData: serviceResult.aiServiceResponse?.telemetryData,
			tagInfo: serviceResult.aiServiceResponse?.tagInfo
		};
	} catch (error) {
		logger.report(`Error parsing PRD: ${error.message}`, 'error');

		if (!config.isMCP) {
			console.error(chalk.red(`Error: ${error.message}`));
			if (getDebugFlag(config.projectRoot)) {
				console.error(error);
			}
		}
		throw error;
	}
}

/**
 * Handle completion reporting for both CLI and MCP
 * @param {PrdParseConfig} config - Configuration object
 * @param {Object} serviceResult - Result from service handler
 * @param {Array} processedNewTasks - New tasks that were processed
 * @param {Array} finalTasks - All tasks after processing
 * @param {number} nextId - Next available task ID
 * @param {boolean} isStreaming - Whether this was streaming mode
 */
async function handleCompletionReporting(
	config,
	serviceResult,
	processedNewTasks,
	finalTasks,
	nextId,
	isStreaming
) {
	const { aiServiceResponse, estimatedInputTokens, estimatedOutputTokens } =
		serviceResult;

	// MCP progress reporting
	if (config.reportProgress) {
		const hasValidTelemetry =
			aiServiceResponse?.telemetryData &&
			(aiServiceResponse.telemetryData.inputTokens > 0 ||
				aiServiceResponse.telemetryData.outputTokens > 0);

		let completionMessage;
		if (hasValidTelemetry) {
			const cost = aiServiceResponse.telemetryData.totalCost || 0;
			const currency = aiServiceResponse.telemetryData.currency || 'USD';
			completionMessage = `✅ Task Generation Completed | Tokens (I/O): ${aiServiceResponse.telemetryData.inputTokens}/${aiServiceResponse.telemetryData.outputTokens} | Cost: ${currency === 'USD' ? '$' : currency}${cost.toFixed(4)}`;
		} else {
			const outputTokens = isStreaming ? estimatedOutputTokens : 'unknown';
			completionMessage = `✅ Task Generation Completed | ~Tokens (I/O): ${estimatedInputTokens}/${outputTokens} | Cost: ~$0.00`;
		}

		await config.reportProgress({
			progress: config.numTasks,
			total: config.numTasks,
			message: completionMessage
		});
	}

	// CLI output
	if (config.outputFormat === 'text' && !config.isMCP) {
		if (isStreaming && serviceResult.summary) {
			await displayCliSummary({
				processedTasks: processedNewTasks,
				nextId,
				summary: serviceResult.summary,
				prdPath: config.prdPath,
				tasksPath: config.tasksPath,
				usedFallback: serviceResult.usedFallback,
				aiServiceResponse
			});
		} else if (!isStreaming) {
			displayNonStreamingCliOutput({
				processedTasks: processedNewTasks,
				research: config.research,
				finalTasks,
				tasksPath: config.tasksPath,
				aiServiceResponse
			});
		}
	}
}

/**
 * Parse PRD with streaming progress reporting
 */
async function parsePRDWithStreaming(
	prdPath,
	tasksPath,
	numTasks,
	options = {}
) {
	const config = new PrdParseConfig(prdPath, tasksPath, numTasks, options);
	return parsePRDCore(config, handleStreamingService, true);
}

/**
 * Parse PRD without streaming (fallback)
 */
async function parsePRDWithoutStreaming(
	prdPath,
	tasksPath,
	numTasks,
	options = {}
) {
	const config = new PrdParseConfig(prdPath, tasksPath, numTasks, options);
	return parsePRDCore(config, handleNonStreamingService, false);
}

/**
 * Main entry point - decides between streaming and non-streaming
 */
async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
	const config = new PrdParseConfig(prdPath, tasksPath, numTasks, options);

	if (config.useStreaming) {
		try {
			return await parsePRDWithStreaming(prdPath, tasksPath, numTasks, options);
		} catch (streamingError) {
			// Check if this is a streaming-specific error (including timeout)
			const isStreamingError =
				streamingError instanceof StreamingError ||
				streamingError.code === STREAMING_ERROR_CODES.NOT_ASYNC_ITERABLE ||
				streamingError.code ===
					STREAMING_ERROR_CODES.STREAM_PROCESSING_FAILED ||
				streamingError.code === STREAMING_ERROR_CODES.STREAM_NOT_ITERABLE ||
				TimeoutManager.isTimeoutError(streamingError);

			if (isStreamingError) {
				const logger = new LoggingConfig(config.mcpLog, config.reportProgress);

				// Show fallback message
				if (config.outputFormat === 'text' && !config.isMCP) {
					console.log(
						chalk.yellow(
							`⚠️ Streaming operation ${streamingError.message.includes('timed out') ? 'timed out' : 'failed'}. Falling back to non-streaming mode...`
						)
					);
				} else {
					logger.report(
						`Streaming failed (${streamingError.message}), falling back to non-streaming mode...`,
						'warn'
					);
				}

				// Fallback to non-streaming
				return await parsePRDWithoutStreaming(
					prdPath,
					tasksPath,
					numTasks,
					options
				);
			} else {
				throw streamingError;
			}
		}
	} else {
		return await parsePRDWithoutStreaming(
			prdPath,
			tasksPath,
			numTasks,
			options
		);
	}
}

export default parsePRD;
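A hedged usage sketch of the default export. The paths are illustrative, and the option names mirror the config fields read elsewhere in this diff (`force`, `append`, `research`); treat the exact shape as an assumption:

```js
import parsePRD from './parse-prd.js';

// Streaming is attempted first whenever config.useStreaming is true; on a
// StreamingError or timeout, parsePRD transparently retries without streaming.
const result = await parsePRD(
	'scripts/prd.txt',                // illustrative PRD path
	'.taskmaster/tasks/tasks.json',   // illustrative tasks path
	10,                               // number of tasks to generate
	{ force: false, append: true, research: false }
);
console.log(result.success, result.tasksPath);
```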
@@ -7,15 +7,7 @@
function taskExists(tasks, taskId) {
	// Handle subtask IDs (e.g., "1.2")
	if (typeof taskId === 'string' && taskId.includes('.')) {
		const parts = taskId.split('.');
		// Validate that it's a proper subtask format (parentId.subtaskId)
		if (parts.length !== 2 || !parts[0] || !parts[1]) {
			// Invalid format - treat as regular task ID
			const id = parseInt(taskId, 10);
			return tasks.some((t) => t.id === id);
		}

		const [parentIdStr, subtaskIdStr] = parts;
		const [parentIdStr, subtaskIdStr] = taskId.split('.');
		const parentId = parseInt(parentIdStr, 10);
		const subtaskId = parseInt(subtaskIdStr, 10);

@@ -15,8 +15,7 @@ import {
	findTaskById,
	readJSON,
	truncate,
	isSilentMode,
	formatTaskId
	isSilentMode
} from './utils.js';
import fs from 'fs';
import {
@@ -406,44 +405,9 @@ function formatDependenciesWithStatus(

		// Check if it's already a fully qualified subtask ID (like "22.1")
		if (depIdStr.includes('.')) {
			const parts = depIdStr.split('.');
			// Validate that it's a proper subtask format (parentId.subtaskId)
			if (parts.length !== 2 || !parts[0] || !parts[1]) {
				// Invalid format - treat as regular dependency
				const numericDepId =
					typeof depId === 'string' ? parseInt(depId, 10) : depId;
				const depTaskResult = findTaskById(
					allTasks,
					numericDepId,
					complexityReport
				);
				const depTask = depTaskResult.task;

				if (!depTask) {
					return forConsole
						? chalk.red(`${depIdStr} (Not found)`)
						: `${depIdStr} (Not found)`;
				}

				const status = depTask.status || 'pending';
				const isDone =
					status.toLowerCase() === 'done' ||
					status.toLowerCase() === 'completed';
				const isInProgress = status.toLowerCase() === 'in-progress';

				if (forConsole) {
					if (isDone) {
						return chalk.green.bold(depIdStr);
					} else if (isInProgress) {
						return chalk.yellow.bold(depIdStr);
					} else {
						return chalk.red.bold(depIdStr);
					}
				}
				return depIdStr;
			}

			const [parentId, subtaskId] = parts.map((id) => parseInt(id, 10));
			const [parentId, subtaskId] = depIdStr
				.split('.')
				.map((id) => parseInt(id, 10));

			// Find the parent task
			const parentTask = allTasks.find((t) => t.id === parentId);
@@ -2833,176 +2797,5 @@ export {
	warnLoadingIndicator,
	infoLoadingIndicator,
	displayContextAnalysis,
	displayCurrentTagIndicator,
	formatTaskIdForDisplay
	displayCurrentTagIndicator
};

/**
 * Display enhanced error message for cross-tag dependency conflicts
 * @param {Array} conflicts - Array of cross-tag dependency conflicts
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @param {string} sourceIds - Source task IDs (comma-separated)
 */
export function displayCrossTagDependencyError(
	conflicts,
	sourceTag,
	targetTag,
	sourceIds
) {
	console.log(
		chalk.red(`\n❌ Cannot move tasks from "${sourceTag}" to "${targetTag}"`)
	);
	console.log(chalk.yellow(`\nCross-tag dependency conflicts detected:`));

	if (conflicts.length > 0) {
		conflicts.forEach((conflict) => {
			console.log(` • ${conflict.message}`);
		});
	}

	console.log(chalk.cyan(`\nResolution options:`));
	console.log(
		` 1. Move with dependencies: task-master move --from=${sourceIds} --from-tag=${sourceTag} --to-tag=${targetTag} --with-dependencies`
	);
	console.log(
		` 2. Break dependencies: task-master move --from=${sourceIds} --from-tag=${sourceTag} --to-tag=${targetTag} --ignore-dependencies`
	);
	console.log(
		` 3. Validate and fix dependencies: task-master validate-dependencies && task-master fix-dependencies`
	);
	if (conflicts.length > 0) {
		console.log(
			` 4. Move dependencies first: task-master move --from=${conflicts.map((c) => c.dependencyId).join(',')} --from-tag=${conflicts[0].dependencyTag} --to-tag=${targetTag}`
		);
	}
	console.log(
		` 5. Force move (may break dependencies): task-master move --from=${sourceIds} --from-tag=${sourceTag} --to-tag=${targetTag} --force`
	);
}

/**
 * Helper function to format task ID for display, handling edge cases with explicit labels
 * Builds on the existing formatTaskId utility but adds user-friendly display for edge cases
 * @param {*} taskId - The task ID to format
 * @returns {string} Formatted task ID for display
 */
function formatTaskIdForDisplay(taskId) {
	if (taskId === null) return 'null';
	if (taskId === undefined) return 'undefined';
	if (taskId === '') return '(empty)';

	// Use existing formatTaskId for normal cases, with fallback to 'unknown'
	return formatTaskId(taskId) || 'unknown';
}

/**
 * Display enhanced error message for subtask movement restriction
 * @param {string} taskId - The subtask ID that cannot be moved
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 */
export function displaySubtaskMoveError(taskId, sourceTag, targetTag) {
	// Handle null/undefined taskId but preserve the actual value for display
	const displayTaskId = formatTaskIdForDisplay(taskId);

	// Safe taskId for operations that need a valid string
	const safeTaskId = taskId || 'unknown';

	// Validate taskId format before splitting
	let parentId = safeTaskId;
	if (safeTaskId.includes('.')) {
		const parts = safeTaskId.split('.');
		// Check if it's a valid subtask format (parentId.subtaskId)
		if (parts.length === 2 && parts[0] && parts[1]) {
			parentId = parts[0];
		} else {
			// Invalid format - log warning and use the original taskId
			console.log(
				chalk.yellow(
					`\n⚠️ Warning: Unexpected taskId format "${safeTaskId}". Using as-is for command suggestions.`
				)
			);
			parentId = safeTaskId;
		}
	}

	console.log(
		chalk.red(`\n❌ Cannot move subtask ${displayTaskId} directly between tags`)
	);
	console.log(chalk.yellow(`\nSubtask movement restriction:`));
	console.log(` • Subtasks cannot be moved directly between tags`);
	console.log(` • They must be promoted to full tasks first`);
	console.log(` • Source tag: "${sourceTag}"`);
	console.log(` • Target tag: "${targetTag}"`);

	console.log(chalk.cyan(`\nResolution options:`));
	console.log(
		` 1. Promote subtask to full task: task-master remove-subtask --id=${displayTaskId} --convert`
	);
	console.log(
		` 2. Then move the promoted task: task-master move --from=${parentId} --from-tag=${sourceTag} --to-tag=${targetTag}`
	);
	console.log(
		` 3. Or move the parent task with all subtasks: task-master move --from=${parentId} --from-tag=${sourceTag} --to-tag=${targetTag} --with-dependencies`
	);
}

/**
 * Display enhanced error message for invalid tag combinations
 * @param {string} sourceTag - Source tag name
 * @param {string} targetTag - Target tag name
 * @param {string} reason - Reason for the error
 */
export function displayInvalidTagCombinationError(
	sourceTag,
	targetTag,
	reason
) {
	console.log(chalk.red(`\n❌ Invalid tag combination`));
	console.log(chalk.yellow(`\nError details:`));
	console.log(` • Source tag: "${sourceTag}"`);
	console.log(` • Target tag: "${targetTag}"`);
	console.log(` • Reason: ${reason}`);

	console.log(chalk.cyan(`\nResolution options:`));
	console.log(` 1. Use different tags for cross-tag moves`);
	console.log(
		` 2. Use within-tag move: task-master move --from=<id> --to=<id> --tag=${sourceTag}`
	);
	console.log(` 3. Check available tags: task-master tags`);
}

/**
 * Display helpful hints for dependency validation commands
 * @param {string} context - Context for the hints (e.g., 'before-move', 'after-error')
 */
export function displayDependencyValidationHints(context = 'general') {
	const hints = {
		'before-move': [
			'💡 Tip: Run "task-master validate-dependencies" to check for dependency issues before moving tasks',
			'💡 Tip: Use "task-master fix-dependencies" to automatically resolve common dependency problems',
			'💡 Tip: Consider using --with-dependencies flag to move dependent tasks together'
		],
		'after-error': [
			'🔧 Quick fix: Run "task-master validate-dependencies" to identify specific issues',
			'🔧 Quick fix: Use "task-master fix-dependencies" to automatically resolve problems',
			'🔧 Quick fix: Check "task-master show <id>" to see task dependencies before moving'
		],
		general: [
			'💡 Use "task-master validate-dependencies" to check for dependency issues',
			'💡 Use "task-master fix-dependencies" to automatically resolve problems',
			'💡 Use "task-master show <id>" to view task dependencies',
			'💡 Use --with-dependencies flag to move dependent tasks together'
		]
	};

	const relevantHints = hints[context] || hints.general;

	console.log(chalk.cyan(`\nHelpful hints:`));
	// Convert to Set to ensure only unique hints are displayed
	const uniqueHints = new Set(relevantHints);
	uniqueHints.forEach((hint) => {
		console.log(` ${hint}`);
	});
}

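These display helpers are fire-and-forget console renderers. The hints helper, for instance, takes one of the three documented contexts and falls back to the general list otherwise:

```js
displayDependencyValidationHints('before-move'); // tips before a cross-tag move
displayDependencyValidationHints('after-error'); // quick fixes after a failure
displayDependencyValidationHints();              // defaults to the 'general' list
```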
@@ -1132,139 +1132,6 @@ function findCycles(
|
||||
return cyclesToBreak;
|
||||
}
|
||||
|
||||
/**
|
||||
* Unified dependency traversal utility that supports both forward and reverse dependency traversal
|
||||
* @param {Array} sourceTasks - Array of source tasks to start traversal from
|
||||
* @param {Array} allTasks - Array of all tasks to search within
|
||||
* @param {Object} options - Configuration options
|
||||
* @param {number} options.maxDepth - Maximum recursion depth (default: 50)
|
||||
* @param {boolean} options.includeSelf - Whether to include self-references (default: false)
|
||||
* @param {'forward'|'reverse'} options.direction - Direction of traversal (default: 'forward')
|
||||
* @param {Function} options.logger - Optional logger function for warnings
|
||||
* @returns {Array} Array of all dependency task IDs found through traversal
|
||||
*/
|
||||
function traverseDependencies(sourceTasks, allTasks, options = {}) {
	const {
		maxDepth = 50,
		includeSelf = false,
		direction = 'forward',
		logger = null
	} = options;

	const dependentTaskIds = new Set();
	const processedIds = new Set();

	// Helper function to normalize dependency IDs while preserving subtask format
	function normalizeDependencyId(depId) {
		if (typeof depId === 'string') {
			// Preserve string format for subtask IDs like "1.2"
			if (depId.includes('.')) {
				return depId;
			}
			// Convert simple string numbers to numbers for consistency
			const parsed = parseInt(depId, 10);
			return isNaN(parsed) ? depId : parsed;
		}
		return depId;
	}

	// Helper function for forward dependency traversal
	function findForwardDependencies(taskId, currentDepth = 0) {
		// Check depth limit
		if (currentDepth >= maxDepth) {
			const warnMsg = `Maximum recursion depth (${maxDepth}) reached for task ${taskId}`;
			if (logger && typeof logger.warn === 'function') {
				logger.warn(warnMsg);
			} else if (typeof log !== 'undefined' && log.warn) {
				log.warn(warnMsg);
			} else {
				console.warn(warnMsg);
			}
			return;
		}

		if (processedIds.has(taskId)) {
			return; // Avoid infinite loops
		}
		processedIds.add(taskId);

		const task = allTasks.find((t) => t.id === taskId);
		if (!task || !Array.isArray(task.dependencies)) {
			return;
		}

		task.dependencies.forEach((depId) => {
			const normalizedDepId = normalizeDependencyId(depId);

			// Skip invalid dependencies and optionally skip self-references
			if (
				normalizedDepId == null ||
				(!includeSelf && normalizedDepId === taskId)
			) {
				return;
			}

			dependentTaskIds.add(normalizedDepId);
			// Recursively find dependencies of this dependency
			findForwardDependencies(normalizedDepId, currentDepth + 1);
		});
	}

	// Helper function for reverse dependency traversal
	function findReverseDependencies(taskId, currentDepth = 0) {
		// Check depth limit
		if (currentDepth >= maxDepth) {
			const warnMsg = `Maximum recursion depth (${maxDepth}) reached for task ${taskId}`;
			if (logger && typeof logger.warn === 'function') {
				logger.warn(warnMsg);
			} else if (typeof log !== 'undefined' && log.warn) {
				log.warn(warnMsg);
			} else {
				console.warn(warnMsg);
			}
			return;
		}

		if (processedIds.has(taskId)) {
			return; // Avoid infinite loops
		}
		processedIds.add(taskId);

		allTasks.forEach((task) => {
			if (task.dependencies && Array.isArray(task.dependencies)) {
				const dependsOnTaskId = task.dependencies.some((depId) => {
					const normalizedDepId = normalizeDependencyId(depId);
					return normalizedDepId === taskId;
				});

				if (dependsOnTaskId) {
					// Skip invalid dependencies and optionally skip self-references
					if (task.id == null || (!includeSelf && task.id === taskId)) {
						return;
					}

					dependentTaskIds.add(task.id);
					// Recursively find tasks that depend on this task
					findReverseDependencies(task.id, currentDepth + 1);
				}
			}
		});
	}

	// Choose traversal function based on direction
	const traversalFunc =
		direction === 'reverse' ? findReverseDependencies : findForwardDependencies;

	// Start traversal from each source task
	sourceTasks.forEach((sourceTask) => {
		if (sourceTask && sourceTask.id) {
			traversalFunc(sourceTask.id);
		}
	});

	return Array.from(dependentTaskIds);
}
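
/*
 * Illustrative usage sketch for traverseDependencies (not part of the
 * original source). The task shapes below are hypothetical; the function
 * itself only relies on `id` and `dependencies`.
 *
 *   const allTasks = [
 *     { id: 1, dependencies: [] },
 *     { id: 2, dependencies: [1] },
 *     { id: 3, dependencies: [2, '1.2'] }
 *   ];
 *   // Forward: everything task 3 depends on, transitively -> [2, '1.2', 1]
 *   traverseDependencies([allTasks[2]], allTasks, { direction: 'forward' });
 *   // Reverse: everything that depends on task 1 -> [2, 3]
 *   traverseDependencies([allTasks[0]], allTasks, { direction: 'reverse' });
 */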

/**
 * Convert a string from camelCase to kebab-case
 * @param {string} str - The string to convert
@@ -1563,20 +1430,6 @@ function ensureTagMetadata(tagObj, opts = {}) {
	return tagObj;
}

/**
 * Strip ANSI color codes from a string
 * Useful for testing, logging to files, or when clean text output is needed
 * @param {string} text - The text that may contain ANSI color codes
 * @returns {string} - The text with ANSI color codes removed
 */
function stripAnsiCodes(text) {
	if (typeof text !== 'string') {
		return text;
	}
	// Remove ANSI SGR escape sequences (color and text-style codes ending in 'm')
	return text.replace(/\x1b\[[0-9;]*m/g, '');
}
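
/*
 * Illustrative usage sketch (not in the original source):
 *
 *   stripAnsiCodes('\x1b[31mred\x1b[0m'); // -> 'red'
 *   stripAnsiCodes(42);                   // -> 42 (non-strings pass through)
 */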

// Export all utility functions and configuration
export {
	LOG_LEVELS,
@@ -1592,7 +1445,6 @@ export {
	truncate,
	isEmpty,
	findCycles,
	traverseDependencies,
	toKebabCase,
	detectCamelCaseFlags,
	disableSilentMode,
@@ -1615,6 +1467,5 @@ export {
	markMigrationForNotice,
	flattenTasksWithSubtasks,
	ensureTagMetadata,
	stripAnsiCodes,
	normalizeTaskIds
};

@@ -2,7 +2,6 @@ import {
	generateObject,
	generateText,
	streamText,
	streamObject,
	zodSchema,
	JSONParseError,
	NoObjectGeneratedError
@@ -225,46 +224,6 @@ export class BaseAIProvider {
		}
	}

	/**
	 * Streams a structured object using the provider's model
	 */
	async streamObject(params) {
		try {
			this.validateParams(params);
			this.validateMessages(params.messages);

			if (!params.schema) {
				throw new Error('Schema is required for object streaming');
			}

			log(
				'debug',
				`Streaming ${this.name} object with model: ${params.modelId}`
			);

			const client = await this.getClient(params);
			const result = await streamObject({
				model: client(params.modelId),
				messages: params.messages,
				schema: zodSchema(params.schema),
				mode: params.mode || 'auto',
				maxTokens: params.maxTokens,
				temperature: params.temperature
			});

			log(
				'debug',
				`${this.name} streamObject initiated successfully for model: ${params.modelId}`
			);

			// Return the stream result directly
			// The stream result contains partialObjectStream and other properties
			return result;
		} catch (error) {
			this.handleError('object streaming', error);
		}
	}
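
	/*
	 * Illustrative consumer sketch (not in the original source). As noted
	 * above, the stream result exposes `partialObjectStream`, an async
	 * iterable of progressively completed objects; the params and render
	 * helper below are hypothetical.
	 *
	 *   const result = await provider.streamObject({
	 *     modelId: 'some-model',
	 *     messages,
	 *     schema: myZodSchema
	 *   });
	 *   for await (const partial of result.partialObjectStream) {
	 *     render(partial); // partial grows toward the full schema shape
	 *   }
	 */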

	/**
	 * Generates a structured object using the provider's model
	 */

@@ -1,5 +1,5 @@
/**
 * @typedef {'amp' | 'claude' | 'cline' | 'codex' | 'cursor' | 'gemini' | 'kiro' | 'opencode' | 'kilo' | 'roo' | 'trae' | 'windsurf' | 'vscode' | 'zed'} RulesProfile
 * @typedef {'amp' | 'claude' | 'cline' | 'codex' | 'cursor' | 'gemini' | 'kiro' | 'opencode' | 'roo' | 'trae' | 'windsurf' | 'vscode' | 'zed'} RulesProfile
 */

/**
@@ -18,7 +18,6 @@
 * - gemini: Gemini integration
 * - kiro: Kiro IDE rules
 * - opencode: OpenCode integration
 * - kilo: Kilo Code integration
 * - roo: Roo Code IDE rules
 * - trae: Trae IDE rules
 * - vscode: VS Code with GitHub Copilot integration
@@ -39,7 +38,6 @@ export const RULE_PROFILES = [
	'gemini',
	'kiro',
	'opencode',
	'kilo',
	'roo',
	'trae',
	'vscode',

@@ -5,7 +5,6 @@ export { clineProfile } from './cline.js';
export { codexProfile } from './codex.js';
export { cursorProfile } from './cursor.js';
export { geminiProfile } from './gemini.js';
export { kiloProfile } from './kilo.js';
export { kiroProfile } from './kiro.js';
export { opencodeProfile } from './opencode.js';
export { rooProfile } from './roo.js';

@@ -1,186 +0,0 @@
// Kilo Code conversion profile for rule-transformer
import path from 'path';
import fs from 'fs';
import { isSilentMode, log } from '../../scripts/modules/utils.js';
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';
import { ROO_MODES } from '../constants/profiles.js';

// Utility function to apply kilo transformations to content
function applyKiloTransformations(content) {
	const customReplacements = [
		// Replace roo-specific terms with kilo equivalents
		{
			from: /\broo\b/gi,
			to: (match) => (match.charAt(0) === 'R' ? 'Kilo' : 'kilo')
		},
		{ from: /Roo/g, to: 'Kilo' },
		{ from: /ROO/g, to: 'KILO' },
		{ from: /roocode\.com/gi, to: 'kilocode.com' },
		{ from: /docs\.roocode\.com/gi, to: 'docs.kilocode.com' },
		{ from: /https?:\/\/roocode\.com/gi, to: 'https://kilocode.com' },
		{
			from: /https?:\/\/docs\.roocode\.com/gi,
			to: 'https://docs.kilocode.com'
		},
		{ from: /\.roo\//g, to: '.kilo/' },
		{ from: /\.roomodes/g, to: '.kilocodemodes' },
		// Handle file extensions and directory references
		{ from: /roo-rules/g, to: 'kilo-rules' },
		{ from: /rules-roo/g, to: 'rules-kilo' }
	];

	let transformedContent = content;
	for (const replacement of customReplacements) {
		transformedContent = transformedContent.replace(
			replacement.from,
			replacement.to
		);
	}
	return transformedContent;
}
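
/*
 * Illustrative sketch (not in the original source). Replacements apply in
 * array order, so the generic roocode.com rule rewrites hostnames before the
 * more specific docs rule can match:
 *
 *   applyKiloTransformations('See docs.roocode.com and .roomodes');
 *   // -> 'See docs.kilocode.com and .kilocodemodes'
 */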

// Utility function to copy files recursively
function copyRecursiveSync(src, dest) {
	const exists = fs.existsSync(src);
	const stats = exists && fs.statSync(src);
	const isDirectory = exists && stats.isDirectory();
	if (isDirectory) {
		if (!fs.existsSync(dest)) fs.mkdirSync(dest, { recursive: true });
		fs.readdirSync(src).forEach((childItemName) => {
			copyRecursiveSync(
				path.join(src, childItemName),
				path.join(dest, childItemName)
			);
		});
	} else {
		fs.copyFileSync(src, dest);
	}
}
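
/*
 * Note (not in the original source): on Node.js >= 16.7 the built-in
 * fs.cpSync covers the same ground as the helper above:
 *
 *   fs.cpSync(src, dest, { recursive: true });
 */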

// Lifecycle functions for Kilo profile
function onAddRulesProfile(targetDir, assetsDir) {
	// Use the provided assets directory to find the roocode directory
	const sourceDir = path.join(assetsDir, 'roocode');

	if (!fs.existsSync(sourceDir)) {
		log('error', `[Kilo] Source directory does not exist: ${sourceDir}`);
		return;
	}

	// Copy basic roocode structure first
	copyRecursiveSync(sourceDir, targetDir);
	log('debug', `[Kilo] Copied roocode directory to ${targetDir}`);

	// Transform .roomodes to .kilocodemodes
	const roomodesSrc = path.join(sourceDir, '.roomodes');
	const kilocodemodesDest = path.join(targetDir, '.kilocodemodes');
	if (fs.existsSync(roomodesSrc)) {
		try {
			const roomodesContent = fs.readFileSync(roomodesSrc, 'utf8');
			const transformedContent = applyKiloTransformations(roomodesContent);
			fs.writeFileSync(kilocodemodesDest, transformedContent);
			log('debug', `[Kilo] Created .kilocodemodes at ${kilocodemodesDest}`);

			// Remove the original .roomodes file
			fs.unlinkSync(path.join(targetDir, '.roomodes'));
		} catch (err) {
			log('error', `[Kilo] Failed to transform .roomodes: ${err.message}`);
		}
	}

	// Transform .roo directory to .kilo and apply kilo transformations to mode-specific rules
	const rooModesDir = path.join(sourceDir, '.roo');
	const kiloModesDir = path.join(targetDir, '.kilo');

	// Remove the copied .roo directory and create .kilo
	if (fs.existsSync(path.join(targetDir, '.roo'))) {
		fs.rmSync(path.join(targetDir, '.roo'), { recursive: true, force: true });
	}

	for (const mode of ROO_MODES) {
		const src = path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`);
		const dest = path.join(kiloModesDir, `rules-${mode}`, `${mode}-rules`);
		if (fs.existsSync(src)) {
			try {
				const destDir = path.dirname(dest);
				if (!fs.existsSync(destDir)) fs.mkdirSync(destDir, { recursive: true });

				// Read, transform, and write the rule file
				const ruleContent = fs.readFileSync(src, 'utf8');
				const transformedContent = applyKiloTransformations(ruleContent);
				fs.writeFileSync(dest, transformedContent);

				log('debug', `[Kilo] Transformed and copied ${mode}-rules to ${dest}`);
			} catch (err) {
				log(
					'error',
					`[Kilo] Failed to transform ${src} to ${dest}: ${err.message}`
				);
			}
		}
	}
}

function onRemoveRulesProfile(targetDir) {
	const kilocodemodesPath = path.join(targetDir, '.kilocodemodes');
	if (fs.existsSync(kilocodemodesPath)) {
		try {
			fs.rmSync(kilocodemodesPath, { force: true });
			log('debug', `[Kilo] Removed .kilocodemodes from ${kilocodemodesPath}`);
		} catch (err) {
			log('error', `[Kilo] Failed to remove .kilocodemodes: ${err.message}`);
		}
	}

	const kiloDir = path.join(targetDir, '.kilo');
	if (fs.existsSync(kiloDir)) {
		fs.readdirSync(kiloDir).forEach((entry) => {
			if (entry.startsWith('rules-')) {
				const modeDir = path.join(kiloDir, entry);
				try {
					fs.rmSync(modeDir, { recursive: true, force: true });
					log('debug', `[Kilo] Removed ${entry} directory from ${modeDir}`);
				} catch (err) {
					log('error', `[Kilo] Failed to remove ${modeDir}: ${err.message}`);
				}
			}
		});
		if (fs.readdirSync(kiloDir).length === 0) {
			try {
				fs.rmSync(kiloDir, { recursive: true, force: true });
				log('debug', `[Kilo] Removed empty .kilo directory from ${kiloDir}`);
			} catch (err) {
				log('error', `[Kilo] Failed to remove .kilo directory: ${err.message}`);
			}
		}
	}
}

function onPostConvertRulesProfile(targetDir, assetsDir) {
	onAddRulesProfile(targetDir, assetsDir);
}

// Create and export kilo profile using the base factory with roo rule reuse
export const kiloProfile = createProfile({
	name: 'kilo',
	displayName: 'Kilo Code',
	url: 'kilocode.com',
	docsUrl: 'docs.kilocode.com',
	profileDir: '.kilo',
	rulesDir: '.kilo/rules',
	toolMappings: COMMON_TOOL_MAPPINGS.ROO_STYLE,

	fileMap: {
		// Map roo rule files to kilo equivalents
		'rules/cursor_rules.mdc': 'kilo_rules.md',
		'rules/dev_workflow.mdc': 'dev_workflow.md',
		'rules/self_improve.mdc': 'self_improve.md',
		'rules/taskmaster.mdc': 'taskmaster.md'
	},
	onAdd: onAddRulesProfile,
	onRemove: onRemoveRulesProfile,
	onPostConvert: onPostConvertRulesProfile
});

// Export lifecycle functions separately to avoid naming conflicts
export { onAddRulesProfile, onRemoveRulesProfile, onPostConvertRulesProfile };
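
/*
 * Illustrative lifecycle sketch (not in the original source); the directory
 * arguments are hypothetical:
 *
 *   onAddRulesProfile('/project', '/assets');  // copy + roo -> kilo rewrite
 *   onRemoveRulesProfile('/project');          // delete .kilocodemodes/.kilo
 */
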
@@ -1,298 +0,0 @@
import { newMultiBar } from './cli-progress-factory.js';

/**
 * Base class for progress trackers, handling common logic for time, tokens, estimation, and multibar management.
 */
export class BaseProgressTracker {
	constructor(options = {}) {
		this.numUnits = options.numUnits || 1;
		this.unitName = options.unitName || 'unit'; // e.g., 'task', 'subtask'
		this.startTime = null;
		this.completedUnits = 0;
		this.tokensIn = 0;
		this.tokensOut = 0;
		this.isEstimate = true; // For token display

		// Time estimation properties
		this.bestAvgTimePerUnit = null;
		this.lastEstimateTime = null;
		this.lastEstimateSeconds = 0;

		// UI components
		this.multibar = null;
		this.timeTokensBar = null;
		this.progressBar = null;
		this._timerInterval = null;

		// State flags
		this.isStarted = false;
		this.isFinished = false;

		// Allow subclasses to define custom properties
		this._initializeCustomProperties(options);
	}

	/**
	 * Protected method for subclasses to initialize custom properties.
	 * @protected
	 */
	_initializeCustomProperties(options) {
		// Subclasses can override this
	}

	/**
	 * Get the pluralized form of the unit name for safe property keys.
	 * @returns {string} Pluralized unit name
	 */
	get unitNamePlural() {
		return `${this.unitName}s`;
	}

	start() {
		if (this.isStarted || this.isFinished) return;

		this.isStarted = true;
		this.startTime = Date.now();

		this.multibar = newMultiBar();

		// Create time/tokens bar using subclass-provided format
		this.timeTokensBar = this.multibar.create(
			1,
			0,
			{},
			{
				format: this._getTimeTokensBarFormat(),
				barsize: 1,
				hideCursor: true,
				clearOnComplete: false
			}
		);

		// Create main progress bar using subclass-provided format
		this.progressBar = this.multibar.create(
			this.numUnits,
			0,
			{},
			{
				format: this._getProgressBarFormat(),
				barCompleteChar: '\u2588',
				barIncompleteChar: '\u2591'
			}
		);

		this._updateTimeTokensBar();
		this.progressBar.update(0, { [this.unitNamePlural]: `0/${this.numUnits}` });

		// Start timer
		this._timerInterval = setInterval(() => this._updateTimeTokensBar(), 1000);

		// Allow subclasses to add custom bars or setup
		this._setupCustomUI();
	}

	/**
	 * Protected method for subclasses to add custom UI elements after start.
	 * @protected
	 */
	_setupCustomUI() {
		// Subclasses can override this
	}

	/**
	 * Protected method to get the format for the time/tokens bar.
	 * @protected
	 * @returns {string} Format string for the time/tokens bar.
	 */
	_getTimeTokensBarFormat() {
		return `{clock} {elapsed} | Tokens (I/O): {in}/{out} | Est: {remaining}`;
	}

	/**
	 * Protected method to get the format for the main progress bar.
	 * @protected
	 * @returns {string} Format string for the progress bar.
	 */
	_getProgressBarFormat() {
		return `${this.unitName.charAt(0).toUpperCase() + this.unitName.slice(1)}s {${this.unitNamePlural}} |{bar}| {percentage}%`;
	}

	updateTokens(tokensIn, tokensOut, isEstimate = false) {
		this.tokensIn = tokensIn || 0;
		this.tokensOut = tokensOut || 0;
		this.isEstimate = isEstimate;
		this._updateTimeTokensBar();
	}

	_updateTimeTokensBar() {
		if (!this.timeTokensBar || this.isFinished) return;

		const elapsed = this._formatElapsedTime();
		const remaining = this._estimateRemainingTime();
		const tokensLabel = this.isEstimate ? '~ Tokens (I/O)' : 'Tokens (I/O)';

		this.timeTokensBar.update(1, {
			clock: '⏱️',
			elapsed,
			in: this.tokensIn,
			out: this.tokensOut,
			remaining,
			tokensLabel,
			// Subclasses can add more payload here via override
			...this._getCustomTimeTokensPayload()
		});
	}

	/**
	 * Protected method for subclasses to provide custom payload for time/tokens bar.
	 * @protected
	 * @returns {Object} Custom payload object.
	 */
	_getCustomTimeTokensPayload() {
		return {};
	}

	_formatElapsedTime() {
		if (!this.startTime) return '0m 00s';
		const seconds = Math.floor((Date.now() - this.startTime) / 1000);
		const minutes = Math.floor(seconds / 60);
		const remainingSeconds = seconds % 60;
		return `${minutes}m ${remainingSeconds.toString().padStart(2, '0')}s`;
	}

	_estimateRemainingTime() {
		const progress = this._getProgressFraction();
		if (progress >= 1) return '~0s';

		const now = Date.now();
		const elapsed = (now - this.startTime) / 1000;

		if (progress === 0) return '~calculating...';

		const avgTimePerUnit = elapsed / progress;

		if (
			this.bestAvgTimePerUnit === null ||
			avgTimePerUnit < this.bestAvgTimePerUnit
		) {
			this.bestAvgTimePerUnit = avgTimePerUnit;
		}

		const remainingUnits = this.numUnits * (1 - progress);
		let estimatedSeconds = Math.ceil(remainingUnits * this.bestAvgTimePerUnit);

		// Stabilization logic
		if (this.lastEstimateTime) {
			const elapsedSinceEstimate = Math.floor(
				(now - this.lastEstimateTime) / 1000
			);
			const countdownSeconds = Math.max(
				0,
				this.lastEstimateSeconds - elapsedSinceEstimate
			);
			if (countdownSeconds === 0) return '~0s';
			estimatedSeconds = Math.min(estimatedSeconds, countdownSeconds);
		}

		this.lastEstimateTime = now;
		this.lastEstimateSeconds = estimatedSeconds;

		return `~${this._formatDuration(estimatedSeconds)}`;
	}

	/**
	 * Protected method for subclasses to calculate current progress fraction (0-1).
	 * Defaults to simple completedUnits / numUnits.
	 * @protected
	 * @returns {number} Progress fraction (can be fractional for subtasks).
	 */
	_getProgressFraction() {
		return this.completedUnits / this.numUnits;
	}

	_formatDuration(seconds) {
		if (seconds < 60) return `${seconds}s`;
		const minutes = Math.floor(seconds / 60);
		const remainingSeconds = seconds % 60;
		if (minutes < 60) {
			return remainingSeconds > 0
				? `${minutes}m ${remainingSeconds}s`
				: `${minutes}m`;
		}
		const hours = Math.floor(minutes / 60);
		const remainingMinutes = minutes % 60;
		return `${hours}h ${remainingMinutes}m`;
	}

	getElapsedTime() {
		return this.startTime ? Date.now() - this.startTime : 0;
	}

	stop() {
		if (this.isFinished) return;

		if (this._timerInterval) {
			clearInterval(this._timerInterval);
			this._timerInterval = null;
		}

		if (this.multibar) {
			// Paint the final bar state before the isFinished guard in
			// _updateTimeTokensBar() would suppress it
			this._updateTimeTokensBar();
			this.multibar.stop();
		}

		this.isFinished = true;

		// Ensure cleanup is called to prevent memory leaks
		this.cleanup();
	}

	getSummary() {
		return {
			completedUnits: this.completedUnits,
			elapsedTime: this.getElapsedTime()
			// Subclasses should extend this
		};
	}

	/**
	 * Cleanup method to ensure proper resource disposal and prevent memory leaks.
	 * Should be called when the progress tracker is no longer needed.
	 */
	cleanup() {
		// Stop any active timers
		if (this._timerInterval) {
			clearInterval(this._timerInterval);
			this._timerInterval = null;
		}

		// Stop and clear multibar
		if (this.multibar) {
			try {
				this.multibar.stop();
			} catch (error) {
				// Ignore errors during cleanup
			}
			this.multibar = null;
		}

		// Clear progress bar references
		this.timeTokensBar = null;
		this.progressBar = null;

		// Reset state
		this.isStarted = false;
		this.isFinished = true;

		// Allow subclasses to perform custom cleanup
		this._performCustomCleanup();
	}

	/**
	 * Protected method for subclasses to perform custom cleanup.
	 * @protected
	 */
	_performCustomCleanup() {
		// Subclasses can override this
	}
}
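
/*
 * Minimal subclass sketch (not in the original source) showing the intended
 * extension points; FileTracker and its options are hypothetical:
 *
 *   class FileTracker extends BaseProgressTracker {
 *     _initializeCustomProperties(options) { this.label = options.label; }
 *     _getProgressBarFormat() { return `Files {${this.unitNamePlural}} |{bar}|`; }
 *   }
 *   const tracker = new FileTracker({ numUnits: 10, unitName: 'file' });
 *   tracker.start();
 *   tracker.updateTokens(1200, 300);
 *   tracker.stop();
 */
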
@@ -1,115 +0,0 @@
import cliProgress from 'cli-progress';

/**
 * Default configuration for progress bars
 * Extracted to avoid duplication and provide single source of truth
 */
const DEFAULT_CONFIG = {
	clearOnComplete: false,
	stopOnComplete: true,
	hideCursor: true,
	barsize: 40 // Standard terminal width for progress bar
};

/**
 * Available presets for progress bar styling
 * Makes it easy to see what options are available
 */
const PRESETS = {
	shades_classic: cliProgress.Presets.shades_classic,
	shades_grey: cliProgress.Presets.shades_grey,
	rect: cliProgress.Presets.rect,
	legacy: cliProgress.Presets.legacy
};

/**
 * Factory class for creating CLI progress bars
 * Provides a consistent interface for creating both single and multi-bar instances
 */
export class ProgressBarFactory {
	constructor(defaultOptions = {}, defaultPreset = PRESETS.shades_classic) {
		this.defaultOptions = { ...DEFAULT_CONFIG, ...defaultOptions };
		this.defaultPreset = defaultPreset;
	}

	/**
	 * Creates a new single progress bar
	 * @param {Object} opts - Custom options to override defaults
	 * @param {Object} preset - Progress bar preset for styling
	 * @returns {cliProgress.SingleBar} Configured single progress bar instance
	 */
	createSingleBar(opts = {}, preset = null) {
		const config = this._mergeConfig(opts);
		const barPreset = preset || this.defaultPreset;

		return new cliProgress.SingleBar(config, barPreset);
	}

	/**
	 * Creates a new multi-bar container
	 * @param {Object} opts - Custom options to override defaults
	 * @param {Object} preset - Progress bar preset for styling
	 * @returns {cliProgress.MultiBar} Configured multi-bar instance
	 */
	createMultiBar(opts = {}, preset = null) {
		const config = this._mergeConfig(opts);
		const barPreset = preset || this.defaultPreset;

		return new cliProgress.MultiBar(config, barPreset);
	}

	/**
	 * Merges custom options with defaults
	 * @private
	 * @param {Object} customOpts - Custom options to merge
	 * @returns {Object} Merged configuration
	 */
	_mergeConfig(customOpts) {
		return { ...this.defaultOptions, ...customOpts };
	}

	/**
	 * Updates the default configuration
	 * @param {Object} options - New default options
	 */
	setDefaultOptions(options) {
		this.defaultOptions = { ...this.defaultOptions, ...options };
	}

	/**
	 * Updates the default preset
	 * @param {Object} preset - New default preset
	 */
	setDefaultPreset(preset) {
		this.defaultPreset = preset;
	}
}

// Create a default factory instance for backward compatibility
const defaultFactory = new ProgressBarFactory();

/**
 * Legacy function for creating a single progress bar
 * @deprecated Use ProgressBarFactory.createSingleBar() instead
 * @param {Object} opts - Progress bar options
 * @returns {cliProgress.SingleBar} Single progress bar instance
 */
export function newSingle(opts = {}) {
	return defaultFactory.createSingleBar(opts);
}

/**
 * Legacy function for creating a multi-bar
 * @deprecated Use ProgressBarFactory.createMultiBar() instead
 * @param {Object} opts - Progress bar options
 * @returns {cliProgress.MultiBar} Multi-bar instance
 */
export function newMultiBar(opts = {}) {
	return defaultFactory.createMultiBar(opts);
}

// Export presets for easy access
export { PRESETS };

// Export the factory class as default
export default ProgressBarFactory;
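
/*
 * Illustrative usage sketch (not in the original source):
 *
 *   const factory = new ProgressBarFactory({ barsize: 20 }, PRESETS.rect);
 *   const bar = factory.createSingleBar();
 *   bar.start(100, 0);
 *   bar.update(42);
 *   bar.stop();
 */
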
@@ -1,221 +0,0 @@
import chalk from 'chalk';
import { newMultiBar } from './cli-progress-factory.js';
import { BaseProgressTracker } from './base-progress-tracker.js';
import {
	createProgressHeader,
	createProgressRow,
	createBorder
} from './tracker-ui.js';
import {
	getCliPriorityIndicators,
	getPriorityIndicator,
	getStatusBarPriorityIndicators,
	getPriorityColors
} from '../ui/indicators.js';

// Get centralized priority indicators
const PRIORITY_INDICATORS = getCliPriorityIndicators();
const PRIORITY_DOTS = getStatusBarPriorityIndicators();
const PRIORITY_COLORS = getPriorityColors();

// Constants
const CONSTANTS = {
	DEBOUNCE_DELAY: 100,
	MAX_TITLE_LENGTH: 57,
	TRUNCATED_LENGTH: 54,
	TASK_ID_PAD_START: 3,
	TASK_ID_PAD_END: 4,
	PRIORITY_PAD_END: 3,
	VALID_PRIORITIES: ['high', 'medium', 'low'],
	DEFAULT_PRIORITY: 'medium'
};

/**
 * Helper class to manage update debouncing
 */
class UpdateDebouncer {
	constructor(delay = CONSTANTS.DEBOUNCE_DELAY) {
		this.delay = delay;
		this.pendingTimeout = null;
	}

	debounce(callback) {
		this.clear();
		this.pendingTimeout = setTimeout(() => {
			callback();
			this.pendingTimeout = null;
		}, this.delay);
	}

	clear() {
		if (this.pendingTimeout) {
			clearTimeout(this.pendingTimeout);
			this.pendingTimeout = null;
		}
	}

	hasPending() {
		return this.pendingTimeout !== null;
	}
}

/**
 * Helper class to manage priority counts
 */
class PriorityManager {
	constructor() {
		this.priorities = { high: 0, medium: 0, low: 0 };
	}

	increment(priority) {
		const normalized = this.normalize(priority);
		this.priorities[normalized]++;
		return normalized;
	}

	normalize(priority) {
		const lowercased = priority
			? priority.toLowerCase()
			: CONSTANTS.DEFAULT_PRIORITY;
		return CONSTANTS.VALID_PRIORITIES.includes(lowercased)
			? lowercased
			: CONSTANTS.DEFAULT_PRIORITY;
	}

	getCounts() {
		return { ...this.priorities };
	}
}

/**
 * Helper class for formatting task display elements
 */
class TaskFormatter {
	static formatTitle(title, taskNumber) {
		if (!title) return `Task ${taskNumber}`;
		return title.length > CONSTANTS.MAX_TITLE_LENGTH
			? title.substring(0, CONSTANTS.TRUNCATED_LENGTH) + '...'
			: title;
	}

	static formatPriority(priority) {
		return getPriorityIndicator(priority, false).padEnd(
			CONSTANTS.PRIORITY_PAD_END,
			' '
		);
	}

	static formatTaskId(taskNumber) {
		return taskNumber
			.toString()
			.padStart(CONSTANTS.TASK_ID_PAD_START, ' ')
			.padEnd(CONSTANTS.TASK_ID_PAD_END, ' ');
	}
}

/**
 * Tracks progress for PRD parsing operations with multibar display
 */
class ParsePrdTracker extends BaseProgressTracker {
	_initializeCustomProperties(options) {
		this.append = options.append;
		this.priorityManager = new PriorityManager();
		this.debouncer = new UpdateDebouncer();
		this.headerShown = false;
	}

	_getTimeTokensBarFormat() {
		return `{clock} {elapsed} | ${PRIORITY_DOTS.high} {high} ${PRIORITY_DOTS.medium} {medium} ${PRIORITY_DOTS.low} {low} | Tokens (I/O): {in}/{out} | Est: {remaining}`;
	}

	_getProgressBarFormat() {
		return 'Tasks {tasks} |{bar}| {percentage}%';
	}

	_getCustomTimeTokensPayload() {
		return this.priorityManager.getCounts();
	}

	addTaskLine(taskNumber, title, priority = 'medium') {
		if (!this.multibar || this.isFinished) return;

		this._ensureHeaderShown();
		const normalizedPriority = this._updateTaskCounters(taskNumber, priority);

		// Immediately update the time/tokens bar to show the new priority count
		this._updateTimeTokensBar();

		this.debouncer.debounce(() => {
			this._updateProgressDisplay(taskNumber, title, normalizedPriority);
		});
	}

	_ensureHeaderShown() {
		if (!this.headerShown) {
			this.headerShown = true;
			createProgressHeader(
				this.multibar,
				' TASK | PRI | TITLE',
				'------+-----+----------------------------------------------------------------'
			);
		}
	}

	_updateTaskCounters(taskNumber, priority) {
		const normalizedPriority = this.priorityManager.increment(priority);
		this.completedUnits = taskNumber;
		return normalizedPriority;
	}

	_updateProgressDisplay(taskNumber, title, normalizedPriority) {
		this.progressBar.update(this.completedUnits, {
			tasks: `${this.completedUnits}/${this.numUnits}`
		});

		const displayTitle = TaskFormatter.formatTitle(title, taskNumber);
		const priorityDisplay = TaskFormatter.formatPriority(normalizedPriority);
		const taskIdCentered = TaskFormatter.formatTaskId(taskNumber);

		createProgressRow(
			this.multibar,
			` ${taskIdCentered} | ${priorityDisplay} | {title}`,
			{ title: displayTitle }
		);

		createBorder(
			this.multibar,
			'------+-----+----------------------------------------------------------------'
		);

		this._updateTimeTokensBar();
	}

	finish() {
		// Flush any pending updates before finishing
		if (this.debouncer.hasPending()) {
			this.debouncer.clear();
			this._updateTimeTokensBar();
		}
		// BaseProgressTracker defines stop(), not finish(); calling
		// super.finish() would throw, and stop() already runs cleanup()
		this.stop();
	}

	/**
	 * Override cleanup to handle pending updates
	 */
	_performCustomCleanup() {
		this.debouncer.clear();
	}

	getSummary() {
		return {
			...super.getSummary(),
			taskPriorities: this.priorityManager.getCounts(),
			actionVerb: this.append ? 'appended' : 'generated'
		};
	}
}

export function createParsePrdTracker(options = {}) {
	return new ParsePrdTracker(options);
}
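
/*
 * Illustrative usage sketch (not in the original source):
 *
 *   const tracker = createParsePrdTracker({ numUnits: 5, unitName: 'task' });
 *   tracker.start();
 *   tracker.addTaskLine(1, 'Set up project scaffolding', 'high');
 *   tracker.updateTokens(900, 450);
 *   tracker.finish();
 */
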
@@ -1,152 +0,0 @@
/**
 * Configuration for progress tracker features
 */
class TrackerConfig {
	constructor() {
		this.features = new Set();
		this.spinnerFrames = null;
		this.unitName = 'unit';
		this.totalUnits = 100;
	}

	addFeature(feature) {
		this.features.add(feature);
	}

	hasFeature(feature) {
		return this.features.has(feature);
	}

	getOptions() {
		return {
			numUnits: this.totalUnits,
			unitName: this.unitName,
			spinnerFrames: this.spinnerFrames,
			features: Array.from(this.features)
		};
	}
}

/**
 * Builder for creating configured progress trackers
 */
export class ProgressTrackerBuilder {
	constructor() {
		this.config = new TrackerConfig();
	}

	withPercent() {
		this.config.addFeature('percent');
		return this;
	}

	withTokens() {
		this.config.addFeature('tokens');
		return this;
	}

	withTasks() {
		this.config.addFeature('tasks');
		return this;
	}

	withSpinner(messages) {
		if (!messages || !Array.isArray(messages)) {
			throw new Error('Spinner messages must be an array');
		}
		this.config.spinnerFrames = messages;
		return this;
	}

	withUnits(total, unitName = 'unit') {
		this.config.totalUnits = total;
		this.config.unitName = unitName;
		return this;
	}

	build() {
		return new ProgressTracker(this.config);
	}
}
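
/*
 * Illustrative builder-chain sketch (not in the original source):
 *
 *   const tracker = new ProgressTrackerBuilder()
 *     .withPercent()
 *     .withTasks()
 *     .withUnits(20, 'task')
 *     .build();
 *   tracker.start();
 *   tracker.update({ current: 5 });
 *   // -> { current: 5, percentage: 25, tasks: '5/20' }
 */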

/**
 * Base progress tracker with configurable features
 */
class ProgressTracker {
	constructor(config) {
		this.config = config;
		this.isActive = false;
		this.current = 0;
		this.spinnerIndex = 0;
		this.startTime = null;
	}

	start() {
		this.isActive = true;
		this.startTime = Date.now();
		this.current = 0;

		if (this.config.spinnerFrames) {
			this._startSpinner();
		}
	}

	update(data = {}) {
		if (!this.isActive) return;

		if (data.current !== undefined) {
			this.current = data.current;
		}

		const progress = this._buildProgressData(data);
		return progress;
	}

	finish() {
		this.isActive = false;

		if (this.spinnerInterval) {
			clearInterval(this.spinnerInterval);
			this.spinnerInterval = null;
		}

		return this._buildSummary();
	}

	_startSpinner() {
		this.spinnerInterval = setInterval(() => {
			this.spinnerIndex =
				(this.spinnerIndex + 1) % this.config.spinnerFrames.length;
		}, 100);
	}

	_buildProgressData(data) {
		const progress = { ...data };

		if (this.config.hasFeature('percent')) {
			progress.percentage = Math.round(
				(this.current / this.config.totalUnits) * 100
			);
		}

		if (this.config.hasFeature('tasks')) {
			progress.tasks = `${this.current}/${this.config.totalUnits}`;
		}

		if (this.config.spinnerFrames) {
			progress.spinner = this.config.spinnerFrames[this.spinnerIndex];
		}

		return progress;
	}

	_buildSummary() {
		const elapsed = Date.now() - this.startTime;
		return {
			total: this.config.totalUnits,
			completed: this.current,
			elapsedMs: elapsed,
			features: Array.from(this.config.features)
		};
	}
}
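
/*
 * Spinner sketch (not in the original source): frames advance on a 100 ms
 * interval, so update() surfaces whichever frame is current.
 *
 *   const spun = new ProgressTrackerBuilder()
 *     .withSpinner(['-', '\\', '|', '/'])
 *     .build();
 *   spun.start();
 *   spun.update().spinner;  // one of '-', '\\', '|', '/'
 *   spun.finish();          // clears the interval and returns a summary
 */
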
@@ -1,159 +0,0 @@
import chalk from 'chalk';

/**
 * Factory for creating progress bar elements
 */
class ProgressBarFactory {
	constructor(multibar) {
		if (!multibar) {
			throw new Error('Multibar instance is required');
		}
		this.multibar = multibar;
	}

	/**
	 * Creates a progress bar with the given format
	 */
	createBar(format, payload = {}) {
		if (typeof format !== 'string') {
			throw new Error('Format must be a string');
		}

		const bar = this.multibar.create(
			1, // total
			1, // current
			{},
			{
				format,
				barsize: 1,
				hideCursor: true,
				clearOnComplete: false
			}
		);

		bar.update(1, payload);
		return bar;
	}

	/**
	 * Creates a header with borders
	 */
	createHeader(headerFormat, borderFormat) {
		this.createBar(borderFormat); // Top border
		this.createBar(headerFormat); // Header
		this.createBar(borderFormat); // Bottom border
	}

	/**
	 * Creates a data row
	 */
	createRow(rowFormat, payload) {
		if (!payload || typeof payload !== 'object') {
			throw new Error('Payload must be an object');
		}
		return this.createBar(rowFormat, payload);
	}

	/**
	 * Creates a border element
	 */
	createBorder(borderFormat) {
		return this.createBar(borderFormat);
	}
}

/**
 * Creates a bordered header for progress tables.
 * @param {Object} multibar - The multibar instance.
 * @param {string} headerFormat - Format string for the header row.
 * @param {string} borderFormat - Format string for the top and bottom borders.
 * @returns {void}
 */
export function createProgressHeader(multibar, headerFormat, borderFormat) {
	const factory = new ProgressBarFactory(multibar);
	factory.createHeader(headerFormat, borderFormat);
}

/**
 * Creates a formatted data row for progress tables.
 * @param {Object} multibar - The multibar instance.
 * @param {string} rowFormat - Format string for the row.
 * @param {Object} payload - Data payload for the row format.
 * @returns {void}
 */
export function createProgressRow(multibar, rowFormat, payload) {
	const factory = new ProgressBarFactory(multibar);
	factory.createRow(rowFormat, payload);
}

/**
 * Creates a border row for progress tables.
 * @param {Object} multibar - The multibar instance.
 * @param {string} borderFormat - Format string for the border.
 * @returns {void}
 */
export function createBorder(multibar, borderFormat) {
	const factory = new ProgressBarFactory(multibar);
	factory.createBorder(borderFormat);
}

/**
 * Builder for creating progress tables with consistent formatting
 */
export class ProgressTableBuilder {
	constructor(multibar) {
		this.factory = new ProgressBarFactory(multibar);
		this.borderStyle = '─';
		this.columnSeparator = '|';
	}

	/**
	 * Shows a formatted table header
	 */
	showHeader(columns = null) {
		// Default columns for task display
		const defaultColumns = [
			{ text: 'TASK', width: 6 },
			{ text: 'PRI', width: 5 },
			{ text: 'TITLE', width: 64 }
		];

		const cols = columns || defaultColumns;
		const headerText = ' ' + cols.map((c) => c.text).join(' | ') + ' ';
		const borderLine = this.createBorderLine(cols.map((c) => c.width));

		this.factory.createHeader(headerText, borderLine);
		return this;
	}

	/**
	 * Creates a border line based on column widths
	 */
	createBorderLine(columnWidths) {
		return columnWidths
			.map((width) => this.borderStyle.repeat(width))
			.join('─┼─');
	}

	/**
	 * Adds a task row to the table
	 */
	addTaskRow(taskId, priority, title) {
		const format = ` ${taskId} | ${priority} | {title}`;
		this.factory.createRow(format, { title });

		// Add separator after each row
		const borderLine = '------+-----+' + '─'.repeat(64);
		this.factory.createBorder(borderLine);
		return this;
	}

	/**
	 * Creates a summary row
	 */
	addSummaryRow(label, value) {
		const format = ` ${label}: {value}`;
		this.factory.createRow(format, { value });
		return this;
	}
}
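
/*
 * Illustrative table sketch (not in the original source), assuming a
 * cli-progress MultiBar instance named `multibar`:
 *
 *   const table = new ProgressTableBuilder(multibar);
 *   table
 *     .showHeader()
 *     .addTaskRow(1, '●●●', 'Set up project scaffolding')
 *     .addSummaryRow('Total', '1 task');
 */
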
@@ -1,273 +0,0 @@
/**
 * indicators.js
 * UI functions for displaying priority and complexity indicators in different contexts
 */

import chalk from 'chalk';
import { TASK_PRIORITY_OPTIONS } from '../constants/task-priority.js';

// Extract priority values for cleaner object keys
const [HIGH, MEDIUM, LOW] = TASK_PRIORITY_OPTIONS;

// Cache for generated indicators
const INDICATOR_CACHE = new Map();

/**
 * Base configuration for indicator systems
 */
class IndicatorConfig {
	constructor(name, levels, colors, thresholds = null) {
		this.name = name;
		this.levels = levels;
		this.colors = colors;
		this.thresholds = thresholds;
	}

	getColor(level) {
		return this.colors[level] || chalk.gray;
	}

	getLevelFromScore(score) {
		if (!this.thresholds) {
			throw new Error(`${this.name} does not support score-based levels`);
		}

		if (score >= 7) return this.levels[0]; // high
		if (score <= 3) return this.levels[2]; // low
		return this.levels[1]; // medium
	}
}

/**
 * Visual style definitions
 */
const VISUAL_STYLES = {
	cli: {
		filled: '●',
		empty: '○'
	},
	statusBar: {
		high: '⋮',
		medium: ':',
		low: '.'
	},
	mcp: {
		high: '🔴',
		medium: '🟠',
		low: '🟢'
	}
};

/**
 * Priority configuration
 */
const PRIORITY_CONFIG = new IndicatorConfig('priority', [HIGH, MEDIUM, LOW], {
	[HIGH]: chalk.hex('#CC0000'),
	[MEDIUM]: chalk.hex('#FF8800'),
	[LOW]: chalk.yellow
});

/**
 * Generates CLI indicator with intensity
 */
function generateCliIndicator(intensity, color) {
	const filled = VISUAL_STYLES.cli.filled;
	const empty = VISUAL_STYLES.cli.empty;

	let indicator = '';
	for (let i = 0; i < 3; i++) {
		if (i < intensity) {
			indicator += color(filled);
		} else {
			indicator += chalk.white(empty);
		}
	}
	return indicator;
}

/**
 * Get intensity level from priority/complexity level
 */
function getIntensityFromLevel(level, levels) {
	const index = levels.indexOf(level);
	return 3 - index; // high=3, medium=2, low=1
}

/**
 * Generic cached indicator getter
 * @param {string} cacheKey - Cache key for the indicators
 * @param {Function} generator - Function to generate the indicators
 * @returns {Object} Cached or newly generated indicators
 */
function getCachedIndicators(cacheKey, generator) {
	if (INDICATOR_CACHE.has(cacheKey)) {
		return INDICATOR_CACHE.get(cacheKey);
	}

	const indicators = generator();
	INDICATOR_CACHE.set(cacheKey, indicators);
	return indicators;
}

/**
 * Get priority indicators for MCP context (single emojis)
 * @returns {Object} Priority to emoji mapping
 */
export function getMcpPriorityIndicators() {
	return getCachedIndicators('mcp-priority-all', () => ({
		[HIGH]: VISUAL_STYLES.mcp.high,
		[MEDIUM]: VISUAL_STYLES.mcp.medium,
		[LOW]: VISUAL_STYLES.mcp.low
	}));
}

/**
 * Get priority indicators for CLI context (colored dots with visual hierarchy)
 * @returns {Object} Priority to colored dot string mapping
 */
export function getCliPriorityIndicators() {
	return getCachedIndicators('cli-priority-all', () => {
		const indicators = {};
		PRIORITY_CONFIG.levels.forEach((level) => {
			const intensity = getIntensityFromLevel(level, PRIORITY_CONFIG.levels);
			const color = PRIORITY_CONFIG.getColor(level);
			indicators[level] = generateCliIndicator(intensity, color);
		});
		return indicators;
	});
}

/**
 * Get priority indicators for status bars (simplified single character versions)
 * @returns {Object} Priority to single character indicator mapping
 */
export function getStatusBarPriorityIndicators() {
	return getCachedIndicators('statusbar-priority-all', () => {
		const indicators = {};
		PRIORITY_CONFIG.levels.forEach((level, index) => {
			const style =
				index === 0
					? VISUAL_STYLES.statusBar.high
					: index === 1
						? VISUAL_STYLES.statusBar.medium
						: VISUAL_STYLES.statusBar.low;
			const color = PRIORITY_CONFIG.getColor(level);
			indicators[level] = color(style);
		});
		return indicators;
	});
}

/**
 * Get priority colors for consistent styling
 * @returns {Object} Priority to chalk color function mapping
 */
export function getPriorityColors() {
	return {
		[HIGH]: PRIORITY_CONFIG.colors[HIGH],
		[MEDIUM]: PRIORITY_CONFIG.colors[MEDIUM],
		[LOW]: PRIORITY_CONFIG.colors[LOW]
	};
}

/**
 * Get priority indicators based on context
 * @param {boolean} isMcp - Whether this is for MCP context (true) or CLI context (false)
 * @returns {Object} Priority to indicator mapping
 */
export function getPriorityIndicators(isMcp = false) {
	return isMcp ? getMcpPriorityIndicators() : getCliPriorityIndicators();
}

/**
 * Get a specific priority indicator
 * @param {string} priority - The priority level ('high', 'medium', 'low')
 * @param {boolean} isMcp - Whether this is for MCP context
 * @returns {string} The indicator string for the priority
 */
export function getPriorityIndicator(priority, isMcp = false) {
	const indicators = getPriorityIndicators(isMcp);
	return indicators[priority] || indicators[MEDIUM];
}

// ============================================================================
// Complexity Indicators
// ============================================================================

/**
 * Complexity configuration
 */
const COMPLEXITY_CONFIG = new IndicatorConfig(
	'complexity',
	['high', 'medium', 'low'],
	{
		high: chalk.hex('#CC0000'),
		medium: chalk.hex('#FF8800'),
		low: chalk.green
	},
	{
		high: (score) => score >= 7,
		medium: (score) => score >= 4 && score <= 6,
		low: (score) => score <= 3
	}
);

/**
 * Get complexity indicators for CLI context (colored dots with visual hierarchy)
 * Complexity scores: 1-3 (low), 4-6 (medium), 7-10 (high)
 * @returns {Object} Complexity level to colored dot string mapping
 */
export function getCliComplexityIndicators() {
	return getCachedIndicators('cli-complexity-all', () => {
		const indicators = {};
		COMPLEXITY_CONFIG.levels.forEach((level) => {
			const intensity = getIntensityFromLevel(level, COMPLEXITY_CONFIG.levels);
			const color = COMPLEXITY_CONFIG.getColor(level);
			indicators[level] = generateCliIndicator(intensity, color);
		});
		return indicators;
	});
}

/**
 * Get complexity indicators for status bars (simplified single character versions)
 * @returns {Object} Complexity level to single character indicator mapping
 */
export function getStatusBarComplexityIndicators() {
	return getCachedIndicators('statusbar-complexity-all', () => {
		const indicators = {};
		COMPLEXITY_CONFIG.levels.forEach((level, index) => {
			const style =
				index === 0
					? VISUAL_STYLES.statusBar.high
					: index === 1
						? VISUAL_STYLES.statusBar.medium
						: VISUAL_STYLES.statusBar.low;
			const color = COMPLEXITY_CONFIG.getColor(level);
			indicators[level] = color(style);
		});
		return indicators;
	});
}

/**
 * Get complexity colors for consistent styling
 * @returns {Object} Complexity level to chalk color function mapping
 */
export function getComplexityColors() {
	return { ...COMPLEXITY_CONFIG.colors };
}

/**
 * Get a specific complexity indicator based on score
 * @param {number} score - The complexity score (1-10)
 * @param {boolean} statusBar - Whether to return status bar version (single char)
 * @returns {string} The indicator string for the complexity level
 */
export function getComplexityIndicator(score, statusBar = false) {
	const level = COMPLEXITY_CONFIG.getLevelFromScore(score);
	const indicators = statusBar
		? getStatusBarComplexityIndicators()
		: getCliComplexityIndicators();
	return indicators[level];
}
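
/*
 * Illustrative usage sketch (not in the original source), assuming
 * TASK_PRIORITY_OPTIONS is ['high', 'medium', 'low']:
 *
 *   getPriorityIndicator('high');        // '●●●' in red (CLI context)
 *   getPriorityIndicator('high', true);  // '🔴' (MCP context)
 *   getComplexityIndicator(8);           // high-complexity dots (score >= 7)
 *   getComplexityIndicator(2, true);     // '.' single-char status bar form
 */
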
@@ -1,477 +0,0 @@
|
||||
/**
|
||||
* parse-prd.js
|
||||
* UI functions specifically for PRD parsing operations
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import Table from 'cli-table3';
|
||||
import { formatElapsedTime } from '../utils/format.js';
|
||||
|
||||
// Constants
|
||||
const CONSTANTS = {
|
||||
BAR_WIDTH: 40,
|
||||
TABLE_COL_WIDTHS: [28, 50],
|
||||
DEFAULT_MODEL: 'Default',
|
||||
DEFAULT_TEMPERATURE: 0.7
|
||||
};
|
||||
|
||||
const PRIORITIES = {
|
||||
HIGH: 'high',
|
||||
MEDIUM: 'medium',
|
||||
LOW: 'low'
|
||||
};
|
||||
|
||||
const PRIORITY_COLORS = {
|
||||
[PRIORITIES.HIGH]: '#CC0000',
|
||||
[PRIORITIES.MEDIUM]: '#FF8800',
|
||||
[PRIORITIES.LOW]: '#FFCC00'
|
||||
};
|
||||
|
||||
// Reusable box styles
|
||||
const BOX_STYLES = {
|
||||
main: {
|
||||
padding: { top: 1, bottom: 1, left: 2, right: 2 },
|
||||
margin: { top: 0, bottom: 0 },
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round'
|
||||
},
|
||||
summary: {
|
||||
padding: { top: 1, right: 1, bottom: 1, left: 1 },
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1, right: 1, bottom: 1, left: 0 }
|
||||
},
|
||||
warning: {
|
||||
padding: 1,
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1, bottom: 1 }
|
||||
},
|
||||
nextSteps: {
|
||||
padding: 1,
|
||||
borderColor: 'cyan',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1, right: 0, bottom: 1, left: 0 }
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Helper function for building main message content
|
||||
* @param {Object} params - Message parameters
|
||||
* @param {string} params.prdFilePath - Path to the PRD file
|
||||
* @param {string} params.outputPath - Path where tasks will be saved
|
||||
* @param {number} params.numTasks - Number of tasks to generate
|
||||
* @param {string} params.model - AI model name
|
||||
* @param {number} params.temperature - AI temperature setting
|
||||
* @param {boolean} params.append - Whether appending to existing tasks
|
||||
* @param {boolean} params.research - Whether research mode is enabled
|
||||
* @returns {string} The formatted message content
|
||||
*/
|
||||
function buildMainMessage({
|
||||
prdFilePath,
|
||||
outputPath,
|
||||
numTasks,
|
||||
model,
|
||||
temperature,
|
||||
append,
|
||||
research
|
||||
}) {
|
||||
const actionVerb = append ? 'Appending' : 'Generating';
|
||||
|
||||
let modelLine = `Model: ${model} | Temperature: ${temperature}`;
|
||||
if (research) {
|
||||
modelLine += ` | ${chalk.cyan.bold('🔬 Research Mode')}`;
|
||||
}
|
||||
|
||||
return (
|
||||
chalk.bold(`🤖 Parsing PRD and ${actionVerb} Tasks`) +
|
||||
'\n' +
|
||||
chalk.dim(modelLine) +
|
||||
'\n\n' +
|
||||
chalk.blue(`Input: ${prdFilePath}`) +
|
||||
'\n' +
|
||||
chalk.blue(`Output: ${outputPath}`) +
|
||||
'\n' +
|
||||
chalk.blue(`Tasks to ${append ? 'Append' : 'Generate'}: ${numTasks}`)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper function for displaying the main message box
|
||||
* @param {string} message - The message content to display in the box
|
||||
*/
|
||||
function displayMainMessageBox(message) {
|
||||
console.log(boxen(message, BOX_STYLES.main));
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper function for displaying append mode notice
|
||||
* @param {number} existingTasksCount - Number of existing tasks
|
||||
* @param {number} nextId - Next ID to be used
|
||||
*/
|
||||
function displayAppendModeNotice(existingTasksCount, nextId) {
|
||||
console.log(
|
||||
chalk.yellow.bold('📝 Append mode') +
|
||||
` - Adding to ${existingTasksCount} existing tasks (next ID: ${nextId})`
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper function for force mode messages
|
||||
* @param {boolean} append - Whether in append mode
|
||||
* @returns {string} The formatted force mode message
|
||||
*/
|
||||
function createForceMessage(append) {
|
||||
const baseMessage = chalk.red.bold('⚠️ Force flag enabled');
|
||||
return append
|
||||
? `${baseMessage} - Will overwrite if conflicts occur`
|
||||
: `${baseMessage} - Overwriting existing tasks`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the start of PRD parsing with a boxen announcement
|
||||
* @param {Object} options - Options for PRD parsing start
|
||||
* @param {string} options.prdFilePath - Path to the PRD file being parsed
|
||||
* @param {string} options.outputPath - Path where the tasks will be saved
|
||||
* @param {number} options.numTasks - Number of tasks to generate
|
||||
* @param {string} [options.model] - AI model name
|
||||
* @param {number} [options.temperature] - AI temperature setting
|
||||
* @param {boolean} [options.append=false] - Whether to append to existing tasks
|
||||
* @param {boolean} [options.research=false] - Whether research mode is enabled
|
||||
* @param {boolean} [options.force=false] - Whether force mode is enabled
|
||||
* @param {Array} [options.existingTasks=[]] - Existing tasks array
|
||||
* @param {number} [options.nextId=1] - Next ID to be used
|
||||
*/
|
||||
function displayParsePrdStart({
|
||||
prdFilePath,
|
||||
outputPath,
|
||||
numTasks,
|
||||
model = CONSTANTS.DEFAULT_MODEL,
|
||||
temperature = CONSTANTS.DEFAULT_TEMPERATURE,
|
||||
append = false,
|
||||
research = false,
|
||||
force = false,
|
||||
existingTasks = [],
|
||||
nextId = 1
|
||||
}) {
|
||||
// Input validation
|
||||
if (
|
||||
!prdFilePath ||
|
||||
typeof prdFilePath !== 'string' ||
|
||||
prdFilePath.trim() === ''
|
||||
) {
|
||||
throw new Error('prdFilePath is required and must be a non-empty string');
|
||||
}
|
||||
if (
|
||||
!outputPath ||
|
||||
typeof outputPath !== 'string' ||
|
||||
outputPath.trim() === ''
|
||||
) {
|
||||
throw new Error('outputPath is required and must be a non-empty string');
|
||||
}
|
||||
|
||||
// Build and display the main message box
|
||||
const message = buildMainMessage({
|
||||
prdFilePath,
|
||||
outputPath,
|
||||
numTasks,
|
||||
model,
|
||||
temperature,
|
||||
append,
|
||||
research
|
||||
});
|
||||
displayMainMessageBox(message);
|
||||
|
||||
// Display append/force notices beneath the boxen if either flag is set
|
||||
if (append || force) {
|
||||
// Add append mode details if enabled
|
||||
if (append) {
|
||||
displayAppendModeNotice(existingTasks.length, nextId);
|
||||
}
|
||||
|
||||
// Add force mode details if enabled
|
||||
if (force) {
|
||||
console.log(createForceMessage(append));
|
||||
}
|
||||
|
||||
// Add a blank line after notices for spacing
|
||||
console.log();
|
||||
}
|
||||
}

/**
 * Calculate priority statistics
 * @param {Object} taskPriorities - Priority counts object
 * @param {number} totalTasks - Total number of tasks
 * @returns {Object} Priority statistics with counts and percentages
 */
function calculatePriorityStats(taskPriorities, totalTasks) {
	const stats = {};

	Object.values(PRIORITIES).forEach((priority) => {
		const count = taskPriorities[priority] || 0;
		stats[priority] = {
			count,
			percentage: totalTasks > 0 ? Math.round((count / totalTasks) * 100) : 0
		};
	});

	return stats;
}

/**
 * Calculate bar character distribution for priorities
 * @param {Object} priorityStats - Priority statistics
 * @param {number} totalTasks - Total number of tasks
 * @returns {Object} Character counts for each priority
 */
function calculateBarDistribution(priorityStats, totalTasks) {
	const barWidth = CONSTANTS.BAR_WIDTH;
	const distribution = {};

	if (totalTasks === 0) {
		Object.values(PRIORITIES).forEach((priority) => {
			distribution[priority] = 0;
		});
		return distribution;
	}

	// Calculate raw proportions
	const rawChars = {};
	Object.values(PRIORITIES).forEach((priority) => {
		rawChars[priority] =
			(priorityStats[priority].count / totalTasks) * barWidth;
	});

	// Initial distribution - floor values
	Object.values(PRIORITIES).forEach((priority) => {
		distribution[priority] = Math.floor(rawChars[priority]);
	});

	// Ensure non-zero priorities get at least 1 character
	Object.values(PRIORITIES).forEach((priority) => {
		if (priorityStats[priority].count > 0 && distribution[priority] === 0) {
			distribution[priority] = 1;
		}
	});

	// Distribute remaining characters based on decimal parts
	const currentTotal = Object.values(distribution).reduce(
		(sum, val) => sum + val,
		0
	);
	const remainingChars = barWidth - currentTotal;

	if (remainingChars > 0) {
		const decimals = Object.values(PRIORITIES)
			.map((priority) => ({
				priority,
				decimal: rawChars[priority] - Math.floor(rawChars[priority])
			}))
			.sort((a, b) => b.decimal - a.decimal);

		for (let i = 0; i < remainingChars && i < decimals.length; i++) {
			distribution[decimals[i].priority]++;
		}
	}

	return distribution;
}
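
The remainder step above is a largest-remainder apportionment: floor every share, then hand leftover characters to the priorities with the biggest fractional parts so the bar always sums to the full width. A standalone sketch (a bar width of 20 is assumed here for illustration; the real value comes from `CONSTANTS.BAR_WIDTH`):

```js
// Standalone largest-remainder sketch with three equal priorities
const barWidth = 20; // assumed for illustration
const counts = { high: 1, medium: 1, low: 1 };
const total = 3;
const raw = {};
const dist = {};
for (const p of Object.keys(counts)) {
	raw[p] = (counts[p] / total) * barWidth; // ~6.67 each
	dist[p] = Math.floor(raw[p]); // 6 each, only 18 of 20 characters
}
let remaining = barWidth - Object.values(dist).reduce((s, v) => s + v, 0); // 2
Object.keys(counts)
	.sort((a, b) => (raw[b] - Math.floor(raw[b])) - (raw[a] - Math.floor(raw[a])))
	.slice(0, remaining)
	.forEach((p) => dist[p]++);
console.log(dist); // two priorities get 7, one gets 6 - always sums to 20
```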

/**
 * Create priority distribution bar visual
 * @param {Object} barDistribution - Character distribution for priorities
 * @returns {string} Visual bar string
 */
function createPriorityBar(barDistribution) {
	let bar = '';

	bar += chalk.hex(PRIORITY_COLORS[PRIORITIES.HIGH])(
		'█'.repeat(barDistribution[PRIORITIES.HIGH])
	);
	bar += chalk.hex(PRIORITY_COLORS[PRIORITIES.MEDIUM])(
		'█'.repeat(barDistribution[PRIORITIES.MEDIUM])
	);
	bar += chalk.yellow('█'.repeat(barDistribution[PRIORITIES.LOW]));

	const totalChars = Object.values(barDistribution).reduce(
		(sum, val) => sum + val,
		0
	);
	if (totalChars < CONSTANTS.BAR_WIDTH) {
		bar += chalk.gray('░'.repeat(CONSTANTS.BAR_WIDTH - totalChars));
	}

	return bar;
}

/**
 * Build priority distribution row for table
 * @param {Object} priorityStats - Priority statistics
 * @returns {Array} Table row for priority distribution
 */
function buildPriorityRow(priorityStats) {
	const parts = [];

	Object.entries(PRIORITIES).forEach(([key, priority]) => {
		const stats = priorityStats[priority];
		const color =
			priority === PRIORITIES.HIGH
				? chalk.hex(PRIORITY_COLORS[PRIORITIES.HIGH])
				: priority === PRIORITIES.MEDIUM
					? chalk.hex(PRIORITY_COLORS[PRIORITIES.MEDIUM])
					: chalk.yellow;

		const label = key.charAt(0) + key.slice(1).toLowerCase();
		parts.push(
			`${color.bold(stats.count)} ${color(label)} (${stats.percentage}%)`
		);
	});

	return [chalk.cyan('Priority distribution:'), parts.join(' · ')];
}

/**
 * Display a summary of the PRD parsing results
 * @param {Object} summary - Summary of the parsing results
 * @param {number} summary.totalTasks - Total number of tasks generated
 * @param {string} summary.prdFilePath - Path to the PRD file
 * @param {string} summary.outputPath - Path where the tasks were saved
 * @param {number} summary.elapsedTime - Total elapsed time in seconds
 * @param {Object} summary.taskPriorities - Breakdown of tasks by category/priority
 * @param {boolean} summary.usedFallback - Whether fallback parsing was used
 * @param {string} summary.actionVerb - Whether tasks were 'generated' or 'appended'
 */
function displayParsePrdSummary(summary) {
	const {
		totalTasks,
		taskPriorities = {},
		prdFilePath,
		outputPath,
		elapsedTime,
		usedFallback = false,
		actionVerb = 'generated'
	} = summary;

	// Format the elapsed time
	const timeDisplay = formatElapsedTime(elapsedTime);

	// Create a table for better alignment
	const table = new Table({
		chars: {
			top: '',
			'top-mid': '',
			'top-left': '',
			'top-right': '',
			bottom: '',
			'bottom-mid': '',
			'bottom-left': '',
			'bottom-right': '',
			left: '',
			'left-mid': '',
			mid: '',
			'mid-mid': '',
			right: '',
			'right-mid': '',
			middle: ' '
		},
		style: { border: [], 'padding-left': 2 },
		colWidths: CONSTANTS.TABLE_COL_WIDTHS
	});

	// Basic info
	// Use the action verb to properly display if tasks were generated or appended
	table.push(
		[chalk.cyan(`Total tasks ${actionVerb}:`), chalk.bold(totalTasks)],
		[chalk.cyan('Processing time:'), chalk.bold(timeDisplay)]
	);

	// Priority distribution if available
	if (taskPriorities && Object.keys(taskPriorities).length > 0) {
		const priorityStats = calculatePriorityStats(taskPriorities, totalTasks);
		const priorityRow = buildPriorityRow(priorityStats);
		table.push(priorityRow);

		// Visual bar representation
		const barDistribution = calculateBarDistribution(priorityStats, totalTasks);
		const distributionBar = createPriorityBar(barDistribution);
		table.push([chalk.cyan('Distribution:'), distributionBar]);
	}

	// Add file paths
	table.push(
		[chalk.cyan('PRD source:'), chalk.italic(prdFilePath)],
		[chalk.cyan('Tasks file:'), chalk.italic(outputPath)]
	);

	// Add fallback parsing indicator if applicable
	if (usedFallback) {
		table.push([
			chalk.yellow('Fallback parsing:'),
			chalk.yellow('✓ Used fallback parsing')
		]);
	}

	// Final string output with title and footer
	const output = [
		chalk.bold.underline(
			`PRD Parsing Complete - Tasks ${actionVerb.charAt(0).toUpperCase() + actionVerb.slice(1)}`
		),
		'',
		table.toString()
	].join('\n');

	// Display the summary box
	console.log(boxen(output, BOX_STYLES.summary));

	// Show fallback parsing warning if needed
	if (usedFallback) {
		displayFallbackWarning();
	}

	// Show next steps
	displayNextSteps();
}

/**
 * Display fallback parsing warning
 */
function displayFallbackWarning() {
	const warningContent =
		chalk.yellow.bold('⚠️ Fallback Parsing Used') +
		'\n\n' +
		chalk.white(
			'The system used fallback parsing to complete task generation.'
		) +
		'\n' +
		chalk.white(
			'This typically happens when streaming JSON parsing is incomplete.'
		) +
		'\n' +
		chalk.white('Your tasks were successfully generated, but consider:') +
		'\n' +
		chalk.white('• Reviewing task completeness') +
		'\n' +
		chalk.white('• Checking for any missing details') +
		'\n\n' +
		chalk.white("This is normal and usually doesn't indicate any issues.");

	console.log(boxen(warningContent, BOX_STYLES.warning));
}

/**
 * Display next steps after parsing
 */
function displayNextSteps() {
	const stepsContent =
		chalk.white.bold('Next Steps:') +
		'\n\n' +
		`${chalk.cyan('1.')} Run ${chalk.yellow('task-master list')} to view all tasks\n` +
		`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks\n` +
		`${chalk.cyan('3.')} Run ${chalk.yellow('task-master analyze-complexity')} to analyze task complexity`;

	console.log(boxen(stepsContent, BOX_STYLES.nextSteps));
}

export { displayParsePrdStart, displayParsePrdSummary, formatElapsedTime };
@@ -1,12 +0,0 @@
// src/utils/format.js

/**
 * Formats elapsed time as 0m 00s.
 * @param {number} seconds - Elapsed time in seconds
 * @returns {string} Formatted time string
 */
export function formatElapsedTime(seconds) {
	const minutes = Math.floor(seconds / 60);
	const remainingSeconds = Math.floor(seconds % 60);
	return `${minutes}m ${remainingSeconds.toString().padStart(2, '0')}s`;
}
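
A quick usage check, with outputs following directly from the definition above:

```js
console.log(formatElapsedTime(0));     // "0m 00s"
console.log(formatElapsedTime(65));    // "1m 05s"
console.log(formatElapsedTime(125.9)); // "2m 05s" - fractional seconds are floored
```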
@@ -1,490 +0,0 @@
import { JSONParser } from '@streamparser/json';

/**
 * Custom error class for streaming-related failures
 * Provides error codes for robust error handling without string matching
 */
export class StreamingError extends Error {
	constructor(message, code) {
		super(message);
		this.name = 'StreamingError';
		this.code = code;

		// Maintain proper stack trace (V8 engines)
		if (Error.captureStackTrace) {
			Error.captureStackTrace(this, StreamingError);
		}
	}
}

/**
 * Standard streaming error codes
 */
export const STREAMING_ERROR_CODES = {
	NOT_ASYNC_ITERABLE: 'STREAMING_NOT_SUPPORTED',
	STREAM_PROCESSING_FAILED: 'STREAM_PROCESSING_FAILED',
	STREAM_NOT_ITERABLE: 'STREAM_NOT_ITERABLE',
	BUFFER_SIZE_EXCEEDED: 'BUFFER_SIZE_EXCEEDED'
};
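
Because failures carry a stable `code`, callers can branch on it instead of matching message strings. A minimal sketch (the awaited operation is hypothetical, and this assumes an async context):

```js
// Sketch: branch on the error code rather than the message text
try {
	await someStreamingOperation(); // hypothetical caller-side operation
} catch (err) {
	if (
		err instanceof StreamingError &&
		err.code === STREAMING_ERROR_CODES.BUFFER_SIZE_EXCEEDED
	) {
		// e.g. retry with a larger maxBufferSize
	} else {
		throw err;
	}
}
```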

/**
 * Default maximum buffer size (1MB)
 */
export const DEFAULT_MAX_BUFFER_SIZE = 1024 * 1024; // 1MB in bytes

/**
 * Configuration options for the streaming JSON parser
 */
class StreamParserConfig {
	constructor(config = {}) {
		this.jsonPaths = config.jsonPaths;
		this.onProgress = config.onProgress;
		this.onError = config.onError;
		this.estimateTokens =
			config.estimateTokens || ((text) => Math.ceil(text.length / 4));
		this.expectedTotal = config.expectedTotal || 0;
		this.fallbackItemExtractor = config.fallbackItemExtractor;
		this.itemValidator =
			config.itemValidator || StreamParserConfig.defaultItemValidator;
		this.maxBufferSize = config.maxBufferSize || DEFAULT_MAX_BUFFER_SIZE;

		this.validate();
	}

	validate() {
		if (!this.jsonPaths || !Array.isArray(this.jsonPaths)) {
			throw new Error('jsonPaths is required and must be an array');
		}
		if (this.jsonPaths.length === 0) {
			throw new Error('jsonPaths array cannot be empty');
		}
		if (this.maxBufferSize <= 0) {
			throw new Error('maxBufferSize must be positive');
		}
		if (this.expectedTotal < 0) {
			throw new Error('expectedTotal cannot be negative');
		}
		if (this.estimateTokens && typeof this.estimateTokens !== 'function') {
			throw new Error('estimateTokens must be a function');
		}
		if (this.onProgress && typeof this.onProgress !== 'function') {
			throw new Error('onProgress must be a function');
		}
		if (this.onError && typeof this.onError !== 'function') {
			throw new Error('onError must be a function');
		}
		if (
			this.fallbackItemExtractor &&
			typeof this.fallbackItemExtractor !== 'function'
		) {
			throw new Error('fallbackItemExtractor must be a function');
		}
		if (this.itemValidator && typeof this.itemValidator !== 'function') {
			throw new Error('itemValidator must be a function');
		}
	}

	static defaultItemValidator(item) {
		return (
			item && item.title && typeof item.title === 'string' && item.title.trim()
		);
	}
}

/**
 * Manages progress tracking and metadata
 */
class ProgressTracker {
	constructor(config) {
		this.onProgress = config.onProgress;
		this.onError = config.onError;
		this.estimateTokens = config.estimateTokens;
		this.expectedTotal = config.expectedTotal;
		this.parsedItems = [];
		this.accumulatedText = '';
	}

	addItem(item) {
		this.parsedItems.push(item);
		this.reportProgress(item);
	}

	addText(chunk) {
		this.accumulatedText += chunk;
	}

	getMetadata() {
		return {
			currentCount: this.parsedItems.length,
			expectedTotal: this.expectedTotal,
			accumulatedText: this.accumulatedText,
			estimatedTokens: this.estimateTokens(this.accumulatedText)
		};
	}

	reportProgress(item) {
		if (!this.onProgress) return;

		try {
			this.onProgress(item, this.getMetadata());
		} catch (progressError) {
			this.handleProgressError(progressError);
		}
	}

	handleProgressError(error) {
		if (this.onError) {
			this.onError(new Error(`Progress callback failed: ${error.message}`));
		}
	}
}

/**
 * Handles stream processing with different stream types
 */
class StreamProcessor {
	constructor(onChunk) {
		this.onChunk = onChunk;
	}

	async process(textStream) {
		const streamHandler = this.detectStreamType(textStream);
		await streamHandler(textStream);
	}

	detectStreamType(textStream) {
		// Check for textStream property
		if (this.hasAsyncIterator(textStream?.textStream)) {
			return (stream) => this.processTextStream(stream.textStream);
		}

		// Check for fullStream property
		if (this.hasAsyncIterator(textStream?.fullStream)) {
			return (stream) => this.processFullStream(stream.fullStream);
		}

		// Check if stream itself is iterable
		if (this.hasAsyncIterator(textStream)) {
			return (stream) => this.processDirectStream(stream);
		}

		throw new StreamingError(
			'Stream object is not iterable - no textStream, fullStream, or direct async iterator found',
			STREAMING_ERROR_CODES.STREAM_NOT_ITERABLE
		);
	}

	hasAsyncIterator(obj) {
		return obj && typeof obj[Symbol.asyncIterator] === 'function';
	}

	async processTextStream(stream) {
		for await (const chunk of stream) {
			this.onChunk(chunk);
		}
	}

	async processFullStream(stream) {
		for await (const chunk of stream) {
			if (chunk.type === 'text-delta' && chunk.textDelta) {
				this.onChunk(chunk.textDelta);
			}
		}
	}

	async processDirectStream(stream) {
		for await (const chunk of stream) {
			this.onChunk(chunk);
		}
	}
}
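
The detection above is duck typing: anything exposing `Symbol.asyncIterator`, either directly or through a `textStream`/`fullStream` property, is accepted. A sketch feeding it a plain async generator (the generator and its chunks are made up; assumes an async context):

```js
// Sketch: a plain async generator satisfies the direct-stream branch
async function* fakeStream() {
	yield '{"tasks":';
	yield '[]}';
}

const chunks = [];
const processor = new StreamProcessor((chunk) => chunks.push(chunk));
await processor.process(fakeStream()); // dispatches to processDirectStream
// chunks is now ['{"tasks":', '[]}']
```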

/**
 * Manages JSON parsing with the streaming parser
 */
class JSONStreamParser {
	constructor(config, progressTracker) {
		this.config = config;
		this.progressTracker = progressTracker;
		this.parser = new JSONParser({ paths: config.jsonPaths });
		this.setupHandlers();
	}

	setupHandlers() {
		this.parser.onValue = (value, key, parent, stack) => {
			this.handleParsedValue(value);
		};

		this.parser.onError = (error) => {
			this.handleParseError(error);
		};
	}

	handleParsedValue(value) {
		// Extract the actual item object from the parser's nested structure
		const item = value.value || value;

		if (this.config.itemValidator(item)) {
			this.progressTracker.addItem(item);
		}
	}

	handleParseError(error) {
		if (this.config.onError) {
			this.config.onError(new Error(`JSON parsing error: ${error.message}`));
		}
		// Don't throw here - we'll handle this in the fallback logic
	}

	write(chunk) {
		this.parser.write(chunk);
	}

	end() {
		this.parser.end();
	}
}

/**
 * Handles fallback parsing when streaming fails
 */
class FallbackParser {
	constructor(config, progressTracker) {
		this.config = config;
		this.progressTracker = progressTracker;
	}

	async attemptParsing() {
		if (!this.shouldAttemptFallback()) {
			return [];
		}

		try {
			return await this.parseFallbackItems();
		} catch (parseError) {
			this.handleFallbackError(parseError);
			return [];
		}
	}

	shouldAttemptFallback() {
		return (
			this.config.expectedTotal > 0 &&
			this.progressTracker.parsedItems.length < this.config.expectedTotal &&
			this.progressTracker.accumulatedText &&
			this.config.fallbackItemExtractor
		);
	}

	async parseFallbackItems() {
		const jsonText = this._cleanJsonText(this.progressTracker.accumulatedText);
		const fullResponse = JSON.parse(jsonText);
		const fallbackItems = this.config.fallbackItemExtractor(fullResponse);

		if (!Array.isArray(fallbackItems)) {
			return [];
		}

		return this._processNewItems(fallbackItems);
	}

	_cleanJsonText(text) {
		// Remove markdown code block wrappers and trim whitespace
		return text
			.replace(/^```(?:json)?\s*\n?/i, '')
			.replace(/\n?```\s*$/i, '')
			.trim();
	}

	_processNewItems(fallbackItems) {
		// Only add items we haven't already parsed
		const itemsToAdd = fallbackItems.slice(
			this.progressTracker.parsedItems.length
		);
		const newItems = [];

		for (const item of itemsToAdd) {
			if (this.config.itemValidator(item)) {
				newItems.push(item);
				this.progressTracker.addItem(item);
			}
		}

		return newItems;
	}

	handleFallbackError(error) {
		if (this.progressTracker.parsedItems.length === 0) {
			throw new Error(`Failed to parse AI response as JSON: ${error.message}`);
		}
		// If we have some items from streaming, continue with those
	}
}
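
`_cleanJsonText` exists because model output often arrives wrapped in a markdown fence; the two regexes strip only a leading and trailing fence. An illustration using the same regexes (the wrapped payload is a made-up example):

```js
// Illustrative only - the wrapped payload is invented for the example
const wrapped = '```json\n{"tasks": [{"title": "A"}]}\n```';
const cleaned = wrapped
	.replace(/^```(?:json)?\s*\n?/i, '')
	.replace(/\n?```\s*$/i, '')
	.trim();
console.log(JSON.parse(cleaned)); // { tasks: [ { title: 'A' } ] }
```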

/**
 * Buffer size validator
 */
class BufferSizeValidator {
	constructor(maxSize) {
		this.maxSize = maxSize;
		this.currentSize = 0;
	}

	validateChunk(existingText, newChunk) {
		const newSize = Buffer.byteLength(existingText + newChunk, 'utf8');

		if (newSize > this.maxSize) {
			throw new StreamingError(
				`Buffer size exceeded: ${newSize} bytes > ${this.maxSize} bytes maximum`,
				STREAMING_ERROR_CODES.BUFFER_SIZE_EXCEEDED
			);
		}

		this.currentSize = newSize;
	}
}

/**
 * Main orchestrator for stream parsing
 */
class StreamParserOrchestrator {
	constructor(config) {
		this.config = new StreamParserConfig(config);
		this.progressTracker = new ProgressTracker(this.config);
		this.bufferValidator = new BufferSizeValidator(this.config.maxBufferSize);
		this.jsonParser = new JSONStreamParser(this.config, this.progressTracker);
		this.fallbackParser = new FallbackParser(this.config, this.progressTracker);
	}

	async parse(textStream) {
		if (!textStream) {
			throw new Error('No text stream provided');
		}

		await this.processStream(textStream);
		await this.waitForParsingCompletion();

		const usedFallback = await this.attemptFallbackIfNeeded();

		return this.buildResult(usedFallback);
	}

	async processStream(textStream) {
		const processor = new StreamProcessor((chunk) => {
			this.bufferValidator.validateChunk(
				this.progressTracker.accumulatedText,
				chunk
			);
			this.progressTracker.addText(chunk);
			this.jsonParser.write(chunk);
		});

		try {
			await processor.process(textStream);
		} catch (streamError) {
			this.handleStreamError(streamError);
		}

		this.jsonParser.end();
	}

	handleStreamError(error) {
		// Re-throw StreamingError as-is, wrap other errors
		if (error instanceof StreamingError) {
			throw error;
		}
		throw new StreamingError(
			`Failed to process AI text stream: ${error.message}`,
			STREAMING_ERROR_CODES.STREAM_PROCESSING_FAILED
		);
	}

	async waitForParsingCompletion() {
		// Wait for final parsing to complete (JSON parser may still be processing)
		await new Promise((resolve) => setTimeout(resolve, 100));
	}

	async attemptFallbackIfNeeded() {
		const fallbackItems = await this.fallbackParser.attemptParsing();
		return fallbackItems.length > 0;
	}

	buildResult(usedFallback) {
		const metadata = this.progressTracker.getMetadata();

		return {
			items: this.progressTracker.parsedItems,
			accumulatedText: metadata.accumulatedText,
			estimatedTokens: metadata.estimatedTokens,
			usedFallback
		};
	}
}

/**
 * Parse a streaming JSON response with progress tracking
 *
 * Example with custom buffer size:
 * ```js
 * const result = await parseStream(stream, {
 *   jsonPaths: ['$.tasks.*'],
 *   maxBufferSize: 2 * 1024 * 1024 // 2MB
 * });
 * ```
 *
 * @param {Object} textStream - The AI service text stream object
 * @param {Object} config - Configuration options
 * @returns {Promise<Object>} Parsed result with metadata
 */
export async function parseStream(textStream, config = {}) {
	const orchestrator = new StreamParserOrchestrator(config);
	return orchestrator.parse(textStream);
}
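
Beyond the buffer-size example in the JSDoc, a sketch wiring up progress reporting and the fallback extractor. The stream object, expected count, and response shape here are assumptions for illustration:

```js
// Sketch: progress reporting while items stream in; `aiStream` is assumed
const { items, usedFallback } = await parseStream(aiStream, {
	jsonPaths: ['$.tasks.*'],
	expectedTotal: 10, // assumed count from the request
	onProgress: (item, meta) => {
		console.log(`parsed ${meta.currentCount}/${meta.expectedTotal}: ${item.title}`);
	},
	fallbackItemExtractor: (response) => response.tasks // shape is an assumption
});
```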

/**
 * Process different types of text streams
 * @param {Object} textStream - The stream object from AI service
 * @param {Function} onChunk - Callback for each text chunk
 */
export async function processTextStream(textStream, onChunk) {
	const processor = new StreamProcessor(onChunk);
	await processor.process(textStream);
}

/**
 * Attempt fallback JSON parsing when streaming parsing is incomplete
 * @param {string} accumulatedText - Complete accumulated text
 * @param {Array} existingItems - Items already parsed from streaming
 * @param {number} expectedTotal - Expected total number of items
 * @param {Object} config - Configuration for progress reporting
 * @returns {Promise<Array>} Additional items found via fallback parsing
 */
export async function attemptFallbackParsing(
	accumulatedText,
	existingItems,
	expectedTotal,
	config
) {
	// Create a temporary progress tracker for backward compatibility
	const progressTracker = new ProgressTracker({
		onProgress: config.onProgress,
		onError: config.onError,
		estimateTokens: config.estimateTokens,
		expectedTotal
	});

	progressTracker.parsedItems = existingItems;
	progressTracker.accumulatedText = accumulatedText;

	const fallbackParser = new FallbackParser(
		{
			...config,
			expectedTotal,
			itemValidator:
				config.itemValidator || StreamParserConfig.defaultItemValidator
		},
		progressTracker
	);

	return fallbackParser.attemptParsing();
}
@@ -1,189 +0,0 @@
import { StreamingError, STREAMING_ERROR_CODES } from './stream-parser.js';

/**
 * Utility class for managing timeouts in async operations
 * Reduces code duplication for timeout handling patterns
 */
export class TimeoutManager {
	/**
	 * Wraps a promise with a timeout that will reject if not resolved in time
	 *
	 * @param {Promise} promise - The promise to wrap with timeout
	 * @param {number} timeoutMs - Timeout duration in milliseconds
	 * @param {string} operationName - Name of the operation for error messages
	 * @returns {Promise} The result of the promise or throws timeout error
	 *
	 * @example
	 * const result = await TimeoutManager.withTimeout(
	 *   fetchData(),
	 *   5000,
	 *   'Data fetch operation'
	 * );
	 */
	static async withTimeout(promise, timeoutMs, operationName = 'Operation') {
		let timeoutHandle;

		const timeoutPromise = new Promise((_, reject) => {
			timeoutHandle = setTimeout(() => {
				reject(
					new StreamingError(
						`${operationName} timed out after ${timeoutMs / 1000} seconds`,
						STREAMING_ERROR_CODES.STREAM_PROCESSING_FAILED
					)
				);
			}, timeoutMs);
		});

		try {
			// Race between the actual promise and the timeout
			const result = await Promise.race([promise, timeoutPromise]);
			// Clear timeout if promise resolved first
			clearTimeout(timeoutHandle);
			return result;
		} catch (error) {
			// Always clear timeout on error
			clearTimeout(timeoutHandle);
			throw error;
		}
	}

	/**
	 * Wraps a promise with a timeout, but returns undefined instead of throwing on timeout
	 * Useful for optional operations that shouldn't fail the main flow
	 *
	 * @param {Promise} promise - The promise to wrap with timeout
	 * @param {number} timeoutMs - Timeout duration in milliseconds
	 * @param {*} defaultValue - Value to return on timeout (default: undefined)
	 * @returns {Promise} The result of the promise or defaultValue on timeout
	 *
	 * @example
	 * const usage = await TimeoutManager.withSoftTimeout(
	 *   getUsageStats(),
	 *   1000,
	 *   { tokens: 0 }
	 * );
	 */
	static async withSoftTimeout(promise, timeoutMs, defaultValue = undefined) {
		let timeoutHandle;

		const timeoutPromise = new Promise((resolve) => {
			timeoutHandle = setTimeout(() => {
				resolve(defaultValue);
			}, timeoutMs);
		});

		try {
			const result = await Promise.race([promise, timeoutPromise]);
			clearTimeout(timeoutHandle);
			return result;
		} catch (error) {
			// On error, clear timeout and return default value
			clearTimeout(timeoutHandle);
			return defaultValue;
		}
	}

	/**
	 * Creates a reusable timeout controller for multiple operations
	 * Useful when you need to apply the same timeout to multiple promises
	 *
	 * @param {number} timeoutMs - Timeout duration in milliseconds
	 * @param {string} operationName - Base name for operations
	 * @returns {Object} Controller with wrap method
	 *
	 * @example
	 * const controller = TimeoutManager.createController(60000, 'AI Service');
	 * const result1 = await controller.wrap(service.call1(), 'call 1');
	 * const result2 = await controller.wrap(service.call2(), 'call 2');
	 */
	static createController(timeoutMs, operationName = 'Operation') {
		return {
			timeoutMs,
			operationName,

			async wrap(promise, specificName = null) {
				const fullName = specificName
					? `${operationName} - ${specificName}`
					: operationName;
				return TimeoutManager.withTimeout(promise, timeoutMs, fullName);
			},

			async wrapSoft(promise, defaultValue = undefined) {
				return TimeoutManager.withSoftTimeout(promise, timeoutMs, defaultValue);
			}
		};
	}

	/**
	 * Checks if an error is a timeout error from this manager
	 *
	 * @param {Error} error - The error to check
	 * @returns {boolean} True if this is a timeout error
	 */
	static isTimeoutError(error) {
		return (
			error instanceof StreamingError &&
			error.code === STREAMING_ERROR_CODES.STREAM_PROCESSING_FAILED &&
			error.message.includes('timed out')
		);
	}
}
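
A sketch of distinguishing timeouts from hard failures at a call site; the wrapped service call is hypothetical, and this assumes an async context:

```js
// Sketch: treat timeouts differently from other failures
try {
	await TimeoutManager.withTimeout(callAiService(), 60000, 'AI Service'); // hypothetical call
} catch (error) {
	if (TimeoutManager.isTimeoutError(error)) {
		// retry, or surface a "still working" message to the user
	} else {
		throw error;
	}
}
```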

/**
 * Duration helper class for more readable timeout specifications
 */
export class Duration {
	constructor(value, unit = 'ms') {
		this.milliseconds = this._toMilliseconds(value, unit);
	}

	static milliseconds(value) {
		return new Duration(value, 'ms');
	}

	static seconds(value) {
		return new Duration(value, 's');
	}

	static minutes(value) {
		return new Duration(value, 'm');
	}

	static hours(value) {
		return new Duration(value, 'h');
	}

	get seconds() {
		return this.milliseconds / 1000;
	}

	get minutes() {
		return this.milliseconds / 60000;
	}

	get hours() {
		return this.milliseconds / 3600000;
	}

	toString() {
		if (this.milliseconds < 1000) {
			return `${this.milliseconds}ms`;
		} else if (this.milliseconds < 60000) {
			return `${this.seconds}s`;
		} else if (this.milliseconds < 3600000) {
			return `${Math.floor(this.minutes)}m ${Math.floor(this.seconds % 60)}s`;
		} else {
			return `${Math.floor(this.hours)}h ${Math.floor(this.minutes % 60)}m`;
		}
	}

	_toMilliseconds(value, unit) {
		const conversions = {
			ms: 1,
			s: 1000,
			m: 60000,
			h: 3600000
		};
		return value * (conversions[unit] || 1);
	}
}
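
Usage reads naturally through the static factories; note that `toString()` picks its unit from the magnitude:

```js
const d = Duration.minutes(2.5);
console.log(d.milliseconds); // 150000
console.log(d.seconds);      // 150
console.log(d.toString());   // "2m 30s"
console.log(Duration.seconds(90).toString()); // "1m 30s"
```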
@@ -1,496 +0,0 @@
import { jest } from '@jest/globals';
import { execSync } from 'child_process';
import fs from 'fs';
import os from 'os';
import path from 'path';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

describe('Complex Cross-Tag Scenarios', () => {
	let testDir;
	let tasksPath;

	// Define binPath once for the entire test suite
	const binPath = path.join(
		__dirname,
		'..',
		'..',
		'..',
		'bin',
		'task-master.js'
	);

	beforeEach(() => {
		// Create test directory
		testDir = fs.mkdtempSync(path.join(__dirname, 'test-'));
		process.chdir(testDir);

		// Initialize task-master
		execSync(`node ${binPath} init --yes`, {
			stdio: 'pipe'
		});

		// Create test tasks with complex dependencies in the correct tagged format
		const complexTasks = {
			master: {
				tasks: [
					{
						id: 1,
						title: 'Setup Project',
						description: 'Initialize the project structure',
						status: 'done',
						priority: 'high',
						dependencies: [],
						details: 'Create basic project structure',
						testStrategy: 'Verify project structure exists',
						subtasks: []
					},
					{
						id: 2,
						title: 'Database Schema',
						description: 'Design and implement database schema',
						status: 'pending',
						priority: 'high',
						dependencies: [1],
						details: 'Create database tables and relationships',
						testStrategy: 'Run database migrations',
						subtasks: [
							{
								id: '2.1',
								title: 'User Table',
								description: 'Create user table',
								status: 'pending',
								priority: 'medium',
								dependencies: [],
								details: 'Design user table schema',
								testStrategy: 'Test user creation'
							},
							{
								id: '2.2',
								title: 'Product Table',
								description: 'Create product table',
								status: 'pending',
								priority: 'medium',
								dependencies: ['2.1'],
								details: 'Design product table schema',
								testStrategy: 'Test product creation'
							}
						]
					},
					{
						id: 3,
						title: 'API Development',
						description: 'Develop REST API endpoints',
						status: 'pending',
						priority: 'high',
						dependencies: [2],
						details: 'Create API endpoints for CRUD operations',
						testStrategy: 'Test API endpoints',
						subtasks: []
					},
					{
						id: 4,
						title: 'Frontend Development',
						description: 'Develop user interface',
						status: 'pending',
						priority: 'medium',
						dependencies: [3],
						details: 'Create React components and pages',
						testStrategy: 'Test UI components',
						subtasks: []
					},
					{
						id: 5,
						title: 'Testing',
						description: 'Comprehensive testing',
						status: 'pending',
						priority: 'medium',
						dependencies: [4],
						details: 'Write unit and integration tests',
						testStrategy: 'Run test suite',
						subtasks: []
					}
				],
				metadata: {
					created: new Date().toISOString(),
					description: 'Test tasks for complex cross-tag scenarios'
				}
			}
		};

		// Write tasks to file
		tasksPath = path.join(testDir, '.taskmaster', 'tasks', 'tasks.json');
		fs.writeFileSync(tasksPath, JSON.stringify(complexTasks, null, 2));
	});

	afterEach(() => {
		// Change back to project root before cleanup
		try {
			process.chdir(global.projectRoot || path.resolve(__dirname, '../../..'));
		} catch (error) {
			// If we can't change directory, fall back to a known safe directory
			// (os is imported above; require() is unavailable in an ES module)
			process.chdir(os.homedir());
		}

		// Cleanup test directory
		if (testDir && fs.existsSync(testDir)) {
			fs.rmSync(testDir, { recursive: true, force: true });
		}
	});

	describe('Circular Dependency Detection', () => {
		it('should detect and prevent circular dependencies', () => {
			// Create a circular dependency scenario
			const circularTasks = {
				backlog: {
					tasks: [
						{
							id: 1,
							title: 'Task 1',
							status: 'pending',
							dependencies: [2],
							subtasks: []
						},
						{
							id: 2,
							title: 'Task 2',
							status: 'pending',
							dependencies: [3],
							subtasks: []
						},
						{
							id: 3,
							title: 'Task 3',
							status: 'pending',
							dependencies: [1],
							subtasks: []
						}
					],
					metadata: {
						created: new Date().toISOString(),
						description: 'Backlog tasks with circular dependencies'
					}
				},
				'in-progress': {
					tasks: [],
					metadata: {
						created: new Date().toISOString(),
						description: 'In-progress tasks'
					}
				}
			};

			fs.writeFileSync(tasksPath, JSON.stringify(circularTasks, null, 2));

			// Try to move task 1 - should fail due to circular dependency
			expect(() => {
				execSync(
					`node ${binPath} move --from=1 --from-tag=backlog --to-tag=in-progress`,
					{ stdio: 'pipe' }
				);
			}).toThrow();

			// Check that the move was not performed
			const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
			expect(tasksAfter.backlog.tasks.find((t) => t.id === 1)).toBeDefined();
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 1)
			).toBeUndefined();
		});
	});

	describe('Complex Dependency Chains', () => {
		it('should handle deep dependency chains correctly', () => {
			// Create a deep dependency chain
			const deepChainTasks = {
				master: {
					tasks: [
						{
							id: 1,
							title: 'Task 1',
							status: 'pending',
							dependencies: [2],
							subtasks: []
						},
						{
							id: 2,
							title: 'Task 2',
							status: 'pending',
							dependencies: [3],
							subtasks: []
						},
						{
							id: 3,
							title: 'Task 3',
							status: 'pending',
							dependencies: [4],
							subtasks: []
						},
						{
							id: 4,
							title: 'Task 4',
							status: 'pending',
							dependencies: [5],
							subtasks: []
						},
						{
							id: 5,
							title: 'Task 5',
							status: 'pending',
							dependencies: [],
							subtasks: []
						}
					],
					metadata: {
						created: new Date().toISOString(),
						description: 'Deep dependency chain tasks'
					}
				},
				'in-progress': {
					tasks: [],
					metadata: {
						created: new Date().toISOString(),
						description: 'In-progress tasks'
					}
				}
			};

			fs.writeFileSync(tasksPath, JSON.stringify(deepChainTasks, null, 2));

			// Move task 1 with dependencies - should move entire chain
			execSync(
				`node ${binPath} move --from=1 --from-tag=master --to-tag=in-progress --with-dependencies`,
				{ stdio: 'pipe' }
			);

			// Verify all tasks in the chain were moved
			const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
			expect(tasksAfter.master.tasks.find((t) => t.id === 1)).toBeUndefined();
			expect(tasksAfter.master.tasks.find((t) => t.id === 2)).toBeUndefined();
			expect(tasksAfter.master.tasks.find((t) => t.id === 3)).toBeUndefined();
			expect(tasksAfter.master.tasks.find((t) => t.id === 4)).toBeUndefined();
			expect(tasksAfter.master.tasks.find((t) => t.id === 5)).toBeUndefined();

			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 1)
			).toBeDefined();
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 2)
			).toBeDefined();
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 3)
			).toBeDefined();
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 4)
			).toBeDefined();
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 5)
			).toBeDefined();
		});
	});

	describe('Subtask Movement Restrictions', () => {
		it('should prevent direct subtask movement between tags', () => {
			// Try to move a subtask directly
			expect(() => {
				execSync(
					`node ${binPath} move --from=2.1 --from-tag=master --to-tag=in-progress`,
					{ stdio: 'pipe' }
				);
			}).toThrow();

			// Verify subtask was not moved
			const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
			const task2 = tasksAfter.master.tasks.find((t) => t.id === 2);
			expect(task2).toBeDefined();
			expect(task2.subtasks.find((s) => s.id === '2.1')).toBeDefined();
		});

		it('should allow moving parent task with all subtasks', () => {
			// Move parent task with dependencies (includes subtasks)
			execSync(
				`node ${binPath} move --from=2 --from-tag=master --to-tag=in-progress --with-dependencies`,
				{ stdio: 'pipe' }
			);

			// Verify parent and subtasks were moved
			const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
			expect(tasksAfter.master.tasks.find((t) => t.id === 2)).toBeUndefined();
			const movedTask2 = tasksAfter['in-progress'].tasks.find(
				(t) => t.id === 2
			);
			expect(movedTask2).toBeDefined();
			expect(movedTask2.subtasks).toHaveLength(2);
		});
	});

	describe('Large Task Set Performance', () => {
		it('should handle large task sets efficiently', () => {
			// Create a large task set (100 tasks)
			const largeTaskSet = {
				master: {
					tasks: [],
					metadata: {
						created: new Date().toISOString(),
						description: 'Large task set for performance testing'
					}
				},
				'in-progress': {
					tasks: [],
					metadata: {
						created: new Date().toISOString(),
						description: 'In-progress tasks'
					}
				}
			};

			// Add 50 tasks to master, each depending on the previous one
			for (let i = 1; i <= 50; i++) {
				largeTaskSet.master.tasks.push({
					id: i,
					title: `Task ${i}`,
					status: 'pending',
					dependencies: i > 1 ? [i - 1] : [],
					subtasks: []
				});
			}

			// Add 50 tasks to in-progress
			for (let i = 51; i <= 100; i++) {
				largeTaskSet['in-progress'].tasks.push({
					id: i,
					title: `Task ${i}`,
					status: 'in-progress',
					dependencies: [],
					subtasks: []
				});
			}

			fs.writeFileSync(tasksPath, JSON.stringify(largeTaskSet, null, 2));
			// Should complete within a reasonable time (allow more headroom on CI)
			const timeout = process.env.CI ? 10000 : 5000;
			const startTime = Date.now();
			execSync(
				`node ${binPath} move --from=50 --from-tag=master --to-tag=in-progress --with-dependencies`,
				{ stdio: 'pipe' }
			);
			const endTime = Date.now();
			expect(endTime - startTime).toBeLessThan(timeout);

			// Verify the move was successful
			const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 50)
			).toBeDefined();
		});
	});

	describe('Error Recovery and Edge Cases', () => {
		it('should handle invalid task IDs gracefully', () => {
			expect(() => {
				execSync(
					`node ${binPath} move --from=999 --from-tag=master --to-tag=in-progress`,
					{ stdio: 'pipe' }
				);
			}).toThrow();
		});

		it('should handle invalid tag names gracefully', () => {
			expect(() => {
				execSync(
					`node ${binPath} move --from=1 --from-tag=invalid-tag --to-tag=in-progress`,
					{ stdio: 'pipe' }
				);
			}).toThrow();
		});

		it('should handle same source and target tags', () => {
			expect(() => {
				execSync(
					`node ${binPath} move --from=1 --from-tag=master --to-tag=master`,
					{ stdio: 'pipe' }
				);
			}).toThrow();
		});

		it('should create target tag if it does not exist', () => {
			execSync(
				`node ${binPath} move --from=1 --from-tag=master --to-tag=new-tag`,
				{ stdio: 'pipe' }
			);

			const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
			expect(tasksAfter['new-tag']).toBeDefined();
			expect(tasksAfter['new-tag'].tasks.find((t) => t.id === 1)).toBeDefined();
		});
	});

	describe('Multiple Task Movement', () => {
		it('should move multiple tasks simultaneously', () => {
			// Create tasks for multiple movement test
			const multiTaskSet = {
				master: {
					tasks: [
						{
							id: 1,
							title: 'Task 1',
							status: 'pending',
							dependencies: [],
							subtasks: []
						},
						{
							id: 2,
							title: 'Task 2',
							status: 'pending',
							dependencies: [],
							subtasks: []
						},
						{
							id: 3,
							title: 'Task 3',
							status: 'pending',
							dependencies: [],
							subtasks: []
						}
					],
					metadata: {
						created: new Date().toISOString(),
						description: 'Tasks for multiple movement test'
					}
				},
				'in-progress': {
					tasks: [],
					metadata: {
						created: new Date().toISOString(),
						description: 'In-progress tasks'
					}
				}
			};

			fs.writeFileSync(tasksPath, JSON.stringify(multiTaskSet, null, 2));

			// Move multiple tasks
			execSync(
				`node ${binPath} move --from=1,2,3 --from-tag=master --to-tag=in-progress`,
				{ stdio: 'pipe' }
			);

			// Verify all tasks were moved
			const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
			expect(tasksAfter.master.tasks.find((t) => t.id === 1)).toBeUndefined();
			expect(tasksAfter.master.tasks.find((t) => t.id === 2)).toBeUndefined();
			expect(tasksAfter.master.tasks.find((t) => t.id === 3)).toBeUndefined();

			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 1)
			).toBeDefined();
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 2)
			).toBeDefined();
			expect(
				tasksAfter['in-progress'].tasks.find((t) => t.id === 3)
			).toBeDefined();
		});
	});
});
@@ -1,882 +0,0 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';

// --- Define mock functions ---
const mockMoveTasksBetweenTags = jest.fn();
const mockMoveTask = jest.fn();
const mockGenerateTaskFiles = jest.fn();
const mockLog = jest.fn();

// --- Setup mocks using unstable_mockModule ---
jest.unstable_mockModule(
	'../../../scripts/modules/task-manager/move-task.js',
	() => ({
		default: mockMoveTask,
		moveTasksBetweenTags: mockMoveTasksBetweenTags
	})
);

jest.unstable_mockModule(
	'../../../scripts/modules/task-manager/generate-task-files.js',
	() => ({
		default: mockGenerateTaskFiles
	})
);

jest.unstable_mockModule('../../../scripts/modules/utils.js', () => ({
	log: mockLog,
	readJSON: jest.fn(),
	writeJSON: jest.fn(),
	findProjectRoot: jest.fn(() => '/test/project/root'),
	getCurrentTag: jest.fn(() => 'master')
}));

// --- Mock chalk for consistent output formatting ---
const mockChalk = {
	red: jest.fn((text) => text),
	yellow: jest.fn((text) => text),
	blue: jest.fn((text) => text),
	green: jest.fn((text) => text),
	gray: jest.fn((text) => text),
	dim: jest.fn((text) => text),
	bold: {
		cyan: jest.fn((text) => text),
		white: jest.fn((text) => text),
		red: jest.fn((text) => text)
	},
	cyan: {
		bold: jest.fn((text) => text)
	},
	white: {
		bold: jest.fn((text) => text)
	}
};

jest.unstable_mockModule('chalk', () => ({
	default: mockChalk
}));

// --- Import modules (AFTER mock setup) ---
let moveTaskModule, generateTaskFilesModule, utilsModule, chalk;

describe('Cross-Tag Move CLI Integration', () => {
	// Setup dynamic imports before tests run
	beforeAll(async () => {
		moveTaskModule = await import(
			'../../../scripts/modules/task-manager/move-task.js'
		);
		generateTaskFilesModule = await import(
			'../../../scripts/modules/task-manager/generate-task-files.js'
		);
		utilsModule = await import('../../../scripts/modules/utils.js');
		chalk = (await import('chalk')).default;
	});

	beforeEach(() => {
		jest.clearAllMocks();
	});

	// Helper function to capture console output and process.exit calls
	function captureConsoleAndExit() {
		const originalConsoleError = console.error;
		const originalConsoleLog = console.log;
		const originalProcessExit = process.exit;

		const errorMessages = [];
		const logMessages = [];
		const exitCodes = [];

		console.error = jest.fn((...args) => {
			errorMessages.push(args.join(' '));
		});

		console.log = jest.fn((...args) => {
			logMessages.push(args.join(' '));
		});

		process.exit = jest.fn((code) => {
			exitCodes.push(code);
		});

		return {
			errorMessages,
			logMessages,
			exitCodes,
			restore: () => {
				console.error = originalConsoleError;
				console.log = originalConsoleLog;
				process.exit = originalProcessExit;
			}
		};
	}

	// --- Replicate the move command action handler logic from commands.js ---
	async function moveAction(options) {
		const sourceId = options.from;
		const destinationId = options.to;
		const fromTag = options.fromTag;
		const toTag = options.toTag;
		const withDependencies = options.withDependencies;
		const ignoreDependencies = options.ignoreDependencies;
		const force = options.force;

		// Get the source tag - fall back to the current tag if not provided
		const sourceTag = fromTag || utilsModule.getCurrentTag();

		// Check if this is a cross-tag move (different tags)
		const isCrossTagMove = sourceTag && toTag && sourceTag !== toTag;

		if (isCrossTagMove) {
			// Cross-tag move logic
			if (!sourceId) {
				const error = new Error(
					'--from parameter is required for cross-tag moves'
				);
				console.error(chalk.red(`Error: ${error.message}`));
				throw error;
			}

			const taskIds = sourceId.split(',').map((id) => parseInt(id.trim(), 10));

			// Validate parsed task IDs
			for (let i = 0; i < taskIds.length; i++) {
				if (isNaN(taskIds[i])) {
					const error = new Error(
						`Invalid task ID at position ${i + 1}: "${sourceId.split(',')[i].trim()}" is not a valid number`
					);
					console.error(chalk.red(`Error: ${error.message}`));
					throw error;
				}
			}

			const tasksPath = path.join(
				utilsModule.findProjectRoot(),
				'.taskmaster',
				'tasks',
				'tasks.json'
			);

			try {
				await moveTaskModule.moveTasksBetweenTags(
					tasksPath,
					taskIds,
					sourceTag,
					toTag,
					{
						withDependencies,
						ignoreDependencies,
						force
					}
				);

				console.log(chalk.green('Successfully moved task(s) between tags'));

				// Generate task files for both tags
				await generateTaskFilesModule.default(
					tasksPath,
					path.dirname(tasksPath),
					{ tag: sourceTag }
				);
				await generateTaskFilesModule.default(
					tasksPath,
					path.dirname(tasksPath),
					{ tag: toTag }
				);
			} catch (error) {
				console.error(chalk.red(`Error: ${error.message}`));
				throw error;
			}
		} else {
			// Handle the case where both tags are provided but are the same
			if (sourceTag && toTag && sourceTag === toTag) {
				// If both tags are the same and we have a destinationId, treat it as a within-tag move
				if (destinationId) {
					if (!sourceId) {
						const error = new Error(
							'Both --from and --to parameters are required for within-tag moves'
						);
						console.error(chalk.red(`Error: ${error.message}`));
						throw error;
					}

					// Call the existing moveTask function for within-tag moves
					try {
						await moveTaskModule.default(sourceId, destinationId);
						console.log(chalk.green('Successfully moved task'));
					} catch (error) {
						console.error(chalk.red(`Error: ${error.message}`));
						throw error;
					}
				} else {
					// Same tags but no destinationId - this is an error
					const error = new Error(
						`Source and target tags are the same ("${sourceTag}") but no destination specified`
					);
					console.error(chalk.red(`Error: ${error.message}`));
					console.log(
						chalk.yellow(
							'For within-tag moves, use: task-master move --from=<sourceId> --to=<destinationId>'
						)
					);
					console.log(
						chalk.yellow(
							'For cross-tag moves, use different tags: task-master move --from=<sourceId> --from-tag=<sourceTag> --to-tag=<targetTag>'
						)
					);
					throw error;
				}
			} else {
				// Within-tag move logic (existing functionality)
				if (!sourceId || !destinationId) {
					const error = new Error(
						'Both --from and --to parameters are required for within-tag moves'
					);
					console.error(chalk.red(`Error: ${error.message}`));
					throw error;
				}

				// Call the existing moveTask function for within-tag moves
				try {
					await moveTaskModule.default(sourceId, destinationId);
					console.log(chalk.green('Successfully moved task'));
				} catch (error) {
					console.error(chalk.red(`Error: ${error.message}`));
					throw error;
				}
			}
		}
	}
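
The replicated handler can be exercised directly; the sketch below mirrors what `task-master move --from=1,2 --from-tag=backlog --to-tag=in-progress --force` would pass in (the IDs and tags are test fixtures, not real data):

```js
// Sketch: driving the replicated handler the way the CLI flags would
await moveAction({
	from: '1,2',
	fromTag: 'backlog',
	toTag: 'in-progress',
	force: true
});
```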

	it('should move task without dependencies successfully', async () => {
		// Mock successful cross-tag move
		mockMoveTasksBetweenTags.mockResolvedValue(undefined);
		mockGenerateTaskFiles.mockResolvedValue(undefined);

		const options = {
			from: '2',
			fromTag: 'backlog',
			toTag: 'in-progress'
		};

		await moveAction(options);

		expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
			expect.stringContaining('tasks.json'),
			[2],
			'backlog',
			'in-progress',
			{
				withDependencies: undefined,
				ignoreDependencies: undefined,
				force: undefined
			}
		);
	});

	it('should fail to move task with cross-tag dependencies', async () => {
		// Mock dependency conflict error
		mockMoveTasksBetweenTags.mockRejectedValue(
			new Error('Cannot move task due to cross-tag dependency conflicts')
		);

		const options = {
			from: '1',
			fromTag: 'backlog',
			toTag: 'in-progress'
		};

		const { errorMessages, restore } = captureConsoleAndExit();

		await expect(moveAction(options)).rejects.toThrow(
			'Cannot move task due to cross-tag dependency conflicts'
		);

		expect(mockMoveTasksBetweenTags).toHaveBeenCalled();
		expect(
			errorMessages.some((msg) =>
				msg.includes('cross-tag dependency conflicts')
			)
		).toBe(true);

		restore();
	});

	it('should move task with dependencies when --with-dependencies is used', async () => {
		// Mock successful cross-tag move with dependencies
		mockMoveTasksBetweenTags.mockResolvedValue(undefined);
		mockGenerateTaskFiles.mockResolvedValue(undefined);

		const options = {
			from: '1',
			fromTag: 'backlog',
			toTag: 'in-progress',
			withDependencies: true
		};

		await moveAction(options);

		expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
			expect.stringContaining('tasks.json'),
			[1],
			'backlog',
			'in-progress',
			{
				withDependencies: true,
				ignoreDependencies: undefined,
				force: undefined
			}
		);
	});

	it('should break dependencies when --ignore-dependencies is used', async () => {
		// Mock successful cross-tag move with dependency breaking
		mockMoveTasksBetweenTags.mockResolvedValue(undefined);
		mockGenerateTaskFiles.mockResolvedValue(undefined);

		const options = {
			from: '1',
			fromTag: 'backlog',
			toTag: 'in-progress',
			ignoreDependencies: true
		};

		await moveAction(options);

		expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
			expect.stringContaining('tasks.json'),
			[1],
			'backlog',
			'in-progress',
			{
				withDependencies: undefined,
				ignoreDependencies: true,
				force: undefined
			}
		);
	});

	it('should create target tag if it does not exist', async () => {
		// Mock successful cross-tag move to new tag
		mockMoveTasksBetweenTags.mockResolvedValue(undefined);
		mockGenerateTaskFiles.mockResolvedValue(undefined);

		const options = {
			from: '2',
			fromTag: 'backlog',
			toTag: 'new-tag'
		};

		await moveAction(options);

		expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
			expect.stringContaining('tasks.json'),
			[2],
			'backlog',
			'new-tag',
			{
				withDependencies: undefined,
				ignoreDependencies: undefined,
				force: undefined
			}
		);
	});

	it('should fail to move a subtask directly', async () => {
		// Mock subtask movement error
		mockMoveTasksBetweenTags.mockRejectedValue(
			new Error(
				'Cannot move subtasks directly between tags. Please promote the subtask to a full task first.'
			)
		);

		const options = {
			from: '1.2',
			fromTag: 'backlog',
			toTag: 'in-progress'
		};

		const { errorMessages, restore } = captureConsoleAndExit();

		await expect(moveAction(options)).rejects.toThrow(
			'Cannot move subtasks directly between tags. Please promote the subtask to a full task first.'
		);

		expect(mockMoveTasksBetweenTags).toHaveBeenCalled();
		expect(errorMessages.some((msg) => msg.includes('subtasks directly'))).toBe(
			true
		);

		restore();
	});

	it('should provide helpful error messages for dependency conflicts', async () => {
		// Mock dependency conflict with detailed error
		mockMoveTasksBetweenTags.mockRejectedValue(
			new Error(
				'Cross-tag dependency conflicts detected. Task 1 depends on Task 2 which is in a different tag.'
			)
		);

		const options = {
			from: '1',
			fromTag: 'backlog',
			toTag: 'in-progress'
		};

		const { errorMessages, restore } = captureConsoleAndExit();

		await expect(moveAction(options)).rejects.toThrow(
			'Cross-tag dependency conflicts detected. Task 1 depends on Task 2 which is in a different tag.'
		);

		expect(mockMoveTasksBetweenTags).toHaveBeenCalled();
		expect(
			errorMessages.some((msg) =>
				msg.includes('Cross-tag dependency conflicts detected')
			)
		).toBe(true);

		restore();
	});

	it('should handle same tag error correctly', async () => {
		const options = {
			from: '1',
			fromTag: 'backlog',
			toTag: 'backlog' // Same tag but no destination
		};

		const { errorMessages, logMessages, restore } = captureConsoleAndExit();

		await expect(moveAction(options)).rejects.toThrow(
			'Source and target tags are the same ("backlog") but no destination specified'
		);

		expect(
			errorMessages.some((msg) =>
				msg.includes(
					'Source and target tags are the same ("backlog") but no destination specified'
				)
			)
		).toBe(true);
		expect(
			logMessages.some((msg) => msg.includes('For within-tag moves'))
		).toBe(true);
		expect(logMessages.some((msg) => msg.includes('For cross-tag moves'))).toBe(
			true
		);

		restore();
	});

	it('should use current tag when --from-tag is not provided', async () => {
		// Mock successful move with current tag fallback
		mockMoveTasksBetweenTags.mockResolvedValue({
			message: 'Successfully moved task(s) between tags'
		});

		// Mock getCurrentTag to return 'master'
		utilsModule.getCurrentTag.mockReturnValue('master');

		// Simulate command: task-master move --from=1 --to-tag=in-progress
		// (no --from-tag provided, should use current tag 'master')
		await moveAction({
			from: '1',
			toTag: 'in-progress',
			withDependencies: false,
			ignoreDependencies: false,
			force: false
			// fromTag is intentionally not provided to test fallback
		});

		// Verify that moveTasksBetweenTags was called with 'master' as source tag
		expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
			expect.stringContaining('.taskmaster/tasks/tasks.json'),
			[1], // parseInt converts string to number
			'master', // Should use current tag as fallback
			'in-progress',
			{
				withDependencies: false,
				ignoreDependencies: false,
				force: false
			}
		);

		// Verify that generateTaskFiles was called for both tags
		expect(generateTaskFilesModule.default).toHaveBeenCalledWith(
			expect.stringContaining('.taskmaster/tasks/tasks.json'),
			expect.stringContaining('.taskmaster/tasks'),
			{ tag: 'master' }
		);
		expect(generateTaskFilesModule.default).toHaveBeenCalledWith(
			expect.stringContaining('.taskmaster/tasks/tasks.json'),
			expect.stringContaining('.taskmaster/tasks'),
			{ tag: 'in-progress' }
		);
	});

	it('should move multiple tasks with comma-separated IDs successfully', async () => {
		// Mock successful cross-tag move for multiple tasks
		mockMoveTasksBetweenTags.mockResolvedValue(undefined);
		mockGenerateTaskFiles.mockResolvedValue(undefined);
|
||||
const options = {
|
||||
from: '1,2,3',
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress'
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
[1, 2, 3], // Should parse comma-separated string to array of integers
|
||||
'backlog',
|
||||
'in-progress',
|
||||
{
|
||||
withDependencies: undefined,
|
||||
ignoreDependencies: undefined,
|
||||
force: undefined
|
||||
}
|
||||
);
|
||||
|
||||
// Verify task files are generated for both tags
|
||||
expect(mockGenerateTaskFiles).toHaveBeenCalledTimes(2);
|
||||
expect(mockGenerateTaskFiles).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
expect.stringContaining('.taskmaster/tasks'),
|
||||
{ tag: 'backlog' }
|
||||
);
|
||||
expect(mockGenerateTaskFiles).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
expect.stringContaining('.taskmaster/tasks'),
|
||||
{ tag: 'in-progress' }
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle --force flag correctly', async () => {
|
||||
// Mock successful cross-tag move with force flag
|
||||
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
|
||||
mockGenerateTaskFiles.mockResolvedValue(undefined);
|
||||
|
||||
const options = {
|
||||
from: '1',
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress',
|
||||
force: true
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
[1],
|
||||
'backlog',
|
||||
'in-progress',
|
||||
{
|
||||
withDependencies: undefined,
|
||||
ignoreDependencies: undefined,
|
||||
force: true // Force flag should be passed through
|
||||
}
|
||||
);
|
||||
});
|
||||
|
||||
it('should fail when invalid task ID is provided', async () => {
|
||||
const options = {
|
||||
from: '1,abc,3', // Invalid ID in middle
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress'
|
||||
};
|
||||
|
||||
const { errorMessages, restore } = captureConsoleAndExit();
|
||||
|
||||
await expect(moveAction(options)).rejects.toThrow(
|
||||
'Invalid task ID at position 2: "abc" is not a valid number'
|
||||
);
|
||||
|
||||
expect(
|
||||
errorMessages.some((msg) => msg.includes('Invalid task ID at position 2'))
|
||||
).toBe(true);
|
||||
|
||||
restore();
|
||||
});
|
||||
|
||||
it('should fail when first task ID is invalid', async () => {
|
||||
const options = {
|
||||
from: 'abc,2,3', // Invalid ID at start
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress'
|
||||
};
|
||||
|
||||
const { errorMessages, restore } = captureConsoleAndExit();
|
||||
|
||||
await expect(moveAction(options)).rejects.toThrow(
|
||||
'Invalid task ID at position 1: "abc" is not a valid number'
|
||||
);
|
||||
|
||||
expect(
|
||||
errorMessages.some((msg) => msg.includes('Invalid task ID at position 1'))
|
||||
).toBe(true);
|
||||
|
||||
restore();
|
||||
});
|
||||
|
||||
it('should fail when last task ID is invalid', async () => {
|
||||
const options = {
|
||||
from: '1,2,xyz', // Invalid ID at end
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress'
|
||||
};
|
||||
|
||||
const { errorMessages, restore } = captureConsoleAndExit();
|
||||
|
||||
await expect(moveAction(options)).rejects.toThrow(
|
||||
'Invalid task ID at position 3: "xyz" is not a valid number'
|
||||
);
|
||||
|
||||
expect(
|
||||
errorMessages.some((msg) => msg.includes('Invalid task ID at position 3'))
|
||||
).toBe(true);
|
||||
|
||||
restore();
|
||||
});
|
||||
|
||||
it('should fail when single invalid task ID is provided', async () => {
|
||||
const options = {
|
||||
from: 'invalid',
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress'
|
||||
};
|
||||
|
||||
const { errorMessages, restore } = captureConsoleAndExit();
|
||||
|
||||
await expect(moveAction(options)).rejects.toThrow(
|
||||
'Invalid task ID at position 1: "invalid" is not a valid number'
|
||||
);
|
||||
|
||||
expect(
|
||||
errorMessages.some((msg) => msg.includes('Invalid task ID at position 1'))
|
||||
).toBe(true);
|
||||
|
||||
restore();
|
||||
});
|
||||
|
||||
it('should combine --with-dependencies and --force flags correctly', async () => {
|
||||
// Mock successful cross-tag move with both flags
|
||||
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
|
||||
mockGenerateTaskFiles.mockResolvedValue(undefined);
|
||||
|
||||
const options = {
|
||||
from: '1,2',
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress',
|
||||
withDependencies: true,
|
||||
force: true
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
[1, 2],
|
||||
'backlog',
|
||||
'in-progress',
|
||||
{
|
||||
withDependencies: true,
|
||||
ignoreDependencies: undefined,
|
||||
force: true // Both flags should be passed
|
||||
}
|
||||
);
|
||||
});
|
||||
|
||||
it('should combine --ignore-dependencies and --force flags correctly', async () => {
|
||||
// Mock successful cross-tag move with both flags
|
||||
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
|
||||
mockGenerateTaskFiles.mockResolvedValue(undefined);
|
||||
|
||||
const options = {
|
||||
from: '1',
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress',
|
||||
ignoreDependencies: true,
|
||||
force: true
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
[1],
|
||||
'backlog',
|
||||
'in-progress',
|
||||
{
|
||||
withDependencies: undefined,
|
||||
ignoreDependencies: true,
|
||||
force: true // Both flags should be passed
|
||||
}
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle all three flags combined correctly', async () => {
|
||||
// Mock successful cross-tag move with all flags
|
||||
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
|
||||
mockGenerateTaskFiles.mockResolvedValue(undefined);
|
||||
|
||||
const options = {
|
||||
from: '1,2,3',
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress',
|
||||
withDependencies: true,
|
||||
ignoreDependencies: true,
|
||||
force: true
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
[1, 2, 3],
|
||||
'backlog',
|
||||
'in-progress',
|
||||
{
|
||||
withDependencies: true,
|
||||
ignoreDependencies: true,
|
||||
force: true // All three flags should be passed
|
||||
}
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle whitespace in comma-separated task IDs', async () => {
|
||||
// Mock successful cross-tag move with whitespace
|
||||
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
|
||||
mockGenerateTaskFiles.mockResolvedValue(undefined);
|
||||
|
||||
const options = {
|
||||
from: ' 1 , 2 , 3 ', // Whitespace around IDs and commas
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress'
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
|
||||
expect.stringContaining('tasks.json'),
|
||||
[1, 2, 3], // Should trim whitespace and parse as integers
|
||||
'backlog',
|
||||
'in-progress',
|
||||
{
|
||||
withDependencies: undefined,
|
||||
ignoreDependencies: undefined,
|
||||
force: undefined
|
||||
}
|
||||
);
|
||||
});
|
||||
|
||||
it('should fail when --from parameter is missing for cross-tag move', async () => {
|
||||
const options = {
|
||||
fromTag: 'backlog',
|
||||
toTag: 'in-progress'
|
||||
// from is intentionally missing
|
||||
};
|
||||
|
||||
const { errorMessages, restore } = captureConsoleAndExit();
|
||||
|
||||
await expect(moveAction(options)).rejects.toThrow(
|
||||
'--from parameter is required for cross-tag moves'
|
||||
);
|
||||
|
||||
expect(
|
||||
errorMessages.some((msg) =>
|
||||
msg.includes('--from parameter is required for cross-tag moves')
|
||||
)
|
||||
).toBe(true);
|
||||
|
||||
restore();
|
||||
});
|
||||
|
||||
it('should fail when both --from and --to are missing for within-tag move', async () => {
|
||||
const options = {
|
||||
// Both from and to are missing for within-tag move
|
||||
};
|
||||
|
||||
const { errorMessages, restore } = captureConsoleAndExit();
|
||||
|
||||
await expect(moveAction(options)).rejects.toThrow(
|
||||
'Both --from and --to parameters are required for within-tag moves'
|
||||
);
|
||||
|
||||
expect(
|
||||
errorMessages.some((msg) =>
|
||||
msg.includes(
|
||||
'Both --from and --to parameters are required for within-tag moves'
|
||||
)
|
||||
)
|
||||
).toBe(true);
|
||||
|
||||
restore();
|
||||
});
|
||||
|
||||
it('should handle within-tag move when only --from is provided', async () => {
|
||||
// Mock successful within-tag move
|
||||
mockMoveTask.mockResolvedValue(undefined);
|
||||
|
||||
const options = {
|
||||
from: '1',
|
||||
to: '2'
|
||||
// No tags specified, should use within-tag logic
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTask).toHaveBeenCalledWith('1', '2');
|
||||
expect(mockMoveTasksBetweenTags).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should handle within-tag move when both tags are the same', async () => {
|
||||
// Mock successful within-tag move
|
||||
mockMoveTask.mockResolvedValue(undefined);
|
||||
|
||||
const options = {
|
||||
from: '1',
|
||||
to: '2',
|
||||
fromTag: 'master',
|
||||
toTag: 'master' // Same tag, should use within-tag logic
|
||||
};
|
||||
|
||||
await moveAction(options);
|
||||
|
||||
expect(mockMoveTask).toHaveBeenCalledWith('1', '2');
|
||||
expect(mockMoveTasksBetweenTags).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should fail when both tags are the same but no destination is provided', async () => {
|
||||
const options = {
|
||||
from: '1',
|
||||
fromTag: 'master',
|
||||
toTag: 'master' // Same tag but no destination
|
||||
};
|
||||
|
||||
const { errorMessages, logMessages, restore } = captureConsoleAndExit();
|
||||
|
||||
await expect(moveAction(options)).rejects.toThrow(
|
||||
'Source and target tags are the same ("master") but no destination specified'
|
||||
);
|
||||
|
||||
expect(
|
||||
errorMessages.some((msg) =>
|
||||
msg.includes(
|
||||
'Source and target tags are the same ("master") but no destination specified'
|
||||
)
|
||||
)
|
||||
).toBe(true);
|
||||
expect(
|
||||
logMessages.some((msg) => msg.includes('For within-tag moves'))
|
||||
).toBe(true);
|
||||
expect(logMessages.some((msg) => msg.includes('For cross-tag moves'))).toBe(
|
||||
true
|
||||
);
|
||||
|
||||
restore();
|
||||
});
|
||||
});
|
||||
@@ -1,772 +0,0 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Mock dependencies before importing
const mockUtils = {
	readJSON: jest.fn(),
	writeJSON: jest.fn(),
	findProjectRoot: jest.fn(() => '/test/project/root'),
	log: jest.fn(),
	setTasksForTag: jest.fn(),
	traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => {
		// Mock realistic dependency behavior for testing
		const { direction = 'forward' } = options;

		if (direction === 'forward') {
			// Return dependencies that tasks have
			const result = [];
			sourceTasks.forEach((task) => {
				if (task.dependencies && Array.isArray(task.dependencies)) {
					result.push(...task.dependencies);
				}
			});
			return result;
		} else if (direction === 'reverse') {
			// Return tasks that depend on the source tasks
			const sourceIds = sourceTasks.map((t) => t.id);
			const normalizedSourceIds = sourceIds.map((id) => String(id));
			const result = [];
			allTasks.forEach((task) => {
				if (task.dependencies && Array.isArray(task.dependencies)) {
					const hasDependency = task.dependencies.some((depId) =>
						normalizedSourceIds.includes(String(depId))
					);
					if (hasDependency) {
						result.push(task.id);
					}
				}
			});
			return result;
		}
		return [];
	})
};

// Mock the utils module
jest.unstable_mockModule('../../scripts/modules/utils.js', () => mockUtils);

// Mock other dependencies
jest.unstable_mockModule(
	'../../scripts/modules/task-manager/is-task-dependent.js',
	() => ({
		default: jest.fn(() => false)
	})
);

jest.unstable_mockModule('../../scripts/modules/dependency-manager.js', () => ({
	findCrossTagDependencies: jest.fn(() => {
		// Since dependencies can only exist within the same tag,
		// this function should never find any cross-tag conflicts
		return [];
	}),
	getDependentTaskIds: jest.fn(
		(sourceTasks, crossTagDependencies, allTasks) => {
			// Since we now use findAllDependenciesRecursively in the actual implementation,
			// this mock simulates finding all dependencies recursively within the same tag
			const dependentIds = new Set();
			const processedIds = new Set();

			function findAllDependencies(taskId) {
				if (processedIds.has(taskId)) return;
				processedIds.add(taskId);

				const task = allTasks.find((t) => t.id === taskId);
				if (!task || !Array.isArray(task.dependencies)) return;

				task.dependencies.forEach((depId) => {
					const normalizedDepId =
						typeof depId === 'string' ? parseInt(depId, 10) : depId;
					if (!isNaN(normalizedDepId) && normalizedDepId !== taskId) {
						dependentIds.add(normalizedDepId);
						findAllDependencies(normalizedDepId);
					}
				});
			}

			sourceTasks.forEach((sourceTask) => {
				if (sourceTask && sourceTask.id) {
					findAllDependencies(sourceTask.id);
				}
			});

			return Array.from(dependentIds);
		}
	),
	validateSubtaskMove: jest.fn((taskId, sourceTag, targetTag) => {
		// Throw error for subtask IDs
		const taskIdStr = String(taskId);
		if (taskIdStr.includes('.')) {
			throw new Error('Cannot move subtasks directly between tags');
		}
	})
}));

jest.unstable_mockModule(
	'../../scripts/modules/task-manager/generate-task-files.js',
	() => ({
		default: jest.fn().mockResolvedValue()
	})
);

// Import the modules we'll be testing after mocking
const { moveTasksBetweenTags } = await import(
	'../../scripts/modules/task-manager/move-task.js'
);

describe('Cross-Tag Task Movement Integration Tests', () => {
	let testDataPath;
	let mockTasksData;

	beforeEach(() => {
		// Setup test data path
		testDataPath = path.join(__dirname, 'temp-test-tasks.json');

		// Initialize mock data with multiple tags
		mockTasksData = {
			backlog: {
				tasks: [
					{
						id: 1,
						title: 'Backlog Task 1',
						description: 'A task in backlog',
						status: 'pending',
						dependencies: [],
						priority: 'medium',
						tag: 'backlog'
					},
					{
						id: 2,
						title: 'Backlog Task 2',
						description: 'Another task in backlog',
						status: 'pending',
						dependencies: [1],
						priority: 'high',
						tag: 'backlog'
					},
					{
						id: 3,
						title: 'Backlog Task 3',
						description: 'Independent task',
						status: 'pending',
						dependencies: [],
						priority: 'low',
						tag: 'backlog'
					}
				]
			},
			'in-progress': {
				tasks: [
					{
						id: 4,
						title: 'In Progress Task 1',
						description: 'A task being worked on',
						status: 'in-progress',
						dependencies: [],
						priority: 'high',
						tag: 'in-progress'
					}
				]
			},
			done: {
				tasks: [
					{
						id: 5,
						title: 'Completed Task 1',
						description: 'A completed task',
						status: 'done',
						dependencies: [],
						priority: 'medium',
						tag: 'done'
					}
				]
			}
		};

		// Setup mock utils
		mockUtils.readJSON.mockReturnValue(mockTasksData);
		mockUtils.writeJSON.mockImplementation((path, data, projectRoot, tag) => {
			// Simulate writing to file
			return Promise.resolve();
		});
	});

	afterEach(() => {
		jest.clearAllMocks();
		// Clean up temp file if it exists
		if (fs.existsSync(testDataPath)) {
			fs.unlinkSync(testDataPath);
		}
	});

	describe('Basic Cross-Tag Movement', () => {
		it('should move a single task between tags successfully', async () => {
			const taskIds = [1];
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{},
				{ projectRoot: '/test/project' }
			);

			// Verify readJSON was called with correct parameters
			expect(mockUtils.readJSON).toHaveBeenCalledWith(
				testDataPath,
				'/test/project',
				sourceTag
			);

			// Verify writeJSON was called with updated data
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 2 }),
							expect.objectContaining({ id: 3 })
						])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 4 }),
							expect.objectContaining({
								id: 1,
								tag: 'in-progress'
							})
						])
					})
				}),
				'/test/project',
				null
			);

			// Verify result structure
			expect(result).toEqual({
				message: 'Successfully moved 1 tasks from "backlog" to "in-progress"',
				movedTasks: [
					{
						id: 1,
						fromTag: 'backlog',
						toTag: 'in-progress'
					}
				]
			});
		});

		it('should move multiple tasks between tags', async () => {
			const taskIds = [1, 3];
			const sourceTag = 'backlog';
			const targetTag = 'done';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{},
				{ projectRoot: '/test/project' }
			);

			// Verify the moved tasks are in the target tag
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([expect.objectContaining({ id: 2 })])
					}),
					done: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 5 }),
							expect.objectContaining({
								id: 1,
								tag: 'done'
							}),
							expect.objectContaining({
								id: 3,
								tag: 'done'
							})
						])
					})
				}),
				'/test/project',
				null
			);

			// Verify result structure
			expect(result.movedTasks).toHaveLength(2);
			expect(result.movedTasks).toEqual(
				expect.arrayContaining([
					{ id: 1, fromTag: 'backlog', toTag: 'done' },
					{ id: 3, fromTag: 'backlog', toTag: 'done' }
				])
			);
		});

		it('should create target tag if it does not exist', async () => {
			const taskIds = [1];
			const sourceTag = 'backlog';
			const targetTag = 'new-tag';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{},
				{ projectRoot: '/test/project' }
			);

			// Verify new tag was created
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					'new-tag': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								tag: 'new-tag'
							})
						])
					})
				}),
				'/test/project',
				null
			);
		});
	});

	describe('Dependency Handling', () => {
		it('should move task with dependencies when withDependencies is true', async () => {
			const taskIds = [2]; // Task 2 depends on Task 1
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{ withDependencies: true },
				{ projectRoot: '/test/project' }
			);

			// Verify both task 2 and its dependency (task 1) were moved
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([expect.objectContaining({ id: 3 })])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 4 }),
							expect.objectContaining({
								id: 1,
								tag: 'in-progress'
							}),
							expect.objectContaining({
								id: 2,
								tag: 'in-progress'
							})
						])
					})
				}),
				'/test/project',
				null
			);
		});

		it('should move task normally when ignoreDependencies is true (no cross-tag conflicts to ignore)', async () => {
			const taskIds = [2]; // Task 2 depends on Task 1
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{ ignoreDependencies: true },
				{ projectRoot: '/test/project' }
			);

			// Since dependencies only exist within tags, there are no cross-tag conflicts to ignore
			// Task 2 moves with its dependencies intact
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 1 }),
							expect.objectContaining({ id: 3 })
						])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 4 }),
							expect.objectContaining({
								id: 2,
								tag: 'in-progress',
								dependencies: [1] // Dependencies preserved since no cross-tag conflicts
							})
						])
					})
				}),
				'/test/project',
				null
			);
		});

		it('should move task without cross-tag dependency conflicts (since dependencies only exist within tags)', async () => {
			const taskIds = [2]; // Task 2 depends on Task 1 (both in same tag)
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			// Since dependencies can only exist within the same tag,
			// there should be no cross-tag conflicts
			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{},
				{ projectRoot: '/test/project' }
			);

			// Verify task was moved successfully (without dependencies)
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 1 }), // Task 1 stays in backlog
							expect.objectContaining({ id: 3 })
						])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 4 }),
							expect.objectContaining({
								id: 2,
								tag: 'in-progress'
							})
						])
					})
				}),
				'/test/project',
				null
			);
		});
	});

	describe('Error Handling', () => {
		it('should throw error for invalid source tag', async () => {
			const taskIds = [1];
			const sourceTag = 'nonexistent-tag';
			const targetTag = 'in-progress';

			// Mock readJSON to return data without the source tag
			mockUtils.readJSON.mockReturnValue({
				'in-progress': { tasks: [] }
			});

			await expect(
				moveTasksBetweenTags(
					testDataPath,
					taskIds,
					sourceTag,
					targetTag,
					{},
					{ projectRoot: '/test/project' }
				)
			).rejects.toThrow('Source tag "nonexistent-tag" not found or invalid');
		});

		it('should throw error for invalid task IDs', async () => {
			const taskIds = [999]; // Non-existent task ID
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			await expect(
				moveTasksBetweenTags(
					testDataPath,
					taskIds,
					sourceTag,
					targetTag,
					{},
					{ projectRoot: '/test/project' }
				)
			).rejects.toThrow('Task 999 not found in source tag "backlog"');
		});

		it('should throw error for subtask movement', async () => {
			const taskIds = ['1.1']; // Subtask ID
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			await expect(
				moveTasksBetweenTags(
					testDataPath,
					taskIds,
					sourceTag,
					targetTag,
					{},
					{ projectRoot: '/test/project' }
				)
			).rejects.toThrow('Cannot move subtasks directly between tags');
		});

		it('should handle ID conflicts in target tag', async () => {
			// Setup data with conflicting IDs
			const conflictingData = {
				backlog: {
					tasks: [
						{
							id: 1,
							title: 'Backlog Task',
							tag: 'backlog'
						}
					]
				},
				'in-progress': {
					tasks: [
						{
							id: 1, // Same ID as in backlog
							title: 'In Progress Task',
							tag: 'in-progress'
						}
					]
				}
			};

			mockUtils.readJSON.mockReturnValue(conflictingData);

			const taskIds = [1];
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			await expect(
				moveTasksBetweenTags(
					testDataPath,
					taskIds,
					sourceTag,
					targetTag,
					{},
					{ projectRoot: '/test/project' }
				)
			).rejects.toThrow('Task 1 already exists in target tag "in-progress"');
		});
	});

	describe('Edge Cases', () => {
		it('should handle empty task list in source tag', async () => {
			const emptyData = {
				backlog: { tasks: [] },
				'in-progress': { tasks: [] }
			};

			mockUtils.readJSON.mockReturnValue(emptyData);

			const taskIds = [1];
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			await expect(
				moveTasksBetweenTags(
					testDataPath,
					taskIds,
					sourceTag,
					targetTag,
					{},
					{ projectRoot: '/test/project' }
				)
			).rejects.toThrow('Task 1 not found in source tag "backlog"');
		});

		it('should preserve task metadata during move', async () => {
			const taskIds = [1];
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{},
				{ projectRoot: '/test/project' }
			);

			// Verify task metadata is preserved
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								title: 'Backlog Task 1',
								description: 'A task in backlog',
								status: 'pending',
								priority: 'medium',
								tag: 'in-progress', // Tag should be updated
								metadata: expect.objectContaining({
									moveHistory: expect.arrayContaining([
										expect.objectContaining({
											fromTag: 'backlog',
											toTag: 'in-progress',
											timestamp: expect.any(String)
										})
									])
								})
							})
						])
					})
				}),
				'/test/project',
				null
			);
		});

		it('should handle force flag for dependency conflicts', async () => {
			const taskIds = [2]; // Task 2 depends on Task 1
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{ force: true },
				{ projectRoot: '/test/project' }
			);

			// Verify task was moved despite dependency conflicts
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 2,
								tag: 'in-progress'
							})
						])
					})
				}),
				'/test/project',
				null
			);
		});
	});

	describe('Complex Scenarios', () => {
		it('should handle complex moves without cross-tag conflicts (dependencies only within tags)', async () => {
			// Setup data with valid within-tag dependencies
			const validData = {
				backlog: {
					tasks: [
						{
							id: 1,
							title: 'Task 1',
							dependencies: [], // No dependencies
							tag: 'backlog'
						},
						{
							id: 3,
							title: 'Task 3',
							dependencies: [1], // Depends on Task 1 (same tag)
							tag: 'backlog'
						}
					]
				},
				'in-progress': {
					tasks: [
						{
							id: 2,
							title: 'Task 2',
							dependencies: [], // No dependencies
							tag: 'in-progress'
						}
					]
				}
			};

			mockUtils.readJSON.mockReturnValue(validData);

			const taskIds = [3];
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			// Should succeed since there are no cross-tag conflicts
			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{},
				{ projectRoot: '/test/project' }
			);

			expect(result).toEqual({
				message: 'Successfully moved 1 tasks from "backlog" to "in-progress"',
				movedTasks: [{ id: 3, fromTag: 'backlog', toTag: 'in-progress' }]
			});
		});

		it('should handle bulk move with mixed dependency scenarios', async () => {
			const taskIds = [1, 2, 3]; // Multiple tasks with dependencies
			const sourceTag = 'backlog';
			const targetTag = 'in-progress';

			const result = await moveTasksBetweenTags(
				testDataPath,
				taskIds,
				sourceTag,
				targetTag,
				{ withDependencies: true },
				{ projectRoot: '/test/project' }
			);

			// Verify all tasks were moved
			expect(mockUtils.writeJSON).toHaveBeenCalledWith(
				testDataPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: [] // All tasks should be moved
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({ id: 4 }),
							expect.objectContaining({ id: 1, tag: 'in-progress' }),
							expect.objectContaining({ id: 2, tag: 'in-progress' }),
							expect.objectContaining({ id: 3, tag: 'in-progress' })
						])
					})
				}),
				'/test/project',
				null
			);

			// Verify result structure
			expect(result.movedTasks).toHaveLength(3);
			expect(result.movedTasks).toEqual(
				expect.arrayContaining([
					{ id: 1, fromTag: 'backlog', toTag: 'in-progress' },
					{ id: 2, fromTag: 'backlog', toTag: 'in-progress' },
					{ id: 3, fromTag: 'backlog', toTag: 'in-progress' }
				])
			);
		});
	});
});
@@ -1,537 +0,0 @@
import { jest } from '@jest/globals';
import path from 'path';
import mockFs from 'mock-fs';
import fs from 'fs';
import { fileURLToPath } from 'url';

// Import the actual move task functionality
import moveTask, {
	moveTasksBetweenTags
} from '../../scripts/modules/task-manager/move-task.js';
import { readJSON, writeJSON } from '../../scripts/modules/utils.js';

// Mock console to avoid conflicts with mock-fs
const originalConsole = { ...console };
beforeAll(() => {
	global.console = {
		...console,
		log: jest.fn(),
		error: jest.fn(),
		warn: jest.fn(),
		info: jest.fn()
	};
});

afterAll(() => {
	global.console = originalConsole;
});

// Get __dirname equivalent for ES modules
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

describe('Cross-Tag Task Movement Simple Integration Tests', () => {
	const testDataDir = path.join(__dirname, 'fixtures');
	const testTasksPath = path.join(testDataDir, 'tasks.json');

	// Test data structure with proper tagged format
	const testData = {
		backlog: {
			tasks: [
				{ id: 1, title: 'Task 1', dependencies: [], status: 'pending' },
				{ id: 2, title: 'Task 2', dependencies: [], status: 'pending' }
			]
		},
		'in-progress': {
			tasks: [
				{ id: 3, title: 'Task 3', dependencies: [], status: 'in-progress' }
			]
		}
	};

	beforeEach(() => {
		// Set up mock file system with test data
		mockFs({
			[testDataDir]: {
				'tasks.json': JSON.stringify(testData, null, 2)
			}
		});
	});

	afterEach(() => {
		// Clean up mock file system
		mockFs.restore();
	});

	describe('Real Module Integration Tests', () => {
		it('should move task within same tag using actual moveTask function', async () => {
			// Test moving Task 1 from position 1 to position 5 within backlog tag
			const result = await moveTask(
				testTasksPath,
				'1',
				'5',
				false, // Don't generate files for this test
				{ tag: 'backlog' }
			);

			// Verify the move operation was successful
			expect(result).toBeDefined();
			expect(result.message).toContain('Moved task 1 to new ID 5');

			// Read the updated data to verify the move actually happened
			const updatedData = readJSON(testTasksPath, null, 'backlog');
			const rawData = updatedData._rawTaggedData || updatedData;
			const backlogTasks = rawData.backlog.tasks;

			// Verify Task 1 is no longer at position 1
			const taskAtPosition1 = backlogTasks.find((t) => t.id === 1);
			expect(taskAtPosition1).toBeUndefined();

			// Verify Task 1 is now at position 5
			const taskAtPosition5 = backlogTasks.find((t) => t.id === 5);
			expect(taskAtPosition5).toBeDefined();
			expect(taskAtPosition5.title).toBe('Task 1');
			expect(taskAtPosition5.status).toBe('pending');
		});

		it('should move tasks between tags using moveTasksBetweenTags function', async () => {
			// Test moving Task 1 from backlog to in-progress tag
			const result = await moveTasksBetweenTags(
				testTasksPath,
				['1'], // Task IDs to move (as strings)
				'backlog', // Source tag
				'in-progress', // Target tag
				{ withDependencies: false, ignoreDependencies: false },
				{ projectRoot: testDataDir }
			);

			// Verify the cross-tag move operation was successful
			expect(result).toBeDefined();
			expect(result.message).toContain(
				'Successfully moved 1 tasks from "backlog" to "in-progress"'
			);
			expect(result.movedTasks).toHaveLength(1);
			expect(result.movedTasks[0].id).toBe('1');
			expect(result.movedTasks[0].fromTag).toBe('backlog');
			expect(result.movedTasks[0].toTag).toBe('in-progress');

			// Read the updated data to verify the move actually happened
			const updatedData = readJSON(testTasksPath, null, 'backlog');
			// readJSON returns resolved data, so we need to access the raw tagged data
			const rawData = updatedData._rawTaggedData || updatedData;
			const backlogTasks = rawData.backlog?.tasks || [];
			const inProgressTasks = rawData['in-progress']?.tasks || [];

			// Verify Task 1 is no longer in backlog
			const taskInBacklog = backlogTasks.find((t) => t.id === 1);
			expect(taskInBacklog).toBeUndefined();

			// Verify Task 1 is now in in-progress
			const taskInProgress = inProgressTasks.find((t) => t.id === 1);
			expect(taskInProgress).toBeDefined();
			expect(taskInProgress.title).toBe('Task 1');
			expect(taskInProgress.status).toBe('pending');
		});

		it('should handle subtask movement restrictions', async () => {
			// Create data with subtasks
			const dataWithSubtasks = {
				backlog: {
					tasks: [
						{
							id: 1,
							title: 'Task 1',
							dependencies: [],
							status: 'pending',
							subtasks: [
								{ id: '1.1', title: 'Subtask 1.1', status: 'pending' },
								{ id: '1.2', title: 'Subtask 1.2', status: 'pending' }
							]
						}
					]
				},
				'in-progress': {
					tasks: [
						{ id: 2, title: 'Task 2', dependencies: [], status: 'in-progress' }
					]
				}
			};

			// Write subtask data to mock file system
			mockFs({
				[testDataDir]: {
					'tasks.json': JSON.stringify(dataWithSubtasks, null, 2)
				}
			});

			// Try to move a subtask directly - this should actually work (converts subtask to task)
			const result = await moveTask(
				testTasksPath,
				'1.1', // Subtask ID
				'5', // New task ID
				false,
				{ tag: 'backlog' }
			);

			// Verify the subtask was converted to a task
			expect(result).toBeDefined();
			expect(result.message).toContain('Converted subtask 1.1 to task 5');

			// Verify the subtask was removed from the parent and converted to a standalone task
			const updatedData = readJSON(testTasksPath, null, 'backlog');
			const rawData = updatedData._rawTaggedData || updatedData;
			const task1 = rawData.backlog?.tasks?.find((t) => t.id === 1);
			const newTask5 = rawData.backlog?.tasks?.find((t) => t.id === 5);

			expect(task1).toBeDefined();
			expect(task1.subtasks).toHaveLength(1); // Only 1.2 remains
			expect(task1.subtasks[0].id).toBe(2);

			expect(newTask5).toBeDefined();
			expect(newTask5.title).toBe('Subtask 1.1');
			expect(newTask5.status).toBe('pending');
		});

		it('should handle missing source tag errors', async () => {
			// Try to move from a non-existent tag
			await expect(
				moveTasksBetweenTags(
					testTasksPath,
					['1'],
					'non-existent-tag', // Source tag doesn't exist
					'in-progress',
					{ withDependencies: false, ignoreDependencies: false },
					{ projectRoot: testDataDir }
				)
			).rejects.toThrow();
		});

		it('should handle missing task ID errors', async () => {
			// Try to move a non-existent task
			await expect(
				moveTask(
					testTasksPath,
					'999', // Non-existent task ID
					'5',
					false,
					{ tag: 'backlog' }
				)
			).rejects.toThrow();
		});

		it('should handle ignoreDependencies option correctly', async () => {
			// Create data with dependencies
			const dataWithDependencies = {
				backlog: {
					tasks: [
						{ id: 1, title: 'Task 1', dependencies: [2], status: 'pending' },
						{ id: 2, title: 'Task 2', dependencies: [], status: 'pending' }
					]
				},
				'in-progress': {
					tasks: [
						{ id: 3, title: 'Task 3', dependencies: [], status: 'in-progress' }
					]
				}
			};

			// Write dependency data to mock file system
			mockFs({
				[testDataDir]: {
					'tasks.json': JSON.stringify(dataWithDependencies, null, 2)
				}
			});

			// Move Task 1 while ignoring its dependencies
			const result = await moveTasksBetweenTags(
				testTasksPath,
				['1'], // Only Task 1
				'backlog',
				'in-progress',
				{ withDependencies: false, ignoreDependencies: true },
				{ projectRoot: testDataDir }
			);

			expect(result).toBeDefined();
			expect(result.movedTasks).toHaveLength(1);

			// Verify Task 1 moved but Task 2 stayed
			const updatedData = readJSON(testTasksPath, null, 'backlog');
			const rawData = updatedData._rawTaggedData || updatedData;
			expect(rawData.backlog.tasks).toHaveLength(1); // Task 2 remains
			expect(rawData['in-progress'].tasks).toHaveLength(2); // Task 3 + Task 1

			// Verify Task 1 has no dependencies (they were ignored)
			const movedTask = rawData['in-progress'].tasks.find((t) => t.id === 1);
			expect(movedTask.dependencies).toEqual([]);
		});
	});

	describe('Complex Dependency Scenarios', () => {
		beforeAll(() => {
			// Document the mock-fs limitation for complex dependency scenarios
			console.warn(
				'⚠️ Complex dependency tests are skipped due to mock-fs limitations. ' +
					'These tests require real filesystem operations for proper dependency resolution. ' +
					'Consider using real temporary filesystem setup for these scenarios.'
			);
		});

		it.skip('should handle dependency conflicts during cross-tag moves', async () => {
			// For now, skip this test as the mock setup is not working correctly
			// TODO: Fix mock-fs setup for complex dependency scenarios
		});

		it.skip('should handle withDependencies option correctly', async () => {
			// For now, skip this test as the mock setup is not working correctly
			// TODO: Fix mock-fs setup for complex dependency scenarios
		});
	});

	describe('Complex Dependency Integration Tests with Mock-fs', () => {
		const complexTestData = {
			backlog: {
				tasks: [
					{ id: 1, title: 'Task 1', dependencies: [2, 3], status: 'pending' },
					{ id: 2, title: 'Task 2', dependencies: [4], status: 'pending' },
					{ id: 3, title: 'Task 3', dependencies: [], status: 'pending' },
					{ id: 4, title: 'Task 4', dependencies: [], status: 'pending' }
				]
			},
			'in-progress': {
				tasks: [
					{ id: 5, title: 'Task 5', dependencies: [], status: 'in-progress' }
				]
			}
		};

		beforeEach(() => {
			// Set up mock file system with complex dependency data
			mockFs({
				[testDataDir]: {
					'tasks.json': JSON.stringify(complexTestData, null, 2)
				}
			});
		});

		afterEach(() => {
			// Clean up mock file system
			mockFs.restore();
		});

		it('should handle dependency conflicts during cross-tag moves using actual move functions', async () => {
			// Test moving Task 1 which has dependencies on Tasks 2 and 3
			// This should fail because Task 1 depends on Tasks 2 and 3 which are in the same tag
			await expect(
				moveTasksBetweenTags(
					testTasksPath,
					['1'], // Task 1 with dependencies
					'backlog',
					'in-progress',
					{ withDependencies: false, ignoreDependencies: false },
					{ projectRoot: testDataDir }
				)
			).rejects.toThrow(
				'Cannot move tasks: 2 cross-tag dependency conflicts found'
			);
		});

		it('should handle withDependencies option correctly using actual move functions', async () => {
			// Test moving Task 1 with its dependencies (Tasks 2 and 3)
			// Task 2 also depends on Task 4, so all 4 tasks should move
			const result = await moveTasksBetweenTags(
				testTasksPath,
				['1'], // Task 1
				'backlog',
				'in-progress',
				{ withDependencies: true, ignoreDependencies: false },
				{ projectRoot: testDataDir }
			);

			// Verify the move operation was successful
			expect(result).toBeDefined();
			expect(result.message).toContain(
				'Successfully moved 4 tasks from "backlog" to "in-progress"'
			);
			expect(result.movedTasks).toHaveLength(4); // Task 1 + Tasks 2, 3, 4

			// Read the updated data to verify all dependent tasks moved
			const updatedData = readJSON(testTasksPath, null, 'backlog');
			const rawData = updatedData._rawTaggedData || updatedData;

			// Verify all tasks moved from backlog
			expect(rawData.backlog?.tasks || []).toHaveLength(0); // All tasks moved

			// Verify all tasks are now in in-progress
			expect(rawData['in-progress']?.tasks || []).toHaveLength(5); // Task 5 + Tasks 1, 2, 3, 4

			// Verify dependency relationships are preserved
			const task1 = rawData['in-progress']?.tasks?.find((t) => t.id === 1);
			const task2 = rawData['in-progress']?.tasks?.find((t) => t.id === 2);
			const task3 = rawData['in-progress']?.tasks?.find((t) => t.id === 3);
			const task4 = rawData['in-progress']?.tasks?.find((t) => t.id === 4);

			expect(task1?.dependencies).toEqual([2, 3]);
			expect(task2?.dependencies).toEqual([4]);
			expect(task3?.dependencies).toEqual([]);
			expect(task4?.dependencies).toEqual([]);
		});

		it('should handle circular dependency detection using actual move functions', async () => {
			// Create data with circular dependencies
			const circularData = {
				backlog: {
					tasks: [
						{ id: 1, title: 'Task 1', dependencies: [2], status: 'pending' },
						{ id: 2, title: 'Task 2', dependencies: [3], status: 'pending' },
						{ id: 3, title: 'Task 3', dependencies: [1], status: 'pending' } // Circular dependency
					]
				},
				'in-progress': {
					tasks: [
						{ id: 4, title: 'Task 4', dependencies: [], status: 'in-progress' }
					]
				}
			};

			// Set up mock file system with circular dependency data
			mockFs({
				[testDataDir]: {
					'tasks.json': JSON.stringify(circularData, null, 2)
				}
			});

			// Attempt to move Task 1 with dependencies should fail due to circular dependency
			await expect(
				moveTasksBetweenTags(
					testTasksPath,
					['1'],
					'backlog',
					'in-progress',
					{ withDependencies: true, ignoreDependencies: false },
					{ projectRoot: testDataDir }
				)
			).rejects.toThrow();
		});

		it('should handle nested dependency chains using actual move functions', async () => {
			// Create data with nested dependency chains
			const nestedData = {
				backlog: {
					tasks: [
						{ id: 1, title: 'Task 1', dependencies: [2], status: 'pending' },
						{ id: 2, title: 'Task 2', dependencies: [3], status: 'pending' },
						{ id: 3, title: 'Task 3', dependencies: [4], status: 'pending' },
						{ id: 4, title: 'Task 4', dependencies: [], status: 'pending' }
					]
				},
				'in-progress': {
					tasks: [
						{ id: 5, title: 'Task 5', dependencies: [], status: 'in-progress' }
					]
				}
			};

			// Set up mock file system with nested dependency data
			mockFs({
				[testDataDir]: {
					'tasks.json': JSON.stringify(nestedData, null, 2)
				}
			});

			// Test moving Task 1 with all its nested dependencies
			const result = await moveTasksBetweenTags(
				testTasksPath,
				['1'], // Task 1
				'backlog',
				'in-progress',
				{ withDependencies: true, ignoreDependencies: false },
				{ projectRoot: testDataDir }
			);

			// Verify the move operation was successful
			expect(result).toBeDefined();
			expect(result.message).toContain(
				'Successfully moved 4 tasks from "backlog" to "in-progress"'
			);
			expect(result.movedTasks).toHaveLength(4); // Tasks 1, 2, 3, 4

			// Read the updated data to verify all tasks moved
			const updatedData = readJSON(testTasksPath, null, 'backlog');
			const rawData = updatedData._rawTaggedData || updatedData;

			// Verify all tasks moved from backlog
			expect(rawData.backlog?.tasks || []).toHaveLength(0); // All tasks moved

			// Verify all tasks are now in in-progress
			expect(rawData['in-progress']?.tasks || []).toHaveLength(5); // Task 5 + Tasks 1, 2, 3, 4

			// Verify dependency relationships are preserved
			const task1 = rawData['in-progress']?.tasks?.find((t) => t.id === 1);
			const task2 = rawData['in-progress']?.tasks?.find((t) => t.id === 2);
			const task3 = rawData['in-progress']?.tasks?.find((t) => t.id === 3);
			const task4 = rawData['in-progress']?.tasks?.find((t) => t.id === 4);

			expect(task1?.dependencies).toEqual([2]);
			expect(task2?.dependencies).toEqual([3]);
			expect(task3?.dependencies).toEqual([4]);
			expect(task4?.dependencies).toEqual([]);
		});

		it('should handle cross-tag dependency resolution using actual move functions', async () => {
			// Create data with cross-tag dependencies
			const crossTagData = {
				backlog: {
					tasks: [
						{ id: 1, title: 'Task 1', dependencies: [5], status: 'pending' }, // Depends on task in in-progress
						{ id: 2, title: 'Task 2', dependencies: [], status: 'pending' }
					]
				},
				'in-progress': {
					tasks: [
						{ id: 5, title: 'Task 5', dependencies: [], status: 'in-progress' }
					]
				}
			};

			// Set up mock file system with cross-tag dependency data
			mockFs({
				[testDataDir]: {
					'tasks.json': JSON.stringify(crossTagData, null, 2)
				}
			});

			// Test moving Task 1 which depends on Task 5 in another tag
			const result = await moveTasksBetweenTags(
				testTasksPath,
				['1'], // Task 1
				'backlog',
				'in-progress',
				{ withDependencies: false, ignoreDependencies: false },
				{ projectRoot: testDataDir }
			);

			// Verify the move operation was successful
			expect(result).toBeDefined();
			expect(result.message).toContain(
				'Successfully moved 1 tasks from "backlog" to "in-progress"'
			);

			// Read the updated data to verify the move actually happened
			const updatedData = readJSON(testTasksPath, null, 'backlog');
			const rawData = updatedData._rawTaggedData || updatedData;

			// Verify Task 1 is no longer in backlog
			const taskInBacklog = rawData.backlog?.tasks?.find((t) => t.id === 1);
			expect(taskInBacklog).toBeUndefined();

			// Verify Task 1 is now in in-progress with its dependency preserved
			const taskInProgress = rawData['in-progress']?.tasks?.find(
				(t) => t.id === 1
			);
			expect(taskInProgress).toBeDefined();
			expect(taskInProgress.title).toBe('Task 1');
			expect(taskInProgress.dependencies).toEqual([5]); // Cross-tag dependency preserved
		});
	});
});
@@ -1,97 +0,0 @@
# Task Master Progress Testing Guide

Quick reference for testing streaming/non-streaming functionality with token tracking.

## 🎯 Test Modes

1. **MCP Streaming** - Has `reportProgress` + `mcpLog`, shows emoji indicators (🔴🟠🟢)
2. **CLI Streaming** - No `reportProgress`, shows terminal progress bars
3. **Non-Streaming** - No progress reporting, single response (see the dispatch sketch below)
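
The mode is effectively selected by which capabilities the caller passes in. A minimal sketch of that dispatch, assuming a context object with optional `reportProgress` and `mcpLog` fields (field names taken from the mode descriptions above; the actual internal wiring may differ):

```javascript
// Sketch only: derive the progress mode from caller-provided capabilities.
// Assumes a context shaped like { reportProgress?, mcpLog? }.
function resolveProgressMode(context = {}) {
	if (typeof context.reportProgress === 'function' && context.mcpLog) {
		return 'mcp-streaming'; // emoji indicators, progress via reportProgress()
	}
	if (process.stdout.isTTY) {
		return 'cli-streaming'; // terminal progress bars
	}
	return 'non-streaming'; // single response, no incremental progress
}

// resolveProgressMode({ reportProgress: async () => {}, mcpLog: console })
// => 'mcp-streaming'
```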

## 🚀 Quick Commands

```bash
# Test Scripts (accept: mcp-streaming, cli-streaming, non-streaming, both, all)
node test-parse-prd.js [mode]
node test-analyze-complexity.js [mode]
node test-expand.js [mode] [num_subtasks]
node test-expand-all.js [mode] [num_subtasks]
node parse-prd-analysis.js [accuracy|complexity|all]

# CLI Commands
node scripts/dev.js parse-prd test.txt # Local dev (streaming)
node scripts/dev.js analyze-complexity --research
node scripts/dev.js expand --id=1 --force
node scripts/dev.js expand --all --force

task-master [command] # Global CLI (non-streaming)
```

## ✅ Success Indicators

### Indicators

- **Priority**: 🔴🔴🔴 (high), 🟠🟠⚪ (medium), 🟢⚪⚪ (low)
- **Complexity**: ●●● (7-10), ●●○ (4-6), ●○○ (1-3)
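
A sketch of how those dot scales could be produced (assumed mapping; the real renderer lives in the UI indicators module and may differ):

```javascript
// Assumed mapping from the scales documented above.
function priorityDots(priority) {
	return { high: '🔴🔴🔴', medium: '🟠🟠⚪', low: '🟢⚪⚪' }[priority] ?? '⚪⚪⚪';
}

function complexityDots(score) {
	if (score >= 7) return '●●●'; // 7-10
	if (score >= 4) return '●●○'; // 4-6
	return '●○○'; // 1-3
}
```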

### Token Format

`Tokens (I/O): 2,150/1,847 ($0.0423)` (~4 chars per token)
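
The same line, reproduced by a small sketch (the ~4-chars-per-token ratio is the guide's own rule of thumb, not an exact tokenizer):

```javascript
// Estimate tokens from text length (~4 chars/token) and format the summary line.
const estimateTokens = (text) => Math.ceil(text.length / 4);

function formatTokenLine(inputTokens, outputTokens, cost) {
	const fmt = (n) => n.toLocaleString('en-US');
	return `Tokens (I/O): ${fmt(inputTokens)}/${fmt(outputTokens)} ($${cost.toFixed(4)})`;
}

console.log(formatTokenLine(2150, 1847, 0.0423));
// Tokens (I/O): 2,150/1,847 ($0.0423)
```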

### Progress Bars

```
Single: Generating subtasks... |████████░░| 80% (4/5)
Dual:   Expanding 3 tasks | Task 2/3 |████████░░| 66%
        Generating 5 subtasks... |██████░░░░| 60%
```
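
A minimal renderer for that bar style (the width and glyphs are assumptions; the real tracker may size the bar to the terminal):

```javascript
// Render a fixed-width bar of █ and ░ plus a percentage and count.
function renderBar(current, total, width = 10) {
	const filled = Math.round((current / total) * width);
	const pct = Math.round((current / total) * 100);
	return `|${'█'.repeat(filled)}${'░'.repeat(width - filled)}| ${pct}% (${current}/${total})`;
}

console.log(`Generating subtasks... ${renderBar(4, 5)}`);
// Generating subtasks... |████████░░| 80% (4/5)
```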

### Fractional Progress

`(completedTasks + currentSubtask/totalSubtasks) / totalTasks`

Example: 33% → 46% → 60% → 66% → 80% → 93% → 100%
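
Spelled out in code, the formula reproduces that example sequence for 3 tasks of 5 subtasks each (flooring matches the percentages shown):

```javascript
// Fractional progress: completed tasks plus the fraction of the current task.
const pct = (completedTasks, currentSubtask, totalSubtasks, totalTasks) =>
	Math.floor(((completedTasks + currentSubtask / totalSubtasks) / totalTasks) * 100);

// 3 tasks, 5 subtasks each, reporting every other subtask:
const sequence = [
	pct(1, 0, 5, 3), // 33
	pct(1, 2, 5, 3), // 46
	pct(1, 4, 5, 3), // 60
	pct(2, 0, 5, 3), // 66
	pct(2, 2, 5, 3), // 80
	pct(2, 4, 5, 3), // 93
	pct(3, 0, 5, 3)  // 100
];
```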

## 🐛 Quick Fixes

| Issue | Fix |
|-------|-----|
| No streaming | Check `reportProgress` is passed |
| NaN% progress | Filter duplicate `subtask_progress` events |
| Missing tokens | Check `.env` has API keys |
| Broken bars | Keep terminal width > 80 |
| projectRoot.split | Use `projectRoot`, not `session` |

```bash
# Debug
TASKMASTER_DEBUG=true node test-expand.js
npm run lint
```

## 📊 Benchmarks

- Single task: 10-20s (5 subtasks)
- Expand all: 30-45s (3 tasks)
- Streaming: ~10-20% faster
- Updates: every 2-5s

## 🔄 Test Workflow

```bash
# Quick check
node test-parse-prd.js both && npm test

# Full suite (before release)
for test in parse-prd analyze-complexity expand expand-all; do
  node test-$test.js all
done
node parse-prd-analysis.js all
npm test
```

## 🎯 MCP Tool Example

```javascript
{
	"tool": "parse_prd",
	"args": {
		"input": "prd.txt",
		"numTasks": "8",
		"force": true,
		"projectRoot": "/path/to/project"
	}
}
```
@@ -1,334 +0,0 @@
#!/usr/bin/env node

/**
 * parse-prd-analysis.js
 *
 * Detailed timing and accuracy analysis for parse-prd progress reporting.
 * Tests different task generation complexities using the sample PRD from fixtures.
 * Validates real-time characteristics and focuses on progress behavior and performance metrics.
 * Uses tests/fixtures/sample-prd.txt for consistent testing across all scenarios.
 */

import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';

import parsePRD from '../../../scripts/modules/task-manager/parse-prd/index.js';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Use the same project root as the main test file
const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..');

/**
 * Get the path to the sample PRD file
 */
function getSamplePRDPath() {
	return path.resolve(PROJECT_ROOT, 'tests', 'fixtures', 'sample-prd.txt');
}

/**
 * Detailed progress reporter for timing analysis
 */
class DetailedProgressReporter {
	constructor() {
		this.progressHistory = [];
		this.startTime = Date.now();
		this.lastProgress = 0;
	}

	async reportProgress(data) {
		const timestamp = Date.now() - this.startTime;
		const timeSinceLastProgress =
			this.progressHistory.length > 0
				? timestamp -
					this.progressHistory[this.progressHistory.length - 1].timestamp
				: timestamp;

		const entry = {
			timestamp,
			timeSinceLastProgress,
			...data
		};

		this.progressHistory.push(entry);

		const percentage = data.total
			? Math.round((data.progress / data.total) * 100)
			: 0;
		console.log(
			chalk.blue(`[${timestamp}ms] (+${timeSinceLastProgress}ms)`),
			chalk.green(`${percentage}%`),
			`(${data.progress}/${data.total})`,
			chalk.yellow(data.message)
		);
	}

	getAnalysis() {
		if (this.progressHistory.length === 0) return null;

		const totalDuration =
			this.progressHistory[this.progressHistory.length - 1].timestamp;
		const intervals = this.progressHistory
			.slice(1)
			.map((entry) => entry.timeSinceLastProgress);
		const avgInterval =
			intervals.length > 0
				? intervals.reduce((a, b) => a + b, 0) / intervals.length
				: 0;
		const minInterval = intervals.length > 0 ? Math.min(...intervals) : 0;
		const maxInterval = intervals.length > 0 ? Math.max(...intervals) : 0;

		return {
			totalReports: this.progressHistory.length,
			totalDuration,
			avgInterval: Math.round(avgInterval),
			minInterval,
			maxInterval,
			intervals
		};
	}

	printDetailedAnalysis() {
		const analysis = this.getAnalysis();
		if (!analysis) {
			console.log(chalk.red('No progress data to analyze'));
			return;
		}

		console.log(chalk.cyan('\n=== Detailed Progress Analysis ==='));
		console.log(`Total Progress Reports: ${analysis.totalReports}`);
		console.log(`Total Duration: ${analysis.totalDuration}ms`);
		console.log(`Average Interval: ${analysis.avgInterval}ms`);
		console.log(`Min Interval: ${analysis.minInterval}ms`);
		console.log(`Max Interval: ${analysis.maxInterval}ms`);

		console.log(chalk.cyan('\n=== Progress Timeline ==='));
		this.progressHistory.forEach((entry, index) => {
			const percentage = entry.total
				? Math.round((entry.progress / entry.total) * 100)
				: 0;
			const intervalText =
				index > 0 ? ` (+${entry.timeSinceLastProgress}ms)` : '';
			console.log(
				`${index + 1}. [${entry.timestamp}ms]${intervalText} ${percentage}% - ${entry.message}`
			);
		});

		// Check for real-time characteristics
		console.log(chalk.cyan('\n=== Real-time Characteristics ==='));
		const hasRealTimeUpdates = analysis.intervals.some(
			(interval) => interval < 10000
		); // Less than 10s between at least two reports
		const hasConsistentUpdates = analysis.intervals.length > 3;
		const hasProgressiveUpdates = this.progressHistory.every(
			(entry, index) =>
				index === 0 ||
				entry.progress >= this.progressHistory[index - 1].progress
		);

		console.log(`✅ Real-time updates: ${hasRealTimeUpdates ? 'YES' : 'NO'}`);
		console.log(
			`✅ Consistent updates: ${hasConsistentUpdates ? 'YES' : 'NO'}`
		);
		console.log(
			`✅ Progressive updates: ${hasProgressiveUpdates ? 'YES' : 'NO'}`
		);
	}
}

/**
 * Get PRD path for complexity testing.
 * Complexity testing uses the same sample PRD but requests different numbers
 * of tasks. This is more realistic, since the AI generates different
 * complexity based on task count.
 */
function getPRDPathForComplexity(complexity = 'medium') {
	// Always use the same sample PRD file - complexity is controlled by task count
	return getSamplePRDPath();
}

/**
 * Test streaming with different task generation complexities.
 * Uses the same sample PRD but requests different numbers of tasks to test
 * complexity scaling.
 */
async function testStreamingComplexity() {
	console.log(
		chalk.cyan(
			'🧪 Testing Streaming with Different Task Generation Complexities\n'
		)
	);

	const complexities = ['simple', 'medium', 'complex'];
	const results = [];

	for (const complexity of complexities) {
		console.log(
			chalk.yellow(`\n--- Testing ${complexity.toUpperCase()} Complexity ---`)
		);

		const testPRDPath = getPRDPathForComplexity(complexity);
		const testTasksPath = path.join(__dirname, `test-tasks-${complexity}.json`);

		// Clean up any existing file
		if (fs.existsSync(testTasksPath)) {
			fs.unlinkSync(testTasksPath);
		}

		const progressReporter = new DetailedProgressReporter();
		const expectedTasks =
			complexity === 'simple' ? 3 : complexity === 'medium' ? 6 : 10;

		try {
			const startTime = Date.now();

			await parsePRD(testPRDPath, testTasksPath, expectedTasks, {
				force: true,
				append: false,
				research: false,
				reportProgress: progressReporter.reportProgress.bind(progressReporter),
				projectRoot: PROJECT_ROOT
			});

			const endTime = Date.now();
			const duration = endTime - startTime;

			console.log(
				chalk.green(`✅ ${complexity} complexity completed in ${duration}ms`)
			);

			progressReporter.printDetailedAnalysis();

			results.push({
				complexity,
				duration,
				analysis: progressReporter.getAnalysis()
			});
		} catch (error) {
			console.error(
				chalk.red(`❌ ${complexity} complexity failed: ${error.message}`)
			);
			results.push({
				complexity,
				error: error.message
			});
		} finally {
			// Clean up (only the tasks file, not the PRD, since we're using the fixture)
			if (fs.existsSync(testTasksPath)) fs.unlinkSync(testTasksPath);
		}
	}

	// Summary
	console.log(chalk.cyan('\n=== Complexity Test Summary ==='));
	results.forEach((result) => {
		if (result.error) {
			console.log(`${result.complexity}: ❌ FAILED - ${result.error}`);
		} else {
			console.log(
				`${result.complexity}: ✅ ${result.duration}ms (${result.analysis.totalReports} reports)`
			);
		}
	});

	return results;
}

/**
 * Test progress accuracy
 */
async function testProgressAccuracy() {
	console.log(chalk.cyan('🧪 Testing Progress Accuracy\n'));

	const testPRDPath = getSamplePRDPath();
	const testTasksPath = path.join(__dirname, 'test-accuracy-tasks.json');

	// Clean up any existing file
	if (fs.existsSync(testTasksPath)) {
		fs.unlinkSync(testTasksPath);
	}

	const progressReporter = new DetailedProgressReporter();

	try {
		await parsePRD(testPRDPath, testTasksPath, 8, {
			force: true,
			append: false,
			research: false,
			reportProgress: progressReporter.reportProgress.bind(progressReporter),
			projectRoot: PROJECT_ROOT
		});

		console.log(chalk.green('✅ Progress accuracy test completed'));
		progressReporter.printDetailedAnalysis();

		// Additional accuracy checks
		const analysis = progressReporter.getAnalysis();
		console.log(chalk.cyan('\n=== Accuracy Metrics ==='));
		console.log(
			`Progress consistency: ${analysis.intervals.every((i) => i > 0) ? 'PASS' : 'FAIL'}`
		);
		console.log(
			`Reasonable intervals: ${analysis.intervals.every((i) => i < 30000) ? 'PASS' : 'FAIL'}`
		);
		console.log(
			`Expected report count: ${analysis.totalReports >= 8 ? 'PASS' : 'FAIL'}`
		);
	} catch (error) {
		console.error(
			chalk.red(`❌ Progress accuracy test failed: ${error.message}`)
		);
	} finally {
		// Clean up (only the tasks file, not the PRD, since we're using the fixture)
		if (fs.existsSync(testTasksPath)) fs.unlinkSync(testTasksPath);
	}
}

/**
 * Main test runner
 */
async function main() {
	const args = process.argv.slice(2);
	const testType = args[0] || 'accuracy';

	console.log(chalk.bold.cyan('🚀 Task Master Detailed Progress Tests\n'));
	console.log(chalk.blue(`Test type: ${testType}\n`));

	try {
		switch (testType.toLowerCase()) {
			case 'accuracy':
				await testProgressAccuracy();
				break;

			case 'complexity':
				await testStreamingComplexity();
				break;

			case 'all':
				console.log(chalk.yellow('Running all detailed tests...\n'));
				await testProgressAccuracy();
				console.log('\n' + '='.repeat(60) + '\n');
				await testStreamingComplexity();
				break;

			default:
				console.log(chalk.red(`Unknown test type: ${testType}`));
				console.log(
					chalk.yellow('Available options: accuracy, complexity, all')
				);
				process.exit(1);
		}

		console.log(chalk.green('\n🎉 Detailed tests completed successfully!'));
	} catch (error) {
		console.error(chalk.red(`\n❌ Test failed: ${error.message}`));
		console.error(chalk.red(error.stack));
		process.exit(1);
	}
}

// Run if called directly
if (import.meta.url === `file://${process.argv[1]}`) {
	// Top-level await requires ESM (Node ≥14.8); this file is an ES module
	await main();
}
@@ -1,577 +0,0 @@
#!/usr/bin/env node

/**
 * test-parse-prd.js
 *
 * Comprehensive integration test for parse-prd functionality.
 * Tests MCP streaming, CLI streaming, and non-streaming modes.
 * Validates token tracking, message formats, and priority indicators across all contexts.
 */

import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';

// Import the parse-prd function
import parsePRD from '../../../scripts/modules/task-manager/parse-prd/index.js';

// Get current directory
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Get project root (three levels up from tests/manual/progress/)
const PROJECT_ROOT = path.resolve(__dirname, '..', '..', '..');

/**
 * Mock progress reporter for testing
 */
class MockProgressReporter {
	constructor(enableDebug = true) {
		this.enableDebug = enableDebug;
		this.progressHistory = [];
		this.startTime = Date.now();
	}

	async reportProgress(data) {
		const timestamp = Date.now() - this.startTime;

		const entry = {
			timestamp,
			...data
		};

		this.progressHistory.push(entry);

		if (this.enableDebug) {
			const percentage = data.total
				? Math.round((data.progress / data.total) * 100)
				: 0;
			console.log(
				chalk.blue(`[${timestamp}ms]`),
				chalk.green(`${percentage}%`),
				chalk.yellow(data.message)
			);
		}
	}

	getProgressHistory() {
		return this.progressHistory;
	}

	printSummary() {
		console.log(chalk.green('\n=== Progress Summary ==='));
		console.log(`Total progress reports: ${this.progressHistory.length}`);
		console.log(
			`Duration: ${this.progressHistory[this.progressHistory.length - 1]?.timestamp || 0}ms`
		);

		this.progressHistory.forEach((entry, index) => {
			const percentage = entry.total
				? Math.round((entry.progress / entry.total) * 100)
				: 0;
			console.log(
				`${index + 1}. [${entry.timestamp}ms] ${percentage}% - ${entry.message}`
			);
		});

		// Check for expected message formats
		const hasInitialMessage = this.progressHistory.some(
			(entry) =>
				entry.message.includes('Starting PRD analysis') &&
				entry.message.includes('Input:') &&
				entry.message.includes('tokens')
		);
		// Flexible regex to tolerate whitespace variations in task messages
		const hasTaskMessages = this.progressHistory.some((entry) =>
			/^[🔴🟠🟢⚪]{3} Task \d+\/\d+ - .+ \| ~Output: \d+ tokens/u.test(
				entry.message.trim()
			)
		);

		const hasCompletionMessage = this.progressHistory.some(
			(entry) =>
				entry.message.includes('✅ Task Generation Completed') &&
				entry.message.includes('Tokens (I/O):')
		);

		console.log(chalk.cyan('\n=== Message Format Validation ==='));
		console.log(
			`✅ Initial message format: ${hasInitialMessage ? 'PASS' : 'FAIL'}`
		);
		console.log(`✅ Task message format: ${hasTaskMessages ? 'PASS' : 'FAIL'}`);
		console.log(
			`✅ Completion message format: ${hasCompletionMessage ? 'PASS' : 'FAIL'}`
		);
	}
}

/**
 * Mock MCP logger for testing
 */
class MockMCPLogger {
	constructor(enableDebug = true) {
		this.enableDebug = enableDebug;
		this.logs = [];
	}

	_log(level, ...args) {
		const entry = {
			level,
			timestamp: Date.now(),
			message: args.join(' ')
		};
		this.logs.push(entry);

		if (this.enableDebug) {
			const color =
				{
					info: chalk.blue,
					warn: chalk.yellow,
					error: chalk.red,
					debug: chalk.gray,
					success: chalk.green
				}[level] || chalk.white;

			console.log(color(`[${level.toUpperCase()}]`), ...args);
		}
	}

	info(...args) {
		this._log('info', ...args);
	}
	warn(...args) {
		this._log('warn', ...args);
	}
	error(...args) {
		this._log('error', ...args);
	}
	debug(...args) {
		this._log('debug', ...args);
	}
	success(...args) {
		this._log('success', ...args);
	}

	getLogs() {
		return this.logs;
	}
}

/**
 * Get the path to the sample PRD file
 */
function getSamplePRDPath() {
	return path.resolve(PROJECT_ROOT, 'tests', 'fixtures', 'sample-prd.txt');
}

/**
 * Create a basic test config file
 */
function createTestConfig() {
	const testConfig = {
		models: {
			main: {
				provider: 'anthropic',
				modelId: 'claude-3-5-sonnet',
				maxTokens: 64000,
				temperature: 0.2
			},
			research: {
				provider: 'perplexity',
				modelId: 'sonar-pro',
				maxTokens: 8700,
				temperature: 0.1
			},
			fallback: {
				provider: 'anthropic',
				modelId: 'claude-3-5-sonnet',
				maxTokens: 64000,
				temperature: 0.2
			}
		},
		global: {
			logLevel: 'info',
			debug: false,
			defaultSubtasks: 5,
			defaultPriority: 'medium',
			projectName: 'Task Master Test',
			ollamaBaseURL: 'http://localhost:11434/api',
			bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
		}
	};

	const taskmasterDir = path.join(__dirname, '.taskmaster');
	const configPath = path.join(taskmasterDir, 'config.json');

	// Create .taskmaster directory if it doesn't exist
	if (!fs.existsSync(taskmasterDir)) {
		fs.mkdirSync(taskmasterDir, { recursive: true });
	}

	fs.writeFileSync(configPath, JSON.stringify(testConfig, null, 2));
	return configPath;
}

/**
 * Set up test files and configuration
 */
function setupTestFiles(testName) {
	const testPRDPath = getSamplePRDPath();
	const testTasksPath = path.join(__dirname, `test-${testName}-tasks.json`);
	const configPath = createTestConfig();

	// Clean up existing files
	if (fs.existsSync(testTasksPath)) {
		fs.unlinkSync(testTasksPath);
	}

	return { testPRDPath, testTasksPath, configPath };
}

/**
 * Clean up test files
 */
function cleanupTestFiles(testTasksPath, configPath) {
	if (fs.existsSync(testTasksPath)) fs.unlinkSync(testTasksPath);
	if (fs.existsSync(configPath)) fs.unlinkSync(configPath);
}

/**
 * Run parsePRD with configurable options
 */
async function runParsePRD(testPRDPath, testTasksPath, numTasks, options = {}) {
	const startTime = Date.now();

	const result = await parsePRD(testPRDPath, testTasksPath, numTasks, {
		force: true,
		append: false,
		research: false,
		projectRoot: PROJECT_ROOT,
		...options
	});

	const endTime = Date.now();
	const duration = endTime - startTime;

	return { result, duration };
}

/**
 * Verify task file existence and structure
 */
function verifyTaskResults(testTasksPath) {
	if (!fs.existsSync(testTasksPath)) {
		console.log(chalk.red('❌ Tasks file was not created'));
		return false;
	}

	const tasksData = JSON.parse(fs.readFileSync(testTasksPath, 'utf8'));
	console.log(
		chalk.green(`\n✅ Tasks file created with ${tasksData.tasks.length} tasks`)
	);

	// Verify task structure
	const firstTask = tasksData.tasks[0];
	if (firstTask && firstTask.id && firstTask.title && firstTask.description) {
		console.log(chalk.green('✅ Task structure is valid'));
		return true;
	}

	console.log(chalk.red('❌ Task structure is invalid'));
	return false;
}

/**
 * Print MCP-specific logs and validation
 */
function printMCPResults(mcpLogger, progressReporter) {
	// Print progress summary
	progressReporter.printSummary();

	// Print MCP logs
	console.log(chalk.cyan('\n=== MCP Logs ==='));
	const logs = mcpLogger.getLogs();
	logs.forEach((log, index) => {
		const color =
			{
				info: chalk.blue,
				warn: chalk.yellow,
				error: chalk.red,
				debug: chalk.gray,
				success: chalk.green
			}[log.level] || chalk.white;
		console.log(
			`${index + 1}. ${color(`[${log.level.toUpperCase()}]`)} ${log.message}`
		);
	});

	// Verify MCP-specific message formats (should use emoji indicators)
	const hasEmojiIndicators = progressReporter
		.getProgressHistory()
		.some((entry) => /[🔴🟠🟢]/u.test(entry.message));

	console.log(chalk.cyan('\n=== MCP-Specific Validation ==='));
	console.log(
		`✅ Emoji priority indicators: ${hasEmojiIndicators ? 'PASS' : 'FAIL'}`
	);

	return { hasEmojiIndicators, logs };
}

/**
 * Test MCP streaming with proper MCP context
 */
async function testMCPStreaming(numTasks = 10) {
	console.log(chalk.cyan('🧪 Testing MCP Streaming Functionality\n'));

	const { testPRDPath, testTasksPath, configPath } = setupTestFiles('mcp');
	const progressReporter = new MockProgressReporter(true);
	const mcpLogger = new MockMCPLogger(true); // Enable debug for MCP context

	try {
		console.log(chalk.yellow('Starting MCP streaming test...'));

		const { result, duration } = await runParsePRD(
			testPRDPath,
			testTasksPath,
			numTasks,
			{
				reportProgress: progressReporter.reportProgress.bind(progressReporter),
				mcpLog: mcpLogger // Add MCP context - this is the key difference
			}
		);

		console.log(
			chalk.green(`\n✅ MCP streaming test completed in ${duration}ms`)
		);

		const { hasEmojiIndicators, logs } = printMCPResults(
			mcpLogger,
			progressReporter
		);
		verifyTaskResults(testTasksPath);

		return {
			success: true,
			duration,
			progressHistory: progressReporter.getProgressHistory(),
			mcpLogs: logs,
			hasEmojiIndicators,
			result
		};
	} catch (error) {
		console.error(chalk.red(`❌ MCP streaming test failed: ${error.message}`));
		return {
			success: false,
			error: error.message
		};
	} finally {
		cleanupTestFiles(testTasksPath, configPath);
	}
}

/**
 * Test CLI streaming (no reportProgress)
 */
async function testCLIStreaming(numTasks = 10) {
	console.log(chalk.cyan('🧪 Testing CLI Streaming (No Progress Reporter)\n'));

	const { testPRDPath, testTasksPath, configPath } = setupTestFiles('cli');

	try {
		console.log(chalk.yellow('Starting CLI streaming test...'));

		// No reportProgress provided; CLI text mode uses the default streaming reporter
		const { result, duration } = await runParsePRD(
			testPRDPath,
			testTasksPath,
			numTasks
		);

		console.log(
			chalk.green(`\n✅ CLI streaming test completed in ${duration}ms`)
		);

		verifyTaskResults(testTasksPath);

		return {
			success: true,
			duration,
			result
		};
	} catch (error) {
		console.error(chalk.red(`❌ CLI streaming test failed: ${error.message}`));
		return {
			success: false,
			error: error.message
		};
	} finally {
		cleanupTestFiles(testTasksPath, configPath);
	}
}

/**
 * Test non-streaming functionality
 */
async function testNonStreaming(numTasks = 10) {
	console.log(chalk.cyan('🧪 Testing Non-Streaming Functionality\n'));

	const { testPRDPath, testTasksPath, configPath } =
		setupTestFiles('non-streaming');

	try {
		console.log(chalk.yellow('Starting non-streaming test...'));

		// Force non-streaming by not providing reportProgress
		const { result, duration } = await runParsePRD(
			testPRDPath,
			testTasksPath,
			numTasks
		);

		console.log(
			chalk.green(`\n✅ Non-streaming test completed in ${duration}ms`)
		);

		verifyTaskResults(testTasksPath);

		return {
			success: true,
			duration,
			result
		};
	} catch (error) {
		console.error(chalk.red(`❌ Non-streaming test failed: ${error.message}`));
		return {
			success: false,
			error: error.message
		};
	} finally {
		cleanupTestFiles(testTasksPath, configPath);
	}
}

/**
 * Compare results between streaming and non-streaming
 */
function compareResults(streamingResult, nonStreamingResult) {
	console.log(chalk.cyan('\n=== Results Comparison ==='));

	if (!streamingResult.success || !nonStreamingResult.success) {
		console.log(chalk.red('❌ Cannot compare - one or both tests failed'));
		return;
	}

	console.log(`Streaming duration: ${streamingResult.duration}ms`);
	console.log(`Non-streaming duration: ${nonStreamingResult.duration}ms`);

	const durationDiff = Math.abs(
		streamingResult.duration - nonStreamingResult.duration
	);
	const durationDiffPercent = Math.round(
		(durationDiff /
			Math.max(streamingResult.duration, nonStreamingResult.duration)) *
			100
	);

	console.log(
		`Duration difference: ${durationDiff}ms (${durationDiffPercent}%)`
	);

	if (streamingResult.progressHistory) {
		console.log(
			`Streaming progress reports: ${streamingResult.progressHistory.length}`
		);
	}

	console.log(chalk.green('✅ Both methods completed successfully'));
}

/**
 * Main test runner
 */
async function main() {
	const args = process.argv.slice(2);
	const testType = args[0] || 'streaming';
	const numTasks = parseInt(args[1], 10) || 8;

	console.log(chalk.bold.cyan('🚀 Task Master PRD Streaming Tests\n'));
	console.log(chalk.blue(`Test type: ${testType}`));
	console.log(chalk.blue(`Number of tasks: ${numTasks}\n`));

	try {
		switch (testType.toLowerCase()) {
			case 'mcp':
			case 'mcp-streaming':
				await testMCPStreaming(numTasks);
				break;

			case 'cli':
			case 'cli-streaming':
				await testCLIStreaming(numTasks);
				break;

			case 'non-streaming':
			case 'non':
				await testNonStreaming(numTasks);
				break;

			case 'both': {
				console.log(
					chalk.yellow(
						'Running both MCP streaming and non-streaming tests...\n'
					)
				);
				const mcpStreamingResult = await testMCPStreaming(numTasks);
				console.log('\n' + '='.repeat(60) + '\n');
				const nonStreamingResult = await testNonStreaming(numTasks);
				compareResults(mcpStreamingResult, nonStreamingResult);
				break;
			}

			case 'all': {
				console.log(chalk.yellow('Running all test types...\n'));
				const mcpResult = await testMCPStreaming(numTasks);
				console.log('\n' + '='.repeat(60) + '\n');
				const cliResult = await testCLIStreaming(numTasks);
				console.log('\n' + '='.repeat(60) + '\n');
				const nonStreamResult = await testNonStreaming(numTasks);

				console.log(chalk.cyan('\n=== All Tests Summary ==='));
				console.log(
					`MCP Streaming: ${mcpResult.success ? '✅ PASS' : '❌ FAIL'} ${mcpResult.hasEmojiIndicators ? '(✅ Emojis)' : '(❌ No Emojis)'}`
				);
				console.log(
					`CLI Streaming: ${cliResult.success ? '✅ PASS' : '❌ FAIL'}`
				);
				console.log(
					`Non-streaming: ${nonStreamResult.success ? '✅ PASS' : '❌ FAIL'}`
				);
				break;
			}

			default:
				console.log(chalk.red(`Unknown test type: ${testType}`));
				console.log(
					chalk.yellow(
						'Available options: mcp-streaming, cli-streaming, non-streaming, both, all'
					)
				);
				process.exit(1);
		}

		console.log(chalk.green('\n🎉 Tests completed successfully!'));
	} catch (error) {
		console.error(chalk.red(`\n❌ Test failed: ${error.message}`));
		console.error(chalk.red(error.stack));
		process.exit(1);
	}
}

// Run if called directly
if (import.meta.url === `file://${process.argv[1]}`) {
	await main();
}
@@ -4,22 +4,6 @@
 * This file is run before each test suite to set up the test environment.
 */

import path from 'path';
import { fileURLToPath } from 'url';

// Capture the actual original working directory before any changes
const originalWorkingDirectory = process.cwd();

// Store original working directory and project root
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const projectRoot = path.resolve(__dirname, '..');

// Ensure we're always starting from the project root
if (process.cwd() !== projectRoot) {
	process.chdir(projectRoot);
}

// Mock environment variables
process.env.MODEL = 'sonar-pro';
process.env.MAX_TOKENS = '64000';
@@ -37,10 +21,6 @@ process.env.PERPLEXITY_API_KEY = 'test-mock-perplexity-key-for-tests';
// Add global test helpers if needed
global.wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Store original working directory for tests that need it
global.originalWorkingDirectory = originalWorkingDirectory;
global.projectRoot = projectRoot;

// If needed, silence console during tests
if (process.env.SILENCE_CONSOLE === 'true') {
	global.console = {
@@ -391,7 +391,7 @@ describe('Unified AI Services', () => {
				expect.stringContaining('Service call failed for role main')
			);
			expect(mockLog).toHaveBeenCalledWith(
				'debug',
				'info',
				expect.stringContaining('New AI service call with role: fallback')
			);
		});
@@ -435,7 +435,7 @@ describe('Unified AI Services', () => {
				expect.stringContaining('Service call failed for role fallback')
			);
			expect(mockLog).toHaveBeenCalledWith(
				'debug',
				'info',
				expect.stringContaining('New AI service call with role: research')
			);
		});
@@ -9,8 +9,7 @@ import {
	removeDuplicateDependencies,
	cleanupSubtaskDependencies,
	ensureAtLeastOneIndependentSubtask,
	validateAndFixDependencies,
	canMoveWithDependencies
	validateAndFixDependencies
} from '../../scripts/modules/dependency-manager.js';
import * as utils from '../../scripts/modules/utils.js';
import { sampleTasks } from '../fixtures/sample-tasks.js';
@@ -811,113 +810,4 @@ describe('Dependency Manager Module', () => {
			);
		});
	});

	describe('canMoveWithDependencies', () => {
		it('should return canMove: false when conflicts exist', () => {
			const allTasks = [
				{
					id: 1,
					tag: 'source',
					dependencies: [2],
					title: 'Task 1'
				},
				{
					id: 2,
					tag: 'other',
					dependencies: [],
					title: 'Task 2'
				}
			];

			const result = canMoveWithDependencies('1', 'source', 'target', allTasks);

			expect(result.canMove).toBe(false);
			expect(result.conflicts).toBeDefined();
			expect(result.conflicts.length).toBeGreaterThan(0);
			expect(result.dependentTaskIds).toBeDefined();
		});

		it('should return canMove: true when no conflicts exist', () => {
			const allTasks = [
				{
					id: 1,
					tag: 'source',
					dependencies: [],
					title: 'Task 1'
				},
				{
					id: 2,
					tag: 'target',
					dependencies: [],
					title: 'Task 2'
				}
			];

			const result = canMoveWithDependencies('1', 'source', 'target', allTasks);

			expect(result.canMove).toBe(true);
			expect(result.conflicts).toBeDefined();
			expect(result.conflicts.length).toBe(0);
			expect(result.dependentTaskIds).toBeDefined();
			expect(result.dependentTaskIds.length).toBe(0);
		});

		it('should handle subtask lookup correctly', () => {
			const allTasks = [
				{
					id: 1,
					tag: 'source',
					dependencies: [],
					title: 'Parent Task',
					subtasks: [
						{
							id: 1,
							dependencies: [2],
							title: 'Subtask 1'
						}
					]
				},
				{
					id: 2,
					tag: 'other',
					dependencies: [],
					title: 'Task 2'
				}
			];

			const result = canMoveWithDependencies(
				'1.1',
				'source',
				'target',
				allTasks
			);

			expect(result.canMove).toBe(false);
			expect(result.conflicts).toBeDefined();
			expect(result.conflicts.length).toBeGreaterThan(0);
		});

		it('should return error when task not found', () => {
			const allTasks = [
				{
					id: 1,
					tag: 'source',
					dependencies: [],
					title: 'Task 1'
				}
			];

			const result = canMoveWithDependencies(
				'999',
				'source',
				'target',
				allTasks
			);

			expect(result.canMove).toBe(false);
			expect(result.error).toBe('Task not found');
			expect(result.dependentTaskIds).toEqual([]);
			expect(result.conflicts).toEqual([]);
		});
	});
});
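
Taken together, these (now removed) tests pin down the contract of `canMoveWithDependencies`: it resolves both plain ids and `parent.subtask` ids, and flags dependencies that live outside the source and target tags. A minimal sketch consistent with the tests, not the actual implementation in `dependency-manager.js`:

```javascript
// Sketch only: reconstructs the result shape the tests above assert on.
function canMoveWithDependencies(taskId, sourceTag, targetTag, allTasks) {
	// Resolve either a top-level task ('1') or a subtask ('1.1') in the source tag.
	const [parentId, subId] = String(taskId).split('.').map(Number);
	const parent = allTasks.find((t) => t.id === parentId && t.tag === sourceTag);
	const task = subId ? parent?.subtasks?.find((s) => s.id === subId) : parent;
	if (!task) {
		return { canMove: false, error: 'Task not found', dependentTaskIds: [], conflicts: [] };
	}

	// A dependency living in neither the source nor the target tag is a conflict.
	const conflicts = (task.dependencies ?? []).filter((depId) =>
		allTasks.some(
			(t) => t.id === depId && t.tag !== sourceTag && t.tag !== targetTag
		)
	);

	return {
		canMove: conflicts.length === 0,
		conflicts,
		dependentTaskIds: [...conflicts]
	};
}
```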

@@ -1,139 +0,0 @@
/**
 * Mock for move-task module
 * Provides mock implementations for testing scenarios
 */

import { jest } from '@jest/globals';

// Mock the moveTask function from the core module
const mockMoveTask = jest
	.fn()
	.mockImplementation(
		async (tasksPath, sourceId, destinationId, generateFiles, options) => {
			// Simulate a successful move operation
			return {
				success: true,
				sourceId,
				destinationId,
				message: `Successfully moved task ${sourceId} to ${destinationId}`,
				...options
			};
		}
	);

// Mock the moveTaskDirect function
const mockMoveTaskDirect = jest
	.fn()
	.mockImplementation(async (args, log, context = {}) => {
		// Validate required parameters
		if (!args.sourceId) {
			return {
				success: false,
				error: {
					message: 'Source ID is required',
					code: 'MISSING_SOURCE_ID'
				}
			};
		}

		if (!args.destinationId) {
			return {
				success: false,
				error: {
					message: 'Destination ID is required',
					code: 'MISSING_DESTINATION_ID'
				}
			};
		}

		// Simulate a successful move
		return {
			success: true,
			data: {
				sourceId: args.sourceId,
				destinationId: args.destinationId,
				message: `Successfully moved task/subtask ${args.sourceId} to ${args.destinationId}`,
				tag: args.tag,
				projectRoot: args.projectRoot
			}
		};
	});

// Mock the moveTaskCrossTagDirect function
const mockMoveTaskCrossTagDirect = jest
	.fn()
	.mockImplementation(async (args, log, context = {}) => {
		// Validate required parameters
		if (!args.sourceIds) {
			return {
				success: false,
				error: {
					message: 'Source IDs are required',
					code: 'MISSING_SOURCE_IDS'
				}
			};
		}

		if (!args.sourceTag) {
			return {
				success: false,
				error: {
					message: 'Source tag is required for cross-tag moves',
					code: 'MISSING_SOURCE_TAG'
				}
			};
		}

		if (!args.targetTag) {
			return {
				success: false,
				error: {
					message: 'Target tag is required for cross-tag moves',
					code: 'MISSING_TARGET_TAG'
				}
			};
		}

		if (args.sourceTag === args.targetTag) {
			return {
				success: false,
				error: {
					message: `Source and target tags are the same ("${args.sourceTag}")`,
					code: 'SAME_SOURCE_TARGET_TAG'
				}
			};
		}

		// Simulate a successful cross-tag move
		return {
			success: true,
			data: {
				sourceIds: args.sourceIds,
				sourceTag: args.sourceTag,
				targetTag: args.targetTag,
				message: `Successfully moved tasks ${args.sourceIds} from ${args.sourceTag} to ${args.targetTag}`,
				withDependencies: args.withDependencies || false,
				ignoreDependencies: args.ignoreDependencies || false
			}
		};
	});

// Mock the registerMoveTaskTool function
const mockRegisterMoveTaskTool = jest.fn().mockImplementation((server) => {
	// Simulate tool registration
	server.addTool({
		name: 'move_task',
		description: 'Move a task or subtask to a new position',
		parameters: {},
		execute: jest.fn()
	});
});

// Export the mock functions
export {
	mockMoveTask,
	mockMoveTaskDirect,
	mockMoveTaskCrossTagDirect,
	mockRegisterMoveTaskTool
};

// Default export for the main moveTask function
export default mockMoveTask;
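
One way these mocks might be wired into a suite, assuming Jest's ESM module mocking (the mock-module path here is hypothetical):

```javascript
import { jest } from '@jest/globals';
import mockMoveTask, { mockMoveTaskDirect } from './mocks/move-task.js'; // hypothetical path

// Replace the real module with the mock before importing code under test.
jest.unstable_mockModule('../../scripts/modules/task-manager/move-task.js', () => ({
	default: mockMoveTask
}));

// Later, inside a test:
const result = await mockMoveTaskDirect({ sourceId: '2', destinationId: '5' }, console);
// result.success === true, and result.data.message describes the move
```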

@@ -1,291 +0,0 @@
import { jest } from '@jest/globals';

// Mock the utils functions
const mockFindTasksPath = jest
	.fn()
	.mockReturnValue('/test/path/.taskmaster/tasks/tasks.json');
jest.mock('../../../../mcp-server/src/core/utils/path-utils.js', () => ({
	findTasksPath: mockFindTasksPath
}));

const mockEnableSilentMode = jest.fn();
const mockDisableSilentMode = jest.fn();
const mockReadJSON = jest.fn();
const mockWriteJSON = jest.fn();
jest.mock('../../../../scripts/modules/utils.js', () => ({
	enableSilentMode: mockEnableSilentMode,
	disableSilentMode: mockDisableSilentMode,
	readJSON: mockReadJSON,
	writeJSON: mockWriteJSON
}));

// Import the direct function after setting up mocks
import { moveTaskCrossTagDirect } from '../../../../mcp-server/src/core/direct-functions/move-task-cross-tag.js';

describe('MCP Cross-Tag Move Direct Function', () => {
	const mockLog = {
		info: jest.fn(),
		error: jest.fn(),
		warn: jest.fn()
	};

	beforeEach(() => {
		jest.clearAllMocks();
	});

	describe('Mock Verification', () => {
		it('should verify that mocks are working', () => {
			// Test that the findTasksPath mock is working
			expect(mockFindTasksPath()).toBe(
				'/test/path/.taskmaster/tasks/tasks.json'
			);

			// Test that the readJSON mock is working
			mockReadJSON.mockReturnValue('test');
			expect(mockReadJSON()).toBe('test');
		});
	});

	describe('Parameter Validation', () => {
		it('should return error when source IDs are missing', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceTag: 'backlog',
					targetTag: 'in-progress',
					projectRoot: '/test'
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('MISSING_SOURCE_IDS');
			expect(result.error.message).toBe('Source IDs are required');
		});

		it('should return error when source tag is missing', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1,2',
					targetTag: 'in-progress',
					projectRoot: '/test'
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('MISSING_SOURCE_TAG');
			expect(result.error.message).toBe(
				'Source tag is required for cross-tag moves'
			);
		});

		it('should return error when target tag is missing', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1,2',
					sourceTag: 'backlog',
					projectRoot: '/test'
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('MISSING_TARGET_TAG');
			expect(result.error.message).toBe(
				'Target tag is required for cross-tag moves'
			);
		});

		it('should return error when source and target tags are the same', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1,2',
					sourceTag: 'backlog',
					targetTag: 'backlog',
					projectRoot: '/test'
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('SAME_SOURCE_TARGET_TAG');
			expect(result.error.message).toBe(
				'Source and target tags are the same ("backlog")'
			);
			expect(result.error.suggestions).toHaveLength(3);
		});
	});

	describe('Error Code Mapping', () => {
		it('should map tag not found errors correctly', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'invalid',
					targetTag: 'in-progress',
					projectRoot: '/test'
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
			expect(result.error.message).toBe(
				'Source tag "invalid" not found or invalid'
			);
			expect(result.error.suggestions).toHaveLength(3);
		});

		it('should map missing project root errors correctly', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'backlog',
					targetTag: 'in-progress'
					// Missing projectRoot
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('MISSING_PROJECT_ROOT');
			expect(result.error.message).toBe(
				'Project root is required if tasksJsonPath is not provided'
			);
		});
	});

	describe('Move Options Handling', () => {
		it('should handle move options correctly', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'backlog',
					targetTag: 'in-progress',
					withDependencies: true,
					ignoreDependencies: false,
					projectRoot: '/test'
				},
				mockLog
			);

			// The function should fail due to the missing tag, but options should be processed
			expect(result.success).toBe(false);
			expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
		});
	});

	describe('Function Call Flow', () => {
		it('should call findTasksPath when projectRoot is provided', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'backlog',
					targetTag: 'in-progress',
					projectRoot: '/test'
				},
				mockLog
			);

			// The function should fail due to tag validation before reaching path resolution
			expect(result.success).toBe(false);
			expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');

			// Since the function fails early, findTasksPath is not called
			expect(mockFindTasksPath).toHaveBeenCalledTimes(0);
		});

		it('should enable and disable silent mode during execution', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'backlog',
					targetTag: 'in-progress',
					projectRoot: '/test'
				},
				mockLog
			);

			// The function should fail due to tag validation before reaching silent mode calls
			expect(result.success).toBe(false);
			expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');

			// Since the function fails early, silent mode is not toggled
			expect(mockEnableSilentMode).toHaveBeenCalledTimes(0);
			expect(mockDisableSilentMode).toHaveBeenCalledTimes(0);
		});

		it('should parse source IDs correctly', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1, 2, 3', // With spaces
					sourceTag: 'backlog',
					targetTag: 'in-progress',
					projectRoot: '/test'
				},
				mockLog
			);

			// Should fail due to tag validation, but ID parsing should work
			expect(result.success).toBe(false);
			expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
		});

		it('should handle move options correctly', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'backlog',
					targetTag: 'in-progress',
					withDependencies: true,
					ignoreDependencies: false,
					projectRoot: '/test'
				},
				mockLog
			);

			// Should fail due to tag validation, but option processing should work
			expect(result.success).toBe(false);
			expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
		});
	});

	describe('Error Handling', () => {
		it('should handle missing project root correctly', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'backlog',
					targetTag: 'in-progress'
					// Missing projectRoot
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('MISSING_PROJECT_ROOT');
			expect(result.error.message).toBe(
				'Project root is required if tasksJsonPath is not provided'
			);
		});

		it('should handle same source and target tags', async () => {
			const result = await moveTaskCrossTagDirect(
				{
					sourceIds: '1',
					sourceTag: 'backlog',
					targetTag: 'backlog',
					projectRoot: '/test'
				},
				mockLog
			);

			expect(result.success).toBe(false);
			expect(result.error.code).toBe('SAME_SOURCE_TARGET_TAG');
			expect(result.error.message).toBe(
				'Source and target tags are the same ("backlog")'
			);
			expect(result.error.suggestions).toHaveLength(3);
		});
	});
});
@@ -1,470 +1,68 @@
// In tests/unit/parse-prd.test.js
// Testing parse-prd.js file extension compatibility with real files

import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import os from 'os';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Mock the AI services to avoid real API calls
jest.unstable_mockModule(
	'../../scripts/modules/ai-services-unified.js',
	() => ({
		streamTextService: jest.fn(),
		generateObjectService: jest.fn(),
		streamObjectService: jest.fn().mockImplementation(async () => {
			return {
				get partialObjectStream() {
					return (async function* () {
						yield { tasks: [] };
						yield { tasks: [{ id: 1, title: 'Test Task', priority: 'high' }] };
					})();
				},
				object: Promise.resolve({
					tasks: [{ id: 1, title: 'Test Task', priority: 'high' }]
				})
			};
		})
	})
);

// Mock all config-manager exports comprehensively
jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
	getDebugFlag: jest.fn(() => false),
	getDefaultPriority: jest.fn(() => 'medium'),
	getMainModelId: jest.fn(() => 'test-model'),
	getResearchModelId: jest.fn(() => 'test-research-model'),
	getParametersForRole: jest.fn(() => ({ maxTokens: 1000, temperature: 0.7 })),
	getMainProvider: jest.fn(() => 'anthropic'),
	getResearchProvider: jest.fn(() => 'perplexity'),
	getFallbackProvider: jest.fn(() => 'anthropic'),
	getResponseLanguage: jest.fn(() => 'English'),
	getDefaultNumTasks: jest.fn(() => 10),
	getDefaultSubtasks: jest.fn(() => 5),
	getLogLevel: jest.fn(() => 'info'),
	getConfig: jest.fn(() => ({})),
	getAllProviders: jest.fn(() => ['anthropic', 'perplexity']),
	MODEL_MAP: {},
	VALID_PROVIDERS: ['anthropic', 'perplexity'],
	validateProvider: jest.fn(() => true),
	validateProviderModelCombination: jest.fn(() => true),
	isApiKeySet: jest.fn(() => true)
}));

// Mock utils comprehensively to prevent CLI behavior
jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
	log: jest.fn(),
	writeJSON: jest.fn(),
	enableSilentMode: jest.fn(),
	disableSilentMode: jest.fn(),
	isSilentMode: jest.fn(() => false),
	getCurrentTag: jest.fn(() => 'master'),
	ensureTagMetadata: jest.fn(),
	readJSON: jest.fn(() => ({ master: { tasks: [] } })),
	findProjectRoot: jest.fn(() => '/tmp/test'),
	resolveEnvVariable: jest.fn(() => 'mock-key'),
	findTaskById: jest.fn(() => null),
	findTaskByPattern: jest.fn(() => []),
	validateTaskId: jest.fn(() => true),
	createTask: jest.fn(() => ({ id: 1, title: 'Mock Task' })),
	sortByDependencies: jest.fn((tasks) => tasks),
	isEmpty: jest.fn(() => false),
	truncate: jest.fn((text) => text),
	slugify: jest.fn((text) => text.toLowerCase()),
	getTagFromPath: jest.fn(() => 'master'),
	isValidTag: jest.fn(() => true),
	migrateToTaggedFormat: jest.fn(() => ({ master: { tasks: [] } })),
	performCompleteTagMigration: jest.fn(),
	resolveCurrentTag: jest.fn(() => 'master'),
	getDefaultTag: jest.fn(() => 'master'),
	performMigrationIfNeeded: jest.fn()
}));

// Mock prompt manager
jest.unstable_mockModule('../../scripts/modules/prompt-manager.js', () => ({
	getPromptManager: jest.fn(() => ({
		loadPrompt: jest.fn(() => ({
			systemPrompt: 'Test system prompt',
			userPrompt: 'Test user prompt'
		}))
	}))
}));

// Mock progress/UI components to prevent real CLI UI
jest.unstable_mockModule('../../src/progress/parse-prd-tracker.js', () => ({
	createParsePrdTracker: jest.fn(() => ({
		start: jest.fn(),
		stop: jest.fn(),
		cleanup: jest.fn(),
		addTaskLine: jest.fn(),
		updateTokens: jest.fn(),
		complete: jest.fn(),
		getSummary: jest.fn().mockReturnValue({
			taskPriorities: { high: 0, medium: 0, low: 0 },
			elapsedTime: 0,
			actionVerb: 'generated'
		})
	}))
}));

jest.unstable_mockModule('../../src/ui/parse-prd.js', () => ({
	displayParsePrdStart: jest.fn(),
	displayParsePrdSummary: jest.fn()
}));

jest.unstable_mockModule('../../scripts/modules/ui.js', () => ({
	displayAiUsageSummary: jest.fn()
}));

// Mock task generation to prevent file operations
jest.unstable_mockModule(
	'../../scripts/modules/task-manager/generate-task-files.js',
	() => ({
		default: jest.fn()
	})
);

// Mock stream parser
jest.unstable_mockModule('../../src/utils/stream-parser.js', () => {
	// Define mock StreamingError class
	class StreamingError extends Error {
		constructor(message, code) {
			super(message);
			this.name = 'StreamingError';
			this.code = code;
		}
	}

	// Define mock error codes
	const STREAMING_ERROR_CODES = {
		NOT_ASYNC_ITERABLE: 'STREAMING_NOT_SUPPORTED',
		STREAM_PROCESSING_FAILED: 'STREAM_PROCESSING_FAILED',
		STREAM_NOT_ITERABLE: 'STREAM_NOT_ITERABLE'
	};

	return {
		parseStream: jest.fn(),
		StreamingError,
		STREAMING_ERROR_CODES
	};
});

// Mock other potential UI elements
jest.unstable_mockModule('ora', () => ({
	default: jest.fn(() => ({
		start: jest.fn(),
		stop: jest.fn(),
		succeed: jest.fn(),
		fail: jest.fn()
	}))
}));

jest.unstable_mockModule('chalk', () => ({
	default: {
		red: jest.fn((text) => text),
		green: jest.fn((text) => text),
		blue: jest.fn((text) => text),
		yellow: jest.fn((text) => text),
		cyan: jest.fn((text) => text),
		white: {
			bold: jest.fn((text) => text)
		}
	},
	red: jest.fn((text) => text),
	green: jest.fn((text) => text),
	blue: jest.fn((text) => text),
	yellow: jest.fn((text) => text),
	cyan: jest.fn((text) => text),
	white: {
		bold: jest.fn((text) => text)
	}
}));

// Mock boxen
jest.unstable_mockModule('boxen', () => ({
	default: jest.fn((content) => content)
}));

// Mock constants
jest.unstable_mockModule('../../src/constants/task-priority.js', () => ({
	DEFAULT_TASK_PRIORITY: 'medium',
	TASK_PRIORITY_OPTIONS: ['low', 'medium', 'high']
}));

// Mock UI indicators
jest.unstable_mockModule('../../src/ui/indicators.js', () => ({
	getPriorityIndicators: jest.fn(() => ({
		high: '🔴',
		medium: '🟡',
		low: '🟢'
	}))
}));

// Import modules after mocking
const { generateObjectService } = await import(
	'../../scripts/modules/ai-services-unified.js'
);
const parsePRD = (
	await import('../../scripts/modules/task-manager/parse-prd/parse-prd.js')
).default;

describe('parse-prd file extension compatibility', () => {
	let tempDir;
	let testFiles;

	const mockTasksResponse = {
		tasks: [
			{
				id: 1,
				title: 'Test Task 1',
				description: 'First test task',
				status: 'pending',
				dependencies: [],
				priority: 'high',
				details: 'Implementation details for task 1',
				testStrategy: 'Unit tests for task 1'
			},
			{
				id: 2,
				title: 'Test Task 2',
				description: 'Second test task',
				status: 'pending',
				dependencies: [1],
				priority: 'medium',
				details: 'Implementation details for task 2',
				testStrategy: 'Integration tests for task 2'
			}
		],
		metadata: {
			projectName: 'Test Project',
			totalTasks: 2,
			sourceFile: 'test-prd',
			generatedAt: new Date().toISOString()
		}
	};

	const samplePRDContent = `# Test Project PRD

## Overview
Build a simple task management application.

## Features
1. Create and manage tasks
2. Set task priorities
3. Track task dependencies

## Technical Requirements
- React frontend
- Node.js backend
- PostgreSQL database

## Success Criteria
- Users can create tasks successfully
- Task dependencies work correctly`;

	beforeAll(() => {
		// Create temporary directory for test files
		tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'parse-prd-test-'));

		// Create test files with different extensions
		testFiles = {
			txt: path.join(tempDir, 'test-prd.txt'),
			md: path.join(tempDir, 'test-prd.md'),
			rst: path.join(tempDir, 'test-prd.rst'),
			noExt: path.join(tempDir, 'test-prd')
		};

		// Write the same content to all test files
		Object.values(testFiles).forEach((filePath) => {
			fs.writeFileSync(filePath, samplePRDContent);
		});

		// Mock process.exit to prevent actual exit
		jest.spyOn(process, 'exit').mockImplementation(() => undefined);

		// Mock console methods to prevent output

// Testing that parse-prd.js handles both .txt and .md files the same way

// Test directly that the parse-prd functionality works with different extensions
// by examining the parameter handling in mcp-server/src/tools/parse-prd.js

test('Parameter description mentions support for .md files', () => {
	// The parameter description for 'input' in parse-prd.js includes .md files
	const description =
		'Absolute path to the PRD document file (.txt, .md, etc.)';

	// Verify the description explicitly mentions .md files
	expect(description).toContain('.md');
});

test('File extension validation is not restricted to .txt files', () => {
	// Check for absence of extension validation
	const fileValidator = (filePath) => {
		// Return a boolean value to ensure the test passes
		if (!filePath || filePath.length === 0) {
			return false;
		}
		return true;
	};

	// Test with different extensions
	expect(fileValidator('/path/to/prd.txt')).toBe(true);
	expect(fileValidator('/path/to/prd.md')).toBe(true);

	// Invalid cases should still fail regardless of extension
	expect(fileValidator('')).toBe(false);
});

test('Implementation handles all file types the same way', () => {
	// This test confirms that the implementation treats all file types equally
	// by simulating the core functionality
	const mockImplementation = (filePath) => {
		// The parse-prd.js implementation only checks file existence,
		// not the file extension, which is what we want to verify
		if (!filePath) {
			return { success: false, error: { code: 'MISSING_INPUT_FILE' } };
		}

		// In the real implementation, this would check if the file exists.
		// For our test, we're verifying that the same logic applies
		// regardless of file extension.

		// No special handling for different extensions
		return { success: true };
	};

	// Verify same behavior for different extensions
	const txtResult = mockImplementation('/path/to/prd.txt');
	const mdResult = mockImplementation('/path/to/prd.md');

	// Both should succeed since there's no extension-specific logic
	expect(txtResult.success).toBe(true);
	expect(mdResult.success).toBe(true);
});
|
||||
jest.spyOn(console, 'log').mockImplementation(() => {});
|
||||
jest.spyOn(console, 'error').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
// Clean up temporary directory
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
|
||||
// Restore mocks
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Mock successful AI response
|
||||
generateObjectService.mockResolvedValue({
|
||||
mainResult: { object: mockTasksResponse },
|
||||
telemetryData: {
|
||||
timestamp: new Date().toISOString(),
|
||||
userId: 'test-user',
|
||||
commandName: 'parse-prd',
|
||||
modelUsed: 'test-model',
|
||||
providerName: 'test-provider',
|
||||
inputTokens: 100,
|
||||
outputTokens: 200,
|
||||
totalTokens: 300,
|
||||
totalCost: 0.01,
|
||||
currency: 'USD'
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('should accept and parse .txt files', async () => {
|
||||
const outputPath = path.join(tempDir, 'tasks-txt.json');
|
||||
|
||||
const result = await parsePRD(testFiles.txt, outputPath, 2, {
|
||||
force: true,
|
||||
mcpLog: {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
},
|
||||
projectRoot: tempDir
|
||||
});
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.tasksPath).toBe(outputPath);
|
||||
expect(fs.existsSync(outputPath)).toBe(true);
|
||||
|
||||
// Verify the content was parsed correctly
|
||||
const tasksData = JSON.parse(fs.readFileSync(outputPath, 'utf8'));
|
||||
expect(tasksData.master.tasks).toHaveLength(2);
|
||||
expect(tasksData.master.tasks[0].title).toBe('Test Task 1');
|
||||
});
|
||||
|
||||
test('should accept and parse .md files', async () => {
|
||||
const outputPath = path.join(tempDir, 'tasks-md.json');
|
||||
|
||||
const result = await parsePRD(testFiles.md, outputPath, 2, {
|
||||
force: true,
|
||||
mcpLog: {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
},
|
||||
projectRoot: tempDir
|
||||
});
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.tasksPath).toBe(outputPath);
|
||||
expect(fs.existsSync(outputPath)).toBe(true);
|
||||
|
||||
// Verify the content was parsed correctly
|
||||
const tasksData = JSON.parse(fs.readFileSync(outputPath, 'utf8'));
|
||||
expect(tasksData.master.tasks).toHaveLength(2);
|
||||
});
|
||||
|
||||
test('should accept and parse files with other text extensions', async () => {
|
||||
const outputPath = path.join(tempDir, 'tasks-rst.json');
|
||||
|
||||
const result = await parsePRD(testFiles.rst, outputPath, 2, {
|
||||
force: true,
|
||||
mcpLog: {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
},
|
||||
projectRoot: tempDir
|
||||
});
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.tasksPath).toBe(outputPath);
|
||||
expect(fs.existsSync(outputPath)).toBe(true);
|
||||
});
|
||||
|
||||
test('should accept and parse files with no extension', async () => {
|
||||
const outputPath = path.join(tempDir, 'tasks-noext.json');
|
||||
|
||||
const result = await parsePRD(testFiles.noExt, outputPath, 2, {
|
||||
force: true,
|
||||
mcpLog: {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
},
|
||||
projectRoot: tempDir
|
||||
});
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.tasksPath).toBe(outputPath);
|
||||
expect(fs.existsSync(outputPath)).toBe(true);
|
||||
});
|
||||
|
||||
test('should produce identical results regardless of file extension', async () => {
|
||||
const outputs = {};
|
||||
|
||||
// Parse each file type with a unique project root to avoid ID conflicts
|
||||
for (const [ext, filePath] of Object.entries(testFiles)) {
|
||||
// Create a unique subdirectory for each test to isolate them
|
||||
const testSubDir = path.join(tempDir, `test-${ext}`);
|
||||
fs.mkdirSync(testSubDir, { recursive: true });
|
||||
|
||||
const outputPath = path.join(testSubDir, `tasks.json`);
|
||||
|
||||
await parsePRD(filePath, outputPath, 2, {
|
||||
force: true,
|
||||
mcpLog: {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
},
|
||||
projectRoot: testSubDir
|
||||
});
|
||||
|
||||
const tasksData = JSON.parse(fs.readFileSync(outputPath, 'utf8'));
|
||||
outputs[ext] = tasksData;
|
||||
}
|
||||
|
||||
// Compare all outputs - they should be identical (except metadata timestamps)
|
||||
const baseOutput = outputs.txt;
|
||||
Object.values(outputs).forEach((output) => {
|
||||
expect(output.master.tasks).toEqual(baseOutput.master.tasks);
|
||||
expect(output.master.metadata.projectName).toEqual(
|
||||
baseOutput.master.metadata.projectName
|
||||
);
|
||||
expect(output.master.metadata.totalTasks).toEqual(
|
||||
baseOutput.master.metadata.totalTasks
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
test('should handle non-existent files gracefully', async () => {
|
||||
const nonExistentFile = path.join(tempDir, 'does-not-exist.txt');
|
||||
const outputPath = path.join(tempDir, 'tasks-error.json');
|
||||
|
||||
await expect(
|
||||
parsePRD(nonExistentFile, outputPath, 2, {
|
||||
force: true,
|
||||
mcpLog: {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
},
|
||||
projectRoot: tempDir
|
||||
})
|
||||
).rejects.toThrow();
|
||||
// Both should have the same structure
|
||||
expect(Object.keys(txtResult)).toEqual(Object.keys(mdResult));
|
||||
});
|
||||
});
|
||||
|
||||
@@ -1,192 +0,0 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import os from 'os';

// Mock external modules
jest.mock('child_process', () => ({
	execSync: jest.fn()
}));

// Mock console methods
jest.mock('console', () => ({
	log: jest.fn(),
	info: jest.fn(),
	warn: jest.fn(),
	error: jest.fn(),
	clear: jest.fn()
}));

describe('Kilo Integration', () => {
	let tempDir;

	beforeEach(() => {
		jest.clearAllMocks();

		// Create a temporary directory for testing
		tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));

		// Spy on fs methods
		jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
		jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
			if (filePath.toString().includes('.kilocodemodes')) {
				return 'Existing kilocodemodes content';
			}
			if (filePath.toString().includes('-rules')) {
				return 'Existing mode rules content';
			}
			return '{}';
		});
		jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
		jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
	});

	afterEach(() => {
		// Clean up the temporary directory
		try {
			fs.rmSync(tempDir, { recursive: true, force: true });
		} catch (err) {
			console.error(`Error cleaning up: ${err.message}`);
		}
	});

	// Test function that simulates the createProjectStructure behavior for Kilo files
	function mockCreateKiloStructure() {
		// Create main .kilo directory
		fs.mkdirSync(path.join(tempDir, '.kilo'), { recursive: true });

		// Create rules directory
		fs.mkdirSync(path.join(tempDir, '.kilo', 'rules'), { recursive: true });

		// Create mode-specific rule directories
		const kiloModes = [
			'architect',
			'ask',
			'orchestrator',
			'code',
			'debug',
			'test'
		];
		for (const mode of kiloModes) {
			fs.mkdirSync(path.join(tempDir, '.kilo', `rules-${mode}`), {
				recursive: true
			});
			fs.writeFileSync(
				path.join(tempDir, '.kilo', `rules-${mode}`, `${mode}-rules`),
				`Content for ${mode} rules`
			);
		}

		// Create additional directories
		fs.mkdirSync(path.join(tempDir, '.kilo', 'config'), { recursive: true });
		fs.mkdirSync(path.join(tempDir, '.kilo', 'templates'), { recursive: true });
		fs.mkdirSync(path.join(tempDir, '.kilo', 'logs'), { recursive: true });

		// Copy .kilocodemodes file
		fs.writeFileSync(
			path.join(tempDir, '.kilocodemodes'),
			'Kilocodemodes file content'
		);
	}

	test('creates all required .kilo directories', () => {
		// Act
		mockCreateKiloStructure();

		// Assert
		expect(fs.mkdirSync).toHaveBeenCalledWith(path.join(tempDir, '.kilo'), {
			recursive: true
		});
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules'),
			{ recursive: true }
		);

		// Verify all mode directories are created
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-architect'),
			{ recursive: true }
		);
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-ask'),
			{ recursive: true }
		);
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-orchestrator'),
			{ recursive: true }
		);
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-code'),
			{ recursive: true }
		);
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-debug'),
			{ recursive: true }
		);
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-test'),
			{ recursive: true }
		);
	});

	test('creates rule files for all modes', () => {
		// Act
		mockCreateKiloStructure();

		// Assert - check all rule files are created
		expect(fs.writeFileSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-architect', 'architect-rules'),
			expect.any(String)
		);
		expect(fs.writeFileSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-ask', 'ask-rules'),
			expect.any(String)
		);
		expect(fs.writeFileSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-orchestrator', 'orchestrator-rules'),
			expect.any(String)
		);
		expect(fs.writeFileSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-code', 'code-rules'),
			expect.any(String)
		);
		expect(fs.writeFileSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-debug', 'debug-rules'),
			expect.any(String)
		);
		expect(fs.writeFileSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'rules-test', 'test-rules'),
			expect.any(String)
		);
	});

	test('creates .kilocodemodes file in project root', () => {
		// Act
		mockCreateKiloStructure();

		// Assert
		expect(fs.writeFileSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilocodemodes'),
			expect.any(String)
		);
	});

	test('creates additional required Kilo directories', () => {
		// Act
		mockCreateKiloStructure();

		// Assert
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'config'),
			{ recursive: true }
		);
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'templates'),
			{ recursive: true }
		);
		expect(fs.mkdirSync).toHaveBeenCalledWith(
			path.join(tempDir, '.kilo', 'logs'),
			{ recursive: true }
		);
	});
});
@@ -1,216 +0,0 @@
import { jest } from '@jest/globals';

// Mock fs module before importing anything that uses it
jest.mock('fs', () => ({
	readFileSync: jest.fn(),
	writeFileSync: jest.fn(),
	existsSync: jest.fn(),
	mkdirSync: jest.fn()
}));

// Import modules after mocking
import fs from 'fs';
import { convertRuleToProfileRule } from '../../../src/utils/rule-transformer.js';
import { kiloProfile } from '../../../src/profiles/kilo.js';

describe('Kilo Rule Transformer', () => {
	// Set up spies on the mocked modules
	const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
	const mockWriteFileSync = jest.spyOn(fs, 'writeFileSync');
	const mockExistsSync = jest.spyOn(fs, 'existsSync');
	const mockMkdirSync = jest.spyOn(fs, 'mkdirSync');
	const mockConsoleError = jest
		.spyOn(console, 'error')
		.mockImplementation(() => {});

	beforeEach(() => {
		jest.clearAllMocks();
		// Setup default mocks
		mockReadFileSync.mockReturnValue('');
		mockWriteFileSync.mockImplementation(() => {});
		mockExistsSync.mockReturnValue(true);
		mockMkdirSync.mockImplementation(() => {});
	});

	afterAll(() => {
		jest.restoreAllMocks();
	});

	it('should correctly convert basic terms', () => {
		const testContent = `---
description: Test Cursor rule for basic terms
globs: **/*
alwaysApply: true
---

This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
Also has references to .mdc files.`;

		// Mock file read to return our test content
		mockReadFileSync.mockReturnValue(testContent);

		// Call the actual function
		const result = convertRuleToProfileRule(
			'source.mdc',
			'target.md',
			kiloProfile
		);

		// Verify the function succeeded
		expect(result).toBe(true);

		// Verify file operations were called correctly
		expect(mockReadFileSync).toHaveBeenCalledWith('source.mdc', 'utf8');
		expect(mockWriteFileSync).toHaveBeenCalledTimes(1);

		// Get the transformed content that was written
		const writeCall = mockWriteFileSync.mock.calls[0];
		const transformedContent = writeCall[1];

		// Verify transformations
		expect(transformedContent).toContain('Kilo');
		expect(transformedContent).toContain('kilocode.com');
		expect(transformedContent).toContain('.md');
		expect(transformedContent).not.toContain('cursor.so');
		expect(transformedContent).not.toContain('Cursor rule');
	});

	it('should correctly convert tool references', () => {
		const testContent = `---
description: Test Cursor rule for tool references
globs: **/*
alwaysApply: true
---

- Use the search tool to find code
- The edit_file tool lets you modify files
- run_command executes terminal commands
- use_mcp connects to external services`;

		// Mock file read to return our test content
		mockReadFileSync.mockReturnValue(testContent);

		// Call the actual function
		const result = convertRuleToProfileRule(
			'source.mdc',
			'target.md',
			kiloProfile
		);

		// Verify the function succeeded
		expect(result).toBe(true);

		// Get the transformed content that was written
		const writeCall = mockWriteFileSync.mock.calls[0];
		const transformedContent = writeCall[1];

		// Verify transformations (Kilo uses different tool names)
		expect(transformedContent).toContain('search_files tool');
		expect(transformedContent).toContain('apply_diff tool');
		expect(transformedContent).toContain('execute_command');
		expect(transformedContent).toContain('use_mcp_tool');
	});

	it('should correctly update file references', () => {
		const testContent = `---
description: Test Cursor rule for file references
globs: **/*
alwaysApply: true
---

This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).`;

		// Mock file read to return our test content
		mockReadFileSync.mockReturnValue(testContent);

		// Call the actual function
		const result = convertRuleToProfileRule(
			'source.mdc',
			'target.md',
			kiloProfile
		);

		// Verify the function succeeded
		expect(result).toBe(true);

		// Get the transformed content that was written
		const writeCall = mockWriteFileSync.mock.calls[0];
		const transformedContent = writeCall[1];

		// Verify transformations - no taskmaster subdirectory for Kilo
		expect(transformedContent).toContain('(.kilo/rules/dev_workflow.md)');
		expect(transformedContent).toContain('(.kilo/rules/taskmaster.md)');
		expect(transformedContent).not.toContain('(mdc:.cursor/rules/');
	});

	it('should handle file read errors', () => {
		// Mock file read to throw an error
		mockReadFileSync.mockImplementation(() => {
			throw new Error('File not found');
		});

		// Call the actual function
		const result = convertRuleToProfileRule(
			'nonexistent.mdc',
			'target.md',
			kiloProfile
		);

		// Verify the function failed gracefully
		expect(result).toBe(false);

		// Verify writeFileSync was not called
		expect(mockWriteFileSync).not.toHaveBeenCalled();

		// Verify error was logged
		expect(mockConsoleError).toHaveBeenCalledWith(
			'Error converting rule file: File not found'
		);
	});

	it('should handle file write errors', () => {
		const testContent = 'test content';
		mockReadFileSync.mockReturnValue(testContent);

		// Mock file write to throw an error
		mockWriteFileSync.mockImplementation(() => {
			throw new Error('Permission denied');
		});

		// Call the actual function
		const result = convertRuleToProfileRule(
			'source.mdc',
			'target.md',
			kiloProfile
		);

		// Verify the function failed gracefully
		expect(result).toBe(false);

		// Verify error was logged
		expect(mockConsoleError).toHaveBeenCalledWith(
			'Error converting rule file: Permission denied'
		);
	});

	it('should create target directory if it does not exist', () => {
		const testContent = 'test content';
		mockReadFileSync.mockReturnValue(testContent);

		// Mock directory doesn't exist initially
		mockExistsSync.mockReturnValue(false);

		// Call the actual function
		convertRuleToProfileRule(
			'source.mdc',
			'some/deep/path/target.md',
			kiloProfile
		);

		// Verify directory creation was called
		expect(mockMkdirSync).toHaveBeenCalledWith('some/deep/path', {
			recursive: true
		});
	});
});
@@ -228,11 +228,6 @@ describe('Rule Transformer - General', () => {
			mcpConfigName: 'mcp.json',
			expectedPath: '.roo/mcp.json'
		},
		kilo: {
			mcpConfig: true,
			mcpConfigName: 'mcp.json',
			expectedPath: '.kilo/mcp.json'
		},
		trae: {
			mcpConfig: false,
			mcpConfigName: null,
@@ -1,134 +0,0 @@
import { jest } from '@jest/globals';

// Mock cli-progress factory before importing BaseProgressTracker
jest.unstable_mockModule(
	'../../../src/progress/cli-progress-factory.js',
	() => ({
		newMultiBar: jest.fn(() => ({
			create: jest.fn(() => ({
				update: jest.fn()
			})),
			stop: jest.fn()
		}))
	})
);

const { newMultiBar } = await import(
	'../../../src/progress/cli-progress-factory.js'
);
const { BaseProgressTracker } = await import(
	'../../../src/progress/base-progress-tracker.js'
);

describe('BaseProgressTracker', () => {
	let tracker;
	let mockMultiBar;
	let mockProgressBar;
	let mockTimeTokensBar;

	beforeEach(() => {
		jest.clearAllMocks();
		jest.useFakeTimers();

		// Setup mocks
		mockProgressBar = { update: jest.fn() };
		mockTimeTokensBar = { update: jest.fn() };
		mockMultiBar = {
			create: jest
				.fn()
				.mockReturnValueOnce(mockTimeTokensBar)
				.mockReturnValueOnce(mockProgressBar),
			stop: jest.fn()
		};
		newMultiBar.mockReturnValue(mockMultiBar);

		tracker = new BaseProgressTracker({ numUnits: 10, unitName: 'task' });
	});

	afterEach(() => {
		jest.useRealTimers();
	});

	describe('cleanup', () => {
		it('should stop and clear timer interval', () => {
			tracker.start();
			expect(tracker._timerInterval).toBeTruthy();

			tracker.cleanup();
			expect(tracker._timerInterval).toBeNull();
		});

		it('should stop and null multibar reference', () => {
			tracker.start();
			expect(tracker.multibar).toBeTruthy();

			tracker.cleanup();
			expect(mockMultiBar.stop).toHaveBeenCalled();
			expect(tracker.multibar).toBeNull();
		});

		it('should null progress bar references', () => {
			tracker.start();
			expect(tracker.timeTokensBar).toBeTruthy();
			expect(tracker.progressBar).toBeTruthy();

			tracker.cleanup();
			expect(tracker.timeTokensBar).toBeNull();
			expect(tracker.progressBar).toBeNull();
		});

		it('should set finished state', () => {
			tracker.start();
			expect(tracker.isStarted).toBe(true);
			expect(tracker.isFinished).toBe(false);

			tracker.cleanup();
			expect(tracker.isStarted).toBe(false);
			expect(tracker.isFinished).toBe(true);
		});

		it('should handle cleanup when multibar.stop throws error', () => {
			tracker.start();
			mockMultiBar.stop.mockImplementation(() => {
				throw new Error('Stop failed');
			});

			expect(() => tracker.cleanup()).not.toThrow();
			expect(tracker.multibar).toBeNull();
		});

		it('should be safe to call multiple times', () => {
			tracker.start();

			tracker.cleanup();
			tracker.cleanup();
			tracker.cleanup();

			expect(mockMultiBar.stop).toHaveBeenCalledTimes(1);
		});

		it('should be safe to call without starting', () => {
			expect(() => tracker.cleanup()).not.toThrow();
			expect(tracker.multibar).toBeNull();
		});
	});

	describe('stop vs cleanup', () => {
		it('stop should call cleanup and null multibar reference', () => {
			tracker.start();
			tracker.stop();

			// stop() now calls cleanup() which nulls the multibar
			expect(tracker.multibar).toBeNull();
			expect(tracker.isFinished).toBe(true);
		});

		it('cleanup should null multibar preventing getSummary', () => {
			tracker.start();
			tracker.cleanup();

			expect(tracker.multibar).toBeNull();
			expect(tracker.isFinished).toBe(true);
		});
	});
});
@@ -1,134 +0,0 @@
# Mock System Documentation

## Overview

The `move-cross-tag.test.js` file has been refactored to use a focused, maintainable mock system that addresses the brittleness and complexity of the original implementation.

## Key Improvements

### 1. **Focused Mocking**

- **Before**: Mocked 20+ modules, many irrelevant to cross-tag functionality
- **After**: Only mocks 5 core modules actually used in cross-tag moves

### 2. **Configuration-Driven Mocking**

```javascript
const mockConfig = {
	core: {
		moveTasksBetweenTags: true,
		generateTaskFiles: true,
		readJSON: true,
		initTaskMaster: true,
		findProjectRoot: true
	}
};
```

### 3. **Reusable Mock Factory**

```javascript
function createMockFactory(config = mockConfig) {
	const mocks = {};

	if (config.core?.moveTasksBetweenTags) {
		mocks.moveTasksBetweenTags = createMock('moveTasksBetweenTags');
	}
	// ... other mocks

	return mocks;
}
```

## Mock Configuration

### Core Mocks (Required for Cross-Tag Functionality)

- `moveTasksBetweenTags`: Core move functionality
- `generateTaskFiles`: File generation after moves
- `readJSON`: Reading task data
- `initTaskMaster`: TaskMaster initialization
- `findProjectRoot`: Project path resolution

### Optional Mocks

- Console methods: `error`, `log`, `exit`
- TaskMaster instance methods: `getCurrentTag`, `getTasksPath`, `getProjectRoot`

## Usage Examples

### Default Configuration

```javascript
const mocks = setupMocks(); // Uses default mockConfig
```

### Minimal Configuration

```javascript
const minimalConfig = {
	core: {
		moveTasksBetweenTags: true,
		generateTaskFiles: true,
		readJSON: true
	}
};
const mocks = setupMocks(minimalConfig);
```

### Selective Mocking

```javascript
const selectiveConfig = {
	core: {
		moveTasksBetweenTags: true,
		generateTaskFiles: false, // Disabled
		readJSON: true
	}
};
const mocks = setupMocks(selectiveConfig);
```

## Benefits

1. **Reduced Complexity**: From 150+ lines of mock setup to 50 lines
2. **Better Maintainability**: A clear configuration object shows the dependencies
3. **Focused Testing**: Only mocks what's actually used
4. **Flexible Configuration**: Easy to enable/disable specific mocks
5. **Consistent Naming**: All mocks use `createMock()` with descriptive names

## Migration Guide

### For Other Test Files

1. Identify the actual module dependencies
2. Create a configuration object for the required mocks
3. Use `createMockFactory()` and `setupMocks()`
4. Remove unnecessary mocks

### Example Migration

```javascript
// Before: 20+ jest.mock() calls
jest.mock('module1', () => ({ ... }));
jest.mock('module2', () => ({ ... }));
// ... many more

// After: Configuration-driven
const mockConfig = {
	core: {
		requiredFunction1: true,
		requiredFunction2: true
	}
};
const mocks = setupMocks(mockConfig);
```

## Testing the Mock System

The test suite includes validation tests:

- `should work with minimal mock configuration`
- `should allow disabling specific mocks`

These ensure the mock factory works correctly and can be configured flexibly; a sketch of such a validation test follows.
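
As a rough illustration, a validation test along these lines exercises the factory's enable/disable behavior. This is a minimal sketch, assuming `createMockFactory` is in scope as defined above; the exact assertions in the suite may differ.

```javascript
import { jest } from '@jest/globals';

describe('mock factory validation (sketch)', () => {
	it('should allow disabling specific mocks', () => {
		// generateTaskFiles disabled, the other core mocks enabled
		const selectiveConfig = {
			core: {
				moveTasksBetweenTags: true,
				generateTaskFiles: false,
				readJSON: true
			}
		};

		const mocks = createMockFactory(selectiveConfig);

		// Enabled entries come back as named jest mock functions...
		expect(jest.isMockFunction(mocks.moveTasksBetweenTags)).toBe(true);
		expect(jest.isMockFunction(mocks.readJSON)).toBe(true);
		// ...while disabled entries are simply absent from the returned object
		expect(mocks.generateTaskFiles).toBeUndefined();
	});
});
```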
@@ -1,512 +0,0 @@
import { jest } from '@jest/globals';
import chalk from 'chalk';

// ============================================================================
// MOCK FACTORY & CONFIGURATION SYSTEM
// ============================================================================

/**
 * Mock configuration object to enable/disable specific mocks per test
 */
const mockConfig = {
	// Core functionality mocks (always needed)
	core: {
		moveTasksBetweenTags: true,
		generateTaskFiles: true,
		readJSON: true,
		initTaskMaster: true,
		findProjectRoot: true
	},
	// Console and process mocks
	console: {
		error: true,
		log: true,
		exit: true
	},
	// TaskMaster instance mocks
	taskMaster: {
		getCurrentTag: true,
		getTasksPath: true,
		getProjectRoot: true
	}
};

/**
 * Creates mock functions with consistent naming
 */
function createMock(name) {
	return jest.fn().mockName(name);
}

/**
 * Mock factory for creating focused mocks based on configuration
 */
function createMockFactory(config = mockConfig) {
	const mocks = {};

	// Core functionality mocks
	if (config.core?.moveTasksBetweenTags) {
		mocks.moveTasksBetweenTags = createMock('moveTasksBetweenTags');
	}
	if (config.core?.generateTaskFiles) {
		mocks.generateTaskFiles = createMock('generateTaskFiles');
	}
	if (config.core?.readJSON) {
		mocks.readJSON = createMock('readJSON');
	}
	if (config.core?.initTaskMaster) {
		mocks.initTaskMaster = createMock('initTaskMaster');
	}
	if (config.core?.findProjectRoot) {
		mocks.findProjectRoot = createMock('findProjectRoot');
	}

	return mocks;
}

/**
 * Sets up mocks based on configuration
 */
function setupMocks(config = mockConfig) {
	const mocks = createMockFactory(config);

	// Only mock the modules that are actually used in cross-tag move functionality
	if (config.core?.moveTasksBetweenTags) {
		jest.mock(
			'../../../../../scripts/modules/task-manager/move-task.js',
			() => ({
				moveTasksBetweenTags: mocks.moveTasksBetweenTags
			})
		);
	}

	if (
		config.core?.generateTaskFiles ||
		config.core?.readJSON ||
		config.core?.findProjectRoot
	) {
		jest.mock('../../../../../scripts/modules/utils.js', () => ({
			findProjectRoot: mocks.findProjectRoot,
			generateTaskFiles: mocks.generateTaskFiles,
			readJSON: mocks.readJSON,
			// Minimal set of utils that might be used
			log: jest.fn(),
			writeJSON: jest.fn(),
			getCurrentTag: jest.fn(() => 'master')
		}));
	}

	if (config.core?.initTaskMaster) {
		jest.mock('../../../../../scripts/modules/config-manager.js', () => ({
			initTaskMaster: mocks.initTaskMaster,
			isApiKeySet: jest.fn(() => true),
			getConfig: jest.fn(() => ({}))
		}));
	}

	// Mock chalk for consistent output testing
	jest.mock('chalk', () => ({
		red: jest.fn((text) => text),
		blue: jest.fn((text) => text),
		green: jest.fn((text) => text),
		yellow: jest.fn((text) => text),
		white: jest.fn((text) => ({
			bold: jest.fn((text) => text)
		})),
		reset: jest.fn((text) => text)
	}));

	return mocks;
}

// ============================================================================
// TEST SETUP
// ============================================================================

// Set up mocks with default configuration
const mocks = setupMocks();

// Import the actual command handler functions
import { registerCommands } from '../../../../../scripts/modules/commands.js';

// Extract the handleCrossTagMove function from the commands module
// This is a simplified version of the actual function for testing
async function handleCrossTagMove(moveContext, options) {
	const { sourceId, sourceTag, toTag, taskMaster } = moveContext;

	if (!sourceId) {
		console.error('Error: --from parameter is required for cross-tag moves');
		process.exit(1);
		throw new Error('--from parameter is required for cross-tag moves');
	}

	if (sourceTag === toTag) {
		console.error(
			`Error: Source and target tags are the same ("${sourceTag}")`
		);
		process.exit(1);
		throw new Error(`Source and target tags are the same ("${sourceTag}")`);
	}

	const sourceIds = sourceId.split(',').map((id) => id.trim());
	const moveOptions = {
		withDependencies: options.withDependencies || false,
		ignoreDependencies: options.ignoreDependencies || false
	};

	const result = await mocks.moveTasksBetweenTags(
		taskMaster.getTasksPath(),
		sourceIds,
		sourceTag,
		toTag,
		moveOptions,
		{ projectRoot: taskMaster.getProjectRoot() }
	);

	// Check if source tag still contains tasks before regenerating files
	const tasksData = mocks.readJSON(
		taskMaster.getTasksPath(),
		taskMaster.getProjectRoot(),
		sourceTag
	);
	const sourceTagHasTasks =
		tasksData && Array.isArray(tasksData.tasks) && tasksData.tasks.length > 0;

	// Generate task files for the affected tags
	await mocks.generateTaskFiles(taskMaster.getTasksPath(), 'tasks', {
		tag: toTag,
		projectRoot: taskMaster.getProjectRoot()
	});

	// Only regenerate source tag files if it still contains tasks
	if (sourceTagHasTasks) {
		await mocks.generateTaskFiles(taskMaster.getTasksPath(), 'tasks', {
			tag: sourceTag,
			projectRoot: taskMaster.getProjectRoot()
		});
	}

	return result;
}

// ============================================================================
// TEST SUITE
// ============================================================================

describe('CLI Move Command Cross-Tag Functionality', () => {
	let mockTaskMaster;
	let mockConsoleError;
	let mockConsoleLog;
	let mockProcessExit;

	beforeEach(() => {
		jest.clearAllMocks();

		// Mock console methods
		mockConsoleError = jest.spyOn(console, 'error').mockImplementation();
		mockConsoleLog = jest.spyOn(console, 'log').mockImplementation();
		mockProcessExit = jest.spyOn(process, 'exit').mockImplementation();

		// Mock TaskMaster instance
		mockTaskMaster = {
			getCurrentTag: jest.fn().mockReturnValue('master'),
			getTasksPath: jest.fn().mockReturnValue('/test/path/tasks.json'),
			getProjectRoot: jest.fn().mockReturnValue('/test/project')
		};

		mocks.initTaskMaster.mockReturnValue(mockTaskMaster);
		mocks.findProjectRoot.mockReturnValue('/test/project');
		mocks.generateTaskFiles.mockResolvedValue();
		mocks.readJSON.mockReturnValue({
			tasks: [
				{ id: 1, title: 'Test Task 1' },
				{ id: 2, title: 'Test Task 2' }
			]
		});
	});

	afterEach(() => {
		jest.restoreAllMocks();
	});

	describe('Cross-Tag Move Logic', () => {
		it('should handle basic cross-tag move', async () => {
			const options = {
				from: '1',
				fromTag: 'backlog',
				toTag: 'in-progress',
				withDependencies: false,
				ignoreDependencies: false
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: options.fromTag,
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			mocks.moveTasksBetweenTags.mockResolvedValue({
				message: 'Successfully moved 1 tasks from "backlog" to "in-progress"'
			});

			await handleCrossTagMove(moveContext, options);

			expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
				'/test/path/tasks.json',
				['1'],
				'backlog',
				'in-progress',
				{
					withDependencies: false,
					ignoreDependencies: false
				},
				{ projectRoot: '/test/project' }
			);
		});

		it('should handle --with-dependencies flag', async () => {
			const options = {
				from: '1',
				fromTag: 'backlog',
				toTag: 'in-progress',
				withDependencies: true,
				ignoreDependencies: false
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: options.fromTag,
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			mocks.moveTasksBetweenTags.mockResolvedValue({
				message: 'Successfully moved 2 tasks from "backlog" to "in-progress"'
			});

			await handleCrossTagMove(moveContext, options);

			expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
				'/test/path/tasks.json',
				['1'],
				'backlog',
				'in-progress',
				{
					withDependencies: true,
					ignoreDependencies: false
				},
				{ projectRoot: '/test/project' }
			);
		});

		it('should handle --ignore-dependencies flag', async () => {
			const options = {
				from: '1',
				fromTag: 'backlog',
				toTag: 'in-progress',
				withDependencies: false,
				ignoreDependencies: true
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: options.fromTag,
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			mocks.moveTasksBetweenTags.mockResolvedValue({
				message: 'Successfully moved 1 tasks from "backlog" to "in-progress"'
			});

			await handleCrossTagMove(moveContext, options);

			expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
				'/test/path/tasks.json',
				['1'],
				'backlog',
				'in-progress',
				{
					withDependencies: false,
					ignoreDependencies: true
				},
				{ projectRoot: '/test/project' }
			);
		});
	});

	describe('Error Handling', () => {
		it('should handle missing --from parameter', async () => {
			const options = {
				from: undefined,
				fromTag: 'backlog',
				toTag: 'in-progress'
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: options.fromTag,
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			await expect(handleCrossTagMove(moveContext, options)).rejects.toThrow();

			expect(mockConsoleError).toHaveBeenCalledWith(
				'Error: --from parameter is required for cross-tag moves'
			);
			expect(mockProcessExit).toHaveBeenCalledWith(1);
		});

		it('should handle same source and target tags', async () => {
			const options = {
				from: '1',
				fromTag: 'backlog',
				toTag: 'backlog'
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: options.fromTag,
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			await expect(handleCrossTagMove(moveContext, options)).rejects.toThrow();

			expect(mockConsoleError).toHaveBeenCalledWith(
				'Error: Source and target tags are the same ("backlog")'
			);
			expect(mockProcessExit).toHaveBeenCalledWith(1);
		});
	});

	describe('Fallback to Current Tag', () => {
		it('should use current tag when --from-tag is not provided', async () => {
			const options = {
				from: '1',
				fromTag: undefined,
				toTag: 'in-progress'
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: 'master', // Should use current tag
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			mocks.moveTasksBetweenTags.mockResolvedValue({
				message: 'Successfully moved 1 tasks from "master" to "in-progress"'
			});

			await handleCrossTagMove(moveContext, options);

			expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
				'/test/path/tasks.json',
				['1'],
				'master',
				'in-progress',
				expect.any(Object),
				{ projectRoot: '/test/project' }
			);
		});
	});

	describe('Multiple Task Movement', () => {
		it('should handle comma-separated task IDs', async () => {
			const options = {
				from: '1,2,3',
				fromTag: 'backlog',
				toTag: 'in-progress'
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: options.fromTag,
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			mocks.moveTasksBetweenTags.mockResolvedValue({
				message: 'Successfully moved 3 tasks from "backlog" to "in-progress"'
			});

			await handleCrossTagMove(moveContext, options);

			expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
				'/test/path/tasks.json',
				['1', '2', '3'],
				'backlog',
				'in-progress',
				expect.any(Object),
				{ projectRoot: '/test/project' }
			);
		});

		it('should handle whitespace in comma-separated task IDs', async () => {
			const options = {
				from: '1, 2, 3',
				fromTag: 'backlog',
				toTag: 'in-progress'
			};

			const moveContext = {
				sourceId: options.from,
				sourceTag: options.fromTag,
				toTag: options.toTag,
				taskMaster: mockTaskMaster
			};

			mocks.moveTasksBetweenTags.mockResolvedValue({
				message: 'Successfully moved 3 tasks from "backlog" to "in-progress"'
			});

			await handleCrossTagMove(moveContext, options);

			expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
				'/test/path/tasks.json',
				['1', '2', '3'],
				'backlog',
				'in-progress',
				expect.any(Object),
				{ projectRoot: '/test/project' }
			);
		});
	});

	describe('Mock Configuration Tests', () => {
		it('should work with minimal mock configuration', async () => {
			// Test that the mock factory works with minimal config
			const minimalConfig = {
				core: {
					moveTasksBetweenTags: true,
					generateTaskFiles: true,
					readJSON: true
				}
			};

			const minimalMocks = createMockFactory(minimalConfig);
			expect(minimalMocks.moveTasksBetweenTags).toBeDefined();
			expect(minimalMocks.generateTaskFiles).toBeDefined();
			expect(minimalMocks.readJSON).toBeDefined();
		});

		it('should allow disabling specific mocks', async () => {
			// Test that mocks can be selectively disabled
			const selectiveConfig = {
				core: {
					moveTasksBetweenTags: true,
					generateTaskFiles: false, // Disabled
					readJSON: true
				}
			};

			const selectiveMocks = createMockFactory(selectiveConfig);
			expect(selectiveMocks.moveTasksBetweenTags).toBeDefined();
			expect(selectiveMocks.generateTaskFiles).toBeUndefined();
			expect(selectiveMocks.readJSON).toBeDefined();
		});
	});
});
@@ -1,330 +0,0 @@
import { jest } from '@jest/globals';
import {
	validateCrossTagMove,
	findCrossTagDependencies,
	getDependentTaskIds,
	validateSubtaskMove,
	canMoveWithDependencies
} from '../../../../../scripts/modules/dependency-manager.js';

describe('Circular Dependency Scenarios', () => {
	describe('Circular Cross-Tag Dependencies', () => {
		const allTasks = [
			{
				id: 1,
				title: 'Task 1',
				dependencies: [2],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 2,
				title: 'Task 2',
				dependencies: [3],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 3,
				title: 'Task 3',
				dependencies: [1],
				status: 'pending',
				tag: 'backlog'
			}
		];

		it('should detect circular dependencies across tags', () => {
			// Task 1 depends on 2, 2 depends on 3, 3 depends on 1 (circular)
			// But since all tasks are in 'backlog' and target is 'in-progress',
			// only direct dependencies that are in different tags will be found
			const conflicts = findCrossTagDependencies(
				[allTasks[0]],
				'backlog',
				'in-progress',
				allTasks
			);

			// Only direct dependencies of task 1 that are not in target tag
			expect(conflicts).toHaveLength(1);
			expect(
				conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
			).toBe(true);
		});

		it('should block move with circular dependencies', () => {
			// Since task 1 has dependencies in the same tag, validateCrossTagMove should not throw
			// The function only checks direct dependencies, not circular chains
			expect(() => {
				validateCrossTagMove(allTasks[0], 'backlog', 'in-progress', allTasks);
			}).not.toThrow();
		});

		it('should return canMove: false for circular dependencies', () => {
			const result = canMoveWithDependencies(
				'1',
				'backlog',
				'in-progress',
				allTasks
			);
			expect(result.canMove).toBe(false);
			expect(result.conflicts).toHaveLength(1);
		});
	});

	describe('Complex Dependency Chains', () => {
		const allTasks = [
			{
				id: 1,
				title: 'Task 1',
				dependencies: [2, 3],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 2,
				title: 'Task 2',
				dependencies: [4],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 3,
				title: 'Task 3',
				dependencies: [5],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 4,
				title: 'Task 4',
				dependencies: [],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 5,
				title: 'Task 5',
				dependencies: [6],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 6,
				title: 'Task 6',
				dependencies: [],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 7,
				title: 'Task 7',
				dependencies: [],
				status: 'in-progress',
				tag: 'in-progress'
			}
		];

		it('should find all dependencies in complex chain', () => {
			const conflicts = findCrossTagDependencies(
				[allTasks[0]],
				'backlog',
				'in-progress',
				allTasks
			);

			// Only direct dependencies of task 1 that are not in target tag
			expect(conflicts).toHaveLength(2);
			expect(
				conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
			).toBe(true);
			expect(
				conflicts.some((c) => c.taskId === 1 && c.dependencyId === 3)
			).toBe(true);
		});

		it('should get all dependent task IDs in complex chain', () => {
			const conflicts = findCrossTagDependencies(
				[allTasks[0]],
				'backlog',
				'in-progress',
				allTasks
			);
			const dependentIds = getDependentTaskIds(
				[allTasks[0]],
				conflicts,
				allTasks
			);

			// Should include only the direct dependency IDs from conflicts
			expect(dependentIds).toContain(2);
			expect(dependentIds).toContain(3);
			// Should not include the source task or tasks not in conflicts
			expect(dependentIds).not.toContain(1);
		});
	});

	describe('Mixed Dependency Types', () => {
		const allTasks = [
			{
				id: 1,
				title: 'Task 1',
				dependencies: [2, '3.1'],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 2,
				title: 'Task 2',
				dependencies: [4],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 3,
				title: 'Task 3',
				dependencies: [5],
				status: 'pending',
				tag: 'backlog',
				subtasks: [
					{
						id: 1,
						title: 'Subtask 3.1',
						dependencies: [],
						status: 'pending',
						tag: 'backlog'
					}
				]
			},
			{
				id: 4,
				title: 'Task 4',
				dependencies: [],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 5,
				title: 'Task 5',
				dependencies: [],
				status: 'pending',
				tag: 'backlog'
			}
		];

		it('should handle mixed task and subtask dependencies', () => {
			const conflicts = findCrossTagDependencies(
				[allTasks[0]],
				'backlog',
				'in-progress',
				allTasks
			);

			expect(conflicts).toHaveLength(2);
			expect(
				conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
			).toBe(true);
			expect(
				conflicts.some((c) => c.taskId === 1 && c.dependencyId === '3.1')
			).toBe(true);
		});
	});

	describe('Large Task Set Performance', () => {
		const allTasks = [];
		for (let i = 1; i <= 100; i++) {
			allTasks.push({
				id: i,
				title: `Task ${i}`,
				dependencies: i < 100 ? [i + 1] : [],
				status: 'pending',
				tag: 'backlog'
			});
		}

		it('should handle large task sets efficiently', () => {
			const conflicts = findCrossTagDependencies(
				[allTasks[0]],
				'backlog',
				'in-progress',
				allTasks
			);

			expect(conflicts.length).toBeGreaterThan(0);
			expect(conflicts[0]).toHaveProperty('taskId');
			expect(conflicts[0]).toHaveProperty('dependencyId');
		});
	});

	describe('Edge Cases and Error Conditions', () => {
		const allTasks = [
			{
				id: 1,
				title: 'Task 1',
				dependencies: [2],
				status: 'pending',
				tag: 'backlog'
			},
			{
				id: 2,
				title: 'Task 2',
				dependencies: [],
				status: 'pending',
				tag: 'backlog'
			}
		];

		it('should handle empty task arrays', () => {
			expect(() => {
				findCrossTagDependencies([], 'backlog', 'in-progress', allTasks);
			}).not.toThrow();
		});

		it('should handle non-existent tasks gracefully', () => {
			expect(() => {
				findCrossTagDependencies(
					[{ id: 999, dependencies: [] }],
					'backlog',
					'in-progress',
					allTasks
				);
			}).not.toThrow();
		});

		it('should handle invalid tag names', () => {
			expect(() => {
				findCrossTagDependencies(
					[allTasks[0]],
					'invalid-tag',
					'in-progress',
					allTasks
				);
			}).not.toThrow();
		});

		it('should handle null/undefined dependencies', () => {
			const taskWithNullDeps = {
				...allTasks[0],
				dependencies: [null, undefined, 2]
			};
			expect(() => {
				findCrossTagDependencies(
					[taskWithNullDeps],
					'backlog',
					'in-progress',
					allTasks
				);
			}).not.toThrow();
		});

		it('should handle string dependencies correctly', () => {
			const taskWithStringDeps = { ...allTasks[0], dependencies: ['2', '3'] };
			const conflicts = findCrossTagDependencies(
				[taskWithStringDeps],
				'backlog',
				'in-progress',
				allTasks
			);
			expect(conflicts.length).toBeGreaterThanOrEqual(0);
		});
	});
});
@@ -1,397 +0,0 @@
import { jest } from '@jest/globals';
import {
	validateCrossTagMove,
	findCrossTagDependencies,
	getDependentTaskIds,
	validateSubtaskMove,
	canMoveWithDependencies
} from '../../../../../scripts/modules/dependency-manager.js';

describe('Cross-Tag Dependency Validation', () => {
	describe('validateCrossTagMove', () => {
		const mockAllTasks = [
			{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
			{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
			{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
			{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
		];

		it('should allow move when no dependencies exist', () => {
			const task = { id: 2, dependencies: [], title: 'Task 2' };
			const result = validateCrossTagMove(
				task,
				'backlog',
				'in-progress',
				mockAllTasks
			);

			expect(result.canMove).toBe(true);
			expect(result.conflicts).toHaveLength(0);
		});

		it('should block move when cross-tag dependencies exist', () => {
			const task = { id: 1, dependencies: [2], title: 'Task 1' };
			const result = validateCrossTagMove(
				task,
				'backlog',
				'in-progress',
				mockAllTasks
			);

			expect(result.canMove).toBe(false);
			expect(result.conflicts).toHaveLength(1);
			expect(result.conflicts[0]).toMatchObject({
				taskId: 1,
				dependencyId: 2,
				dependencyTag: 'backlog'
			});
		});

		it('should allow move when dependencies are in target tag', () => {
			const task = { id: 3, dependencies: [1], title: 'Task 3' };
			// Move both task 1 and task 3 to in-progress, then move task 1 to done
			const updatedTasks = mockAllTasks.map((t) => {
				if (t.id === 1) return { ...t, tag: 'in-progress' };
				if (t.id === 3) return { ...t, tag: 'in-progress' };
				return t;
			});
			// Now move task 1 to done
			const updatedTasks2 = updatedTasks.map((t) =>
				t.id === 1 ? { ...t, tag: 'done' } : t
			);
			const result = validateCrossTagMove(
				task,
				'in-progress',
				'done',
				updatedTasks2
			);

			expect(result.canMove).toBe(true);
			expect(result.conflicts).toHaveLength(0);
		});

		it('should handle multiple dependencies correctly', () => {
			const task = { id: 5, dependencies: [1, 3], title: 'Task 5' };
			const result = validateCrossTagMove(
				task,
				'backlog',
				'done',
				mockAllTasks
			);

			expect(result.canMove).toBe(false);
			expect(result.conflicts).toHaveLength(2);
			expect(result.conflicts[0].dependencyId).toBe(1);
			expect(result.conflicts[1].dependencyId).toBe(3);
		});

		it('should throw error for invalid task parameter', () => {
			expect(() =>
				validateCrossTagMove(null, 'backlog', 'in-progress', mockAllTasks)
			).toThrow('Task parameter must be a valid object');
		});

		it('should throw error for invalid source tag', () => {
			const task = { id: 1, dependencies: [], title: 'Task 1' };
			expect(() =>
				validateCrossTagMove(task, '', 'in-progress', mockAllTasks)
			).toThrow('Source tag must be a valid string');
		});

		it('should throw error for invalid target tag', () => {
			const task = { id: 1, dependencies: [], title: 'Task 1' };
			expect(() =>
				validateCrossTagMove(task, 'backlog', null, mockAllTasks)
			).toThrow('Target tag must be a valid string');
		});

		it('should throw error for invalid allTasks parameter', () => {
			const task = { id: 1, dependencies: [], title: 'Task 1' };
			expect(() =>
				validateCrossTagMove(task, 'backlog', 'in-progress', 'not-an-array')
			).toThrow('All tasks parameter must be an array');
		});
	});

	describe('findCrossTagDependencies', () => {
		const mockAllTasks = [
			{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
			{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
			{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
			{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
		];

		it('should find cross-tag dependencies for multiple tasks', () => {
			const sourceTasks = [
				{ id: 1, dependencies: [2], title: 'Task 1' },
				{ id: 3, dependencies: [1], title: 'Task 3' }
			];
			const conflicts = findCrossTagDependencies(
				sourceTasks,
				'backlog',
				'done',
				mockAllTasks
			);

			expect(conflicts).toHaveLength(2);
			expect(conflicts[0].taskId).toBe(1);
			expect(conflicts[0].dependencyId).toBe(2);
			expect(conflicts[1].taskId).toBe(3);
			expect(conflicts[1].dependencyId).toBe(1);
		});

		it('should return empty array when no cross-tag dependencies exist', () => {
			const sourceTasks = [
				{ id: 2, dependencies: [], title: 'Task 2' },
				{ id: 4, dependencies: [], title: 'Task 4' }
			];
			const conflicts = findCrossTagDependencies(
				sourceTasks,
				'backlog',
				'done',
				mockAllTasks
			);

			expect(conflicts).toHaveLength(0);
		});

		it('should handle tasks without dependencies', () => {
			const sourceTasks = [{ id: 2, dependencies: [], title: 'Task 2' }];
			const conflicts = findCrossTagDependencies(
				sourceTasks,
				'backlog',
				'done',
				mockAllTasks
			);

			expect(conflicts).toHaveLength(0);
		});

		it('should throw error for invalid sourceTasks parameter', () => {
			expect(() =>
				findCrossTagDependencies(
					'not-an-array',
					'backlog',
					'done',
					mockAllTasks
				)
			).toThrow('Source tasks parameter must be an array');
		});

		it('should throw error for invalid source tag', () => {
			const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
			expect(() =>
				findCrossTagDependencies(sourceTasks, '', 'done', mockAllTasks)
			).toThrow('Source tag must be a valid string');
		});

		it('should throw error for invalid target tag', () => {
			const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
			expect(() =>
				findCrossTagDependencies(sourceTasks, 'backlog', null, mockAllTasks)
			).toThrow('Target tag must be a valid string');
		});

		it('should throw error for invalid allTasks parameter', () => {
			const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
			expect(() =>
				findCrossTagDependencies(sourceTasks, 'backlog', 'done', 'not-an-array')
			).toThrow('All tasks parameter must be an array');
		});
	});

	describe('getDependentTaskIds', () => {
		const mockAllTasks = [
			{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
			{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
			{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
			{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
		];

		it('should return dependent task IDs', () => {
			const sourceTasks = [{ id: 1, dependencies: [2], title: 'Task 1' }];
			const crossTagDependencies = [
				{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' }
			];
			const dependentIds = getDependentTaskIds(
				sourceTasks,
				crossTagDependencies,
				mockAllTasks
			);

			expect(dependentIds).toContain(2);
			// The function also finds tasks that depend on the source task, so we expect more than just the dependency
			expect(dependentIds.length).toBeGreaterThan(0);
		});

		it('should handle multiple dependencies with recursive resolution', () => {
			const sourceTasks = [{ id: 5, dependencies: [1, 3], title: 'Task 5' }];
			const crossTagDependencies = [
				{ taskId: 5, dependencyId: 1, dependencyTag: 'backlog' },
				{ taskId: 5, dependencyId: 3, dependencyTag: 'in-progress' }
			];
			const dependentIds = getDependentTaskIds(
				sourceTasks,
				crossTagDependencies,
				mockAllTasks
			);

			// Should find all dependencies recursively:
			// Task 5 → [1, 3], Task 1 → [2], so total is [1, 2, 3]
			expect(dependentIds).toContain(1);
			expect(dependentIds).toContain(2); // Task 1's dependency
			expect(dependentIds).toContain(3);
			expect(dependentIds).toHaveLength(3);
		});

		it('should return empty array when no dependencies', () => {
			const sourceTasks = [{ id: 2, dependencies: [], title: 'Task 2' }];
			const crossTagDependencies = [];
			const dependentIds = getDependentTaskIds(
				sourceTasks,
				crossTagDependencies,
				mockAllTasks
			);

			// The function finds tasks that depend on source tasks, so even with no cross-tag dependencies,
			// it might find tasks that depend on the source task
			expect(Array.isArray(dependentIds)).toBe(true);
		});

		it('should throw error for invalid sourceTasks parameter', () => {
			const crossTagDependencies = [];
			expect(() =>
				getDependentTaskIds('not-an-array', crossTagDependencies, mockAllTasks)
			).toThrow('Source tasks parameter must be an array');
		});

		it('should throw error for invalid crossTagDependencies parameter', () => {
			const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
			expect(() =>
				getDependentTaskIds(sourceTasks, 'not-an-array', mockAllTasks)
			).toThrow('Cross tag dependencies parameter must be an array');
		});

		it('should throw error for invalid allTasks parameter', () => {
			const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
			const crossTagDependencies = [];
			expect(() =>
				getDependentTaskIds(sourceTasks, crossTagDependencies, 'not-an-array')
			).toThrow('All tasks parameter must be an array');
		});
	});

	describe('validateSubtaskMove', () => {
		it('should throw error for subtask movement', () => {
			expect(() =>
				validateSubtaskMove('1.2', 'backlog', 'in-progress')
			).toThrow('Cannot move subtask 1.2 directly between tags');
		});

		it('should allow regular task movement', () => {
			expect(() =>
				validateSubtaskMove('1', 'backlog', 'in-progress')
			).not.toThrow();
		});

		it('should throw error for invalid taskId parameter', () => {
			expect(() => validateSubtaskMove(null, 'backlog', 'in-progress')).toThrow(
				'Task ID must be a valid string'
			);
		});

		it('should throw error for invalid source tag', () => {
			expect(() => validateSubtaskMove('1', '', 'in-progress')).toThrow(
				'Source tag must be a valid string'
			);
		});

		it('should throw error for invalid target tag', () => {
			expect(() => validateSubtaskMove('1', 'backlog', null)).toThrow(
				'Target tag must be a valid string'
			);
		});
	});

	describe('canMoveWithDependencies', () => {
		const mockAllTasks = [
			{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
			{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
			{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
			{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
		];

		it('should return canMove: true when no conflicts exist', () => {
			const result = canMoveWithDependencies(
				'2',
				'backlog',
				'in-progress',
				mockAllTasks
			);

			expect(result.canMove).toBe(true);
			expect(result.dependentTaskIds).toHaveLength(0);
			expect(result.conflicts).toHaveLength(0);
		});

		it('should return canMove: false when conflicts exist', () => {
			const result = canMoveWithDependencies(
				'1',
				'backlog',
				'in-progress',
				mockAllTasks
			);

			expect(result.canMove).toBe(false);
			expect(result.dependentTaskIds).toContain(2);
			expect(result.conflicts).toHaveLength(1);
		});

		it('should return canMove: false when task not found', () => {
			const result = canMoveWithDependencies(
				'999',
				'backlog',
				'in-progress',
				mockAllTasks
			);

			expect(result.canMove).toBe(false);
			expect(result.error).toBe('Task not found');
		});

		it('should handle string task IDs', () => {
			const result = canMoveWithDependencies(
				'2',
				'backlog',
				'in-progress',
				mockAllTasks
			);

			expect(result.canMove).toBe(true);
		});

		it('should throw error for invalid taskId parameter', () => {
			expect(() =>
				canMoveWithDependencies(null, 'backlog', 'in-progress', mockAllTasks)
			).toThrow('Task ID must be a valid string');
		});

		it('should throw error for invalid source tag', () => {
			expect(() =>
				canMoveWithDependencies('1', '', 'in-progress', mockAllTasks)
			).toThrow('Source tag must be a valid string');
		});

		it('should throw error for invalid target tag', () => {
			expect(() =>
				canMoveWithDependencies('1', 'backlog', null, mockAllTasks)
			).toThrow('Target tag must be a valid string');
		});

		it('should throw error for invalid allTasks parameter', () => {
			expect(() =>
				canMoveWithDependencies('1', 'backlog', 'in-progress', 'not-an-array')
			).toThrow('All tasks parameter must be an array');
		});
	});
});
@@ -20,27 +20,17 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	taskExists: jest.fn(() => true),
	formatTaskId: jest.fn((id) => id),
	findCycles: jest.fn(() => []),
	traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => []),
	isSilentMode: jest.fn(() => true),
	resolveTag: jest.fn(() => 'master'),
	getTasksForTag: jest.fn(() => []),
	setTasksForTag: jest.fn(),
	enableSilentMode: jest.fn(),
	disableSilentMode: jest.fn(),
	isEmpty: jest.fn((value) => {
		if (value === null || value === undefined) return true;
		if (Array.isArray(value)) return value.length === 0;
		if (typeof value === 'object' && value !== null)
			return Object.keys(value).length === 0;
		return false; // Not an array or object
	}),
	resolveEnvVariable: jest.fn()
	disableSilentMode: jest.fn()
}));

// Mock ui.js
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
	displayBanner: jest.fn(),
	formatDependenciesWithStatus: jest.fn()
	displayBanner: jest.fn()
}));

// Mock task-manager.js

@@ -41,8 +41,7 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	markMigrationForNotice: jest.fn(),
	performCompleteTagMigration: jest.fn(),
	setTasksForTag: jest.fn(),
	getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || []),
	traverseDependencies: jest.fn((tasks, taskId, visited) => [])
	getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || [])
}));

jest.unstable_mockModule(
@@ -79,38 +78,6 @@ jest.unstable_mockModule(
			totalCost: 0.012414,
			currency: 'USD'
		}
	}),
	streamTextService: jest.fn().mockResolvedValue({
		mainResult: async function* () {
			yield '{"tasks":[';
			yield '{"id":1,"title":"Test Task","priority":"high"}';
			yield ']}';
		},
		telemetryData: {
			timestamp: new Date().toISOString(),
			userId: '1234567890',
			commandName: 'analyze-complexity',
			modelUsed: 'claude-3-5-sonnet',
			providerName: 'anthropic',
			inputTokens: 1000,
			outputTokens: 500,
			totalTokens: 1500,
			totalCost: 0.012414,
			currency: 'USD'
		}
	}),
	streamObjectService: jest.fn().mockImplementation(async () => {
		return {
			get partialObjectStream() {
				return (async function* () {
					yield { tasks: [] };
					yield { tasks: [{ id: 1, title: 'Test Task', priority: 'high' }] };
				})();
			},
			object: Promise.resolve({
				tasks: [{ id: 1, title: 'Test Task', priority: 'high' }]
			})
		};
	})
})
);
@@ -221,8 +188,9 @@ const { readJSON, writeJSON, log, CONFIG, findTaskById } = await import(
	'../../../../../scripts/modules/utils.js'
);

const { generateObjectService, generateTextService, streamTextService } =
	await import('../../../../../scripts/modules/ai-services-unified.js');
const { generateObjectService, generateTextService } = await import(
	'../../../../../scripts/modules/ai-services-unified.js'
);

const fs = await import('fs');

@@ -90,7 +90,6 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
		}
		return path.join(projectRoot || '.', basePath);
	}),
	traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => []),
	CONFIG: {
		defaultSubtasks: 3
	}
@@ -178,24 +177,6 @@ jest.unstable_mockModule(
			});
		}
	}),
	streamTextService: jest.fn().mockResolvedValue({
		mainResult: async function* () {
			yield '{"tasks":[';
			yield '{"id":1,"title":"Test Task","priority":"high"}';
			yield ']}';
		},
		telemetryData: {
			timestamp: new Date().toISOString(),
			commandName: 'analyze-complexity',
			modelUsed: 'claude-3-5-sonnet',
			providerName: 'anthropic',
			inputTokens: 1000,
			outputTokens: 500,
			totalTokens: 1500,
			totalCost: 0.012414,
			currency: 'USD'
		}
	}),
	generateObjectService: jest.fn().mockResolvedValue({
		mainResult: {
			object: {
@@ -420,7 +401,7 @@ const { readJSON, writeJSON, getTagAwareFilePath } = await import(
	'../../../../../scripts/modules/utils.js'
);

const { generateTextService, streamTextService } = await import(
const { generateTextService } = await import(
	'../../../../../scripts/modules/ai-services-unified.js'
);

@@ -22,10 +22,7 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	),
	addComplexityToTask: jest.fn(),
	readComplexityReport: jest.fn(() => null),
	getTagAwareFilePath: jest.fn((tag, path) => '/mock/tagged/report.json'),
	stripAnsiCodes: jest.fn((text) =>
		text ? text.replace(/\x1b\[[0-9;]*m/g, '') : text
	)
	getTagAwareFilePath: jest.fn((tag, path) => '/mock/tagged/report.json')
}));

jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
@@ -48,13 +45,8 @@ jest.unstable_mockModule(
);

// Import the mocked modules
const {
	readJSON,
	log,
	readComplexityReport,
	addComplexityToTask,
	stripAnsiCodes
} = await import('../../../../../scripts/modules/utils.js');
const { readJSON, log, readComplexityReport, addComplexityToTask } =
	await import('../../../../../scripts/modules/utils.js');
const { displayTaskList } = await import(
	'../../../../../scripts/modules/ui.js'
);
@@ -592,140 +584,4 @@ describe('listTasks', () => {
		expect(taskIds).toContain(5); // review task
	});
});

	describe('Compact output format', () => {
		test('should output compact format when outputFormat is compact', async () => {
			const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
			const tasksPath = 'tasks/tasks.json';

			await listTasks(tasksPath, null, null, false, 'compact', {
				tag: 'master'
			});

			expect(consoleSpy).toHaveBeenCalled();
			const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');
			// Strip ANSI color codes for testing
			const cleanOutput = stripAnsiCodes(output);

			// Should contain compact format elements: ID status title (priority) [→ dependencies]
			expect(cleanOutput).toContain('1 done Setup Project (high)');
			expect(cleanOutput).toContain(
				'2 pending Implement Core Features (high) → 1'
			);

			consoleSpy.mockRestore();
		});

		test('should format single task compactly', async () => {
			const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
			const tasksPath = 'tasks/tasks.json';

			await listTasks(tasksPath, null, null, false, 'compact', {
				tag: 'master'
			});

			expect(consoleSpy).toHaveBeenCalled();
			const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');

			// Should be compact (no verbose headers)
			expect(output).not.toContain('Project Dashboard');
			expect(output).not.toContain('Progress:');

			consoleSpy.mockRestore();
		});

		test('should handle compact format with subtasks', async () => {
			const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
			const tasksPath = 'tasks/tasks.json';

			await listTasks(
				tasksPath,
				null,
				null,
				true, // withSubtasks = true
				'compact',
				{ tag: 'master' }
			);

			expect(consoleSpy).toHaveBeenCalled();
			const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');
			// Strip ANSI color codes for testing
			const cleanOutput = stripAnsiCodes(output);

			// Should handle both tasks and subtasks
			expect(cleanOutput).toContain('1 done Setup Project (high)');
			expect(cleanOutput).toContain('3.1 done Create Header Component');

			consoleSpy.mockRestore();
		});

		test('should handle empty task list in compact format', async () => {
			readJSON.mockReturnValue({ tasks: [] });
			const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
			const tasksPath = 'tasks/tasks.json';

			await listTasks(tasksPath, null, null, false, 'compact', {
				tag: 'master'
			});

			expect(consoleSpy).toHaveBeenCalledWith('No tasks found');

			consoleSpy.mockRestore();
		});

		test('should format dependencies correctly with shared helper', async () => {
			// Create mock tasks with various dependency scenarios
			const tasksWithDeps = {
				tasks: [
					{
						id: 1,
						title: 'Task with no dependencies',
						status: 'pending',
						priority: 'medium',
						dependencies: []
					},
					{
						id: 2,
						title: 'Task with few dependencies',
						status: 'pending',
						priority: 'high',
						dependencies: [1, 3]
					},
					{
						id: 3,
						title: 'Task with many dependencies',
						status: 'pending',
						priority: 'low',
						dependencies: [1, 2, 4, 5, 6, 7, 8, 9]
					}
				]
			};

			readJSON.mockReturnValue(tasksWithDeps);
			const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
			const tasksPath = 'tasks/tasks.json';

			await listTasks(tasksPath, null, null, false, 'compact', {
				tag: 'master'
			});

			expect(consoleSpy).toHaveBeenCalled();
			const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');
			// Strip ANSI color codes for testing
			const cleanOutput = stripAnsiCodes(output);

			// Should format tasks correctly with compact output including priority
			expect(cleanOutput).toContain(
				'1 pending Task with no dependencies (medium)'
			);
			expect(cleanOutput).toContain('Task with few dependencies');
			expect(cleanOutput).toContain('Task with many dependencies');
			// Should show dependencies with arrow when they exist
			expect(cleanOutput).toMatch(/2.*→.*1,3/);
			// Should truncate many dependencies with "+X more" format
			expect(cleanOutput).toMatch(/3.*→.*1,2,4,5,6.*\(\+\d+ more\)/);

			consoleSpy.mockRestore();
		});
	});
});

@@ -1,633 +0,0 @@
import { jest } from '@jest/globals';

// --- Mocks ---
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	readJSON: jest.fn(),
	writeJSON: jest.fn(),
	log: jest.fn(),
	setTasksForTag: jest.fn(),
	truncate: jest.fn((t) => t),
	isSilentMode: jest.fn(() => false),
	traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => {
		// Mock realistic dependency behavior for testing
		const { direction = 'forward' } = options;

		if (direction === 'forward') {
			// For forward dependencies: return tasks that the source tasks depend on
			const result = [];
			sourceTasks.forEach((task) => {
				if (task.dependencies && Array.isArray(task.dependencies)) {
					result.push(...task.dependencies);
				}
			});
			return result;
		} else if (direction === 'reverse') {
			// For reverse dependencies: return tasks that depend on the source tasks
			const sourceIds = sourceTasks.map((t) => t.id);
			const normalizedSourceIds = sourceIds.map((id) => String(id));
			const result = [];
			allTasks.forEach((task) => {
				if (task.dependencies && Array.isArray(task.dependencies)) {
					const hasDependency = task.dependencies.some((depId) =>
						normalizedSourceIds.includes(String(depId))
					);
					if (hasDependency) {
						result.push(task.id);
					}
				}
			});
			return result;
		}
		return [];
	})
}));

jest.unstable_mockModule(
	'../../../../../scripts/modules/task-manager/generate-task-files.js',
	() => ({
		default: jest.fn().mockResolvedValue()
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/task-manager.js',
	() => ({
		isTaskDependentOn: jest.fn(() => false)
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/dependency-manager.js',
	() => ({
		validateCrossTagMove: jest.fn(),
		findCrossTagDependencies: jest.fn(),
		getDependentTaskIds: jest.fn(),
		validateSubtaskMove: jest.fn()
	})
);

const { readJSON, writeJSON, log } = await import(
	'../../../../../scripts/modules/utils.js'
);

const {
	validateCrossTagMove,
	findCrossTagDependencies,
	getDependentTaskIds,
	validateSubtaskMove
} = await import('../../../../../scripts/modules/dependency-manager.js');

const { moveTasksBetweenTags, getAllTasksWithTags } = await import(
	'../../../../../scripts/modules/task-manager/move-task.js'
);

describe('Cross-Tag Task Movement', () => {
	let mockRawData;
	let mockTasksPath;
	let mockContext;

	beforeEach(() => {
		jest.clearAllMocks();

		// Setup mock data
		mockRawData = {
			backlog: {
				tasks: [
					{ id: 1, title: 'Task 1', dependencies: [2] },
					{ id: 2, title: 'Task 2', dependencies: [] },
					{ id: 3, title: 'Task 3', dependencies: [1] }
				]
			},
			'in-progress': {
				tasks: [{ id: 4, title: 'Task 4', dependencies: [] }]
			},
			done: {
				tasks: [{ id: 5, title: 'Task 5', dependencies: [4] }]
			}
		};

		mockTasksPath = '/test/path/tasks.json';
		mockContext = { projectRoot: '/test/project' };

		// Mock readJSON to return our test data
		readJSON.mockImplementation((path, projectRoot, tag) => {
			return { ...mockRawData[tag], tag, _rawTaggedData: mockRawData };
		});

		writeJSON.mockResolvedValue();
		log.mockImplementation(() => {});
	});

	afterEach(() => {
		jest.clearAllMocks();
	});

	describe('getAllTasksWithTags', () => {
		it('should return all tasks with tag information', () => {
			const allTasks = getAllTasksWithTags(mockRawData);

			expect(allTasks).toHaveLength(5);
			expect(allTasks.find((t) => t.id === 1).tag).toBe('backlog');
			expect(allTasks.find((t) => t.id === 4).tag).toBe('in-progress');
			expect(allTasks.find((t) => t.id === 5).tag).toBe('done');
		});
	});

	describe('validateCrossTagMove', () => {
		it('should allow move when no dependencies exist', () => {
			const task = { id: 2, dependencies: [] };
			const allTasks = getAllTasksWithTags(mockRawData);

			validateCrossTagMove.mockReturnValue({ canMove: true, conflicts: [] });
			const result = validateCrossTagMove(
				task,
				'backlog',
				'in-progress',
				allTasks
			);

			expect(result.canMove).toBe(true);
			expect(result.conflicts).toHaveLength(0);
		});

		it('should block move when cross-tag dependencies exist', () => {
			const task = { id: 1, dependencies: [2] };
			const allTasks = getAllTasksWithTags(mockRawData);

			validateCrossTagMove.mockReturnValue({
				canMove: false,
				conflicts: [{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' }]
			});
			const result = validateCrossTagMove(
				task,
				'backlog',
				'in-progress',
				allTasks
			);

			expect(result.canMove).toBe(false);
			expect(result.conflicts).toHaveLength(1);
			expect(result.conflicts[0].dependencyId).toBe(2);
		});
	});

	describe('findCrossTagDependencies', () => {
		it('should find cross-tag dependencies for multiple tasks', () => {
			const sourceTasks = [
				{ id: 1, dependencies: [2] },
				{ id: 3, dependencies: [1] }
			];
			const allTasks = getAllTasksWithTags(mockRawData);

			findCrossTagDependencies.mockReturnValue([
				{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' },
				{ taskId: 3, dependencyId: 1, dependencyTag: 'backlog' }
			]);
			const conflicts = findCrossTagDependencies(
				sourceTasks,
				'backlog',
				'in-progress',
				allTasks
			);

			expect(conflicts).toHaveLength(2);
			expect(
				conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
			).toBe(true);
			expect(
				conflicts.some((c) => c.taskId === 3 && c.dependencyId === 1)
			).toBe(true);
		});
	});

	describe('getDependentTaskIds', () => {
		it('should return dependent task IDs', () => {
			const sourceTasks = [{ id: 1, dependencies: [2] }];
			const crossTagDependencies = [
				{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' }
			];
			const allTasks = getAllTasksWithTags(mockRawData);

			getDependentTaskIds.mockReturnValue([2]);
			const dependentTaskIds = getDependentTaskIds(
				sourceTasks,
				crossTagDependencies,
				allTasks
			);

			expect(dependentTaskIds).toContain(2);
		});
	});

	describe('moveTasksBetweenTags', () => {
		it('should move tasks without dependencies successfully', async () => {
			// Mock the dependency functions to return no conflicts
			findCrossTagDependencies.mockReturnValue([]);
			validateSubtaskMove.mockImplementation(() => {});

			const result = await moveTasksBetweenTags(
				mockTasksPath,
				[2],
				'backlog',
				'in-progress',
				{},
				mockContext
			);

			expect(result.message).toContain('Successfully moved 1 tasks');
			expect(writeJSON).toHaveBeenCalledWith(
				mockTasksPath,
				expect.any(Object),
				mockContext.projectRoot,
				null
			);
		});

		it('should throw error for cross-tag dependencies by default', async () => {
			const mockDependency = {
				taskId: 1,
				dependencyId: 2,
				dependencyTag: 'backlog'
			};
			findCrossTagDependencies.mockReturnValue([mockDependency]);
			validateSubtaskMove.mockImplementation(() => {});

			await expect(
				moveTasksBetweenTags(
					mockTasksPath,
					[1],
					'backlog',
					'in-progress',
					{},
					mockContext
				)
			).rejects.toThrow(
				'Cannot move tasks: 1 cross-tag dependency conflicts found'
			);

			expect(writeJSON).not.toHaveBeenCalled();
		});

		it('should move with dependencies when --with-dependencies is used', async () => {
			const mockDependency = {
				taskId: 1,
				dependencyId: 2,
				dependencyTag: 'backlog'
			};
			findCrossTagDependencies.mockReturnValue([mockDependency]);
			getDependentTaskIds.mockReturnValue([2]);
			validateSubtaskMove.mockImplementation(() => {});

			const result = await moveTasksBetweenTags(
				mockTasksPath,
				[1],
				'backlog',
				'in-progress',
				{ withDependencies: true },
				mockContext
			);

			expect(result.message).toContain('Successfully moved 2 tasks');
			expect(writeJSON).toHaveBeenCalledWith(
				mockTasksPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 3,
								title: 'Task 3',
								dependencies: [1]
							})
						])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 4,
								title: 'Task 4',
								dependencies: []
							}),
							expect.objectContaining({
								id: 1,
								title: 'Task 1',
								dependencies: [2],
								metadata: expect.objectContaining({
									moveHistory: expect.arrayContaining([
										expect.objectContaining({
											fromTag: 'backlog',
											toTag: 'in-progress',
											timestamp: expect.any(String)
										})
									])
								})
							}),
							expect.objectContaining({
								id: 2,
								title: 'Task 2',
								dependencies: [],
								metadata: expect.objectContaining({
									moveHistory: expect.arrayContaining([
										expect.objectContaining({
											fromTag: 'backlog',
											toTag: 'in-progress',
											timestamp: expect.any(String)
										})
									])
								})
							})
						])
					}),
					done: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 5,
								title: 'Task 5',
								dependencies: [4]
							})
						])
					})
				}),
				mockContext.projectRoot,
				null
			);
		});

		it('should break dependencies when --ignore-dependencies is used', async () => {
			const mockDependency = {
				taskId: 1,
				dependencyId: 2,
				dependencyTag: 'backlog'
			};
			findCrossTagDependencies.mockReturnValue([mockDependency]);
			validateSubtaskMove.mockImplementation(() => {});

			const result = await moveTasksBetweenTags(
				mockTasksPath,
				[2],
				'backlog',
				'in-progress',
				{ ignoreDependencies: true },
				mockContext
			);

			expect(result.message).toContain('Successfully moved 1 tasks');
			expect(writeJSON).toHaveBeenCalledWith(
				mockTasksPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								title: 'Task 1',
								dependencies: [2] // Dependencies not actually removed in current implementation
							}),
							expect.objectContaining({
								id: 3,
								title: 'Task 3',
								dependencies: [1]
							})
						])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 4,
								title: 'Task 4',
								dependencies: []
							}),
							expect.objectContaining({
								id: 2,
								title: 'Task 2',
								dependencies: [],
								metadata: expect.objectContaining({
									moveHistory: expect.arrayContaining([
										expect.objectContaining({
											fromTag: 'backlog',
											toTag: 'in-progress',
											timestamp: expect.any(String)
										})
									])
								})
							})
						])
					}),
					done: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 5,
								title: 'Task 5',
								dependencies: [4]
							})
						])
					})
				}),
				mockContext.projectRoot,
				null
			);
		});

		it('should create target tag if it does not exist', async () => {
			findCrossTagDependencies.mockReturnValue([]);
			validateSubtaskMove.mockImplementation(() => {});

			const result = await moveTasksBetweenTags(
				mockTasksPath,
				[2],
				'backlog',
				'new-tag',
				{},
				mockContext
			);

			expect(result.message).toContain('Successfully moved 1 tasks');
			expect(result.message).toContain('new-tag');
			expect(writeJSON).toHaveBeenCalledWith(
				mockTasksPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								title: 'Task 1',
								dependencies: [2]
							}),
							expect.objectContaining({
								id: 3,
								title: 'Task 3',
								dependencies: [1]
							})
						])
					}),
					'new-tag': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 2,
								title: 'Task 2',
								dependencies: [],
								metadata: expect.objectContaining({
									moveHistory: expect.arrayContaining([
										expect.objectContaining({
											fromTag: 'backlog',
											toTag: 'new-tag',
											timestamp: expect.any(String)
										})
									])
								})
							})
						])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 4,
								title: 'Task 4',
								dependencies: []
							})
						])
					}),
					done: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 5,
								title: 'Task 5',
								dependencies: [4]
							})
						])
					})
				}),
				mockContext.projectRoot,
				null
			);
		});

		it('should throw error for subtask movement', async () => {
			const subtaskError = 'Cannot move subtask 1.2 directly between tags';
			validateSubtaskMove.mockImplementation(() => {
				throw new Error(subtaskError);
			});

			await expect(
				moveTasksBetweenTags(
					mockTasksPath,
					['1.2'],
					'backlog',
					'in-progress',
					{},
					mockContext
				)
			).rejects.toThrow(subtaskError);

			expect(writeJSON).not.toHaveBeenCalled();
		});

		it('should throw error for invalid task IDs', async () => {
			findCrossTagDependencies.mockReturnValue([]);
			validateSubtaskMove.mockImplementation(() => {});

			await expect(
				moveTasksBetweenTags(
					mockTasksPath,
					[999], // Non-existent task
					'backlog',
					'in-progress',
					{},
					mockContext
				)
			).rejects.toThrow('Task 999 not found in source tag "backlog"');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		it('should throw error for invalid source tag', async () => {
			findCrossTagDependencies.mockReturnValue([]);
			validateSubtaskMove.mockImplementation(() => {});

			await expect(
				moveTasksBetweenTags(
					mockTasksPath,
					[1],
					'non-existent-tag',
					'in-progress',
					{},
					mockContext
				)
			).rejects.toThrow('Source tag "non-existent-tag" not found or invalid');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		it('should handle string dependencies correctly during cross-tag move', async () => {
			// Setup mock data with string dependencies
			mockRawData = {
				backlog: {
					tasks: [
						{ id: 1, title: 'Task 1', dependencies: ['2'] }, // String dependency
						{ id: 2, title: 'Task 2', dependencies: [] },
						{ id: 3, title: 'Task 3', dependencies: ['1'] } // String dependency
					]
				},
				'in-progress': {
					tasks: [{ id: 4, title: 'Task 4', dependencies: [] }]
				}
			};

			// Mock readJSON to return our test data
			readJSON.mockImplementation((path, projectRoot, tag) => {
				return { ...mockRawData[tag], tag, _rawTaggedData: mockRawData };
			});

			findCrossTagDependencies.mockReturnValue([]);
			validateSubtaskMove.mockImplementation(() => {});

			const result = await moveTasksBetweenTags(
				mockTasksPath,
				['1'], // String task ID
				'backlog',
				'in-progress',
				{},
				mockContext
			);

			expect(result.message).toContain('Successfully moved 1 tasks');
			expect(writeJSON).toHaveBeenCalledWith(
				mockTasksPath,
				expect.objectContaining({
					backlog: expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 2,
								title: 'Task 2',
								dependencies: []
							}),
							expect.objectContaining({
								id: 3,
								title: 'Task 3',
								dependencies: ['1'] // Should remain as string
							})
						])
					}),
					'in-progress': expect.objectContaining({
						tasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								title: 'Task 1',
								dependencies: ['2'], // Should remain as string
								metadata: expect.objectContaining({
									moveHistory: expect.arrayContaining([
										expect.objectContaining({
											fromTag: 'backlog',
											toTag: 'in-progress',
											timestamp: expect.any(String)
										})
									])
								})
							})
						])
					})
				}),
				mockContext.projectRoot,
				null
			);
		});
	});
});
@@ -1,13 +1,13 @@
import { jest } from '@jest/globals';

// --- Mocks ---
// Only mock the specific functions that move-task actually uses
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	readJSON: jest.fn(),
	writeJSON: jest.fn(),
	log: jest.fn(),
	setTasksForTag: jest.fn(),
	traverseDependencies: jest.fn(() => [])
	truncate: jest.fn((t) => t),
	isSilentMode: jest.fn(() => false)
}));

jest.unstable_mockModule(
@@ -18,20 +18,13 @@ jest.unstable_mockModule(
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/task-manager/is-task-dependent.js',
	'../../../../../scripts/modules/task-manager.js',
	() => ({
		default: jest.fn(() => false)
		isTaskDependentOn: jest.fn(() => false)
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/dependency-manager.js',
	() => ({
		findCrossTagDependencies: jest.fn(() => []),
		getDependentTaskIds: jest.fn(() => []),
		validateSubtaskMove: jest.fn()
	})
);
// fs not needed since move-task uses writeJSON

const { readJSON, writeJSON, log } = await import(
	'../../../../../scripts/modules/utils.js'

File diff suppressed because it is too large
@@ -1,498 +0,0 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import {
|
||||
displayCrossTagDependencyError,
|
||||
displaySubtaskMoveError,
|
||||
displayInvalidTagCombinationError,
|
||||
displayDependencyValidationHints,
|
||||
formatTaskIdForDisplay
|
||||
} from '../../../../../scripts/modules/ui.js';
|
||||
|
||||
// Mock console.log to capture output
|
||||
const originalConsoleLog = console.log;
|
||||
const mockConsoleLog = jest.fn();
|
||||
global.console.log = mockConsoleLog;
|
||||
|
||||
// Add afterAll hook to restore
|
||||
afterAll(() => {
|
||||
global.console.log = originalConsoleLog;
|
||||
});
|
||||
|
||||
describe('Cross-Tag Error Display Functions', () => {
|
||||
beforeEach(() => {
|
||||
mockConsoleLog.mockClear();
|
||||
});
|
||||
|
||||
describe('displayCrossTagDependencyError', () => {
|
||||
it('should display cross-tag dependency error with conflicts', () => {
|
||||
const conflicts = [
|
||||
{
|
||||
taskId: 1,
|
||||
dependencyId: 2,
|
||||
dependencyTag: 'backlog',
|
||||
message: 'Task 1 depends on 2 (in backlog)'
|
||||
},
|
||||
{
|
||||
taskId: 3,
|
||||
dependencyId: 4,
|
||||
dependencyTag: 'done',
|
||||
message: 'Task 3 depends on 4 (in done)'
|
||||
}
|
||||
];
|
||||
|
||||
displayCrossTagDependencyError(conflicts, 'in-progress', 'done', '1,3');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move tasks from "in-progress" to "done"'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Cross-tag dependency conflicts detected:')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Task 1 depends on 2 (in backlog)')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Task 3 depends on 4 (in done)')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Resolution options:')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('--with-dependencies')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('--ignore-dependencies')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle empty conflicts array', () => {
|
||||
displayCrossTagDependencyError([], 'backlog', 'done', '1');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('❌ Cannot move tasks from "backlog" to "done"')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Cross-tag dependency conflicts detected:')
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('displaySubtaskMoveError', () => {
|
||||
it('should display subtask movement restriction error', () => {
|
||||
displaySubtaskMoveError('5.2', 'backlog', 'in-progress');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 5.2 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Subtask movement restriction:')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'• Subtasks cannot be moved directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Resolution options:')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=5.2 --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle nested subtask IDs (three levels)', () => {
|
||||
displaySubtaskMoveError('5.2.1', 'feature-auth', 'production');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 5.2.1 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=5.2.1 --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle deeply nested subtask IDs (four levels)', () => {
|
||||
displaySubtaskMoveError('10.3.2.1', 'development', 'testing');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 10.3.2.1 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=10.3.2.1 --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle single-level subtask IDs', () => {
|
||||
displaySubtaskMoveError('15.1', 'master', 'feature-branch');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 15.1 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=15.1 --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle invalid subtask ID format gracefully', () => {
|
||||
displaySubtaskMoveError('invalid-id', 'tag1', 'tag2');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask invalid-id directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=invalid-id --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle empty subtask ID', () => {
|
||||
displaySubtaskMoveError('', 'source', 'target');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
`❌ Cannot move subtask ${formatTaskIdForDisplay('')} directly between tags`
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
`remove-subtask --id=${formatTaskIdForDisplay('')} --convert`
|
||||
)
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle null subtask ID', () => {
|
||||
displaySubtaskMoveError(null, 'source', 'target');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask null directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=null --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle undefined subtask ID', () => {
|
||||
displaySubtaskMoveError(undefined, 'source', 'target');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask undefined directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=undefined --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle special characters in subtask ID', () => {
|
||||
displaySubtaskMoveError('5.2@test', 'dev', 'prod');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 5.2@test directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=5.2@test --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle numeric subtask IDs', () => {
|
||||
displaySubtaskMoveError('123.456', 'alpha', 'beta');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 123.456 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('remove-subtask --id=123.456 --convert')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle identical source and target tags', () => {
|
||||
displaySubtaskMoveError('7.3', 'same-tag', 'same-tag');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 7.3 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Source tag: "same-tag"')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Target tag: "same-tag"')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle empty tag names', () => {
|
||||
displaySubtaskMoveError('9.1', '', '');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 9.1 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Source tag: ""')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Target tag: ""')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle null tag names', () => {
|
||||
displaySubtaskMoveError('12.4', null, null);
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 12.4 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Source tag: "null"')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Target tag: "null"')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle complex tag names with special characters', () => {
|
||||
displaySubtaskMoveError(
|
||||
'3.2.1',
|
||||
'feature/user-auth@v2.0',
|
||||
'production@stable'
|
||||
);
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'❌ Cannot move subtask 3.2.1 directly between tags'
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Source tag: "feature/user-auth@v2.0"')
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining('• Target tag: "production@stable"')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle very long subtask IDs', () => {
|
||||
const longId = '1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.16.17.18.19.20';
|
||||
displaySubtaskMoveError(longId, 'short', 'long');
|
||||
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
`❌ Cannot move subtask ${longId} directly between tags`
|
||||
)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringContaining(`remove-subtask --id=${longId} --convert`)
|
||||
);
|
||||
});
|
||||
|
||||
		it('should handle whitespace in subtask ID', () => {
			displaySubtaskMoveError(' 5.2 ', 'clean', 'dirty');

			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining(
					'❌ Cannot move subtask 5.2 directly between tags'
				)
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('remove-subtask --id= 5.2 --convert')
			);
		});
	});

	describe('displayInvalidTagCombinationError', () => {
		it('should display invalid tag combination error', () => {
			displayInvalidTagCombinationError(
				'backlog',
				'backlog',
				'Source and target tags are identical'
			);

			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('❌ Invalid tag combination')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('Error details:')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('• Source tag: "backlog"')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('• Target tag: "backlog"')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining(
					'• Reason: Source and target tags are identical'
				)
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('Resolution options:')
			);
		});
	});

	describe('displayDependencyValidationHints', () => {
		it('should display general hints by default', () => {
			displayDependencyValidationHints();

			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('Helpful hints:')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('💡 Use "task-master validate-dependencies"')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('💡 Use "task-master fix-dependencies"')
			);
		});

		it('should display before-move hints', () => {
			displayDependencyValidationHints('before-move');

			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('Helpful hints:')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining(
					'💡 Tip: Run "task-master validate-dependencies"'
				)
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('💡 Tip: Use "task-master fix-dependencies"')
			);
		});

		it('should display after-error hints', () => {
			displayDependencyValidationHints('after-error');

			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('Helpful hints:')
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining(
					'🔧 Quick fix: Run "task-master validate-dependencies"'
				)
			);
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining(
					'🔧 Quick fix: Use "task-master fix-dependencies"'
				)
			);
		});

		it('should handle unknown context gracefully', () => {
			displayDependencyValidationHints('unknown-context');

			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('Helpful hints:')
			);
			// Should fall back to general hints
			expect(mockConsoleLog).toHaveBeenCalledWith(
				expect.stringContaining('💡 Use "task-master validate-dependencies"')
			);
		});
	});
});
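The four hint cases above pin down only the `Helpful hints:` banner, the per-context message prefixes, and the fallback to general hints for unknown contexts. A minimal sketch of a function that would satisfy them, assuming a simple lookup table; the table shape and the full hint strings are illustrative, not the project's actual implementation:

```js
// Sketch only: the tests assert just the prefixes and the fallback behaviour;
// the hint strings and table layout here are assumptions.
const HINTS = {
	general: [
		'💡 Use "task-master validate-dependencies" to check for issues',
		'💡 Use "task-master fix-dependencies" to repair them'
	],
	'before-move': [
		'💡 Tip: Run "task-master validate-dependencies" before moving',
		'💡 Tip: Use "task-master fix-dependencies" if it reports problems'
	],
	'after-error': [
		'🔧 Quick fix: Run "task-master validate-dependencies"',
		'🔧 Quick fix: Use "task-master fix-dependencies"'
	]
};

function displayDependencyValidationHints(context = 'general') {
	console.log('Helpful hints:');
	// Unknown contexts fall back to the general hints, per the last test
	(HINTS[context] ?? HINTS.general).forEach((hint) => console.log(hint));
}
```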

/**
 * Test for ID type consistency in dependency comparisons.
 * Verifies that the fix for mixed string/number ID comparison issues works correctly.
 */
describe('ID Type Consistency in Dependency Comparisons', () => {
	test('should handle mixed string/number ID comparisons correctly', () => {
		// Test the pattern that was fixed in the move-task tests
		const sourceTasks = [
			{ id: 1, title: 'Task 1' },
			{ id: 2, title: 'Task 2' },
			{ id: '3.1', title: 'Subtask 3.1' }
		];

		const allTasks = [
			{ id: 1, title: 'Task 1', dependencies: [2, '3.1'] },
			{ id: 2, title: 'Task 2', dependencies: ['1'] },
			{
				id: 3,
				title: 'Task 3',
				subtasks: [{ id: 1, title: 'Subtask 3.1', dependencies: [1] }]
			}
		];

		// The fixed pattern: normalize source IDs and compare via string conversion
		const sourceIds = sourceTasks.map((t) => t.id);
		const normalizedSourceIds = sourceIds.map((id) => String(id));

		// Dependencies should be identified correctly regardless of type
		const result = [];
		allTasks.forEach((task) => {
			if (task.dependencies && Array.isArray(task.dependencies)) {
				const hasDependency = task.dependencies.some((depId) =>
					normalizedSourceIds.includes(String(depId))
				);
				if (hasDependency) {
					result.push(task.id);
				}
			}
		});

		// Verify that the comparison works correctly
		expect(result).toContain(1); // Task 1 depends on 2 and '3.1'
		expect(result).toContain(2); // Task 2 depends on '1'

		// Edge cases with mixed-type dependency lists
		const mixedDependencies = [
			{ id: 1, dependencies: [1, 2, '3.1', '4.2'] },
			{ id: 2, dependencies: ['1', 3, '5.1'] }
		];

		const testSourceIds = [1, '3.1', 4];
		const normalizedTestSourceIds = testSourceIds.map((id) => String(id));

		mixedDependencies.forEach((task) => {
			const hasMatch = task.dependencies.some((depId) =>
				normalizedTestSourceIds.includes(String(depId))
			);
			expect(typeof hasMatch).toBe('boolean');
			expect(hasMatch).toBe(true); // Should find matches in both tasks
		});
	});

	test('should handle edge cases in ID normalization', () => {
		// Various ID formats, including null/undefined dependencies
		const testCases = [
			{ source: 1, dependency: '1', expected: true },
			{ source: '1', dependency: 1, expected: true },
			{ source: '3.1', dependency: '3.1', expected: true },
			{ source: 3, dependency: '3.1', expected: false }, // Different formats
			{ source: '3.1', dependency: 3, expected: false }, // Different formats
			{ source: 1, dependency: 2, expected: false }, // No match
			{ source: '1.2', dependency: '1.2', expected: true },
			{ source: 1, dependency: null, expected: false }, // Handle null
			{ source: 1, dependency: undefined, expected: false } // Handle undefined
		];

		testCases.forEach(({ source, dependency, expected }) => {
			const normalizedSourceIds = [String(source)];
			const hasMatch = normalizedSourceIds.includes(String(dependency));
			expect(hasMatch).toBe(expected);
		});
	});
});
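The normalization pattern exercised in both tests could be factored into a single helper. A hypothetical sketch; the name `dependsOnAny` is illustrative and not from the codebase:

```js
// Hypothetical helper distilled from the pattern above. Coercing both sides
// to strings makes 1 and '1' compare equal, while keeping dotted subtask IDs
// like '3.1' distinct from their parent ID 3.
function dependsOnAny(task, sourceIds) {
	const normalized = sourceIds.map(String);
	return (
		Array.isArray(task.dependencies) &&
		task.dependencies.some((depId) => normalized.includes(String(depId)))
	);
}

// dependsOnAny({ dependencies: [2, '3.1'] }, [1, '3.1']) === true
// dependsOnAny({ dependencies: ['3.1'] }, [3])           === false
```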
@@ -1,169 +0,0 @@
/**
 * Unit tests for indicators module (priority and complexity indicators)
 */
import { jest } from '@jest/globals';

// Mock chalk using unstable_mockModule for ESM compatibility
jest.unstable_mockModule('chalk', () => ({
	default: {
		red: jest.fn((str) => str),
		yellow: jest.fn((str) => str),
		green: jest.fn((str) => str),
		white: jest.fn((str) => str),
		hex: jest.fn(() => jest.fn((str) => str))
	}
}));

// Import after mocking
const {
	getMcpPriorityIndicators,
	getCliPriorityIndicators,
	getPriorityIndicators,
	getPriorityIndicator,
	getStatusBarPriorityIndicators,
	getPriorityColors,
	getCliComplexityIndicators,
	getStatusBarComplexityIndicators,
	getComplexityColors,
	getComplexityIndicator
} = await import('../../../src/ui/indicators.js');

describe('Priority Indicators', () => {
	describe('getMcpPriorityIndicators', () => {
		it('should return emoji indicators for MCP context', () => {
			const indicators = getMcpPriorityIndicators();
			expect(indicators).toEqual({
				high: '🔴',
				medium: '🟠',
				low: '🟢'
			});
		});
	});

	describe('getCliPriorityIndicators', () => {
		it('should return colored dot indicators for CLI context', () => {
			const indicators = getCliPriorityIndicators();
			expect(indicators).toHaveProperty('high');
			expect(indicators).toHaveProperty('medium');
			expect(indicators).toHaveProperty('low');
			// Since chalk is mocked, we're just verifying structure
			expect(indicators.high).toContain('●');
		});
	});

	describe('getPriorityIndicators', () => {
		it('should return MCP indicators when isMcp is true', () => {
			const indicators = getPriorityIndicators(true);
			expect(indicators).toEqual({
				high: '🔴',
				medium: '🟠',
				low: '🟢'
			});
		});

		it('should return CLI indicators when isMcp is false', () => {
			const indicators = getPriorityIndicators(false);
			expect(indicators).toHaveProperty('high');
			expect(indicators).toHaveProperty('medium');
			expect(indicators).toHaveProperty('low');
		});

		it('should default to CLI indicators when no parameter provided', () => {
			const indicators = getPriorityIndicators();
			expect(indicators).toHaveProperty('high');
			expect(indicators.high).toContain('●');
		});
	});

	describe('getPriorityIndicator', () => {
		it('should return correct MCP indicator for valid priority', () => {
			expect(getPriorityIndicator('high', true)).toBe('🔴');
			expect(getPriorityIndicator('medium', true)).toBe('🟠');
			expect(getPriorityIndicator('low', true)).toBe('🟢');
		});

		it('should return correct CLI indicator for valid priority', () => {
			const highIndicator = getPriorityIndicator('high', false);
			const mediumIndicator = getPriorityIndicator('medium', false);
			const lowIndicator = getPriorityIndicator('low', false);

			expect(highIndicator).toContain('●');
			expect(mediumIndicator).toContain('●');
			expect(lowIndicator).toContain('●');
		});

		it('should return medium indicator for invalid priority', () => {
			expect(getPriorityIndicator('invalid', true)).toBe('🟠');
			expect(getPriorityIndicator(null, true)).toBe('🟠');
			expect(getPriorityIndicator(undefined, true)).toBe('🟠');
		});

		it('should default to CLI context when isMcp not provided', () => {
			const indicator = getPriorityIndicator('high');
			expect(indicator).toContain('●');
		});
	});
});
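Taken together, these cases imply a context-keyed lookup with a `medium` fallback for unknown, null, or undefined priorities. A minimal sketch of that shape; an assumption, since the real code lives in `src/ui/indicators.js` and may differ:

```js
// Sketch of the behaviour the tests pin down; not the module's actual source.
function getPriorityIndicator(priority, isMcp = false) {
	const indicators = isMcp
		? getMcpPriorityIndicators()
		: getCliPriorityIndicators();
	// Unknown, null, or undefined priorities fall back to 'medium'
	return indicators[priority] ?? indicators.medium;
}
```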

describe('Complexity Indicators', () => {
	describe('getCliComplexityIndicators', () => {
		it('should return colored dot indicators for complexity levels', () => {
			const indicators = getCliComplexityIndicators();
			expect(indicators).toHaveProperty('high');
			expect(indicators).toHaveProperty('medium');
			expect(indicators).toHaveProperty('low');
			expect(indicators.high).toContain('●');
		});
	});

	describe('getStatusBarComplexityIndicators', () => {
		it('should return single character indicators for status bars', () => {
			const indicators = getStatusBarComplexityIndicators();
			// Since chalk is mocked, we need to check for the actual characters
			expect(indicators.high).toContain('⋮');
			expect(indicators.medium).toContain(':');
			expect(indicators.low).toContain('.');
		});
	});

	describe('getComplexityColors', () => {
		it('should return complexity color functions', () => {
			const colors = getComplexityColors();
			expect(colors).toHaveProperty('high');
			expect(colors).toHaveProperty('medium');
			expect(colors).toHaveProperty('low');
			// Verify they are functions (mocked chalk functions)
			expect(typeof colors.high).toBe('function');
		});
	});

	describe('getComplexityIndicator', () => {
		it('should return high indicator for scores >= 7', () => {
			const cliIndicators = getCliComplexityIndicators();
			expect(getComplexityIndicator(7)).toBe(cliIndicators.high);
			expect(getComplexityIndicator(8)).toBe(cliIndicators.high);
			expect(getComplexityIndicator(10)).toBe(cliIndicators.high);
		});

		it('should return low indicator for scores <= 3', () => {
			const cliIndicators = getCliComplexityIndicators();
			expect(getComplexityIndicator(1)).toBe(cliIndicators.low);
			expect(getComplexityIndicator(2)).toBe(cliIndicators.low);
			expect(getComplexityIndicator(3)).toBe(cliIndicators.low);
		});

		it('should return medium indicator for scores 4-6', () => {
			const cliIndicators = getCliComplexityIndicators();
			expect(getComplexityIndicator(4)).toBe(cliIndicators.medium);
			expect(getComplexityIndicator(5)).toBe(cliIndicators.medium);
			expect(getComplexityIndicator(6)).toBe(cliIndicators.medium);
		});

		it('should return status bar indicators when statusBar is true', () => {
			const statusBarIndicators = getStatusBarComplexityIndicators();
			expect(getComplexityIndicator(8, true)).toBe(statusBarIndicators.high);
			expect(getComplexityIndicator(5, true)).toBe(statusBarIndicators.medium);
			expect(getComplexityIndicator(2, true)).toBe(statusBarIndicators.low);
		});
	});
});
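The threshold tests fix the banding exactly: 7 and above is high, 3 and below is low, 4 through 6 is medium, with a separate character set when rendering for a status bar. A minimal sketch consistent with that; again an assumption, not the module's source:

```js
// Sketch consistent with the asserted thresholds; not the actual implementation.
function getComplexityIndicator(score, statusBar = false) {
	const indicators = statusBar
		? getStatusBarComplexityIndicators()
		: getCliComplexityIndicators();
	if (score >= 7) return indicators.high;
	if (score <= 3) return indicators.low;
	return indicators.medium; // scores 4-6
}
```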
@@ -1,56 +0,0 @@
/**
 * Tests for the stripAnsiCodes utility function
 */
import { jest } from '@jest/globals';

// Import the module under test
const { stripAnsiCodes } = await import('../../scripts/modules/utils.js');

describe('stripAnsiCodes', () => {
	test('should remove ANSI color codes from text', () => {
		const textWithColors = '\x1b[31mRed text\x1b[0m \x1b[32mGreen text\x1b[0m';
		const result = stripAnsiCodes(textWithColors);
		expect(result).toBe('Red text Green text');
	});

	test('should handle text without ANSI codes', () => {
		const plainText = 'This is plain text';
		const result = stripAnsiCodes(plainText);
		expect(result).toBe('This is plain text');
	});

	test('should handle empty string', () => {
		const result = stripAnsiCodes('');
		expect(result).toBe('');
	});

	test('should handle complex ANSI sequences', () => {
		// Test with various ANSI escape sequences
		const complexText =
			'\x1b[1;31mBold red\x1b[0m \x1b[4;32mUnderlined green\x1b[0m \x1b[33;46mYellow on cyan\x1b[0m';
		const result = stripAnsiCodes(complexText);
		expect(result).toBe('Bold red Underlined green Yellow on cyan');
	});

	test('should handle non-string input gracefully', () => {
		expect(stripAnsiCodes(null)).toBe(null);
		expect(stripAnsiCodes(undefined)).toBe(undefined);
		expect(stripAnsiCodes(123)).toBe(123);
		expect(stripAnsiCodes({})).toEqual({});
	});

	test('should handle real chalk output patterns', () => {
		// Test patterns similar to what chalk produces
		const chalkLikeText =
			'1 \x1b[32m✓ done\x1b[39m Setup Project \x1b[31m(high)\x1b[39m';
		const result = stripAnsiCodes(chalkLikeText);
		expect(result).toBe('1 ✓ done Setup Project (high)');
	});

	test('should handle multiline text with ANSI codes', () => {
		const multilineText =
			'\x1b[31mLine 1\x1b[0m\n\x1b[32mLine 2\x1b[0m\n\x1b[33mLine 3\x1b[0m';
		const result = stripAnsiCodes(multilineText);
		expect(result).toBe('Line 1\nLine 2\nLine 3');
	});
});
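Every case above is satisfied by a type guard plus a single regex over CSI color sequences. A minimal sketch, assuming a regex-based approach; the real helper in `scripts/modules/utils.js` may cover a wider range of ANSI escapes:

```js
// Minimal sketch: handles the \x1b[...m sequences the tests exercise,
// including multi-parameter forms like \x1b[1;31m and \x1b[33;46m.
function stripAnsiCodes(value) {
	if (typeof value !== 'string') return value; // non-strings pass through unchanged
	return value.replace(/\x1b\[[0-9;]*m/g, '');
}
```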