Compare commits
18 Commits
chore/add. ... feat/add.c

| Author | SHA1 | Date |
|---|---|---|
|  | 7112a1d10b |  |
|  | a9c548e8ae |  |
|  | e3f03e4ac8 |  |
|  | 00eb71dce8 |  |
|  | 0cbb0b6a9f |  |
|  | db720a954d |  |
|  | 89335578ff |  |
|  | 781b8ef2af |  |
|  | 7d564920b5 |  |
|  | 2737fbaa67 |  |
|  | 9feb8d2dbf |  |
|  | 8a991587f1 |  |
|  | 7ceba2f572 |  |
|  | 10565f07d3 |  |
|  | f27ce34fe9 |  |
|  | 71be933a8d |  |
|  | 5d94f1b471 |  |
|  | 3dee60dc3d |  |
@@ -1,27 +0,0 @@
---
"task-master-ai": minor
---

Add cross-tag task movement functionality for organizing tasks across different contexts.

This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.

## CLI Usage Examples

Move a single task from one tag to another:
```bash
# Move task 5 from backlog tag to in-progress tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1

# Move task with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

# Move task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```

Move multiple tasks at once:
```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```
@@ -1,6 +0,0 @@
---
"extension": minor
"task-master-ai": minor
---

"Add Kilo Code profile integration with custom modes and MCP configuration"
@@ -1,12 +0,0 @@
---
"task-master-ai": minor
---

Add compact mode --compact / -c flag to the `tm list` CLI command

- outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Add CLI & MCP progress tracking for parse-prd command.
@@ -1,5 +0,0 @@
---
"extension": minor
---

Display current task ID on task details page
8 .changeset/quiet-owls-fix-cli-sdk.md Normal file
@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---

fix(claude-code): prevent crash/hang when the optional `@anthropic-ai/claude-code` SDK is missing by guarding `AbortError instanceof` checks and adding explicit SDK presence checks in `doGenerate`/`doStream`. Also bump the optional dependency to `^1.0.88` for improved export consistency.

Related to JSON truncation handling in #920; this change addresses a separate error-path crash reported in #1142.
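As a rough illustration of the guard described in this changeset (a hypothetical sketch, not the actual provider source; the names `sdk`, `isSdkAbortError`, and the error message are illustrative only), the idea is to verify that the optional SDK and its `AbortError` export exist before touching them:

```js
// Hypothetical sketch of guarding against a missing optional SDK.
let sdk = null;
try {
	// Optional dependency: may not be installed at all.
	sdk = await import('@anthropic-ai/claude-code');
} catch {
	sdk = null;
}

function isSdkAbortError(error) {
	// Guarded instanceof: only evaluate when the export actually exists,
	// otherwise `error instanceof undefined` would itself throw a TypeError.
	return Boolean(sdk?.AbortError) && error instanceof sdk.AbortError;
}

async function doGenerate(options) {
	if (!sdk) {
		// Explicit presence check instead of crashing or hanging later.
		throw new Error(
			'Optional @anthropic-ai/claude-code SDK is not installed; install it to use this provider.'
		);
	}
	// ...delegate to the SDK as before...
}
```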
5 .changeset/rich-emus-say.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Temporarily disable streaming for improved model compatibility - will be re-enabled in upcoming release
@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---

Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced

The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
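To make the fix concrete, here is a hypothetical before/after sketch. The surrounding function and the `isGitRepository` helper are illustrative assumptions; only the `projectRoot` versus `context.projectRoot` distinction comes from the changeset above.

```js
// Illustrative only: the real implementation lives in the add-tag command handler.
async function createTagFromBranch(tagName, context) {
	// Before (buggy): `projectRoot` was referenced as if it were a local variable,
	// so the git check threw "projectRoot is not defined".
	// const isRepo = await isGitRepository(projectRoot);

	// After (fixed): read the project root from the context object that is
	// actually passed into the command.
	const isRepo = await isGitRepository(context.projectRoot);
	if (!isRepo) {
		throw new Error('add-tag --from-branch requires a git repository');
	}
	// ...derive the tag name from the current branch and create the tag...
}
```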
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`
14 .changeset/wet-candies-accept.md Normal file
@@ -0,0 +1,14 @@
---
"task-master-ai": minor
---

Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations

When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.

Commands contextualised:

- add-task
- update-subtask
- update-task
- update
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command
38 .claude/commands/dedupe.md Normal file
@@ -0,0 +1,38 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #1
4. Next, feed the results from #1 and #2 into another agent, so that it can filter out false positives, that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with Github, rather than web fetch
- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]

---
259 .github/scripts/auto-close-duplicates.mjs vendored Normal file
@@ -0,0 +1,259 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

async function closeIssueAsDuplicate(
	owner,
	repo,
	issueNumber,
	duplicateOfNumber,
	token
) {
	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}`,
		token,
		'PATCH',
		{
			state: 'closed',
			state_reason: 'not_planned',
			labels: ['duplicate']
		}
	);

	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
		token,
		'POST',
		{
			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
		}
	);
}

async function autoCloseDuplicates() {
	console.log('[DEBUG] Starting auto-close duplicates script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error('GITHUB_TOKEN environment variable is required');
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	console.log(`[DEBUG] Repository: ${owner}/${repo}`);

	const threeDaysAgo = new Date();
	threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
	console.log(
		`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
	);

	console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	const MAX_PAGES = 50; // Increase limit for larger repos
	let foundRecentIssue = false;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
			token
		);

		if (pageIssues.length === 0) break;

		// Filter for issues created more than 3 days ago
		const oldEnoughIssues = pageIssues.filter(
			(issue) => new Date(issue.created_at) <= threeDaysAgo
		);

		allIssues.push(...oldEnoughIssues);

		// If all issues on this page are newer than 3 days, we can stop
		if (oldEnoughIssues.length === 0 && page === 1) {
			foundRecentIssue = true;
			break;
		}

		// If we found some old issues but not all, continue to next page
		// as there might be more old issues
		page++;

		// Safety limit to avoid infinite loops
		if (page > MAX_PAGES) {
			console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
			break;
		}
	}

	const issues = allIssues;
	console.log(`[DEBUG] Found ${issues.length} open issues`);

	let processedCount = 0;
	let candidateCount = 0;

	for (const issue of issues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		const dupeComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
		);

		if (dupeComments.length === 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
			);
			continue;
		}

		const lastDupeComment = dupeComments[dupeComments.length - 1];
		const dupeCommentDate = new Date(lastDupeComment.created_at);
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
		);

		if (dupeCommentDate > threeDaysAgo) {
			console.log(
				`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
			);
			continue;
		}
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - duplicate comment is old enough (${Math.floor(
				(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
			)} days)`
		);

		const commentsAfterDupe = comments.filter(
			(comment) => new Date(comment.created_at) > dupeCommentDate
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
		);

		if (commentsAfterDupe.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
			);
			continue;
		}

		console.log(
			`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
		);
		const reactions = await githubRequest(
			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
		);

		const authorThumbsDown = reactions.some(
			(reaction) =>
				reaction.user.id === issue.user.id && reaction.content === '-1'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
		);

		if (authorThumbsDown) {
			console.log(
				`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
			);
			continue;
		}

		const duplicateIssueNumber = extractDuplicateIssueNumber(
			lastDupeComment.body
		);
		if (!duplicateIssueNumber) {
			console.log(
				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
			);
			await closeIssueAsDuplicate(
				owner,
				repo,
				issue.number,
				duplicateIssueNumber,
				token
			);
			console.log(
				`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
			);
		} catch (error) {
			console.error(
				`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
			);
		}
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
	);
}

autoCloseDuplicates().catch(console.error);
178 .github/scripts/backfill-duplicate-comments.mjs vendored Normal file
@@ -0,0 +1,178 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'backfill-duplicate-comments-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

async function triggerDedupeWorkflow(
	owner,
	repo,
	issueNumber,
	token,
	dryRun = true
) {
	if (dryRun) {
		console.log(
			`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
		);
		return;
	}

	await githubRequest(
		`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
		token,
		'POST',
		{
			ref: 'main',
			inputs: {
				issue_number: issueNumber.toString()
			}
		}
	);
}

async function backfillDuplicateComments() {
	console.log('[DEBUG] Starting backfill duplicate comments script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
DAYS_BACK - How many days back to look for old issues (default: 90)`);
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	const dryRun = process.env.DRY_RUN !== 'false';
	const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);

	console.log(`[DEBUG] Repository: ${owner}/${repo}`);
	console.log(`[DEBUG] Dry run mode: ${dryRun}`);
	console.log(`[DEBUG] Looking back ${daysBack} days`);

	const cutoffDate = new Date();
	cutoffDate.setDate(cutoffDate.getDate() - daysBack);

	console.log(
		`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
	);
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
			token
		);

		if (pageIssues.length === 0) break;

		allIssues.push(...pageIssues);
		page++;

		// Safety limit to avoid infinite loops
		if (page > 100) {
			console.log('[DEBUG] Reached page limit, stopping pagination');
			break;
		}
	}

	console.log(
		`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
	);

	let processedCount = 0;
	let candidateCount = 0;
	let triggeredCount = 0;

	for (const issue of allIssues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		// Look for existing duplicate detection comments (from the dedupe bot)
		const dupeDetectionComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);

		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
		);

		// Skip if there's already a duplicate detection comment
		if (dupeDetectionComments.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
			);
			await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);

			if (!dryRun) {
				console.log(
					`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
				);
			}
			triggeredCount++;
		} catch (error) {
			console.error(
				`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
			);
		}

		// Add a delay between workflow triggers to avoid overwhelming the system
		await new Promise((resolve) => setTimeout(resolve, 1000));
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
	);
}

backfillDuplicateComments().catch(console.error);
31 .github/workflows/auto-close-duplicates.yml vendored Normal file
@@ -0,0 +1,31 @@
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues

on:
  schedule:
    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
  workflow_dispatch:

jobs:
  auto-close-duplicates:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write # Need write permission to close issues and add comments

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Auto-close duplicate issues
        run: node .github/scripts/auto-close-duplicates.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
46 .github/workflows/backfill-duplicate-comments.yml vendored Normal file
@@ -0,0 +1,46 @@
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments

on:
  workflow_dispatch:
    inputs:
      days_back:
        description: "How many days back to look for old issues"
        required: false
        default: "90"
        type: string
      dry_run:
        description: "Dry run mode (true to only log what would be done)"
        required: false
        default: "true"
        type: choice
        options:
          - "true"
          - "false"

jobs:
  backfill-duplicate-comments:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: read
      issues: read
      actions: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Backfill duplicate comments
        run: node .github/scripts/backfill-duplicate-comments.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
          DAYS_BACK: ${{ inputs.days_back }}
          DRY_RUN: ${{ inputs.dry_run }}
81 .github/workflows/claude-dedupe-issues.yml vendored Normal file
@@ -0,0 +1,81 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code

on:
  issues:
    types: [opened]
  workflow_dispatch:
    inputs:
      issue_number:
        description: "Issue number to process for duplicate detection"
        required: true
        type: string

jobs:
  claude-dedupe-issues:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code slash command
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Log duplicate comment event to Statsig
        if: always()
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
          REPO=${{ github.repository }}

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg triggered_by "${{ github.event_name }}" \
            '{
              events: [{
                eventName: "github_duplicate_comment_added",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  triggered_by: $triggered_by,
                  workflow_run_id: "${{ github.run_id }}"
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
38 .github/workflows/claude-docs-updater.yml vendored
@@ -5,42 +5,41 @@ on:
    branches:
      - next
    paths-ignore:
      - 'apps/docs/**'
      - '*.md'
      - '.github/workflows/**'
      - "apps/docs/**"
      - "*.md"
      - ".github/workflows/**"

jobs:
  update-docs:
    # Only run if changes were merged (not direct pushes from bots)
    if: github.event.pusher.name != 'github-actions[bot]' && github.event.pusher.name != 'dependabot[bot]'
    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need previous commit for comparison
          fetch-depth: 2 # Need previous commit for comparison

      - name: Get changed files
        id: changed-files
        run: |
          echo "Changed files in this push:"
          git diff --name-only HEAD^ HEAD | tee changed_files.txt

          # Store changed files for Claude to analyze
          echo "changed_files<<EOF" >> $GITHUB_OUTPUT
          git diff --name-only HEAD^ HEAD >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

          # Get the commit message and changes summary
          echo "commit_message<<EOF" >> $GITHUB_OUTPUT
          git log -1 --pretty=%B >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

          # Get diff for documentation context
          echo "commit_diff<<EOF" >> $GITHUB_OUTPUT
          git diff HEAD^ HEAD --stat >> $GITHUB_OUTPUT
@@ -58,24 +57,27 @@ jobs:
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          timeout_minutes: "30"
          mode: "auto"
          mode: "agent"
          github_token: ${{ secrets.GITHUB_TOKEN }}
          experimental_allowed_domains: |
            .anthropic.com
            .github.com
            api.github.com
            .githubusercontent.com
            registry.npmjs.org
          prompt: |
            .task-master.dev
          base_branch: "next"
          direct_prompt: |
            You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.

            Recent changes:
            - Commit: ${{ steps.changed-files.outputs.commit_message }}
            - Changed files:
            ${{ steps.changed-files.outputs.changed_files }}

            - Changes summary:
            ${{ steps.changed-files.outputs.commit_diff }}

            Your task:
            1. Analyze the changes to understand what functionality was added, modified, or removed
            2. Check if these changes require documentation updates in apps/docs/
@@ -86,7 +88,7 @@ jobs:
            - Add new documentation pages if new features were added
            - Update the changelog or release notes if applicable
            4. If no documentation updates are needed, skip creating changes

            Guidelines:
            - Focus only on user-facing changes that need documentation
            - Keep documentation clear, concise, and helpful
@@ -94,7 +96,7 @@ jobs:
            - Maintain consistent documentation style with existing docs
            - Don't document internal implementation details unless they affect users
            - Update navigation/menu files if new pages are added

            Only make changes if the documentation truly needs updating based on the code changes.

      - name: Check if changes were made
@@ -122,7 +124,7 @@ jobs:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          git push origin ${{ steps.create-branch.outputs.branch_name }}

          # Create PR using GitHub CLI
          gh pr create \
            --title "docs: update documentation for recent changes" \
@@ -151,4 +153,4 @@ jobs:
            --base next \
            --head ${{ steps.create-branch.outputs.branch_name }} \
            --label "documentation" \
            --label "automated"
            --label "automated"
107 .github/workflows/claude-issue-triage.yml vendored Normal file
@@ -0,0 +1,107 @@
name: Claude Issue Triage
# description: Automatically triage GitHub issues using Claude Code

on:
  issues:
    types: [opened]

jobs:
  triage-issue:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Create triage prompt
        run: |
          mkdir -p /tmp/claude-prompts
          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

          Issue Information:
          - REPO: ${{ github.repository }}
          - ISSUE_NUMBER: ${{ github.event.issue.number }}

          TASK OVERVIEW:

          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

          2. Next, use the GitHub tools to get context about the issue:
          - You have access to these tools:
          - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
          - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
          - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
          - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
          - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
          - Start by using mcp__github__get_issue to get the issue details

          3. Analyze the issue content, considering:
          - The issue title and description
          - The type of issue (bug report, feature request, question, etc.)
          - Technical areas mentioned
          - Severity or priority indicators
          - User impact
          - Components affected

          4. Select appropriate labels from the available labels list provided above:
          - Choose labels that accurately reflect the issue's nature
          - Be specific but comprehensive
          - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
          - Consider platform labels (android, ios) if applicable
          - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

          5. Apply the selected labels:
          - Use mcp__github__update_issue to apply your selected labels
          - DO NOT post any comments explaining your decision
          - DO NOT communicate directly with users
          - If no labels are clearly applicable, do not apply any labels

          IMPORTANT GUIDELINES:
          - Be thorough in your analysis
          - Only select labels from the provided list above
          - DO NOT post any comments to the issue
          - Your ONLY action should be to apply labels using mcp__github__update_issue
          - It's okay to not add any labels if none are clearly applicable
          EOF

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt_file: /tmp/claude-prompts/triage-prompt.txt
          allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
          timeout_minutes: "5"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          mcp_config: /tmp/mcp-config/mcp-servers.json
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
36 .github/workflows/claude.yml vendored Normal file
@@ -0,0 +1,36 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
176 .github/workflows/log-issue-events.yml vendored Normal file
@@ -0,0 +1,176 @@
name: Log GitHub Issue Events

on:
  issues:
    types: [opened, closed]

jobs:
  log-issue-created:
    if: github.event.action == 'opened'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue creation to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          AUTHOR="${{ github.event.issue.user.login }}"
          CREATED_AT="${{ github.event.issue.created_at }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg author "$AUTHOR" \
            --arg created_at "$CREATED_AT" \
            '{
              events: [{
                eventName: "github_issue_created",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  issue_author: $author,
                  created_at: $created_at
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi

  log-issue-closed:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue closure to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          CLOSED_BY="${{ github.event.issue.closed_by.login }}"
          CLOSED_AT="${{ github.event.issue.closed_at }}"
          STATE_REASON="${{ github.event.issue.state_reason }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Get additional issue data via GitHub API
          echo "Fetching additional issue data for #${ISSUE_NUMBER}"
          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")

          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')

          # Get reactions data
          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")

          REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')

          # Check if issue was closed automatically (by checking if closed_by is a bot)
          CLOSED_AUTOMATICALLY="false"
          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
            CLOSED_AUTOMATICALLY="true"
          fi

          # Check if closed as duplicate by state_reason
          CLOSED_AS_DUPLICATE="false"
          if [ "$STATE_REASON" = "duplicate" ]; then
            CLOSED_AS_DUPLICATE="true"
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg closed_by "$CLOSED_BY" \
            --arg closed_at "$CLOSED_AT" \
            --arg state_reason "$STATE_REASON" \
            --arg comments_count "$COMMENTS_COUNT" \
            --arg reactions_count "$REACTIONS_COUNT" \
            --arg closed_automatically "$CLOSED_AUTOMATICALLY" \
            --arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
            '{
              events: [{
                eventName: "github_issue_closed",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  closed_by: $closed_by,
                  closed_at: $closed_at,
                  state_reason: $state_reason,
                  comments_count: ($comments_count | tonumber),
                  reactions_count: ($reactions_count | tonumber),
                  closed_automatically: ($closed_automatically | test("true")),
                  closed_as_duplicate: ($closed_as_duplicate | test("true"))
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
            echo "Closed by: $CLOSED_BY"
            echo "Comments: $COMMENTS_COUNT"
            echo "Reactions: $REACTIONS_COUNT"
            echo "Closed automatically: $CLOSED_AUTOMATICALLY"
            echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
          else
            echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
96 .github/workflows/weekly-metrics-discord.yml vendored Normal file
@@ -0,0 +1,96 @@
name: Weekly Metrics to Discord
# description: Sends weekly metrics summary to Discord channel

on:
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9 AM
  workflow_dispatch:

permissions:
  contents: read
  issues: write
  pull-requests: read

jobs:
  weekly-metrics:
    runs-on: ubuntu-latest
    env:
      DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
    steps:
      - name: Get dates for last week
        run: |
          # Last 7 days
          first_day=$(date -d "7 days ago" +%Y-%m-%d)
          last_day=$(date +%Y-%m-%d)

          echo "first_day=$first_day" >> $GITHUB_ENV
          echo "last_day=$last_day" >> $GITHUB_ENV
          echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV

      - name: Generate issue metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
          HIDE_TIME_TO_ANSWER: true
          HIDE_LABEL_METRICS: false

      - name: Generate PR metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_metrics.md

      - name: Parse metrics
        id: metrics
        run: |
          # Parse the metrics from the generated markdown files
          if [ -f "issue_metrics.md" ]; then
            # Extract key metrics using grep/awk
            AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
            AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
            NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
            NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
          fi

          if [ -f "pr_metrics.md" ]; then
            PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
            NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
            NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
          fi

          # Set outputs for Discord action
          echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
          echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
          echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
          echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
          echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
          echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
          echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT

      - name: Send to Discord
        uses: sarisia/actions-status-discord@v1
        if: env.DISCORD_WEBHOOK != ''
        with:
          webhook: ${{ env.DISCORD_WEBHOOK }}
          status: Success
          title: "📊 Weekly Metrics Report"
          description: |
            **${{ env.week_of }}**

            **🎯 Issues**
            • Created: ${{ steps.metrics.outputs.issues_created }}
            • Closed: ${{ steps.metrics.outputs.issues_closed }}

            **🔀 Pull Requests**
            • Created: ${{ steps.metrics.outputs.prs_created }}
            • Merged: ${{ steps.metrics.outputs.prs_merged }}

            **⏱️ Response Times**
            • First Response: ${{ steps.metrics.outputs.avg_first_response }}
            • Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
            • PR Merge Time: ${{ steps.metrics.outputs.pr_avg_merge_time }}
          color: 0x58AFFF
          username: Task Master Metrics Bot
          avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
108
CHANGELOG.md
108
CHANGELOG.md
@@ -1,5 +1,113 @@
|
||||
# task-master-ai
|
||||
|
||||
## 0.25.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.
|
||||
|
||||
This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.
|
||||
|
||||
## CLI Usage Examples
|
||||
|
||||
Move a single task from one tag to another:
|
||||
|
||||
```bash
|
||||
# Move task 5 from backlog tag to in-progress tag
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-1
|
||||
|
||||
# Move task with its dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies
|
||||
|
||||
# Move task without checking dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
|
||||
```
|
||||
|
||||
Move multiple tasks at once:
|
||||
|
||||
```bash
|
||||
# Move multiple tasks between tags
|
||||
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
|
||||
```
|
||||
|
||||
- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"
|
||||
|
||||
- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
|
||||
- outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
|
||||
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
|
||||
- Color-coded status, priority, and dependencies
|
||||
- Smart title truncation and dependency abbreviation
|
||||
- Subtask support with indentation
|
||||
- Full backward compatibility with existing list options
|
||||
|
||||
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.
|
||||
|
||||
- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`
|
||||
|
||||
- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced
|
||||
|
||||
The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
|
||||
|
||||
## 0.25.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.
|
||||
|
||||
This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.
|
||||
|
||||
## CLI Usage Examples
|
||||
|
||||
Move a single task from one tag to another:
|
||||
|
||||
```bash
|
||||
# Move task 5 from backlog tag to in-progress tag
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-1
|
||||
|
||||
# Move task with its dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies
|
||||
|
||||
# Move task without checking dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
|
||||
```
|
||||
|
||||
Move multiple tasks at once:
|
||||
|
||||
```bash
|
||||
# Move multiple tasks between tags
|
||||
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
|
||||
```
|
||||
|
||||
- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"
|
||||
|
||||
- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
|
||||
- outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
|
||||
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
|
||||
- Color-coded status, priority, and dependencies
|
||||
- Smart title truncation and dependency abbreviation
|
||||
- Subtask support with indentation
|
||||
- Full backward compatibility with existing list options
|
||||
|
||||
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.
|
||||
|
||||
- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`
|
||||
|
||||
- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced
|
||||
|
||||
The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
|
||||
|
||||
## 0.24.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
@@ -56,7 +56,7 @@ The following documentation is also available in the `docs` directory:

#### Quick Install for Cursor 1.0+ (One-Click)

[](https://cursor.com/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
[](https://cursor.com/en/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
3
apps/docs/CHANGELOG.md
Normal file
@@ -0,0 +1,3 @@
# docs

## 0.0.1
@@ -1,6 +1,6 @@
{
	"name": "docs",
	"version": "0.0.0",
	"version": "0.0.1",
	"private": true,
	"description": "Task Master documentation powered by Mintlify",
	"scripts": {
@@ -1,5 +1,29 @@
# Change Log

## 0.24.0

### Minor Changes

- [#1100](https://github.com/eyaltoledano/claude-task-master/pull/1100) [`30ca144`](https://github.com/eyaltoledano/claude-task-master/commit/30ca144231c36a6c63911f20adc225d38fb15a2f) Thanks [@vedovelli](https://github.com/vedovelli)! - Display current task ID on task details page

### Patch Changes

- Updated dependencies [[`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1), [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573), [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e), [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d), [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397), [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5)]:
  - task-master-ai@0.25.0

## 0.24.0-rc.0

### Minor Changes

- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"
- [#1100](https://github.com/eyaltoledano/claude-task-master/pull/1100) [`30ca144`](https://github.com/eyaltoledano/claude-task-master/commit/30ca144231c36a6c63911f20adc225d38fb15a2f) Thanks [@vedovelli](https://github.com/vedovelli)! - Display current task ID on task details page

### Patch Changes

- Updated dependencies [[`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1), [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573), [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e), [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d), [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8), [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397), [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5)]:
  - task-master-ai@0.25.0-rc.0

## 0.23.1

### Patch Changes

@@ -3,7 +3,7 @@
	"private": true,
	"displayName": "TaskMaster",
	"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
	"version": "0.23.1",
	"version": "0.24.0",
	"publisher": "Hamster",
	"icon": "assets/icon.png",
	"engines": {
@@ -239,7 +239,7 @@
		"check-types": "tsc --noEmit"
	},
	"dependencies": {
		"task-master-ai": "0.24.0"
		"task-master-ai": "0.25.0"
	},
	"devDependencies": {
		"@dnd-kit/core": "^6.3.1",
19
package-lock.json
generated
@@ -1,12 +1,12 @@
{
	"name": "task-master-ai",
	"version": "0.24.0",
	"version": "0.25.0",
	"lockfileVersion": 3,
	"requires": true,
	"packages": {
		"": {
			"name": "task-master-ai",
			"version": "0.24.0",
			"version": "0.25.0",
			"license": "MIT WITH Commons-Clause",
			"workspaces": [
				"apps/*",
@@ -81,21 +81,21 @@
				"node": ">=18.0.0"
			},
			"optionalDependencies": {
				"@anthropic-ai/claude-code": "^1.0.25",
				"@anthropic-ai/claude-code": "^1.0.88",
				"@biomejs/cli-linux-x64": "^1.9.4",
				"ai-sdk-provider-gemini-cli": "^0.1.1"
			}
		},
		"apps/docs": {
			"version": "0.0.0",
			"version": "0.0.1",
			"devDependencies": {
				"mintlify": "^4.0.0"
			}
		},
		"apps/extension": {
			"version": "0.23.1",
			"version": "0.24.0",
			"dependencies": {
				"task-master-ai": "0.24.0"
				"task-master-ai": "0.25.0"
			},
			"devDependencies": {
				"@dnd-kit/core": "^6.3.1",
@@ -2046,10 +2046,9 @@
			}
		},
		"node_modules/@anthropic-ai/claude-code": {
			"version": "1.0.34",
			"resolved": "https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-1.0.34.tgz",
			"integrity": "sha512-9mQd8hodE5/RxZnsWUCdLzqGUKuCzBczrfc2QfxrNSlvUFpOgTzjT1Zlww2vW9v0K1e5K9g1o08apqPl/QPmpw==",
			"hasInstallScript": true,
			"version": "1.0.88",
			"resolved": "https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-1.0.88.tgz",
			"integrity": "sha512-Np6H4EjkbmNolUpx98DvqLXV/iJrw2y7dz2rDJ7av9ajMz6HZfB8bdJV5D75+jO+Gk1pvA54HCIm0c65lDrzcw==",
			"license": "SEE LICENSE IN README.md",
			"optional": true,
			"bin": {
@@ -1,6 +1,6 @@
{
	"name": "task-master-ai",
	"version": "0.24.0",
	"version": "0.25.0",
	"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
	"main": "index.js",
	"type": "module",
@@ -86,7 +86,7 @@
		"zod-to-json-schema": "^3.24.5"
	},
	"optionalDependencies": {
		"@anthropic-ai/claude-code": "^1.0.25",
		"@anthropic-ai/claude-code": "^1.0.88",
		"@biomejs/cli-linux-x64": "^1.9.4",
		"ai-sdk-provider-gemini-cli": "^0.1.1"
	},
@@ -427,6 +427,19 @@ function getResearchProvider(explicitRoot = null) {
	return getModelConfigForRole('research', explicitRoot).provider;
}

/**
 * Check if Claude Code is being used as the provider
 * @param {boolean} useResearch - Whether to check research provider or main provider
 * @param {string|null} projectRoot - Project root path (optional)
 * @returns {boolean} True if Claude Code is the current provider
 */
function isClaudeCode(useResearch = false, projectRoot = null) {
	const currentProvider = useResearch
		? getResearchProvider(projectRoot)
		: getMainProvider(projectRoot);
	return currentProvider === CUSTOM_PROVIDERS.CLAUDE_CODE;
}

function getResearchModelId(explicitRoot = null) {
	return getModelConfigForRole('research', explicitRoot).modelId;
}
@@ -983,6 +996,7 @@ export {
	getResearchModelId,
	getResearchMaxTokens,
	getResearchTemperature,
	isClaudeCode,
	getFallbackProvider,
	getFallbackModelId,
	getFallbackMaxTokens,
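Taken together, these hunks add a single helper that the task modules query before building prompts. A minimal usage sketch follows (the import path and `isClaudeCode` behavior come from the diff above; the surrounding module and parameter names are illustrative):

```js
// Sketch of how a task module might consume the new helper; not real source.
import { isClaudeCode } from '../config-manager.js';

function buildPromptParams({ useResearch, projectRoot, gatheredContext }) {
	return {
		gatheredContext: gatheredContext || '',
		useResearch,
		// Lets prompt templates branch on {{#if isClaudeCode}} blocks.
		isClaudeCode: isClaudeCode(useResearch, projectRoot),
		projectRoot: projectRoot || ''
	};
}
```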
@@ -25,7 +25,7 @@ import {
	markMigrationForNotice
} from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
import { getDefaultPriority, isClaudeCode } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import ContextGatherer from '../utils/contextGatherer.js';
import generateTaskFiles from './generate-task-files.js';
@@ -425,7 +425,9 @@ async function addTask(
			contextFromArgs,
			useResearch,
			priority: effectivePriority,
			dependencies: numericDependencies
			dependencies: numericDependencies,
			isClaudeCode: isClaudeCode(useResearch, projectRoot),
			projectRoot: projectRoot
		}
	);
@@ -17,7 +17,8 @@ import {
	getDebugFlag,
	getProjectName,
	getMainProvider,
	getResearchProvider
	getResearchProvider,
	isClaudeCode
} from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import {
@@ -415,16 +416,12 @@ async function analyzeTaskComplexity(options, context = {}) {
	const promptManager = getPromptManager();

	// Check if Claude Code is being used as the provider
	const currentProvider = useResearch
		? getResearchProvider(projectRoot)
		: getMainProvider(projectRoot);
	const isClaudeCode = currentProvider === CUSTOM_PROVIDERS.CLAUDE_CODE;

	const promptParams = {
		tasks: tasksData.tasks,
		gatheredContext: gatheredContext || '',
		useResearch: useResearch,
		isClaudeCode: isClaudeCode,
		isClaudeCode: isClaudeCode(useResearch, projectRoot),
		projectRoot: projectRoot || ''
	};
@@ -63,8 +63,15 @@ export class PrdParseConfig {
		this.targetTag = this.tag || getCurrentTag(this.projectRoot) || 'master';
		this.isMCP = !!this.mcpLog;
		this.outputFormat = this.isMCP && !this.reportProgress ? 'json' : 'text';

		// Feature flag: Temporarily disable streaming, use generateObject instead
		// TODO: Re-enable streaming once issues are resolved
		const ENABLE_STREAMING = false;

		this.useStreaming =
			typeof this.reportProgress === 'function' || this.outputFormat === 'text';
			ENABLE_STREAMING &&
			(typeof this.reportProgress === 'function' ||
				this.outputFormat === 'text');
	}

	/**
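In other words, the hunk above turns the streaming decision into a flag-gated expression. A small standalone sketch of the resulting logic (identifier names taken from the diff; the wrapper function is illustrative, not real source):

```js
// Illustrative reduction of the gating logic added above.
const ENABLE_STREAMING = false; // temporarily disabled for model compatibility

function shouldStream({ reportProgress, outputFormat }) {
	// With the flag off, every caller falls back to the non-streaming path,
	// even CLI text mode and MCP callers that pass a reportProgress callback.
	return (
		ENABLE_STREAMING &&
		(typeof reportProgress === 'function' || outputFormat === 'text')
	);
}

console.log(shouldStream({ reportProgress: () => {}, outputFormat: 'text' })); // false
```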
@@ -20,7 +20,7 @@ import {
	flattenTasksWithSubtasks
} from '../utils.js';
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
import { getDebugFlag, isClaudeCode } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { ContextGatherer } from '../utils/contextGatherer.js';
@@ -231,7 +231,9 @@ async function updateSubtaskById(
		currentDetails: subtask.details || '(No existing details)',
		updatePrompt: prompt,
		useResearch: useResearch,
		gatheredContext: gatheredContext || ''
		gatheredContext: gatheredContext || '',
		isClaudeCode: isClaudeCode(useResearch, projectRoot),
		projectRoot: projectRoot
	};

	const variantKey = useResearch ? 'research' : 'default';
@@ -23,7 +23,7 @@ import {
} from '../ui.js';

import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag, isApiKeySet } from '../config-manager.js';
import { getDebugFlag, isApiKeySet, isClaudeCode } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
@@ -453,7 +453,9 @@ async function updateTaskById(
		appendMode: appendMode,
		useResearch: useResearch,
		currentDetails: taskToUpdate.details || '(No existing details)',
		gatheredContext: gatheredContext || ''
		gatheredContext: gatheredContext || '',
		isClaudeCode: isClaudeCode(useResearch, projectRoot),
		projectRoot: projectRoot
	};

	const variantKey = appendMode
@@ -19,7 +19,7 @@ import {
	displayAiUsageSummary
} from '../ui.js';

import { getDebugFlag } from '../config-manager.js';
import { getDebugFlag, isClaudeCode } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { generateTextService } from '../ai-services-unified.js';
@@ -435,7 +435,9 @@ async function updateTasks(
			tasks: tasksToUpdate,
			updatePrompt: prompt,
			useResearch,
			projectContext: gatheredContext
			projectContext: gatheredContext,
			isClaudeCode: isClaudeCode(useResearch, projectRoot),
			projectRoot: projectRoot
		}
	);
	// --- End Build Prompts ---
@@ -169,6 +169,11 @@ export class ClaudeCodeLanguageModel {
		const warnings = this.generateUnsupportedWarnings(options);

		try {
			if (!query) {
				throw new Error(
					"Claude Code SDK is not installed. Please install '@anthropic-ai/claude-code' to use the claude-code provider."
				);
			}
			const response = query({
				prompt: messagesPrompt,
				options: queryOptions
@@ -227,7 +232,7 @@ export class ClaudeCodeLanguageModel {
				finishReason = 'truncated';
				// Skip re-throwing: fall through so the caller receives usable data
			} else {
				if (error instanceof AbortError) {
				if (AbortError && error instanceof AbortError) {
					throw options.abortSignal?.aborted
						? options.abortSignal.reason
						: error;
@@ -335,6 +340,11 @@ export class ClaudeCodeLanguageModel {
		const stream = new ReadableStream({
			start: async (controller) => {
				try {
					if (!query) {
						throw new Error(
							"Claude Code SDK is not installed. Please install '@anthropic-ai/claude-code' to use the claude-code provider."
						);
					}
					const response = query({
						prompt: messagesPrompt,
						options: queryOptions
@@ -478,7 +488,7 @@ export class ClaudeCodeLanguageModel {
				} catch (error) {
					let errorToEmit;

					if (error instanceof AbortError) {
					if (AbortError && error instanceof AbortError) {
						errorToEmit = options.abortSignal?.aborted
							? options.abortSignal.reason
							: error;
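The pattern behind these hunks is the usual defensive handling of an optional dependency: resolve the SDK lazily, and never touch its exports (including `AbortError` used in `instanceof` checks) unless the import actually succeeded. A simplified, self-contained sketch of the idea follows; it is not the provider's real module, which loads `@anthropic-ai/claude-code` and its `query`/`AbortError` exports through its own plumbing.

```js
// Simplified sketch of guarding an optional SDK; not the actual provider code.
let query = null;
let AbortError = null;

try {
	// Dynamic import so a missing optional dependency does not crash module load.
	({ query, AbortError } = await import('@anthropic-ai/claude-code'));
} catch {
	// SDK not installed; leave both as null and fail lazily with a clear message.
}

async function doGenerate(prompt, options = {}) {
	if (!query) {
		throw new Error(
			"Claude Code SDK is not installed. Please install '@anthropic-ai/claude-code' to use the claude-code provider."
		);
	}
	try {
		return await query({ prompt, options });
	} catch (error) {
		// Guard the instanceof check: AbortError is null when the SDK is absent.
		if (AbortError && error instanceof AbortError) {
			throw options.abortSignal?.aborted ? options.abortSignal.reason : error;
		}
		throw error;
	}
}
```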
@@ -45,12 +45,24 @@
			"type": "boolean",
			"default": false,
			"description": "Use research mode"
		},
		"isClaudeCode": {
			"type": "boolean",
			"required": false,
			"default": false,
			"description": "Whether Claude Code is being used as the provider"
		},
		"projectRoot": {
			"type": "string",
			"required": false,
			"default": "",
			"description": "Project root path for context"
		}
	},
	"prompts": {
		"default": {
"system": "You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema. Pay special attention to dependencies between tasks, ensuring the new task correctly references any tasks it depends on.\n\nWhen determining dependencies for a new task, follow these principles:\n1. Select dependencies based on logical requirements - what must be completed before this task can begin.\n2. Prioritize task dependencies that are semantically related to the functionality being built.\n3. Consider both direct dependencies (immediately prerequisite) and indirect dependencies.\n4. Avoid adding unnecessary dependencies - only include tasks that are genuinely prerequisite.\n5. Consider the current status of tasks - prefer completed tasks as dependencies when possible.\n6. Pay special attention to foundation tasks (1-5) but don't automatically include them without reason.\n7. Recent tasks (higher ID numbers) may be more relevant for newer functionality.\n\nThe dependencies array should contain task IDs (numbers) of prerequisite tasks.{{#if useResearch}}\n\nResearch current best practices and technologies relevant to this task.{{/if}}",
"user": "You are generating the details for Task #{{newTaskId}}. Based on the user's request: \"{{prompt}}\", create a comprehensive new task for a software development project.\n \n {{gatheredContext}}\n \n {{#if useResearch}}Research current best practices, technologies, and implementation patterns relevant to this task. {{/if}}Based on the information about existing tasks provided above, include appropriate dependencies in the \"dependencies\" array. Only include task IDs that this new task directly depends on.\n \n Return your answer as a single JSON object matching the schema precisely:\n \n {\n \"title\": \"Task title goes here\",\n \"description\": \"A concise one or two sentence description of what the task involves\",\n \"details\": \"Detailed implementation steps, considerations, code examples, or technical approach\",\n \"testStrategy\": \"Specific steps to verify correct implementation and functionality\",\n \"dependencies\": [1, 3] // Example: IDs of tasks that must be completed before this task\n }\n \n Make sure the details and test strategy are comprehensive and specific{{#if useResearch}}, incorporating current best practices from your research{{/if}}. DO NOT include the task ID in the title.\n {{#if contextFromArgs}}{{contextFromArgs}}{{/if}}"
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating the task:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine key files like package.json, main entry points, and relevant source files\n4. Analyze the current implementation to understand what already exists\n\nBased on your analysis:\n- Identify existing components/features that relate to this new task\n- Understand the technology stack, frameworks, and patterns in use\n- Generate implementation details that align with the project's current architecture\n- Reference specific files, functions, or patterns from the codebase in your details\n\nProject Root: {{projectRoot}}\n\n{{/if}}You are generating the details for Task #{{newTaskId}}. Based on the user's request: \"{{prompt}}\", create a comprehensive new task for a software development project.\n \n {{gatheredContext}}\n \n {{#if useResearch}}Research current best practices, technologies, and implementation patterns relevant to this task. {{/if}}Based on the information about existing tasks provided above, include appropriate dependencies in the \"dependencies\" array. Only include task IDs that this new task directly depends on.\n \n Return your answer as a single JSON object matching the schema precisely:\n \n {\n \"title\": \"Task title goes here\",\n \"description\": \"A concise one or two sentence description of what the task involves\",\n \"details\": \"Detailed implementation steps, considerations, code examples, or technical approach\",\n \"testStrategy\": \"Specific steps to verify correct implementation and functionality\",\n \"dependencies\": [1, 3] // Example: IDs of tasks that must be completed before this task\n }\n \n Make sure the details and test strategy are comprehensive and specific{{#if useResearch}}, incorporating current best practices from your research{{/if}}. DO NOT include the task ID in the title.\n {{#if contextFromArgs}}{{contextFromArgs}}{{/if}}"
		}
	}
}
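These prompt templates branch on the new `isClaudeCode` and `projectRoot` parameters with Handlebars-style `{{#if ...}}` blocks. A rough sketch of how such a conditional renders, using the `handlebars` package purely as a stand-in (Task Master's prompt manager may use a different template engine):

```js
// Rough sketch of the conditional-prompt idea; not the real prompt manager.
import Handlebars from 'handlebars';

const template = Handlebars.compile(
	'{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n' +
		'Project Root: {{projectRoot}}\n\n{{/if}}' +
		'Create a task for: "{{prompt}}"'
);

// With isClaudeCode=true the analysis preamble is included; otherwise it is omitted.
console.log(
	template({ isClaudeCode: true, projectRoot: '/repo', prompt: 'add login page' })
);
```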
@@ -44,12 +44,24 @@
			"type": "string",
			"default": "",
			"description": "Additional project context"
		},
		"isClaudeCode": {
			"type": "boolean",
			"required": false,
			"default": false,
			"description": "Whether Claude Code is being used as the provider"
		},
		"projectRoot": {
			"type": "string",
			"required": false,
			"default": "",
			"description": "Project root path for context"
		}
	},
	"prompts": {
		"default": {
"system": "You are an AI assistant helping to update a subtask. You will be provided with the subtask's existing details, context about its parent and sibling tasks, and a user request string.{{#if useResearch}} You have access to current best practices and latest technical information to provide research-backed updates.{{/if}}\n\nYour Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the subtask's details.\nFocus *only* on generating the substance of the update.\n\nOutput Requirements:\n1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.\n2. Your string response should NOT include any of the subtask's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.\n3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.\n4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with \"Okay, here's the update...\").{{#if useResearch}}\n5. Include specific libraries, versions, and current best practices relevant to the subtask implementation.\n6. Provide research-backed technical recommendations and proven approaches.{{/if}}",
"user": "Task Context:\n\nParent Task: {{{json parentTask}}}\n{{#if prevSubtask}}Previous Subtask: {{{json prevSubtask}}}\n{{/if}}{{#if nextSubtask}}Next Subtask: {{{json nextSubtask}}}\n{{/if}}Current Subtask Details (for context only):\n{{currentDetails}}\n\nUser Request: \"{{updatePrompt}}\"\n\n{{#if useResearch}}Research and incorporate current best practices, latest stable versions, and proven approaches into your update. {{/if}}Based on the User Request and all the Task Context (including current subtask details provided above), what is the new information or text that should be appended to this subtask's details? Return ONLY this new text as a plain string.{{#if useResearch}} Include specific technical recommendations based on current industry standards.{{/if}}\n{{#if gatheredContext}}\n\n# Additional Project Context\n\n{{gatheredContext}}\n{{/if}}"
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating the subtask update:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine relevant files and understand current implementation\n4. Analyze the current codebase to inform your subtask update\n\nBased on your analysis:\n- Include specific file references, code patterns, or implementation details\n- Ensure suggestions align with the project's current architecture\n- Reference existing components or patterns when relevant\n- Make implementation notes specific to the codebase structure\n\nProject Root: {{projectRoot}}\n\n{{/if}}Task Context:\n\nParent Task: {{{json parentTask}}}\n{{#if prevSubtask}}Previous Subtask: {{{json prevSubtask}}}\n{{/if}}{{#if nextSubtask}}Next Subtask: {{{json nextSubtask}}}\n{{/if}}Current Subtask Details (for context only):\n{{currentDetails}}\n\nUser Request: \"{{updatePrompt}}\"\n\n{{#if useResearch}}Research and incorporate current best practices, latest stable versions, and proven approaches into your update. {{/if}}Based on the User Request and all the Task Context (including current subtask details provided above), what is the new information or text that should be appended to this subtask's details? Return ONLY this new text as a plain string.{{#if useResearch}} Include specific technical recommendations based on current industry standards.{{/if}}\n{{#if gatheredContext}}\n\n# Additional Project Context\n\n{{gatheredContext}}\n{{/if}}"
		}
	}
}
@@ -43,17 +43,29 @@
			"type": "string",
			"default": "",
			"description": "Additional project context"
		},
		"isClaudeCode": {
			"type": "boolean",
			"required": false,
			"default": false,
			"description": "Whether Claude Code is being used as the provider"
		},
		"projectRoot": {
			"type": "string",
			"required": false,
			"default": "",
			"description": "Project root path for context"
		}
	},
	"prompts": {
		"default": {
"system": "You are an AI assistant helping to update a software development task based on new context.{{#if useResearch}} You have access to current best practices and latest technical information to provide research-backed updates.{{/if}}\nYou will be given a task and a prompt describing changes or new implementation details.\nYour job is to update the task to reflect these changes, while preserving its basic structure.\n\nGuidelines:\n1. VERY IMPORTANT: NEVER change the title of the task - keep it exactly as is\n2. Maintain the same ID, status, and dependencies unless specifically mentioned in the prompt{{#if useResearch}}\n3. Research and update the description, details, and test strategy with current best practices\n4. Include specific versions, libraries, and approaches that are current and well-tested{{/if}}{{#if (not useResearch)}}\n3. Update the description, details, and test strategy to reflect the new information\n4. Do not change anything unnecessarily - just adapt what needs to change based on the prompt{{/if}}\n5. Return a complete valid JSON object representing the updated task\n6. VERY IMPORTANT: Preserve all subtasks marked as \"done\" or \"completed\" - do not modify their content\n7. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything\n8. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly\n9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced\n10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted\n11. Ensure any new subtasks have unique IDs that don't conflict with existing ones\n12. CRITICAL: For subtask IDs, use ONLY numeric values (1, 2, 3, etc.) NOT strings (\"1\", \"2\", \"3\")\n13. CRITICAL: Subtask IDs should start from 1 and increment sequentially (1, 2, 3...) - do NOT use parent task ID as prefix{{#if useResearch}}\n14. Include links to documentation or resources where helpful\n15. Focus on practical, implementable solutions using current technologies{{/if}}\n\nThe changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.",
"user": "Here is the task to update{{#if useResearch}} with research-backed information{{/if}}:\n{{{taskJson}}}\n\nPlease {{#if useResearch}}research and {{/if}}update this task based on the following {{#if useResearch}}context:\n{{updatePrompt}}\n\nIncorporate current best practices, latest stable versions, and proven approaches.{{/if}}{{#if (not useResearch)}}new context:\n{{updatePrompt}}{{/if}}\n\nIMPORTANT: {{#if useResearch}}Preserve any subtasks marked as \"done\" or \"completed\".{{/if}}{{#if (not useResearch)}}In the task JSON above, any subtasks with \"status\": \"done\" or \"status\": \"completed\" should be preserved exactly as is. Build your changes around these completed items.{{/if}}\n{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}\n{{/if}}\n\nReturn only the updated task as a valid JSON object{{#if useResearch}} with research-backed improvements{{/if}}."
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before updating the task:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine relevant files and understand current implementation\n4. Analyze how the task changes relate to the existing codebase\n\nBased on your analysis:\n- Update task details to reference specific files, functions, or patterns from the codebase\n- Ensure implementation details align with the project's current architecture\n- Include specific code examples or file references where appropriate\n- Consider how changes impact existing components\n\nProject Root: {{projectRoot}}\n\n{{/if}}Here is the task to update{{#if useResearch}} with research-backed information{{/if}}:\n{{{taskJson}}}\n\nPlease {{#if useResearch}}research and {{/if}}update this task based on the following {{#if useResearch}}context:\n{{updatePrompt}}\n\nIncorporate current best practices, latest stable versions, and proven approaches.{{/if}}{{#if (not useResearch)}}new context:\n{{updatePrompt}}{{/if}}\n\nIMPORTANT: {{#if useResearch}}Preserve any subtasks marked as \"done\" or \"completed\".{{/if}}{{#if (not useResearch)}}In the task JSON above, any subtasks with \"status\": \"done\" or \"status\": \"completed\" should be preserved exactly as is. Build your changes around these completed items.{{/if}}\n{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}\n{{/if}}\n\nReturn only the updated task as a valid JSON object{{#if useResearch}} with research-backed improvements{{/if}}."
		},
		"append": {
			"condition": "appendMode === true",
"system": "You are an AI assistant helping to append additional information to a software development task. You will be provided with the task's existing details, context, and a user request string.\n\nYour Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the task's details.\nFocus *only* on generating the substance of the update.\n\nOutput Requirements:\n1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.\n2. Your string response should NOT include any of the task's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.\n3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.\n4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with \"Okay, here's the update...\").",
"user": "Task Context:\n\nTask: {{{json task}}}\nCurrent Task Details (for context only):\n{{currentDetails}}\n\nUser Request: \"{{updatePrompt}}\"\n\nBased on the User Request and all the Task Context (including current task details provided above), what is the new information or text that should be appended to this task's details? Return ONLY this new text as a plain string.\n{{#if gatheredContext}}\n\n# Additional Project Context\n\n{{gatheredContext}}\n{{/if}}"
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating the task update:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine relevant files and understand current implementation\n4. Analyze the current codebase to inform your update\n\nBased on your analysis:\n- Include specific file references, code patterns, or implementation details\n- Ensure suggestions align with the project's current architecture\n- Reference existing components or patterns when relevant\n\nProject Root: {{projectRoot}}\n\n{{/if}}Task Context:\n\nTask: {{{json task}}}\nCurrent Task Details (for context only):\n{{currentDetails}}\n\nUser Request: \"{{updatePrompt}}\"\n\nBased on the User Request and all the Task Context (including current task details provided above), what is the new information or text that should be appended to this task's details? Return ONLY this new text as a plain string.\n{{#if gatheredContext}}\n\n# Additional Project Context\n\n{{gatheredContext}}\n{{/if}}"
		}
	}
}
@@ -27,12 +27,24 @@
		"projectContext": {
			"type": "string",
			"description": "Additional project context"
		},
		"isClaudeCode": {
			"type": "boolean",
			"required": false,
			"default": false,
			"description": "Whether Claude Code is being used as the provider"
		},
		"projectRoot": {
			"type": "string",
			"required": false,
			"default": "",
			"description": "Project root path for context"
		}
	},
	"prompts": {
		"default": {
"system": "You are an AI assistant helping to update software development tasks based on new context.\nYou will be given a set of tasks and a prompt describing changes or new implementation details.\nYour job is to update the tasks to reflect these changes, while preserving their basic structure.\n\nCRITICAL RULES:\n1. Return ONLY a JSON array - no explanations, no markdown, no additional text before or after\n2. Each task MUST have ALL fields from the original (do not omit any fields)\n3. Maintain the same IDs, statuses, and dependencies unless specifically mentioned in the prompt\n4. Update titles, descriptions, details, and test strategies to reflect the new information\n5. Do not change anything unnecessarily - just adapt what needs to change based on the prompt\n6. You should return ALL the tasks in order, not just the modified ones\n7. Return a complete valid JSON array with all tasks\n8. VERY IMPORTANT: Preserve all subtasks marked as \"done\" or \"completed\" - do not modify their content\n9. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything\n10. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly\n11. Instead, add a new subtask that clearly indicates what needs to be changed or replaced\n12. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted\n\nThe changes described in the prompt should be applied to ALL tasks in the list.",
"user": "Here are the tasks to update:\n{{{json tasks}}}\n\nPlease update these tasks based on the following new context:\n{{updatePrompt}}\n\nIMPORTANT: In the tasks JSON above, any subtasks with \"status\": \"done\" or \"status\": \"completed\" should be preserved exactly as is. Build your changes around these completed items.{{#if projectContext}}\n\n# Project Context\n\n{{projectContext}}{{/if}}\n\nRequired JSON structure for EACH task (ALL fields MUST be present):\n{\n \"id\": <number>,\n \"title\": <string>,\n \"description\": <string>,\n \"status\": <string>,\n \"dependencies\": <array>,\n \"priority\": <string or null>,\n \"details\": <string or null>,\n \"testStrategy\": <string or null>,\n \"subtasks\": <array or null>\n}\n\nReturn a valid JSON array containing ALL the tasks with ALL their fields:\n- id (number) - preserve existing value\n- title (string)\n- description (string)\n- status (string) - preserve existing value unless explicitly changing\n- dependencies (array) - preserve existing value unless explicitly changing\n- priority (string or null)\n- details (string or null)\n- testStrategy (string or null)\n- subtasks (array or null)\n\nReturn ONLY the JSON array now:"
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before updating tasks:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine relevant files and understand current implementation\n4. Analyze how the new changes relate to the existing codebase\n\nBased on your analysis:\n- Update task details to reference specific files, functions, or patterns from the codebase\n- Ensure implementation details align with the project's current architecture\n- Include specific code examples or file references where appropriate\n- Consider how changes impact existing components\n\nProject Root: {{projectRoot}}\n\n{{/if}}Here are the tasks to update:\n{{{json tasks}}}\n\nPlease update these tasks based on the following new context:\n{{updatePrompt}}\n\nIMPORTANT: In the tasks JSON above, any subtasks with \"status\": \"done\" or \"status\": \"completed\" should be preserved exactly as is. Build your changes around these completed items.{{#if projectContext}}\n\n# Project Context\n\n{{projectContext}}{{/if}}\n\nRequired JSON structure for EACH task (ALL fields MUST be present):\n{\n \"id\": <number>,\n \"title\": <string>,\n \"description\": <string>,\n \"status\": <string>,\n \"dependencies\": <array>,\n \"priority\": <string or null>,\n \"details\": <string or null>,\n \"testStrategy\": <string or null>,\n \"subtasks\": <array or null>\n}\n\nReturn a valid JSON array containing ALL the tasks with ALL their fields:\n- id (number) - preserve existing value\n- title (string)\n- description (string)\n- status (string) - preserve existing value unless explicitly changing\n- dependencies (array) - preserve existing value unless explicitly changing\n- priority (string or null)\n- details (string or null)\n- testStrategy (string or null)\n- subtasks (array or null)\n\nReturn ONLY the JSON array now:"
		}
	}
}
@@ -99,7 +99,8 @@ jest.unstable_mockModule(
jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => ({
		getDefaultPriority: jest.fn(() => 'medium')
		getDefaultPriority: jest.fn(() => 'medium'),
		isClaudeCode: jest.fn(() => false)
	})
);
@@ -187,7 +187,8 @@ jest.unstable_mockModule(
		// Additional functions
		getAllProviders: jest.fn(() => ['anthropic', 'openai', 'perplexity']),
		getVertexProjectId: jest.fn(() => undefined),
		getVertexLocation: jest.fn(() => undefined)
		getVertexLocation: jest.fn(() => undefined),
		isClaudeCode: jest.fn(() => false)
	})
);
@@ -301,7 +301,8 @@ jest.unstable_mockModule(
		// Additional functions
		getAllProviders: jest.fn(() => ['anthropic', 'openai', 'perplexity']),
		getVertexProjectId: jest.fn(() => undefined),
		getVertexLocation: jest.fn(() => undefined)
		getVertexLocation: jest.fn(() => undefined),
		isClaudeCode: jest.fn(() => false)
	})
);
@@ -795,7 +795,7 @@ describe('parsePRD', () => {
	});

	describe('Streaming vs Non-Streaming Modes', () => {
		test('should use streaming when reportProgress function is provided', async () => {
		test('should use non-streaming when reportProgress function is provided (streaming disabled)', async () => {
			// Setup mocks to simulate normal conditions (no existing output file)
			fs.default.existsSync.mockImplementation((path) => {
				if (path === 'tasks/tasks.json') return false; // Output file doesn't exist
@@ -815,23 +815,20 @@ describe('parsePRD', () => {
			};
			JSONParser.mockReturnValue(mockParser);

			// Call the function with reportProgress to trigger streaming path
			// Call the function with reportProgress - with streaming disabled, should use non-streaming
			const result = await parsePRD('path/to/prd.txt', 'tasks/tasks.json', 3, {
				reportProgress: mockReportProgress
			});

			// Verify streamObjectService was called (streaming path)
			expect(streamObjectService).toHaveBeenCalled();
			// With streaming disabled, should use generateObjectService instead
			expect(generateObjectService).toHaveBeenCalled();

			// Verify generateObjectService was NOT called (non-streaming path)
			expect(generateObjectService).not.toHaveBeenCalled();
			// Verify streamObjectService was NOT called (streaming is disabled)
			expect(streamObjectService).not.toHaveBeenCalled();

			// Verify progress reporting was called
			// Verify progress reporting was still called
			expect(mockReportProgress).toHaveBeenCalled();

			// We no longer use parseStream with streamObject
			// expect(parseStream).toHaveBeenCalled();

			// Verify result structure
			expect(result).toEqual({
				success: true,
@@ -840,7 +837,7 @@ describe('parsePRD', () => {
			});
		});

		test('should fallback to non-streaming when streaming fails with specific errors', async () => {
		test.skip('should fallback to non-streaming when streaming fails with specific errors (streaming disabled)', async () => {
			// Setup mocks to simulate normal conditions (no existing output file)
			fs.default.existsSync.mockImplementation((path) => {
				if (path === 'tasks/tasks.json') return false; // Output file doesn't exist
@@ -954,7 +951,7 @@ describe('parsePRD', () => {
			});
		});

		test('should handle research flag with streaming', async () => {
		test('should handle research flag with non-streaming (streaming disabled)', async () => {
			// Setup mocks to simulate normal conditions
			fs.default.existsSync.mockImplementation((path) => {
				if (path === 'tasks/tasks.json') return false; // Output file doesn't exist
@@ -965,19 +962,19 @@ describe('parsePRD', () => {
			// Mock progress reporting function
			const mockReportProgress = jest.fn(() => Promise.resolve());

			// Call with streaming + research
			// Call with reportProgress + research - with streaming disabled, should use non-streaming
			await parsePRD('path/to/prd.txt', 'tasks/tasks.json', 3, {
				reportProgress: mockReportProgress,
				research: true
			});

			// Verify streaming path was used with research role
			expect(streamObjectService).toHaveBeenCalledWith(
			// With streaming disabled, should use generateObjectService with research role
			expect(generateObjectService).toHaveBeenCalledWith(
				expect.objectContaining({
					role: 'research'
				})
			);
			expect(generateObjectService).not.toHaveBeenCalled();
			expect(streamObjectService).not.toHaveBeenCalled();
		});

		test('should handle research flag with non-streaming', async () => {
@@ -1009,7 +1006,7 @@ describe('parsePRD', () => {
			expect(streamObjectService).not.toHaveBeenCalled();
		});

		test('should use streaming for CLI text mode even without reportProgress', async () => {
		test('should use non-streaming for CLI text mode (streaming disabled)', async () => {
			// Setup mocks to simulate normal conditions
			fs.default.existsSync.mockImplementation((path) => {
				if (path === 'tasks/tasks.json') return false; // Output file doesn't exist
@@ -1020,13 +1017,12 @@ describe('parsePRD', () => {
			// Call without mcpLog and without reportProgress (CLI text mode)
			const result = await parsePRD('path/to/prd.txt', 'tasks/tasks.json', 3);

			// Verify streaming path was used (no mcpLog means CLI text mode, which should use streaming)
			expect(streamObjectService).toHaveBeenCalled();
			expect(generateObjectService).not.toHaveBeenCalled();
			// With streaming disabled, should use generateObjectService even in CLI text mode
			expect(generateObjectService).toHaveBeenCalled();
			expect(streamObjectService).not.toHaveBeenCalled();

			// Verify progress tracker components were called for CLI mode
			expect(createParsePrdTracker).toHaveBeenCalled();
			expect(displayParsePrdStart).toHaveBeenCalled();
			// Progress tracker components may still be called for CLI mode display
			// but the actual parsing uses non-streaming

			expect(result).toEqual({
				success: true,
@@ -52,7 +52,8 @@ jest.unstable_mockModule(
jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => ({
		getDebugFlag: jest.fn(() => false)
		getDebugFlag: jest.fn(() => false),
		isClaudeCode: jest.fn(() => false)
	})
);
@@ -51,7 +51,8 @@ jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => ({
		getDebugFlag: jest.fn(() => false),
		isApiKeySet: jest.fn(() => true)
		isApiKeySet: jest.fn(() => true),
		isClaudeCode: jest.fn(() => false)
	})
);
@@ -44,7 +44,8 @@ jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => ({
		getDebugFlag: jest.fn(() => false)
		getDebugFlag: jest.fn(() => false),
		isClaudeCode: jest.fn(() => false)
	})
);