Compare commits
171 Commits
chore/fix. ... docs/auto-
| SHA1 |
|---|
| 5781d61d9c |
| 25a00dca67 |
| f263d4b2e0 |
| f12a16d096 |
| aaf903ff2f |
| 2a910a40ba |
| 0df6595245 |
| 33e3fbb20f |
| 5cb7ed557a |
| b9e644c556 |
| 7265a6cf53 |
| db6f405f23 |
| 7b5a7c4495 |
| caee040907 |
| 4b5473860b |
| b43b7ce201 |
| 86027f1ee4 |
| 4f984f8a69 |
| f7646f41b5 |
| 20004a39ea |
| f1393f47b1 |
| 738ec51c04 |
| c7418c4594 |
| 0747f1c772 |
| ffe24a2e35 |
| 604b94baa9 |
| 2ea4bb6a81 |
| 3e96387715 |
| 100c3dc47d |
| 986ac117ae |
| 18aa416035 |
| 3b3dbabed1 |
| af53525cbc |
| 0079b7defd |
| 0b2c6967c4 |
| c0682ac795 |
| 01a7faea8f |
| b7f32eac5a |
| 044a7bfc98 |
| 814265cd33 |
| 9b7b2ca7b2 |
| 949f091179 |
| 51a351760c |
| 732b2c61ad |
| 32c2b03c23 |
| 3bfd999d81 |
| 9fa79eb026 |
| 875134247a |
| 4c2801d5eb |
| c911608f60 |
| 8f1497407f |
| 10b64ec6f5 |
| 1a1879483b |
| d691cbb7ae |
| 1b7c9637a5 |
| 9ff5f158d5 |
| b2ff06e8c5 |
| c2fc61ddb3 |
| aaacc3dae3 |
| 46cd5dc186 |
| 49a31be416 |
| 2b69936ee7 |
| 6438f6c7c8 |
| 6bbd777552 |
| 100482722f |
| 7ff882bf23 |
| 6ab768f6ec |
| b5fe723f8e |
| f487736670 |
| d67b81d25d |
| 66c05053c0 |
| d7ab4609aa |
| 05f6242f7e |
| a58719cf50 |
| 674d1f6de7 |
| f106fb8e0b |
| fd9dd43ee0 |
| c395e93696 |
| a621ff05ea |
| 47ddb60231 |
| fce841490a |
| 4e126430a0 |
| a33abe6c21 |
| 2b0cbdbc84 |
| f1cdf78aa6 |
| e6de285cea |
| cf3339fa48 |
| 255b9f0334 |
| cb2c266b2d |
| 170d6f2f65 |
| 137ef36278 |
| 1a3a528bf7 |
| c164adc6ff |
| 9d61e0447d |
| ee11b735b3 |
| 6d978228d9 |
| ea9341e7af |
| 4296e383ea |
| 97b2781709 |
| 96553e4a5f |
| 7582219365 |
| 84baedc3d2 |
| 78da39edff |
| 4d1416b175 |
| dc811eb45e |
| 3c41a113fe |
| 0e8c42c7cb |
| 799d1d2cce |
| 83af314879 |
| dd03374496 |
| 4ab0affba7 |
| 77e1ddc237 |
| 3eeb19590a |
| 587745046f |
| c61c73f827 |
| 15900d9fd5 |
| 7cf4004038 |
| 0f3ab00f26 |
| e81040def5 |
| 597f6b03b4 |
| a7ad4c8e92 |
| 0d54747894 |
| df26c65632 |
| e80e5bb7cd |
| c4f92f6a0a |
| be0c0f267c |
| a983f75d4f |
| e743aaa8c2 |
| 16ffffaf68 |
| f254aed4a6 |
| dd3b47bb2b |
| 37af0f1912 |
| 8783708e5e |
| 4dad2fd613 |
| 4cae2991d4 |
| 0d7ff627c9 |
| db720a954d |
| 89335578ff |
| 781b8ef2af |
| 7d564920b5 |
| 2737fbaa67 |
| 9feb8d2dbf |
| 8a991587f1 |
| 7ceba2f572 |
| 10565f07d3 |
| f27ce34fe9 |
| 71be933a8d |
| 5d94f1b471 |
| 3dee60dc3d |
| f469515228 |
| 2fd0f026d3 |
| e3ed4d7c14 |
| fc47714340 |
| 30ae0e9a57 |
| 95640dcde8 |
| 311b2433e2 |
| 04e11b5e82 |
| 782728ff95 |
| 30ca144231 |
| 0220d0e994 |
| 41a8c2406a |
| a003041cd8 |
| 6b57ead106 |
| 7b6e117b1d |
| 03b045e9cd |
| 699afdae59 |
| 80c09802e8 |
| cf8f0f4b1c |
| 75c514cf5b |
| 41d1e671b1 |
| 37fb569a62 |
.changeset/auto-update-changelog-highlights.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
"task-master-ai": minor
---

Add changelog highlights to auto-update notifications

When the CLI auto-updates to a new version, it now displays a "What's New" section.
@@ -2,13 +2,15 @@
  "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
  "changelog": [
    "@changesets/changelog-github",
    { "repo": "eyaltoledano/claude-task-master" }
    {
      "repo": "eyaltoledano/claude-task-master"
    }
  ],
  "commit": false,
  "fixed": [],
  "linked": [],
  "access": "public",
  "baseBranch": "main",
  "updateInternalDependencies": "patch",
  "ignore": []
}
  "ignore": [
    "docs"
  ]
}
@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---

Fix expand task generating unrelated generic subtasks

Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---

Fix scope-up/down prompts to include all required fields for better AI model compatibility

- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
- Ensures generated JSON includes all fields required by the schema
@@ -1,9 +0,0 @@
---
"task-master-ai": minor
---

Enhanced Claude Code provider with codebase-aware task generation

- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
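For orientation, the commands named in this changeset can be exercised from the CLI once Claude Code is the configured provider. The sketch below is illustrative only: the command names come from the changeset, while the PRD path and task ID are placeholder arguments, not values from this diff.

```bash
# Illustrative invocations; arguments are assumed, only the command names come from the changeset.
task-master parse-prd my-prd.txt    # task generation preceded by codebase analysis
task-master analyze-complexity      # complexity report informed by existing implementations
task-master expand --id=3           # subtasks grounded in the project's actual structure
```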
.changeset/nice-ways-hope.md (new file, 17 lines)
@@ -0,0 +1,17 @@
---
"task-master-ai": minor
---

Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.

Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)

The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
.changeset/plain-falcons-serve.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Fix cross-level task dependencies not being saved

Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
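A minimal sketch of the fix in use, based on the command quoted in the changeset; the reverse call is shown only to illustrate the "both directions" claim and uses the same placeholder IDs:

```bash
# Subtask depending on a top-level task (example from the changeset)
task-master add-dependency --id=2.2 --depends-on=11
# Top-level task depending on a subtask (the reverse direction)
task-master add-dependency --id=11 --depends-on=2.2
```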
@@ -1,13 +0,0 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.23.0",
    "extension": "0.23.0"
  },
  "changesets": [
    "fuzzy-words-count",
    "tender-trams-refuse",
    "vast-sites-leave"
  ]
}
@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---

Fix MCP scope-up/down tools not finding tasks

- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly
@@ -1,5 +0,0 @@
---
"extension": patch
---

Fix issues with some users not being able to connect to Taskmaster MCP server while using the extension
@@ -1,11 +0,0 @@
---
"task-master-ai": patch
---

Improve AI provider compatibility for JSON generation

- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation
@@ -1,59 +0,0 @@
---
"task-master-ai": minor
---

Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker

## New Claude Code Agents

Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:

### task-orchestrator
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks

### task-executor
Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks

### task-checker
Verifies that completed tasks meet their specifications:
- Reviews tasks marked as 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'

## Installation

When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to `.claude/agents/` directory.

## Usage Example

```bash
# In Claude Code, after initializing a project with tasks:

# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete

# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```

## Benefits

- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
.claude/commands/dedupe.md (new file, 38 lines)
@@ -0,0 +1,38 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #1
4. Next, feed the results from #1 and #2 into another agent, so that it can filter out false positives, that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with Github, rather than web fetch
- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]

---
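A rough sketch of the `gh` calls that the command's steps imply, restricted to the tools named in `allowed-tools`; the issue number, search terms, and comment body below are placeholders, not values from this diff:

```bash
# Step 2: summarize the issue under review (placeholder issue number)
gh issue view 1234 --repo eyaltoledano/claude-task-master
# Step 3: search for likely duplicates with varied keywords
gh search issues "auto-update changelog" --repo eyaltoledano/claude-task-master --state open
# Step 5: post the duplicates comment
gh issue comment 1234 --repo eyaltoledano/claude-task-master --body "Found 3 possible duplicate issues: ..."
```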
@@ -2,7 +2,7 @@
  "mcpServers": {
    "task-master-ai": {
      "command": "node",
      "args": ["./mcp-server/server.js"],
      "args": ["./dist/mcp-server.js"],
      "env": {
        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
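After this change the MCP entry point is the bundled `./dist/mcp-server.js`. A quick local check, assuming the bundle has been produced first (the build script name is taken from the CI workflow later in this diff):

```bash
# Build the workspace, then start the MCP server the same way the config does
npm run turbo:build
node ./dist/mcp-server.js
```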
259
.github/scripts/auto-close-duplicates.mjs
vendored
Normal file
259
.github/scripts/auto-close-duplicates.mjs
vendored
Normal file
@@ -0,0 +1,259 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
async function githubRequest(endpoint, token, method = 'GET', body) {
|
||||
const response = await fetch(`https://api.github.com${endpoint}`, {
|
||||
method,
|
||||
headers: {
|
||||
Authorization: `Bearer ${token}`,
|
||||
Accept: 'application/vnd.github.v3+json',
|
||||
'User-Agent': 'auto-close-duplicates-script',
|
||||
...(body && { 'Content-Type': 'application/json' })
|
||||
},
|
||||
...(body && { body: JSON.stringify(body) })
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(
|
||||
`GitHub API request failed: ${response.status} ${response.statusText}`
|
||||
);
|
||||
}
|
||||
|
||||
return response.json();
|
||||
}
|
||||
|
||||
function extractDuplicateIssueNumber(commentBody) {
|
||||
const match = commentBody.match(/#(\d+)/);
|
||||
return match ? parseInt(match[1], 10) : null;
|
||||
}
|
||||
|
||||
async function closeIssueAsDuplicate(
|
||||
owner,
|
||||
repo,
|
||||
issueNumber,
|
||||
duplicateOfNumber,
|
||||
token
|
||||
) {
|
||||
await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issueNumber}`,
|
||||
token,
|
||||
'PATCH',
|
||||
{
|
||||
state: 'closed',
|
||||
state_reason: 'not_planned',
|
||||
labels: ['duplicate']
|
||||
}
|
||||
);
|
||||
|
||||
await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
|
||||
token,
|
||||
'POST',
|
||||
{
|
||||
body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.
|
||||
|
||||
If this is incorrect, please re-open this issue or create a new one.
|
||||
|
||||
🤖 Generated with [Task Master Bot]`
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
async function autoCloseDuplicates() {
|
||||
console.log('[DEBUG] Starting auto-close duplicates script');
|
||||
|
||||
const token = process.env.GITHUB_TOKEN;
|
||||
if (!token) {
|
||||
throw new Error('GITHUB_TOKEN environment variable is required');
|
||||
}
|
||||
console.log('[DEBUG] GitHub token found');
|
||||
|
||||
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
|
||||
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
|
||||
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
|
||||
|
||||
const threeDaysAgo = new Date();
|
||||
threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
|
||||
console.log(
|
||||
`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
|
||||
);
|
||||
|
||||
console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
|
||||
const allIssues = [];
|
||||
let page = 1;
|
||||
const perPage = 100;
|
||||
|
||||
const MAX_PAGES = 50; // Increase limit for larger repos
|
||||
let foundRecentIssue = false;
|
||||
|
||||
while (true) {
|
||||
const pageIssues = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
|
||||
token
|
||||
);
|
||||
|
||||
if (pageIssues.length === 0) break;
|
||||
|
||||
// Filter for issues created more than 3 days ago
|
||||
const oldEnoughIssues = pageIssues.filter(
|
||||
(issue) => new Date(issue.created_at) <= threeDaysAgo
|
||||
);
|
||||
|
||||
allIssues.push(...oldEnoughIssues);
|
||||
|
||||
// If all issues on this page are newer than 3 days, we can stop
|
||||
if (oldEnoughIssues.length === 0 && page === 1) {
|
||||
foundRecentIssue = true;
|
||||
break;
|
||||
}
|
||||
|
||||
// If we found some old issues but not all, continue to next page
|
||||
// as there might be more old issues
|
||||
page++;
|
||||
|
||||
// Safety limit to avoid infinite loops
|
||||
if (page > MAX_PAGES) {
|
||||
console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
const issues = allIssues;
|
||||
console.log(`[DEBUG] Found ${issues.length} open issues`);
|
||||
|
||||
let processedCount = 0;
|
||||
let candidateCount = 0;
|
||||
|
||||
for (const issue of issues) {
|
||||
processedCount++;
|
||||
console.log(
|
||||
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
|
||||
);
|
||||
|
||||
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
|
||||
const comments = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
|
||||
);
|
||||
|
||||
const dupeComments = comments.filter(
|
||||
(comment) =>
|
||||
comment.body.includes('Found') &&
|
||||
comment.body.includes('possible duplicate') &&
|
||||
comment.user.type === 'Bot'
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
|
||||
);
|
||||
|
||||
if (dupeComments.length === 0) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
const lastDupeComment = dupeComments[dupeComments.length - 1];
|
||||
const dupeCommentDate = new Date(lastDupeComment.created_at);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${
|
||||
issue.number
|
||||
} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
|
||||
);
|
||||
|
||||
if (dupeCommentDate > threeDaysAgo) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
console.log(
|
||||
`[DEBUG] Issue #${
|
||||
issue.number
|
||||
} - duplicate comment is old enough (${Math.floor(
|
||||
(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
|
||||
)} days)`
|
||||
);
|
||||
|
||||
const commentsAfterDupe = comments.filter(
|
||||
(comment) => new Date(comment.created_at) > dupeCommentDate
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
|
||||
);
|
||||
|
||||
if (commentsAfterDupe.length > 0) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
|
||||
);
|
||||
const reactions = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
|
||||
);
|
||||
|
||||
const authorThumbsDown = reactions.some(
|
||||
(reaction) =>
|
||||
reaction.user.id === issue.user.id && reaction.content === '-1'
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
|
||||
);
|
||||
|
||||
if (authorThumbsDown) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
const duplicateIssueNumber = extractDuplicateIssueNumber(
|
||||
lastDupeComment.body
|
||||
);
|
||||
if (!duplicateIssueNumber) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
candidateCount++;
|
||||
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
|
||||
|
||||
try {
|
||||
console.log(
|
||||
`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
|
||||
);
|
||||
await closeIssueAsDuplicate(
|
||||
owner,
|
||||
repo,
|
||||
issue.number,
|
||||
duplicateIssueNumber,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
|
||||
);
|
||||
} catch (error) {
|
||||
console.error(
|
||||
`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
|
||||
);
|
||||
}
|
||||
|
||||
autoCloseDuplicates().catch(console.error);
|
||||
178
.github/scripts/backfill-duplicate-comments.mjs
vendored
Normal file
178
.github/scripts/backfill-duplicate-comments.mjs
vendored
Normal file
@@ -0,0 +1,178 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
async function githubRequest(endpoint, token, method = 'GET', body) {
|
||||
const response = await fetch(`https://api.github.com${endpoint}`, {
|
||||
method,
|
||||
headers: {
|
||||
Authorization: `Bearer ${token}`,
|
||||
Accept: 'application/vnd.github.v3+json',
|
||||
'User-Agent': 'backfill-duplicate-comments-script',
|
||||
...(body && { 'Content-Type': 'application/json' })
|
||||
},
|
||||
...(body && { body: JSON.stringify(body) })
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(
|
||||
`GitHub API request failed: ${response.status} ${response.statusText}`
|
||||
);
|
||||
}
|
||||
|
||||
return response.json();
|
||||
}
|
||||
|
||||
async function triggerDedupeWorkflow(
|
||||
owner,
|
||||
repo,
|
||||
issueNumber,
|
||||
token,
|
||||
dryRun = true
|
||||
) {
|
||||
if (dryRun) {
|
||||
console.log(
|
||||
`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
await githubRequest(
|
||||
`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
|
||||
token,
|
||||
'POST',
|
||||
{
|
||||
ref: 'main',
|
||||
inputs: {
|
||||
issue_number: issueNumber.toString()
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
async function backfillDuplicateComments() {
|
||||
console.log('[DEBUG] Starting backfill duplicate comments script');
|
||||
|
||||
const token = process.env.GITHUB_TOKEN;
|
||||
if (!token) {
|
||||
throw new Error(`GITHUB_TOKEN environment variable is required
|
||||
|
||||
Usage:
|
||||
node .github/scripts/backfill-duplicate-comments.mjs
|
||||
|
||||
Environment Variables:
|
||||
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
|
||||
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
|
||||
DAYS_BACK - How many days back to look for old issues (default: 90)`);
|
||||
}
|
||||
console.log('[DEBUG] GitHub token found');
|
||||
|
||||
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
|
||||
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
|
||||
const dryRun = process.env.DRY_RUN !== 'false';
|
||||
const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);
|
||||
|
||||
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
|
||||
console.log(`[DEBUG] Dry run mode: ${dryRun}`);
|
||||
console.log(`[DEBUG] Looking back ${daysBack} days`);
|
||||
|
||||
const cutoffDate = new Date();
|
||||
cutoffDate.setDate(cutoffDate.getDate() - daysBack);
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
|
||||
);
|
||||
const allIssues = [];
|
||||
let page = 1;
|
||||
const perPage = 100;
|
||||
|
||||
while (true) {
|
||||
const pageIssues = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
|
||||
token
|
||||
);
|
||||
|
||||
if (pageIssues.length === 0) break;
|
||||
|
||||
allIssues.push(...pageIssues);
|
||||
page++;
|
||||
|
||||
// Safety limit to avoid infinite loops
|
||||
if (page > 100) {
|
||||
console.log('[DEBUG] Reached page limit, stopping pagination');
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
|
||||
);
|
||||
|
||||
let processedCount = 0;
|
||||
let candidateCount = 0;
|
||||
let triggeredCount = 0;
|
||||
|
||||
for (const issue of allIssues) {
|
||||
processedCount++;
|
||||
console.log(
|
||||
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
|
||||
);
|
||||
|
||||
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
|
||||
const comments = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
|
||||
);
|
||||
|
||||
// Look for existing duplicate detection comments (from the dedupe bot)
|
||||
const dupeDetectionComments = comments.filter(
|
||||
(comment) =>
|
||||
comment.body.includes('Found') &&
|
||||
comment.body.includes('possible duplicate') &&
|
||||
comment.user.type === 'Bot'
|
||||
);
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
|
||||
);
|
||||
|
||||
// Skip if there's already a duplicate detection comment
|
||||
if (dupeDetectionComments.length > 0) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
candidateCount++;
|
||||
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
|
||||
|
||||
try {
|
||||
console.log(
|
||||
`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
|
||||
);
|
||||
await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);
|
||||
|
||||
if (!dryRun) {
|
||||
console.log(
|
||||
`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
|
||||
);
|
||||
}
|
||||
triggeredCount++;
|
||||
} catch (error) {
|
||||
console.error(
|
||||
`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
|
||||
);
|
||||
}
|
||||
|
||||
// Add a delay between workflow triggers to avoid overwhelming the system
|
||||
await new Promise((resolve) => setTimeout(resolve, 1000));
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
|
||||
);
|
||||
}
|
||||
|
||||
backfillDuplicateComments().catch(console.error);
|
||||
157
.github/scripts/parse-metrics.mjs
vendored
Normal file
157
.github/scripts/parse-metrics.mjs
vendored
Normal file
@@ -0,0 +1,157 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
import { readFileSync, existsSync, writeFileSync } from 'fs';
|
||||
|
||||
function parseMetricsTable(content, metricName) {
|
||||
const lines = content.split('\n');
|
||||
|
||||
for (let i = 0; i < lines.length; i++) {
|
||||
const line = lines[i].trim();
|
||||
// Match a markdown table row like: | Metric Name | value | ...
|
||||
const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
|
||||
const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
|
||||
const match = line.match(re);
|
||||
if (match) {
|
||||
return match[1].trim() || 'N/A';
|
||||
}
|
||||
}
|
||||
return 'N/A';
|
||||
}
|
||||
|
||||
function parseCountMetric(content, metricName) {
|
||||
const result = parseMetricsTable(content, metricName);
|
||||
// Extract number from string, handling commas and spaces
|
||||
const numberMatch = result.toString().match(/[\d,]+/);
|
||||
if (numberMatch) {
|
||||
const number = parseInt(numberMatch[0].replace(/,/g, ''));
|
||||
return isNaN(number) ? 0 : number;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
function main() {
|
||||
const metrics = {
|
||||
issues_created: 0,
|
||||
issues_closed: 0,
|
||||
prs_created: 0,
|
||||
prs_merged: 0,
|
||||
issue_avg_first_response: 'N/A',
|
||||
issue_avg_time_to_close: 'N/A',
|
||||
pr_avg_first_response: 'N/A',
|
||||
pr_avg_merge_time: 'N/A'
|
||||
};
|
||||
|
||||
// Parse issue metrics
|
||||
if (existsSync('issue_metrics.md')) {
|
||||
console.log('📄 Found issue_metrics.md, parsing...');
|
||||
const issueContent = readFileSync('issue_metrics.md', 'utf8');
|
||||
|
||||
metrics.issues_created = parseCountMetric(
|
||||
issueContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
metrics.issues_closed = parseCountMetric(
|
||||
issueContent,
|
||||
'Number of items closed'
|
||||
);
|
||||
metrics.issue_avg_first_response = parseMetricsTable(
|
||||
issueContent,
|
||||
'Time to first response'
|
||||
);
|
||||
metrics.issue_avg_time_to_close = parseMetricsTable(
|
||||
issueContent,
|
||||
'Time to close'
|
||||
);
|
||||
} else {
|
||||
console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
|
||||
}
|
||||
|
||||
// Parse PR created metrics
|
||||
if (existsSync('pr_created_metrics.md')) {
|
||||
console.log('📄 Found pr_created_metrics.md, parsing...');
|
||||
const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');
|
||||
|
||||
metrics.prs_created = parseCountMetric(
|
||||
prCreatedContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
metrics.pr_avg_first_response = parseMetricsTable(
|
||||
prCreatedContent,
|
||||
'Time to first response'
|
||||
);
|
||||
} else {
|
||||
console.warn(
|
||||
'[parse-metrics] pr_created_metrics.md not found; using defaults.'
|
||||
);
|
||||
}
|
||||
|
||||
// Parse PR merged metrics (for more accurate merge data)
|
||||
if (existsSync('pr_merged_metrics.md')) {
|
||||
console.log('📄 Found pr_merged_metrics.md, parsing...');
|
||||
const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');
|
||||
|
||||
metrics.prs_merged = parseCountMetric(
|
||||
prMergedContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
// For merged PRs, "Time to close" is actually time to merge
|
||||
metrics.pr_avg_merge_time = parseMetricsTable(
|
||||
prMergedContent,
|
||||
'Time to close'
|
||||
);
|
||||
} else {
|
||||
console.warn(
|
||||
'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
|
||||
);
|
||||
// Fallback: try old pr_metrics.md if it exists
|
||||
if (existsSync('pr_metrics.md')) {
|
||||
console.log('📄 Falling back to pr_metrics.md...');
|
||||
const prContent = readFileSync('pr_metrics.md', 'utf8');
|
||||
|
||||
const mergedCount = parseCountMetric(prContent, 'Number of items merged');
|
||||
metrics.prs_merged =
|
||||
mergedCount || parseCountMetric(prContent, 'Number of items closed');
|
||||
|
||||
const maybeMergeTime = parseMetricsTable(
|
||||
prContent,
|
||||
'Average time to merge'
|
||||
);
|
||||
metrics.pr_avg_merge_time =
|
||||
maybeMergeTime !== 'N/A'
|
||||
? maybeMergeTime
|
||||
: parseMetricsTable(prContent, 'Time to close');
|
||||
} else {
|
||||
console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
|
||||
}
|
||||
}
|
||||
|
||||
// Output for GitHub Actions
|
||||
const output = Object.entries(metrics)
|
||||
.map(([key, value]) => `${key}=${value}`)
|
||||
.join('\n');
|
||||
|
||||
// Always output to stdout for debugging
|
||||
console.log('\n=== FINAL METRICS ===');
|
||||
Object.entries(metrics).forEach(([key, value]) => {
|
||||
console.log(`${key}: ${value}`);
|
||||
});
|
||||
|
||||
// Write to GITHUB_OUTPUT if in GitHub Actions
|
||||
if (process.env.GITHUB_OUTPUT) {
|
||||
try {
|
||||
writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
|
||||
console.log(
|
||||
`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
|
||||
);
|
||||
} catch (error) {
|
||||
console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
|
||||
process.exit(1);
|
||||
}
|
||||
} else {
|
||||
console.log(
|
||||
'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
main();
|
||||
.github/scripts/pre-release.mjs (deleted, 54 lines)
@@ -1,54 +0,0 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import {
  findRootDir,
  runCommand,
  getPackageVersion,
  createAndPushTag
} from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);
const extensionPkgPath = join(rootDir, 'apps', 'extension', 'package.json');

console.log('🚀 Starting pre-release process...');

// Check if we're in RC mode
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (!existsSync(preJsonPath)) {
  console.error('⚠️ Not in RC mode. Run "npx changeset pre enter rc" first.');
  process.exit(1);
}

try {
  const preJson = JSON.parse(readFileSync(preJsonPath, 'utf8'));
  if (preJson.tag !== 'rc') {
    console.error(`⚠️ Not in RC mode. Current tag: ${preJson.tag}`);
    process.exit(1);
  }
} catch (error) {
  console.error('Failed to read pre.json:', error.message);
  process.exit(1);
}

// Get current extension version
const extensionVersion = getPackageVersion(extensionPkgPath);
console.log(`Extension version: ${extensionVersion}`);

// Run changeset publish for npm packages
console.log('📦 Publishing npm packages...');
runCommand('npx', ['changeset', 'publish']);

// Create tag for extension pre-release if it doesn't exist
const extensionTag = `extension-rc@${extensionVersion}`;
const tagCreated = createAndPushTag(extensionTag);

if (tagCreated) {
  console.log('This will trigger the extension-pre-release workflow...');
}

console.log('✅ Pre-release process completed!');
.github/workflows/auto-close-duplicates.yml (new file, 31 lines)
@@ -0,0 +1,31 @@
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues

on:
  schedule:
    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
  workflow_dispatch:

jobs:
  auto-close-duplicates:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write # Need write permission to close issues and add comments

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Auto-close duplicate issues
        run: node .github/scripts/auto-close-duplicates.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
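The workflow passes the same environment variables the script reads, so the script can also be exercised outside Actions; a hedged local invocation using only names that appear in this diff (note the script really closes issues, and the token below is a placeholder):

```bash
# Local run of the auto-close script (token is a placeholder)
GITHUB_TOKEN=ghp_xxx \
GITHUB_REPOSITORY_OWNER=eyaltoledano \
GITHUB_REPOSITORY_NAME=claude-task-master \
node .github/scripts/auto-close-duplicates.mjs
```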
.github/workflows/backfill-duplicate-comments.yml (new file, 46 lines)
@@ -0,0 +1,46 @@
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments

on:
  workflow_dispatch:
    inputs:
      days_back:
        description: "How many days back to look for old issues"
        required: false
        default: "90"
        type: string
      dry_run:
        description: "Dry run mode (true to only log what would be done)"
        required: false
        default: "true"
        type: choice
        options:
          - "true"
          - "false"

jobs:
  backfill-duplicate-comments:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: read
      issues: read
      actions: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Backfill duplicate comments
        run: node .github/scripts/backfill-duplicate-comments.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
          DAYS_BACK: ${{ inputs.days_back }}
          DRY_RUN: ${{ inputs.dry_run }}
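Because the backfill script defaults to dry-run mode, the workflow inputs map directly onto environment variables that can also be set for a local run; the values below mirror the defaults documented in the script's own usage message, and the token is a placeholder:

```bash
# Local dry run over the last 90 days (DRY_RUN defaults to true; token is a placeholder)
GITHUB_TOKEN=ghp_xxx \
DRY_RUN=true \
DAYS_BACK=90 \
node .github/scripts/backfill-duplicate-comments.mjs
```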
128
.github/workflows/ci.yml
vendored
128
.github/workflows/ci.yml
vendored
@@ -6,73 +6,124 @@ on:
|
||||
- main
|
||||
- next
|
||||
pull_request:
|
||||
branches:
|
||||
- main
|
||||
- next
|
||||
workflow_dispatch:
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
|
||||
cancel-in-progress: true
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
env:
|
||||
DO_NOT_TRACK: 1
|
||||
NODE_ENV: development
|
||||
|
||||
jobs:
|
||||
setup:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: 'npm'
|
||||
|
||||
- name: Install Dependencies
|
||||
id: install
|
||||
run: npm ci
|
||||
timeout-minutes: 2
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
# Fast checks that can run in parallel
|
||||
format-check:
|
||||
needs: setup
|
||||
name: Format Check
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Format Check
|
||||
run: npm run format-check
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
test:
|
||||
needs: setup
|
||||
typecheck:
|
||||
name: Typecheck
|
||||
timeout-minutes: 10
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Typecheck
|
||||
run: npm run turbo:typecheck
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
# Build job to ensure everything compiles
|
||||
build:
|
||||
name: Build
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Build
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Upload build artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
retention-days: 1
|
||||
|
||||
test:
|
||||
name: Test
|
||||
timeout-minutes: 15
|
||||
runs-on: ubuntu-latest
|
||||
needs: [format-check, typecheck, build]
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Download build artifacts
|
||||
uses: actions/download-artifact@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
|
||||
- name: Run Tests
|
||||
run: |
|
||||
@@ -81,7 +132,6 @@ jobs:
|
||||
NODE_ENV: test
|
||||
CI: true
|
||||
FORCE_COLOR: 1
|
||||
timeout-minutes: 10
|
||||
|
||||
- name: Upload Test Results
|
||||
if: always()
|
||||
|
||||
.github/workflows/claude-dedupe-issues.yml (new file, 81 lines)
@@ -0,0 +1,81 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code

on:
  issues:
    types: [opened]
  workflow_dispatch:
    inputs:
      issue_number:
        description: "Issue number to process for duplicate detection"
        required: true
        type: string

jobs:
  claude-dedupe-issues:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code slash command
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Log duplicate comment event to Statsig
        if: always()
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
          REPO=${{ github.repository }}

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg triggered_by "${{ github.event_name }}" \
            '{
              events: [{
                eventName: "github_duplicate_comment_added",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  triggered_by: $triggered_by,
                  workflow_run_id: "${{ github.run_id }}"
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
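Besides the `issues: opened` trigger, the `workflow_dispatch` input allows re-running dedupe for a specific issue. A hedged example using the same `gh workflow run` pattern that appears later in this diff; the issue number is a placeholder:

```bash
# Manually re-run duplicate detection for one issue (placeholder issue number)
gh workflow run claude-dedupe-issues.yml -f issue_number=1234
```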
.github/workflows/claude-docs-trigger.yml (new file, 57 lines)
@@ -0,0 +1,57 @@
name: Trigger Claude Documentation Update

on:
  push:
    branches:
      - next
    paths-ignore:
      - "apps/docs/**"
      - "*.md"
      - ".github/workflows/**"

jobs:
  trigger-docs-update:
    # Only run if changes were merged (not direct pushes from bots)
    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need previous commit for comparison

      - name: Get changed files
        id: changed-files
        run: |
          echo "Changed files in this push:"
          git diff --name-only HEAD^ HEAD | tee changed_files.txt

          # Store changed files for Claude to analyze (escaped for JSON)
          CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
          echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT

          # Get the commit message (escaped for JSON)
          COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
          echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT

          # Get diff for documentation context (escaped for JSON)
          COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
          echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT

          # Get commit SHA
          echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT

      - name: Trigger Claude workflow
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Trigger the Claude docs updater workflow with the change information
          gh workflow run claude-docs-updater.yml \
            --ref next \
            -f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
            -f commit_message=${{ steps.changed-files.outputs.commit_message }} \
            -f changed_files=${{ steps.changed-files.outputs.changed_files }} \
            -f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
145
.github/workflows/claude-docs-updater.yml
vendored
Normal file
145
.github/workflows/claude-docs-updater.yml
vendored
Normal file
@@ -0,0 +1,145 @@
|
||||
name: Claude Documentation Updater
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
commit_sha:
|
||||
description: 'The commit SHA that triggered this update'
|
||||
required: true
|
||||
type: string
|
||||
commit_message:
|
||||
description: 'The commit message'
|
||||
required: true
|
||||
type: string
|
||||
changed_files:
|
||||
description: 'List of changed files'
|
||||
required: true
|
||||
type: string
|
||||
commit_diff:
|
||||
description: 'Diff summary of changes'
|
||||
required: true
|
||||
type: string
|
||||
|
||||
jobs:
|
||||
update-docs:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: write
|
||||
pull-requests: write
|
||||
issues: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
ref: next
|
||||
fetch-depth: 0 # Need full history to checkout specific commit
|
||||
|
||||
- name: Create docs update branch
|
||||
id: create-branch
|
||||
run: |
|
||||
BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
|
||||
git checkout -b $BRANCH_NAME
|
||||
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Run Claude Code to Update Documentation
|
||||
uses: anthropics/claude-code-action@beta
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
timeout_minutes: "30"
|
||||
mode: "agent"
|
||||
github_token: ${{ secrets.GITHUB_TOKEN }}
|
||||
experimental_allowed_domains: |
|
||||
.anthropic.com
|
||||
.github.com
|
||||
api.github.com
|
||||
.githubusercontent.com
|
||||
registry.npmjs.org
|
||||
.task-master.dev
|
||||
base_branch: "next"
|
||||
direct_prompt: |
|
||||
You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.
|
||||
|
||||
Recent changes:
|
||||
- Commit: ${{ inputs.commit_message }}
|
||||
- Changed files:
|
||||
${{ inputs.changed_files }}
|
||||
|
||||
- Changes summary:
|
||||
${{ inputs.commit_diff }}
|
||||
|
||||
Your task:
|
||||
1. Analyze the changes to understand what functionality was added, modified, or removed
|
||||
2. Check if these changes require documentation updates in apps/docs/
|
||||
3. If documentation updates are needed:
|
||||
- Update relevant documentation files in apps/docs/
|
||||
- Ensure examples are updated if APIs changed
|
||||
- Update any configuration documentation if config options changed
|
||||
- Add new documentation pages if new features were added
|
||||
- Update the changelog or release notes if applicable
|
||||
4. If no documentation updates are needed, skip creating changes
|
||||
|
||||
Guidelines:
|
||||
- Focus only on user-facing changes that need documentation
|
||||
- Keep documentation clear, concise, and helpful
|
||||
- Include code examples where appropriate
|
||||
- Maintain consistent documentation style with existing docs
|
||||
- Don't document internal implementation details unless they affect users
|
||||
- Update navigation/menu files if new pages are added
|
||||
|
||||
Only make changes if the documentation truly needs updating based on the code changes.
|
||||
|
||||
- name: Check if changes were made
|
||||
id: check-changes
|
||||
run: |
|
||||
if git diff --quiet; then
|
||||
echo "has_changes=false" >> $GITHUB_OUTPUT
|
||||
else
|
||||
echo "has_changes=true" >> $GITHUB_OUTPUT
|
||||
git add -A
|
||||
git config --local user.email "github-actions[bot]@users.noreply.github.com"
|
||||
git config --local user.name "github-actions[bot]"
|
||||
git commit -m "docs: auto-update documentation based on changes in next branch
|
||||
|
||||
This PR was automatically generated to update documentation based on recent changes.
|
||||
|
||||
Original commit: ${{ inputs.commit_message }}
|
||||
|
||||
Co-authored-by: Claude <claude-assistant@anthropic.com>"
|
||||
fi
|
||||
|
||||
- name: Push changes and create PR
|
||||
if: steps.check-changes.outputs.has_changes == 'true'
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
git push origin ${{ steps.create-branch.outputs.branch_name }}
|
||||
|
||||
# Create PR using GitHub CLI
|
||||
gh pr create \
|
||||
--title "docs: update documentation for recent changes" \
|
||||
--body "## 📚 Documentation Update
|
||||
|
||||
This PR automatically updates documentation based on recent changes merged to the \`next\` branch.
|
||||
|
||||
### Original Changes
|
||||
**Commit:** ${{ inputs.commit_sha }}
|
||||
**Message:** ${{ inputs.commit_message }}
|
||||
|
||||
### Changed Files in Original Commit
|
||||
\`\`\`
|
||||
${{ inputs.changed_files }}
|
||||
\`\`\`
|
||||
|
||||
### Documentation Updates
|
||||
This PR includes documentation updates to reflect the changes above. Please review to ensure:
|
||||
- [ ] Documentation accurately reflects the changes
|
||||
- [ ] Examples are correct and working
|
||||
- [ ] No important details are missing
|
||||
- [ ] Style is consistent with existing documentation
|
||||
|
||||
---
|
||||
*This PR was automatically generated by Claude Code GitHub Action*" \
|
||||
--base next \
|
||||
--head ${{ steps.create-branch.outputs.branch_name }} \
|
||||
--label "documentation" \
|
||||
--label "automated"
|
||||
107
.github/workflows/claude-issue-triage.yml
vendored
Normal file
107
.github/workflows/claude-issue-triage.yml
vendored
Normal file
@@ -0,0 +1,107 @@
|
||||
name: Claude Issue Triage
|
||||
# description: Automatically triage GitHub issues using Claude Code
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
|
||||
jobs:
|
||||
triage-issue:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Create triage prompt
|
||||
run: |
|
||||
mkdir -p /tmp/claude-prompts
|
||||
cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
|
||||
You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
|
||||
|
||||
IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
|
||||
|
||||
Issue Information:
|
||||
- REPO: ${{ github.repository }}
|
||||
- ISSUE_NUMBER: ${{ github.event.issue.number }}
|
||||
|
||||
TASK OVERVIEW:
|
||||
|
||||
1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
|
||||
|
||||
2. Next, use the GitHub tools to get context about the issue:
|
||||
- You have access to these tools:
|
||||
- mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
|
||||
- mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
|
||||
- mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
|
||||
- mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
|
||||
- mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
|
||||
- Start by using mcp__github__get_issue to get the issue details
|
||||
|
||||
3. Analyze the issue content, considering:
|
||||
- The issue title and description
|
||||
- The type of issue (bug report, feature request, question, etc.)
|
||||
- Technical areas mentioned
|
||||
- Severity or priority indicators
|
||||
- User impact
|
||||
- Components affected
|
||||
|
||||
4. Select appropriate labels from the available labels list provided above:
|
||||
- Choose labels that accurately reflect the issue's nature
|
||||
- Be specific but comprehensive
|
||||
- Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
|
||||
- Consider platform labels (android, ios) if applicable
|
||||
- If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
|
||||
|
||||
5. Apply the selected labels:
|
||||
- Use mcp__github__update_issue to apply your selected labels
|
||||
- DO NOT post any comments explaining your decision
|
||||
- DO NOT communicate directly with users
|
||||
- If no labels are clearly applicable, do not apply any labels
|
||||
|
||||
IMPORTANT GUIDELINES:
|
||||
- Be thorough in your analysis
|
||||
- Only select labels from the provided list above
|
||||
- DO NOT post any comments to the issue
|
||||
- Your ONLY action should be to apply labels using mcp__github__update_issue
|
||||
- It's okay to not add any labels if none are clearly applicable
|
||||
EOF
|
||||
|
||||
- name: Setup GitHub MCP Server
|
||||
run: |
|
||||
mkdir -p /tmp/mcp-config
|
||||
cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
|
||||
{
|
||||
"mcpServers": {
|
||||
"github": {
|
||||
"command": "docker",
|
||||
"args": [
|
||||
"run",
|
||||
"-i",
|
||||
"--rm",
|
||||
"-e",
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN",
|
||||
"ghcr.io/github/github-mcp-server:sha-7aced2b"
|
||||
],
|
||||
"env": {
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
- name: Run Claude Code for Issue Triage
|
||||
uses: anthropics/claude-code-base-action@beta
|
||||
with:
|
||||
prompt_file: /tmp/claude-prompts/triage-prompt.txt
|
||||
allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
|
||||
timeout_minutes: "5"
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
mcp_config: /tmp/mcp-config/mcp-servers.json
|
||||
claude_env: |
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
36
.github/workflows/claude.yml
vendored
Normal file
@@ -0,0 +1,36 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
5
.github/workflows/extension-ci.yml
vendored
@@ -41,8 +41,7 @@ jobs:
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
- name: Install Monorepo Dependencies
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
@@ -68,7 +67,6 @@ jobs:
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install if cache miss
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 3
|
||||
|
||||
@@ -100,7 +98,6 @@ jobs:
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install if cache miss
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 3
|
||||
|
||||
|
||||
110
.github/workflows/extension-pre-release.yml
vendored
@@ -1,110 +0,0 @@
|
||||
name: Extension Pre-Release
|
||||
|
||||
on:
|
||||
push:
|
||||
tags:
|
||||
- "extension-rc@*"
|
||||
|
||||
permissions:
|
||||
contents: write
|
||||
|
||||
concurrency: extension-pre-release-${{ github.ref }}
|
||||
|
||||
jobs:
|
||||
publish-extension-rc:
|
||||
runs-on: ubuntu-latest
|
||||
environment: extension-release
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: |
|
||||
node_modules
|
||||
*/*/node_modules
|
||||
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Type Check Extension
|
||||
working-directory: apps/extension
|
||||
run: npm run check-types
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Build Extension
|
||||
working-directory: apps/extension
|
||||
run: npm run build
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Package Extension
|
||||
working-directory: apps/extension
|
||||
run: npm run package
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Create VSIX Package (Pre-Release)
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: npx vsce package --no-dependencies --pre-release
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Get VSIX filename
|
||||
id: vsix-info
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: |
|
||||
VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
|
||||
if [ -z "$VSIX_FILE" ]; then
|
||||
echo "Error: No VSIX file found"
|
||||
exit 1
|
||||
fi
|
||||
echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
|
||||
echo "Found VSIX: $VSIX_FILE"
|
||||
|
||||
- name: Publish to VS Code Marketplace (Pre-Release)
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
|
||||
env:
|
||||
VSCE_PAT: ${{ secrets.VSCE_PAT }}
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Install Open VSX CLI
|
||||
run: npm install -g ovsx
|
||||
|
||||
- name: Publish to Open VSX Registry (Pre-Release)
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
|
||||
env:
|
||||
OVSX_PAT: ${{ secrets.OVSX_PAT }}
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Upload Build Artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: extension-pre-release-${{ github.ref_name }}
|
||||
path: |
|
||||
apps/extension/vsix-build/*.vsix
|
||||
apps/extension/dist/
|
||||
retention-days: 30
|
||||
|
||||
notify-success:
|
||||
needs: publish-extension-rc
|
||||
if: success()
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Success Notification
|
||||
run: |
|
||||
echo "🚀 Extension ${{ github.ref_name }} successfully published as pre-release!"
|
||||
echo "📦 Available on VS Code Marketplace (Pre-Release)"
|
||||
echo "🌍 Available on Open VSX Registry (Pre-Release)"
|
||||
3
.github/workflows/extension-release.yml
vendored
@@ -31,8 +31,7 @@ jobs:
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
- name: Install Monorepo Dependencies
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
|
||||
176
.github/workflows/log-issue-events.yml
vendored
Normal file
@@ -0,0 +1,176 @@
|
||||
name: Log GitHub Issue Events
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened, closed]
|
||||
|
||||
jobs:
|
||||
log-issue-created:
|
||||
if: github.event.action == 'opened'
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 5
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
|
||||
steps:
|
||||
- name: Log issue creation to Statsig
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number }}
|
||||
REPO=${{ github.repository }}
|
||||
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
|
||||
AUTHOR="${{ github.event.issue.user.login }}"
|
||||
CREATED_AT="${{ github.event.issue.created_at }}"
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg title "$ISSUE_TITLE" \
|
||||
--arg author "$AUTHOR" \
|
||||
--arg created_at "$CREATED_AT" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_issue_created",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
issue_title: $title,
|
||||
issue_author: $author,
|
||||
created_at: $created_at
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
|
||||
else
|
||||
echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
|
||||
log-issue-closed:
|
||||
if: github.event.action == 'closed'
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 5
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
|
||||
steps:
|
||||
- name: Log issue closure to Statsig
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number }}
|
||||
REPO=${{ github.repository }}
|
||||
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
|
||||
CLOSED_BY="${{ github.event.issue.closed_by.login }}"
|
||||
CLOSED_AT="${{ github.event.issue.closed_at }}"
|
||||
STATE_REASON="${{ github.event.issue.state_reason }}"
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Get additional issue data via GitHub API
|
||||
echo "Fetching additional issue data for #${ISSUE_NUMBER}"
|
||||
ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
|
||||
-H "Accept: application/vnd.github.v3+json" \
|
||||
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")
|
||||
|
||||
COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')
|
||||
|
||||
# Get reactions data
|
||||
REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
|
||||
-H "Accept: application/vnd.github.v3+json" \
|
||||
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")
|
||||
|
||||
REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')
|
||||
|
||||
# Check if issue was closed automatically (by checking if closed_by is a bot)
|
||||
CLOSED_AUTOMATICALLY="false"
|
||||
if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
|
||||
CLOSED_AUTOMATICALLY="true"
|
||||
fi
|
||||
|
||||
# Check if closed as duplicate by state_reason
|
||||
CLOSED_AS_DUPLICATE="false"
|
||||
if [ "$STATE_REASON" = "duplicate" ]; then
|
||||
CLOSED_AS_DUPLICATE="true"
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg title "$ISSUE_TITLE" \
|
||||
--arg closed_by "$CLOSED_BY" \
|
||||
--arg closed_at "$CLOSED_AT" \
|
||||
--arg state_reason "$STATE_REASON" \
|
||||
--arg comments_count "$COMMENTS_COUNT" \
|
||||
--arg reactions_count "$REACTIONS_COUNT" \
|
||||
--arg closed_automatically "$CLOSED_AUTOMATICALLY" \
|
||||
--arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_issue_closed",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
issue_title: $title,
|
||||
closed_by: $closed_by,
|
||||
closed_at: $closed_at,
|
||||
state_reason: $state_reason,
|
||||
comments_count: ($comments_count | tonumber),
|
||||
reactions_count: ($reactions_count | tonumber),
|
||||
closed_automatically: ($closed_automatically | test("true")),
|
||||
closed_as_duplicate: ($closed_as_duplicate | test("true"))
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
|
||||
echo "Closed by: $CLOSED_BY"
|
||||
echo "Comments: $COMMENTS_COUNT"
|
||||
echo "Reactions: $REACTIONS_COUNT"
|
||||
echo "Closed automatically: $CLOSED_AUTOMATICALLY"
|
||||
echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
|
||||
else
|
||||
echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
41
.github/workflows/pre-release.yml
vendored
@@ -36,9 +36,26 @@ jobs:
|
||||
|
||||
- name: Enter RC mode (if not already in RC mode)
|
||||
run: |
|
||||
# ensure we’re in the right pre-mode (tag "rc")
|
||||
if [ ! -f .changeset/pre.json ] \
|
||||
|| [ "$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')" != "rc" ]; then
|
||||
# Check if we're in pre-release mode with the "rc" tag
|
||||
if [ -f .changeset/pre.json ]; then
|
||||
MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
|
||||
TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
|
||||
|
||||
if [ "$MODE" = "exit" ]; then
|
||||
echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
|
||||
npx changeset pre enter rc
|
||||
elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
|
||||
echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
|
||||
npx changeset pre exit
|
||||
npx changeset pre enter rc
|
||||
elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
|
||||
echo "Already in RC pre-release mode"
|
||||
else
|
||||
echo "Unknown mode state: $MODE, entering RC mode..."
|
||||
npx changeset pre enter rc
|
||||
fi
|
||||
else
|
||||
echo "No pre.json found, entering RC mode..."
|
||||
npx changeset pre enter rc
|
||||
fi
|
||||
|
||||
@@ -48,15 +65,27 @@ jobs:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
|
||||
- name: Run format
|
||||
run: npm run format
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Build packages
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Create Release Candidate Pull Request or Publish Release Candidate to npm
|
||||
uses: changesets/action@v1
|
||||
with:
|
||||
publish: node ./.github/scripts/pre-release.mjs
|
||||
publish: npx changeset publish
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
VSCE_PAT: ${{ secrets.VSCE_PAT }}
|
||||
OVSX_PAT: ${{ secrets.OVSX_PAT }}
|
||||
|
||||
- name: Commit & Push changes
|
||||
uses: actions-js/push@master
|
||||
|
||||
11
.github/workflows/release.yml
vendored
@@ -22,7 +22,7 @@ jobs:
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: 'npm'
|
||||
cache: "npm"
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
@@ -41,6 +41,15 @@ jobs:
|
||||
- name: Check pre-release mode
|
||||
run: node ./.github/scripts/check-pre-release-mode.mjs "main"
|
||||
|
||||
- name: Build packages
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Create Release Pull Request or Publish to npm
|
||||
uses: changesets/action@v1
|
||||
with:
|
||||
|
||||
108
.github/workflows/weekly-metrics-discord.yml
vendored
Normal file
@@ -0,0 +1,108 @@
|
||||
name: Weekly Metrics to Discord
|
||||
# description: Sends weekly metrics summary to Discord channel
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 9 * * 1" # Every Monday at 9 AM
|
||||
workflow_dispatch:
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
pull-requests: read
|
||||
|
||||
jobs:
|
||||
weekly-metrics:
|
||||
runs-on: ubuntu-latest
|
||||
env:
|
||||
DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: '20'
|
||||
|
||||
- name: Get dates for last 14 days
|
||||
run: |
|
||||
set -Eeuo pipefail
|
||||
# Last 14 days
|
||||
first_day=$(date -d "14 days ago" +%Y-%m-%d)
|
||||
last_day=$(date +%Y-%m-%d)
|
||||
|
||||
echo "first_day=$first_day" >> $GITHUB_ENV
|
||||
echo "last_day=$last_day" >> $GITHUB_ENV
|
||||
echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
|
||||
echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
|
||||
|
||||
- name: Generate issue metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
|
||||
HIDE_TIME_TO_ANSWER: true
|
||||
HIDE_LABEL_METRICS: false
|
||||
OUTPUT_FILE: issue_metrics.md
|
||||
|
||||
- name: Generate PR created metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
|
||||
OUTPUT_FILE: pr_created_metrics.md
|
||||
|
||||
- name: Generate PR merged metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
|
||||
OUTPUT_FILE: pr_merged_metrics.md
|
||||
|
||||
- name: Debug generated metrics
|
||||
run: |
|
||||
set -Eeuo pipefail
|
||||
echo "Listing markdown files in workspace:"
|
||||
ls -la *.md || true
|
||||
for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
|
||||
if [ -f "$f" ]; then
|
||||
echo "== $f (first 10 lines) =="
|
||||
head -n 10 "$f"
|
||||
else
|
||||
echo "Missing $f"
|
||||
fi
|
||||
done
|
||||
|
||||
- name: Parse metrics
|
||||
id: metrics
|
||||
run: node .github/scripts/parse-metrics.mjs
|
||||
|
||||
- name: Send to Discord
|
||||
uses: sarisia/actions-status-discord@v1
|
||||
if: env.DISCORD_WEBHOOK != ''
|
||||
with:
|
||||
webhook: ${{ env.DISCORD_WEBHOOK }}
|
||||
status: Success
|
||||
title: "📊 Weekly Metrics Report"
|
||||
description: |
|
||||
**${{ env.week_of }}**
|
||||
*${{ env.date_range }}*
|
||||
|
||||
**🎯 Issues**
|
||||
• Created: ${{ steps.metrics.outputs.issues_created }}
|
||||
• Closed: ${{ steps.metrics.outputs.issues_closed }}
|
||||
• Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
|
||||
• Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
|
||||
|
||||
**🔀 Pull Requests**
|
||||
• Created: ${{ steps.metrics.outputs.prs_created }}
|
||||
• Merged: ${{ steps.metrics.outputs.prs_merged }}
|
||||
• Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
|
||||
• Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
|
||||
|
||||
**📈 Visual Analytics**
|
||||
https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
|
||||
color: 0x58AFFF
|
||||
username: Task Master Metrics Bot
|
||||
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
|
||||
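The "Parse metrics" step above feeds the Discord message from `.github/scripts/parse-metrics.mjs`, which is not included in this listing. Purely as an illustration of the kind of parsing that script has to do — the report labels and table format are assumptions, only the output names come from the workflow above — a sketch in TypeScript:

```typescript
// Illustrative sketch only: the real .github/scripts/parse-metrics.mjs is not part of this diff.
// Assumes each github/issue-metrics report exposes its numbers as "| Label | value |" rows.
import { readFileSync, appendFileSync, existsSync } from 'node:fs';

function tableValue(markdown: string, label: string): string {
  const row = new RegExp(`\\|\\s*${label}[^|]*\\|\\s*([^|]+)\\|`, 'i').exec(markdown);
  return row ? row[1].trim() : 'N/A';
}

function readReport(path: string): string {
  return existsSync(path) ? readFileSync(path, 'utf8') : '';
}

const issues = readReport('issue_metrics.md');
const prsCreated = readReport('pr_created_metrics.md');
const prsMerged = readReport('pr_merged_metrics.md');

// Map report values onto the step outputs consumed by the Discord step (assumed labels).
const outputs: Record<string, string> = {
  issues_created: tableValue(issues, 'Total number of items created'),
  issues_closed: tableValue(issues, 'Number of items closed'),
  issue_avg_first_response: tableValue(issues, 'Time to first response'),
  issue_avg_time_to_close: tableValue(issues, 'Time to close'),
  prs_created: tableValue(prsCreated, 'Total number of items created'),
  prs_merged: tableValue(prsMerged, 'Total number of items created'),
  pr_avg_first_response: tableValue(prsCreated, 'Time to first response'),
  pr_avg_merge_time: tableValue(prsMerged, 'Time to close')
};

// GitHub Actions step outputs are appended to the file named by GITHUB_OUTPUT.
const outFile = process.env.GITHUB_OUTPUT;
if (outFile) {
  for (const [key, value] of Object.entries(outputs)) {
    appendFileSync(outFile, `${key}=${value}\n`);
  }
}
```

Each `key=value` line appended to `GITHUB_OUTPUT` surfaces as `steps.metrics.outputs.<key>` in the Discord step above.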
5
.gitignore
vendored
@@ -93,4 +93,7 @@ dev-debug.log
apps/extension/.vscode-test/

# apps/extension
apps/extension/vsix-build/
apps/extension/vsix-build/

# turbo
.turbo
@@ -2,7 +2,7 @@
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
6
.manypkg.json
Normal file
@@ -0,0 +1,6 @@
{
  "$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
  "defaultBranch": "main",
  "ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
  "ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
}
@@ -85,7 +85,7 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "your_key_here",
|
||||
"PERPLEXITY_API_KEY": "your_key_here",
|
||||
|
||||
@@ -2,8 +2,8 @@
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 120000,
|
||||
"modelId": "claude-sonnet-4-20250514",
|
||||
"maxTokens": 64000,
|
||||
"temperature": 0.2
|
||||
},
|
||||
"research": {
|
||||
@@ -14,8 +14,8 @@
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-5-sonnet-20241022",
|
||||
"maxTokens": 8192,
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 120000,
|
||||
"temperature": 0.2
|
||||
}
|
||||
},
|
||||
@@ -29,9 +29,15 @@
|
||||
"ollamaBaseURL": "http://localhost:11434/api",
|
||||
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
|
||||
"responseLanguage": "English",
|
||||
"enableCodebaseAnalysis": true,
|
||||
"userId": "1234567890",
|
||||
"azureBaseURL": "https://your-endpoint.azure.com/",
|
||||
"defaultTag": "master"
|
||||
},
|
||||
"claudeCode": {}
|
||||
"claudeCode": {},
|
||||
"grokCli": {
|
||||
"timeout": 120000,
|
||||
"workingDirectory": null,
|
||||
"defaultModel": "grok-4-latest"
|
||||
}
|
||||
}
|
||||
|
||||
188
.taskmaster/docs/MIGRATION-ROADMAP.md
Normal file
@@ -0,0 +1,188 @@
|
||||
# Task Master Migration Roadmap
|
||||
|
||||
## Overview
|
||||
Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.
|
||||
|
||||
## Architecture Vision
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────┐
|
||||
│ User Interfaces │
|
||||
├──────────┬──────────┬──────────┬────────────────┤
|
||||
│ @tm/cli │ @tm/mcp │ @tm/ext │ @tm/web │
|
||||
│ (CLI) │ (MCP) │ (VSCode)│ (Future) │
|
||||
└──────────┴──────────┴──────────┴────────────────┘
|
||||
│
|
||||
▼
|
||||
┌──────────────────────┐
|
||||
│ @tm/core │
|
||||
│ (Business Logic) │
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
## Migration Phases
|
||||
|
||||
### Phase 1: Core Extraction ✅ (In Progress)
|
||||
**Goal**: Move all business logic to @tm/core
|
||||
|
||||
- [x] Create @tm/core package structure
|
||||
- [x] Move types and interfaces
|
||||
- [x] Implement TaskMasterCore facade
|
||||
- [x] Move storage adapters
|
||||
- [x] Move task services
|
||||
- [ ] Move AI providers
|
||||
- [ ] Move parser logic
|
||||
- [ ] Complete test coverage
|
||||
|
||||
### Phase 2: CLI Package Creation 🚧 (Started)
|
||||
**Goal**: Create @tm/cli as a thin presentation layer
|
||||
|
||||
- [x] Create @tm/cli package structure
|
||||
- [x] Implement Command interface pattern
|
||||
- [x] Create CommandRegistry
|
||||
- [x] Build legacy bridge/adapter
|
||||
- [x] Migrate list-tasks command
|
||||
- [ ] Migrate remaining commands one by one
|
||||
- [ ] Remove UI logic from core
|
||||
|
||||
### Phase 3: Transitional Integration
|
||||
**Goal**: Use new packages in existing scripts without breaking changes
|
||||
|
||||
```javascript
|
||||
// scripts/modules/commands.js gradually adopts new commands
|
||||
import { ListTasksCommand } from '@tm/cli';
|
||||
const listCommand = new ListTasksCommand();
|
||||
|
||||
// Old interface remains the same
|
||||
programInstance
|
||||
.command('list')
|
||||
.action(async (options) => {
|
||||
// Use new command internally
|
||||
const result = await listCommand.execute(convertOptions(options));
|
||||
});
|
||||
```
|
||||
|
||||
### Phase 4: MCP Package
|
||||
**Goal**: Separate MCP server as its own package
|
||||
|
||||
- [ ] Create @tm/mcp package
|
||||
- [ ] Move MCP server code
|
||||
- [ ] Use @tm/core for all logic
|
||||
- [ ] MCP becomes a thin RPC layer
|
||||
|
||||
### Phase 5: Complete Migration
|
||||
**Goal**: Remove old scripts, pure monorepo
|
||||
|
||||
- [ ] All commands migrated to @tm/cli
|
||||
- [ ] Remove scripts/modules/task-manager/*
|
||||
- [ ] Remove scripts/modules/commands.js
|
||||
- [ ] Update bin/task-master.js to use @tm/cli
|
||||
- [ ] Clean up dependencies
|
||||
|
||||
## Current Transitional Strategy
|
||||
|
||||
### 1. Adapter Pattern (commands-adapter.js)
|
||||
```javascript
|
||||
// Checks if new CLI is available and uses it
|
||||
// Falls back to legacy implementation if not
|
||||
export async function listTasksAdapter(...args) {
|
||||
if (cliAvailable) {
|
||||
return useNewImplementation(...args);
|
||||
}
|
||||
return useLegacyImplementation(...args);
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Command Bridge Pattern
|
||||
```javascript
|
||||
// Allows new commands to work in old code
|
||||
const bridge = new CommandBridge(new ListTasksCommand());
|
||||
const data = await bridge.run(legacyOptions); // Legacy style
|
||||
const result = await bridge.execute(newOptions); // New style
|
||||
```
|
||||
|
||||
### 3. Gradual File Migration
|
||||
Instead of big-bang refactoring:
|
||||
1. Create new implementation in @tm/cli
|
||||
2. Add adapter in commands-adapter.js
|
||||
3. Update commands.js to use adapter
|
||||
4. Test both paths work
|
||||
5. Eventually remove adapter when all migrated
|
||||
|
||||
## Benefits of This Approach
|
||||
|
||||
1. **No Breaking Changes**: Existing CLI continues to work
|
||||
2. **Incremental PRs**: Each command can be migrated separately
|
||||
3. **Parallel Development**: New features can use new architecture
|
||||
4. **Easy Rollback**: Can disable new implementation if issues
|
||||
5. **Clear Separation**: Business logic (core) vs presentation (cli/mcp/etc)
|
||||
|
||||
## Example PR Sequence
|
||||
|
||||
### PR 1: Core Package Setup ✅
|
||||
- Create @tm/core
|
||||
- Move types and interfaces
|
||||
- Basic TaskMasterCore implementation
|
||||
|
||||
### PR 2: CLI Package Foundation ✅
|
||||
- Create @tm/cli
|
||||
- Command interface and registry
|
||||
- Legacy bridge utilities
|
||||
|
||||
### PR 3: First Command Migration
|
||||
- Migrate list-tasks to new system
|
||||
- Add adapter in scripts
|
||||
- Test both implementations
|
||||
|
||||
### PR 4-N: Migrate Commands One by One
|
||||
- Each PR migrates 1-2 related commands
|
||||
- Small, reviewable changes
|
||||
- Continuous delivery
|
||||
|
||||
### Final PR: Cleanup
|
||||
- Remove legacy implementations
|
||||
- Remove adapters
|
||||
- Update documentation
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
### Dual Testing During Migration
|
||||
```javascript
|
||||
describe('List Tasks', () => {
|
||||
it('works with legacy implementation', async () => {
|
||||
// Force legacy
|
||||
const result = await legacyListTasks(...);
|
||||
expect(result).toBeDefined();
|
||||
});
|
||||
|
||||
it('works with new implementation', async () => {
|
||||
// Force new
|
||||
const command = new ListTasksCommand();
|
||||
const result = await command.execute(...);
|
||||
expect(result.success).toBe(true);
|
||||
});
|
||||
|
||||
it('adapter chooses correctly', async () => {
|
||||
// Let adapter decide
|
||||
const result = await listTasksAdapter(...);
|
||||
expect(result).toBeDefined();
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
## Success Metrics
|
||||
|
||||
- [ ] All commands migrated without breaking changes
|
||||
- [ ] Test coverage maintained or improved
|
||||
- [ ] Performance maintained or improved
|
||||
- [ ] Cleaner, more maintainable codebase
|
||||
- [ ] Easy to add new interfaces (web, desktop, etc.)
|
||||
|
||||
## Notes for Contributors
|
||||
|
||||
1. **Keep PRs Small**: Migrate one command at a time
|
||||
2. **Test Both Paths**: Ensure legacy and new both work
|
||||
3. **Document Changes**: Update this roadmap as you go
|
||||
4. **Communicate**: Discuss in PRs if architecture needs adjustment
|
||||
|
||||
This is a living document - update as the migration progresses!
|
||||
91
.taskmaster/docs/prd-tm-start.txt
Normal file
@@ -0,0 +1,91 @@
|
||||
<context>
|
||||
# Overview
|
||||
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.
|
||||
|
||||
We follow the Commander class pattern, reuse task retrieval from `show` command flow. Extremely minimal for 1-hour hackathon timeline.
|
||||
|
||||
# Core Features
|
||||
- `start` command (Commander class style)
|
||||
- Hard-coded executor: `claude-code`
|
||||
- Standardized prompt designed for minimal changes following existing patterns
|
||||
- Shows claude-code output (no streaming)
|
||||
- Git status check for success detection
|
||||
- Auto-mark task done if successful
|
||||
|
||||
# User Experience
|
||||
```
|
||||
task-master start 12
|
||||
```
|
||||
1) Fetches Task #12 details
|
||||
2) Builds standardized prompt with task context
|
||||
3) Runs claude-code with the prompt
|
||||
4) Shows output
|
||||
5) Checks git status for changes
|
||||
6) Auto-marks task done if changes detected
|
||||
</context>
|
||||
|
||||
<PRD>
|
||||
# Technical Architecture
|
||||
|
||||
- Command pattern:
|
||||
- Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) and task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)
|
||||
|
||||
- Task retrieval:
|
||||
- Use `@tm/core` via `createTaskMasterCore` to get task by ID
|
||||
- Extract: id, title, description, details
|
||||
|
||||
- Executor (ultra-simple approach):
|
||||
- Execute `claude "full prompt here"` command directly
|
||||
- The prompt tells Claude to first run `tm show <task_id>` to get task details
|
||||
- Then tells Claude to implement the code changes
|
||||
- This opens Claude CLI interface naturally in the current terminal
|
||||
- No subprocess management needed - just execute the command
|
||||
|
||||
- Execution flow:
|
||||
1) Validate `<task_id>` exists; exit with error if not
|
||||
2) Build standardized prompt that includes instructions to run `tm show <task_id>`
|
||||
3) Execute `claude "prompt"` command directly in terminal
|
||||
4) Claude CLI opens, runs `tm show`, then implements changes
|
||||
5) After Claude session ends, run `git status --porcelain` to detect changes
|
||||
6) If changes detected, auto-run `task-master set-status --id=<task_id> --status=done`
|
||||
|
||||
- Success criteria:
  - Success = exit code 0 AND git shows modified/created files
  - Print changed file paths; warn if no changes detected (a minimal sketch of this end-to-end flow follows)

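As noted above, the execution flow reduces to a small amount of glue code. A minimal sketch of that flow — the function name and prompt wording here are illustrative, not part of this PRD:

```typescript
// Minimal sketch of the start flow described above; names are illustrative.
import { execSync } from 'child_process';

async function startTask(taskId: string): Promise<void> {
  // 1) Validate the task exists (via @tm/core); validation is elided here.

  // 2) Build the standardized prompt that tells Claude to run `tm show <task_id>` first.
  const prompt = `First, run: tm show ${taskId}\nThen implement the task with the smallest possible change set.`;

  // 3) Run the Claude CLI in the current terminal and wait for the session to end.
  execSync(`claude "${prompt.replace(/"/g, '\\"')}"`, { stdio: 'inherit' });

  // 4) Detect changes with git; porcelain output is one changed path per line.
  const changed = execSync('git status --porcelain', { encoding: 'utf8' })
    .split('\n')
    .filter((line) => line.trim().length > 0);

  // 5) Auto-mark the task done only when the session produced changes.
  if (changed.length > 0) {
    execSync(`task-master set-status --id=${taskId} --status=done`, { stdio: 'inherit' });
  } else {
    console.warn('No changes detected; task not marked as done.');
  }
}
```

Error handling and output formatting are left to the implementation tasks below.
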
# Development Roadmap
|
||||
|
||||
MVP (ship in ~1 hour):
|
||||
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
|
||||
2) Validate task exists via tm-core
|
||||
3) Build prompt that tells Claude to run `tm show <task_id>` then implement
|
||||
4) Execute `claude "prompt"` command, then check git status and auto-mark done
|
||||
|
||||
# Risks and Mitigations
|
||||
- Executor availability: Error clearly if `claude-code` provider fails
|
||||
- False success: Git-change heuristic acceptable for hackathon MVP
|
||||
|
||||
# Appendix
|
||||
|
||||
**Standardized Prompt Template:**
|
||||
```
|
||||
You are an AI coding assistant with access to this repository's codebase.
|
||||
|
||||
First, run this command to get the task details:
|
||||
tm show <task_id>
|
||||
|
||||
Then implement the task with these requirements:
|
||||
- Make the SMALLEST number of code changes possible
|
||||
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
|
||||
- Do NOT over-engineer the solution
|
||||
- Use existing files/functions/patterns wherever possible
|
||||
- When complete, print: COMPLETED: <brief summary of changes>
|
||||
|
||||
Begin by running tm show <task_id> to understand what needs to be implemented.
|
||||
```
|
||||
|
||||
**Key References:**
|
||||
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
|
||||
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
|
||||
- Node.js `child_process.exec()` - For executing `claude "prompt"` command
|
||||
</PRD>
|
||||
8
.taskmaster/docs/test-prd.txt
Normal file
@@ -0,0 +1,8 @@
Simple Todo App PRD

Create a basic todo list application with the following features:
1. Add new todos
2. Mark todos as complete
3. Delete todos

That's it. Keep it simple.
@@ -0,0 +1,77 @@
|
||||
{
|
||||
"meta": {
|
||||
"generatedAt": "2025-08-06T12:39:03.250Z",
|
||||
"tasksAnalyzed": 8,
|
||||
"totalTasks": 11,
|
||||
"analysisCount": 8,
|
||||
"thresholdScore": 5,
|
||||
"projectName": "Taskmaster",
|
||||
"usedResearch": false
|
||||
},
|
||||
"complexityAnalysis": [
|
||||
{
|
||||
"taskId": 118,
|
||||
"taskTitle": "Create AI Provider Base Architecture",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
|
||||
"reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
|
||||
},
|
||||
{
|
||||
"taskId": 119,
|
||||
"taskTitle": "Implement Provider Factory with Dynamic Imports",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
|
||||
"reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
|
||||
},
|
||||
{
|
||||
"taskId": 120,
|
||||
"taskTitle": "Implement Anthropic Provider",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
|
||||
"reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
|
||||
},
|
||||
{
|
||||
"taskId": 121,
|
||||
"taskTitle": "Create Prompt Builder and Task Parser",
|
||||
"complexityScore": 8,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
|
||||
"reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
|
||||
},
|
||||
{
|
||||
"taskId": 122,
|
||||
"taskTitle": "Implement Configuration Management",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
|
||||
"reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
|
||||
},
|
||||
{
|
||||
"taskId": 123,
|
||||
"taskTitle": "Create Utility Functions and Error Handling",
|
||||
"complexityScore": 4,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
|
||||
"reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
|
||||
},
|
||||
{
|
||||
"taskId": 124,
|
||||
"taskTitle": "Implement TaskMasterCore Facade",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
|
||||
"reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
|
||||
},
|
||||
{
|
||||
"taskId": 125,
|
||||
"taskTitle": "Create Placeholder Providers and Complete Testing",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.",
|
||||
"reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
|
||||
}
|
||||
]
|
||||
}
|
||||
77
.taskmaster/reports/tm-core-complexity.json
Normal file
@@ -0,0 +1,77 @@
|
||||
{
|
||||
"meta": {
|
||||
"generatedAt": "2025-08-06T12:15:01.327Z",
|
||||
"tasksAnalyzed": 8,
|
||||
"totalTasks": 11,
|
||||
"analysisCount": 8,
|
||||
"thresholdScore": 5,
|
||||
"projectName": "Taskmaster",
|
||||
"usedResearch": false
|
||||
},
|
||||
"complexityAnalysis": [
|
||||
{
|
||||
"taskId": 118,
|
||||
"taskTitle": "Create AI Provider Base Architecture",
|
||||
"complexityScore": 4,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the conversion of base-provider.js to TypeScript BaseProvider class: 1) Convert to TypeScript and define IAIProvider interface, 2) Implement abstract class with core properties, 3) Define abstract methods and Template Method pattern, 4) Add retry logic with exponential backoff, 5) Implement validation and logging. Focus on maintaining compatibility with existing provider pattern while adding type safety.",
|
||||
"reasoning": "The codebase already has a well-established BaseAIProvider class in JavaScript. Converting to TypeScript mainly involves adding type definitions and ensuring the existing pattern is preserved. The complexity is moderate because the pattern is already proven in the codebase."
|
||||
},
|
||||
{
|
||||
"taskId": 119,
|
||||
"taskTitle": "Implement Provider Factory with Dynamic Imports",
|
||||
"complexityScore": 3,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ProviderFactory implementation: 1) Set up class structure and types, 2) Implement provider selection switch statement, 3) Add dynamic imports for tree-shaking, 4) Handle provider instantiation with config, 5) Add comprehensive error handling. The existing PROVIDERS registry pattern should guide the implementation.",
|
||||
"reasoning": "The codebase already uses a dual registry pattern (static PROVIDERS and dynamic ProviderRegistry). Creating a factory is straightforward as the provider registration patterns are well-established. Dynamic imports are already used in the codebase."
|
||||
},
|
||||
{
|
||||
"taskId": 120,
|
||||
"taskTitle": "Implement Anthropic Provider",
|
||||
"complexityScore": 3,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement AnthropicProvider following existing patterns: 1) Create class structure with imports, 2) Implement constructor and client initialization, 3) Add generateCompletion with Claude API integration, 4) Implement token calculation and utility methods, 5) Add error handling and exports. Use the existing anthropic.js provider as reference.",
|
||||
"reasoning": "AnthropicProvider already exists in the codebase with full implementation. This task essentially involves adapting the existing implementation to match the new TypeScript architecture, making it relatively straightforward."
|
||||
},
|
||||
{
|
||||
"taskId": 121,
|
||||
"taskTitle": "Create Prompt Builder and Task Parser",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Build prompt system and parser: 1) Create PromptBuilder with template methods, 2) Implement TaskParser with dependency injection, 3) Add parsePRD core logic with file reading, 4) Implement task enrichment and metadata, 5) Add comprehensive error handling. Leverage the existing prompt management system in src/prompts/.",
|
||||
"reasoning": "While the codebase has a sophisticated prompt management system, creating a new PromptBuilder and TaskParser requires understanding the existing prompt templates, JSON schema validation, and integration with the AI provider system. The task involves significant new code."
|
||||
},
|
||||
{
|
||||
"taskId": 122,
|
||||
"taskTitle": "Implement Configuration Management",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ConfigManager with validation: 1) Define Zod schema for IConfiguration, 2) Implement constructor with defaults, 3) Add validate method with error handling, 4) Create type-safe get method with generics, 5) Implement getAll and finalize exports. Reference existing config-manager.js for patterns.",
|
||||
"reasoning": "The codebase has an existing config-manager.js with sophisticated configuration handling. Adding Zod validation and TypeScript generics adds complexity, but the existing patterns provide a solid foundation."
|
||||
},
|
||||
{
|
||||
"taskId": 123,
|
||||
"taskTitle": "Create Utility Functions and Error Handling",
|
||||
"complexityScore": 2,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement utilities and error handling: 1) Create ID generation module with unique formats, 2) Build TaskMasterError base class, 3) Add error sanitization for security, 4) Implement development-only logging, 5) Create specialized error subclasses. Keep implementation simple and focused.",
|
||||
"reasoning": "This is a straightforward utility implementation task. The codebase already has error handling patterns, and ID generation is a simple algorithmic task. The main work is creating clean, reusable utilities."
|
||||
},
|
||||
{
|
||||
"taskId": 124,
|
||||
"taskTitle": "Implement TaskMasterCore Facade",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create main facade class: 1) Set up TaskMasterCore structure with imports, 2) Implement lazy initialization logic, 3) Add parsePRD coordination method, 4) Implement getTasks and other facade methods, 5) Create factory function and exports. This ties together all other components into a cohesive API.",
|
||||
"reasoning": "This is the most complex task as it requires understanding and integrating all other components. The facade must coordinate between configuration, providers, storage, and parsing while maintaining a clean API. It's the architectural keystone of the system."
|
||||
},
|
||||
{
|
||||
"taskId": 125,
|
||||
"taskTitle": "Create Placeholder Providers and Complete Testing",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement testing infrastructure: 1) Create OpenAIProvider placeholder, 2) Create GoogleProvider placeholder, 3) Build MockProvider for testing, 4) Write TaskParser unit tests, 5) Create integration tests for parse-prd flow. Follow the existing test patterns in tests/ directory.",
|
||||
"reasoning": "While creating placeholder providers is simple, the testing infrastructure requires understanding Jest with ES modules, mocking patterns, and comprehensive test coverage. The existing test structure provides good examples to follow."
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,6 +1,6 @@
{
  "currentTag": "master",
  "lastSwitched": "2025-08-01T14:09:25.838Z",
  "lastSwitched": "2025-09-12T22:25:27.535Z",
  "branchTagMapping": {
    "v017-adds": "v017-adds",
    "next": "next"
34
.taskmaster/tasks/task_001_tm-start.txt
Normal file
@@ -0,0 +1,34 @@
|
||||
# Task ID: 1
|
||||
# Title: Create start command class structure
|
||||
# Status: pending
|
||||
# Dependencies: None
|
||||
# Priority: high
|
||||
# Description: Create the basic structure for the start command following the Commander class pattern
|
||||
# Details:
|
||||
Create a new file `apps/cli/src/commands/start.command.ts` based on the existing list.command.ts pattern. Implement the command class with proper command registration, description, and argument handling for the task_id parameter. The class should extend the base Command class and implement the required methods.
|
||||
|
||||
Example structure:
|
||||
```typescript
|
||||
import { Command } from 'commander';
|
||||
import { BaseCommand } from './base.command';
|
||||
|
||||
export class StartCommand extends BaseCommand {
|
||||
public register(program: Command): void {
|
||||
program
|
||||
.command('start')
|
||||
.alias('tm start')
|
||||
.description('Start implementing a task using claude-code')
|
||||
.argument('<task_id>', 'ID of the task to start')
|
||||
.action(async (taskId: string) => {
|
||||
await this.execute(taskId);
|
||||
});
|
||||
}
|
||||
|
||||
public async execute(taskId: string): Promise<void> {
|
||||
// Implementation will be added in subsequent tasks
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
# Test Strategy:
|
||||
Verify the command registers correctly by running the CLI with --help and checking that the start command appears with proper description and arguments. Test the basic structure by ensuring the command can be invoked without errors.
|
||||
26
.taskmaster/tasks/task_002_tm-start.txt
Normal file
@@ -0,0 +1,26 @@
|
||||
# Task ID: 2
|
||||
# Title: Register start command in CLI
|
||||
# Status: pending
|
||||
# Dependencies: 7
|
||||
# Priority: high
|
||||
# Description: Register the start command in the CLI application
|
||||
# Details:
|
||||
Update the CLI application to register the new start command. This involves importing the StartCommand class and adding it to the commands array in the CLI initialization.
|
||||
|
||||
In `apps/cli/src/index.ts` or the appropriate file where commands are registered:
|
||||
|
||||
```typescript
|
||||
import { StartCommand } from './commands/start.command';
|
||||
|
||||
// Add StartCommand to the commands array
|
||||
const commands = [
|
||||
// ... existing commands
|
||||
new StartCommand(),
|
||||
];
|
||||
|
||||
// Register all commands
|
||||
commands.forEach(command => command.register(program));
|
||||
```
|
||||
|
||||
# Test Strategy:
|
||||
Verify the command is correctly registered by running the CLI with --help and checking that the start command appears in the list of available commands.
|
||||
32
.taskmaster/tasks/task_003_tm-start.txt
Normal file
@@ -0,0 +1,32 @@
|
||||
# Task ID: 3
|
||||
# Title: Create standardized prompt builder
|
||||
# Status: pending
|
||||
# Dependencies: 1
|
||||
# Priority: medium
|
||||
# Description: Implement a function to build the standardized prompt for claude-code based on the task details
|
||||
# Details:
|
||||
Create a function in the StartCommand class that builds the standardized prompt according to the template provided in the PRD. The prompt should include instructions for Claude to first run `tm show <task_id>` to get task details, and then implement the required changes.
|
||||
|
||||
```typescript
|
||||
private buildPrompt(taskId: string): string {
|
||||
return `You are an AI coding assistant with access to this repository's codebase.
|
||||
|
||||
First, run this command to get the task details:
|
||||
tm show ${taskId}
|
||||
|
||||
Then implement the task with these requirements:
|
||||
- Make the SMALLEST number of code changes possible
|
||||
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
|
||||
- Do NOT over-engineer the solution
|
||||
- Use existing files/functions/patterns wherever possible
|
||||
- When complete, print: COMPLETED: <brief summary of changes>
|
||||
|
||||
Begin by running tm show ${taskId} to understand what needs to be implemented.`;
|
||||
}
|
||||
```
|
||||
<info added on 2025-09-12T02:40:01.812Z>
|
||||
The prompt builder function will handle task context retrieval by instructing Claude to use the task-master show command. This approach ensures Claude has access to all necessary task details before implementation begins. The command syntax "tm show ${taskId}" embedded in the prompt will direct Claude to first gather the complete task context, including description, requirements, and any existing implementation details, before proceeding with code changes.
|
||||
</info added on 2025-09-12T02:40:01.812Z>
|
||||
|
||||
# Test Strategy:
|
||||
Verify the prompt is correctly formatted by calling the function with a sample task ID and checking that the output matches the expected template with the task ID properly inserted.
|
||||
36
.taskmaster/tasks/task_004_tm-start.txt
Normal file
@@ -0,0 +1,36 @@
|
||||
# Task ID: 4
|
||||
# Title: Implement claude-code executor
|
||||
# Status: pending
|
||||
# Dependencies: 3
|
||||
# Priority: high
|
||||
# Description: Add functionality to execute the claude-code command with the built prompt
|
||||
# Details:
|
||||
Implement the functionality to execute the claude command with the built prompt. This should use Node.js child_process.execSync() so the command runs directly in the terminal and the CLI waits for the Claude session to finish.
|
||||
|
||||
```typescript
import { execSync } from 'child_process';

// Inside execute method, after task validation
private async executeClaude(prompt: string): Promise<void> {
	console.log('Starting claude-code to implement the task...');

	try {
		// Execute claude with the prompt, escaping embedded double quotes
		const claudeCommand = `claude "${prompt.replace(/"/g, '\\"')}"`;

		// Use execSync so the CLI waits for the Claude session to complete
		execSync(claudeCommand, { stdio: 'inherit' });

		console.log('Claude session completed.');
	} catch (error) {
		const message = error instanceof Error ? error.message : String(error);
		console.error('Error executing claude-code:', message);
		process.exit(1);
	}
}
```
|
||||
|
||||
Then call this method from the execute method after building the prompt.
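One caveat with the quoted-string approach above: backticks and `$` inside the prompt are still interpreted by the shell. If that becomes a problem, `spawnSync` with an argument array sidesteps quoting entirely. A sketch only, not part of the task spec:

```typescript
import { spawnSync } from 'child_process';

// Alternative sketch: pass the prompt as a single argv entry so no shell escaping is needed
private executeClaudeWithoutShell(prompt: string): void {
	const result = spawnSync('claude', [prompt], { stdio: 'inherit' });
	if (result.status !== 0) {
		console.error(`claude exited with status ${result.status ?? 'unknown'}`);
		process.exit(1);
	}
}
```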
|
||||
|
||||
# Test Strategy:
|
||||
Test by running the command with a valid task ID and verifying that the claude command is executed with the correct prompt. Check that the command handles errors appropriately if claude-code is not available.
|
||||
49
.taskmaster/tasks/task_007_tm-start.txt
Normal file
@@ -0,0 +1,49 @@
|
||||
# Task ID: 7
|
||||
# Title: Integrate execution flow in start command
|
||||
# Status: pending
|
||||
# Dependencies: 3, 4
|
||||
# Priority: high
|
||||
# Description: Connect all the components to implement the complete execution flow for the start command
|
||||
# Details:
|
||||
Update the execute method in the StartCommand class to integrate all the components and implement the complete execution flow as described in the PRD:
|
||||
1. Validate task exists
|
||||
2. Build standardized prompt
|
||||
3. Execute claude-code
|
||||
4. Check git status for changes
|
||||
5. Auto-mark task as done if changes detected
|
||||
|
||||
```typescript
public async execute(taskId: string): Promise<void> {
	// Validate task exists
	const core = await createTaskMasterCore();
	const task = await core.tasks.getById(parseInt(taskId, 10));

	if (!task) {
		console.error(`Task with ID ${taskId} not found`);
		process.exit(1);
	}

	// Build prompt
	const prompt = this.buildPrompt(taskId);

	// Execute claude-code
	await this.executeClaude(prompt);

	// Check git status
	const changedFiles = await this.checkGitChanges();

	if (changedFiles.length > 0) {
		console.log('\nChanges detected in the following files:');
		changedFiles.forEach(file => console.log(`- ${file}`));

		// Auto-mark task as done
		await this.markTaskAsDone(taskId);
		console.log(`\nTask ${taskId} completed successfully and marked as done.`);
	} else {
		console.warn('\nNo changes detected after claude-code execution. Task not marked as done.');
	}
}
```
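The flow above also calls `checkGitChanges()` and `markTaskAsDone()`, which are not shown in this diff. A minimal sketch of what they might look like, assuming `git status --porcelain` for change detection and the existing `tm set-status` command for completion (both assumptions, not part of this task's spec):

```typescript
import { execSync } from 'child_process';

// Sketch: list files reported as modified or untracked by git
private async checkGitChanges(): Promise<string[]> {
	const output = execSync('git status --porcelain', { encoding: 'utf8' });
	return output
		.split('\n')
		.filter((line) => line.trim().length > 0)
		.map((line) => line.slice(3)); // drop the two-character status code and the space
}

// Sketch: delegate status changes to the existing CLI
private async markTaskAsDone(taskId: string): Promise<void> {
	execSync(`tm set-status --id=${taskId} --status=done`, { stdio: 'inherit' });
}
```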
|
||||
|
||||
# Test Strategy:
|
||||
Test the complete execution flow by running the start command with a valid task ID and verifying that all steps are executed correctly. Test with both scenarios: when changes are detected and when no changes are detected.
|
||||
File diff suppressed because one or more lines are too long
511
.taskmaster/templates/example_prd_rpg.txt
Normal file
@@ -0,0 +1,511 @@
|
||||
<rpg-method>
|
||||
# Repository Planning Graph (RPG) Method - PRD Template
|
||||
|
||||
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
|
||||
2. **Explicit Dependencies**: Never assume - always state what depends on what
|
||||
3. **Topological Order**: Build foundation first, then layers on top
|
||||
4. **Progressive Refinement**: Start broad, refine iteratively
|
||||
|
||||
## How to Use This Template
|
||||
|
||||
- Follow the instructions in each `<instruction>` block
|
||||
- Look at `<example>` blocks to see good vs bad patterns
|
||||
- Fill in the content sections with your project details
|
||||
- The AI reading this will learn the RPG method by following along
|
||||
- Task Master will parse the resulting PRD into dependency-aware tasks
|
||||
|
||||
## Recommended Tools for Creating PRDs
|
||||
|
||||
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
|
||||
|
||||
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
|
||||
|
||||
**Recommended tools:**
|
||||
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
|
||||
- **Cursor/Windsurf** - IDE integration with full codebase context
|
||||
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
|
||||
- **Codex/Grok CLI** - Strong code generation with context awareness
|
||||
|
||||
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
|
||||
</rpg-method>
|
||||
|
||||
---
|
||||
|
||||
<overview>
|
||||
<instruction>
|
||||
Start with the problem, not the solution. Be specific about:
|
||||
- What pain point exists?
|
||||
- Who experiences it?
|
||||
- Why existing solutions don't work?
|
||||
- What success looks like (measurable outcomes)?
|
||||
|
||||
Keep this section focused - don't jump into implementation details yet.
|
||||
</instruction>
|
||||
|
||||
## Problem Statement
|
||||
[Describe the core problem. Be concrete about user pain points.]
|
||||
|
||||
## Target Users
|
||||
[Define personas, their workflows, and what they're trying to achieve.]
|
||||
|
||||
## Success Metrics
|
||||
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
|
||||
|
||||
</overview>
|
||||
|
||||
---
|
||||
|
||||
<functional-decomposition>
|
||||
<instruction>
|
||||
Now think about CAPABILITIES (what the system DOES), not code structure yet.
|
||||
|
||||
Step 1: Identify high-level capability domains
|
||||
- Think: "What major things does this system do?"
|
||||
- Examples: Data Management, Core Processing, Presentation Layer
|
||||
|
||||
Step 2: For each capability, enumerate specific features
|
||||
- Use explore-exploit strategy:
|
||||
* Exploit: What features are REQUIRED for core value?
|
||||
* Explore: What features make this domain COMPLETE?
|
||||
|
||||
Step 3: For each feature, define:
|
||||
- Description: What it does in one sentence
|
||||
- Inputs: What data/context it needs
|
||||
- Outputs: What it produces/returns
|
||||
- Behavior: Key logic or transformations
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
Feature: Schema validation
|
||||
- Description: Validate JSON payloads against defined schemas
|
||||
- Inputs: JSON object, schema definition
|
||||
- Outputs: Validation result (pass/fail) + error details
|
||||
- Behavior: Iterate fields, check types, enforce constraints
|
||||
|
||||
Feature: Business rule validation
|
||||
- Description: Apply domain-specific validation rules
|
||||
- Inputs: Validated data object, rule set
|
||||
- Outputs: Boolean + list of violated rules
|
||||
- Behavior: Execute rules sequentially, short-circuit on failure
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: validation.js
|
||||
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
|
||||
|
||||
Capability: Validation
|
||||
Feature: Make sure data is good
|
||||
(Problem: Too vague. No inputs/outputs. Not actionable.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Capability Tree
|
||||
|
||||
### Capability: [Name]
|
||||
[Brief description of what this capability domain covers]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**: [One sentence]
|
||||
- **Inputs**: [What it needs]
|
||||
- **Outputs**: [What it produces]
|
||||
- **Behavior**: [Key logic]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**:
|
||||
- **Inputs**:
|
||||
- **Outputs**:
|
||||
- **Behavior**:
|
||||
|
||||
### Capability: [Name]
|
||||
...
|
||||
|
||||
</functional-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<structural-decomposition>
|
||||
<instruction>
|
||||
NOW think about code organization. Map capabilities to actual file/folder structure.
|
||||
|
||||
Rules:
|
||||
1. Each capability maps to a module (folder or file)
|
||||
2. Features within a capability map to functions/classes
|
||||
3. Use clear module boundaries - each module has ONE responsibility
|
||||
4. Define what each module exports (public interface)
|
||||
|
||||
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Business rule validation feature)
|
||||
└── index.js (Public exports)
|
||||
|
||||
Exports:
|
||||
- validateSchema(data, schema)
|
||||
- validateRules(data, rules)
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/utils.js
|
||||
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
|
||||
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/everything.js
|
||||
(Problem: One giant file. Features should map to separate files for maintainability.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Repository Structure
|
||||
|
||||
```
project-root/
├── src/
│   ├── [module-name]/        # Maps to: [Capability Name]
│   │   ├── [file].js         # Maps to: [Feature Name]
│   │   └── index.js          # Public exports
│   └── [module-name]/
├── tests/
└── docs/
```
|
||||
|
||||
## Module Definitions
|
||||
|
||||
### Module: [Name]
|
||||
- **Maps to capability**: [Capability from functional decomposition]
|
||||
- **Responsibility**: [Single clear purpose]
|
||||
- **File structure**:
|
||||
```
|
||||
module-name/
|
||||
├── feature1.js
|
||||
├── feature2.js
|
||||
└── index.js
|
||||
```
|
||||
- **Exports**:
|
||||
- `functionName()` - [what it does]
|
||||
- `ClassName` - [what it does]
|
||||
|
||||
</structural-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<dependency-graph>
|
||||
<instruction>
|
||||
This is THE CRITICAL SECTION for Task Master parsing.
|
||||
|
||||
Define explicit dependencies between modules. This creates the topological order for task execution.
|
||||
|
||||
Rules:
|
||||
1. List modules in dependency order (foundation first)
|
||||
2. For each module, state what it depends on
|
||||
3. Foundation modules should have NO dependencies
|
||||
4. Every non-foundation module should depend on at least one other module
|
||||
5. Think: "What must EXIST before I can build this module?"
|
||||
|
||||
<example type="good">
|
||||
Foundation Layer (no dependencies):
|
||||
- error-handling: No dependencies
|
||||
- config-manager: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer:
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator, config-manager]
|
||||
|
||||
Core Layer:
|
||||
- algorithm-engine: Depends on [base-types, error-handling]
|
||||
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
- validation: Depends on API
|
||||
- API: Depends on validation
|
||||
(Problem: Circular dependency. This will cause build/runtime issues.)
|
||||
|
||||
- user-auth: Depends on everything
|
||||
(Problem: Too many dependencies. Should be more focused.)
|
||||
</example>
|
||||
</instruction>
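Before turning the layers into tasks, it can help to sanity-check the declared dependencies for cycles. A small sketch, independent of Task Master itself; module names follow the examples above:

```typescript
// Minimal sketch: detect circular dependencies in a declared module graph
type Graph = Record<string, string[]>; // module -> modules it depends on

function findCycle(graph: Graph): string[] | null {
	const visiting = new Set<string>();
	const done = new Set<string>();
	const path: string[] = [];

	const visit = (node: string): string[] | null => {
		if (done.has(node)) return null;
		if (visiting.has(node)) return [...path, node]; // back-edge found
		visiting.add(node);
		path.push(node);
		for (const dep of graph[node] ?? []) {
			const cycle = visit(dep);
			if (cycle) return cycle;
		}
		path.pop();
		visiting.delete(node);
		done.add(node);
		return null;
	};

	for (const node of Object.keys(graph)) {
		const cycle = visit(node);
		if (cycle) return cycle;
	}
	return null;
}

// The "bad" graph above reports: [ 'validation', 'API', 'validation' ]
console.log(findCycle({ validation: ['API'], API: ['validation'] }));
```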
|
||||
|
||||
## Dependency Chain
|
||||
|
||||
### Foundation Layer (Phase 0)
|
||||
No dependencies - these are built first.
|
||||
|
||||
- **[Module Name]**: [What it provides]
|
||||
- **[Module Name]**: [What it provides]
|
||||
|
||||
### [Layer Name] (Phase 1)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0]]
|
||||
|
||||
### [Layer Name] (Phase 2)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
|
||||
|
||||
[Continue building up layers...]
|
||||
|
||||
</dependency-graph>
|
||||
|
||||
---
|
||||
|
||||
<implementation-roadmap>
|
||||
<instruction>
|
||||
Turn the dependency graph into concrete development phases.
|
||||
|
||||
Each phase should:
|
||||
1. Have clear entry criteria (what must exist before starting)
|
||||
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
|
||||
3. Have clear exit criteria (how do we know phase is complete?)
|
||||
4. Build toward something USABLE (not just infrastructure)
|
||||
|
||||
Phase ordering follows a topological sort of the dependency graph.
|
||||
|
||||
<example type="good">
|
||||
Phase 0: Foundation
|
||||
Entry: Clean repository
|
||||
Tasks:
|
||||
- Implement error handling utilities
|
||||
- Create base type definitions
|
||||
- Setup configuration system
|
||||
Exit: Other modules can import foundation without errors
|
||||
|
||||
Phase 1: Data Layer
|
||||
Entry: Phase 0 complete
|
||||
Tasks:
|
||||
- Implement schema validator (uses: base types, error handling)
|
||||
- Build data ingestion pipeline (uses: validator, config)
|
||||
Exit: End-to-end data flow from input to validated output
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Phase 1: Build Everything
|
||||
Tasks:
|
||||
- API
|
||||
- Database
|
||||
- UI
|
||||
- Tests
|
||||
(Problem: No clear focus. Too broad. Dependencies not considered.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Development Phases
|
||||
|
||||
### Phase 0: [Foundation Name]
|
||||
**Goal**: [What foundational capability this establishes]
|
||||
|
||||
**Entry Criteria**: [What must be true before starting]
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
- Acceptance criteria: [How we know it's done]
|
||||
- Test strategy: [What tests prove it works]
|
||||
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
|
||||
**Exit Criteria**: [Observable outcome that proves phase complete]
|
||||
|
||||
**Delivers**: [What can users/developers do after this phase?]
|
||||
|
||||
---
|
||||
|
||||
### Phase 1: [Layer Name]
|
||||
**Goal**:
|
||||
|
||||
**Entry Criteria**: Phase 0 complete
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
|
||||
**Exit Criteria**:
|
||||
|
||||
**Delivers**:
|
||||
|
||||
---
|
||||
|
||||
[Continue with more phases...]
|
||||
|
||||
</implementation-roadmap>
|
||||
|
||||
---
|
||||
|
||||
<test-strategy>
|
||||
<instruction>
|
||||
Define how testing will be integrated throughout development (TDD approach).
|
||||
|
||||
Specify:
|
||||
1. Test pyramid ratios (unit vs integration vs e2e)
|
||||
2. Coverage requirements
|
||||
3. Critical test scenarios
|
||||
4. Test generation guidelines for Surgical Test Generator
|
||||
|
||||
This section guides the AI when generating tests during the RED phase of TDD.
|
||||
|
||||
<example type="good">
|
||||
Critical Test Scenarios for Data Validation module:
|
||||
- Happy path: Valid data passes all checks
|
||||
- Edge cases: Empty strings, null values, boundary numbers
|
||||
- Error cases: Invalid types, missing required fields
|
||||
- Integration: Validator works with ingestion pipeline
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Test Pyramid
|
||||
|
||||
```
        /\
      /E2E\             ← [X]% (End-to-end, slow, comprehensive)
     /------\
   /Integration\        ← [Y]% (Module interactions)
  /------------\
  / Unit Tests \        ← [Z]% (Fast, isolated, deterministic)
/----------------\
```
|
||||
|
||||
## Coverage Requirements
|
||||
- Line coverage: [X]% minimum
|
||||
- Branch coverage: [X]% minimum
|
||||
- Function coverage: [X]% minimum
|
||||
- Statement coverage: [X]% minimum
|
||||
|
||||
## Critical Test Scenarios
|
||||
|
||||
### [Module/Feature Name]
|
||||
**Happy path**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Edge cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Error cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [How system handles failure]
|
||||
|
||||
**Integration points**:
|
||||
- [What interactions to test]
|
||||
- Expected: [End-to-end behavior]
|
||||
|
||||
## Test Generation Guidelines
|
||||
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
|
||||
|
||||
</test-strategy>
|
||||
|
||||
---
|
||||
|
||||
<architecture>
|
||||
<instruction>
|
||||
Describe technical architecture, data models, and key design decisions.
|
||||
|
||||
Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
|
||||
</instruction>
|
||||
|
||||
## System Components
|
||||
[Major architectural pieces and their responsibilities]
|
||||
|
||||
## Data Models
|
||||
[Core data structures, schemas, database design]
|
||||
|
||||
## Technology Stack
|
||||
[Languages, frameworks, key libraries]
|
||||
|
||||
**Decision: [Technology/Pattern]**
|
||||
- **Rationale**: [Why chosen]
|
||||
- **Trade-offs**: [What we're giving up]
|
||||
- **Alternatives considered**: [What else we looked at]
|
||||
|
||||
</architecture>
|
||||
|
||||
---
|
||||
|
||||
<risks>
|
||||
<instruction>
|
||||
Identify risks that could derail development and how to mitigate them.
|
||||
|
||||
Categories:
|
||||
- Technical risks (complexity, unknowns)
|
||||
- Dependency risks (blocking issues)
|
||||
- Scope risks (creep, underestimation)
|
||||
</instruction>
|
||||
|
||||
## Technical Risks
|
||||
**Risk**: [Description]
|
||||
- **Impact**: [High/Medium/Low - effect on project]
|
||||
- **Likelihood**: [High/Medium/Low]
|
||||
- **Mitigation**: [How to address]
|
||||
- **Fallback**: [Plan B if mitigation fails]
|
||||
|
||||
## Dependency Risks
|
||||
[External dependencies, blocking issues]
|
||||
|
||||
## Scope Risks
|
||||
[Scope creep, underestimation, unclear requirements]
|
||||
|
||||
</risks>
|
||||
|
||||
---
|
||||
|
||||
<appendix>
|
||||
## References
|
||||
[Papers, documentation, similar systems]
|
||||
|
||||
## Glossary
|
||||
[Domain-specific terms]
|
||||
|
||||
## Open Questions
|
||||
[Things to resolve during development]
|
||||
</appendix>
|
||||
|
||||
---
|
||||
|
||||
<task-master-integration>
|
||||
# How Task Master Uses This PRD
|
||||
|
||||
When you run `task-master parse-prd <file>.txt`, the parser:
|
||||
|
||||
1. **Extracts capabilities** → Main tasks
|
||||
- Each `### Capability:` becomes a top-level task
|
||||
|
||||
2. **Extracts features** → Subtasks
|
||||
- Each `#### Feature:` becomes a subtask under its capability
|
||||
|
||||
3. **Parses dependencies** → Task dependencies
|
||||
- `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
|
||||
|
||||
4. **Orders by phases** → Task priorities
|
||||
- Phase 0 tasks = highest priority
|
||||
- Phase N tasks = lower priority, properly sequenced
|
||||
|
||||
5. **Uses test strategy** → Test generation context
|
||||
- Feeds test scenarios to Surgical Test Generator during implementation
|
||||
|
||||
**Result**: A dependency-aware task graph that can be executed in topological order.
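For illustration, the Data Validation capability from the earlier example might come out of `parse-prd` roughly like this; the field names are a simplified assumption, not the exact tasks.json schema:

```typescript
// Illustrative only - simplified shape, not the exact tasks.json schema
const parsedCapability = {
	id: 1,
	title: 'Data Validation',          // from "### Capability: Data Validation"
	priority: 'high',                  // early-phase modules get higher priority
	dependencies: [],                  // foundation layer: nothing to wait on
	subtasks: [
		{ id: 1, title: 'Schema validation', dependencies: [] },        // from "#### Feature:"
		{ id: 2, title: 'Business rule validation', dependencies: [1] } // runs on already-validated data
	]
};
```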
|
||||
|
||||
## Why RPG Structure Matters
|
||||
|
||||
Traditional flat PRDs lead to:
|
||||
- ❌ Unclear task dependencies
|
||||
- ❌ Arbitrary task ordering
|
||||
- ❌ Circular dependencies discovered late
|
||||
- ❌ Poorly scoped tasks
|
||||
|
||||
RPG-structured PRDs provide:
|
||||
- ✅ Explicit dependency chains
|
||||
- ✅ Topological execution order
|
||||
- ✅ Clear module boundaries
|
||||
- ✅ Validated task graph before implementation
|
||||
|
||||
## Tips for Best Results
|
||||
|
||||
1. **Spend time on dependency graph** - This is the most valuable section for Task Master
|
||||
2. **Keep features atomic** - Each feature should be independently testable
|
||||
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
|
||||
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
|
||||
</task-master-integration>
|
||||
15
.vscode/settings.json
vendored
@@ -10,5 +10,18 @@
|
||||
},
|
||||
|
||||
"json.format.enable": true,
|
||||
"json.validate.enable": true
|
||||
"json.validate.enable": true,
|
||||
"typescript.tsdk": "node_modules/typescript/lib",
|
||||
"[typescript]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
},
|
||||
"[typescriptreact]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
},
|
||||
"[javascript]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
},
|
||||
"[json]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
}
|
||||
}
|
||||
|
||||
659
CHANGELOG.md
@@ -1,5 +1,664 @@
|
||||
# task-master-ai
|
||||
|
||||
## 0.28.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1273](https://github.com/eyaltoledano/claude-task-master/pull/1273) [`b43b7ce`](https://github.com/eyaltoledano/claude-task-master/commit/b43b7ce201625eee956fb2f8cd332f238bb78c21) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add Codex CLI provider with OAuth authentication
|
||||
- Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
|
||||
- OAuth-first authentication via `codex login` - no API key required
|
||||
- Optional OPENAI_CODEX_API_KEY support
|
||||
- Codebase analysis capabilities automatically enabled
|
||||
- Command-specific settings and approval/sandbox modes
|
||||
|
||||
- [#1215](https://github.com/eyaltoledano/claude-task-master/pull/1215) [`0079b7d`](https://github.com/eyaltoledano/claude-task-master/commit/0079b7defdad550811f704c470fdd01955d91d4d) Thanks [@joedanz](https://github.com/joedanz)! - Add Cursor IDE custom slash command support
|
||||
|
||||
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Added api keys page on docs website: docs.task-master.dev/getting-started/api-keys
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Move to AI SDK v5:
|
||||
- Works better with claude-code and gemini-cli as ai providers
|
||||
- Improved openai model family compatibility
|
||||
- Migrate ollama provider to v2
|
||||
- Closes #1223, #1013, #1161, #1174
|
||||
|
||||
- [#1262](https://github.com/eyaltoledano/claude-task-master/pull/1262) [`738ec51`](https://github.com/eyaltoledano/claude-task-master/commit/738ec51c049a295a12839b2dfddaf05e23b8fede) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Migrate AI services to use generateObject for structured data generation
|
||||
|
||||
This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.
|
||||
|
||||
### Key Changes:
|
||||
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
|
||||
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
|
||||
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
|
||||
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
|
||||
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats
|
||||
|
||||
### Technical Improvements:
|
||||
- Centralized provider configuration in `ai-providers-unified.js`
|
||||
- Added `generateObject` support detection for each provider
|
||||
- Implemented proper error handling for schema validation failures
|
||||
- Maintained backward compatibility with existing prompt structures
|
||||
|
||||
### Bug Fixes:
|
||||
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
|
||||
- Enhanced prompt instructions to enforce proper ID generation patterns
|
||||
- Ensured subtasks display correctly as X.1, X.2, X.3 format
|
||||
|
||||
This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
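For readers unfamiliar with the pattern, the underlying AI SDK call looks roughly like this. This is a sketch of `generateObject` with a Zod schema, not Task Master's actual service wrapper; the `LanguageModel` parameter stands in for whichever configured provider model is used:

```typescript
import { generateObject, type LanguageModel } from 'ai';
import { z } from 'zod';

// Sketch: structured subtask generation with Zod validation (not the real service wrapper)
const subtaskSchema = z.object({
	subtasks: z.array(
		z.object({
			id: z.number().int().positive(), // sequential 1, 2, 3... rather than 101, 102...
			title: z.string(),
			description: z.string()
		})
	)
});

export async function generateSubtasks(model: LanguageModel, taskTitle: string) {
	const { object } = await generateObject({
		model,
		schema: subtaskSchema,
		prompt: `Break the task "${taskTitle}" into sequential subtasks.`
	});
	return object.subtasks; // already validated against the schema
}
```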
|
||||
|
||||
- [#1112](https://github.com/eyaltoledano/claude-task-master/pull/1112) [`d67b81d`](https://github.com/eyaltoledano/claude-task-master/commit/d67b81d25ddd927fabb6f5deb368e8993519c541) Thanks [@olssonsten](https://github.com/olssonsten)! - Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
|
||||
|
||||
**What's New:**
|
||||
- 300-second timeout for MCP operations (up from default 60 seconds)
|
||||
- Programmatic MCP configuration generation (replaces static asset files)
|
||||
- Enhanced reliability for AI-powered operations
|
||||
- Consistent with other AI coding assistant profiles
|
||||
|
||||
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`986ac11`](https://github.com/eyaltoledano/claude-task-master/commit/986ac117aee00bcd3e6830a0f76e1ad6d10e0bca) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Upgrade grok-cli ai provider to ai sdk v5
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1235](https://github.com/eyaltoledano/claude-task-master/pull/1235) [`aaacc3d`](https://github.com/eyaltoledano/claude-task-master/commit/aaacc3dae36247b4de72b2d2697f49e5df6d01e3) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve `analyze-complexity` cli docs and `--research` flag documentation
|
||||
|
||||
- [#1251](https://github.com/eyaltoledano/claude-task-master/pull/1251) [`0b2c696`](https://github.com/eyaltoledano/claude-task-master/commit/0b2c6967c4605c33a100cff16f6ce8ff09ad06f0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Change parent task back to "pending" when all subtasks are in "pending" state
|
||||
|
||||
- [#1274](https://github.com/eyaltoledano/claude-task-master/pull/1274) [`4f984f8`](https://github.com/eyaltoledano/claude-task-master/commit/4f984f8a6965da9f9c7edd60ddfd6560ac022917) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Do a quick fix on build
|
||||
|
||||
- [#1277](https://github.com/eyaltoledano/claude-task-master/pull/1277) [`7b5a7c4`](https://github.com/eyaltoledano/claude-task-master/commit/7b5a7c4495a68b782f7407fc5d0e0d3ae81f42f5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.
|
||||
|
||||
- [#1276](https://github.com/eyaltoledano/claude-task-master/pull/1276) [`caee040`](https://github.com/eyaltoledano/claude-task-master/commit/caee040907f856d31a660171c9e6d966f23c632e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.
|
||||
|
||||
- [#1172](https://github.com/eyaltoledano/claude-task-master/pull/1172) [`b5fe723`](https://github.com/eyaltoledano/claude-task-master/commit/b5fe723f8ead928e9f2dbde13b833ee70ac3382d) Thanks [@jujax](https://github.com/jujax)! - Fix Claude Code settings validation for pathToClaudeCodeExecutable
|
||||
|
||||
- [#1192](https://github.com/eyaltoledano/claude-task-master/pull/1192) [`2b69936`](https://github.com/eyaltoledano/claude-task-master/commit/2b69936ee7b34346d6de5175af20e077359e2e2a) Thanks [@nukunga](https://github.com/nukunga)! - Fix sonar deep research model failing, should be called `sonar-deep-research`
|
||||
|
||||
- [#1270](https://github.com/eyaltoledano/claude-task-master/pull/1270) [`20004a3`](https://github.com/eyaltoledano/claude-task-master/commit/20004a39ea848f747e1ff48981bfe176554e4055) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix complexity score not showing for `task-master show` and `task-master list`
|
||||
- Added complexity score on "next task" when running `task-master list`
|
||||
- Added colors to complexity to reflect complexity (easy, medium, hard)
|
||||
|
||||
## 0.28.0-rc.2
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1273](https://github.com/eyaltoledano/claude-task-master/pull/1273) [`b43b7ce`](https://github.com/eyaltoledano/claude-task-master/commit/b43b7ce201625eee956fb2f8cd332f238bb78c21) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add Codex CLI provider with OAuth authentication
|
||||
- Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
|
||||
- OAuth-first authentication via `codex login` - no API key required
|
||||
- Optional OPENAI_CODEX_API_KEY support
|
||||
- Codebase analysis capabilities automatically enabled
|
||||
- Command-specific settings and approval/sandbox modes
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1277](https://github.com/eyaltoledano/claude-task-master/pull/1277) [`7b5a7c4`](https://github.com/eyaltoledano/claude-task-master/commit/7b5a7c4495a68b782f7407fc5d0e0d3ae81f42f5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.
|
||||
|
||||
- [#1276](https://github.com/eyaltoledano/claude-task-master/pull/1276) [`caee040`](https://github.com/eyaltoledano/claude-task-master/commit/caee040907f856d31a660171c9e6d966f23c632e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.
|
||||
|
||||
## 0.28.0-rc.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1274](https://github.com/eyaltoledano/claude-task-master/pull/1274) [`4f984f8`](https://github.com/eyaltoledano/claude-task-master/commit/4f984f8a6965da9f9c7edd60ddfd6560ac022917) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Do a quick fix on build
|
||||
|
||||
## 0.28.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1215](https://github.com/eyaltoledano/claude-task-master/pull/1215) [`0079b7d`](https://github.com/eyaltoledano/claude-task-master/commit/0079b7defdad550811f704c470fdd01955d91d4d) Thanks [@joedanz](https://github.com/joedanz)! - Add Cursor IDE custom slash command support
|
||||
|
||||
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Added api keys page on docs website: docs.task-master.dev/getting-started/api-keys
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Move to AI SDK v5:
|
||||
- Works better with claude-code and gemini-cli as ai providers
|
||||
- Improved openai model family compatibility
|
||||
- Migrate ollama provider to v2
|
||||
- Closes #1223, #1013, #1161, #1174
|
||||
|
||||
- [#1262](https://github.com/eyaltoledano/claude-task-master/pull/1262) [`738ec51`](https://github.com/eyaltoledano/claude-task-master/commit/738ec51c049a295a12839b2dfddaf05e23b8fede) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Migrate AI services to use generateObject for structured data generation
|
||||
|
||||
This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.
|
||||
|
||||
### Key Changes:
|
||||
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
|
||||
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
|
||||
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
|
||||
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
|
||||
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats
|
||||
|
||||
### Technical Improvements:
|
||||
- Centralized provider configuration in `ai-providers-unified.js`
|
||||
- Added `generateObject` support detection for each provider
|
||||
- Implemented proper error handling for schema validation failures
|
||||
- Maintained backward compatibility with existing prompt structures
|
||||
|
||||
### Bug Fixes:
|
||||
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
|
||||
- Enhanced prompt instructions to enforce proper ID generation patterns
|
||||
- Ensured subtasks display correctly as X.1, X.2, X.3 format
|
||||
|
||||
This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
|
||||
|
||||
- [#1112](https://github.com/eyaltoledano/claude-task-master/pull/1112) [`d67b81d`](https://github.com/eyaltoledano/claude-task-master/commit/d67b81d25ddd927fabb6f5deb368e8993519c541) Thanks [@olssonsten](https://github.com/olssonsten)! - Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
|
||||
|
||||
**What's New:**
|
||||
- 300-second timeout for MCP operations (up from default 60 seconds)
|
||||
- Programmatic MCP configuration generation (replaces static asset files)
|
||||
- Enhanced reliability for AI-powered operations
|
||||
- Consistent with other AI coding assistant profiles
|
||||
|
||||
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`986ac11`](https://github.com/eyaltoledano/claude-task-master/commit/986ac117aee00bcd3e6830a0f76e1ad6d10e0bca) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Upgrade grok-cli ai provider to ai sdk v5
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1235](https://github.com/eyaltoledano/claude-task-master/pull/1235) [`aaacc3d`](https://github.com/eyaltoledano/claude-task-master/commit/aaacc3dae36247b4de72b2d2697f49e5df6d01e3) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve `analyze-complexity` cli docs and `--research` flag documentation
|
||||
|
||||
- [#1251](https://github.com/eyaltoledano/claude-task-master/pull/1251) [`0b2c696`](https://github.com/eyaltoledano/claude-task-master/commit/0b2c6967c4605c33a100cff16f6ce8ff09ad06f0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Change parent task back to "pending" when all subtasks are in "pending" state
|
||||
|
||||
- [#1172](https://github.com/eyaltoledano/claude-task-master/pull/1172) [`b5fe723`](https://github.com/eyaltoledano/claude-task-master/commit/b5fe723f8ead928e9f2dbde13b833ee70ac3382d) Thanks [@jujax](https://github.com/jujax)! - Fix Claude Code settings validation for pathToClaudeCodeExecutable
|
||||
|
||||
- [#1192](https://github.com/eyaltoledano/claude-task-master/pull/1192) [`2b69936`](https://github.com/eyaltoledano/claude-task-master/commit/2b69936ee7b34346d6de5175af20e077359e2e2a) Thanks [@nukunga](https://github.com/nukunga)! - Fix sonar deep research model failing, should be called `sonar-deep-research`
|
||||
|
||||
- [#1270](https://github.com/eyaltoledano/claude-task-master/pull/1270) [`20004a3`](https://github.com/eyaltoledano/claude-task-master/commit/20004a39ea848f747e1ff48981bfe176554e4055) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix complexity score not showing for `task-master show` and `task-master list`
|
||||
- Added complexity score on "next task" when running `task-master list`
|
||||
- Added colors to complexity to reflect complexity (easy, medium, hard)
|
||||
|
||||
## 0.27.3
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1254](https://github.com/eyaltoledano/claude-task-master/pull/1254) [`af53525`](https://github.com/eyaltoledano/claude-task-master/commit/af53525cbc660a595b67d4bb90d906911c71f45d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed issue where `tm show` command could not find subtasks using dotted notation IDs (e.g., '8.1').
|
||||
- The command now properly searches within parent task subtasks and returns the correct subtask information.
|
||||
|
||||
## 0.27.2
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1248](https://github.com/eyaltoledano/claude-task-master/pull/1248) [`044a7bf`](https://github.com/eyaltoledano/claude-task-master/commit/044a7bfc98049298177bc655cf341d7a8b6a0011) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix set-status for subtasks:
|
||||
- Parent tasks are now set as `done` when subtasks are all `done`
|
||||
- Parent tasks are now set as `in-progress` when at least one subtask is `in-progress` or `done`
|
||||
|
||||
## 0.27.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
|
||||
|
||||
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`c911608`](https://github.com/eyaltoledano/claude-task-master/commit/c911608f60454253f4e024b57ca84e5a5a53f65c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix Zed MCP configuration by adding required "source" property
|
||||
- Add "source": "custom" property to task-master-ai server in Zed settings.json
|
||||
|
||||
## 0.27.1-rc.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - One last testing final final
|
||||
|
||||
## 0.27.1-rc.0
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
|
||||
|
||||
## 0.27.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1220](https://github.com/eyaltoledano/claude-task-master/pull/1220) [`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - No longer need --package=task-master-ai in mcp server
|
||||
- Many users were running into issues with Taskmaster, and the usual fix was to remove `--package` from their mcp.json
|
||||
- We now bundle the whole package, so the `--package` flag is no longer needed
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `task-master start` command for automated task execution with Claude Code
|
||||
- You can now start working on tasks directly by running `task-master start <task-id>` which will automatically launch Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
|
||||
- `task-master start` will automatically detect next-task when no ID is provided.
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Move from javascript to typescript, not a full refactor but we now have a typescript environment and are moving our javascript commands slowly into typescript
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.
|
||||
|
||||
## Setup Instructions
|
||||
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
|
||||
2. **Set the environment variable**:
|
||||
```bash
|
||||
export GROK_CLI_API_KEY="your-api-key-here"
|
||||
```
|
||||
3. **Configure Task Master to use Grok**:
|
||||
```bash
|
||||
task-master models --set-main grok-beta
|
||||
# or
|
||||
task-master models --set-research grok-beta
|
||||
# or
|
||||
task-master models --set-fallback grok-beta
|
||||
```
|
||||
|
||||
## Key Features
|
||||
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
|
||||
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
|
||||
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
|
||||
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
|
||||
|
||||
## Available Models
|
||||
- `grok-beta` - Latest Grok model with codebase context
|
||||
- `grok-vision-beta` - Grok with vision capabilities and codebase context
|
||||
|
||||
The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
|
||||
|
||||
## Credits
|
||||
|
||||
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.
|
||||
|
||||
- [#1225](https://github.com/eyaltoledano/claude-task-master/pull/1225) [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve taskmaster ai provider defaults
|
||||
- moving from main anthropic 3.7 to anthropic sonnet 4
|
||||
- moving from fallback anthropic 3.5 to anthropic 3.7
|
||||
|
||||
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.
|
||||
|
||||
## 0.27.0-rc.2
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
|
||||
|
||||
## 0.27.0-rc.1
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [`255b9f0`](https://github.com/eyaltoledano/claude-task-master/commit/255b9f0334555b0063280abde701445cd62fa11b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Testing one more pre-release iteration
|
||||
|
||||
## 0.27.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Test out the RC
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies [[`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee)]:
|
||||
- @tm/cli@0.27.0-rc.0
|
||||
|
||||
## 0.26.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1133](https://github.com/eyaltoledano/claude-task-master/pull/1133) [`df26c65`](https://github.com/eyaltoledano/claude-task-master/commit/df26c65632000874a73504963b08f18c46283144) Thanks [@neonwatty](https://github.com/neonwatty)! - Restore Taskmaster claude-code commands and move clear commands under /remove to avoid collision with the claude-code /clear command.
|
||||
|
||||
- [#1163](https://github.com/eyaltoledano/claude-task-master/pull/1163) [`37af0f1`](https://github.com/eyaltoledano/claude-task-master/commit/37af0f191227a68d119b7f89a377bf932ee3ac66) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Gemini CLI provider with codebase-aware task generation
|
||||
|
||||
Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands.
|
||||
When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks.
|
||||
Tasks and subtasks generated via the Gemini CLI provider are now informed by actual codebase analysis, resulting in more accurate and contextual outputs.
|
||||
|
||||
- [#1165](https://github.com/eyaltoledano/claude-task-master/pull/1165) [`c4f92f6`](https://github.com/eyaltoledano/claude-task-master/commit/c4f92f6a0aee3435c56eb8d27d9aa9204284833e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add configurable codebase analysis feature flag with multiple configuration sources
|
||||
|
||||
Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.
|
||||
|
||||
Priority order: .env > MCP session env > .taskmaster/config.json.
|
||||
|
||||
Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.
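For reference, the resolution order described above can be pictured as a small lookup; an illustrative sketch only, not the actual implementation:

```typescript
// Sketch of the precedence .env > MCP session env > .taskmaster/config.json > default
function isCodebaseAnalysisEnabled(opts: {
	dotEnv?: Record<string, string>;
	mcpSessionEnv?: Record<string, string>;
	projectConfig?: { enableCodebaseAnalysis?: boolean };
}): boolean {
	const fromEnv = (env?: Record<string, string>) => {
		const raw = env?.TASKMASTER_ENABLE_CODEBASE_ANALYSIS;
		return raw === undefined ? undefined : raw.toLowerCase() !== 'false';
	};
	return (
		fromEnv(opts.dotEnv) ??
		fromEnv(opts.mcpSessionEnv) ??
		opts.projectConfig?.enableCodebaseAnalysis ??
		true // enabled unless explicitly turned off
	);
}
```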
|
||||
|
||||
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - feat(move): improve cross-tag move UX and safety
|
||||
- CLI: print "Next Steps" tips after cross-tag moves that used --ignore-dependencies (validate/fix guidance)
|
||||
- CLI: show dedicated help block on ID collisions (destination tag already has the ID)
|
||||
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
|
||||
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
|
||||
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions
|
||||
|
||||
***
|
||||
|
||||
- [#1162](https://github.com/eyaltoledano/claude-task-master/pull/1162) [`4dad2fd`](https://github.com/eyaltoledano/claude-task-master/commit/4dad2fd613ceac56a65ae9d3c1c03092b8860ac9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations
|
||||
|
||||
When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.
|
||||
|
||||
Commands contextualised:
|
||||
- add-task
|
||||
- update-subtask
|
||||
- update-task
|
||||
- update
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples
|
||||
|
||||
## 0.26.0-rc.1
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1165](https://github.com/eyaltoledano/claude-task-master/pull/1165) [`c4f92f6`](https://github.com/eyaltoledano/claude-task-master/commit/c4f92f6a0aee3435c56eb8d27d9aa9204284833e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add configurable codebase analysis feature flag with multiple configuration sources
|
||||
|
||||
Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.
|
||||
|
||||
Priority order: .env > MCP session env > .taskmaster/config.json.
|
||||
|
||||
Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.
|
||||
|
||||
## 0.26.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1163](https://github.com/eyaltoledano/claude-task-master/pull/1163) [`37af0f1`](https://github.com/eyaltoledano/claude-task-master/commit/37af0f191227a68d119b7f89a377bf932ee3ac66) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Gemini CLI provider with codebase-aware task generation
|
||||
|
||||
Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands.
|
||||
When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks.
|
||||
Tasks and subtasks generated via the Gemini CLI provider are now informed by actual codebase analysis, resulting in more accurate and contextual outputs.
|
||||
|
||||
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - feat(move): improve cross-tag move UX and safety
|
||||
- CLI: print "Next Steps" tips after cross-tag moves that used --ignore-dependencies (validate/fix guidance)
|
||||
- CLI: show dedicated help block on ID collisions (destination tag already has the ID)
|
||||
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
|
||||
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
|
||||
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions
|
||||
|
||||
***
|
||||
|
||||
- [#1162](https://github.com/eyaltoledano/claude-task-master/pull/1162) [`4dad2fd`](https://github.com/eyaltoledano/claude-task-master/commit/4dad2fd613ceac56a65ae9d3c1c03092b8860ac9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations
|
||||
|
||||
When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.
|
||||
|
||||
Commands contextualised:
|
||||
- add-task
|
||||
- update-subtask
|
||||
- update-task
|
||||
- update
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples
|
||||
|
||||
## 0.25.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1152](https://github.com/eyaltoledano/claude-task-master/pull/1152) [`8933557`](https://github.com/eyaltoledano/claude-task-master/commit/89335578ffffc65504b2055c0c85aa7521e5e79b) Thanks [@ben-vargas](https://github.com/ben-vargas)! - fix(claude-code): prevent crash/hang when the optional `@anthropic-ai/claude-code` SDK is missing by guarding `AbortError instanceof` checks and adding explicit SDK presence checks in `doGenerate`/`doStream`. Also bump the optional dependency to `^1.0.88` for improved export consistency.
|
||||
|
||||
Related to JSON truncation handling in #920; this change addresses a separate error-path crash reported in #1142.
|
||||
|
||||
- [#1151](https://github.com/eyaltoledano/claude-task-master/pull/1151) [`db720a9`](https://github.com/eyaltoledano/claude-task-master/commit/db720a954d390bb44838cd021b8813dde8f3d8de) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Temporarily disable streaming for improved model compatibility - will be re-enabled in upcoming release
|
||||
|
||||
## 0.25.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.
|
||||
|
||||
This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.
|
||||
|
||||
## CLI Usage Examples
|
||||
|
||||
Move a single task from one tag to another:
|
||||
|
||||
```bash
|
||||
# Move task 5 from the backlog tag to the feature-1 tag
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-1
|
||||
|
||||
# Move task with its dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies
|
||||
|
||||
# Move task without checking dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
|
||||
```
|
||||
|
||||
Move multiple tasks at once:
|
||||
|
||||
```bash
|
||||
# Move multiple tasks between tags
|
||||
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
|
||||
```
|
||||
|
||||
- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - Add Kilo Code profile integration with custom modes and MCP configuration
|
||||
|
||||
- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
|
||||
- Outputs tasks in a minimal, git-style one-line format, reducing verbose output from 30+ lines of dashboards and tables to a single line per task and making it much easier to scan available tasks.
|
||||
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS (see the formatter sketch after this list)
|
||||
- Color-coded status, priority, and dependencies
|
||||
- Smart title truncation and dependency abbreviation
|
||||
- Subtask support with indentation
|
||||
- Full backward compatibility with existing list options
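To make the one-line layout concrete, here is a small illustrative formatter for the shape described above; the field names, truncation width, and abbreviation rules are assumptions, not the actual `tm list` implementation:

```typescript
// Illustrative one-line formatter matching the described layout; not the real code.
interface TaskRow {
	id: number;
	status: string;
	title: string;
	priority: string;
	dependencies: number[];
}

function formatCompactLine(task: TaskRow, maxTitle = 40): string {
	const title =
		task.title.length > maxTitle ? `${task.title.slice(0, maxTitle - 1)}…` : task.title;
	const deps = task.dependencies.length ? ` → ${task.dependencies.join(',')}` : '';
	return `${task.id} ${task.status} ${title} (${task.priority})${deps}`;
}

// formatCompactLine({ id: 5, status: 'pending', title: 'Add OAuth login', priority: 'high', dependencies: [3, 4] })
// → "5 pending Add OAuth login (high) → 3,4"
```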
|
||||
|
||||
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.
|
||||
|
||||
- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`
|
||||
|
||||
- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove the `clear` Taskmaster Claude Code commands because their names were too similar to Claude Code's built-in clear command
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced
|
||||
|
||||
The command was failing with a "projectRoot is not defined" error because the code referenced `projectRoot` directly instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
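In essence, the fix reads the project root from the context object the command actually receives instead of a bare identifier. A minimal sketch, with an illustrative helper name rather than the real source:

```typescript
import { existsSync } from 'node:fs';
import { join } from 'node:path';

interface CommandContext {
	projectRoot: string;
}

// Illustrative git check; before the fix this referenced a bare `projectRoot`
// identifier that was never in scope, throwing "projectRoot is not defined".
function isGitRepository(context: CommandContext): boolean {
	return existsSync(join(context.projectRoot, '.git'));
}
```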
|
||||
|
||||
## 0.25.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.
|
||||
|
||||
This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.
|
||||
|
||||
## CLI Usage Examples
|
||||
|
||||
Move a single task from one tag to another:
|
||||
|
||||
```bash
|
||||
# Move task 5 from the backlog tag to the feature-1 tag
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-1
|
||||
|
||||
# Move task with its dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies
|
||||
|
||||
# Move task without checking dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
|
||||
```
|
||||
|
||||
Move multiple tasks at once:
|
||||
|
||||
```bash
|
||||
# Move multiple tasks between tags
|
||||
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
|
||||
```
|
||||
|
||||
- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - Add Kilo Code profile integration with custom modes and MCP configuration
|
||||
|
||||
- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
|
||||
- Outputs tasks in a minimal, git-style one-line format, reducing verbose output from 30+ lines of dashboards and tables to a single line per task and making it much easier to scan available tasks.
|
||||
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
|
||||
- Color-coded status, priority, and dependencies
|
||||
- Smart title truncation and dependency abbreviation
|
||||
- Subtask support with indentation
|
||||
- Full backward compatibility with existing list options
|
||||
|
||||
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.
|
||||
|
||||
- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`
|
||||
|
||||
- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove the `clear` Taskmaster Claude Code commands because their names were too similar to Claude Code's built-in clear command
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced
|
||||
|
||||
The command was failing with a "projectRoot is not defined" error because the code referenced `projectRoot` directly instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
|
||||
|
||||
## 0.24.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1098](https://github.com/eyaltoledano/claude-task-master/pull/1098) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
|
||||
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
|
||||
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
|
||||
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
|
||||
|
||||
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
|
||||
- Added GPT-5 model to supported models configuration with SWE score of 0.749
|
||||
|
||||
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
|
||||
|
||||
## New Claude Code Agents
|
||||
|
||||
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
|
||||
|
||||
### task-orchestrator
|
||||
|
||||
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
|
||||
- Analyzes task dependencies to identify parallelizable work
|
||||
- Deploys multiple task-executor agents for concurrent execution
|
||||
- Monitors task completion and updates the dependency graph
|
||||
- Automatically identifies and starts newly unblocked tasks
|
||||
|
||||
### task-executor
|
||||
|
||||
Handles the actual implementation of individual tasks:
|
||||
- Executes specific tasks identified by the orchestrator
|
||||
- Works on concrete implementation rather than planning
|
||||
- Updates task status and logs progress
|
||||
- Can work in parallel with other executors on independent tasks
|
||||
|
||||
### task-checker
|
||||
|
||||
Verifies that completed tasks meet their specifications:
|
||||
- Reviews tasks marked as 'review' status
|
||||
- Validates implementation against requirements
|
||||
- Runs tests and checks for best practices
|
||||
- Ensures quality before marking tasks as 'done'
|
||||
|
||||
## Installation
|
||||
|
||||
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.
|
||||
|
||||
## Usage Example
|
||||
|
||||
```bash
|
||||
# In Claude Code, after initializing a project with tasks:
|
||||
|
||||
# Use task-orchestrator to analyze and coordinate work
|
||||
# The orchestrator will:
|
||||
# 1. Check task dependencies
|
||||
# 2. Identify tasks that can run in parallel
|
||||
# 3. Deploy executors for available work
|
||||
# 4. Monitor progress and deploy new executors as tasks complete
|
||||
|
||||
# Use task-executor for specific task implementation
|
||||
# When the orchestrator identifies task 2.3 needs work:
|
||||
# The executor will implement that specific task
|
||||
```
|
||||
|
||||
## Benefits
|
||||
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
|
||||
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
|
||||
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
|
||||
- **Progress Tracking**: Real-time updates as tasks are completed
|
||||
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
|
||||
|
||||
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
|
||||
|
||||
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
|
||||
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
|
||||
- Ensures generated JSON includes all fields required by the schema
|
||||
|
||||
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
|
||||
- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers (see the sketch after this list)
|
||||
- scope_up_task and scope_down_task MCP tools now work properly
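The underlying fix is a small coercion of the incoming MCP ID before lookup; a hedged sketch (the parameter shape is assumed):

```typescript
// Illustrative only: MCP tools receive IDs as strings, core lookups expect numbers.
function normalizeTaskId(id: string | number): number {
	const numericId = typeof id === 'string' ? Number.parseInt(id, 10) : id;
	if (Number.isNaN(numericId)) {
		throw new Error(`Invalid task id: ${id}`);
	}
	return numericId;
}
```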
|
||||
|
||||
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
|
||||
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
|
||||
- Removed nullable/default modifiers from Zod schemas for broader compatibility
|
||||
- Added automatic JSON repair for malformed AI responses, handling cases like missing array values (see the sketch after this list)
|
||||
- Perplexity now uses JSON mode for more reliable structured output
|
||||
- Post-processing handles default values separately from schema validation
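A rough sketch of the parse-then-repair fallback described above; the repair heuristics shown are assumptions, and the actual implementation may rely on a dedicated JSON-repair step:

```typescript
// Illustrative fallback: strict JSON.parse first, then a light repair pass.
function parseModelJson(raw: string): unknown {
	try {
		return JSON.parse(raw);
	} catch {
		const repaired = raw
			.replace(/,\s*([\]}])/g, '$1') // drop trailing commas before ] or }
			.replace(/,\s*,/g, ','); // collapse double commas left by missing array values
		return JSON.parse(repaired);
	}
}
```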
|
||||
|
||||
## 0.24.0-rc.2
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
|
||||
- Added GPT-5 model to supported models configuration with SWE score of 0.749
|
||||
|
||||
## 0.24.0-rc.1
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1093](https://github.com/eyaltoledano/claude-task-master/pull/1093) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
|
||||
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
|
||||
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
|
||||
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
|
||||
|
||||
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
|
||||
|
||||
## New Claude Code Agents
|
||||
|
||||
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
|
||||
|
||||
### task-orchestrator
|
||||
|
||||
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
|
||||
- Analyzes task dependencies to identify parallelizable work
|
||||
- Deploys multiple task-executor agents for concurrent execution
|
||||
- Monitors task completion and updates the dependency graph
|
||||
- Automatically identifies and starts newly unblocked tasks
|
||||
|
||||
### task-executor
|
||||
|
||||
Handles the actual implementation of individual tasks:
|
||||
- Executes specific tasks identified by the orchestrator
|
||||
- Works on concrete implementation rather than planning
|
||||
- Updates task status and logs progress
|
||||
- Can work in parallel with other executors on independent tasks
|
||||
|
||||
### task-checker
|
||||
|
||||
Verifies that completed tasks meet their specifications:
|
||||
- Reviews tasks marked as 'review' status
|
||||
- Validates implementation against requirements
|
||||
- Runs tests and checks for best practices
|
||||
- Ensures quality before marking tasks as 'done'
|
||||
|
||||
## Installation
|
||||
|
||||
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.
|
||||
|
||||
## Usage Example
|
||||
|
||||
```bash
|
||||
# In Claude Code, after initializing a project with tasks:
|
||||
|
||||
# Use task-orchestrator to analyze and coordinate work
|
||||
# The orchestrator will:
|
||||
# 1. Check task dependencies
|
||||
# 2. Identify tasks that can run in parallel
|
||||
# 3. Deploy executors for available work
|
||||
# 4. Monitor progress and deploy new executors as tasks complete
|
||||
|
||||
# Use task-executor for specific task implementation
|
||||
# When the orchestrator identifies task 2.3 needs work:
|
||||
# The executor will implement that specific task
|
||||
```
|
||||
|
||||
## Benefits
|
||||
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
|
||||
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
|
||||
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
|
||||
- **Progress Tracking**: Real-time updates as tasks are completed
|
||||
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
|
||||
|
||||
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
|
||||
|
||||
## 0.23.1-rc.0
|
||||
|
||||
### Patch Changes
CLAUDE.md (26 changed lines)
@@ -3,3 +3,29 @@
|
||||
## Task Master AI Instructions
|
||||
**Import Task Master's development workflow commands and guidelines, and treat them as if they were imported into the main CLAUDE.md file.**
|
||||
@./.taskmaster/CLAUDE.md
|
||||
|
||||
## Test Guidelines
|
||||
|
||||
### Synchronous Tests
|
||||
- **NEVER use async/await in test functions** unless testing actual asynchronous operations
|
||||
- Use synchronous top-level imports instead of dynamic `await import()`
|
||||
- Test bodies should be synchronous whenever possible
|
||||
- Example:
|
||||
```javascript
|
||||
// ✅ CORRECT - Synchronous imports
|
||||
import { MyClass } from '../src/my-class.js';
|
||||
|
||||
it('should verify behavior', () => {
|
||||
expect(new MyClass().property).toBe(value);
|
||||
});
|
||||
|
||||
// ❌ INCORRECT - Async imports
|
||||
it('should verify behavior', async () => {
|
||||
const { MyClass } = await import('../src/my-class.js');
|
||||
expect(new MyClass().property).toBe(value);
|
||||
});
|
||||
```
|
||||
|
||||
## Changeset Guidelines
|
||||
|
||||
- When creating changesets, remember that they are user-facing: rather than getting into the specifics of the code, describe what the end user gains or what gets fixed by the change.
|
||||
README.md (67 changed lines)
@@ -1,14 +1,39 @@
|
||||
# Task Master [](https://github.com/eyaltoledano/claude-task-master/stargazers)
|
||||
<a name="readme-top"></a>
|
||||
|
||||
[](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml) [](https://badge.fury.io/js/task-master-ai) [](https://discord.gg/taskmasterai) [](LICENSE)
|
||||
<div align='center'>
|
||||
<a href="https://trendshift.io/repositories/13971" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13971" alt="eyaltoledano%2Fclaude-task-master | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
|
||||
</div>
|
||||
|
||||
[](https://www.npmjs.com/package/task-master-ai) [](https://www.npmjs.com/package/task-master-ai) [](https://www.npmjs.com/package/task-master-ai)
|
||||
<p align="center">
|
||||
<a href="https://task-master.dev"><img src="./images/logo.png?raw=true" alt="Taskmaster logo"></a>
|
||||
</p>
|
||||
|
||||
## By [@eyaltoledano](https://x.com/eyaltoledano), [@RalphEcom](https://x.com/RalphEcom) & [@jasonzhou1993](https://x.com/jasonzhou1993)
|
||||
<p align="center">
|
||||
<b>Taskmaster</b>: A task management system for AI-driven development, designed to work seamlessly with any AI chat.
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://discord.gg/taskmasterai" target="_blank"><img src="https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat" alt="Discord"></a> |
|
||||
<a href="https://docs.task-master.dev" target="_blank">Docs</a>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml"><img src="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
|
||||
<a href="https://github.com/eyaltoledano/claude-task-master/stargazers"><img src="https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social" alt="GitHub stars"></a>
|
||||
<a href="https://badge.fury.io/js/task-master-ai"><img src="https://badge.fury.io/js/task-master-ai.svg" alt="npm version"></a>
|
||||
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg" alt="License"></a>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/d18m/task-master-ai?style=flat" alt="NPM Downloads"></a>
|
||||
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dm/task-master-ai?style=flat" alt="NPM Downloads"></a>
|
||||
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dw/task-master-ai?style=flat" alt="NPM Downloads"></a>
|
||||
</p>
|
||||
|
||||
## By [@eyaltoledano](https://x.com/eyaltoledano) & [@RalphEcom](https://x.com/RalphEcom)
|
||||
|
||||
[](https://x.com/eyaltoledano)
|
||||
[](https://x.com/RalphEcom)
|
||||
[](https://x.com/jasonzhou1993)
|
||||
|
||||
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.
|
||||
|
||||
@@ -31,10 +56,23 @@ The following documentation is also available in the `docs` directory:
|
||||
|
||||
#### Quick Install for Cursor 1.0+ (One-Click)
|
||||
|
||||
[](https://cursor.com/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
|
||||
[](https://cursor.com/en/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
|
||||
|
||||
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
|
||||
|
||||
#### Claude Code Quick Install
|
||||
|
||||
For Claude Code users:
|
||||
|
||||
```bash
|
||||
claude mcp add taskmaster-ai -- npx -y task-master-ai
|
||||
```
|
||||
|
||||
Don't forget to add your API keys to the configuration:
|
||||
- in the root `.env` of your project
|
||||
- in the "env" section of your mcp config for taskmaster-ai
|
||||
|
||||
|
||||
## Requirements
|
||||
|
||||
Taskmaster utilizes AI across several commands, and those commands require a separate API key. You can use a variety of models from different AI providers, provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
|
||||
@@ -50,8 +88,9 @@ At least one (1) of the following is required:
|
||||
- xAI API Key (for research or main model)
|
||||
- OpenRouter API Key (for research or main model)
|
||||
- Claude Code (no API key required - requires Claude Code CLI)
|
||||
- Codex CLI (OAuth via ChatGPT subscription - requires Codex CLI)
|
||||
|
||||
Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code). Adding all API keys enables you to seamlessly switch between model providers at will.
|
||||
Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code or Codex CLI with OAuth). Adding all API keys enables you to seamlessly switch between model providers at will.
|
||||
|
||||
## Quick Start
|
||||
|
||||
@@ -67,17 +106,18 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
|
||||
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
|
||||
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
|
||||
| **Q CLI** | Global | `~/.aws/amazonq/mcp.json` | | `mcpServers` |
|
||||
|
||||
##### Manual Configuration
|
||||
|
||||
###### Cursor & Windsurf (`mcpServers`)
|
||||
###### Cursor & Windsurf & Q Developer CLI (`mcpServers`)
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
@@ -97,7 +137,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
|
||||
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
|
||||
|
||||
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
|
||||
> **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
|
||||
|
||||
###### VS Code (`servers` + `type`)
|
||||
|
||||
@@ -106,7 +146,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
"servers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
@@ -230,6 +270,11 @@ task-master show 1,3,5
|
||||
# Research fresh information with project context
|
||||
task-master research "What are the latest best practices for JWT authentication?"
|
||||
|
||||
# Move tasks between tags (cross-tag movement)
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
|
||||
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
|
||||
|
||||
# Generate task files
|
||||
task-master generate
|
||||
|
||||
|
||||
apps/cli/CHANGELOG.md (new file, 34 lines)
@@ -0,0 +1,34 @@
|
||||
# @tm/cli
|
||||
|
||||
## null
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@null
|
||||
|
||||
## null
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@null
|
||||
|
||||
## 0.27.0
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@0.26.1
|
||||
|
||||
## 0.27.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo
|
||||
|
||||
## 1.1.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`cd90b4d`](https://github.com/eyaltoledano/claude-task-master/commit/cd90b4d65fc2f04bdad9fb73aba320b58a124240) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo
|
||||
apps/cli/package.json (new file, 53 lines)
@@ -0,0 +1,53 @@
|
||||
{
|
||||
"name": "@tm/cli",
|
||||
"description": "Task Master CLI - Command line interface for task management",
|
||||
"type": "module",
|
||||
"private": true,
|
||||
"main": "./dist/index.js",
|
||||
"types": "./src/index.ts",
|
||||
"exports": {
|
||||
".": "./src/index.ts"
|
||||
},
|
||||
"files": ["dist", "README.md"],
|
||||
"scripts": {
|
||||
"typecheck": "tsc --noEmit",
|
||||
"lint": "biome check src",
|
||||
"format": "biome format --write src",
|
||||
"test": "vitest run",
|
||||
"test:watch": "vitest",
|
||||
"test:coverage": "vitest run --coverage",
|
||||
"test:unit": "vitest run -t unit",
|
||||
"test:integration": "vitest run -t integration",
|
||||
"test:e2e": "vitest run --dir tests/e2e",
|
||||
"test:ci": "vitest run --coverage --reporter=dot"
|
||||
},
|
||||
"dependencies": {
|
||||
"@tm/core": "*",
|
||||
"boxen": "^8.0.1",
|
||||
"chalk": "5.6.2",
|
||||
"cli-table3": "^0.6.5",
|
||||
"commander": "^12.1.0",
|
||||
"inquirer": "^12.5.0",
|
||||
"ora": "^8.2.0"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@biomejs/biome": "^1.9.4",
|
||||
"@types/inquirer": "^9.0.3",
|
||||
"@types/node": "^22.10.5",
|
||||
"tsx": "^4.20.4",
|
||||
"typescript": "^5.9.2",
|
||||
"vitest": "^2.1.8"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18.0.0"
|
||||
},
|
||||
"keywords": ["task-master", "cli", "task-management", "productivity"],
|
||||
"author": "",
|
||||
"license": "MIT",
|
||||
"typesVersions": {
|
||||
"*": {
|
||||
"*": ["src/*"]
|
||||
}
|
||||
},
|
||||
"version": ""
|
||||
}
|
||||
apps/cli/src/command-registry.ts (new file, 255 lines)
@@ -0,0 +1,255 @@
|
||||
/**
|
||||
* @fileoverview Centralized Command Registry
|
||||
* Provides a single location for registering all CLI commands
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
|
||||
// Import all commands
|
||||
import { ListTasksCommand } from './commands/list.command.js';
|
||||
import { ShowCommand } from './commands/show.command.js';
|
||||
import { AuthCommand } from './commands/auth.command.js';
|
||||
import { ContextCommand } from './commands/context.command.js';
|
||||
import { StartCommand } from './commands/start.command.js';
|
||||
import { SetStatusCommand } from './commands/set-status.command.js';
|
||||
import { ExportCommand } from './commands/export.command.js';
|
||||
|
||||
/**
|
||||
* Command metadata for registration
|
||||
*/
|
||||
export interface CommandMetadata {
|
||||
name: string;
|
||||
description: string;
|
||||
commandClass: typeof Command;
|
||||
category?: 'task' | 'auth' | 'utility' | 'development';
|
||||
}
|
||||
|
||||
/**
|
||||
* Registry of all available commands
|
||||
*/
|
||||
export class CommandRegistry {
|
||||
/**
|
||||
* All available commands with their metadata
|
||||
*/
|
||||
private static commands: CommandMetadata[] = [
|
||||
// Task Management Commands
|
||||
{
|
||||
name: 'list',
|
||||
description: 'List all tasks with filtering and status overview',
|
||||
commandClass: ListTasksCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'show',
|
||||
description: 'Display detailed information about a specific task',
|
||||
commandClass: ShowCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'start',
|
||||
description: 'Start working on a task with claude-code',
|
||||
commandClass: StartCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'set-status',
|
||||
description: 'Update the status of one or more tasks',
|
||||
commandClass: SetStatusCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'export',
|
||||
description: 'Export tasks to external systems',
|
||||
commandClass: ExportCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
|
||||
// Authentication & Context Commands
|
||||
{
|
||||
name: 'auth',
|
||||
description: 'Manage authentication with tryhamster.com',
|
||||
commandClass: AuthCommand as any,
|
||||
category: 'auth'
|
||||
},
|
||||
{
|
||||
name: 'context',
|
||||
description: 'Manage workspace context (organization/brief)',
|
||||
commandClass: ContextCommand as any,
|
||||
category: 'auth'
|
||||
}
|
||||
];
|
||||
|
||||
/**
|
||||
* Register all commands on a program instance
|
||||
* @param program - Commander program to register commands on
|
||||
*/
|
||||
static registerAll(program: Command): void {
|
||||
for (const cmd of this.commands) {
|
||||
this.registerCommand(program, cmd);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register specific commands by category
|
||||
* @param program - Commander program to register commands on
|
||||
* @param category - Category of commands to register
|
||||
*/
|
||||
static registerByCategory(
|
||||
program: Command,
|
||||
category: 'task' | 'auth' | 'utility' | 'development'
|
||||
): void {
|
||||
const categoryCommands = this.commands.filter(
|
||||
(cmd) => cmd.category === category
|
||||
);
|
||||
|
||||
for (const cmd of categoryCommands) {
|
||||
this.registerCommand(program, cmd);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register a single command by name
|
||||
* @param program - Commander program to register the command on
|
||||
* @param name - Name of the command to register
|
||||
*/
|
||||
static registerByName(program: Command, name: string): void {
|
||||
const cmd = this.commands.find((c) => c.name === name);
|
||||
if (cmd) {
|
||||
this.registerCommand(program, cmd);
|
||||
} else {
|
||||
throw new Error(`Command '${name}' not found in registry`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register a single command
|
||||
* @param program - Commander program to register the command on
|
||||
* @param metadata - Command metadata
|
||||
*/
|
||||
private static registerCommand(
|
||||
program: Command,
|
||||
metadata: CommandMetadata
|
||||
): void {
|
||||
const CommandClass = metadata.commandClass as any;
|
||||
|
||||
// Use the static registration method that all commands have
|
||||
if (CommandClass.registerOn) {
|
||||
CommandClass.registerOn(program);
|
||||
} else if (CommandClass.register) {
|
||||
CommandClass.register(program);
|
||||
} else {
|
||||
// Fallback to creating instance and adding
|
||||
const instance = new CommandClass();
|
||||
program.addCommand(instance);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all registered command names
|
||||
*/
|
||||
static getCommandNames(): string[] {
|
||||
return this.commands.map((cmd) => cmd.name);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get commands by category
|
||||
*/
|
||||
static getCommandsByCategory(
|
||||
category: 'task' | 'auth' | 'utility' | 'development'
|
||||
): CommandMetadata[] {
|
||||
return this.commands.filter((cmd) => cmd.category === category);
|
||||
}
|
||||
|
||||
/**
|
||||
* Add a new command to the registry
|
||||
* @param metadata - Command metadata to add
|
||||
*/
|
||||
static addCommand(metadata: CommandMetadata): void {
|
||||
// Check if command already exists
|
||||
if (this.commands.some((cmd) => cmd.name === metadata.name)) {
|
||||
throw new Error(`Command '${metadata.name}' already exists in registry`);
|
||||
}
|
||||
|
||||
this.commands.push(metadata);
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a command from the registry
|
||||
* @param name - Name of the command to remove
|
||||
*/
|
||||
static removeCommand(name: string): boolean {
|
||||
const index = this.commands.findIndex((cmd) => cmd.name === name);
|
||||
if (index >= 0) {
|
||||
this.commands.splice(index, 1);
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get command metadata by name
|
||||
* @param name - Name of the command
|
||||
*/
|
||||
static getCommand(name: string): CommandMetadata | undefined {
|
||||
return this.commands.find((cmd) => cmd.name === name);
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a command exists
|
||||
* @param name - Name of the command
|
||||
*/
|
||||
static hasCommand(name: string): boolean {
|
||||
return this.commands.some((cmd) => cmd.name === name);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a formatted list of all commands for display
|
||||
*/
|
||||
static getFormattedCommandList(): string {
|
||||
const categories = {
|
||||
task: 'Task Management',
|
||||
auth: 'Authentication & Context',
|
||||
utility: 'Utilities',
|
||||
development: 'Development'
|
||||
};
|
||||
|
||||
let output = '';
|
||||
|
||||
for (const [category, title] of Object.entries(categories)) {
|
||||
const cmds = this.getCommandsByCategory(
|
||||
category as keyof typeof categories
|
||||
);
|
||||
if (cmds.length > 0) {
|
||||
output += `\n${title}:\n`;
|
||||
for (const cmd of cmds) {
|
||||
output += ` ${cmd.name.padEnd(20)} ${cmd.description}\n`;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return output;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience function to register all CLI commands
|
||||
* @param program - Commander program instance
|
||||
*/
|
||||
export function registerAllCommands(program: Command): void {
|
||||
CommandRegistry.registerAll(program);
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience function to register commands by category
|
||||
* @param program - Commander program instance
|
||||
* @param category - Category to register
|
||||
*/
|
||||
export function registerCommandsByCategory(
|
||||
program: Command,
|
||||
category: 'task' | 'auth' | 'utility' | 'development'
|
||||
): void {
|
||||
CommandRegistry.registerByCategory(program, category);
|
||||
}
|
||||
|
||||
// Export the registry for direct access if needed
|
||||
export default CommandRegistry;
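For context, a minimal sketch of how this registry could be wired up from a CLI entry point; the program name, version string, and file location are placeholders, not part of the diff:

```typescript
import { Command } from 'commander';
import {
	CommandRegistry,
	registerAllCommands
} from './command-registry.js';

const program = new Command('task-master').version('0.0.0');

// Register every command, or cherry-pick a category via registerCommandsByCategory().
registerAllCommands(program);

// Optional: print a grouped overview of what was registered.
console.log(CommandRegistry.getFormattedCommandList());

program.parse(process.argv);
```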
|
||||
apps/cli/src/commands/auth.command.ts (new file, 503 lines)
@@ -0,0 +1,503 @@
|
||||
/**
|
||||
* @fileoverview Auth command using Commander's native class pattern
|
||||
* Extends Commander.Command for better integration with the framework
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import inquirer from 'inquirer';
|
||||
import ora, { type Ora } from 'ora';
|
||||
import open from 'open';
|
||||
import {
|
||||
AuthManager,
|
||||
AuthenticationError,
|
||||
type AuthCredentials
|
||||
} from '@tm/core/auth';
|
||||
import * as ui from '../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Result type from auth command
|
||||
*/
|
||||
export interface AuthResult {
|
||||
success: boolean;
|
||||
action: 'login' | 'logout' | 'status' | 'refresh';
|
||||
credentials?: AuthCredentials;
|
||||
message?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* AuthCommand extending Commander's Command class
|
||||
* This is a thin presentation layer over @tm/core's AuthManager
|
||||
*/
|
||||
export class AuthCommand extends Command {
|
||||
private authManager: AuthManager;
|
||||
private lastResult?: AuthResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'auth');
|
||||
|
||||
// Initialize auth manager
|
||||
this.authManager = AuthManager.getInstance();
|
||||
|
||||
// Configure the command with subcommands
|
||||
this.description('Manage authentication with tryhamster.com');
|
||||
|
||||
// Add subcommands
|
||||
this.addLoginCommand();
|
||||
this.addLogoutCommand();
|
||||
this.addStatusCommand();
|
||||
this.addRefreshCommand();
|
||||
|
||||
// Default action shows help
|
||||
this.action(() => {
|
||||
this.help();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add login subcommand
|
||||
*/
|
||||
private addLoginCommand(): void {
|
||||
this.command('login')
|
||||
.description('Authenticate with tryhamster.com')
|
||||
.action(async () => {
|
||||
await this.executeLogin();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add logout subcommand
|
||||
*/
|
||||
private addLogoutCommand(): void {
|
||||
this.command('logout')
|
||||
.description('Logout and clear credentials')
|
||||
.action(async () => {
|
||||
await this.executeLogout();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add status subcommand
|
||||
*/
|
||||
private addStatusCommand(): void {
|
||||
this.command('status')
|
||||
.description('Display authentication status')
|
||||
.action(async () => {
|
||||
await this.executeStatus();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add refresh subcommand
|
||||
*/
|
||||
private addRefreshCommand(): void {
|
||||
this.command('refresh')
|
||||
.description('Refresh authentication token')
|
||||
.action(async () => {
|
||||
await this.executeRefresh();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute login command
|
||||
*/
|
||||
private async executeLogin(): Promise<void> {
|
||||
try {
|
||||
const result = await this.performInteractiveAuth();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Exit cleanly after successful authentication
|
||||
// Small delay to ensure all output is flushed
|
||||
setTimeout(() => {
|
||||
process.exit(0);
|
||||
}, 100);
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute logout command
|
||||
*/
|
||||
private async executeLogout(): Promise<void> {
|
||||
try {
|
||||
const result = await this.performLogout();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute status command
|
||||
*/
|
||||
private async executeStatus(): Promise<void> {
|
||||
try {
|
||||
const result = this.displayStatus();
|
||||
this.setLastResult(result);
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute refresh command
|
||||
*/
|
||||
private async executeRefresh(): Promise<void> {
|
||||
try {
|
||||
const result = await this.refreshToken();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display authentication status
|
||||
*/
|
||||
private displayStatus(): AuthResult {
|
||||
const credentials = this.authManager.getCredentials();
|
||||
|
||||
console.log(chalk.cyan('\n🔐 Authentication Status\n'));
|
||||
|
||||
if (credentials) {
|
||||
console.log(chalk.green('✓ Authenticated'));
|
||||
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
|
||||
console.log(chalk.gray(` User ID: ${credentials.userId}`));
|
||||
console.log(
|
||||
chalk.gray(` Token Type: ${credentials.tokenType || 'standard'}`)
|
||||
);
|
||||
|
||||
if (credentials.expiresAt) {
|
||||
const expiresAt = new Date(credentials.expiresAt);
|
||||
const now = new Date();
|
||||
const hoursRemaining = Math.floor(
|
||||
(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
|
||||
);
|
||||
|
||||
if (hoursRemaining > 0) {
|
||||
console.log(
|
||||
chalk.gray(
|
||||
` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
|
||||
)
|
||||
);
|
||||
} else {
|
||||
console.log(
|
||||
chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
|
||||
);
|
||||
}
|
||||
} else {
|
||||
console.log(chalk.gray(' Expires: Never (API key)'));
|
||||
}
|
||||
|
||||
console.log(
|
||||
chalk.gray(` Saved: ${new Date(credentials.savedAt).toLocaleString()}`)
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'status',
|
||||
credentials,
|
||||
message: 'Authenticated'
|
||||
};
|
||||
} else {
|
||||
console.log(chalk.yellow('✗ Not authenticated'));
|
||||
console.log(
|
||||
chalk.gray('\n Run "task-master auth login" to authenticate')
|
||||
);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'status',
|
||||
message: 'Not authenticated'
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Perform logout
|
||||
*/
|
||||
private async performLogout(): Promise<AuthResult> {
|
||||
try {
|
||||
await this.authManager.logout();
|
||||
ui.displaySuccess('Successfully logged out');
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'logout',
|
||||
message: 'Successfully logged out'
|
||||
};
|
||||
} catch (error) {
|
||||
const message = `Failed to logout: ${(error as Error).message}`;
|
||||
ui.displayError(message);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'logout',
|
||||
message
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Refresh authentication token
|
||||
*/
|
||||
private async refreshToken(): Promise<AuthResult> {
|
||||
const spinner = ora('Refreshing authentication token...').start();
|
||||
|
||||
try {
|
||||
const credentials = await this.authManager.refreshToken();
|
||||
spinner.succeed('Token refreshed successfully');
|
||||
|
||||
console.log(
|
||||
chalk.gray(
|
||||
` New expiration: ${credentials.expiresAt ? new Date(credentials.expiresAt).toLocaleString() : 'Never'}`
|
||||
)
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'refresh',
|
||||
credentials,
|
||||
message: 'Token refreshed successfully'
|
||||
};
|
||||
} catch (error) {
|
||||
spinner.fail('Failed to refresh token');
|
||||
|
||||
if ((error as AuthenticationError).code === 'NO_REFRESH_TOKEN') {
|
||||
ui.displayWarning(
|
||||
'No refresh token available. Please re-authenticate.'
|
||||
);
|
||||
} else {
|
||||
ui.displayError(`Refresh failed: ${(error as Error).message}`);
|
||||
}
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'refresh',
|
||||
message: `Failed to refresh: ${(error as Error).message}`
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Perform interactive authentication
|
||||
*/
|
||||
private async performInteractiveAuth(): Promise<AuthResult> {
|
||||
ui.displayBanner('Task Master Authentication');
|
||||
|
||||
// Check if already authenticated
|
||||
if (this.authManager.isAuthenticated()) {
|
||||
const { continueAuth } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'continueAuth',
|
||||
message:
|
||||
'You are already authenticated. Do you want to re-authenticate?',
|
||||
default: false
|
||||
}
|
||||
]);
|
||||
|
||||
if (!continueAuth) {
|
||||
const credentials = this.authManager.getCredentials();
|
||||
ui.displaySuccess('Using existing authentication');
|
||||
|
||||
if (credentials) {
|
||||
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
|
||||
console.log(chalk.gray(` User ID: ${credentials.userId}`));
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'login',
|
||||
credentials: credentials || undefined,
|
||||
message: 'Using existing authentication'
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
try {
|
||||
// Direct browser authentication - no menu needed
|
||||
const credentials = await this.authenticateWithBrowser();
|
||||
|
||||
ui.displaySuccess('Authentication successful!');
|
||||
console.log(
|
||||
chalk.gray(` Logged in as: ${credentials.email || credentials.userId}`)
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'login',
|
||||
credentials,
|
||||
message: 'Authentication successful'
|
||||
};
|
||||
} catch (error) {
|
||||
this.handleAuthError(error as AuthenticationError);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'login',
|
||||
message: `Authentication failed: ${(error as Error).message}`
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Authenticate with browser using OAuth 2.0 with PKCE
|
||||
*/
|
||||
private async authenticateWithBrowser(): Promise<AuthCredentials> {
|
||||
let authSpinner: Ora | null = null;
|
||||
|
||||
try {
|
||||
// Use AuthManager's new unified OAuth flow method with callbacks
|
||||
const credentials = await this.authManager.authenticateWithOAuth({
|
||||
// Callback to handle browser opening
|
||||
openBrowser: async (authUrl) => {
|
||||
await open(authUrl);
|
||||
},
|
||||
timeout: 5 * 60 * 1000, // 5 minutes
|
||||
|
||||
// Callback when auth URL is ready
|
||||
onAuthUrl: (authUrl) => {
|
||||
// Display authentication instructions
|
||||
console.log(chalk.blue.bold('\n🔐 Browser Authentication\n'));
|
||||
console.log(chalk.white(' Opening your browser to authenticate...'));
|
||||
console.log(chalk.gray(" If the browser doesn't open, visit:"));
|
||||
console.log(chalk.cyan.underline(` ${authUrl}\n`));
|
||||
},
|
||||
|
||||
// Callback when waiting for authentication
|
||||
onWaitingForAuth: () => {
|
||||
authSpinner = ora({
|
||||
text: 'Waiting for authentication...',
|
||||
spinner: 'dots'
|
||||
}).start();
|
||||
},
|
||||
|
||||
// Callback on success
|
||||
onSuccess: () => {
|
||||
if (authSpinner) {
|
||||
authSpinner.succeed('Authentication successful!');
|
||||
}
|
||||
},
|
||||
|
||||
// Callback on error
|
||||
onError: () => {
|
||||
if (authSpinner) {
|
||||
authSpinner.fail('Authentication failed');
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
return credentials;
|
||||
} catch (error) {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle authentication errors
|
||||
*/
|
||||
private handleAuthError(error: AuthenticationError): void {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
switch (error.code) {
|
||||
case 'NETWORK_ERROR':
|
||||
ui.displayWarning(
|
||||
'Please check your internet connection and try again.'
|
||||
);
|
||||
break;
|
||||
case 'INVALID_CREDENTIALS':
|
||||
ui.displayWarning('Please check your credentials and try again.');
|
||||
break;
|
||||
case 'AUTH_EXPIRED':
|
||||
ui.displayWarning(
|
||||
'Your session has expired. Please authenticate again.'
|
||||
);
|
||||
break;
|
||||
default:
|
||||
if (process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack || ''));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle general errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
this.handleAuthError(error);
|
||||
} else {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(result: AuthResult): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): AuthResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current authentication status (for programmatic usage)
|
||||
*/
|
||||
isAuthenticated(): boolean {
|
||||
return this.authManager.isAuthenticated();
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current credentials (for programmatic usage)
|
||||
*/
|
||||
getCredentials(): AuthCredentials | null {
|
||||
return this.authManager.getCredentials();
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
// No resources to clean up for auth command
|
||||
// But keeping method for consistency with other commands
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): AuthCommand {
|
||||
const authCommand = new AuthCommand(name);
|
||||
program.addCommand(authCommand);
|
||||
return authCommand;
|
||||
}
|
||||
}
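A brief sketch of attaching the command and reading its result programmatically, using the static `register()` helper and `getLastResult()` defined above (the argv values are illustrative):

```typescript
import { Command } from 'commander';
import { AuthCommand } from './commands/auth.command.js';

const program = new Command('task-master');
const auth = AuthCommand.register(program);

// Run the `auth status` subcommand, then inspect the structured result.
await program.parseAsync(['node', 'task-master', 'auth', 'status']);
console.log(auth.getLastResult()?.message ?? 'no auth action ran');
```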
|
||||
apps/cli/src/commands/context.command.ts (new file, 704 lines)
@@ -0,0 +1,704 @@
|
||||
/**
|
||||
* @fileoverview Context command for managing org/brief selection
|
||||
* Provides a clean interface for workspace context management
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import inquirer from 'inquirer';
|
||||
import ora, { Ora } from 'ora';
|
||||
import {
|
||||
AuthManager,
|
||||
AuthenticationError,
|
||||
type UserContext
|
||||
} from '@tm/core/auth';
|
||||
import * as ui from '../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Result type from context command
|
||||
*/
|
||||
export interface ContextResult {
|
||||
success: boolean;
|
||||
action: 'show' | 'select-org' | 'select-brief' | 'clear' | 'set';
|
||||
context?: UserContext;
|
||||
message?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* ContextCommand extending Commander's Command class
|
||||
* Manages user's workspace context (org/brief selection)
|
||||
*/
|
||||
export class ContextCommand extends Command {
|
||||
private authManager: AuthManager;
|
||||
private lastResult?: ContextResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'context');
|
||||
|
||||
// Initialize auth manager
|
||||
this.authManager = AuthManager.getInstance();
|
||||
|
||||
// Configure the command
|
||||
this.description(
|
||||
'Manage workspace context (organization and brief selection)'
|
||||
);
|
||||
|
||||
// Add subcommands
|
||||
this.addOrgCommand();
|
||||
this.addBriefCommand();
|
||||
this.addClearCommand();
|
||||
this.addSetCommand();
|
||||
|
||||
// Accept optional positional argument for brief ID or Hamster URL
|
||||
this.argument('[briefOrUrl]', 'Brief ID or Hamster brief URL');
|
||||
|
||||
// Default action: if an argument is provided, resolve and set context; else show
|
||||
this.action(async (briefOrUrl?: string) => {
|
||||
if (briefOrUrl && briefOrUrl.trim().length > 0) {
|
||||
await this.executeSetFromBriefInput(briefOrUrl.trim());
|
||||
return;
|
||||
}
|
||||
await this.executeShow();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add org selection subcommand
|
||||
*/
|
||||
private addOrgCommand(): void {
|
||||
this.command('org')
|
||||
.description('Select an organization')
|
||||
.action(async () => {
|
||||
await this.executeSelectOrg();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add brief selection subcommand
|
||||
*/
|
||||
private addBriefCommand(): void {
|
||||
this.command('brief')
|
||||
.description('Select a brief within the current organization')
|
||||
.action(async () => {
|
||||
await this.executeSelectBrief();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add clear subcommand
|
||||
*/
|
||||
private addClearCommand(): void {
|
||||
this.command('clear')
|
||||
.description('Clear all context selections')
|
||||
.action(async () => {
|
||||
await this.executeClear();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add set subcommand for direct context setting
|
||||
*/
|
||||
private addSetCommand(): void {
|
||||
this.command('set')
|
||||
.description('Set context directly')
|
||||
.option('--org <id>', 'Organization ID')
|
||||
.option('--org-name <name>', 'Organization name')
|
||||
.option('--brief <id>', 'Brief ID')
|
||||
.option('--brief-name <name>', 'Brief name')
|
||||
.action(async (options) => {
|
||||
await this.executeSet(options);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute show current context
|
||||
*/
|
||||
private async executeShow(): Promise<void> {
|
||||
try {
|
||||
const result = this.displayContext();
|
||||
this.setLastResult(result);
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display current context
|
||||
*/
|
||||
private displayContext(): ContextResult {
|
||||
// Check authentication first
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
console.log(chalk.yellow('✗ Not authenticated'));
|
||||
console.log(chalk.gray('\n Run "tm auth login" to authenticate first'));
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'show',
|
||||
message: 'Not authenticated'
|
||||
};
|
||||
}
|
||||
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
console.log(chalk.cyan('\n🌍 Workspace Context\n'));
|
||||
|
||||
if (context && (context.orgId || context.briefId)) {
|
||||
if (context.orgName || context.orgId) {
|
||||
console.log(chalk.green('✓ Organization'));
|
||||
if (context.orgName) {
|
||||
console.log(chalk.white(` ${context.orgName}`));
|
||||
}
|
||||
if (context.orgId) {
|
||||
console.log(chalk.gray(` ID: ${context.orgId}`));
|
||||
}
|
||||
}
|
||||
|
||||
if (context.briefName || context.briefId) {
|
||||
console.log(chalk.green('\n✓ Brief'));
|
||||
if (context.briefName) {
|
||||
console.log(chalk.white(` ${context.briefName}`));
|
||||
}
|
||||
if (context.briefId) {
|
||||
console.log(chalk.gray(` ID: ${context.briefId}`));
|
||||
}
|
||||
}
|
||||
|
||||
if (context.updatedAt) {
|
||||
console.log(
|
||||
chalk.gray(
|
||||
`\n Last updated: ${new Date(context.updatedAt).toLocaleString()}`
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'show',
|
||||
context,
|
||||
message: 'Context loaded'
|
||||
};
|
||||
} else {
|
||||
console.log(chalk.yellow('✗ No context selected'));
|
||||
console.log(
|
||||
chalk.gray('\n Run "tm context org" to select an organization')
|
||||
);
|
||||
console.log(chalk.gray(' Run "tm context brief" to select a brief'));
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'show',
|
||||
message: 'No context selected'
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute org selection
|
||||
*/
|
||||
private async executeSelectOrg(): Promise<void> {
|
||||
try {
|
||||
// Check authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
ui.displayError('Not authenticated. Run "tm auth login" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const result = await this.selectOrganization();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Select an organization interactively
|
||||
*/
|
||||
private async selectOrganization(): Promise<ContextResult> {
|
||||
const spinner = ora('Fetching organizations...').start();
|
||||
|
||||
try {
|
||||
// Fetch organizations from API
|
||||
const organizations = await this.authManager.getOrganizations();
|
||||
spinner.stop();
|
||||
|
||||
if (organizations.length === 0) {
|
||||
ui.displayWarning('No organizations available');
|
||||
return {
|
||||
success: false,
|
||||
action: 'select-org',
|
||||
message: 'No organizations available'
|
||||
};
|
||||
}
|
||||
|
||||
// Prompt for selection
|
||||
const { selectedOrg } = await inquirer.prompt([
|
||||
{
|
||||
type: 'list',
|
||||
name: 'selectedOrg',
|
||||
message: 'Select an organization:',
|
||||
choices: organizations.map((org) => ({
|
||||
name: org.name,
|
||||
value: org
|
||||
}))
|
||||
}
|
||||
]);
|
||||
|
||||
// Update context
|
||||
await this.authManager.updateContext({
|
||||
orgId: selectedOrg.id,
|
||||
orgName: selectedOrg.name,
|
||||
// Clear brief when changing org
|
||||
briefId: undefined,
|
||||
briefName: undefined
|
||||
});
|
||||
|
||||
ui.displaySuccess(`Selected organization: ${selectedOrg.name}`);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'select-org',
|
||||
context: this.authManager.getContext() || undefined,
|
||||
message: `Selected organization: ${selectedOrg.name}`
|
||||
};
|
||||
} catch (error) {
|
||||
spinner.fail('Failed to fetch organizations');
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute brief selection
|
||||
*/
|
||||
private async executeSelectBrief(): Promise<void> {
|
||||
try {
|
||||
// Check authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
ui.displayError('Not authenticated. Run "tm auth login" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Check if org is selected
|
||||
const context = this.authManager.getContext();
|
||||
if (!context?.orgId) {
|
||||
ui.displayError(
|
||||
'No organization selected. Run "tm context org" first.'
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const result = await this.selectBrief(context.orgId);
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Select a brief within the current organization
|
||||
*/
|
||||
private async selectBrief(orgId: string): Promise<ContextResult> {
|
||||
const spinner = ora('Fetching briefs...').start();
|
||||
|
||||
try {
|
||||
// Fetch briefs from API
|
||||
const briefs = await this.authManager.getBriefs(orgId);
|
||||
spinner.stop();
|
||||
|
||||
if (briefs.length === 0) {
|
||||
ui.displayWarning('No briefs available in this organization');
|
||||
return {
|
||||
success: false,
|
||||
action: 'select-brief',
|
||||
message: 'No briefs available'
|
||||
};
|
||||
}
|
||||
|
||||
// Prompt for selection
|
||||
const { selectedBrief } = await inquirer.prompt([
|
||||
{
|
||||
type: 'list',
|
||||
name: 'selectedBrief',
|
||||
message: 'Select a brief:',
|
||||
choices: [
|
||||
{ name: '(No brief - organization level)', value: null },
|
||||
...briefs.map((brief) => ({
|
||||
name: `Brief ${brief.id} (${new Date(brief.createdAt).toLocaleDateString()})`,
|
||||
value: brief
|
||||
}))
|
||||
]
|
||||
}
|
||||
]);
|
||||
|
||||
if (selectedBrief) {
|
||||
// Update context with brief
|
||||
const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
|
||||
await this.authManager.updateContext({
|
||||
briefId: selectedBrief.id,
|
||||
briefName: briefName
|
||||
});
|
||||
|
||||
ui.displaySuccess(`Selected brief: ${briefName}`);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'select-brief',
|
||||
context: this.authManager.getContext() || undefined,
|
||||
message: `Selected brief: ${briefName}`
|
||||
};
|
||||
} else {
|
||||
// Clear brief selection
|
||||
await this.authManager.updateContext({
|
||||
briefId: undefined,
|
||||
briefName: undefined
|
||||
});
|
||||
|
||||
ui.displaySuccess('Cleared brief selection (organization level)');
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'select-brief',
|
||||
context: this.authManager.getContext() || undefined,
|
||||
message: 'Cleared brief selection'
|
||||
};
|
||||
}
|
||||
} catch (error) {
|
||||
spinner.fail('Failed to fetch briefs');
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute clear context
|
||||
*/
|
||||
private async executeClear(): Promise<void> {
|
||||
try {
|
||||
// Check authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
ui.displayError('Not authenticated. Run "tm auth login" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const result = await this.clearContext();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear all context selections
|
||||
*/
|
||||
private async clearContext(): Promise<ContextResult> {
|
||||
try {
|
||||
await this.authManager.clearContext();
|
||||
ui.displaySuccess('Context cleared');
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'clear',
|
||||
message: 'Context cleared'
|
||||
};
|
||||
} catch (error) {
|
||||
ui.displayError(`Failed to clear context: ${(error as Error).message}`);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'clear',
|
||||
message: `Failed to clear context: ${(error as Error).message}`
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute set context with options
|
||||
*/
|
||||
private async executeSet(options: any): Promise<void> {
|
||||
try {
|
||||
// Check authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
ui.displayError('Not authenticated. Run "tm auth login" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const result = await this.setContext(options);
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute setting context from a brief ID or Hamster URL
|
||||
*/
|
||||
private async executeSetFromBriefInput(briefOrUrl: string): Promise<void> {
|
||||
let spinner: Ora | undefined;
|
||||
try {
|
||||
// Check authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
ui.displayError('Not authenticated. Run "tm auth login" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
spinner = ora('Resolving brief...');
|
||||
spinner.start();
|
||||
|
||||
// Extract brief ID
|
||||
const briefId = this.extractBriefId(briefOrUrl);
|
||||
if (!briefId) {
|
||||
spinner.fail('Could not extract a brief ID from the provided input');
|
||||
ui.displayError(
|
||||
`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Fetch brief and resolve its organization
|
||||
const brief = await this.authManager.getBrief(briefId);
|
||||
if (!brief) {
|
||||
spinner.fail('Brief not found or you do not have access');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Fetch org to get a friendly name (optional)
|
||||
let orgName: string | undefined;
|
||||
try {
|
||||
const org = await this.authManager.getOrganization(brief.accountId);
|
||||
orgName = org?.name;
|
||||
} catch {
|
||||
// Non-fatal if org lookup fails
|
||||
}
|
||||
|
||||
// Update context: set org and brief
|
||||
const briefName = `Brief ${brief.id.slice(0, 8)}`;
|
||||
await this.authManager.updateContext({
|
||||
orgId: brief.accountId,
|
||||
orgName,
|
||||
briefId: brief.id,
|
||||
briefName
|
||||
});
|
||||
|
||||
spinner.succeed('Context set from brief');
|
||||
console.log(
|
||||
chalk.gray(
|
||||
` Organization: ${orgName || brief.accountId}\n Brief: ${briefName}`
|
||||
)
|
||||
);
|
||||
|
||||
this.setLastResult({
|
||||
success: true,
|
||||
action: 'set',
|
||||
context: this.authManager.getContext() || undefined,
|
||||
message: 'Context set from brief'
|
||||
});
|
||||
} catch (error: any) {
|
||||
try {
|
||||
if (spinner?.isSpinning) spinner.stop();
|
||||
} catch {}
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract a brief ID from raw input (ID or Hamster URL)
|
||||
*/
|
||||
private extractBriefId(input: string): string | null {
|
||||
const raw = input?.trim() ?? '';
|
||||
if (!raw) return null;
|
||||
|
||||
const parseUrl = (s: string): URL | null => {
|
||||
try {
|
||||
return new URL(s);
|
||||
} catch {}
|
||||
try {
|
||||
return new URL(`https://${s}`);
|
||||
} catch {}
|
||||
return null;
|
||||
};
|
||||
|
||||
const fromParts = (path: string): string | null => {
|
||||
const parts = path.split('/').filter(Boolean);
|
||||
const briefsIdx = parts.lastIndexOf('briefs');
|
||||
const candidate =
|
||||
briefsIdx >= 0 && parts.length > briefsIdx + 1
|
||||
? parts[briefsIdx + 1]
|
||||
: parts[parts.length - 1];
|
||||
return candidate?.trim() || null;
|
||||
};
|
||||
|
||||
// 1) URL (absolute or scheme-less)
|
||||
const url = parseUrl(raw);
|
||||
if (url) {
|
||||
const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
|
||||
const candidate = (qId || fromParts(url.pathname)) ?? null;
|
||||
if (candidate) {
|
||||
// Light sanity check; let API be the final validator
|
||||
if (this.isLikelyId(candidate) || candidate.length >= 8)
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
|
||||
// 2) Looks like a path without scheme
|
||||
if (raw.includes('/')) {
|
||||
const candidate = fromParts(raw);
|
||||
if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
|
||||
// 3) Fallback: raw token
|
||||
return raw;
|
||||
}
|
||||
|
||||
/**
|
||||
* Heuristic to check if a string looks like a brief ID (UUID-like)
|
||||
*/
|
||||
private isLikelyId(value: string): boolean {
|
||||
const uuidRegex =
|
||||
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
|
||||
const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i; // ULID
|
||||
const slugRegex = /^[A-Za-z0-9_-]{16,}$/; // general token
|
||||
return (
|
||||
uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
|
||||
);
|
||||
}
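As an illustration (not part of the diff; the hostname and UUID below are made up), the path heuristic used by extractBriefId resolves the segment after "briefs" like this:

// Standalone version of the same "briefs" segment heuristic.
const briefIdFromPath = (path: string): string | null => {
	const parts = path.split('/').filter(Boolean);
	const briefsIdx = parts.lastIndexOf('briefs');
	const candidate =
		briefsIdx >= 0 && parts.length > briefsIdx + 1
			? parts[briefsIdx + 1]
			: parts[parts.length - 1];
	return candidate?.trim() || null;
};

const url = new URL(
	'https://app.example.test/home/hamster/briefs/3f2c1a9e-0000-4000-8000-000000000000'
);
console.log(briefIdFromPath(url.pathname));
// -> "3f2c1a9e-0000-4000-8000-000000000000" (a UUID, so isLikelyId() would accept it)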
|
||||
|
||||
/**
|
||||
* Set context directly from options
|
||||
*/
|
||||
private async setContext(options: any): Promise<ContextResult> {
|
||||
try {
|
||||
const context: Partial<UserContext> = {};
|
||||
|
||||
if (options.org) {
|
||||
context.orgId = options.org;
|
||||
}
|
||||
if (options.orgName) {
|
||||
context.orgName = options.orgName;
|
||||
}
|
||||
if (options.brief) {
|
||||
context.briefId = options.brief;
|
||||
}
|
||||
if (options.briefName) {
|
||||
context.briefName = options.briefName;
|
||||
}
|
||||
|
||||
if (Object.keys(context).length === 0) {
|
||||
ui.displayWarning('No context options provided');
|
||||
return {
|
||||
success: false,
|
||||
action: 'set',
|
||||
message: 'No context options provided'
|
||||
};
|
||||
}
|
||||
|
||||
await this.authManager.updateContext(context);
|
||||
ui.displaySuccess('Context updated');
|
||||
|
||||
// Display what was set
|
||||
if (context.orgName || context.orgId) {
|
||||
console.log(
|
||||
chalk.gray(` Organization: ${context.orgName || context.orgId}`)
|
||||
);
|
||||
}
|
||||
if (context.briefName || context.briefId) {
|
||||
console.log(
|
||||
chalk.gray(` Brief: ${context.briefName || context.briefId}`)
|
||||
);
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'set',
|
||||
context: this.authManager.getContext() || undefined,
|
||||
message: 'Context updated'
|
||||
};
|
||||
} catch (error) {
|
||||
ui.displayError(`Failed to set context: ${(error as Error).message}`);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'set',
|
||||
message: `Failed to set context: ${(error as Error).message}`
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
if (error.code === 'NOT_AUTHENTICATED') {
|
||||
ui.displayWarning('Please authenticate first: tm auth login');
|
||||
}
|
||||
} else {
|
||||
const msg = error?.message ?? String(error);
|
||||
console.error(chalk.red(`Error: ${msg}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(result: ContextResult): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): ContextResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current context (for programmatic usage)
|
||||
*/
|
||||
getContext(): UserContext | null {
|
||||
return this.authManager.getContext();
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
// No resources to clean up for context command
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ContextCommand {
|
||||
const contextCommand = new ContextCommand(name);
|
||||
program.addCommand(contextCommand);
|
||||
return contextCommand;
|
||||
}
|
||||
}
|
||||
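Taken together, the class above wires up: "tm context" (show the current selection), "tm context <briefIdOrUrl>" (resolve a brief and set both org and brief), "tm context org", "tm context brief", "tm context clear", and "tm context set" with --org, --org-name, --brief and --brief-name flags. The "tm" binary name follows the hints the command itself prints (e.g. 'Run "tm context org" to select an organization').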
379
apps/cli/src/commands/export.command.ts
Normal file
@@ -0,0 +1,379 @@
|
||||
/**
|
||||
* @fileoverview Export command for exporting tasks to external systems
|
||||
* Provides functionality to export tasks to Hamster briefs
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import inquirer from 'inquirer';
|
||||
import ora, { Ora } from 'ora';
|
||||
import {
|
||||
AuthManager,
|
||||
AuthenticationError,
|
||||
type UserContext
|
||||
} from '@tm/core/auth';
|
||||
import { TaskMasterCore, type ExportResult } from '@tm/core';
|
||||
import * as ui from '../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Result type from export command
|
||||
*/
|
||||
export interface ExportCommandResult {
|
||||
success: boolean;
|
||||
action: 'export' | 'validate' | 'cancelled';
|
||||
result?: ExportResult;
|
||||
message?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* ExportCommand extending Commander's Command class
|
||||
* Handles task export to external systems
|
||||
*/
|
||||
export class ExportCommand extends Command {
|
||||
private authManager: AuthManager;
|
||||
private taskMasterCore?: TaskMasterCore;
|
||||
private lastResult?: ExportCommandResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'export');
|
||||
|
||||
// Initialize auth manager
|
||||
this.authManager = AuthManager.getInstance();
|
||||
|
||||
// Configure the command
|
||||
this.description('Export tasks to external systems (e.g., Hamster briefs)');
|
||||
|
||||
// Add options
|
||||
this.option('--org <id>', 'Organization ID to export to');
|
||||
this.option('--brief <id>', 'Brief ID to export tasks to');
|
||||
this.option('--tag <tag>', 'Export tasks from a specific tag');
|
||||
this.option(
|
||||
'--status <status>',
|
||||
'Filter tasks by status (pending, in-progress, done, etc.)'
|
||||
);
|
||||
this.option('--exclude-subtasks', 'Exclude subtasks from export');
|
||||
this.option('-y, --yes', 'Skip confirmation prompt');
|
||||
|
||||
// Accept optional positional argument for brief ID or Hamster URL
|
||||
this.argument('[briefOrUrl]', 'Brief ID or Hamster brief URL');
|
||||
|
||||
// Default action
|
||||
this.action(async (briefOrUrl?: string, options?: any) => {
|
||||
await this.executeExport(briefOrUrl, options);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize the TaskMasterCore
|
||||
*/
|
||||
private async initializeServices(): Promise<void> {
|
||||
if (this.taskMasterCore) {
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// Initialize TaskMasterCore
|
||||
this.taskMasterCore = await TaskMasterCore.create({
|
||||
projectPath: process.cwd()
|
||||
});
|
||||
} catch (error) {
|
||||
throw new Error(
|
||||
`Failed to initialize services: ${(error as Error).message}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the export command
|
||||
*/
|
||||
private async executeExport(
|
||||
briefOrUrl?: string,
|
||||
options?: any
|
||||
): Promise<void> {
|
||||
let spinner: Ora | undefined;
|
||||
|
||||
try {
|
||||
// Check authentication
|
||||
if (!this.authManager.isAuthenticated()) {
|
||||
ui.displayError('Not authenticated. Run "tm auth login" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Initialize services
|
||||
await this.initializeServices();
|
||||
|
||||
// Get current context
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
// Determine org and brief IDs
|
||||
let orgId = options?.org || context?.orgId;
|
||||
let briefId = options?.brief || briefOrUrl || context?.briefId;
|
||||
|
||||
// If a URL/ID was provided as argument, resolve it
|
||||
if (briefOrUrl && !options?.brief) {
|
||||
spinner = ora('Resolving brief...').start();
|
||||
const resolvedBrief = await this.resolveBriefInput(briefOrUrl);
|
||||
if (resolvedBrief) {
|
||||
briefId = resolvedBrief.briefId;
|
||||
orgId = resolvedBrief.orgId;
|
||||
spinner.succeed('Brief resolved');
|
||||
} else {
|
||||
spinner.fail('Could not resolve brief');
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
// Validate we have necessary IDs
|
||||
if (!orgId) {
|
||||
ui.displayError(
|
||||
'No organization selected. Run "tm context org" or use --org flag.'
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
if (!briefId) {
|
||||
ui.displayError(
|
||||
'No brief specified. Run "tm context brief", provide a brief ID/URL, or use --brief flag.'
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Confirm export if not auto-confirmed
|
||||
if (!options?.yes) {
|
||||
const confirmed = await this.confirmExport(orgId, briefId, context);
|
||||
if (!confirmed) {
|
||||
ui.displayWarning('Export cancelled');
|
||||
this.lastResult = {
|
||||
success: false,
|
||||
action: 'cancelled',
|
||||
message: 'User cancelled export'
|
||||
};
|
||||
process.exit(0);
|
||||
}
|
||||
}
|
||||
|
||||
// Perform export
|
||||
spinner = ora('Exporting tasks...').start();
|
||||
|
||||
const exportResult = await this.taskMasterCore!.exportTasks({
|
||||
orgId,
|
||||
briefId,
|
||||
tag: options?.tag,
|
||||
status: options?.status,
|
||||
excludeSubtasks: options?.excludeSubtasks || false
|
||||
});
|
||||
|
||||
if (exportResult.success) {
|
||||
spinner.succeed(
|
||||
`Successfully exported ${exportResult.taskCount} task(s) to brief`
|
||||
);
|
||||
|
||||
// Display summary
|
||||
console.log(chalk.cyan('\n📤 Export Summary\n'));
|
||||
console.log(chalk.white(` Organization: ${orgId}`));
|
||||
console.log(chalk.white(` Brief: ${briefId}`));
|
||||
console.log(chalk.white(` Tasks exported: ${exportResult.taskCount}`));
|
||||
if (options?.tag) {
|
||||
console.log(chalk.gray(` Tag: ${options.tag}`));
|
||||
}
|
||||
if (options?.status) {
|
||||
console.log(chalk.gray(` Status filter: ${options.status}`));
|
||||
}
|
||||
|
||||
if (exportResult.message) {
|
||||
console.log(chalk.gray(`\n ${exportResult.message}`));
|
||||
}
|
||||
} else {
|
||||
spinner.fail('Export failed');
|
||||
if (exportResult.error) {
|
||||
console.error(chalk.red(`\n✗ ${exportResult.error.message}`));
|
||||
}
|
||||
}
|
||||
|
||||
this.lastResult = {
|
||||
success: exportResult.success,
|
||||
action: 'export',
|
||||
result: exportResult
|
||||
};
|
||||
} catch (error: any) {
|
||||
if (spinner?.isSpinning) spinner.fail('Export failed');
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve brief input to get brief and org IDs
|
||||
*/
|
||||
private async resolveBriefInput(
|
||||
briefOrUrl: string
|
||||
): Promise<{ briefId: string; orgId: string } | null> {
|
||||
try {
|
||||
// Extract brief ID from input
|
||||
const briefId = this.extractBriefId(briefOrUrl);
|
||||
if (!briefId) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Fetch brief to get organization
|
||||
const brief = await this.authManager.getBrief(briefId);
|
||||
if (!brief) {
|
||||
ui.displayError('Brief not found or you do not have access');
|
||||
return null;
|
||||
}
|
||||
|
||||
return {
|
||||
briefId: brief.id,
|
||||
orgId: brief.accountId
|
||||
};
|
||||
} catch (error) {
|
||||
console.error(chalk.red(`Failed to resolve brief: ${error}`));
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract a brief ID from raw input (ID or URL)
|
||||
*/
|
||||
private extractBriefId(input: string): string | null {
|
||||
const raw = input?.trim() ?? '';
|
||||
if (!raw) return null;
|
||||
|
||||
const parseUrl = (s: string): URL | null => {
|
||||
try {
|
||||
return new URL(s);
|
||||
} catch {}
|
||||
try {
|
||||
return new URL(`https://${s}`);
|
||||
} catch {}
|
||||
return null;
|
||||
};
|
||||
|
||||
const fromParts = (path: string): string | null => {
|
||||
const parts = path.split('/').filter(Boolean);
|
||||
const briefsIdx = parts.lastIndexOf('briefs');
|
||||
const candidate =
|
||||
briefsIdx >= 0 && parts.length > briefsIdx + 1
|
||||
? parts[briefsIdx + 1]
|
||||
: parts[parts.length - 1];
|
||||
return candidate?.trim() || null;
|
||||
};
|
||||
|
||||
// Try URL parsing
|
||||
const url = parseUrl(raw);
|
||||
if (url) {
|
||||
const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
|
||||
const candidate = (qId || fromParts(url.pathname)) ?? null;
|
||||
if (candidate) {
|
||||
if (this.isLikelyId(candidate) || candidate.length >= 8) {
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check if it looks like a path
|
||||
if (raw.includes('/')) {
|
||||
const candidate = fromParts(raw);
|
||||
if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
|
||||
return candidate;
|
||||
}
|
||||
}
|
||||
|
||||
// Return raw if it looks like an ID
|
||||
return raw;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a string looks like a brief ID
|
||||
*/
|
||||
private isLikelyId(value: string): boolean {
|
||||
const uuidRegex =
|
||||
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
|
||||
const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i;
|
||||
const slugRegex = /^[A-Za-z0-9_-]{16,}$/;
|
||||
return (
|
||||
uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Confirm export with the user
|
||||
*/
|
||||
private async confirmExport(
|
||||
orgId: string,
|
||||
briefId: string,
|
||||
context: UserContext | null
|
||||
): Promise<boolean> {
|
||||
console.log(chalk.cyan('\n📤 Export Tasks\n'));
|
||||
|
||||
// Show org name if available
|
||||
if (context?.orgName) {
|
||||
console.log(chalk.white(` Organization: ${context.orgName}`));
|
||||
console.log(chalk.gray(` ID: ${orgId}`));
|
||||
} else {
|
||||
console.log(chalk.white(` Organization ID: ${orgId}`));
|
||||
}
|
||||
|
||||
// Show brief info
|
||||
if (context?.briefName) {
|
||||
console.log(chalk.white(`\n Brief: ${context.briefName}`));
|
||||
console.log(chalk.gray(` ID: ${briefId}`));
|
||||
} else {
|
||||
console.log(chalk.white(`\n Brief ID: ${briefId}`));
|
||||
}
|
||||
|
||||
const { confirmed } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'confirmed',
|
||||
message: 'Do you want to proceed with export?',
|
||||
default: true
|
||||
}
|
||||
]);
|
||||
|
||||
return confirmed;
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
if (error.code === 'NOT_AUTHENTICATED') {
|
||||
ui.displayWarning('Please authenticate first: tm auth login');
|
||||
}
|
||||
} else {
|
||||
const msg = error?.message ?? String(error);
|
||||
console.error(chalk.red(`Error: ${msg}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last export result (useful for testing)
|
||||
*/
|
||||
public getLastResult(): ExportCommandResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
// No resources to clean up
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ExportCommand {
|
||||
const exportCommand = new ExportCommand(name);
|
||||
program.addCommand(exportCommand);
|
||||
return exportCommand;
|
||||
}
|
||||
}
|
||||
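For callers that want the same export without the Commander wrapper, a hedged sketch using only the tm-core calls exercised above (org and brief IDs are placeholders; assumes an ESM entry point):

import { TaskMasterCore } from '@tm/core';

const core = await TaskMasterCore.create({ projectPath: process.cwd() });
const result = await core.exportTasks({
	orgId: 'org-id-here', // placeholder
	briefId: 'brief-id-here', // placeholder
	status: 'pending',
	excludeSubtasks: false
});
console.log(
	result.success
		? `Exported ${result.taskCount} task(s)`
		: `Export failed: ${result.error?.message}`
);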
484
apps/cli/src/commands/list.command.ts
Normal file
@@ -0,0 +1,484 @@
|
||||
/**
|
||||
* @fileoverview ListTasks command using Commander's native class pattern
|
||||
* Extends Commander.Command for better integration with the framework
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import {
|
||||
createTaskMasterCore,
|
||||
type Task,
|
||||
type TaskStatus,
|
||||
type TaskMasterCore,
|
||||
TASK_STATUSES,
|
||||
OUTPUT_FORMATS,
|
||||
STATUS_ICONS,
|
||||
type OutputFormat
|
||||
} from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import {
|
||||
displayHeader,
|
||||
displayDashboards,
|
||||
calculateTaskStatistics,
|
||||
calculateSubtaskStatistics,
|
||||
calculateDependencyStatistics,
|
||||
getPriorityBreakdown,
|
||||
displayRecommendedNextTask,
|
||||
getTaskDescription,
|
||||
displaySuggestedNextSteps,
|
||||
type NextTaskInfo
|
||||
} from '../ui/index.js';
|
||||
|
||||
/**
|
||||
* Options interface for the list command
|
||||
*/
|
||||
export interface ListCommandOptions {
|
||||
status?: string;
|
||||
tag?: string;
|
||||
withSubtasks?: boolean;
|
||||
format?: OutputFormat;
|
||||
silent?: boolean;
|
||||
project?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Result type from list command
|
||||
*/
|
||||
export interface ListTasksResult {
|
||||
tasks: Task[];
|
||||
total: number;
|
||||
filtered: number;
|
||||
tag?: string;
|
||||
storageType: Exclude<StorageType, 'auto'>;
|
||||
}
|
||||
|
||||
/**
|
||||
* ListTasksCommand extending Commander's Command class
|
||||
* This is a thin presentation layer over @tm/core
|
||||
*/
|
||||
export class ListTasksCommand extends Command {
|
||||
private tmCore?: TaskMasterCore;
|
||||
private lastResult?: ListTasksResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'list');
|
||||
|
||||
// Configure the command
|
||||
this.description('List tasks with optional filtering')
|
||||
.alias('ls')
|
||||
.option('-s, --status <status>', 'Filter by status (comma-separated)')
|
||||
.option('-t, --tag <tag>', 'Filter by tag')
|
||||
.option('--with-subtasks', 'Include subtasks in the output')
|
||||
.option(
|
||||
'-f, --format <format>',
|
||||
'Output format (text, json, compact)',
|
||||
'text'
|
||||
)
|
||||
.option('--silent', 'Suppress output (useful for programmatic usage)')
|
||||
.option('-p, --project <path>', 'Project root directory', process.cwd())
|
||||
.action(async (options: ListCommandOptions) => {
|
||||
await this.executeCommand(options);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the list command
|
||||
*/
|
||||
private async executeCommand(options: ListCommandOptions): Promise<void> {
|
||||
try {
|
||||
// Validate options
|
||||
if (!this.validateOptions(options)) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Initialize tm-core
|
||||
await this.initializeCore(options.project || process.cwd());
|
||||
|
||||
// Get tasks from core
|
||||
const result = await this.getTasks(options);
|
||||
|
||||
// Store result for programmatic access
|
||||
this.setLastResult(result);
|
||||
|
||||
// Display results
|
||||
if (!options.silent) {
|
||||
this.displayResults(result, options);
|
||||
}
|
||||
} catch (error: any) {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate command options
|
||||
*/
|
||||
private validateOptions(options: ListCommandOptions): boolean {
|
||||
// Validate format
|
||||
if (
|
||||
options.format &&
|
||||
!OUTPUT_FORMATS.includes(options.format as OutputFormat)
|
||||
) {
|
||||
console.error(chalk.red(`Invalid format: ${options.format}`));
|
||||
console.error(chalk.gray(`Valid formats: ${OUTPUT_FORMATS.join(', ')}`));
|
||||
return false;
|
||||
}
|
||||
|
||||
// Validate status
|
||||
if (options.status) {
|
||||
const statuses = options.status.split(',').map((s: string) => s.trim());
|
||||
|
||||
for (const status of statuses) {
|
||||
if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
|
||||
console.error(chalk.red(`Invalid status: ${status}`));
|
||||
console.error(
|
||||
chalk.gray(`Valid statuses: ${TASK_STATUSES.join(', ')}`)
|
||||
);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize TaskMasterCore
|
||||
*/
|
||||
private async initializeCore(projectRoot: string): Promise<void> {
|
||||
if (!this.tmCore) {
|
||||
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get tasks from tm-core
|
||||
*/
|
||||
private async getTasks(
|
||||
options: ListCommandOptions
|
||||
): Promise<ListTasksResult> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
|
||||
// Build filter
|
||||
const filter =
|
||||
options.status && options.status !== 'all'
|
||||
? {
|
||||
status: options.status
|
||||
.split(',')
|
||||
.map((s: string) => s.trim() as TaskStatus)
|
||||
}
|
||||
: undefined;
|
||||
|
||||
// Call tm-core
|
||||
const result = await this.tmCore.getTaskList({
|
||||
tag: options.tag,
|
||||
filter,
|
||||
includeSubtasks: options.withSubtasks
|
||||
});
|
||||
|
||||
return result as ListTasksResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display results based on format
|
||||
*/
|
||||
private displayResults(
|
||||
result: ListTasksResult,
|
||||
options: ListCommandOptions
|
||||
): void {
|
||||
const format = (options.format || 'text') as OutputFormat | 'text';
|
||||
|
||||
switch (format) {
|
||||
case 'json':
|
||||
this.displayJson(result);
|
||||
break;
|
||||
|
||||
case 'compact':
|
||||
this.displayCompact(result.tasks, options.withSubtasks);
|
||||
break;
|
||||
|
||||
case 'text':
|
||||
default:
|
||||
this.displayText(result, options.withSubtasks);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in JSON format
|
||||
*/
|
||||
private displayJson(data: ListTasksResult): void {
|
||||
console.log(
|
||||
JSON.stringify(
|
||||
{
|
||||
tasks: data.tasks,
|
||||
metadata: {
|
||||
total: data.total,
|
||||
filtered: data.filtered,
|
||||
tag: data.tag,
|
||||
storageType: data.storageType
|
||||
}
|
||||
},
|
||||
null,
|
||||
2
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in compact format
|
||||
*/
|
||||
private displayCompact(tasks: Task[], withSubtasks?: boolean): void {
|
||||
tasks.forEach((task) => {
|
||||
const icon = STATUS_ICONS[task.status];
|
||||
console.log(`${chalk.cyan(task.id)} ${icon} ${task.title}`);
|
||||
|
||||
if (withSubtasks && task.subtasks?.length) {
|
||||
task.subtasks.forEach((subtask) => {
|
||||
const subIcon = STATUS_ICONS[subtask.status];
|
||||
console.log(
|
||||
` ${chalk.gray(String(subtask.id))} ${subIcon} ${chalk.gray(subtask.title)}`
|
||||
);
|
||||
});
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in text format with tables
|
||||
*/
|
||||
private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
|
||||
const { tasks, tag } = data;
|
||||
|
||||
// Get file path for display
|
||||
const filePath = this.tmCore ? `.taskmaster/tasks/tasks.json` : undefined;
|
||||
|
||||
// Display header without banner (banner already shown by main CLI)
|
||||
displayHeader({
|
||||
tag: tag || 'master',
|
||||
filePath: filePath
|
||||
});
|
||||
|
||||
// No tasks message
|
||||
if (tasks.length === 0) {
|
||||
ui.displayWarning('No tasks found matching the criteria.');
|
||||
return;
|
||||
}
|
||||
|
||||
// Calculate statistics
|
||||
const taskStats = calculateTaskStatistics(tasks);
|
||||
const subtaskStats = calculateSubtaskStatistics(tasks);
|
||||
const depStats = calculateDependencyStatistics(tasks);
|
||||
const priorityBreakdown = getPriorityBreakdown(tasks);
|
||||
|
||||
// Find next task following the same logic as findNextTask
|
||||
const nextTaskInfo = this.findNextTask(tasks);
|
||||
|
||||
// Get the full task object with complexity data already included
|
||||
const nextTask = nextTaskInfo
|
||||
? tasks.find((t) => String(t.id) === String(nextTaskInfo.id))
|
||||
: undefined;
|
||||
|
||||
// Display dashboard boxes (nextTask already has complexity from storage enrichment)
|
||||
displayDashboards(
|
||||
taskStats,
|
||||
subtaskStats,
|
||||
priorityBreakdown,
|
||||
depStats,
|
||||
nextTask
|
||||
);
|
||||
|
||||
// Task table
|
||||
console.log(
|
||||
ui.createTaskTable(tasks, {
|
||||
showSubtasks: withSubtasks,
|
||||
showDependencies: true,
|
||||
showComplexity: true // Enable complexity column
|
||||
})
|
||||
);
|
||||
|
||||
// Display recommended next task section immediately after table
|
||||
if (nextTask) {
|
||||
const description = getTaskDescription(nextTask);
|
||||
|
||||
displayRecommendedNextTask({
|
||||
id: nextTask.id,
|
||||
title: nextTask.title,
|
||||
priority: nextTask.priority,
|
||||
status: nextTask.status,
|
||||
dependencies: nextTask.dependencies,
|
||||
description,
|
||||
complexity: nextTask.complexity as number | undefined
|
||||
});
|
||||
} else {
|
||||
displayRecommendedNextTask(undefined);
|
||||
}
|
||||
|
||||
// Display suggested next steps at the end
|
||||
displaySuggestedNextSteps();
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(result: ListTasksResult): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Find the next task to work on
|
||||
* Implements the same logic as scripts/modules/task-manager/find-next-task.js
|
||||
*/
|
||||
private findNextTask(tasks: Task[]): NextTaskInfo | undefined {
|
||||
const priorityValues: Record<string, number> = {
|
||||
critical: 4,
|
||||
high: 3,
|
||||
medium: 2,
|
||||
low: 1
|
||||
};
|
||||
|
||||
// Build set of completed task IDs (including subtasks)
|
||||
const completedIds = new Set<string>();
|
||||
tasks.forEach((t) => {
|
||||
if (t.status === 'done' || t.status === 'completed') {
|
||||
completedIds.add(String(t.id));
|
||||
}
|
||||
if (t.subtasks) {
|
||||
t.subtasks.forEach((st) => {
|
||||
if (st.status === 'done' || st.status === 'completed') {
|
||||
completedIds.add(`${t.id}.${st.id}`);
|
||||
}
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
// First, look for eligible subtasks in in-progress parent tasks
|
||||
const candidateSubtasks: NextTaskInfo[] = [];
|
||||
|
||||
tasks
|
||||
.filter(
|
||||
(t) => t.status === 'in-progress' && t.subtasks && t.subtasks.length > 0
|
||||
)
|
||||
.forEach((parent) => {
|
||||
parent.subtasks!.forEach((st) => {
|
||||
const stStatus = (st.status || 'pending').toLowerCase();
|
||||
if (stStatus !== 'pending' && stStatus !== 'in-progress') return;
|
||||
|
||||
// Check if dependencies are satisfied
|
||||
const fullDeps =
|
||||
st.dependencies?.map((d) => {
|
||||
// Handle both numeric and string IDs
|
||||
if (typeof d === 'string' && d.includes('.')) {
|
||||
return d;
|
||||
}
|
||||
return `${parent.id}.${d}`;
|
||||
}) ?? [];
|
||||
|
||||
const depsSatisfied =
|
||||
fullDeps.length === 0 ||
|
||||
fullDeps.every((depId) => completedIds.has(String(depId)));
|
||||
|
||||
if (depsSatisfied) {
|
||||
candidateSubtasks.push({
|
||||
id: `${parent.id}.${st.id}`,
|
||||
title: st.title || `Subtask ${st.id}`,
|
||||
priority: st.priority || parent.priority || 'medium',
|
||||
dependencies: fullDeps.map((d) => String(d))
|
||||
});
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
if (candidateSubtasks.length > 0) {
|
||||
// Sort by priority, then by dependencies count, then by ID
|
||||
candidateSubtasks.sort((a, b) => {
|
||||
const pa = priorityValues[a.priority || 'medium'] ?? 2;
|
||||
const pb = priorityValues[b.priority || 'medium'] ?? 2;
|
||||
if (pb !== pa) return pb - pa;
|
||||
|
||||
const depCountA = a.dependencies?.length || 0;
|
||||
const depCountB = b.dependencies?.length || 0;
|
||||
if (depCountA !== depCountB) return depCountA - depCountB;
|
||||
|
||||
return String(a.id).localeCompare(String(b.id));
|
||||
});
|
||||
return candidateSubtasks[0];
|
||||
}
|
||||
|
||||
// Fall back to finding eligible top-level tasks
|
||||
const eligibleTasks = tasks.filter((task) => {
|
||||
// Skip non-eligible statuses
|
||||
const status = (task.status || 'pending').toLowerCase();
|
||||
if (status !== 'pending' && status !== 'in-progress') return false;
|
||||
|
||||
// Check dependencies
|
||||
const deps = task.dependencies || [];
|
||||
const depsSatisfied =
|
||||
deps.length === 0 ||
|
||||
deps.every((depId) => completedIds.has(String(depId)));
|
||||
|
||||
return depsSatisfied;
|
||||
});
|
||||
|
||||
if (eligibleTasks.length === 0) return undefined;
|
||||
|
||||
// Sort eligible tasks
|
||||
eligibleTasks.sort((a, b) => {
|
||||
// Priority (higher first)
|
||||
const pa = priorityValues[a.priority || 'medium'] ?? 2;
|
||||
const pb = priorityValues[b.priority || 'medium'] ?? 2;
|
||||
if (pb !== pa) return pb - pa;
|
||||
|
||||
// Dependencies count (fewer first)
|
||||
const depCountA = a.dependencies?.length || 0;
|
||||
const depCountB = b.dependencies?.length || 0;
|
||||
if (depCountA !== depCountB) return depCountA - depCountB;
|
||||
|
||||
// ID (lower first)
|
||||
return Number(a.id) - Number(b.id);
|
||||
});
|
||||
|
||||
const nextTask = eligibleTasks[0];
|
||||
return {
|
||||
id: nextTask.id,
|
||||
title: nextTask.title,
|
||||
priority: nextTask.priority,
|
||||
dependencies: nextTask.dependencies?.map((d) => String(d))
|
||||
};
|
||||
}
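A worked example of that ranking (hypothetical tasks): with priorities mapped critical=4, high=3, medium=2, low=1, an eligible high-priority task with no dependencies sorts ahead of a high-priority task with one satisfied dependency, which in turn sorts ahead of any medium-priority task; remaining ties fall back to the lower numeric ID (or string comparison for subtask IDs like "7.2").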
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): ListTasksResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
if (this.tmCore) {
|
||||
await this.tmCore.close();
|
||||
this.tmCore = undefined;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ListTasksCommand {
|
||||
const listCommand = new ListTasksCommand(name);
|
||||
program.addCommand(listCommand);
|
||||
return listCommand;
|
||||
}
|
||||
}
|
||||
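A minimal programmatic sketch of the flow this command wraps, using only the tm-core calls that appear above (the 'pending' filter is an example value; assumes an ESM entry point):

import { createTaskMasterCore } from '@tm/core';

const tmCore = await createTaskMasterCore({ projectPath: process.cwd() });
const result = await tmCore.getTaskList({
	filter: { status: ['pending'] },
	includeSubtasks: true
});
console.log(
	`Showing ${result.filtered} of ${result.total} tasks (${result.storageType} storage)`
);
await tmCore.close();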
304
apps/cli/src/commands/set-status.command.ts
Normal file
@@ -0,0 +1,304 @@
|
||||
/**
|
||||
* @fileoverview SetStatusCommand using Commander's native class pattern
|
||||
* Extends Commander.Command for better integration with the framework
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import {
|
||||
createTaskMasterCore,
|
||||
type TaskMasterCore,
|
||||
type TaskStatus
|
||||
} from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
|
||||
/**
|
||||
* Valid task status values for validation
|
||||
*/
|
||||
const VALID_TASK_STATUSES: TaskStatus[] = [
|
||||
'pending',
|
||||
'in-progress',
|
||||
'done',
|
||||
'deferred',
|
||||
'cancelled',
|
||||
'blocked',
|
||||
'review'
|
||||
];
|
||||
|
||||
/**
|
||||
* Options interface for the set-status command
|
||||
*/
|
||||
export interface SetStatusCommandOptions {
|
||||
id?: string;
|
||||
status?: TaskStatus;
|
||||
format?: 'text' | 'json';
|
||||
silent?: boolean;
|
||||
project?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Result type from set-status command
|
||||
*/
|
||||
export interface SetStatusResult {
|
||||
success: boolean;
|
||||
updatedTasks: Array<{
|
||||
taskId: string;
|
||||
oldStatus: TaskStatus;
|
||||
newStatus: TaskStatus;
|
||||
}>;
|
||||
storageType: Exclude<StorageType, 'auto'>;
|
||||
}
|
||||
|
||||
/**
|
||||
* SetStatusCommand extending Commander's Command class
|
||||
* This is a thin presentation layer over @tm/core
|
||||
*/
|
||||
export class SetStatusCommand extends Command {
|
||||
private tmCore?: TaskMasterCore;
|
||||
private lastResult?: SetStatusResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'set-status');
|
||||
|
||||
// Configure the command
|
||||
this.description('Update the status of one or more tasks')
|
||||
.requiredOption(
|
||||
'-i, --id <id>',
|
||||
'Task ID(s) to update (comma-separated for multiple, supports subtasks like 5.2)'
|
||||
)
|
||||
.requiredOption(
|
||||
'-s, --status <status>',
|
||||
`New status (${VALID_TASK_STATUSES.join(', ')})`
|
||||
)
|
||||
.option('-f, --format <format>', 'Output format (text, json)', 'text')
|
||||
.option('--silent', 'Suppress output (useful for programmatic usage)')
|
||||
.option('-p, --project <path>', 'Project root directory', process.cwd())
|
||||
.action(async (options: SetStatusCommandOptions) => {
|
||||
await this.executeCommand(options);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the set-status command
|
||||
*/
|
||||
private async executeCommand(
|
||||
options: SetStatusCommandOptions
|
||||
): Promise<void> {
|
||||
try {
|
||||
// Validate required options
|
||||
if (!options.id) {
|
||||
console.error(chalk.red('Error: Task ID is required. Use -i or --id'));
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
if (!options.status) {
|
||||
console.error(
|
||||
chalk.red('Error: Status is required. Use -s or --status')
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Validate status
|
||||
if (!VALID_TASK_STATUSES.includes(options.status)) {
|
||||
console.error(
|
||||
chalk.red(
|
||||
`Error: Invalid status "${options.status}". Valid options: ${VALID_TASK_STATUSES.join(', ')}`
|
||||
)
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Initialize TaskMaster core
|
||||
this.tmCore = await createTaskMasterCore({
|
||||
projectPath: options.project || process.cwd()
|
||||
});
|
||||
|
||||
// Parse task IDs (handle comma-separated values)
|
||||
const taskIds = options.id.split(',').map((id) => id.trim());
|
||||
|
||||
// Update each task
|
||||
const updatedTasks: Array<{
|
||||
taskId: string;
|
||||
oldStatus: TaskStatus;
|
||||
newStatus: TaskStatus;
|
||||
}> = [];
|
||||
|
||||
for (const taskId of taskIds) {
|
||||
try {
|
||||
const result = await this.tmCore.updateTaskStatus(
|
||||
taskId,
|
||||
options.status
|
||||
);
|
||||
updatedTasks.push({
|
||||
taskId: result.taskId,
|
||||
oldStatus: result.oldStatus,
|
||||
newStatus: result.newStatus
|
||||
});
|
||||
} catch (error) {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : String(error);
|
||||
|
||||
if (!options.silent) {
|
||||
console.error(
|
||||
chalk.red(`Failed to update task ${taskId}: ${errorMessage}`)
|
||||
);
|
||||
}
|
||||
if (options.format === 'json') {
|
||||
console.log(
|
||||
JSON.stringify({
|
||||
success: false,
|
||||
error: errorMessage,
|
||||
taskId,
|
||||
timestamp: new Date().toISOString()
|
||||
})
|
||||
);
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
// Store result for potential reuse
|
||||
this.lastResult = {
|
||||
success: true,
|
||||
updatedTasks,
|
||||
storageType: this.tmCore.getStorageType() as Exclude<
|
||||
StorageType,
|
||||
'auto'
|
||||
>
|
||||
};
|
||||
|
||||
// Display results
|
||||
this.displayResults(this.lastResult, options);
|
||||
} catch (error) {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : 'Unknown error occurred';
|
||||
|
||||
if (!options.silent) {
|
||||
console.error(chalk.red(`Error: ${errorMessage}`));
|
||||
}
|
||||
|
||||
if (options.format === 'json') {
|
||||
console.log(JSON.stringify({ success: false, error: errorMessage }));
|
||||
}
|
||||
|
||||
process.exit(1);
|
||||
} finally {
|
||||
// Clean up resources
|
||||
if (this.tmCore) {
|
||||
await this.tmCore.close();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display results based on format
|
||||
*/
|
||||
private displayResults(
|
||||
result: SetStatusResult,
|
||||
options: SetStatusCommandOptions
|
||||
): void {
|
||||
const format = options.format || 'text';
|
||||
|
||||
switch (format) {
|
||||
case 'json':
|
||||
console.log(JSON.stringify(result, null, 2));
|
||||
break;
|
||||
|
||||
case 'text':
|
||||
default:
|
||||
if (!options.silent) {
|
||||
this.displayTextResults(result);
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display results in text format
|
||||
*/
|
||||
private displayTextResults(result: SetStatusResult): void {
|
||||
if (result.updatedTasks.length === 1) {
|
||||
// Single task update
|
||||
const update = result.updatedTasks[0];
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold(`✅ Successfully updated task ${update.taskId}`) +
|
||||
'\n\n' +
|
||||
`${chalk.blue('From:')} ${this.getStatusDisplay(update.oldStatus)}\n` +
|
||||
`${chalk.blue('To:')} ${this.getStatusDisplay(update.newStatus)}`,
|
||||
{
|
||||
padding: 1,
|
||||
borderColor: 'green',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
}
|
||||
)
|
||||
);
|
||||
} else {
|
||||
// Multiple task updates
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold(
|
||||
`✅ Successfully updated ${result.updatedTasks.length} tasks`
|
||||
) +
|
||||
'\n\n' +
|
||||
result.updatedTasks
|
||||
.map(
|
||||
(update) =>
|
||||
`${chalk.cyan(update.taskId)}: ${this.getStatusDisplay(update.oldStatus)} → ${this.getStatusDisplay(update.newStatus)}`
|
||||
)
|
||||
.join('\n'),
|
||||
{
|
||||
padding: 1,
|
||||
borderColor: 'green',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get colored status display
|
||||
*/
|
||||
private getStatusDisplay(status: TaskStatus): string {
|
||||
const statusColors: Record<TaskStatus, (text: string) => string> = {
|
||||
pending: chalk.yellow,
|
||||
'in-progress': chalk.blue,
|
||||
done: chalk.green,
|
||||
deferred: chalk.gray,
|
||||
cancelled: chalk.red,
|
||||
blocked: chalk.red,
|
||||
review: chalk.magenta,
|
||||
completed: chalk.green
|
||||
};
|
||||
|
||||
const colorFn = statusColors[status] || chalk.white;
|
||||
return colorFn(status);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last command result (useful for testing or chaining)
|
||||
*/
|
||||
getLastResult(): SetStatusResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): SetStatusCommand {
|
||||
const setStatusCommand = new SetStatusCommand(name);
|
||||
program.addCommand(setStatusCommand);
|
||||
return setStatusCommand;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Factory function to create and configure the set-status command
|
||||
*/
|
||||
export function createSetStatusCommand(): SetStatusCommand {
|
||||
return new SetStatusCommand();
|
||||
}
|
||||
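The same status transition is available without the CLI wrapper; a sketch built on the tm-core call made inside executeCommand (task ID and status are example values; assumes an ESM entry point):

import { createTaskMasterCore } from '@tm/core';

const tmCore = await createTaskMasterCore({ projectPath: process.cwd() });
try {
	const update = await tmCore.updateTaskStatus('5.2', 'done');
	console.log(`Task ${update.taskId}: ${update.oldStatus} -> ${update.newStatus}`);
} finally {
	await tmCore.close();
}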
332
apps/cli/src/commands/show.command.ts
Normal file
@@ -0,0 +1,332 @@
|
||||
/**
|
||||
* @fileoverview ShowCommand using Commander's native class pattern
|
||||
* Extends Commander.Command for better integration with the framework
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
|
||||
|
||||
/**
|
||||
* Options interface for the show command
|
||||
*/
|
||||
export interface ShowCommandOptions {
|
||||
id?: string;
|
||||
status?: string;
|
||||
format?: 'text' | 'json';
|
||||
silent?: boolean;
|
||||
project?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Result type from show command
|
||||
*/
|
||||
export interface ShowTaskResult {
|
||||
task: Task | null;
|
||||
found: boolean;
|
||||
storageType: Exclude<StorageType, 'auto'>;
|
||||
}
|
||||
|
||||
/**
|
||||
* Result type for multiple tasks
|
||||
*/
|
||||
export interface ShowMultipleTasksResult {
|
||||
tasks: Task[];
|
||||
notFound: string[];
|
||||
storageType: Exclude<StorageType, 'auto'>;
|
||||
}
|
||||
|
||||
/**
|
||||
* ShowCommand extending Commander's Command class
|
||||
* This is a thin presentation layer over @tm/core
|
||||
*/
|
||||
export class ShowCommand extends Command {
|
||||
private tmCore?: TaskMasterCore;
|
||||
private lastResult?: ShowTaskResult | ShowMultipleTasksResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'show');
|
||||
|
||||
// Configure the command
|
||||
this.description('Display detailed information about one or more tasks')
|
||||
.argument('[id]', 'Task ID(s) to show (comma-separated for multiple)')
|
||||
.option(
|
||||
'-i, --id <id>',
|
||||
'Task ID(s) to show (comma-separated for multiple)'
|
||||
)
|
||||
.option('-s, --status <status>', 'Filter subtasks by status')
|
||||
.option('-f, --format <format>', 'Output format (text, json)', 'text')
|
||||
.option('--silent', 'Suppress output (useful for programmatic usage)')
|
||||
.option('-p, --project <path>', 'Project root directory', process.cwd())
|
||||
.action(
|
||||
async (taskId: string | undefined, options: ShowCommandOptions) => {
|
||||
await this.executeCommand(taskId, options);
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the show command
|
||||
*/
|
||||
private async executeCommand(
|
||||
taskId: string | undefined,
|
||||
options: ShowCommandOptions
|
||||
): Promise<void> {
|
||||
try {
|
||||
// Validate options
|
||||
if (!this.validateOptions(options)) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Initialize tm-core
|
||||
await this.initializeCore(options.project || process.cwd());
|
||||
|
||||
// Get the task ID from argument or option
|
||||
const idArg = taskId || options.id;
|
||||
if (!idArg) {
|
||||
console.error(chalk.red('Error: Please provide a task ID'));
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Check if multiple IDs are provided (comma-separated)
|
||||
const taskIds = idArg
|
||||
.split(',')
|
||||
.map((id) => id.trim())
|
||||
.filter((id) => id.length > 0);
|
||||
|
||||
// Get tasks from core
|
||||
const result =
|
||||
taskIds.length > 1
|
||||
? await this.getMultipleTasks(taskIds, options)
|
||||
: await this.getSingleTask(taskIds[0], options);
|
||||
|
||||
// Store result for programmatic access
|
||||
this.setLastResult(result);
|
||||
|
||||
// Display results
|
||||
if (!options.silent) {
|
||||
this.displayResults(result, options);
|
||||
}
|
||||
} catch (error: any) {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate command options
|
||||
*/
|
||||
private validateOptions(options: ShowCommandOptions): boolean {
|
||||
// Validate format
|
||||
if (options.format && !['text', 'json'].includes(options.format)) {
|
||||
console.error(chalk.red(`Invalid format: ${options.format}`));
|
||||
console.error(chalk.gray(`Valid formats: text, json`));
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize TaskMasterCore
|
||||
*/
|
||||
private async initializeCore(projectRoot: string): Promise<void> {
|
||||
if (!this.tmCore) {
|
||||
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a single task from tm-core
|
||||
*/
|
||||
private async getSingleTask(
|
||||
taskId: string,
|
||||
_options: ShowCommandOptions
|
||||
): Promise<ShowTaskResult> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
|
||||
// Get the task
|
||||
const task = await this.tmCore.getTask(taskId);
|
||||
|
||||
// Get storage type
|
||||
const storageType = this.tmCore.getStorageType();
|
||||
|
||||
return {
|
||||
task,
|
||||
found: task !== null,
|
||||
storageType: storageType as Exclude<StorageType, 'auto'>
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get multiple tasks from tm-core
|
||||
*/
|
||||
private async getMultipleTasks(
|
||||
taskIds: string[],
|
||||
_options: ShowCommandOptions
|
||||
): Promise<ShowMultipleTasksResult> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
|
||||
const tasks: Task[] = [];
|
||||
const notFound: string[] = [];
|
||||
|
||||
// Get each task individually
|
||||
for (const taskId of taskIds) {
|
||||
const task = await this.tmCore.getTask(taskId);
|
||||
if (task) {
|
||||
tasks.push(task);
|
||||
} else {
|
||||
notFound.push(taskId);
|
||||
}
|
||||
}
|
||||
|
||||
// Get storage type
|
||||
const storageType = this.tmCore.getStorageType();
|
||||
|
||||
return {
|
||||
tasks,
|
||||
notFound,
|
||||
storageType: storageType as Exclude<StorageType, 'auto'>
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Display results based on format
|
||||
*/
|
||||
private displayResults(
|
||||
result: ShowTaskResult | ShowMultipleTasksResult,
|
||||
options: ShowCommandOptions
|
||||
): void {
|
||||
const format = options.format || 'text';
|
||||
|
||||
switch (format) {
|
||||
case 'json':
|
||||
this.displayJson(result);
|
||||
break;
|
||||
|
||||
case 'text':
|
||||
default:
|
||||
if ('task' in result) {
|
||||
// Single task result
|
||||
this.displaySingleTask(result, options);
|
||||
} else {
|
||||
// Multiple tasks result
|
||||
this.displayMultipleTasks(result, options);
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in JSON format
|
||||
*/
|
||||
private displayJson(result: ShowTaskResult | ShowMultipleTasksResult): void {
|
||||
console.log(JSON.stringify(result, null, 2));
|
||||
}
|
||||
|
||||
/**
|
||||
* Display a single task in text format
|
||||
*/
|
||||
private displaySingleTask(
|
||||
result: ShowTaskResult,
|
||||
options: ShowCommandOptions
|
||||
): void {
|
||||
if (!result.found || !result.task) {
|
||||
console.log(
|
||||
boxen(chalk.yellow(`Task not found!`), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
})
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
// Use the global task details display function
|
||||
displayTaskDetails(result.task, {
|
||||
statusFilter: options.status,
|
||||
showSuggestedActions: true
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Display multiple tasks in text format
|
||||
*/
|
||||
private displayMultipleTasks(
|
||||
result: ShowMultipleTasksResult,
|
||||
_options: ShowCommandOptions
|
||||
): void {
|
||||
// Header
|
||||
ui.displayBanner(`Tasks (${result.tasks.length} found)`);
|
||||
|
||||
if (result.notFound.length > 0) {
|
||||
console.log(chalk.yellow(`\n⚠ Not found: ${result.notFound.join(', ')}`));
|
||||
}
|
||||
|
||||
if (result.tasks.length === 0) {
|
||||
ui.displayWarning('No tasks found matching the criteria.');
|
||||
return;
|
||||
}
|
||||
|
||||
// Task table
|
||||
console.log(chalk.blue.bold(`\n📋 Tasks:\n`));
|
||||
console.log(
|
||||
ui.createTaskTable(result.tasks, {
|
||||
showSubtasks: true,
|
||||
showDependencies: true
|
||||
})
|
||||
);
|
||||
|
||||
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(
|
||||
result: ShowTaskResult | ShowMultipleTasksResult
|
||||
): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): ShowTaskResult | ShowMultipleTasksResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
if (this.tmCore) {
|
||||
await this.tmCore.close();
|
||||
this.tmCore = undefined;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): ShowCommand {
|
||||
const showCommand = new ShowCommand(name);
|
||||
program.addCommand(showCommand);
|
||||
return showCommand;
|
||||
}
|
||||
}
|
||||
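Note: a minimal usage sketch (not part of the diff) for mounting ShowCommand on a host Commander program and reading the structured result after a silent run. The import from '@tm/cli' assumes the package entry point shown later in this compare; the program name and task IDs are illustrative.

import { Command } from 'commander';
import { ShowCommand } from '@tm/cli';

async function main(): Promise<void> {
	const program = new Command('tm');
	const show = ShowCommand.register(program);

	// Equivalent to: tm show 42,43 --format json --silent
	await program.parseAsync(['show', '42,43', '--format', 'json', '--silent'], {
		from: 'user'
	});

	// --silent suppresses console output; the result is still stored for callers.
	const result = show.getLastResult();
	if (result && 'tasks' in result) {
		console.log(
			`Found ${result.tasks.length} task(s); missing: ${result.notFound.join(', ') || 'none'}`
		);
	}
	await show.cleanup();
}

main();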
apps/cli/src/commands/start.command.ts (new file, 503 additions)
@@ -0,0 +1,503 @@
|
||||
/**
|
||||
* @fileoverview StartCommand using Commander's native class pattern
|
||||
* Extends Commander.Command for better integration with the framework
|
||||
* This is a thin presentation layer over @tm/core's TaskExecutionService
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import ora, { type Ora } from 'ora';
|
||||
import { spawn } from 'child_process';
|
||||
import {
|
||||
createTaskMasterCore,
|
||||
type TaskMasterCore,
|
||||
type StartTaskResult as CoreStartTaskResult
|
||||
} from '@tm/core';
|
||||
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
|
||||
import * as ui from '../utils/ui.js';
|
||||
|
||||
/**
|
||||
* CLI-specific options interface for the start command
|
||||
*/
|
||||
export interface StartCommandOptions {
|
||||
id?: string;
|
||||
format?: 'text' | 'json';
|
||||
project?: string;
|
||||
dryRun?: boolean;
|
||||
force?: boolean;
|
||||
noStatusUpdate?: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* CLI-specific result type from start command
|
||||
* Extends the core result with CLI-specific display information
|
||||
*/
|
||||
export interface StartCommandResult extends CoreStartTaskResult {
|
||||
storageType?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* StartCommand extending Commander's Command class
|
||||
* This is a thin presentation layer over @tm/core's TaskExecutionService
|
||||
*/
|
||||
export class StartCommand extends Command {
|
||||
private tmCore?: TaskMasterCore;
|
||||
private lastResult?: StartCommandResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'start');
|
||||
|
||||
// Configure the command
|
||||
this.description(
|
||||
'Start working on a task by launching claude-code with context'
|
||||
)
|
||||
.argument('[id]', 'Task ID to start working on')
|
||||
.option('-i, --id <id>', 'Task ID to start working on')
|
||||
.option('-f, --format <format>', 'Output format (text, json)', 'text')
|
||||
.option('-p, --project <path>', 'Project root directory', process.cwd())
|
||||
.option(
|
||||
'--dry-run',
|
||||
'Show what would be executed without launching claude-code'
|
||||
)
|
||||
.option(
|
||||
'--force',
|
||||
'Force start even if another task is already in-progress'
|
||||
)
|
||||
.option(
|
||||
'--no-status-update',
|
||||
'Do not automatically update task status to in-progress'
|
||||
)
|
||||
.action(
|
||||
async (taskId: string | undefined, options: StartCommandOptions) => {
|
||||
await this.executeCommand(taskId, options);
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the start command
|
||||
*/
|
||||
private async executeCommand(
|
||||
taskId: string | undefined,
|
||||
options: StartCommandOptions
|
||||
): Promise<void> {
|
||||
let spinner: Ora | null = null;
|
||||
|
||||
try {
|
||||
// Validate options
|
||||
if (!this.validateOptions(options)) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Initialize tm-core with spinner
|
||||
spinner = ora('Initializing Task Master...').start();
|
||||
await this.initializeCore(options.project || process.cwd());
|
||||
spinner.succeed('Task Master initialized');
|
||||
|
||||
// Get the task ID from argument or option, or find next available task
|
||||
const idArg = taskId || options.id || null;
|
||||
let targetTaskId = idArg;
|
||||
|
||||
if (!targetTaskId) {
|
||||
spinner = ora('Finding next available task...').start();
|
||||
targetTaskId = await this.performGetNextTask();
|
||||
if (targetTaskId) {
|
||||
spinner.succeed(`Found next task: #${targetTaskId}`);
|
||||
} else {
|
||||
spinner.fail('No available tasks found');
|
||||
}
|
||||
}
|
||||
|
||||
if (!targetTaskId) {
|
||||
ui.displayError('No task ID provided and no available tasks found');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Show pre-launch message (no spinner needed, it's just display)
|
||||
if (!options.dryRun) {
|
||||
await this.showPreLaunchMessage(targetTaskId);
|
||||
}
|
||||
|
||||
// Use tm-core's startTask method with spinner
|
||||
spinner = ora('Preparing task execution...').start();
|
||||
const coreResult = await this.performStartTask(targetTaskId, options);
|
||||
|
||||
if (coreResult.started) {
|
||||
spinner.succeed(
|
||||
options.dryRun
|
||||
? 'Dry run completed'
|
||||
: 'Task prepared - launching Claude...'
|
||||
);
|
||||
} else {
|
||||
spinner.fail('Task execution failed');
|
||||
}
|
||||
|
||||
// Execute command if we have one and it's not a dry run
|
||||
if (!options.dryRun && coreResult.command) {
|
||||
// Stop any remaining spinners before launching Claude
|
||||
if (spinner && !spinner.isSpinning) {
|
||||
// Clear the line to make room for Claude
|
||||
console.log();
|
||||
}
|
||||
await this.executeChildProcess(coreResult.command);
|
||||
}
|
||||
|
||||
// Convert core result to CLI result with storage type
|
||||
const result: StartCommandResult = {
|
||||
...coreResult,
|
||||
storageType: this.tmCore?.getStorageType()
|
||||
};
|
||||
|
||||
// Store result for programmatic access
|
||||
this.setLastResult(result);
|
||||
|
||||
// Display results (only for dry run or if execution failed)
|
||||
if (options.dryRun || !coreResult.started) {
|
||||
this.displayResults(result, options);
|
||||
}
|
||||
} catch (error: any) {
|
||||
if (spinner) {
|
||||
spinner.fail('Operation failed');
|
||||
}
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate command options
|
||||
*/
|
||||
private validateOptions(options: StartCommandOptions): boolean {
|
||||
// Validate format
|
||||
if (options.format && !['text', 'json'].includes(options.format)) {
|
||||
console.error(chalk.red(`Invalid format: ${options.format}`));
|
||||
console.error(chalk.gray(`Valid formats: text, json`));
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize TaskMasterCore
|
||||
*/
|
||||
private async initializeCore(projectRoot: string): Promise<void> {
|
||||
if (!this.tmCore) {
|
||||
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the next available task using tm-core
|
||||
*/
|
||||
private async performGetNextTask(): Promise<string | null> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
return this.tmCore.getNextAvailableTask();
|
||||
}
|
||||
|
||||
/**
|
||||
* Show pre-launch message using tm-core data
|
||||
*/
|
||||
private async showPreLaunchMessage(targetTaskId: string): Promise<void> {
|
||||
if (!this.tmCore) return;
|
||||
|
||||
const { task, subtask, subtaskId } =
|
||||
await this.tmCore.getTaskWithSubtask(targetTaskId);
|
||||
if (task) {
|
||||
const workItemText = subtask
|
||||
? `Subtask #${task.id}.${subtaskId} - ${subtask.title}`
|
||||
: `Task #${task.id} - ${task.title}`;
|
||||
|
||||
console.log(
|
||||
chalk.green('🚀 Starting: ') + chalk.white.bold(workItemText)
|
||||
);
|
||||
console.log(chalk.gray('Launching Claude Code...'));
|
||||
console.log(); // Empty line
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Perform start task using tm-core business logic
|
||||
*/
|
||||
private async performStartTask(
|
||||
targetTaskId: string,
|
||||
options: StartCommandOptions
|
||||
): Promise<CoreStartTaskResult> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
|
||||
// Show spinner for status update if enabled
|
||||
let statusSpinner: Ora | null = null;
|
||||
if (!options.noStatusUpdate && !options.dryRun) {
|
||||
statusSpinner = ora('Updating task status to in-progress...').start();
|
||||
}
|
||||
|
||||
// Get execution command from tm-core (instead of executing directly)
|
||||
const result = await this.tmCore.startTask(targetTaskId, {
|
||||
dryRun: options.dryRun,
|
||||
force: options.force,
|
||||
updateStatus: !options.noStatusUpdate
|
||||
});
|
||||
|
||||
if (statusSpinner) {
|
||||
if (result.started) {
|
||||
statusSpinner.succeed('Task status updated');
|
||||
} else {
|
||||
statusSpinner.warn('Task status update skipped');
|
||||
}
|
||||
}
|
||||
|
||||
if (!result) {
|
||||
throw new Error('Failed to start task - core result is undefined');
|
||||
}
|
||||
|
||||
// Don't execute here - let the main executeCommand method handle it
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the child process directly in the main thread for better process control
|
||||
*/
|
||||
private async executeChildProcess(command: {
|
||||
executable: string;
|
||||
args: string[];
|
||||
cwd: string;
|
||||
}): Promise<void> {
|
||||
return new Promise((resolve, reject) => {
|
||||
// Don't show the full command with args as it can be very long
|
||||
console.log(chalk.green('🚀 Launching Claude Code...'));
|
||||
console.log(); // Add space before Claude takes over
|
||||
|
||||
const childProcess = spawn(command.executable, command.args, {
|
||||
cwd: command.cwd,
|
||||
stdio: 'inherit', // Inherit stdio from parent process
|
||||
shell: false
|
||||
});
|
||||
|
||||
childProcess.on('close', (code) => {
|
||||
if (code === 0) {
|
||||
resolve();
|
||||
} else {
|
||||
reject(new Error(`Process exited with code ${code}`));
|
||||
}
|
||||
});
|
||||
|
||||
childProcess.on('error', (error) => {
|
||||
reject(new Error(`Failed to spawn process: ${error.message}`));
|
||||
});
|
||||
|
||||
// Handle process termination signals gracefully
|
||||
const cleanup = () => {
|
||||
if (childProcess && !childProcess.killed) {
|
||||
childProcess.kill('SIGTERM');
|
||||
}
|
||||
};
|
||||
|
||||
process.on('SIGINT', cleanup);
|
||||
process.on('SIGTERM', cleanup);
|
||||
process.on('exit', cleanup);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Display results based on format
|
||||
*/
|
||||
private displayResults(
|
||||
result: StartCommandResult,
|
||||
options: StartCommandOptions
|
||||
): void {
|
||||
const format = options.format || 'text';
|
||||
|
||||
switch (format) {
|
||||
case 'json':
|
||||
this.displayJson(result);
|
||||
break;
|
||||
|
||||
case 'text':
|
||||
default:
|
||||
this.displayTextResult(result, options);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in JSON format
|
||||
*/
|
||||
private displayJson(result: StartCommandResult): void {
|
||||
console.log(JSON.stringify(result, null, 2));
|
||||
}
|
||||
|
||||
/**
|
||||
* Display result in text format
|
||||
*/
|
||||
private displayTextResult(
|
||||
result: StartCommandResult,
|
||||
options: StartCommandOptions
|
||||
): void {
|
||||
if (!result.found || !result.task) {
|
||||
console.log(
|
||||
boxen(chalk.yellow(`Task not found!`), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
})
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
const task = result.task;
|
||||
|
||||
if (options.dryRun) {
|
||||
// For dry run, show full details since Claude Code won't be launched
|
||||
let headerText = `Dry Run: Starting Task #${task.id} - ${task.title}`;
|
||||
|
||||
// If working on a specific subtask, highlight it in the header
|
||||
if (result.subtask && result.subtaskId) {
|
||||
headerText = `Dry Run: Starting Subtask #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
|
||||
}
|
||||
|
||||
displayTaskDetails(task, {
|
||||
customHeader: headerText,
|
||||
headerColor: 'yellow'
|
||||
});
|
||||
|
||||
// Show claude-code prompt
|
||||
if (result.executionOutput) {
|
||||
console.log(); // Empty line for spacing
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Claude-Code Prompt:') +
|
||||
'\n\n' +
|
||||
result.executionOutput,
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'cyan',
|
||||
width: process.stdout.columns * 0.95 || 100
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
console.log(); // Empty line for spacing
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.yellow(
|
||||
'🔍 Dry run - claude-code would be launched with the above prompt'
|
||||
),
|
||||
{
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round'
|
||||
}
|
||||
)
|
||||
);
|
||||
} else {
|
||||
// For actual execution, show minimal info since Claude Code will clear the terminal
|
||||
if (result.started) {
|
||||
// Determine what was worked on - task or subtask
|
||||
let workItemText = `Task: #${task.id} - ${task.title}`;
|
||||
let statusTarget = task.id;
|
||||
|
||||
if (result.subtask && result.subtaskId) {
|
||||
workItemText = `Subtask: #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
|
||||
statusTarget = `${task.id}.${result.subtaskId}`;
|
||||
}
|
||||
|
||||
// Post-execution message (shown after Claude Code exits)
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.green.bold('🎉 Task Session Complete!') +
|
||||
'\n\n' +
|
||||
chalk.white(workItemText) +
|
||||
'\n\n' +
|
||||
chalk.cyan('Next steps:') +
|
||||
'\n' +
|
||||
`• Run ${chalk.yellow('tm show ' + task.id)} to review task details\n` +
|
||||
`• Run ${chalk.yellow('tm set-status --id=' + statusTarget + ' --status=done')} when complete\n` +
|
||||
`• Run ${chalk.yellow('tm next')} to find the next available task\n` +
|
||||
`• Run ${chalk.yellow('tm start')} to begin the next task`,
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'green',
|
||||
width: process.stdout.columns * 0.95 || 100,
|
||||
margin: { top: 1 }
|
||||
}
|
||||
)
|
||||
);
|
||||
} else {
|
||||
// Error case
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.red(
|
||||
'❌ Failed to launch claude-code' +
|
||||
(result.error ? `\nError: ${result.error}` : '')
|
||||
),
|
||||
{
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'red',
|
||||
borderStyle: 'round'
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle general errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
|
||||
// Show stack trace in development mode or when DEBUG is set
|
||||
const isDevelopment = process.env.NODE_ENV !== 'production';
|
||||
if ((isDevelopment || process.env.DEBUG) && error.stack) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(result: StartCommandResult): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): StartCommandResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
if (this.tmCore) {
|
||||
await this.tmCore.close();
|
||||
this.tmCore = undefined;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): StartCommand {
|
||||
const startCommand = new StartCommand(name);
|
||||
program.addCommand(startCommand);
|
||||
return startCommand;
|
||||
}
|
||||
}
|
||||
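Note: a minimal dry-run sketch (not part of the diff) for StartCommand. It assumes the '@tm/cli' entry point below and that the core StartTaskResult carries the executionOutput field the dry-run display path reads; the task ID is illustrative. With --dry-run no claude-code process is spawned.

import { Command } from 'commander';
import { StartCommand } from '@tm/cli';

async function main(): Promise<void> {
	const program = new Command('tm');
	const start = StartCommand.register(program);

	// Equivalent to: tm start 15 --dry-run (nothing is spawned in dry-run mode)
	await program.parseAsync(['start', '15', '--dry-run'], { from: 'user' });

	const result = start.getLastResult();
	if (result?.started && result.executionOutput) {
		// The same prompt the text display path prints in its "Claude-Code Prompt" box
		console.log(result.executionOutput);
	}
	await start.cleanup();
}

main();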
apps/cli/src/index.ts (new file, 40 additions)
@@ -0,0 +1,40 @@
/**
 * @fileoverview Main entry point for @tm/cli package
 * Exports all public APIs for the CLI presentation layer
 */

// Commands
export { ListTasksCommand } from './commands/list.command.js';
export { ShowCommand } from './commands/show.command.js';
export { AuthCommand } from './commands/auth.command.js';
export { ContextCommand } from './commands/context.command.js';
export { StartCommand } from './commands/start.command.js';
export { SetStatusCommand } from './commands/set-status.command.js';
export { ExportCommand } from './commands/export.command.js';

// Command Registry
export {
	CommandRegistry,
	registerAllCommands,
	registerCommandsByCategory,
	type CommandMetadata
} from './command-registry.js';

// UI utilities (for other commands to use)
export * as ui from './utils/ui.js';

// Auto-update utilities
export {
	checkForUpdate,
	performAutoUpdate,
	displayUpgradeNotification,
	compareVersions
} from './utils/auto-update.js';

// Re-export commonly used types from tm-core
export type {
	Task,
	TaskStatus,
	TaskPriority,
	TaskMasterCore
} from '@tm/core';
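Note: a hedged sketch (not part of the diff) of how a host binary might wire these exports together. Only the static register() helpers confirmed in the command files above are used; registerAllCommands from the command registry would be the bulk alternative, but its exact signature is not shown in this compare.

import { Command } from 'commander';
import { ShowCommand, StartCommand } from '@tm/cli';

const program = new Command('task-master');

// Each command class attaches itself via its static register() helper.
ShowCommand.register(program);
StartCommand.register(program);

program.parseAsync(process.argv);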
apps/cli/src/ui/components/dashboard.component.ts (new file, 568 additions)
@@ -0,0 +1,568 @@
|
||||
/**
|
||||
* @fileoverview Dashboard components for Task Master CLI
|
||||
* Displays project statistics and dependency information
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import type { Task, TaskPriority } from '@tm/core/types';
|
||||
import { getComplexityWithColor } from '../../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Statistics for task collection
|
||||
*/
|
||||
export interface TaskStatistics {
|
||||
total: number;
|
||||
done: number;
|
||||
inProgress: number;
|
||||
pending: number;
|
||||
blocked: number;
|
||||
deferred: number;
|
||||
cancelled: number;
|
||||
review?: number;
|
||||
completionPercentage: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Statistics for dependencies
|
||||
*/
|
||||
export interface DependencyStatistics {
|
||||
tasksWithNoDeps: number;
|
||||
tasksReadyToWork: number;
|
||||
tasksBlockedByDeps: number;
|
||||
mostDependedOnTaskId?: number;
|
||||
mostDependedOnCount?: number;
|
||||
avgDependenciesPerTask: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Next task information
|
||||
*/
|
||||
export interface NextTaskInfo {
|
||||
id: string | number;
|
||||
title: string;
|
||||
priority?: TaskPriority;
|
||||
dependencies?: (string | number)[];
|
||||
complexity?: number | string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Status breakdown for progress bars
|
||||
*/
|
||||
export interface StatusBreakdown {
|
||||
'in-progress'?: number;
|
||||
pending?: number;
|
||||
blocked?: number;
|
||||
deferred?: number;
|
||||
cancelled?: number;
|
||||
review?: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a progress bar with color-coded status segments
|
||||
*/
|
||||
function createProgressBar(
|
||||
completionPercentage: number,
|
||||
width: number = 30,
|
||||
statusBreakdown?: StatusBreakdown
|
||||
): string {
|
||||
// If no breakdown provided, use simple green bar
|
||||
if (!statusBreakdown) {
|
||||
const filled = Math.round((completionPercentage / 100) * width);
|
||||
const empty = width - filled;
|
||||
return chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
|
||||
}
|
||||
|
||||
// Build the bar with different colored sections
|
||||
// Order matches the status display: Done, Cancelled, Deferred, In Progress, Review, Pending, Blocked
|
||||
let bar = '';
|
||||
let charsUsed = 0;
|
||||
|
||||
// 1. Green filled blocks for completed tasks (done)
|
||||
const completedChars = Math.round((completionPercentage / 100) * width);
|
||||
if (completedChars > 0) {
|
||||
bar += chalk.green('█').repeat(completedChars);
|
||||
charsUsed += completedChars;
|
||||
}
|
||||
|
||||
// 2. Gray filled blocks for cancelled (won't be done)
|
||||
if (statusBreakdown.cancelled && charsUsed < width) {
|
||||
const cancelledChars = Math.round(
|
||||
(statusBreakdown.cancelled / 100) * width
|
||||
);
|
||||
const actualChars = Math.min(cancelledChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.gray('█').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 3. Gray filled blocks for deferred (won't be done now)
|
||||
if (statusBreakdown.deferred && charsUsed < width) {
|
||||
const deferredChars = Math.round((statusBreakdown.deferred / 100) * width);
|
||||
const actualChars = Math.min(deferredChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.gray('█').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 4. Blue filled blocks for in-progress (actively working)
|
||||
if (statusBreakdown['in-progress'] && charsUsed < width) {
|
||||
const inProgressChars = Math.round(
|
||||
(statusBreakdown['in-progress'] / 100) * width
|
||||
);
|
||||
const actualChars = Math.min(inProgressChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.blue('█').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 5. Magenta empty blocks for review (almost done)
|
||||
if (statusBreakdown.review && charsUsed < width) {
|
||||
const reviewChars = Math.round((statusBreakdown.review / 100) * width);
|
||||
const actualChars = Math.min(reviewChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.magenta('░').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 6. Yellow empty blocks for pending (ready to start)
|
||||
if (statusBreakdown.pending && charsUsed < width) {
|
||||
const pendingChars = Math.round((statusBreakdown.pending / 100) * width);
|
||||
const actualChars = Math.min(pendingChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.yellow('░').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 7. Red empty blocks for blocked (can't start yet)
|
||||
if (statusBreakdown.blocked && charsUsed < width) {
|
||||
const blockedChars = Math.round((statusBreakdown.blocked / 100) * width);
|
||||
const actualChars = Math.min(blockedChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.red('░').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// Fill any remaining space with yellow empty blocks
|
||||
if (charsUsed < width) {
|
||||
bar += chalk.yellow('░').repeat(width - charsUsed);
|
||||
}
|
||||
|
||||
return bar;
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate task statistics from a list of tasks
|
||||
*/
|
||||
export function calculateTaskStatistics(tasks: Task[]): TaskStatistics {
|
||||
const stats: TaskStatistics = {
|
||||
total: tasks.length,
|
||||
done: 0,
|
||||
inProgress: 0,
|
||||
pending: 0,
|
||||
blocked: 0,
|
||||
deferred: 0,
|
||||
cancelled: 0,
|
||||
review: 0,
|
||||
completionPercentage: 0
|
||||
};
|
||||
|
||||
tasks.forEach((task) => {
|
||||
switch (task.status) {
|
||||
case 'done':
|
||||
stats.done++;
|
||||
break;
|
||||
case 'in-progress':
|
||||
stats.inProgress++;
|
||||
break;
|
||||
case 'pending':
|
||||
stats.pending++;
|
||||
break;
|
||||
case 'blocked':
|
||||
stats.blocked++;
|
||||
break;
|
||||
case 'deferred':
|
||||
stats.deferred++;
|
||||
break;
|
||||
case 'cancelled':
|
||||
stats.cancelled++;
|
||||
break;
|
||||
case 'review':
|
||||
stats.review = (stats.review || 0) + 1;
|
||||
break;
|
||||
}
|
||||
});
|
||||
|
||||
stats.completionPercentage =
|
||||
stats.total > 0 ? Math.round((stats.done / stats.total) * 100) : 0;
|
||||
|
||||
return stats;
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate subtask statistics from tasks
|
||||
*/
|
||||
export function calculateSubtaskStatistics(tasks: Task[]): TaskStatistics {
|
||||
const stats: TaskStatistics = {
|
||||
total: 0,
|
||||
done: 0,
|
||||
inProgress: 0,
|
||||
pending: 0,
|
||||
blocked: 0,
|
||||
deferred: 0,
|
||||
cancelled: 0,
|
||||
review: 0,
|
||||
completionPercentage: 0
|
||||
};
|
||||
|
||||
tasks.forEach((task) => {
|
||||
if (task.subtasks && task.subtasks.length > 0) {
|
||||
task.subtasks.forEach((subtask) => {
|
||||
stats.total++;
|
||||
switch (subtask.status) {
|
||||
case 'done':
|
||||
stats.done++;
|
||||
break;
|
||||
case 'in-progress':
|
||||
stats.inProgress++;
|
||||
break;
|
||||
case 'pending':
|
||||
stats.pending++;
|
||||
break;
|
||||
case 'blocked':
|
||||
stats.blocked++;
|
||||
break;
|
||||
case 'deferred':
|
||||
stats.deferred++;
|
||||
break;
|
||||
case 'cancelled':
|
||||
stats.cancelled++;
|
||||
break;
|
||||
case 'review':
|
||||
stats.review = (stats.review || 0) + 1;
|
||||
break;
|
||||
}
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
stats.completionPercentage =
|
||||
stats.total > 0 ? Math.round((stats.done / stats.total) * 100) : 0;
|
||||
|
||||
return stats;
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate dependency statistics
|
||||
*/
|
||||
export function calculateDependencyStatistics(
|
||||
tasks: Task[]
|
||||
): DependencyStatistics {
|
||||
const completedTaskIds = new Set(
|
||||
tasks.filter((t) => t.status === 'done').map((t) => t.id)
|
||||
);
|
||||
|
||||
const tasksWithNoDeps = tasks.filter(
|
||||
(t) =>
|
||||
t.status !== 'done' && (!t.dependencies || t.dependencies.length === 0)
|
||||
).length;
|
||||
|
||||
const tasksWithAllDepsSatisfied = tasks.filter(
|
||||
(t) =>
|
||||
t.status !== 'done' &&
|
||||
t.dependencies &&
|
||||
t.dependencies.length > 0 &&
|
||||
t.dependencies.every((depId) => completedTaskIds.has(depId))
|
||||
).length;
|
||||
|
||||
const tasksBlockedByDeps = tasks.filter(
|
||||
(t) =>
|
||||
t.status !== 'done' &&
|
||||
t.dependencies &&
|
||||
t.dependencies.length > 0 &&
|
||||
!t.dependencies.every((depId) => completedTaskIds.has(depId))
|
||||
).length;
|
||||
|
||||
// Calculate most depended-on task
|
||||
const dependencyCount: Record<string, number> = {};
|
||||
tasks.forEach((task) => {
|
||||
if (task.dependencies && task.dependencies.length > 0) {
|
||||
task.dependencies.forEach((depId) => {
|
||||
const key = String(depId);
|
||||
dependencyCount[key] = (dependencyCount[key] || 0) + 1;
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
let mostDependedOnTaskId: number | undefined;
|
||||
let mostDependedOnCount = 0;
|
||||
|
||||
for (const [taskId, count] of Object.entries(dependencyCount)) {
|
||||
if (count > mostDependedOnCount) {
|
||||
mostDependedOnCount = count;
|
||||
mostDependedOnTaskId = parseInt(taskId);
|
||||
}
|
||||
}
|
||||
|
||||
// Calculate average dependencies
|
||||
const totalDependencies = tasks.reduce(
|
||||
(sum, task) => sum + (task.dependencies ? task.dependencies.length : 0),
|
||||
0
|
||||
);
|
||||
const avgDependenciesPerTask =
|
||||
tasks.length > 0 ? totalDependencies / tasks.length : 0;
|
||||
|
||||
return {
|
||||
tasksWithNoDeps,
|
||||
tasksReadyToWork: tasksWithNoDeps + tasksWithAllDepsSatisfied,
|
||||
tasksBlockedByDeps,
|
||||
mostDependedOnTaskId,
|
||||
mostDependedOnCount,
|
||||
avgDependenciesPerTask
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get priority counts
|
||||
*/
|
||||
export function getPriorityBreakdown(
|
||||
tasks: Task[]
|
||||
): Record<TaskPriority, number> {
|
||||
const breakdown: Record<TaskPriority, number> = {
|
||||
critical: 0,
|
||||
high: 0,
|
||||
medium: 0,
|
||||
low: 0
|
||||
};
|
||||
|
||||
tasks.forEach((task) => {
|
||||
const priority = task.priority || 'medium';
|
||||
breakdown[priority]++;
|
||||
});
|
||||
|
||||
return breakdown;
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate status breakdown as percentages
|
||||
*/
|
||||
function calculateStatusBreakdown(stats: TaskStatistics): StatusBreakdown {
|
||||
if (stats.total === 0) return {};
|
||||
|
||||
return {
|
||||
'in-progress': (stats.inProgress / stats.total) * 100,
|
||||
pending: (stats.pending / stats.total) * 100,
|
||||
blocked: (stats.blocked / stats.total) * 100,
|
||||
deferred: (stats.deferred / stats.total) * 100,
|
||||
cancelled: (stats.cancelled / stats.total) * 100,
|
||||
review: ((stats.review || 0) / stats.total) * 100
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Format status counts in the correct order with colors
|
||||
* @param stats - The statistics object containing counts
|
||||
* @param isSubtask - Whether this is for subtasks (affects "Done" vs "Completed" label)
|
||||
*/
|
||||
function formatStatusLine(
|
||||
stats: TaskStatistics,
|
||||
isSubtask: boolean = false
|
||||
): string {
|
||||
const parts: string[] = [];
|
||||
|
||||
// Order: Done, Cancelled, Deferred, In Progress, Review, Pending, Blocked
|
||||
if (isSubtask) {
|
||||
parts.push(`Completed: ${chalk.green(`${stats.done}/${stats.total}`)}`);
|
||||
} else {
|
||||
parts.push(`Done: ${chalk.green(stats.done)}`);
|
||||
}
|
||||
|
||||
parts.push(`Cancelled: ${chalk.gray(stats.cancelled)}`);
|
||||
parts.push(`Deferred: ${chalk.gray(stats.deferred)}`);
|
||||
|
||||
// Add line break for second row
|
||||
const firstLine = parts.join(' ');
|
||||
parts.length = 0;
|
||||
|
||||
parts.push(`In Progress: ${chalk.blue(stats.inProgress)}`);
|
||||
parts.push(`Review: ${chalk.magenta(stats.review || 0)}`);
|
||||
parts.push(`Pending: ${chalk.yellow(stats.pending)}`);
|
||||
parts.push(`Blocked: ${chalk.red(stats.blocked)}`);
|
||||
|
||||
const secondLine = parts.join(' ');
|
||||
|
||||
return firstLine + '\n' + secondLine;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the project dashboard box
|
||||
*/
|
||||
export function displayProjectDashboard(
|
||||
taskStats: TaskStatistics,
|
||||
subtaskStats: TaskStatistics,
|
||||
priorityBreakdown: Record<TaskPriority, number>
|
||||
): string {
|
||||
// Calculate status breakdowns using the helper function
|
||||
const taskStatusBreakdown = calculateStatusBreakdown(taskStats);
|
||||
const subtaskStatusBreakdown = calculateStatusBreakdown(subtaskStats);
|
||||
|
||||
// Create progress bars with the breakdowns
|
||||
const taskProgressBar = createProgressBar(
|
||||
taskStats.completionPercentage,
|
||||
30,
|
||||
taskStatusBreakdown
|
||||
);
|
||||
const subtaskProgressBar = createProgressBar(
|
||||
subtaskStats.completionPercentage,
|
||||
30,
|
||||
subtaskStatusBreakdown
|
||||
);
|
||||
|
||||
const taskPercentage = `${taskStats.completionPercentage}% ${taskStats.done}/${taskStats.total}`;
|
||||
const subtaskPercentage = `${subtaskStats.completionPercentage}% ${subtaskStats.done}/${subtaskStats.total}`;
|
||||
|
||||
const content =
|
||||
chalk.white.bold('Project Dashboard') +
|
||||
'\n' +
|
||||
`Tasks Progress: ${taskProgressBar} ${chalk.yellow(taskPercentage)}\n` +
|
||||
formatStatusLine(taskStats, false) +
|
||||
'\n\n' +
|
||||
`Subtasks Progress: ${subtaskProgressBar} ${chalk.cyan(subtaskPercentage)}\n` +
|
||||
formatStatusLine(subtaskStats, true) +
|
||||
'\n\n' +
|
||||
chalk.cyan.bold('Priority Breakdown:') +
|
||||
'\n' +
|
||||
`${chalk.red('•')} ${chalk.white('High priority:')} ${priorityBreakdown.high}\n` +
|
||||
`${chalk.yellow('•')} ${chalk.white('Medium priority:')} ${priorityBreakdown.medium}\n` +
|
||||
`${chalk.green('•')} ${chalk.white('Low priority:')} ${priorityBreakdown.low}`;
|
||||
|
||||
return content;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the dependency dashboard box
|
||||
*/
|
||||
export function displayDependencyDashboard(
|
||||
depStats: DependencyStatistics,
|
||||
nextTask?: NextTaskInfo
|
||||
): string {
|
||||
const content =
|
||||
chalk.white.bold('Dependency Status & Next Task') +
|
||||
'\n' +
|
||||
chalk.cyan.bold('Dependency Metrics:') +
|
||||
'\n' +
|
||||
`${chalk.green('•')} ${chalk.white('Tasks with no dependencies:')} ${depStats.tasksWithNoDeps}\n` +
|
||||
`${chalk.green('•')} ${chalk.white('Tasks ready to work on:')} ${depStats.tasksReadyToWork}\n` +
|
||||
`${chalk.yellow('•')} ${chalk.white('Tasks blocked by dependencies:')} ${depStats.tasksBlockedByDeps}\n` +
|
||||
`${chalk.magenta('•')} ${chalk.white('Most depended-on task:')} ${
|
||||
depStats.mostDependedOnTaskId
|
||||
? chalk.cyan(
|
||||
`#${depStats.mostDependedOnTaskId} (${depStats.mostDependedOnCount} dependents)`
|
||||
)
|
||||
: chalk.gray('None')
|
||||
}\n` +
|
||||
`${chalk.blue('•')} ${chalk.white('Avg dependencies per task:')} ${depStats.avgDependenciesPerTask.toFixed(1)}\n\n` +
|
||||
chalk.cyan.bold('Next Task to Work On:') +
|
||||
'\n' +
|
||||
`ID: ${nextTask ? chalk.cyan(String(nextTask.id)) : chalk.gray('N/A')} - ${
|
||||
nextTask
|
||||
? chalk.white.bold(nextTask.title)
|
||||
: chalk.yellow('No task available')
|
||||
}\n` +
|
||||
`Priority: ${nextTask?.priority || chalk.gray('N/A')} Dependencies: ${
|
||||
nextTask?.dependencies?.length
|
||||
? chalk.cyan(nextTask.dependencies.join(', '))
|
||||
: chalk.gray('None')
|
||||
}\n` +
|
||||
`Complexity: ${nextTask?.complexity !== undefined ? getComplexityWithColor(nextTask.complexity) : chalk.gray('N/A')}`;
|
||||
|
||||
return content;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display dashboard boxes side by side or stacked
|
||||
*/
|
||||
export function displayDashboards(
|
||||
taskStats: TaskStatistics,
|
||||
subtaskStats: TaskStatistics,
|
||||
priorityBreakdown: Record<TaskPriority, number>,
|
||||
depStats: DependencyStatistics,
|
||||
nextTask?: NextTaskInfo
|
||||
): void {
|
||||
const projectDashboardContent = displayProjectDashboard(
|
||||
taskStats,
|
||||
subtaskStats,
|
||||
priorityBreakdown
|
||||
);
|
||||
const dependencyDashboardContent = displayDependencyDashboard(
|
||||
depStats,
|
||||
nextTask
|
||||
);
|
||||
|
||||
// Get terminal width
|
||||
const terminalWidth = process.stdout.columns || 80;
|
||||
const minDashboardWidth = 50;
|
||||
const minDependencyWidth = 50;
|
||||
const totalMinWidth = minDashboardWidth + minDependencyWidth + 4;
|
||||
|
||||
// If terminal is wide enough, show side by side
|
||||
if (terminalWidth >= totalMinWidth) {
|
||||
const halfWidth = Math.floor(terminalWidth / 2);
|
||||
const boxContentWidth = halfWidth - 4;
|
||||
|
||||
const dashboardBox = boxen(projectDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round',
|
||||
width: boxContentWidth,
|
||||
dimBorder: false
|
||||
});
|
||||
|
||||
const dependencyBox = boxen(dependencyDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'magenta',
|
||||
borderStyle: 'round',
|
||||
width: boxContentWidth,
|
||||
dimBorder: false
|
||||
});
|
||||
|
||||
// Create side-by-side layout
|
||||
const dashboardLines = dashboardBox.split('\n');
|
||||
const dependencyLines = dependencyBox.split('\n');
|
||||
const maxHeight = Math.max(dashboardLines.length, dependencyLines.length);
|
||||
|
||||
const combinedLines = [];
|
||||
for (let i = 0; i < maxHeight; i++) {
|
||||
const dashLine = i < dashboardLines.length ? dashboardLines[i] : '';
|
||||
const depLine = i < dependencyLines.length ? dependencyLines[i] : '';
|
||||
const paddedDashLine = dashLine.padEnd(halfWidth, ' ');
|
||||
combinedLines.push(paddedDashLine + depLine);
|
||||
}
|
||||
|
||||
console.log(combinedLines.join('\n'));
|
||||
} else {
|
||||
// Show stacked vertically
|
||||
const dashboardBox = boxen(projectDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 0, bottom: 1 }
|
||||
});
|
||||
|
||||
const dependencyBox = boxen(dependencyDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'magenta',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 0, bottom: 1 }
|
||||
});
|
||||
|
||||
console.log(dashboardBox);
|
||||
console.log(dependencyBox);
|
||||
}
|
||||
}
|
||||
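Note: a sketch (not part of the diff) showing how the dashboard helpers above compose for a list view; the renderDashboards wrapper name is hypothetical.

import type { Task } from '@tm/core/types';
import {
	calculateTaskStatistics,
	calculateSubtaskStatistics,
	calculateDependencyStatistics,
	getPriorityBreakdown,
	displayDashboards
} from './dashboard.component.js';

// Hypothetical wrapper: compute every statistic the dashboards need from one task list.
export function renderDashboards(tasks: Task[]): void {
	const taskStats = calculateTaskStatistics(tasks);
	const subtaskStats = calculateSubtaskStatistics(tasks);
	const priorityBreakdown = getPriorityBreakdown(tasks);
	const depStats = calculateDependencyStatistics(tasks);

	// nextTask is optional; callers can pass a NextTaskInfo as the fifth argument.
	displayDashboards(taskStats, subtaskStats, priorityBreakdown, depStats);
}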
apps/cli/src/ui/components/header.component.ts (new file, 45 additions)
@@ -0,0 +1,45 @@
/**
 * @fileoverview Task Master header component
 * Displays the banner, version, project info, and file path
 */

import chalk from 'chalk';

/**
 * Header configuration options
 */
export interface HeaderOptions {
	title?: string;
	tag?: string;
	filePath?: string;
}

/**
 * Display the Task Master header with project info
 */
export function displayHeader(options: HeaderOptions = {}): void {
	const { filePath, tag } = options;

	// Display tag and file path info
	if (tag) {
		let tagInfo = '';

		if (tag && tag !== 'master') {
			tagInfo = `🏷 tag: ${chalk.cyan(tag)}`;
		} else {
			tagInfo = `🏷 tag: ${chalk.cyan('master')}`;
		}

		console.log(tagInfo);

		if (filePath) {
			// Convert to absolute path if it's relative
			const absolutePath = filePath.startsWith('/')
				? filePath
				: `${process.cwd()}/${filePath}`;
			console.log(`Listing tasks from: ${chalk.dim(absolutePath)}`);
		}

		console.log(); // Empty line for spacing
	}
}
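Note: a short illustrative call (not part of the diff); the tag and file path values are made up.

import { displayHeader } from './header.component.js';

// Prints the tag line and, when given, the resolved tasks file path.
displayHeader({ tag: 'feature-auth', filePath: '.taskmaster/tasks/tasks.json' });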
apps/cli/src/ui/components/index.ts (new file, 9 additions)
@@ -0,0 +1,9 @@
/**
 * @fileoverview UI components exports
 */

export * from './header.component.js';
export * from './dashboard.component.js';
export * from './next-task.component.js';
export * from './suggested-steps.component.js';
export * from './task-detail.component.js';
apps/cli/src/ui/components/next-task.component.ts (new file, 141 additions)
@@ -0,0 +1,141 @@
/**
 * @fileoverview Next task recommendation component
 * Displays detailed information about the recommended next task
 */

import chalk from 'chalk';
import boxen from 'boxen';
import type { Task } from '@tm/core/types';
import { getComplexityWithColor } from '../../utils/ui.js';

/**
 * Next task display options
 */
export interface NextTaskDisplayOptions {
	id: string | number;
	title: string;
	priority?: string;
	status?: string;
	dependencies?: (string | number)[];
	description?: string;
	complexity?: number;
}

/**
 * Display the recommended next task section
 */
export function displayRecommendedNextTask(
	task: NextTaskDisplayOptions | undefined
): void {
	if (!task) {
		// If no task available, show a message
		console.log(
			boxen(
				chalk.yellow(
					'No tasks available to work on. All tasks are either completed, blocked by dependencies, or in progress.'
				),
				{
					padding: 1,
					borderStyle: 'round',
					borderColor: 'yellow',
					title: '⚠ NO TASKS AVAILABLE ⚠',
					titleAlignment: 'center'
				}
			)
		);
		return;
	}

	// Build the content for the next task box
	const content = [];

	// Task header with ID and title
	content.push(
		`🔥 ${chalk.hex('#FF8800').bold('Next Task to Work On:')} ${chalk.yellow(`#${task.id}`)}${chalk.hex('#FF8800').bold(` - ${task.title}`)}`
	);
	content.push('');

	// Priority and Status line
	const statusLine = [];
	if (task.priority) {
		const priorityColor =
			task.priority === 'high'
				? chalk.red
				: task.priority === 'medium'
					? chalk.yellow
					: chalk.gray;
		statusLine.push(`Priority: ${priorityColor.bold(task.priority)}`);
	}
	if (task.status) {
		const statusDisplay =
			task.status === 'pending'
				? chalk.yellow('○ pending')
				: task.status === 'in-progress'
					? chalk.blue('▶ in-progress')
					: chalk.gray(task.status);
		statusLine.push(`Status: ${statusDisplay}`);
	}
	content.push(statusLine.join(' '));

	// Dependencies
	const depsDisplay =
		!task.dependencies || task.dependencies.length === 0
			? chalk.gray('None')
			: chalk.cyan(task.dependencies.join(', '));
	content.push(`Dependencies: ${depsDisplay}`);

	// Complexity with color and label
	if (typeof task.complexity === 'number') {
		content.push(`Complexity: ${getComplexityWithColor(task.complexity)}`);
	}

	// Description if available
	if (task.description) {
		content.push('');
		content.push(`Description: ${chalk.white(task.description)}`);
	}

	// Action commands
	content.push('');
	content.push(
		`${chalk.cyan('Start working:')} ${chalk.yellow(`task-master set-status --id=${task.id} --status=in-progress`)}`
	);
	content.push(
		`${chalk.cyan('View details:')} ${chalk.yellow(`task-master show ${task.id}`)}`
	);

	// Display in a styled box with orange border
	console.log(
		boxen(content.join('\n'), {
			padding: 1,
			margin: { top: 1, bottom: 1 },
			borderStyle: 'round',
			borderColor: '#FFA500', // Orange color
			title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
			titleAlignment: 'center',
			width: process.stdout.columns * 0.97,
			fullscreen: false
		})
	);
}

/**
 * Get task description from the full task object
 */
export function getTaskDescription(task: Task): string | undefined {
	// Try to get description from the task
	// This could be from task.description or the first line of task.details
	if ('description' in task && task.description) {
		return task.description as string;
	}

	if ('details' in task && task.details) {
		// Take first sentence or line from details
		const details = task.details as string;
		const firstLine = details.split('\n')[0];
		const firstSentence = firstLine.split('.')[0];
		return firstSentence;
	}

	return undefined;
}
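Note: a sketch (not part of the diff) mapping a core Task onto NextTaskDisplayOptions; the showNextTask wrapper is hypothetical and assumes Task carries the priority, dependencies, and complexity fields the other components above already read.

import type { Task } from '@tm/core/types';
import {
	displayRecommendedNextTask,
	getTaskDescription
} from './next-task.component.js';

// Hypothetical wrapper: adapt a core Task to the display options shape.
export function showNextTask(nextTask: Task | undefined): void {
	if (!nextTask) {
		displayRecommendedNextTask(undefined); // renders the "no tasks available" box
		return;
	}

	displayRecommendedNextTask({
		id: nextTask.id,
		title: nextTask.title,
		priority: nextTask.priority,
		status: nextTask.status,
		dependencies: nextTask.dependencies,
		description: getTaskDescription(nextTask),
		complexity:
			typeof nextTask.complexity === 'number' ? nextTask.complexity : undefined
	});
}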
apps/cli/src/ui/components/suggested-steps.component.ts (new file, 31 additions)
@@ -0,0 +1,31 @@
/**
 * @fileoverview Suggested next steps component
 * Displays helpful command suggestions at the end of the list
 */

import chalk from 'chalk';
import boxen from 'boxen';

/**
 * Display suggested next steps section
 */
export function displaySuggestedNextSteps(): void {
	const steps = [
		`${chalk.cyan('1.')} Run ${chalk.yellow('task-master next')} to see what to work on next`,
		`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks`,
		`${chalk.cyan('3.')} Run ${chalk.yellow('task-master set-status --id=<id> --status=done')} to mark a task as complete`
	];

	console.log(
		boxen(
			chalk.white.bold('Suggested Next Steps:') + '\n\n' + steps.join('\n'),
			{
				padding: 1,
				margin: { top: 0, bottom: 1 },
				borderStyle: 'round',
				borderColor: 'gray',
				width: process.stdout.columns * 0.97
			}
		)
	);
}
apps/cli/src/ui/components/task-detail.component.ts (new file, 340 additions)
@@ -0,0 +1,340 @@
|
||||
/**
|
||||
* @fileoverview Task detail component for show command
|
||||
* Displays detailed task information in a structured format
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import Table from 'cli-table3';
|
||||
import { marked, MarkedExtension } from 'marked';
|
||||
import { markedTerminal } from 'marked-terminal';
|
||||
import type { Task } from '@tm/core/types';
|
||||
import {
|
||||
getStatusWithColor,
|
||||
getPriorityWithColor,
|
||||
getComplexityWithColor
|
||||
} from '../../utils/ui.js';
|
||||
|
||||
// Configure marked to use terminal renderer with subtle colors
|
||||
marked.use(
|
||||
markedTerminal({
|
||||
// More subtle colors that match the overall design
|
||||
code: (code: string) => {
|
||||
// Custom code block handler to preserve formatting
|
||||
return code
|
||||
.split('\n')
|
||||
.map((line) => ' ' + chalk.cyan(line))
|
||||
.join('\n');
|
||||
},
|
||||
blockquote: chalk.gray.italic,
|
||||
html: chalk.gray,
|
||||
heading: chalk.white.bold, // White bold for headings
|
||||
hr: chalk.gray,
|
||||
listitem: chalk.white, // White for list items
|
||||
paragraph: chalk.white, // White for paragraphs (default text color)
|
||||
strong: chalk.white.bold, // White bold for strong text
|
||||
em: chalk.white.italic, // White italic for emphasis
|
||||
codespan: chalk.cyan, // Cyan for inline code (no background)
|
||||
del: chalk.dim.strikethrough,
|
||||
link: chalk.blue,
|
||||
href: chalk.blue.underline,
|
||||
// Add more explicit code block handling
|
||||
showSectionPrefix: false,
|
||||
unescape: true,
|
||||
emoji: false,
|
||||
// Try to preserve whitespace in code blocks
|
||||
tab: 4,
|
||||
width: 120
|
||||
}) as MarkedExtension
|
||||
);
|
||||
|
||||
// Also set marked options to preserve whitespace
|
||||
marked.setOptions({
|
||||
breaks: true,
|
||||
gfm: true
|
||||
});
|
||||
|
||||
/**
|
||||
* Display the task header with tag
|
||||
*/
|
||||
export function displayTaskHeader(
|
||||
taskId: string | number,
|
||||
title: string
|
||||
): void {
|
||||
// Display task header box
|
||||
console.log(
|
||||
boxen(chalk.white.bold(`Task: #${taskId} - ${title}`), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display task properties in a table format
|
||||
*/
|
||||
export function displayTaskProperties(task: Task): void {
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
// Create table for task properties - simple 2-column layout
|
||||
const table = new Table({
|
||||
head: [],
|
||||
style: {
|
||||
head: [],
|
||||
border: ['grey']
|
||||
},
|
||||
colWidths: [
|
||||
Math.floor(terminalWidth * 0.2),
|
||||
Math.floor(terminalWidth * 0.8)
|
||||
],
|
||||
wordWrap: true
|
||||
});
|
||||
|
||||
const deps =
|
||||
task.dependencies && task.dependencies.length > 0
|
||||
? task.dependencies.map((d) => String(d)).join(', ')
|
||||
: 'None';
|
||||
|
||||
// Build the left column (labels) and right column (values)
|
||||
const labels = [
|
||||
chalk.cyan('ID:'),
|
||||
chalk.cyan('Title:'),
|
||||
chalk.cyan('Status:'),
|
||||
chalk.cyan('Priority:'),
|
||||
chalk.cyan('Dependencies:'),
|
||||
chalk.cyan('Complexity:'),
|
||||
chalk.cyan('Description:')
|
||||
].join('\n');
|
||||
|
||||
const values = [
|
||||
String(task.id),
|
||||
task.title,
|
||||
getStatusWithColor(task.status),
|
||||
getPriorityWithColor(task.priority),
|
||||
deps,
|
||||
typeof task.complexity === 'number'
|
||||
? getComplexityWithColor(task.complexity)
|
||||
: chalk.gray('N/A'),
|
||||
task.description || ''
|
||||
].join('\n');
|
||||
|
||||
table.push([labels, values]);
|
||||
|
||||
console.log(table.toString());
|
||||
}
|
||||
|
||||
/**
|
||||
* Display implementation details in a box
|
||||
*/
|
||||
export function displayImplementationDetails(details: string): void {
|
||||
// Handle all escaped characters properly
|
||||
const cleanDetails = details
|
||||
.replace(/\\n/g, '\n') // Convert \n to actual newlines
|
||||
.replace(/\\t/g, '\t') // Convert \t to actual tabs
|
||||
.replace(/\\"/g, '"') // Convert \" to actual quotes
|
||||
.replace(/\\\\/g, '\\'); // Convert \\ to single backslash
|
||||
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
|
||||
// Parse markdown to terminal-friendly format
|
||||
const markdownResult = marked(cleanDetails);
|
||||
const formattedDetails =
|
||||
typeof markdownResult === 'string' ? markdownResult.trim() : cleanDetails; // Fallback to original if Promise
|
||||
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Implementation Details:') + '\n\n' + formattedDetails,
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'cyan', // Changed to cyan to match the original
|
||||
width: terminalWidth // Fixed width to match the original
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display test strategy in a box
|
||||
*/
|
||||
export function displayTestStrategy(testStrategy: string): void {
|
||||
// Handle all escaped characters properly (same as implementation details)
|
||||
const cleanStrategy = testStrategy
|
||||
.replace(/\\n/g, '\n') // Convert \n to actual newlines
|
||||
.replace(/\\t/g, '\t') // Convert \t to actual tabs
|
||||
.replace(/\\"/g, '"') // Convert \" to actual quotes
|
||||
.replace(/\\\\/g, '\\'); // Convert \\ to single backslash
|
||||
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
|
||||
// Parse markdown to terminal-friendly format (same as implementation details)
|
||||
const markdownResult = marked(cleanStrategy);
|
||||
const formattedStrategy =
|
||||
typeof markdownResult === 'string' ? markdownResult.trim() : cleanStrategy; // Fallback to original if Promise
|
||||
|
||||
console.log(
|
||||
boxen(chalk.white.bold('Test Strategy:') + '\n\n' + formattedStrategy, {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'cyan', // Changed to cyan to match implementation details
|
||||
width: terminalWidth
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display subtasks in a table format
|
||||
*/
|
||||
export function displaySubtasks(
|
||||
subtasks: Array<{
|
||||
id: string | number;
|
||||
title: string;
|
||||
status: any;
|
||||
description?: string;
|
||||
dependencies?: string[];
|
||||
}>
|
||||
): void {
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
// Display subtasks header
|
||||
console.log(
|
||||
boxen(chalk.magenta.bold('Subtasks'), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'magenta',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1, bottom: 0 }
|
||||
})
|
||||
);
|
||||
|
||||
// Create subtasks table
|
||||
const table = new Table({
|
||||
head: [
|
||||
chalk.magenta.bold('ID'),
|
||||
chalk.magenta.bold('Status'),
|
||||
chalk.magenta.bold('Title'),
|
||||
chalk.magenta.bold('Deps')
|
||||
],
|
||||
style: {
|
||||
head: [],
|
||||
border: ['grey']
|
||||
},
|
||||
colWidths: [
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.15),
|
||||
Math.floor(terminalWidth * 0.6),
|
||||
Math.floor(terminalWidth * 0.15)
|
||||
],
|
||||
wordWrap: true
|
||||
});
|
||||
|
||||
subtasks.forEach((subtask) => {
|
||||
const subtaskId = String(subtask.id);
|
||||
|
||||
// Format dependencies
|
||||
const deps =
|
||||
subtask.dependencies && subtask.dependencies.length > 0
|
||||
? subtask.dependencies.join(', ')
|
||||
: 'None';
|
||||
|
||||
table.push([
|
||||
subtaskId,
|
||||
getStatusWithColor(subtask.status),
|
||||
subtask.title,
|
||||
deps
|
||||
]);
|
||||
});
|
||||
|
||||
console.log(table.toString());
|
||||
}
|
||||
|
||||
/**
|
||||
* Display suggested actions
|
||||
*/
|
||||
export function displaySuggestedActions(taskId: string | number): void {
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Suggested Actions:') +
|
||||
'\n\n' +
|
||||
`${chalk.cyan('1.')} Run ${chalk.yellow(`task-master set-status --id=${taskId} --status=in-progress`)} to start working\n` +
|
||||
`${chalk.cyan('2.')} Run ${chalk.yellow(`task-master expand --id=${taskId}`)} to break down into subtasks\n` +
|
||||
`${chalk.cyan('3.')} Run ${chalk.yellow(`task-master update-task --id=${taskId} --prompt="..."`)} to update details`,
|
||||
{
|
||||
padding: 1,
|
||||
margin: { top: 1 },
|
||||
borderStyle: 'round',
|
||||
borderColor: 'green',
|
||||
width: process.stdout.columns * 0.95 || 100
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display complete task details - used by both show and start commands
|
||||
*/
|
||||
export function displayTaskDetails(
|
||||
task: Task,
|
||||
options?: {
|
||||
statusFilter?: string;
|
||||
showSuggestedActions?: boolean;
|
||||
customHeader?: string;
|
||||
headerColor?: string;
|
||||
}
|
||||
): void {
|
||||
const {
|
||||
statusFilter,
|
||||
showSuggestedActions = false,
|
||||
customHeader,
|
||||
headerColor = 'blue'
|
||||
} = options || {};
|
||||
|
||||
// Display header - either custom or default
|
||||
if (customHeader) {
|
||||
console.log(
|
||||
boxen(chalk.white.bold(customHeader), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: headerColor,
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
})
|
||||
);
|
||||
} else {
|
||||
displayTaskHeader(task.id, task.title);
|
||||
}
|
||||
|
||||
// Display task properties in table format
|
||||
displayTaskProperties(task);
|
||||
|
||||
// Display implementation details if available
|
||||
if (task.details) {
|
||||
console.log(); // Empty line for spacing
|
||||
displayImplementationDetails(task.details);
|
||||
}
|
||||
|
||||
// Display test strategy if available
|
||||
if ('testStrategy' in task && task.testStrategy) {
|
||||
console.log(); // Empty line for spacing
|
||||
displayTestStrategy(task.testStrategy as string);
|
||||
}
|
||||
|
||||
// Display subtasks if available
|
||||
if (task.subtasks && task.subtasks.length > 0) {
|
||||
// Filter subtasks by status if provided
|
||||
const filteredSubtasks = statusFilter
|
||||
? task.subtasks.filter((sub) => sub.status === statusFilter)
|
||||
: task.subtasks;
|
||||
|
||||
if (filteredSubtasks.length === 0 && statusFilter) {
|
||||
console.log(); // Empty line for spacing
|
||||
console.log(chalk.gray(` No subtasks with status '${statusFilter}'`));
|
||||
} else if (filteredSubtasks.length > 0) {
|
||||
console.log(); // Empty line for spacing
|
||||
displaySubtasks(filteredSubtasks);
|
||||
}
|
||||
}
|
||||
|
||||
// Display suggested actions if requested
|
||||
if (showSuggestedActions) {
|
||||
console.log(); // Empty line for spacing
|
||||
displaySuggestedActions(task.id);
|
||||
}
|
||||
}
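// Usage sketch (illustrative): render a task the way the `show` command might,
// limiting the subtask table to pending subtasks and appending suggested actions:
//   displayTaskDetails(task, {
//     statusFilter: 'pending',
//     showSuggestedActions: true
//   });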
|
||||
9
apps/cli/src/ui/index.ts
Normal file
@@ -0,0 +1,9 @@
|
||||
/**
|
||||
* @fileoverview Main UI exports
|
||||
*/
|
||||
|
||||
// Export all components
|
||||
export * from './components/index.js';
|
||||
|
||||
// Re-export existing UI utilities
|
||||
export * from '../utils/ui.js';
|
||||
377
apps/cli/src/utils/auto-update.ts
Normal file
@@ -0,0 +1,377 @@
|
||||
/**
|
||||
* @fileoverview Auto-update utilities for task-master-ai CLI
|
||||
*/
|
||||
|
||||
import { spawn } from 'child_process';
|
||||
import https from 'https';
|
||||
import chalk from 'chalk';
|
||||
import ora from 'ora';
|
||||
import boxen from 'boxen';
|
||||
|
||||
export interface UpdateInfo {
|
||||
currentVersion: string;
|
||||
latestVersion: string;
|
||||
needsUpdate: boolean;
|
||||
highlights?: string[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current version from build-time injected environment variable
|
||||
*/
|
||||
function getCurrentVersion(): string {
|
||||
// Version is injected at build time via TM_PUBLIC_VERSION
|
||||
const version = process.env.TM_PUBLIC_VERSION;
|
||||
if (version && version !== 'unknown') {
|
||||
return version;
|
||||
}
|
||||
|
||||
// Fallback for development or if injection failed
|
||||
console.warn('Could not read version from TM_PUBLIC_VERSION, using fallback');
|
||||
return '0.0.0';
|
||||
}
|
||||
|
||||
/**
|
||||
* Compare semantic versions with proper pre-release handling
|
||||
* @param v1 - First version
|
||||
* @param v2 - Second version
|
||||
* @returns -1 if v1 < v2, 0 if v1 = v2, 1 if v1 > v2
|
||||
*/
|
||||
export function compareVersions(v1: string, v2: string): number {
|
||||
const toParts = (v: string) => {
|
||||
const [core, pre = ''] = v.split('-', 2);
|
||||
const nums = core.split('.').map((n) => Number.parseInt(n, 10) || 0);
|
||||
return { nums, pre };
|
||||
};
|
||||
|
||||
const a = toParts(v1);
|
||||
const b = toParts(v2);
|
||||
const len = Math.max(a.nums.length, b.nums.length);
|
||||
|
||||
// Compare numeric parts
|
||||
for (let i = 0; i < len; i++) {
|
||||
const d = (a.nums[i] || 0) - (b.nums[i] || 0);
|
||||
if (d !== 0) return d < 0 ? -1 : 1;
|
||||
}
|
||||
|
||||
// Handle pre-release comparison
|
||||
if (a.pre && !b.pre) return -1; // prerelease < release
|
||||
if (!a.pre && b.pre) return 1; // release > prerelease
|
||||
if (a.pre === b.pre) return 0; // same or both empty
|
||||
return a.pre < b.pre ? -1 : 1; // basic prerelease tie-break
|
||||
}
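// Illustrative results (not exhaustive): numeric parts compare left to right and
// a pre-release sorts before its corresponding release:
//   compareVersions('1.2.0', '1.1.9');      //  1
//   compareVersions('1.2.0-rc.1', '1.2.0'); // -1
//   compareVersions('1.2.0', '1.2.0');      //  0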
|
||||
|
||||
/**
|
||||
* Fetch CHANGELOG.md from GitHub and extract highlights for a specific version
|
||||
*/
|
||||
async function fetchChangelogHighlights(version: string): Promise<string[]> {
|
||||
return new Promise((resolve) => {
|
||||
const options = {
|
||||
hostname: 'raw.githubusercontent.com',
|
||||
path: '/eyaltoledano/claude-task-master/main/CHANGELOG.md',
|
||||
method: 'GET',
|
||||
headers: {
|
||||
'User-Agent': `task-master-ai/${version}`
|
||||
}
|
||||
};
|
||||
|
||||
const req = https.request(options, (res) => {
|
||||
let data = '';
|
||||
|
||||
res.on('data', (chunk) => {
|
||||
data += chunk;
|
||||
});
|
||||
|
||||
res.on('end', () => {
|
||||
try {
|
||||
if (res.statusCode !== 200) {
|
||||
resolve([]);
|
||||
return;
|
||||
}
|
||||
|
||||
const highlights = parseChangelogHighlights(data, version);
|
||||
resolve(highlights);
|
||||
} catch (error) {
|
||||
resolve([]);
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
req.on('error', () => {
|
||||
resolve([]);
|
||||
});
|
||||
|
||||
req.setTimeout(3000, () => {
|
||||
req.destroy();
|
||||
resolve([]);
|
||||
});
|
||||
|
||||
req.end();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse changelog markdown to extract Minor Changes for a specific version
|
||||
* @internal - Exported for testing purposes only
|
||||
*/
|
||||
export function parseChangelogHighlights(
|
||||
changelog: string,
|
||||
version: string
|
||||
): string[] {
|
||||
try {
|
||||
// Validate version format (basic semver pattern) to prevent ReDoS
|
||||
if (!/^\d+\.\d+\.\d+(-[a-zA-Z0-9.-]+)?$/.test(version)) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Find the version section
|
||||
const versionRegex = new RegExp(
|
||||
`## ${version.replace(/\./g, '\\.')}\\s*\\n`,
|
||||
'i'
|
||||
);
|
||||
const versionMatch = changelog.match(versionRegex);
|
||||
|
||||
if (!versionMatch) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Extract content from this version to the next version heading
|
||||
const startIdx = versionMatch.index! + versionMatch[0].length;
|
||||
const nextVersionIdx = changelog.indexOf('\n## ', startIdx);
|
||||
const versionContent =
|
||||
nextVersionIdx > 0
|
||||
? changelog.slice(startIdx, nextVersionIdx)
|
||||
: changelog.slice(startIdx);
|
||||
|
||||
// Find Minor Changes section
|
||||
const minorChangesMatch = versionContent.match(
|
||||
/### Minor Changes\s*\n([\s\S]*?)(?=\n###|\n##|$)/i
|
||||
);
|
||||
|
||||
if (!minorChangesMatch) {
|
||||
return [];
|
||||
}
|
||||
|
||||
const minorChangesContent = minorChangesMatch[1];
|
||||
const highlights: string[] = [];
|
||||
|
||||
// Extract all bullet points (lines starting with -)
|
||||
// Format: - [#PR](...) Thanks [@author]! - Description
|
||||
const bulletRegex = /^-\s+\[#\d+\][^\n]*?!\s+-\s+(.+?)$/gm;
|
||||
let match;
|
||||
|
||||
while ((match = bulletRegex.exec(minorChangesContent)) !== null) {
|
||||
const desc = match[1].trim();
|
||||
highlights.push(desc);
|
||||
}
|
||||
|
||||
return highlights;
|
||||
} catch (error) {
|
||||
return [];
|
||||
}
|
||||
}
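// Example with a hypothetical changelog entry: given a section such as
//   ## 1.2.0
//   ### Minor Changes
//   - [#123](https://github.com/...) Thanks [@someone]! - Add kanban board view
// parseChangelogHighlights(changelog, '1.2.0') returns ['Add kanban board view'].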
|
||||
|
||||
/**
|
||||
* Check for newer version of task-master-ai
|
||||
*/
|
||||
export async function checkForUpdate(
|
||||
currentVersionOverride?: string
|
||||
): Promise<UpdateInfo> {
|
||||
const currentVersion = currentVersionOverride || getCurrentVersion();
|
||||
|
||||
return new Promise((resolve) => {
|
||||
const options = {
|
||||
hostname: 'registry.npmjs.org',
|
||||
path: '/task-master-ai',
|
||||
method: 'GET',
|
||||
headers: {
|
||||
Accept: 'application/vnd.npm.install-v1+json',
|
||||
'User-Agent': `task-master-ai/${currentVersion}`
|
||||
}
|
||||
};
|
||||
|
||||
const req = https.request(options, (res) => {
|
||||
let data = '';
|
||||
|
||||
res.on('data', (chunk) => {
|
||||
data += chunk;
|
||||
});
|
||||
|
||||
res.on('end', async () => {
|
||||
try {
|
||||
if (res.statusCode !== 200)
|
||||
throw new Error(`npm registry status ${res.statusCode}`);
|
||||
const npmData = JSON.parse(data);
|
||||
const latestVersion = npmData['dist-tags']?.latest || currentVersion;
|
||||
|
||||
const needsUpdate =
|
||||
compareVersions(currentVersion, latestVersion) < 0;
|
||||
|
||||
// Fetch highlights if update is needed
|
||||
let highlights: string[] | undefined;
|
||||
if (needsUpdate) {
|
||||
highlights = await fetchChangelogHighlights(latestVersion);
|
||||
}
|
||||
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion,
|
||||
needsUpdate,
|
||||
highlights
|
||||
});
|
||||
} catch (error) {
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion: currentVersion,
|
||||
needsUpdate: false
|
||||
});
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
req.on('error', () => {
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion: currentVersion,
|
||||
needsUpdate: false
|
||||
});
|
||||
});
|
||||
|
||||
req.setTimeout(3000, () => {
|
||||
req.destroy();
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion: currentVersion,
|
||||
needsUpdate: false
|
||||
});
|
||||
});
|
||||
|
||||
req.end();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Display upgrade notification message
|
||||
*/
|
||||
export function displayUpgradeNotification(
|
||||
currentVersion: string,
|
||||
latestVersion: string,
|
||||
highlights?: string[]
|
||||
) {
|
||||
let content = `${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}`;
|
||||
|
||||
if (highlights && highlights.length > 0) {
|
||||
content += '\n\n' + chalk.bold("What's New:");
|
||||
for (const highlight of highlights) {
|
||||
content += '\n' + chalk.cyan('• ') + highlight;
|
||||
}
|
||||
content += '\n\n' + 'Auto-updating to the latest version...';
|
||||
} else {
|
||||
content +=
|
||||
'\n\n' +
|
||||
'Auto-updating to the latest version with new features and bug fixes...';
|
||||
}
|
||||
|
||||
const message = boxen(content, {
|
||||
padding: 1,
|
||||
margin: { top: 1, bottom: 1 },
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round'
|
||||
});
|
||||
|
||||
console.log(message);
|
||||
}
|
||||
|
||||
/**
|
||||
* Automatically update task-master-ai to the latest version
|
||||
*/
|
||||
export async function performAutoUpdate(
|
||||
latestVersion: string
|
||||
): Promise<boolean> {
|
||||
if (
|
||||
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1' ||
|
||||
process.env.CI ||
|
||||
process.env.NODE_ENV === 'test'
|
||||
) {
|
||||
const reason =
|
||||
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1'
|
||||
? 'TASKMASTER_SKIP_AUTO_UPDATE=1'
|
||||
: process.env.CI
|
||||
? 'CI environment'
|
||||
: 'NODE_ENV=test';
|
||||
console.log(chalk.dim(`Skipping auto-update (${reason})`));
|
||||
return false;
|
||||
}
|
||||
const spinner = ora({
|
||||
text: chalk.blue(
|
||||
`Updating task-master-ai to version ${chalk.green(latestVersion)}`
|
||||
),
|
||||
spinner: 'dots',
|
||||
color: 'blue'
|
||||
}).start();
|
||||
|
||||
return new Promise((resolve) => {
|
||||
const updateProcess = spawn(
|
||||
'npm',
|
||||
[
|
||||
'install',
|
||||
'-g',
|
||||
`task-master-ai@${latestVersion}`,
|
||||
'--no-fund',
|
||||
'--no-audit',
|
||||
'--loglevel=warn'
|
||||
],
|
||||
{
|
||||
stdio: ['ignore', 'pipe', 'pipe']
|
||||
}
|
||||
);
|
||||
|
||||
let errorOutput = '';
|
||||
|
||||
updateProcess.stdout.on('data', () => {
|
||||
// Update spinner text with progress
|
||||
spinner.text = chalk.blue(
|
||||
`Installing task-master-ai@${latestVersion}...`
|
||||
);
|
||||
});
|
||||
|
||||
updateProcess.stderr.on('data', (data) => {
|
||||
errorOutput += data.toString();
|
||||
});
|
||||
|
||||
updateProcess.on('close', (code) => {
|
||||
if (code === 0) {
|
||||
spinner.succeed(
|
||||
chalk.green(
|
||||
`Successfully updated to version ${chalk.bold(latestVersion)}`
|
||||
)
|
||||
);
|
||||
console.log(
|
||||
chalk.dim('Please restart your command to use the new version.')
|
||||
);
|
||||
resolve(true);
|
||||
} else {
|
||||
spinner.fail(chalk.red('Auto-update failed'));
|
||||
console.log(
|
||||
chalk.cyan(
|
||||
`Please run manually: npm install -g task-master-ai@${latestVersion}`
|
||||
)
|
||||
);
|
||||
if (errorOutput) {
|
||||
console.log(chalk.dim(`Error: ${errorOutput.trim()}`));
|
||||
}
|
||||
resolve(false);
|
||||
}
|
||||
});
|
||||
|
||||
updateProcess.on('error', (error) => {
|
||||
spinner.fail(chalk.red('Auto-update failed'));
|
||||
console.log(chalk.red('Error:'), error.message);
|
||||
console.log(
|
||||
chalk.cyan(
|
||||
`Please run manually: npm install -g task-master-ai@${latestVersion}`
|
||||
)
|
||||
);
|
||||
resolve(false);
|
||||
});
|
||||
});
|
||||
}
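// Putting it together (illustrative sketch; the actual CLI wiring may differ):
//   const info = await checkForUpdate();
//   if (info.needsUpdate) {
//     displayUpgradeNotification(info.currentVersion, info.latestVersion, info.highlights);
//     await performAutoUpdate(info.latestVersion);
//   }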
|
||||
393
apps/cli/src/utils/ui.ts
Normal file
@@ -0,0 +1,393 @@
|
||||
/**
|
||||
* @fileoverview UI utilities for Task Master CLI
|
||||
* Provides formatting, display, and visual components for the command line interface
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import Table from 'cli-table3';
|
||||
import type { Task, TaskStatus, TaskPriority } from '@tm/core/types';
|
||||
|
||||
/**
|
||||
* Get colored status display with ASCII icons (matches scripts/modules/ui.js style)
|
||||
*/
|
||||
export function getStatusWithColor(
|
||||
status: TaskStatus,
|
||||
forTable: boolean = false
|
||||
): string {
|
||||
const statusConfig = {
|
||||
done: {
|
||||
color: chalk.green,
|
||||
icon: '✓',
|
||||
tableIcon: '✓'
|
||||
},
|
||||
pending: {
|
||||
color: chalk.yellow,
|
||||
icon: '○',
|
||||
tableIcon: '○'
|
||||
},
|
||||
'in-progress': {
|
||||
color: chalk.hex('#FFA500'),
|
||||
icon: '▶',
|
||||
tableIcon: '▶'
|
||||
},
|
||||
deferred: {
|
||||
color: chalk.gray,
|
||||
icon: 'x',
|
||||
tableIcon: 'x'
|
||||
},
|
||||
review: {
|
||||
color: chalk.magenta,
|
||||
icon: '?',
|
||||
tableIcon: '?'
|
||||
},
|
||||
cancelled: {
|
||||
color: chalk.gray,
|
||||
icon: 'x',
|
||||
tableIcon: 'x'
|
||||
},
|
||||
blocked: {
|
||||
color: chalk.red,
|
||||
icon: '!',
|
||||
tableIcon: '!'
|
||||
},
|
||||
completed: {
|
||||
color: chalk.green,
|
||||
icon: '✓',
|
||||
tableIcon: '✓'
|
||||
}
|
||||
};
|
||||
|
||||
const config = statusConfig[status] || {
|
||||
color: chalk.red,
|
||||
icon: 'X',
|
||||
tableIcon: 'X'
|
||||
};
|
||||
|
||||
const icon = forTable ? config.tableIcon : config.icon;
|
||||
return config.color(`${icon} ${status}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get colored priority display
|
||||
*/
|
||||
export function getPriorityWithColor(priority: TaskPriority): string {
|
||||
const priorityColors: Record<TaskPriority, (text: string) => string> = {
|
||||
critical: chalk.red.bold,
|
||||
high: chalk.red,
|
||||
medium: chalk.yellow,
|
||||
low: chalk.gray
|
||||
};
|
||||
|
||||
const colorFn = priorityColors[priority] || chalk.white;
|
||||
return colorFn(priority);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get complexity color and label based on score thresholds
|
||||
*/
|
||||
function getComplexityLevel(score: number): {
|
||||
color: (text: string) => string;
|
||||
label: string;
|
||||
} {
|
||||
if (score >= 7) {
|
||||
return { color: chalk.hex('#CC0000'), label: 'High' };
|
||||
} else if (score >= 4) {
|
||||
return { color: chalk.hex('#FF8800'), label: 'Medium' };
|
||||
} else {
|
||||
return { color: chalk.green, label: 'Low' };
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get colored complexity display with dot indicator (simple format)
|
||||
*/
|
||||
export function getComplexityWithColor(complexity: number | string): string {
|
||||
const score =
|
||||
typeof complexity === 'string' ? parseInt(complexity, 10) : complexity;
|
||||
|
||||
if (isNaN(score)) {
|
||||
return chalk.gray('N/A');
|
||||
}
|
||||
|
||||
const { color } = getComplexityLevel(score);
|
||||
return color(`● ${score}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get colored complexity display with /10 format (for dashboards)
|
||||
*/
|
||||
export function getComplexityWithScore(complexity: number | undefined): string {
|
||||
if (typeof complexity !== 'number') {
|
||||
return chalk.gray('N/A');
|
||||
}
|
||||
|
||||
const { color, label } = getComplexityLevel(complexity);
|
||||
return color(`${complexity}/10 (${label})`);
|
||||
}
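// Illustrative outputs, per the thresholds in getComplexityLevel above:
//   getComplexityWithColor(8);  // '● 8' in red (High)
//   getComplexityWithScore(5);  // '5/10 (Medium)' in orange
//   getComplexityWithScore(2);  // '2/10 (Low)' in green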
|
||||
|
||||
/**
|
||||
* Truncate text to specified length
|
||||
*/
|
||||
export function truncate(text: string, maxLength: number): string {
|
||||
if (text.length <= maxLength) {
|
||||
return text;
|
||||
}
|
||||
return text.substring(0, maxLength - 3) + '...';
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a progress bar
|
||||
*/
|
||||
export function createProgressBar(
|
||||
completed: number,
|
||||
total: number,
|
||||
width: number = 30
|
||||
): string {
|
||||
if (total === 0) {
|
||||
return chalk.gray('No tasks');
|
||||
}
|
||||
|
||||
const percentage = Math.round((completed / total) * 100);
|
||||
const filled = Math.round((completed / total) * width);
|
||||
const empty = width - filled;
|
||||
|
||||
const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
|
||||
|
||||
return `${bar} ${chalk.cyan(`${percentage}%`)} (${completed}/${total})`;
|
||||
}
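// Example (illustrative): createProgressBar(3, 10, 10) renders roughly
//   '███░░░░░░░ 30% (3/10)'  (with chalk colouring applied)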
|
||||
|
||||
/**
|
||||
* Display a fancy banner
|
||||
*/
|
||||
export function displayBanner(title: string = 'Task Master'): void {
|
||||
console.log(
|
||||
boxen(chalk.white.bold(title), {
|
||||
padding: 1,
|
||||
margin: { top: 1, bottom: 1 },
|
||||
borderStyle: 'round',
|
||||
borderColor: 'blue',
|
||||
textAlignment: 'center'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display an error message (matches scripts/modules/ui.js style)
|
||||
*/
|
||||
export function displayError(message: string, details?: string): void {
|
||||
console.error(
|
||||
boxen(
|
||||
chalk.red.bold('X Error: ') +
|
||||
chalk.white(message) +
|
||||
(details ? '\n\n' + chalk.gray(details) : ''),
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'red'
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display a success message
|
||||
*/
|
||||
export function displaySuccess(message: string): void {
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'green'
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display a warning message
|
||||
*/
|
||||
export function displayWarning(message: string): void {
|
||||
console.log(
|
||||
boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'yellow'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display info message
|
||||
*/
|
||||
export function displayInfo(message: string): void {
|
||||
console.log(
|
||||
boxen(chalk.blue.bold('i ') + chalk.white(message), {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'blue'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Format dependencies with their status
|
||||
*/
|
||||
export function formatDependenciesWithStatus(
|
||||
dependencies: string[] | number[],
|
||||
tasks: Task[]
|
||||
): string {
|
||||
if (!dependencies || dependencies.length === 0) {
|
||||
return chalk.gray('none');
|
||||
}
|
||||
|
||||
const taskMap = new Map(tasks.map((t) => [t.id.toString(), t]));
|
||||
|
||||
return dependencies
|
||||
.map((depId) => {
|
||||
const task = taskMap.get(depId.toString());
|
||||
if (!task) {
|
||||
return chalk.red(`${depId} (not found)`);
|
||||
}
|
||||
|
||||
const statusIcon =
|
||||
task.status === 'done'
|
||||
? '✓'
|
||||
: task.status === 'in-progress'
|
||||
? '►'
|
||||
: '○';
|
||||
|
||||
return `${depId}${statusIcon}`;
|
||||
})
|
||||
.join(', ');
|
||||
}
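// Example (illustrative): for dependencies [1, 2] where task 1 is done and
// task 2 is still pending, this returns '1✓, 2○'.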
|
||||
|
||||
/**
|
||||
* Create a task table for display
|
||||
*/
|
||||
export function createTaskTable(
|
||||
tasks: Task[],
|
||||
options?: {
|
||||
showSubtasks?: boolean;
|
||||
showComplexity?: boolean;
|
||||
showDependencies?: boolean;
|
||||
}
|
||||
): string {
|
||||
const {
|
||||
showSubtasks = false,
|
||||
showComplexity = false,
|
||||
showDependencies = true
|
||||
} = options || {};
|
||||
|
||||
// Calculate dynamic column widths based on terminal width
|
||||
const terminalWidth = process.stdout.columns * 0.9 || 100;
|
||||
// Adjust column widths to better match the original layout
|
||||
const baseColWidths = showComplexity
|
||||
? [
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.4),
|
||||
Math.floor(terminalWidth * 0.15),
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.2),
|
||||
Math.floor(terminalWidth * 0.1)
|
||||
] // ID, Title, Status, Priority, Dependencies, Complexity
|
||||
: [
|
||||
Math.floor(terminalWidth * 0.08),
|
||||
Math.floor(terminalWidth * 0.4),
|
||||
Math.floor(terminalWidth * 0.18),
|
||||
Math.floor(terminalWidth * 0.12),
|
||||
Math.floor(terminalWidth * 0.2)
|
||||
]; // ID, Title, Status, Priority, Dependencies
|
||||
|
||||
const headers = [
|
||||
chalk.blue.bold('ID'),
|
||||
chalk.blue.bold('Title'),
|
||||
chalk.blue.bold('Status'),
|
||||
chalk.blue.bold('Priority')
|
||||
];
|
||||
const colWidths = baseColWidths.slice(0, 4);
|
||||
|
||||
if (showDependencies) {
|
||||
headers.push(chalk.blue.bold('Dependencies'));
|
||||
colWidths.push(baseColWidths[4]);
|
||||
}
|
||||
|
||||
if (showComplexity) {
|
||||
headers.push(chalk.blue.bold('Complexity'));
|
||||
colWidths.push(baseColWidths[5] || 12);
|
||||
}
|
||||
|
||||
const table = new Table({
|
||||
head: headers,
|
||||
style: { head: [], border: [] },
|
||||
colWidths,
|
||||
wordWrap: true
|
||||
});
|
||||
|
||||
tasks.forEach((task) => {
|
||||
const row: string[] = [
|
||||
chalk.cyan(task.id.toString()),
|
||||
truncate(task.title, colWidths[1] - 3),
|
||||
getStatusWithColor(task.status, true), // Use table version
|
||||
getPriorityWithColor(task.priority)
|
||||
];
|
||||
|
||||
if (showDependencies) {
|
||||
// For table display, show simple format without status icons
|
||||
if (!task.dependencies || task.dependencies.length === 0) {
|
||||
row.push(chalk.gray('None'));
|
||||
} else {
|
||||
row.push(
|
||||
chalk.cyan(task.dependencies.map((d) => String(d)).join(', '))
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
if (showComplexity) {
|
||||
// Show complexity score from report if available
|
||||
if (typeof task.complexity === 'number') {
|
||||
row.push(getComplexityWithColor(task.complexity));
|
||||
} else {
|
||||
row.push(chalk.gray('N/A'));
|
||||
}
|
||||
}
|
||||
|
||||
table.push(row);
|
||||
|
||||
// Add subtasks if requested
|
||||
if (showSubtasks && task.subtasks && task.subtasks.length > 0) {
|
||||
task.subtasks.forEach((subtask) => {
|
||||
const subRow: string[] = [
|
||||
chalk.gray(` └─ ${subtask.id}`),
|
||||
chalk.gray(truncate(subtask.title, colWidths[1] - 6)),
|
||||
chalk.gray(getStatusWithColor(subtask.status, true)),
|
||||
chalk.gray(subtask.priority || 'medium')
|
||||
];
|
||||
|
||||
if (showDependencies) {
|
||||
subRow.push(
|
||||
chalk.gray(
|
||||
subtask.dependencies && subtask.dependencies.length > 0
|
||||
? subtask.dependencies.map((dep) => String(dep)).join(', ')
|
||||
: 'None'
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
if (showComplexity) {
|
||||
const complexityDisplay =
|
||||
typeof subtask.complexity === 'number'
|
||||
? getComplexityWithColor(subtask.complexity)
|
||||
: '--';
|
||||
subRow.push(chalk.gray(complexityDisplay));
|
||||
}
|
||||
|
||||
table.push(subRow);
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
return table.toString();
|
||||
}
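// Usage sketch (illustrative):
//   console.log(
//     createTaskTable(tasks, { showSubtasks: true, showComplexity: true })
//   );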
|
||||
36
apps/cli/tsconfig.json
Normal file
@@ -0,0 +1,36 @@
|
||||
{
|
||||
"compilerOptions": {
|
||||
"target": "ES2022",
|
||||
"module": "NodeNext",
|
||||
"lib": ["ES2022"],
|
||||
"declaration": true,
|
||||
"declarationMap": true,
|
||||
"sourceMap": true,
|
||||
"outDir": "./dist",
|
||||
"baseUrl": ".",
|
||||
"rootDir": "./src",
|
||||
"strict": true,
|
||||
"noImplicitAny": true,
|
||||
"strictNullChecks": true,
|
||||
"strictFunctionTypes": true,
|
||||
"strictBindCallApply": true,
|
||||
"strictPropertyInitialization": true,
|
||||
"noImplicitThis": true,
|
||||
"alwaysStrict": true,
|
||||
"noUnusedLocals": true,
|
||||
"noUnusedParameters": true,
|
||||
"noImplicitReturns": true,
|
||||
"noFallthroughCasesInSwitch": true,
|
||||
"esModuleInterop": true,
|
||||
"skipLibCheck": true,
|
||||
"forceConsistentCasingInFileNames": true,
|
||||
"moduleResolution": "NodeNext",
|
||||
"moduleDetection": "force",
|
||||
"types": ["node"],
|
||||
"resolveJsonModule": true,
|
||||
"isolatedModules": true,
|
||||
"allowImportingTsExtensions": false
|
||||
},
|
||||
"include": ["src/**/*"],
|
||||
"exclude": ["node_modules", "dist", "tests", "**/*.test.ts", "**/*.spec.ts"]
|
||||
}
|
||||
11
apps/docs/CHANGELOG.md
Normal file
@@ -0,0 +1,11 @@
|
||||
# docs
|
||||
|
||||
## 0.0.5
|
||||
|
||||
## 0.0.4
|
||||
|
||||
## 0.0.3
|
||||
|
||||
## 0.0.2
|
||||
|
||||
## 0.0.1
|
||||
24
apps/docs/README.md
Normal file
@@ -0,0 +1,24 @@
|
||||
# Task Master Documentation
|
||||
|
||||
Welcome to the Task Master documentation. This documentation site provides comprehensive guides for getting started with Task Master.
|
||||
|
||||
## Getting Started
|
||||
|
||||
- [Quick Start Guide](/getting-started/quick-start) - Complete setup and first-time usage guide
|
||||
- [Requirements](/getting-started/quick-start/requirements) - What you need to get started
|
||||
- [Installation](/getting-started/quick-start/installation) - How to install Task Master
|
||||
|
||||
## Core Capabilities
|
||||
|
||||
- [MCP Tools](/capabilities/mcp) - Model Context Protocol integration
|
||||
- [CLI Commands](/capabilities/cli-root-commands) - Command line interface reference
|
||||
- [Task Structure](/capabilities/task-structure) - Understanding tasks and subtasks
|
||||
|
||||
## Best Practices
|
||||
|
||||
- [Advanced Configuration](/best-practices/configuration-advanced) - Detailed configuration options
|
||||
- [Advanced Tasks](/best-practices/advanced-tasks) - Working with complex task structures
|
||||
|
||||
## Need More Help?
|
||||
|
||||
If you can't find what you're looking for in these docs, please check the root README.md or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).
|
||||
114
apps/docs/archive/Installation.mdx
Normal file
@@ -0,0 +1,114 @@
|
||||
---
|
||||
title: "Installation(2)"
|
||||
description: "This guide walks you through setting up Task Master in your development environment."
|
||||
---
|
||||
|
||||
## Initial Setup
|
||||
|
||||
<Tip>
|
||||
MCP (Model Context Protocol) provides the easiest way to get started with Task Master directly in your editor.
|
||||
</Tip>
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Option 1: Using MCP (Recommended)" icon="sparkles">
|
||||
<Steps>
|
||||
<Step title="Add the MCP config to your editor">
|
||||
<Link href="https://cursor.sh">Cursor</Link> is recommended, but it works with other editors as well
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"taskmaster-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
"MODEL": "claude-3-7-sonnet-20250219",
|
||||
"PERPLEXITY_MODEL": "sonar-pro",
|
||||
"MAX_TOKENS": 128000,
|
||||
"TEMPERATURE": 0.2,
|
||||
"DEFAULT_SUBTASKS": 5,
|
||||
"DEFAULT_PRIORITY": "medium"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
</Step>
|
||||
<Step title="Enable the MCP in your editor settings">
|
||||
|
||||
</Step>
|
||||
<Step title="Prompt the AI to initialize Task Master">
|
||||
> "Can you please initialize taskmaster-ai into my project?"
|
||||
|
||||
**The AI will:**
|
||||
|
||||
1. Create necessary project structure
|
||||
2. Set up initial configuration files
|
||||
3. Guide you through the rest of the process
|
||||
4. You then place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
|
||||
5. **Use natural language commands** to interact with Task Master:
|
||||
|
||||
> "Can you parse my PRD at scripts/prd.txt?"
|
||||
>
|
||||
> "What's the next task I should work on?"
|
||||
>
|
||||
> "Can you help me implement task 3?"
|
||||
</Step>
|
||||
</Steps>
|
||||
</Accordion>
|
||||
<Accordion title="Option 2: Manual Installation">
|
||||
If you prefer to use the command line interface directly:
|
||||
|
||||
<Steps>
|
||||
<Step title="Install">
|
||||
<CodeGroup>
|
||||
|
||||
```bash Global
|
||||
npm install -g task-master-ai
|
||||
```
|
||||
|
||||
|
||||
```bash Local
|
||||
npm install task-master-ai
|
||||
```
|
||||
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
<Step title="Initialize a new project">
|
||||
<CodeGroup>
|
||||
|
||||
```bash Global
|
||||
task-master init
|
||||
```
|
||||
|
||||
|
||||
```bash Local
|
||||
npx task-master-init
|
||||
```
|
||||
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
</Steps>
|
||||
This will prompt you for project details and set up a new project with the necessary files and structure.
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Common Commands
|
||||
|
||||
<Tip>
|
||||
After setting up Task Master, you can use these commands (either via AI prompts or CLI)
|
||||
</Tip>
|
||||
|
||||
```bash
|
||||
# Parse a PRD and generate tasks
|
||||
task-master parse-prd your-prd.txt
|
||||
|
||||
# List all tasks
|
||||
task-master list
|
||||
|
||||
# Show the next task to work on
|
||||
task-master next
|
||||
|
||||
# Generate task files
|
||||
task-master generate
```
|
||||
263
apps/docs/archive/ai-client-utils-example.mdx
Normal file
@@ -0,0 +1,263 @@
|
||||
---
|
||||
title: "AI Client Utilities for MCP Tools"
|
||||
description: "This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools."
|
||||
---
|
||||
## Examples
|
||||
<AccordionGroup>
|
||||
<Accordion title="Basic Usage with Direct Functions">
|
||||
```javascript
|
||||
// In your direct function implementation:
|
||||
import {
|
||||
getAnthropicClientForMCP,
|
||||
getModelConfig,
|
||||
handleClaudeError
|
||||
} from '../utils/ai-client-utils.js';
|
||||
|
||||
export async function someAiOperationDirect(args, log, context) {
|
||||
try {
|
||||
// Initialize Anthropic client with session from context
|
||||
const client = getAnthropicClientForMCP(context.session, log);
|
||||
|
||||
// Get model configuration with defaults or session overrides
|
||||
const modelConfig = getModelConfig(context.session);
|
||||
|
||||
// Make API call with proper error handling
|
||||
try {
|
||||
const response = await client.messages.create({
|
||||
model: modelConfig.model,
|
||||
max_tokens: modelConfig.maxTokens,
|
||||
temperature: modelConfig.temperature,
|
||||
messages: [{ role: 'user', content: 'Your prompt here' }]
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response
|
||||
};
|
||||
} catch (apiError) {
|
||||
// Use helper to get user-friendly error message
|
||||
const friendlyMessage = handleClaudeError(apiError);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'AI_API_ERROR',
|
||||
message: friendlyMessage
|
||||
}
|
||||
};
|
||||
}
|
||||
} catch (error) {
|
||||
// Handle client initialization errors
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'AI_CLIENT_ERROR',
|
||||
message: error.message
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Integration with AsyncOperationManager">
|
||||
```javascript
|
||||
// In your MCP tool implementation:
|
||||
import {
|
||||
AsyncOperationManager,
|
||||
StatusCodes
|
||||
} from '../../utils/async-operation-manager.js';
|
||||
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';
|
||||
|
||||
export async function someAiOperation(args, context) {
|
||||
const { session, mcpLog } = context;
|
||||
const log = mcpLog || console;
|
||||
|
||||
try {
|
||||
// Create operation description
|
||||
const operationDescription = `AI operation: ${args.someParam}`;
|
||||
|
||||
// Start async operation
|
||||
const operation = AsyncOperationManager.createOperation(
|
||||
operationDescription,
|
||||
async (reportProgress) => {
|
||||
try {
|
||||
// Initial progress report
|
||||
reportProgress({
|
||||
progress: 0,
|
||||
status: 'Starting AI operation...'
|
||||
});
|
||||
|
||||
// Call direct function with session and progress reporting
|
||||
const result = await someAiOperationDirect(args, log, {
|
||||
reportProgress,
|
||||
mcpLog: log,
|
||||
session
|
||||
});
|
||||
|
||||
// Final progress update
|
||||
reportProgress({
|
||||
progress: 100,
|
||||
status: result.success ? 'Operation completed' : 'Operation failed',
|
||||
result: result.data,
|
||||
error: result.error
|
||||
});
|
||||
|
||||
return result;
|
||||
} catch (error) {
|
||||
// Handle errors in the operation
|
||||
reportProgress({
|
||||
progress: 100,
|
||||
status: 'Operation failed',
|
||||
error: {
|
||||
message: error.message,
|
||||
code: error.code || 'OPERATION_FAILED'
|
||||
}
|
||||
});
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
// Return immediate response with operation ID
|
||||
return {
|
||||
status: StatusCodes.ACCEPTED,
|
||||
body: {
|
||||
success: true,
|
||||
message: 'Operation started',
|
||||
operationId: operation.id
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
// Handle errors in the MCP tool
|
||||
log.error(`Error in someAiOperation: ${error.message}`);
|
||||
return {
|
||||
status: StatusCodes.INTERNAL_SERVER_ERROR,
|
||||
body: {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'OPERATION_FAILED',
|
||||
message: error.message
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Using Research Capabilities with Perplexity">
|
||||
```javascript
|
||||
// In your direct function:
|
||||
import {
|
||||
getPerplexityClientForMCP,
|
||||
getBestAvailableAIModel
|
||||
} from '../utils/ai-client-utils.js';
|
||||
|
||||
export async function researchOperationDirect(args, log, context) {
|
||||
try {
|
||||
// Get the best AI model for this operation based on needs
|
||||
const { type, client } = await getBestAvailableAIModel(
|
||||
context.session,
|
||||
{ requiresResearch: true },
|
||||
log
|
||||
);
|
||||
|
||||
// Report which model we're using
|
||||
if (context.reportProgress) {
|
||||
await context.reportProgress({
|
||||
progress: 10,
|
||||
status: `Using ${type} model for research...`
|
||||
});
|
||||
}
|
||||
|
||||
// Make API call based on the model type
|
||||
if (type === 'perplexity') {
|
||||
// Call Perplexity
|
||||
const response = await client.chat.completions.create({
|
||||
model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
|
||||
messages: [{ role: 'user', content: args.researchQuery }],
|
||||
temperature: 0.1
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.choices[0].message.content
|
||||
};
|
||||
} else {
|
||||
// Call Claude as fallback
|
||||
// (Implementation depends on specific needs)
|
||||
// ...
|
||||
}
|
||||
} catch (error) {
|
||||
// Handle errors
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'RESEARCH_ERROR',
|
||||
message: error.message
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Model Configuration Override">
|
||||
```javascript
|
||||
// In your direct function:
|
||||
import { getModelConfig } from '../utils/ai-client-utils.js';
|
||||
|
||||
// Using custom defaults for a specific operation
|
||||
const operationDefaults = {
|
||||
model: 'claude-3-haiku-20240307', // Faster, smaller model
|
||||
maxTokens: 1000, // Lower token limit
|
||||
temperature: 0.2 // Lower temperature for more deterministic output
|
||||
};
|
||||
|
||||
// Get model config with operation-specific defaults
|
||||
const modelConfig = getModelConfig(context.session, operationDefaults);
|
||||
|
||||
// Now use modelConfig in your API calls
|
||||
const response = await client.messages.create({
|
||||
model: modelConfig.model,
|
||||
max_tokens: modelConfig.maxTokens,
|
||||
temperature: modelConfig.temperature
|
||||
// Other parameters...
|
||||
});
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Best Practices
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Error Handling">
|
||||
- Always use try/catch blocks around both client initialization and API calls
|
||||
- Use `handleClaudeError` to provide user-friendly error messages
|
||||
- Return standardized error objects with code and message
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Progress Reporting">
|
||||
- Report progress at key points (starting, processing, completing)
|
||||
- Include meaningful status messages
|
||||
- Include error details in progress reports when failures occur
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Session Handling">
|
||||
- Always pass the session from the context to the AI client getters
|
||||
- Use `getModelConfig` to respect user settings from session
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Model Selection">
|
||||
- Use `getBestAvailableAIModel` when you need to select between different models
|
||||
- Set `requiresResearch: true` when you need Perplexity capabilities
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="AsyncOperationManager Integration">
|
||||
- Create descriptive operation names
|
||||
- Handle all errors within the operation function
|
||||
- Return standardized results from direct functions
|
||||
- Return immediate responses with operation IDs
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
180
apps/docs/archive/ai-development-workflow.mdx
Normal file
@@ -0,0 +1,180 @@
|
||||
---
|
||||
title: "AI Development Workflow"
|
||||
description: "Learn how Task Master and Cursor AI work together to streamline your development workflow"
|
||||
---
|
||||
|
||||
<Tip>The Cursor agent is pre-configured (via the rules file) to follow this workflow</Tip>
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="1. Task Discovery and Selection">
|
||||
Ask the agent to list available tasks:
|
||||
|
||||
```
|
||||
What tasks are available to work on next?
|
||||
```
|
||||
|
||||
The agent will:
|
||||
|
||||
- Run `task-master list` to see all tasks
|
||||
- Run `task-master next` to determine the next task to work on
|
||||
- Analyze dependencies to determine which tasks are ready to be worked on
|
||||
- Prioritize tasks based on priority level and ID order
|
||||
- Suggest the next task(s) to implement
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="2. Task Implementation">
|
||||
When implementing a task, the agent will:
|
||||
|
||||
- Reference the task's details section for implementation specifics
|
||||
- Consider dependencies on previous tasks
|
||||
- Follow the project's coding standards
|
||||
- Create appropriate tests based on the task's testStrategy
|
||||
|
||||
You can ask:
|
||||
|
||||
```
|
||||
Let's implement task 3. What does it involve?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="3. Task Verification">
|
||||
Before marking a task as complete, verify it according to:
|
||||
|
||||
- The task's specified testStrategy
|
||||
- Any automated tests in the codebase
|
||||
- Manual verification if required
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="4. Task Completion">
|
||||
When a task is completed, tell the agent:
|
||||
|
||||
```
|
||||
Task 3 is now complete. Please update its status.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master set-status --id=3 --status=done
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="5. Handling Implementation Drift">
|
||||
If during implementation, you discover that:
|
||||
|
||||
- The current approach differs significantly from what was planned
|
||||
- Future tasks need to be modified due to current implementation choices
|
||||
- New dependencies or requirements have emerged
|
||||
|
||||
Tell the agent:
|
||||
|
||||
```
|
||||
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
|
||||
```
|
||||
|
||||
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="6. Breaking Down Complex Tasks">
|
||||
For complex tasks that need more granularity:
|
||||
|
||||
```
|
||||
Task 5 seems complex. Can you break it down into subtasks?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --num=3
|
||||
```
|
||||
|
||||
You can provide additional context:
|
||||
|
||||
```
|
||||
Please break down task 5 with a focus on security considerations.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --prompt="Focus on security aspects"
|
||||
```
|
||||
|
||||
You can also expand all pending tasks:
|
||||
|
||||
```
|
||||
Please break down all pending tasks into subtasks.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --all
|
||||
```
|
||||
|
||||
For research-backed subtask generation using Perplexity AI:
|
||||
|
||||
```
|
||||
Please break down task 5 using research-backed generation.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --research
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Example Cursor AI Interactions
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Starting a new project">
|
||||
```
|
||||
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
|
||||
Can you help me parse it and set up the initial tasks?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Working on tasks">
|
||||
```
|
||||
What's the next task I should work on? Please consider dependencies and priorities.
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Implementing a specific task">
|
||||
```
|
||||
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Managing subtasks">
|
||||
```
|
||||
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Handling changes">
|
||||
```
|
||||
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Completing work">
|
||||
```
|
||||
I've finished implementing the authentication system described in task 2. All tests are passing.
|
||||
Please mark it as complete and tell me what I should work on next.
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Analyzing complexity">
|
||||
```
|
||||
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Viewing complexity report">
|
||||
```
|
||||
Can you show me the complexity report in a more readable format?
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
208
apps/docs/archive/command-reference.mdx
Normal file
@@ -0,0 +1,208 @@
|
||||
---
|
||||
title: "Task Master Commands"
|
||||
description: "A comprehensive reference of all available Task Master commands"
|
||||
---
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Parse PRD">
|
||||
```bash
|
||||
# Parse a PRD file and generate tasks
|
||||
task-master parse-prd <prd-file.txt>
|
||||
|
||||
# Limit the number of tasks generated
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=10
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="List Tasks">
|
||||
```bash
|
||||
# List all tasks
|
||||
task-master list
|
||||
|
||||
# List tasks with a specific status
|
||||
task-master list --status=<status>
|
||||
|
||||
# List tasks with subtasks
|
||||
task-master list --with-subtasks
|
||||
|
||||
# List tasks with a specific status and include subtasks
|
||||
task-master list --status=<status> --with-subtasks
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Next Task">
|
||||
```bash
|
||||
# Show the next task to work on based on dependencies and status
|
||||
task-master next
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Specific Task">
|
||||
```bash
|
||||
# Show details of a specific task
|
||||
task-master show <id>
|
||||
# or
|
||||
task-master show --id=<id>
|
||||
|
||||
# View a specific subtask (e.g., subtask 2 of task 1)
|
||||
task-master show 1.2
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update Tasks">
|
||||
```bash
|
||||
# Update tasks from a specific ID and provide context
|
||||
task-master update --from=<id> --prompt="<prompt>"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Specific Task">
|
||||
```bash
|
||||
# Update a single task by ID with new information
|
||||
task-master update-task --id=<id> --prompt="<prompt>"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-task --id=<id> --prompt="<prompt>" --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Subtask">
|
||||
```bash
|
||||
# Append additional information to a specific subtask
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
|
||||
|
||||
# Example: Add details about API rate limiting to subtask 2 of task 5
|
||||
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
|
||||
```
|
||||
|
||||
Unlike the `update-task` command which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Generate Task Files">
|
||||
```bash
|
||||
# Generate individual task files from tasks.json
|
||||
task-master generate
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Set Task Status">
|
||||
```bash
|
||||
# Set status of a single task
|
||||
task-master set-status --id=<id> --status=<status>
|
||||
|
||||
# Set status for multiple tasks
|
||||
task-master set-status --id=1,2,3 --status=<status>
|
||||
|
||||
# Set status for subtasks
|
||||
task-master set-status --id=1.1,1.2 --status=<status>
|
||||
```
|
||||
|
||||
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Expand Tasks">
|
||||
```bash
|
||||
# Expand a specific task with subtasks
|
||||
task-master expand --id=<id> --num=<number>
|
||||
|
||||
# Expand with additional context
|
||||
task-master expand --id=<id> --prompt="<context>"
|
||||
|
||||
# Expand all pending tasks
|
||||
task-master expand --all
|
||||
|
||||
# Force regeneration of subtasks for tasks that already have them
|
||||
task-master expand --all --force
|
||||
|
||||
# Research-backed subtask generation for a specific task
|
||||
task-master expand --id=<id> --research
|
||||
|
||||
# Research-backed generation for all tasks
|
||||
task-master expand --all --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Clear Subtasks">
|
||||
```bash
|
||||
# Clear subtasks from a specific task
|
||||
task-master clear-subtasks --id=<id>
|
||||
|
||||
# Clear subtasks from multiple tasks
|
||||
task-master clear-subtasks --id=1,2,3
|
||||
|
||||
# Clear subtasks from all tasks
|
||||
task-master clear-subtasks --all
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Analyze Task Complexity">
|
||||
```bash
|
||||
# Analyze complexity of all tasks
|
||||
task-master analyze-complexity
|
||||
|
||||
# Save report to a custom location
|
||||
task-master analyze-complexity --output=my-report.json
|
||||
|
||||
# Use a specific LLM model
|
||||
task-master analyze-complexity --model=claude-3-opus-20240229
|
||||
|
||||
# Set a custom complexity threshold (1-10)
|
||||
task-master analyze-complexity --threshold=6
|
||||
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use Perplexity AI for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="View Complexity Report">
|
||||
```bash
|
||||
# Display the task complexity analysis report
|
||||
task-master complexity-report
|
||||
|
||||
# View a report at a custom location
|
||||
task-master complexity-report --file=my-report.json
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Managing Task Dependencies">
|
||||
```bash
|
||||
# Add a dependency to a task
|
||||
task-master add-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Remove a dependency from a task
|
||||
task-master remove-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Validate dependencies without fixing them
|
||||
task-master validate-dependencies
|
||||
|
||||
# Find and fix invalid dependencies automatically
|
||||
task-master fix-dependencies
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Add a New Task">
|
||||
```bash
|
||||
# Add a new task using AI
|
||||
task-master add-task --prompt="Description of the new task"
|
||||
|
||||
# Add a task with dependencies
|
||||
task-master add-task --prompt="Description" --dependencies=1,2,3
|
||||
|
||||
# Add a task with priority
|
||||
task-master add-task --prompt="Description" --priority=high
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Initialize a Project">
|
||||
```bash
|
||||
# Initialize a new project with Task Master structure
|
||||
task-master init
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
80
apps/docs/archive/configuration.mdx
Normal file
@@ -0,0 +1,80 @@
|
||||
---
|
||||
title: "Configuration"
|
||||
description: "Configure Task Master through environment variables in a .env file"
|
||||
---
|
||||
|
||||
## Required Configuration
|
||||
|
||||
<Note>
|
||||
Task Master requires an Anthropic API key to function. Add this to your `.env` file:
|
||||
|
||||
```bash
|
||||
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
|
||||
```
|
||||
|
||||
You can obtain an API key from the [Anthropic Console](https://console.anthropic.com/).
|
||||
</Note>
|
||||
|
||||
## Optional Configuration
|
||||
|
||||
| Variable | Default Value | Description | Example |
|
||||
| --- | --- | --- | --- |
|
||||
| `MODEL` | `"claude-3-7-sonnet-20250219"` | Claude model to use | `MODEL=claude-3-opus-20240229` |
|
||||
| `MAX_TOKENS` | `"4000"` | Maximum tokens for responses | `MAX_TOKENS=8000` |
|
||||
| `TEMPERATURE` | `"0.7"` | Temperature for model responses | `TEMPERATURE=0.5` |
|
||||
| `DEBUG` | `"false"` | Enable debug logging | `DEBUG=true` |
|
||||
| `LOG_LEVEL` | `"info"` | Console output level | `LOG_LEVEL=debug` |
|
||||
| `DEFAULT_SUBTASKS` | `"3"` | Default subtask count | `DEFAULT_SUBTASKS=5` |
|
||||
| `DEFAULT_PRIORITY` | `"medium"` | Default priority | `DEFAULT_PRIORITY=high` |
|
||||
| `PROJECT_NAME` | `"MCP SaaS MVP"` | Project name in metadata | `PROJECT_NAME=My Awesome Project` |
|
||||
| `PROJECT_VERSION` | `"1.0.0"` | Version in metadata | `PROJECT_VERSION=2.1.0` |
|
||||
| `PERPLEXITY_API_KEY` | - | For research-backed features | `PERPLEXITY_API_KEY=pplx-...` |
|
||||
| `PERPLEXITY_MODEL` | `"sonar-medium-online"` | Perplexity model | `PERPLEXITY_MODEL=sonar-large-online` |
|
||||
|
||||
## Example .env File
|
||||
|
||||
```
|
||||
# Required
|
||||
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
|
||||
|
||||
# Optional - Claude Configuration
|
||||
MODEL=claude-3-7-sonnet-20250219
|
||||
MAX_TOKENS=4000
|
||||
TEMPERATURE=0.7
|
||||
|
||||
# Optional - Perplexity API for Research
|
||||
PERPLEXITY_API_KEY=pplx-your-api-key
|
||||
PERPLEXITY_MODEL=sonar-medium-online
|
||||
|
||||
# Optional - Project Info
|
||||
PROJECT_NAME=My Project
|
||||
PROJECT_VERSION=1.0.0
|
||||
|
||||
# Optional - Application Configuration
|
||||
DEFAULT_SUBTASKS=3
|
||||
DEFAULT_PRIORITY=medium
|
||||
DEBUG=false
|
||||
LOG_LEVEL=info
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### If `task-master init` doesn't respond:
|
||||
|
||||
Try running it with Node directly:
|
||||
|
||||
```bash
|
||||
node node_modules/claude-task-master/scripts/init.js
|
||||
```
|
||||
|
||||
Or clone the repository and run:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/eyaltoledano/claude-task-master.git
|
||||
cd claude-task-master
|
||||
node scripts/init.js
|
||||
```
|
||||
|
||||
<Note>
|
||||
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide] page.
|
||||
</Note>
|
||||
113
apps/docs/archive/cursor-setup.mdx
Normal file
@@ -0,0 +1,113 @@
|
||||
---
|
||||
title: "Cursor AI Integration"
|
||||
description: "Learn how to set up and use Task Master with Cursor AI"
|
||||
---
|
||||
|
||||
## Setting up Cursor AI Integration
|
||||
|
||||
<Check>
|
||||
Task Master is designed to work seamlessly with [Cursor AI](https://www.cursor.so/), providing a structured workflow for AI-driven development. As of version 0.28.0, Task Master automatically sets up custom slash commands in Cursor IDE.
|
||||
</Check>
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Cursor Custom Slash Commands (New in 0.28.0)" icon="terminal">
|
||||
Task Master now automatically configures custom slash commands in Cursor IDE when adding a profile. These commands provide quick access to Task Master functionality:
|
||||
|
||||
### Available Slash Commands
|
||||
- `/tm-list` - View all tasks with their status
|
||||
- `/tm-next` - Get the next available task to work on
|
||||
- `/tm-show` - Show detailed task information
|
||||
- `/tm-add` - Add a new task with AI assistance
|
||||
- `/tm-status` - Set task status (pending, in-progress, done, etc.)
|
||||
- `/tm-expand` - Expand a task into subtasks
|
||||
- `/tm-complexity` - Analyze task complexity
|
||||
|
||||
### Automatic Setup
|
||||
When you run `task-master profiles add cursor`, the slash commands are automatically copied to `.cursor/commands/`. If you remove the profile with `task-master profiles remove cursor`, the commands are cleaned up automatically.
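
For a concrete picture of that flow, here is a minimal sketch (the exact file names inside `.cursor/commands/` depend on your Task Master version):

```bash
# Add the Cursor profile; Task Master copies its slash commands into .cursor/commands/
task-master profiles add cursor

# Inspect what was installed (file names are illustrative)
ls .cursor/commands/

# Removing the profile cleans the commands up again
task-master profiles remove cursor
```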
|
||||
|
||||
### Manual Setup
|
||||
If you need to manually set up the commands, they're available in the `assets/claude/commands/` directory of your Task Master installation.
|
||||
</Accordion>
|
||||
<Accordion title="Using Cursor with MCP (Recommended)" icon="sparkles">
|
||||
If you've already set up Task Master with MCP in Cursor, the integration is automatic. You can simply use natural language to interact with Task Master:
|
||||
|
||||
```
|
||||
What tasks are available to work on next?
|
||||
Can you analyze the complexity of our tasks?
|
||||
I'd like to implement task 4. What does it involve?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Manual Cursor Setup">
|
||||
If you're not using MCP, you can still set up Cursor integration:
|
||||
|
||||
<Steps>
|
||||
<Step title="After initializing your project, open it in Cursor">
|
||||
The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
|
||||
</Step>
|
||||
<Step title="Place your PRD document in the scripts/ directory (e.g., scripts/prd.txt)">
|
||||
|
||||
</Step>
|
||||
<Step title="Open Cursor's AI chat and switch to Agent mode">
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
</Accordion>
|
||||
<Accordion title="Alternative MCP Setup in Cursor">
|
||||
<Steps>
|
||||
<Step title="Go to Cursor settings">
|
||||
|
||||
</Step>
|
||||
<Step title="Navigate to the MCP section">
|
||||
|
||||
</Step>
|
||||
<Step title="Click on 'Add New MCP Server'">
|
||||
|
||||
</Step>
|
||||
<Step title="Configure with the following details:">
|
||||
- Name: "Task Master"
|
||||
- Type: "Command"
|
||||
- Command: "npx -y task-master-ai"
|
||||
</Step>
|
||||
<Step title="Save Settings">
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience.
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Initial Task Generation
|
||||
|
||||
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:
|
||||
|
||||
```
|
||||
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master parse-prd scripts/prd.txt
|
||||
```
|
||||
|
||||
This will:
|
||||
|
||||
- Parse your PRD document
|
||||
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies
|
||||
- The agent will understand this process due to the Cursor rules
|
||||
|
||||
### Generate Individual Task Files
|
||||
|
||||
Next, ask the agent to generate individual task files:
|
||||
|
||||
```
|
||||
Please generate individual task files from tasks.json
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master generate
|
||||
```
|
||||
|
||||
This creates individual task files in the `tasks/` directory (e.g., `task_001.txt`, `task_002.txt`), making it easier to reference specific tasks.
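
If you prefer to run this step yourself, a minimal sketch (the listed output is illustrative) looks like:

```bash
# Regenerate the per-task files from tasks.json
task-master generate

# The tasks/ directory now contains one file per task, e.g.:
ls tasks/
# task_001.txt  task_002.txt  task_003.txt  ...
```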
|
||||
56
apps/docs/archive/examples.mdx
Normal file
@@ -0,0 +1,56 @@
|
||||
---
|
||||
title: "Example Cursor AI Interactions"
|
||||
description: "Below are some common interactions with Cursor AI when using Task Master"
|
||||
---
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Starting a new project">
|
||||
```
|
||||
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
|
||||
Can you help me parse it and set up the initial tasks?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Working on tasks">
|
||||
```
|
||||
What's the next task I should work on? Please consider dependencies and priorities.
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Implementing a specific task">
|
||||
```
|
||||
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Managing subtasks">
|
||||
```
|
||||
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Handling changes">
|
||||
```
|
||||
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Completing work">
|
||||
```
|
||||
I've finished implementing the authentication system described in task 2. All tests are passing.
|
||||
Please mark it as complete and tell me what I should work on next.
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Analyzing complexity">
|
||||
```
|
||||
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Viewing complexity report">
|
||||
```
|
||||
Can you show me the complexity report in a more readable format?
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
210
apps/docs/best-practices/advanced-tasks.mdx
Normal file
@@ -0,0 +1,210 @@
|
||||
---
|
||||
title: Advanced Tasks
|
||||
sidebarTitle: "Advanced Tasks"
|
||||
---
|
||||
|
||||
## AI-Driven Development Workflow
|
||||
|
||||
The Cursor agent is pre-configured (via the rules file) to follow this workflow:
|
||||
|
||||
### 1. Task Discovery and Selection
|
||||
|
||||
Ask the agent to list available tasks:
|
||||
|
||||
```
|
||||
What tasks are available to work on next?
|
||||
```
|
||||
|
||||
```
|
||||
Can you show me tasks 1, 3, and 5 to understand their current status?
|
||||
```
|
||||
|
||||
The agent will:
|
||||
|
||||
- Run `task-master list` to see all tasks
|
||||
- Run `task-master next` to determine the next task to work on
|
||||
- Run `task-master show 1,3,5` to display multiple tasks with interactive options
|
||||
- Analyze dependencies to determine which tasks are ready to be worked on
|
||||
- Prioritize tasks based on priority level and ID order
|
||||
- Suggest the next task(s) to implement
|
||||
|
||||
### 2. Task Implementation
|
||||
|
||||
When implementing a task, the agent will:
|
||||
|
||||
- Reference the task's details section for implementation specifics
|
||||
- Consider dependencies on previous tasks
|
||||
- Follow the project's coding standards
|
||||
- Create appropriate tests based on the task's testStrategy
|
||||
|
||||
You can ask:
|
||||
|
||||
```
|
||||
Let's implement task 3. What does it involve?
|
||||
```
|
||||
|
||||
### 2.1. Viewing Multiple Tasks
|
||||
|
||||
For efficient context gathering and batch operations:
|
||||
|
||||
```
|
||||
Show me tasks 5, 7, and 9 so I can plan my implementation approach.
|
||||
```
|
||||
|
||||
The agent will:
|
||||
|
||||
- Run `task-master show 5,7,9` to display a compact summary table
|
||||
- Show task status, priority, and progress indicators
|
||||
- Provide an interactive action menu with batch operations
|
||||
- Allow you to perform group actions like marking multiple tasks as in-progress
|
||||
|
||||
### 3. Task Verification
|
||||
|
||||
Before marking a task as complete, verify it according to:
|
||||
|
||||
- The task's specified testStrategy
|
||||
- Any automated tests in the codebase
|
||||
- Manual verification if required
|
||||
|
||||
### 4. Task Completion
|
||||
|
||||
When a task is completed, tell the agent:
|
||||
|
||||
```
|
||||
Task 3 is now complete. Please update its status.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master set-status --id=3 --status=done
|
||||
```
|
||||
|
||||
### 5. Handling Implementation Drift
|
||||
|
||||
If during implementation, you discover that:
|
||||
|
||||
- The current approach differs significantly from what was planned
|
||||
- Future tasks need to be modified due to current implementation choices
|
||||
- New dependencies or requirements have emerged
|
||||
|
||||
Tell the agent:
|
||||
|
||||
```
|
||||
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks (from ID 4) to reflect this change?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master update --from=4 --prompt="Now we are using MongoDB instead of PostgreSQL."
|
||||
|
||||
# OR, if research is needed to find best practices for MongoDB:
|
||||
task-master update --from=4 --prompt="Update to use MongoDB, researching best practices" --research
|
||||
```
|
||||
|
||||
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
|
||||
|
||||
### 6. Reorganizing Tasks
|
||||
|
||||
If you need to reorganize your task structure:
|
||||
|
||||
```
|
||||
I think subtask 5.2 would fit better as part of task 7 instead. Can you move it there?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master move --from=5.2 --to=7.3
|
||||
```
|
||||
|
||||
You can reorganize tasks in various ways:
|
||||
|
||||
- Moving a standalone task to become a subtask: `--from=5 --to=7`
|
||||
- Moving a subtask to become a standalone task: `--from=5.2 --to=7`
|
||||
- Moving a subtask to a different parent: `--from=5.2 --to=7.3`
|
||||
- Reordering subtasks within the same parent: `--from=5.2 --to=5.4`
|
||||
- Moving a task to a new ID position: `--from=5 --to=25` (even if task 25 doesn't exist yet)
|
||||
- Moving multiple tasks at once: `--from=10,11,12 --to=16,17,18` (the source and destination lists must contain the same number of IDs; Taskmaster maps them position by position)
|
||||
|
||||
When moving tasks to new IDs:
|
||||
|
||||
- The system automatically creates placeholder tasks for non-existent destination IDs
|
||||
- This prevents accidental data loss during reorganization
|
||||
- Any tasks that depend on moved tasks will have their dependencies updated
|
||||
- When moving a parent task, all its subtasks are automatically moved with it and renumbered
|
||||
|
||||
This is particularly useful as your project understanding evolves and you need to refine your task structure.
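
As a rough sketch, the reorganization patterns listed above map to invocations like these (all IDs are hypothetical):

```bash
# Turn standalone task 5 into a subtask of task 7
task-master move --from=5 --to=7

# Promote subtask 5.2 to standalone task 7
task-master move --from=5.2 --to=7

# Re-parent subtask 5.2 under task 7 as subtask 7.3
task-master move --from=5.2 --to=7.3

# Move several tasks at once (same number of source and destination IDs)
task-master move --from=10,11,12 --to=16,17,18
```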
|
||||
|
||||
### 7. Resolving Merge Conflicts with Tasks
|
||||
|
||||
When working with a team, you might encounter merge conflicts in your tasks.json file if multiple team members create tasks on different branches. The move command makes resolving these conflicts straightforward:
|
||||
|
||||
```
|
||||
I just merged the main branch and there's a conflict with tasks.json. My teammates created tasks 10-15 while I created tasks 10-12 on my branch. Can you help me resolve this?
|
||||
```
|
||||
|
||||
The agent will help you:
|
||||
|
||||
1. Keep your teammates' tasks (10-15)
|
||||
2. Move your tasks to new positions to avoid conflicts:
|
||||
|
||||
```bash
|
||||
# Move your tasks to new positions (e.g., 16-18)
|
||||
task-master move --from=10 --to=16
|
||||
task-master move --from=11 --to=17
|
||||
task-master move --from=12 --to=18
|
||||
```
|
||||
|
||||
This approach preserves everyone's work while maintaining a clean task structure, making it much easier to handle task conflicts than trying to manually merge JSON files.
|
||||
|
||||
### 8. Breaking Down Complex Tasks
|
||||
|
||||
For complex tasks that need more granularity:
|
||||
|
||||
```
|
||||
Task 5 seems complex. Can you break it down into subtasks?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --num=3
|
||||
```
|
||||
|
||||
You can provide additional context:
|
||||
|
||||
```
|
||||
Please break down task 5 with a focus on security considerations.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --prompt="Focus on security aspects"
|
||||
```
|
||||
|
||||
You can also expand all pending tasks:
|
||||
|
||||
```
|
||||
Please break down all pending tasks into subtasks.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --all
|
||||
```
|
||||
|
||||
For research-backed subtask generation using the configured research model:
|
||||
|
||||
```
|
||||
Please break down task 5 using research-backed generation.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --research
|
||||
```
|
||||
319
apps/docs/best-practices/configuration-advanced.mdx
Normal file
@@ -0,0 +1,319 @@
|
||||
---
|
||||
title: Advanced Configuration
|
||||
sidebarTitle: "Advanced Configuration"
|
||||
---
|
||||
|
||||
|
||||
Taskmaster uses two primary methods for configuration:
|
||||
|
||||
1. **`.taskmaster/config.json` File (Recommended - New Structure)**
|
||||
|
||||
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
|
||||
- **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
|
||||
- **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
|
||||
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
|
||||
- **Example Structure:**
|
||||
```json
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 64000,
|
||||
"temperature": 0.2,
|
||||
"baseURL": "https://api.anthropic.com/v1"
|
||||
},
|
||||
"research": {
|
||||
"provider": "perplexity",
|
||||
"modelId": "sonar-pro",
|
||||
"maxTokens": 8700,
|
||||
"temperature": 0.1,
|
||||
"baseURL": "https://api.perplexity.ai/v1"
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-5-sonnet",
|
||||
"maxTokens": 64000,
|
||||
"temperature": 0.2
|
||||
}
|
||||
},
|
||||
"global": {
|
||||
"logLevel": "info",
|
||||
"debug": false,
|
||||
"defaultSubtasks": 5,
|
||||
"defaultPriority": "medium",
|
||||
"defaultTag": "master",
|
||||
"projectName": "Your Project Name",
|
||||
"ollamaBaseURL": "http://localhost:11434/api",
|
||||
"azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
|
||||
"vertexProjectId": "your-gcp-project-id",
|
||||
"vertexLocation": "us-central1"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
|
||||
|
||||
- For projects that haven't migrated to the new structure yet.
|
||||
- **Location:** Project root directory.
|
||||
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
|
||||
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
|
||||
|
||||
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
|
||||
|
||||
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
|
||||
- **Location:**
|
||||
- For CLI usage: Create a `.env` file in your project root.
|
||||
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
|
||||
- **Required API Keys (Depending on configured providers):**
|
||||
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
|
||||
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
|
||||
- `OPENAI_API_KEY`: Your OpenAI API key.
|
||||
- `GOOGLE_API_KEY`: Your Google API key (also used for Vertex AI provider).
|
||||
- `MISTRAL_API_KEY`: Your Mistral API key.
|
||||
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
|
||||
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
|
||||
- `XAI_API_KEY`: Your X-AI API key.
|
||||
- **Optional Endpoint Overrides:**
|
||||
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
|
||||
- **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
|
||||
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
|
||||
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
|
||||
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
|
||||
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
|
||||
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
|
||||
- **Optional Auto-Update Control:**
|
||||
- `TASKMASTER_SKIP_AUTO_UPDATE`: Set to '1' to disable automatic updates. Also automatically disabled in CI environments (when `CI` environment variable is set).
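
For example, a quick sketch of disabling the auto-update check for a single run or for a whole shell session:

```bash
# Disable the auto-update check for one invocation
TASKMASTER_SKIP_AUTO_UPDATE=1 task-master list

# Or export it for the rest of the session / a CI job
export TASKMASTER_SKIP_AUTO_UPDATE=1
```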
|
||||
|
||||
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.
|
||||
|
||||
## Tagged Task Lists Configuration (v0.17+)
|
||||
|
||||
Taskmaster includes a tagged task lists system for multi-context task management.
|
||||
|
||||
### Global Tag Settings
|
||||
|
||||
```json
|
||||
"global": {
|
||||
"defaultTag": "master"
|
||||
}
|
||||
```
|
||||
|
||||
- **`defaultTag`** (string): Default tag context for new operations (default: "master")
|
||||
|
||||
### Git Integration
|
||||
|
||||
Task Master provides manual git integration through the `--from-branch` option:
|
||||
|
||||
- **Manual Tag Creation**: Use `task-master add-tag --from-branch` to create a tag based on your current git branch name
|
||||
- **User Control**: No automatic tag switching - you control when and how tags are created
|
||||
- **Flexible Workflow**: Supports any git workflow without imposing rigid branch-tag mappings
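
A minimal sketch of the manual, branch-based flow described above (the branch name is illustrative):

```bash
# On a feature branch, derive a tag from the current git branch name
git checkout -b feature/user-auth
task-master add-tag --from-branch
```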
|
||||
|
||||
## State Management File
|
||||
|
||||
Taskmaster uses `.taskmaster/state.json` to track tagged system runtime information:
|
||||
|
||||
```json
|
||||
{
|
||||
"currentTag": "master",
|
||||
"lastSwitched": "2025-06-11T20:26:12.598Z",
|
||||
"migrationNoticeShown": true
|
||||
}
|
||||
```
|
||||
|
||||
- **`currentTag`**: Currently active tag context
|
||||
- **`lastSwitched`**: Timestamp of last tag switch
|
||||
- **`migrationNoticeShown`**: Whether migration notice has been displayed
|
||||
|
||||
This file is automatically created during tagged system migration and should not be manually edited.
|
||||
|
||||
## Example `.env` File (for API Keys)
|
||||
|
||||
```
|
||||
# Required API keys for providers configured in .taskmaster/config.json
|
||||
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
|
||||
PERPLEXITY_API_KEY=pplx-your-key-here
|
||||
# OPENAI_API_KEY=sk-your-key-here
|
||||
# GOOGLE_API_KEY=AIzaSy...
|
||||
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
|
||||
# etc.
|
||||
|
||||
# Optional Endpoint Overrides
|
||||
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
|
||||
# OPENAI_BASE_URL=https://api.third-party.com/v1
|
||||
#
|
||||
# Azure OpenAI Configuration
|
||||
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ or https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
|
||||
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
|
||||
|
||||
# Google Vertex AI Configuration (Required if using 'vertex' provider)
|
||||
# VERTEX_PROJECT_ID=your-gcp-project-id
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Configuration Errors
|
||||
|
||||
- If Task Master reports errors about missing configuration or cannot find the config file, run `task-master models --setup` in your project root to create or repair the file.
|
||||
- For new projects, config will be created at `.taskmaster/config.json`. For legacy projects, you may want to use `task-master migrate` to move to the new structure.
|
||||
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.
|
||||
|
||||
### If `task-master init` doesn't respond:
|
||||
|
||||
Try running it with Node directly:
|
||||
|
||||
```bash
|
||||
node node_modules/claude-task-master/scripts/init.js
|
||||
```
|
||||
|
||||
Or clone the repository and run:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/eyaltoledano/claude-task-master.git
|
||||
cd claude-task-master
|
||||
node scripts/init.js
|
||||
```
|
||||
|
||||
## Provider-Specific Configuration
|
||||
|
||||
### Google Vertex AI Configuration
|
||||
|
||||
Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:
|
||||
|
||||
1. **Prerequisites**:
|
||||
- A Google Cloud account with Vertex AI API enabled
|
||||
- Either a Google API key with Vertex AI permissions OR a service account with appropriate roles
|
||||
- A Google Cloud project ID
|
||||
2. **Authentication Options**:
|
||||
- **API Key**: Set the `GOOGLE_API_KEY` environment variable
|
||||
- **Service Account**: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account JSON file
|
||||
3. **Required Configuration**:
|
||||
- Set `VERTEX_PROJECT_ID` to your Google Cloud project ID
|
||||
- Set `VERTEX_LOCATION` to your preferred Google Cloud region (default: us-central1)
|
||||
4. **Example Setup**:
|
||||
|
||||
```bash
|
||||
# In .env file
|
||||
GOOGLE_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXX
|
||||
VERTEX_PROJECT_ID=my-gcp-project-123
|
||||
VERTEX_LOCATION=us-central1
|
||||
```
|
||||
|
||||
Or using service account:
|
||||
|
||||
```bash
|
||||
# In .env file
|
||||
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
|
||||
VERTEX_PROJECT_ID=my-gcp-project-123
|
||||
VERTEX_LOCATION=us-central1
|
||||
```
|
||||
|
||||
5. **In .taskmaster/config.json**:
|
||||
```json
|
||||
"global": {
|
||||
"vertexProjectId": "my-gcp-project-123",
|
||||
"vertexLocation": "us-central1"
|
||||
}
|
||||
```
|
||||
|
||||
### Azure OpenAI Configuration
|
||||
|
||||
Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure cloud platform and requires specific configuration:
|
||||
|
||||
1. **Prerequisites**:
|
||||
- An Azure account with an active subscription
|
||||
- Azure OpenAI service resource created in the Azure portal
|
||||
- Azure OpenAI API key and endpoint URL
|
||||
- Deployed models (e.g., gpt-4o, gpt-4o-mini, gpt-4.1, etc) in your Azure OpenAI resource
|
||||
|
||||
2. **Authentication**:
|
||||
- Set the `AZURE_OPENAI_API_KEY` environment variable with your Azure OpenAI API key
|
||||
- Configure the endpoint URL using one of the methods below
|
||||
|
||||
3. **Configuration Options**:
|
||||
|
||||
**Option 1: Using Global Azure Base URL (affects all Azure models)**
|
||||
```json
|
||||
// In .taskmaster/config.json
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o",
|
||||
"maxTokens": 16000,
|
||||
"temperature": 0.7
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o-mini",
|
||||
"maxTokens": 10000,
|
||||
"temperature": 0.7
|
||||
}
|
||||
},
|
||||
"global": {
|
||||
"azureBaseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Option 2: Using Per-Model Base URLs (recommended for flexibility)**
|
||||
```json
|
||||
// In .taskmaster/config.json
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o",
|
||||
"maxTokens": 16000,
|
||||
"temperature": 0.7,
|
||||
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
},
|
||||
"research": {
|
||||
"provider": "perplexity",
|
||||
"modelId": "sonar-pro",
|
||||
"maxTokens": 8700,
|
||||
"temperature": 0.1
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o-mini",
|
||||
"maxTokens": 10000,
|
||||
"temperature": 0.7,
|
||||
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
4. **Environment Variables**:
|
||||
```bash
|
||||
# In .env file
|
||||
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
|
||||
|
||||
# Optional: Override endpoint for all Azure models
|
||||
AZURE_OPENAI_ENDPOINT=https://your-resource-name.azure.com/openai/deployments
|
||||
```
|
||||
|
||||
5. **Important Notes**:
|
||||
- **Model Deployment Names**: The `modelId` in your configuration should match the **deployment name** you created in Azure OpenAI Studio, not the underlying model name
|
||||
- **Base URL Priority**: Per-model `baseURL` settings override the global `azureBaseURL` setting
|
||||
- **Endpoint Format**: When using per-model `baseURL`, use the full path including `/openai/deployments`
|
||||
|
||||
6. **Troubleshooting**:
|
||||
|
||||
**"Resource not found" errors:**
|
||||
- Ensure your `baseURL` includes the full path: `https://your-resource-name.openai.azure.com/openai/deployments`
|
||||
- Verify that your deployment name in `modelId` exactly matches what's configured in Azure OpenAI Studio
|
||||
- Check that your Azure OpenAI resource is in the correct region and properly deployed
|
||||
|
||||
**Authentication errors:**
|
||||
- Verify your `AZURE_OPENAI_API_KEY` is correct and has not expired
|
||||
- Ensure your Azure OpenAI resource has the necessary permissions
|
||||
- Check that your subscription has not been suspended or reached quota limits
|
||||
|
||||
**Model availability errors:**
|
||||
- Confirm the model is deployed in your Azure OpenAI resource
|
||||
- Verify the deployment name matches your configuration exactly (case-sensitive)
|
||||
- Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
|
||||
- Ensure you're not being rate limited because of `maxTokens`; maintain an appropriate Tokens per Minute (TPM) rate limit on your deployment.
|
||||
8
apps/docs/best-practices/index.mdx
Normal file
@@ -0,0 +1,8 @@
|
||||
---
|
||||
title: Intro to Advanced Usage
|
||||
sidebarTitle: "Advanced Usage"
|
||||
---
|
||||
|
||||
# Best Practices
|
||||
|
||||
Explore advanced tips, recommended workflows, and best practices for getting the most out of Task Master.
|
||||
209
apps/docs/capabilities/cli-root-commands.mdx
Normal file
@@ -0,0 +1,209 @@
|
||||
---
|
||||
title: CLI Commands
|
||||
sidebarTitle: "CLI Commands"
|
||||
---
|
||||
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Parse PRD">
|
||||
```bash
|
||||
# Parse a PRD file and generate tasks
|
||||
task-master parse-prd <prd-file.txt>
|
||||
|
||||
# Limit the number of tasks generated
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=10
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="List Tasks">
|
||||
```bash
|
||||
# List all tasks
|
||||
task-master list
|
||||
|
||||
# List tasks with a specific status
|
||||
task-master list --status=<status>
|
||||
|
||||
# List tasks with subtasks
|
||||
task-master list --with-subtasks
|
||||
|
||||
# List tasks with a specific status and include subtasks
|
||||
task-master list --status=<status> --with-subtasks
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Next Task">
|
||||
```bash
|
||||
# Show the next task to work on based on dependencies and status
|
||||
task-master next
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Specific Task">
|
||||
```bash
|
||||
# Show details of a specific task
|
||||
task-master show <id>
|
||||
# or
|
||||
task-master show --id=<id>
|
||||
|
||||
# View a specific subtask (e.g., subtask 2 of task 1)
|
||||
task-master show 1.2
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update Tasks">
|
||||
```bash
|
||||
# Update tasks from a specific ID and provide context
|
||||
task-master update --from=<id> --prompt="<prompt>"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Specific Task">
|
||||
```bash
|
||||
# Update a single task by ID with new information
|
||||
task-master update-task --id=<id> --prompt="<prompt>"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-task --id=<id> --prompt="<prompt>" --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Subtask">
|
||||
```bash
|
||||
# Append additional information to a specific subtask
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
|
||||
|
||||
# Example: Add details about API rate limiting to subtask 2 of task 5
|
||||
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
|
||||
```
|
||||
|
||||
Unlike the `update-task` command which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Generate Task Files">
|
||||
```bash
|
||||
# Generate individual task files from tasks.json
|
||||
task-master generate
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Set Task Status">
|
||||
```bash
|
||||
# Set status of a single task
|
||||
task-master set-status --id=<id> --status=<status>
|
||||
|
||||
# Set status for multiple tasks
|
||||
task-master set-status --id=1,2,3 --status=<status>
|
||||
|
||||
# Set status for subtasks
|
||||
task-master set-status --id=1.1,1.2 --status=<status>
|
||||
```
|
||||
|
||||
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Expand Tasks">
|
||||
```bash
|
||||
# Expand a specific task with subtasks
|
||||
task-master expand --id=<id> --num=<number>
|
||||
|
||||
# Expand with additional context
|
||||
task-master expand --id=<id> --prompt="<context>"
|
||||
|
||||
# Expand all pending tasks
|
||||
task-master expand --all
|
||||
|
||||
# Force regeneration of subtasks for tasks that already have them
|
||||
task-master expand --all --force
|
||||
|
||||
# Research-backed subtask generation for a specific task
|
||||
task-master expand --id=<id> --research
|
||||
|
||||
# Research-backed generation for all tasks
|
||||
task-master expand --all --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Clear Subtasks">
|
||||
```bash
|
||||
# Clear subtasks from a specific task
|
||||
task-master clear-subtasks --id=<id>
|
||||
|
||||
# Clear subtasks from multiple tasks
|
||||
task-master clear-subtasks --id=1,2,3
|
||||
|
||||
# Clear subtasks from all tasks
|
||||
task-master clear-subtasks --all
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Analyze Task Complexity">
|
||||
```bash
|
||||
# Analyze complexity of all tasks
|
||||
task-master analyze-complexity
|
||||
|
||||
# Save report to a custom location
|
||||
task-master analyze-complexity --output=my-report.json
|
||||
|
||||
# Use a specific LLM model
|
||||
task-master analyze-complexity --model=claude-3-opus-20240229
|
||||
|
||||
# Set a custom complexity threshold (1-10)
|
||||
task-master analyze-complexity --threshold=6
|
||||
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use your configured research model for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="View Complexity Report">
|
||||
```bash
|
||||
# Display the task complexity analysis report
|
||||
task-master complexity-report
|
||||
|
||||
# View a report at a custom location
|
||||
task-master complexity-report --file=my-report.json
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Managing Task Dependencies">
|
||||
```bash
|
||||
# Add a dependency to a task
|
||||
task-master add-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Remove a dependency from a task
|
||||
task-master remove-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Validate dependencies without fixing them
|
||||
task-master validate-dependencies
|
||||
|
||||
# Find and fix invalid dependencies automatically
|
||||
task-master fix-dependencies
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Add a New Task">
|
||||
```bash
|
||||
# Add a new task using AI
|
||||
task-master add-task --prompt="Description of the new task"
|
||||
|
||||
# Add a task with dependencies
|
||||
task-master add-task --prompt="Description" --dependencies=1,2,3
|
||||
|
||||
# Add a task with priority
|
||||
task-master add-task --prompt="Description" --priority=high
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Initialize a Project">
|
||||
```bash
|
||||
# Initialize a new project with Task Master structure
|
||||
task-master init
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
241
apps/docs/capabilities/index.mdx
Normal file
@@ -0,0 +1,241 @@
|
||||
---
|
||||
title: Technical Capabilities
|
||||
sidebarTitle: "Technical Capabilities"
|
||||
---
|
||||
|
||||
# Capabilities (Technical)
|
||||
|
||||
Discover the technical capabilities of Task Master, including supported models, integrations, and more.
|
||||
|
||||
# CLI Interface Synopsis
|
||||
|
||||
This document outlines the command-line interface (CLI) for the Task Master application, as defined in `bin/task-master.js` and `scripts/modules/commands.js`. It is intended for those writing user-facing documentation who need to understand how users interact with the application from the command line.
|
||||
|
||||
## Entry Point
|
||||
|
||||
The main entry point for the CLI is the `task-master` command, which is an executable script that spawns the main application logic in `scripts/dev.js`.
|
||||
|
||||
## Global Options
|
||||
|
||||
The following options are available for all commands:
|
||||
|
||||
- `-h, --help`: Display help information.
|
||||
- `--version`: Display the application's version.
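
For instance:

```bash
# Show the global help and the installed version
task-master --help
task-master --version
```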
|
||||
|
||||
## Commands
|
||||
|
||||
The CLI is organized into a series of commands, each with its own set of options. The following is a summary of the available commands, categorized by their functionality.
|
||||
|
||||
### 1. Task and Subtask Management
|
||||
|
||||
- **`add`**: Creates a new task using an AI-powered prompt.
|
||||
- `--prompt <prompt>`: The prompt to use for generating the task.
|
||||
- `--dependencies <dependencies>`: A comma-separated list of task IDs that this task depends on.
|
||||
- `--priority <priority>`: The priority of the task (e.g., `high`, `medium`, `low`).
|
||||
- **`add-subtask`**: Adds a subtask to a parent task.
|
||||
- `--parent-id <parentId>`: The ID of the parent task.
|
||||
- `--task-id <taskId>`: The ID of an existing task to convert to a subtask.
|
||||
- `--title <title>`: The title of the new subtask.
|
||||
- **`remove`**: Removes one or more tasks or subtasks.
|
||||
- `--ids <ids>`: A comma-separated list of task or subtask IDs to remove.
|
||||
- **`remove-subtask`**: Removes a subtask from its parent.
|
||||
- `--id <subtaskId>`: The ID of the subtask to remove (in the format `parentId.subtaskId`).
|
||||
- `--convert-to-task`: Converts the subtask to a standalone task.
|
||||
- **`update`**: Updates multiple tasks starting from a specific ID.
|
||||
- `--from <fromId>`: The ID of the task to start updating from.
|
||||
- `--prompt <prompt>`: The new context to apply to the tasks.
|
||||
- **`update-task`**: Updates a single task.
|
||||
- `--id <taskId>`: The ID of the task to update.
|
||||
- `--prompt <prompt>`: The new context to apply to the task.
|
||||
- **`update-subtask`**: Appends information to a subtask.
|
||||
- `--id <subtaskId>`: The ID of the subtask to update (in the format `parentId.subtaskId`).
|
||||
- `--prompt <prompt>`: The information to append to the subtask.
|
||||
- **`move`**: Moves a task or subtask.
|
||||
- `--from <sourceId>`: The ID of the task or subtask to move.
|
||||
- `--to <destinationId>`: The destination ID.
|
||||
- **`clear-subtasks`**: Clears all subtasks from one or more tasks.
|
||||
- `--ids <ids>`: A comma-separated list of task IDs.
|
||||
|
||||
### 2. Task Information and Status
|
||||
|
||||
- **`list`**: Lists all tasks.
|
||||
- `--status <status>`: Filters tasks by status.
|
||||
- `--with-subtasks`: Includes subtasks in the list.
|
||||
- **`show`**: Shows the details of a specific task.
|
||||
- `--id <taskId>`: The ID of the task to show.
|
||||
- **`next`**: Shows the next task to work on.
|
||||
- **`set-status`**: Sets the status of a task or subtask.
|
||||
- `--id <id>`: The ID of the task or subtask.
|
||||
- `--status <status>`: The new status.
|
||||
|
||||
### 3. Task Analysis and Expansion
|
||||
|
||||
- **`parse-prd`**: Parses a PRD to generate tasks.
|
||||
- `--file <file>`: The path to the PRD file.
|
||||
- `--num-tasks <numTasks>`: The number of tasks to generate.
|
||||
- **`expand`**: Expands a task into subtasks.
|
||||
- `--id <taskId>`: The ID of the task to expand.
|
||||
- `--num-subtasks <numSubtasks>`: The number of subtasks to generate.
|
||||
- **`expand-all`**: Expands all eligible tasks.
|
||||
- `--num-subtasks <numSubtasks>`: The number of subtasks to generate for each task.
|
||||
- **`analyze-complexity`**: Analyzes task complexity.
|
||||
- `--file <file>`: The path to the tasks file.
|
||||
- **`complexity-report`**: Displays the complexity analysis report.
|
||||
|
||||
### 4. Project and Configuration
|
||||
|
||||
- **`init`**: Initializes a new project.
|
||||
- **`generate`**: Generates individual task files.
|
||||
- **`migrate`**: Migrates a project to the new directory structure.
|
||||
- **`research`**: Performs AI-powered research.
|
||||
- `--query <query>`: The research query.
|
||||
|
||||
This synopsis provides a comprehensive overview of the CLI commands and their options, which should be helpful for creating user-facing documentation.
|
||||
|
||||
|
||||
# Core Implementation Synopsis
|
||||
|
||||
This document provides a high-level overview of the core implementation of the Task Master application, focusing on the functionalities exposed through `scripts/modules/task-manager.js`. This serves as a guide for understanding the application's capabilities when writing user-facing documentation.
|
||||
|
||||
## Core Concepts
|
||||
|
||||
The application revolves around the management of tasks and subtasks, which are stored in a `tasks.json` file. The core logic provides functionalities to create, read, update, and delete tasks and subtasks, as well as manage their dependencies and statuses.
|
||||
|
||||
### Task Structure
|
||||
|
||||
A task is a JSON object with the following key properties:
|
||||
|
||||
- `id`: A unique number identifying the task.
|
||||
- `title`: A string representing the task's title.
|
||||
- `description`: A string providing a brief description of the task.
|
||||
- `details`: A string containing detailed information about the task.
|
||||
- `testStrategy`: A string describing how to test the task.
|
||||
- `status`: A string representing the task's current status (e.g., `pending`, `in-progress`, `done`).
|
||||
- `dependencies`: An array of task IDs that this task depends on.
|
||||
- `priority`: A string representing the task's priority (e.g., `high`, `medium`, `low`).
|
||||
- `subtasks`: An array of subtask objects.
|
||||
|
||||
A subtask has a similar structure to a task but is nested within a parent task.
|
||||
|
||||
## Feature Categories
|
||||
|
||||
The core functionalities can be categorized as follows:
|
||||
|
||||
### 1. Task and Subtask Management
|
||||
|
||||
These functions are the bread and butter of the application, allowing for the creation, modification, and deletion of tasks and subtasks.
|
||||
|
||||
- **`addTask(prompt, dependencies, priority)`**: Creates a new task using an AI-powered prompt to generate the title, description, details, and test strategy. It can also be used to create a task manually by providing the task data directly.
|
||||
- **`addSubtask(parentId, existingTaskId, newSubtaskData)`**: Adds a subtask to a parent task. It can either convert an existing task into a subtask or create a new subtask from scratch.
|
||||
- **`removeTask(taskIds)`**: Removes one or more tasks or subtasks.
|
||||
- **`removeSubtask(subtaskId, convertToTask)`**: Removes a subtask from its parent. It can optionally convert the subtask into a standalone task.
|
||||
- **`updateTaskById(taskId, prompt)`**: Updates a task's information based on a prompt.
|
||||
- **`updateSubtaskById(subtaskId, prompt)`**: Appends additional information to a subtask's details.
|
||||
- **`updateTasks(fromId, prompt)`**: Updates multiple tasks starting from a specific ID based on a new context.
|
||||
- **`moveTask(sourceId, destinationId)`**: Moves a task or subtask to a new position.
|
||||
- **`clearSubtasks(taskIds)`**: Clears all subtasks from one or more tasks.
|
||||
|
||||
### 2. Task Information and Status
|
||||
|
||||
These functions are used to retrieve information about tasks and manage their status.
|
||||
|
||||
- **`listTasks(statusFilter, withSubtasks)`**: Lists all tasks, with options to filter by status and include subtasks.
|
||||
- **`findTaskById(taskId)`**: Finds a task by its ID.
|
||||
- **`taskExists(taskId)`**: Checks if a task with a given ID exists.
|
||||
- **`setTaskStatus(taskIdInput, newStatus)`**: Sets the status of a task or subtask.
|
||||
- **`updateSingleTaskStatus(taskIdInput, newStatus)`**: A helper function to update the status of a single task or subtask.
|
||||
- **`findNextTask()`**: Determines the next task to work on based on dependencies and status.
|
||||
|
||||
### 3. Task Analysis and Expansion
|
||||
|
||||
These functions leverage AI to analyze and break down tasks.
|
||||
|
||||
- **`parsePRD(prdPath, numTasks)`**: Parses a Product Requirements Document (PRD) to generate an initial set of tasks.
|
||||
- **`expandTask(taskId, numSubtasks)`**: Expands a task into a specified number of subtasks using AI.
|
||||
- **`expandAllTasks(numSubtasks)`**: Expands all eligible pending or in-progress tasks.
|
||||
- **`analyzeTaskComplexity(options)`**: Analyzes the complexity of tasks and generates recommendations for expansion.
|
||||
- **`readComplexityReport()`**: Reads the complexity analysis report.
|
||||
|
||||
### 4. Dependency Management
|
||||
|
||||
These functions are crucial for managing the relationships between tasks.
|
||||
|
||||
- **`isTaskDependentOn(task, targetTaskId)`**: Checks if a task has a direct or indirect dependency on another task.
|
||||
|
||||
### 5. Project and Configuration
|
||||
|
||||
These functions are for managing the project and its configuration.
|
||||
|
||||
- **`generateTaskFiles()`**: Generates individual task files from `tasks.json`.
|
||||
- **`migrateProject()`**: Migrates the project to the new `.taskmaster` directory structure.
|
||||
- **`performResearch(query, options)`**: Performs AI-powered research with project context.
|
||||
|
||||
This overview should provide a solid foundation for creating user-facing documentation. For more detailed information on each function, refer to the source code in `scripts/modules/task-manager/`.
|
||||
|
||||
|
||||
# MCP Interface Synopsis
|
||||
|
||||
This document provides an overview of the MCP (Model Context Protocol) interface for the Task Master application. The MCP interface is defined in the `mcp-server/` directory and exposes the application's core functionalities as a set of tools that can be called remotely.
|
||||
|
||||
## Core Concepts
|
||||
|
||||
The MCP interface is built on top of the `fastmcp` library and registers a set of tools that correspond to the core functionalities of the Task Master application. These tools are defined in the `mcp-server/src/tools/` directory and are registered with the MCP server in `mcp-server/src/tools/index.js`.
|
||||
|
||||
Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
|
||||
|
||||
## Tool Categories
|
||||
|
||||
The MCP tools can be categorized in the same way as the core functionalities:
|
||||
|
||||
### 1. Task and Subtask Management
|
||||
|
||||
- **`add_task`**: Creates a new task.
|
||||
- **`add_subtask`**: Adds a subtask to a parent task.
|
||||
- **`remove_task`**: Removes one or more tasks or subtasks.
|
||||
- **`remove_subtask`**: Removes a subtask from its parent.
|
||||
- **`update_task`**: Updates a single task.
|
||||
- **`update_subtask`**: Appends information to a subtask.
|
||||
- **`update`**: Updates multiple tasks.
|
||||
- **`move_task`**: Moves a task or subtask.
|
||||
- **`clear_subtasks`**: Clears all subtasks from one or more tasks.
|
||||
|
||||
### 2. Task Information and Status
|
||||
|
||||
- **`get_tasks`**: Lists all tasks.
|
||||
- **`get_task`**: Shows the details of a specific task.
|
||||
- **`next_task`**: Shows the next task to work on.
|
||||
- **`set_task_status`**: Sets the status of a task or subtask.
|
||||
|
||||
### 3. Task Analysis and Expansion
|
||||
|
||||
- **`parse_prd`**: Parses a PRD to generate tasks.
|
||||
- **`expand_task`**: Expands a task into subtasks.
|
||||
- **`expand_all`**: Expands all eligible tasks.
|
||||
- **`analyze_project_complexity`**: Analyzes task complexity.
|
||||
- **`complexity_report`**: Displays the complexity analysis report.
|
||||
|
||||
### 4. Dependency Management
|
||||
|
||||
- **`add_dependency`**: Adds a dependency to a task.
|
||||
- **`remove_dependency`**: Removes a dependency from a task.
|
||||
- **`validate_dependencies`**: Validates the dependencies of all tasks.
|
||||
- **`fix_dependencies`**: Fixes any invalid dependencies.
|
||||
|
||||
### 5. Project and Configuration
|
||||
|
||||
- **`initialize_project`**: Initializes a new project.
|
||||
- **`generate`**: Generates individual task files.
|
||||
- **`models`**: Manages AI model configurations.
|
||||
- **`research`**: Performs AI-powered research.
|
||||
|
||||
### 6. Tag Management
|
||||
|
||||
- **`add_tag`**: Creates a new tag.
|
||||
- **`delete_tag`**: Deletes a tag.
|
||||
- **`list_tags`**: Lists all tags.
|
||||
- **`use_tag`**: Switches to a different tag.
|
||||
- **`rename_tag`**: Renames a tag.
|
||||
- **`copy_tag`**: Copies a tag.
|
||||
|
||||
This synopsis provides a clear overview of the MCP interface and its available tools, which will be valuable for anyone writing documentation for developers who need to interact with the Task Master application programmatically.
|
||||
101
apps/docs/capabilities/mcp.mdx
Normal file
@@ -0,0 +1,101 @@
|
||||
---
|
||||
title: MCP Tools
|
||||
sidebarTitle: "MCP Tools"
|
||||
---
|
||||
|
||||
# MCP Tools
|
||||
|
||||
This document provides an overview of the MCP (Model Context Protocol) interface for the Task Master application. The MCP interface is defined in the `mcp-server/` directory and exposes the application's core functionalities as a set of tools that can be called remotely.
|
||||
|
||||
## Core Concepts
|
||||
|
||||
The MCP interface is built on top of the `fastmcp` library and registers a set of tools that correspond to the core functionalities of the Task Master application. These tools are defined in the `mcp-server/src/tools/` directory and are registered with the MCP server in `mcp-server/src/tools/index.js`.
|
||||
|
||||
Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
|
||||
|
||||
## Tool Categories
|
||||
|
||||
The MCP tools can be categorized in the same way as the core functionalities:
|
||||
|
||||
### 1. Task and Subtask Management
|
||||
|
||||
- **`add_task`**: Creates a new task.
|
||||
- **`add_subtask`**: Adds a subtask to a parent task.
|
||||
- **`remove_task`**: Removes one or more tasks or subtasks.
|
||||
- **`remove_subtask`**: Removes a subtask from its parent.
|
||||
- **`update_task`**: Updates a single task.
|
||||
- **`update_subtask`**: Appends information to a subtask.
|
||||
- **`update`**: Updates multiple tasks.
|
||||
- **`move_task`**: Moves a task or subtask.
|
||||
- **`clear_subtasks`**: Clears all subtasks from one or more tasks.
|
||||
|
||||
### 2. Task Information and Status
|
||||
|
||||
- **`get_tasks`**: Lists all tasks.
|
||||
- **`get_task`**: Shows the details of a specific task.
|
||||
- **`next_task`**: Shows the next task to work on.
|
||||
- **`set_task_status`**: Sets the status of a task or subtask.
|
||||
|
||||
### 3. Task Analysis and Expansion
|
||||
|
||||
- **`parse_prd`**: Parses a PRD to generate tasks.
|
||||
- **`expand_task`**: Expands a task into subtasks.
|
||||
- **`expand_all`**: Expands all eligible tasks.
|
||||
- **`analyze_project_complexity`**: Analyzes task complexity.
|
||||
- **`complexity_report`**: Displays the complexity analysis report.
|
||||
|
||||
### 4. Dependency Management
|
||||
|
||||
- **`add_dependency`**: Adds a dependency to a task.
|
||||
- **`remove_dependency`**: Removes a dependency from a task.
|
||||
- **`validate_dependencies`**: Validates the dependencies of all tasks.
|
||||
- **`fix_dependencies`**: Fixes any invalid dependencies.
|
||||
|
||||
### 5. Project and Configuration
|
||||
|
||||
- **`initialize_project`**: Initializes a new project.
|
||||
- **`generate`**: Generates individual task files.
|
||||
- **`models`**: Manages AI model configurations.
|
||||
- **`research`**: Performs AI-powered research.
|
||||
|
||||
### 6. Tag Management
|
||||
|
||||
- **`add_tag`**: Creates a new tag.
|
||||
- **`delete_tag`**: Deletes a tag.
|
||||
- **`list_tags`**: Lists all tags.
|
||||
- **`use_tag`**: Switches to a different tag.
|
||||
- **`rename_tag`**: Renames a tag.
|
||||
- **`copy_tag`**: Copies a tag.
|
||||
|
||||
## Configuration and Performance
|
||||
|
||||
### Timeout Configuration
|
||||
|
||||
As of version 0.28.0, Task Master automatically configures appropriate timeouts for MCP operations to handle long-running AI tasks. The Roo Code profile now includes a 300-second timeout (increased from the default 60 seconds) to accommodate complex operations like:
|
||||
|
||||
- `parse_prd` - PRD parsing and task generation
|
||||
- `expand_all` - Expanding multiple tasks into subtasks
|
||||
- `analyze_project_complexity` - Project-wide complexity analysis
|
||||
- Research-enabled operations with the `--research` flag
|
||||
|
||||
### MCP Server Configuration
|
||||
|
||||
The recommended MCP server configuration automatically includes these timeout settings:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"timeout": 300000,
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "your-key-here",
|
||||
"PERPLEXITY_API_KEY": "your-key-here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This configuration is automatically generated when using Task Master profiles with Roo Code and other AI coding assistants.
|
||||
326
apps/docs/capabilities/rpg-method.mdx
Normal file
@@ -0,0 +1,326 @@
|
||||
---
|
||||
title: RPG Method for PRD Creation
|
||||
sidebarTitle: "RPG Method"
|
||||
---
|
||||
|
||||
# Repository Planning Graph (RPG) Method
|
||||
|
||||
The RPG (Repository Planning Graph) method is an advanced approach to creating Product Requirements Documents that generate highly-structured, dependency-aware task graphs. It's based on Microsoft Research's methodology for scalable codebase generation.
|
||||
|
||||
## When to Use RPG
|
||||
|
||||
Use the RPG template (`example_prd_rpg.txt`) for:
|
||||
|
||||
- **Complex multi-module systems** with intricate dependencies
|
||||
- **Large-scale codebases** being built from scratch
|
||||
- **Projects requiring explicit architecture** and clear module boundaries
|
||||
- **Teams needing dependency visibility** for parallel development
|
||||
|
||||
For simpler features or smaller projects, the standard `example_prd.txt` template may be more appropriate.
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
|
||||
### 1. Dual-Semantics
|
||||
|
||||
Separate **functional** thinking (WHAT) from **structural** thinking (HOW):
|
||||
|
||||
```
|
||||
Functional: "Data Validation capability with schema checking and rule enforcement"
|
||||
↓
|
||||
Structural: "src/validation/ with schema-validator.js and rule-validator.js"
|
||||
```
|
||||
|
||||
This separation prevents mixing concerns and creates clearer module boundaries.
|
||||
|
||||
### 2. Explicit Dependencies
|
||||
|
||||
Never assume dependencies - always state them explicitly:
|
||||
|
||||
```
|
||||
Good:
|
||||
Module: data-ingestion
|
||||
Depends on: [schema-validator, config-manager]
|
||||
|
||||
Bad:
|
||||
Module: data-ingestion
|
||||
(Assumes schema-validator exists somewhere)
|
||||
```
|
||||
|
||||
Explicit dependencies enable:
|
||||
- Topological ordering of implementation
|
||||
- Parallel development of independent modules
|
||||
- Clear build/test order
|
||||
- Early detection of circular dependencies
|
||||
|
||||
### 3. Topological Order
|
||||
|
||||
Build foundation layers before higher layers:
|
||||
|
||||
```
|
||||
Phase 0 (Foundation): error-handling, base-types, config
|
||||
↓
|
||||
Phase 1 (Data): validation, ingestion (depend on Phase 0)
|
||||
↓
|
||||
Phase 2 (Core): algorithms, pipelines (depend on Phase 1)
|
||||
↓
|
||||
Phase 3 (API): routes, handlers (depend on Phase 2)
|
||||
```
|
||||
|
||||
Task Master automatically orders tasks based on this dependency chain.
|
||||
|
||||
### 4. Progressive Refinement
|
||||
|
||||
Start broad, refine iteratively:
|
||||
|
||||
1. High-level capabilities → Main tasks
|
||||
2. Features per capability → Subtasks
|
||||
3. Implementation details → Expanded subtasks
|
||||
|
||||
---
|
||||
|
||||
## Template Structure
|
||||
|
||||
The RPG template guides you through 7 key sections:
|
||||
|
||||
### 1. Overview
|
||||
- Problem statement
|
||||
- Target users
|
||||
- Success metrics
|
||||
|
||||
### 2. Functional Decomposition (WHAT)
|
||||
- High-level capability domains
|
||||
- Features per capability
|
||||
- Inputs/outputs/behavior for each feature
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Capability: Data Management
|
||||
Feature: Schema validation
|
||||
Description: Validate JSON against defined schemas
|
||||
Inputs: JSON object, schema definition
|
||||
Outputs: Validation result + error details
|
||||
Behavior: Iterate fields, check types, enforce constraints
|
||||
```
|
||||
|
||||
### 3. Structural Decomposition (HOW)
|
||||
- Repository folder structure
|
||||
- Module-to-capability mapping
|
||||
- File organization
|
||||
- Public interfaces/exports
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Capability: Data Management
|
||||
→ Maps to: src/data/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Rule validation feature)
|
||||
└── index.js (Exports)
|
||||
```
|
||||
|
||||
### 4. Dependency Graph (CRITICAL)
|
||||
- Foundation layer (no dependencies)
|
||||
- Each subsequent layer's dependencies
|
||||
- Explicit "depends on" declarations
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Foundation Layer (Phase 0):
|
||||
- error-handling: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer (Phase 1):
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator]
|
||||
```
|
||||
|
||||
### 5. Implementation Roadmap
|
||||
- Phases with entry/exit criteria
|
||||
- Tasks grouped by phase
|
||||
- Clear deliverables per phase
|
||||
|
||||
### 6. Test Strategy
|
||||
- Test pyramid ratios
|
||||
- Coverage requirements
|
||||
- Critical test scenarios per module
|
||||
- Guidelines for test generation
|
||||
|
||||
### 7. Architecture & Risks
|
||||
- Technical architecture
|
||||
- Data models
|
||||
- Technology decisions
|
||||
- Risk mitigation strategies
|
||||
|
||||
---
|
||||
|
||||
## Using RPG with Task Master
|
||||
|
||||
### Step 1: Create PRD with RPG Template
|
||||
|
||||
Use a code-context-aware tool to fill out the template:
|
||||
|
||||
```bash
|
||||
# In Claude Code, Cursor, or similar
|
||||
"Create a PRD using @.taskmaster/templates/example_prd_rpg.txt for [your project]"
|
||||
```
|
||||
|
||||
**Why code context matters:** The AI needs to understand your existing codebase to make informed decisions about:
|
||||
- Module boundaries
|
||||
- Dependency relationships
|
||||
- Integration points
|
||||
- Naming conventions
|
||||
|
||||
**Recommended tools:**
|
||||
- Claude Code (claude-code CLI)
|
||||
- Cursor/Windsurf
|
||||
- Gemini CLI (large contexts)
|
||||
- Codex/Grok CLI
|
||||
|
||||
### Step 2: Parse PRD into Tasks
|
||||
|
||||
```bash
|
||||
task-master parse-prd .taskmaster/docs/your-prd.txt --research
|
||||
```
|
||||
|
||||
Task Master will:
|
||||
1. Extract capabilities → Main tasks
|
||||
2. Extract features → Subtasks
|
||||
3. Parse dependencies → Task dependencies
|
||||
4. Order by phases → Task priorities
|
||||
|
||||
**Result:** A dependency-aware task graph ready for topological execution.
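For illustration, the dependency example from earlier might surface in `tasks.json` as entries like these (field names follow the Task Structure documentation; IDs, titles, and the surrounding file structure are illustrative only):

```json
[
  {
    "id": 1,
    "title": "Implement schema-validator module",
    "status": "pending",
    "priority": "high",
    "dependencies": []
  },
  {
    "id": 2,
    "title": "Implement data-ingestion module",
    "status": "pending",
    "priority": "high",
    "dependencies": [1]
  }
]
```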
|
||||
|
||||
### Step 3: Analyze Complexity
|
||||
|
||||
```bash
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
|
||||
Review the complexity report to identify tasks that need expansion.
|
||||
|
||||
### Step 4: Expand Tasks
|
||||
|
||||
```bash
|
||||
task-master expand --all --research
|
||||
```
|
||||
|
||||
Break down complex tasks into manageable subtasks while preserving dependency chains.
|
||||
|
||||
---
|
||||
|
||||
## RPG Benefits
|
||||
|
||||
### For Solo Developers
|
||||
- Clear roadmap for implementing complex features
|
||||
- Prevents architectural mistakes early
|
||||
- Explicit dependency tracking avoids integration issues
|
||||
- Enables resuming work after interruptions
|
||||
|
||||
### For Teams
|
||||
- Parallel development of independent modules
|
||||
- Clear contracts between modules (explicit dependencies)
|
||||
- Reduced merge conflicts (proper module boundaries)
|
||||
- Onboarding aid (architectural overview in PRD)
|
||||
|
||||
### For AI Agents
|
||||
- Structured context for code generation
|
||||
- Clear scope boundaries per task
|
||||
- Dependency awareness prevents incomplete implementations
|
||||
- Test strategy guidance for TDD workflows
|
||||
|
||||
---
|
||||
|
||||
## RPG vs Standard Template
|
||||
|
||||
| Aspect | Standard Template | RPG Template |
|
||||
|--------|------------------|--------------|
|
||||
| **Best for** | Simple features | Complex systems |
|
||||
| **Dependency handling** | Implicit | Explicit graph |
|
||||
| **Structure guidance** | Minimal | Step-by-step |
|
||||
| **Examples** | Few | Inline good/bad examples |
|
||||
| **Module boundaries** | Vague | Precise mapping |
|
||||
| **Task ordering** | Manual | Automatic (topological) |
|
||||
| **Learning curve** | Low | Medium |
|
||||
| **Resulting task quality** | Good | Excellent |
|
||||
|
||||
---
|
||||
|
||||
## Tips for Best Results
|
||||
|
||||
### 1. Spend Time on Dependencies
|
||||
The dependency graph section is the most valuable. List all dependencies explicitly, even if they seem obvious.
|
||||
|
||||
### 2. Keep Features Atomic
|
||||
Each feature should be independently testable. If a feature description is vague ("handle data"), break it into specific features.
|
||||
|
||||
### 3. Progressive Refinement
|
||||
Don't try to get everything perfect on the first pass:
|
||||
1. Fill out high-level sections
|
||||
2. Review and refine
|
||||
3. Add detail where needed
|
||||
4. Let `task-master expand` break down complex tasks further
|
||||
|
||||
### 4. Use Research Mode
|
||||
```bash
|
||||
task-master parse-prd .taskmaster/docs/your-prd.txt --research
|
||||
```
|
||||
The `--research` flag uses your configured research model to enrich task generation with domain knowledge.
|
||||
|
||||
### 5. Validate Early
|
||||
```bash
|
||||
task-master validate-dependencies
|
||||
```
|
||||
Check for circular dependencies or orphaned modules before starting implementation.
|
||||
|
||||
---
|
||||
|
||||
## Common Pitfalls
|
||||
|
||||
### ❌ Mixing Functional and Structural
|
||||
```
|
||||
Bad: "Capability: validation.js"
|
||||
Good: "Capability: Data Validation" → maps to "src/validation/"
|
||||
```
|
||||
|
||||
### ❌ Vague Module Boundaries
|
||||
```
|
||||
Bad: "Module: utils"
|
||||
Good: "Module: string-utilities" with clear exports
|
||||
```
|
||||
|
||||
### ❌ Implicit Dependencies
|
||||
```
|
||||
Bad: "Module: API handlers (needs validation)"
|
||||
Good: "Module: API handlers, Depends on: [validation, error-handling]"
|
||||
```
|
||||
|
||||
### ❌ Skipping Test Strategy
|
||||
Without a test strategy, the AI won't know what to test during implementation.
|
||||
|
||||
---
|
||||
|
||||
## Example Workflow
|
||||
|
||||
1. **Discuss idea with AI**: Explain your project concept
|
||||
2. **Reference RPG template**: Show AI the `example_prd_rpg.txt`
|
||||
3. **Co-create PRD**: Work through each section with AI guidance
|
||||
4. **Save to docs**: Place in `.taskmaster/docs/your-project.txt`
|
||||
5. **Parse PRD**: `task-master parse-prd .taskmaster/docs/your-project.txt --research`
|
||||
6. **Analyze**: `task-master analyze-complexity --research`
|
||||
7. **Expand**: `task-master expand --all --research`
|
||||
8. **Start work**: `task-master next` (the CLI steps from this list are collected in one block below)
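The command-line steps from this list, gathered into a single copy-and-paste sequence (the PRD path is a placeholder):

```bash
task-master parse-prd .taskmaster/docs/your-project.txt --research
task-master analyze-complexity --research
task-master expand --all --research
task-master next
```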
|
||||
|
||||
---
|
||||
|
||||
## Further Reading
|
||||
|
||||
- [PRD Creation and Parsing Guide](/getting-started/quick-start/prd-quick)
|
||||
- [Task Structure Documentation](/capabilities/task-structure)
|
||||
- [Microsoft Research RPG Paper](https://arxiv.org/abs/2410.21376) (Original methodology)
|
||||
|
||||
---
|
||||
|
||||
<Tip>
|
||||
The RPG template includes inline `<instruction>` and `<example>` blocks that teach the method as you use it. Read these sections carefully - they provide valuable guidance at each decision point.
|
||||
</Tip>
|
||||
163 apps/docs/capabilities/task-structure.mdx Normal file
@@ -0,0 +1,163 @@
|
||||
---
|
||||
title: "Task Structure"
|
||||
sidebarTitle: "Task Structure"
|
||||
description: "Tasks in Task Master follow a specific format designed to provide comprehensive information for both humans and AI assistants."
|
||||
---
|
||||
|
||||
## Task Fields in tasks.json
|
||||
|
||||
Tasks in tasks.json have the following structure (a complete example follows the table):
|
||||
|
||||
| Field | Description | Example |
|
||||
| -------------- | ---------------------------------------------- | ------------------------------------------------------ |
|
||||
| `id` | Unique identifier for the task. | `1` |
|
||||
| `title` | Brief, descriptive title. | `"Initialize Repo"` |
|
||||
| `description` | What the task involves. | `"Create a new repository, set up initial structure."` |
|
||||
| `status` | Current state. | `"pending"`, `"done"`, `"deferred"` |
|
||||
| `dependencies` | IDs of tasks that must be completed first (listings mark them ✅ when completed, ⏱️ when pending). | `[1, 2]` |
|
||||
| `priority` | Task importance. | `"high"`, `"medium"`, `"low"` |
|
||||
| `details` | Implementation instructions. | `"Use GitHub client ID/secret, handle callback..."` |
|
||||
| `testStrategy` | How to verify success. | `"Deploy and confirm 'Hello World' response."` |
|
||||
| `subtasks` | Nested subtasks related to the main task. | `[{"id": 1, "title": "Configure OAuth", ...}]` |
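Putting these fields together, a single task entry might look like the following (values reuse the examples from the table; real files can include additional fields):

```json
{
  "id": 1,
  "title": "Initialize Repo",
  "description": "Create a new repository, set up initial structure.",
  "status": "pending",
  "dependencies": [],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback...",
  "testStrategy": "Deploy and confirm 'Hello World' response.",
  "subtasks": [{ "id": 1, "title": "Configure OAuth" }]
}
```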
|
||||
|
||||
## Task File Format
|
||||
|
||||
Individual task files follow this format:
|
||||
|
||||
```
|
||||
# Task ID: <id>
|
||||
# Title: <title>
|
||||
# Status: <status>
|
||||
# Dependencies: <comma-separated list of dependency IDs>
|
||||
# Priority: <priority>
|
||||
# Description: <brief description>
|
||||
# Details:
|
||||
<detailed implementation notes>
|
||||
|
||||
# Test Strategy:
|
||||
<verification approach>
|
||||
```
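For example, a task file for the repository-initialization task above might read (the details text here is illustrative):

```
# Task ID: 1
# Title: Initialize Repo
# Status: pending
# Dependencies: None
# Priority: high
# Description: Create a new repository, set up initial structure.
# Details:
Create the repository, add the initial folder structure, and push the first commit.

# Test Strategy:
Deploy and confirm 'Hello World' response.
```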
|
||||
|
||||
## Features in Detail
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Analyzing Task Complexity">
|
||||
The `analyze-complexity` command:
|
||||
|
||||
- Analyzes each task using AI to assess its complexity on a scale of 1-10
|
||||
- Recommends optimal number of subtasks based on configured DEFAULT_SUBTASKS
|
||||
- Generates tailored prompts for expanding each task
|
||||
- Creates a comprehensive JSON report with ready-to-use commands
|
||||
- Saves the report to scripts/task-complexity-report.json by default
|
||||
|
||||
The generated report contains:
|
||||
|
||||
- Complexity analysis for each task (scored 1-10)
|
||||
- Recommended number of subtasks based on complexity
|
||||
- AI-generated expansion prompts customized for each task
|
||||
- Ready-to-run expansion commands directly within each task analysis
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Viewing Complexity Report">
|
||||
The `complexity-report` command:
|
||||
|
||||
- Displays a formatted, easy-to-read version of the complexity analysis report
|
||||
- Shows tasks organized by complexity score (highest to lowest)
|
||||
- Provides complexity distribution statistics (low, medium, high)
|
||||
- Highlights tasks recommended for expansion based on threshold score
|
||||
- Includes ready-to-use expansion commands for each complex task
|
||||
- If no report exists, offers to generate one on the spot
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Smart Task Expansion">
|
||||
The `expand` command automatically checks for and uses the complexity report:
|
||||
|
||||
When a complexity report exists:
|
||||
|
||||
- Tasks are automatically expanded using the recommended subtask count and prompts
|
||||
- When expanding all tasks, they're processed in order of complexity (highest first)
|
||||
- Research-backed generation is preserved from the complexity analysis
|
||||
- You can still override recommendations with explicit command-line options
|
||||
|
||||
Example workflow:
|
||||
|
||||
```bash
|
||||
# Generate the complexity analysis report with research capabilities
|
||||
task-master analyze-complexity --research
|
||||
|
||||
# Review the report in a readable format
|
||||
task-master complexity-report
|
||||
|
||||
# Expand tasks using the optimized recommendations
|
||||
task-master expand --id=8
|
||||
# or expand all tasks
|
||||
task-master expand --all
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Finding the Next Task">
|
||||
The `next` command:
|
||||
|
||||
- Identifies tasks that are pending/in-progress and have all dependencies satisfied
|
||||
- Prioritizes tasks by priority level, dependency count, and task ID
|
||||
- Displays comprehensive information about the selected task:
|
||||
- Basic task details (ID, title, priority, dependencies)
|
||||
- Implementation details
|
||||
- Subtasks (if they exist)
|
||||
- Provides contextual suggested actions:
|
||||
- Command to mark the task as in-progress
|
||||
- Command to mark the task as done
|
||||
- Commands for working with subtasks
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Viewing Specific Task Details">
|
||||
The `show` command:
|
||||
|
||||
- Displays comprehensive details about a specific task or subtask
|
||||
- Shows task status, priority, dependencies, and detailed implementation notes
|
||||
- For parent tasks, displays all subtasks and their status
|
||||
- For subtasks, shows parent task relationship
|
||||
- Provides contextual action suggestions based on the task's state
|
||||
- Works with both regular tasks and subtasks (using the format taskId.subtaskId)
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Best Practices for AI-Driven Development
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="📝 Detailed PRD" icon="lightbulb">
|
||||
The more detailed your PRD, the better the generated tasks will be.
|
||||
</Card>
|
||||
|
||||
<Card title="👀 Review Tasks" icon="magnifying-glass">
|
||||
After parsing the PRD, review the tasks to ensure they make sense and have appropriate dependencies.
|
||||
</Card>
|
||||
|
||||
<Card title="📊 Analyze Complexity" icon="chart-line">
|
||||
Use the complexity analysis feature to identify which tasks should be broken down further.
|
||||
</Card>
|
||||
|
||||
<Card title="⛓️ Follow Dependencies" icon="link">
|
||||
Always respect task dependencies - the Cursor agent will help with this.
|
||||
</Card>
|
||||
|
||||
<Card title="🔄 Update As You Go" icon="arrows-rotate">
|
||||
If your implementation diverges from the plan, use the update command to keep future tasks aligned.
|
||||
</Card>
|
||||
|
||||
<Card title="📦 Break Down Tasks" icon="boxes-stacked">
|
||||
Use the expand command to break down complex tasks into manageable subtasks.
|
||||
</Card>
|
||||
|
||||
<Card title="🔄 Regenerate Files" icon="file-arrow-up">
|
||||
After any updates to tasks.json, regenerate the task files to keep them in sync.
|
||||
</Card>
|
||||
|
||||
<Card title="💬 Provide Context" icon="comment">
|
||||
When asking the Cursor agent to help with a task, provide context about what you're trying to achieve.
|
||||
</Card>
|
||||
|
||||
<Card title="✅ Validate Dependencies" icon="circle-check">
|
||||
Periodically run the validate-dependencies command to check for invalid or circular dependencies.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
84 apps/docs/docs.json Normal file
@@ -0,0 +1,84 @@
|
||||
{
|
||||
"$schema": "https://mintlify.com/docs.json",
|
||||
"theme": "mint",
|
||||
"name": "Task Master",
|
||||
"colors": {
|
||||
"primary": "#3366CC",
|
||||
"light": "#6699FF",
|
||||
"dark": "#24478F"
|
||||
},
|
||||
"favicon": "/favicon.svg",
|
||||
"navigation": {
|
||||
"tabs": [
|
||||
{
|
||||
"tab": "Task Master Documentation",
|
||||
"groups": [
|
||||
{
|
||||
"group": "Welcome",
|
||||
"pages": ["introduction"]
|
||||
},
|
||||
{
|
||||
"group": "Getting Started",
|
||||
"pages": [
|
||||
{
|
||||
"group": "Quick Start",
|
||||
"pages": [
|
||||
"getting-started/quick-start/quick-start",
|
||||
"getting-started/quick-start/requirements",
|
||||
"getting-started/quick-start/installation",
|
||||
"getting-started/quick-start/configuration-quick",
|
||||
"getting-started/quick-start/prd-quick",
|
||||
"getting-started/quick-start/tasks-quick",
|
||||
"getting-started/quick-start/execute-quick"
|
||||
]
|
||||
},
|
||||
"getting-started/api-keys",
|
||||
"getting-started/faq",
|
||||
"getting-started/contribute"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Best Practices",
|
||||
"pages": [
|
||||
"best-practices/index",
|
||||
"best-practices/configuration-advanced",
|
||||
"best-practices/advanced-tasks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Technical Capabilities",
|
||||
"pages": [
|
||||
"capabilities/mcp",
|
||||
"capabilities/cli-root-commands",
|
||||
"capabilities/task-structure"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"global": {
|
||||
"anchors": [
|
||||
{
|
||||
"anchor": "Github",
|
||||
"href": "https://github.com/eyaltoledano/claude-task-master",
|
||||
"icon": "github"
|
||||
},
|
||||
{
|
||||
"anchor": "Discord",
|
||||
"href": "https://discord.gg/fWJkU7rf",
|
||||
"icon": "discord"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
"logo": {
|
||||
"light": "/logo/task-master-logo.png",
|
||||
"dark": "/logo/task-master-logo.png"
|
||||
},
|
||||
"footer": {
|
||||
"socials": {
|
||||
"x": "https://x.com/TaskmasterAI",
|
||||
"github": "https://github.com/eyaltoledano/claude-task-master"
|
||||
}
|
||||
}
|
||||
}
|
||||
9 apps/docs/favicon.svg Normal file
@@ -0,0 +1,9 @@
|
||||
<svg width="100" height="100" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
|
||||
<!-- Blue form with check from logo -->
|
||||
<rect x="16" y="10" width="68" height="80" rx="9" fill="#3366CC"/>
|
||||
<polyline points="33,44 41,55 56,29" fill="none" stroke="#FFFFFF" stroke-width="6"/>
|
||||
<circle cx="33" cy="64" r="4" fill="#FFFFFF"/>
|
||||
<rect x="43" y="61" width="27" height="6" fill="#FFFFFF"/>
|
||||
<circle cx="33" cy="77" r="4" fill="#FFFFFF"/>
|
||||
<rect x="43" y="75" width="27" height="6" fill="#FFFFFF"/>
|
||||
</svg>
|
||||
|
285 apps/docs/getting-started/api-keys.mdx Normal file
@@ -0,0 +1,285 @@
|
||||
# API Keys Configuration
|
||||
|
||||
Task Master supports multiple AI providers through environment variables. This page lists all available API keys and their configuration requirements.
|
||||
|
||||
## Required API Keys
|
||||
|
||||
> **Note**: At least one required API key must be configured for Task Master to function.
|
||||
>
|
||||
> "Required: Yes" below means "required to use that specific provider," not "required globally." You only need at least one provider configured.
|
||||
|
||||
### ANTHROPIC_API_KEY (Recommended)
|
||||
- **Provider**: Anthropic Claude models
|
||||
- **Format**: `sk-ant-api03-...`
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Models**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
|
||||
- **Get Key**: [Anthropic Console](https://console.anthropic.com/)
|
||||
|
||||
```bash
|
||||
ANTHROPIC_API_KEY="sk-ant-api03-your-key-here"
|
||||
```
|
||||
|
||||
### PERPLEXITY_API_KEY (Highly Recommended for Research)
|
||||
- **Provider**: Perplexity AI (Research features)
|
||||
- **Format**: `pplx-...`
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Purpose**: Enables research-backed task expansions and updates
|
||||
- **Models**: Perplexity Sonar models
|
||||
- **Get Key**: [Perplexity API](https://www.perplexity.ai/settings/api)
|
||||
|
||||
```bash
|
||||
PERPLEXITY_API_KEY="pplx-your-key-here"
|
||||
```
|
||||
|
||||
### OPENAI_API_KEY
|
||||
- **Provider**: OpenAI GPT models
|
||||
- **Format**: `sk-proj-...` or `sk-...`
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Models**: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, O1 models
|
||||
- **Get Key**: [OpenAI Platform](https://platform.openai.com/api-keys)
|
||||
|
||||
```bash
|
||||
OPENAI_API_KEY="sk-proj-your-key-here"
|
||||
```
|
||||
|
||||
### OPENAI_CODEX_API_KEY (New)
|
||||
- **Provider**: Codex CLI (GPT-5 and GPT-5-Codex models)
|
||||
- **Format**: Various formats
|
||||
- **Required**: ❌ **No** (OAuth-first authentication via `codex login`)
|
||||
- **Models**: GPT-5, GPT-5-Codex (272K input / 128K output context)
|
||||
- **Authentication**: Primary authentication is OAuth via `codex login` command
|
||||
- **Features**: Codebase analysis capabilities automatically enabled
|
||||
- **Get Access**: Follow Codex CLI setup instructions
|
||||
|
||||
```bash
|
||||
# Optional - OAuth via 'codex login' is preferred
|
||||
OPENAI_CODEX_API_KEY="your-codex-api-key-here"
|
||||
```
|
||||
|
||||
### GOOGLE_API_KEY
|
||||
- **Provider**: Google Gemini models
|
||||
- **Format**: Various formats
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Models**: Gemini Pro, Gemini Flash, Gemini Ultra
|
||||
- **Get Key**: [Google AI Studio](https://aistudio.google.com/app/apikey)
|
||||
- **Alternative**: Use `GOOGLE_APPLICATION_CREDENTIALS` for service account (Google Vertex)
|
||||
|
||||
```bash
|
||||
GOOGLE_API_KEY="your-google-api-key-here"
|
||||
```
|
||||
|
||||
### GROQ_API_KEY
|
||||
- **Provider**: Groq (High-performance inference)
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Models**: Llama models, Mixtral models (via Groq)
|
||||
- **Get Key**: [Groq Console](https://console.groq.com/keys)
|
||||
|
||||
```bash
|
||||
GROQ_API_KEY="your-groq-key-here"
|
||||
```
|
||||
|
||||
### OPENROUTER_API_KEY
|
||||
- **Provider**: OpenRouter (Multiple model access)
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Models**: Access to various models through single API
|
||||
- **Get Key**: [OpenRouter](https://openrouter.ai/keys)
|
||||
|
||||
```bash
|
||||
OPENROUTER_API_KEY="your-openrouter-key-here"
|
||||
```
|
||||
|
||||
### AZURE_OPENAI_API_KEY
|
||||
- **Provider**: Azure OpenAI Service
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Requirements**: Also requires `AZURE_OPENAI_ENDPOINT` configuration
|
||||
- **Models**: GPT models via Azure
|
||||
- **Get Key**: [Azure Portal](https://portal.azure.com/)
|
||||
|
||||
```bash
|
||||
AZURE_OPENAI_API_KEY="your-azure-key-here"
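# Azure also requires the endpoint; the URL below is a placeholder for your resource
AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com/"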
|
||||
```
|
||||
|
||||
### XAI_API_KEY
|
||||
- **Provider**: xAI (Grok) models
|
||||
- **Required**: ✅ **Yes**
|
||||
- **Models**: Grok models
|
||||
- **Get Key**: [xAI Console](https://console.x.ai/)
|
||||
|
||||
```bash
|
||||
XAI_API_KEY="your-xai-key-here"
|
||||
```
|
||||
|
||||
## Optional API Keys
|
||||
|
||||
> **Note**: These API keys are optional - providers will work without them or use alternative authentication methods.
|
||||
|
||||
### AWS_ACCESS_KEY_ID (Bedrock)
|
||||
- **Provider**: AWS Bedrock
|
||||
- **Required**: ❌ **No** (uses AWS credential chain)
|
||||
- **Models**: Claude models via AWS Bedrock
|
||||
- **Authentication**: Uses AWS credential chain (profiles, IAM roles, etc.)
|
||||
- **Get Key**: [AWS Console](https://console.aws.amazon.com/iam/)
|
||||
|
||||
```bash
|
||||
# Optional - AWS credential chain is preferred
|
||||
AWS_ACCESS_KEY_ID="your-aws-access-key"
|
||||
AWS_SECRET_ACCESS_KEY="your-aws-secret-key"
|
||||
```
|
||||
|
||||
### CLAUDE_CODE_API_KEY
|
||||
- **Provider**: Claude Code CLI
|
||||
- **Required**: ❌ **No** (uses OAuth tokens)
|
||||
- **Purpose**: Integration with local Claude Code CLI
|
||||
- **Authentication**: Uses OAuth tokens, no API key needed
|
||||
|
||||
```bash
|
||||
# Not typically needed
|
||||
CLAUDE_CODE_API_KEY="not-usually-required"
|
||||
```
|
||||
|
||||
### GEMINI_API_KEY
|
||||
- **Provider**: Gemini CLI
|
||||
- **Required**: ❌ **No** (uses OAuth authentication)
|
||||
- **Purpose**: Integration with Gemini CLI
|
||||
- **Authentication**: Primarily uses OAuth via CLI, API key is optional
|
||||
|
||||
```bash
|
||||
# Optional - OAuth via CLI is preferred
|
||||
GEMINI_API_KEY="your-gemini-key-here"
|
||||
```
|
||||
|
||||
### GROK_CLI_API_KEY
|
||||
- **Provider**: Grok CLI
|
||||
- **Required**: ❌ **No** (can use CLI config)
|
||||
- **Purpose**: Integration with Grok CLI
|
||||
- **Authentication**: Can use Grok CLI's own config file
|
||||
|
||||
```bash
|
||||
# Optional - CLI config is preferred
|
||||
GROK_CLI_API_KEY="your-grok-cli-key"
|
||||
```
|
||||
|
||||
### OLLAMA_API_KEY
|
||||
- **Provider**: Ollama (Local/Remote)
|
||||
- **Required**: ❌ **No** (local installation doesn't need key)
|
||||
- **Purpose**: For remote Ollama servers that require authentication
|
||||
- **Requirements**: Only needed for remote servers with authentication
|
||||
- **Note**: Not needed for local Ollama installations
|
||||
|
||||
```bash
|
||||
# Only needed for remote Ollama servers
|
||||
OLLAMA_API_KEY="your-ollama-api-key-here"
|
||||
```
|
||||
|
||||
### GITHUB_API_KEY
|
||||
- **Provider**: GitHub (Import/Export features)
|
||||
- **Format**: `ghp_...` or `github_pat_...`
|
||||
- **Required**: ❌ **No** (for GitHub features only)
|
||||
- **Purpose**: GitHub import/export features
|
||||
- **Get Key**: [GitHub Settings](https://github.com/settings/tokens)
|
||||
|
||||
```bash
|
||||
GITHUB_API_KEY="ghp_your-github-key-here"
|
||||
```
|
||||
|
||||
## Configuration Methods
|
||||
|
||||
### Method 1: Environment File (.env)
|
||||
Create a `.env` file in your project root:
|
||||
|
||||
```bash
|
||||
# Copy from .env.example
|
||||
cp .env.example .env
|
||||
|
||||
# Edit with your keys
|
||||
vim .env
|
||||
```
|
||||
|
||||
### Method 2: System Environment Variables
|
||||
```bash
|
||||
export ANTHROPIC_API_KEY="your-key-here"
|
||||
export PERPLEXITY_API_KEY="your-key-here"
|
||||
# ... other keys
|
||||
```
|
||||
|
||||
### Method 3: MCP Server Configuration
|
||||
For Claude Code integration, configure keys in `.mcp.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"timeout": 300000,
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "your-key-here",
|
||||
"PERPLEXITY_API_KEY": "your-key-here",
|
||||
"OPENAI_API_KEY": "your-key-here",
|
||||
"OPENAI_CODEX_API_KEY": "your-codex-key-here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Note**: The `timeout: 300000` (300 seconds) setting is recommended for long-running AI operations like PRD parsing and complexity analysis.
|
||||
|
||||
## Key Requirements
|
||||
|
||||
### Minimum Requirements
|
||||
- **At least one** AI provider key is required
|
||||
- **ANTHROPIC_API_KEY** is recommended as the primary provider
|
||||
- **PERPLEXITY_API_KEY** is highly recommended for research features
|
||||
|
||||
### Provider-Specific Requirements
|
||||
- **Azure OpenAI**: Requires both `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` configuration
|
||||
- **Google Vertex**: Requires `VERTEX_PROJECT_ID` and `VERTEX_LOCATION` environment variables (see the snippet after this list)
|
||||
- **AWS Bedrock**: Uses AWS credential chain (profiles, IAM roles, etc.) instead of API keys
|
||||
- **Ollama**: Only needs API key for remote servers with authentication
|
||||
- **CLI Providers**: Gemini CLI, Grok CLI, and Claude Code use OAuth/CLI config instead of API keys
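As a minimal sketch, the Vertex-specific variables mentioned above are set like any other environment variable (both values are placeholders):

```bash
# Placeholders - substitute your own Google Cloud project and region
export VERTEX_PROJECT_ID="your-gcp-project-id"
export VERTEX_LOCATION="us-central1"
```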
|
||||
|
||||
## Model Configuration
|
||||
|
||||
After setting up API keys, configure which models to use:
|
||||
|
||||
```bash
|
||||
# Interactive model setup
|
||||
task-master models --setup
|
||||
|
||||
# Set specific models
|
||||
task-master models --set-main claude-3-5-sonnet-20241022
|
||||
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
|
||||
task-master models --set-fallback gpt-4o-mini
|
||||
```
|
||||
|
||||
## Security Best Practices
|
||||
|
||||
1. **Never commit API keys** to version control
|
||||
2. **Use .env files** and add them to `.gitignore` (see the snippet below)
|
||||
3. **Rotate keys regularly** especially if compromised
|
||||
4. **Use minimal permissions** for service accounts
|
||||
5. **Monitor usage** to detect unauthorized access
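For example, assuming a standard Git setup:

```bash
# Keep local secrets out of version control
echo ".env" >> .gitignore
```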
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Key Validation
|
||||
```bash
|
||||
# Check if keys are properly configured
|
||||
task-master models
|
||||
|
||||
# Test specific provider
|
||||
task-master add-task --prompt="test task" --model=claude-3-5-sonnet-20241022
|
||||
```
|
||||
|
||||
### Common Issues
|
||||
- **Invalid key format**: Check the expected format for each provider
|
||||
- **Insufficient permissions**: Ensure keys have necessary API access
|
||||
- **Rate limits**: Some providers have usage limits
|
||||
- **Regional restrictions**: Some models may not be available in all regions
|
||||
|
||||
### Getting Help
|
||||
If you encounter issues with API key configuration:
|
||||
- Check the [FAQ](/getting-started/faq) for common solutions
|
||||
- Join our [Discord community](https://discord.gg/fWJkU7rf) for support
|
||||
- Report issues on [GitHub](https://github.com/eyaltoledano/claude-task-master/issues)
|
||||
335 apps/docs/getting-started/contribute.mdx Normal file
@@ -0,0 +1,335 @@
|
||||
# Contributing to Task Master
|
||||
|
||||
Thank you for your interest in contributing to Task Master! We're excited to work with you and appreciate your help in making this project better. 🚀
|
||||
|
||||
## 🤝 Our Collaborative Approach
|
||||
|
||||
We're a **PR-friendly team** that values collaboration:
|
||||
|
||||
- ✅ **We review PRs quickly** - Usually within hours, not days
|
||||
- ✅ **We're super reactive** - Expect fast feedback and engagement
|
||||
- ✅ **We sometimes take over PRs** - If your contribution is valuable but needs cleanup, we might jump in to help finish it
|
||||
- ✅ **We're open to all contributions** - From bug fixes to major features
|
||||
|
||||
**We don't mind AI-generated code**, but we do expect you to:
|
||||
|
||||
- ✅ **Review and understand** what the AI generated
|
||||
- ✅ **Test the code thoroughly** before submitting
|
||||
- ✅ **Ensure it's well-written** and follows our patterns
|
||||
- ❌ **Don't submit "AI slop"** - untested, unreviewed AI output
|
||||
|
||||
> **Why this matters**: We spend significant time reviewing PRs. Help us help you by submitting quality contributions that save everyone time!
|
||||
|
||||
## 🚀 Quick Start for Contributors
|
||||
|
||||
### 1. Fork and Clone
|
||||
|
||||
```bash
|
||||
git clone https://github.com/YOUR_USERNAME/claude-task-master.git
|
||||
cd claude-task-master
|
||||
npm install
|
||||
```
|
||||
|
||||
### 2. Create a Feature Branch
|
||||
|
||||
**Important**: Always target the `next` branch, not `main`:
|
||||
|
||||
```bash
|
||||
git checkout next
|
||||
git pull origin next
|
||||
git checkout -b feature/your-feature-name
|
||||
```
|
||||
|
||||
### 3. Make Your Changes
|
||||
|
||||
Follow our development guidelines below.
|
||||
|
||||
### 4. Test Everything Yourself
|
||||
|
||||
**Before submitting your PR**, ensure:
|
||||
|
||||
```bash
|
||||
# Run all tests
|
||||
npm test
|
||||
|
||||
# Check formatting
|
||||
npm run format-check
|
||||
|
||||
# Fix formatting if needed
|
||||
npm run format
|
||||
```
|
||||
|
||||
### 5. Create a Changeset
|
||||
|
||||
**Required for most changes**:
|
||||
|
||||
```bash
|
||||
npm run changeset
|
||||
```
|
||||
|
||||
See the [Changeset Guidelines](#changeset-guidelines) below for details.
|
||||
|
||||
### 6. Submit Your PR
|
||||
|
||||
- Target the `next` branch
|
||||
- Write a clear description
|
||||
- Reference any related issues
|
||||
|
||||
## 📋 Development Guidelines
|
||||
|
||||
### Branch Strategy
|
||||
|
||||
- **`main`**: Production-ready code
|
||||
- **`next`**: Development branch - **target this for PRs**
|
||||
- **Feature branches**: `feature/description` or `fix/description`
|
||||
|
||||
### Code Quality Standards
|
||||
|
||||
1. **Write tests** for new functionality
|
||||
2. **Follow existing patterns** in the codebase
|
||||
3. **Add JSDoc comments** for functions
|
||||
4. **Keep functions focused** and single-purpose
|
||||
|
||||
### Testing Requirements
|
||||
|
||||
Your PR **must pass all CI checks**:
|
||||
|
||||
- ✅ **Unit tests**: `npm test`
|
||||
- ✅ **Format check**: `npm run format-check`
|
||||
|
||||
**Test your changes locally first** - this saves review time and shows you care about quality.
|
||||
|
||||
## 📦 Changeset Guidelines
|
||||
|
||||
We use [Changesets](https://github.com/changesets/changesets) to manage versioning and generate changelogs.
|
||||
|
||||
### When to Create a Changeset
|
||||
|
||||
**Always create a changeset for**:
|
||||
|
||||
- ✅ New features
|
||||
- ✅ Bug fixes
|
||||
- ✅ Breaking changes
|
||||
- ✅ Performance improvements
|
||||
- ✅ User-facing documentation updates
|
||||
- ✅ Dependency updates that affect functionality
|
||||
|
||||
**Skip changesets for**:
|
||||
|
||||
- ❌ Internal documentation only
|
||||
- ❌ Test-only changes
|
||||
- ❌ Code formatting/linting
|
||||
- ❌ Development tooling that doesn't affect users
|
||||
|
||||
### How to Create a Changeset
|
||||
|
||||
1. **After making your changes**:
|
||||
|
||||
```bash
|
||||
npm run changeset
|
||||
```
|
||||
|
||||
2. **Choose the bump type**:
|
||||
|
||||
- **Major**: Breaking changes
|
||||
- **Minor**: New features
|
||||
- **Patch**: Bug fixes, docs, performance improvements
|
||||
|
||||
3. **Write a clear summary**:
|
||||
|
||||
```
|
||||
Add support for custom AI models in MCP configuration
|
||||
```
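The resulting file lands in `.changeset/` as a small Markdown file. A sketch of what it might contain (the package name is taken from the MCP configuration in these docs; the bump type is illustrative):

```md
---
"task-master-ai": minor
---

Add support for custom AI models in MCP configuration
```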
|
||||
|
||||
4. **Commit the changeset file** with your changes:
|
||||
```bash
|
||||
git add .changeset/*.md
|
||||
git commit -m "feat: add custom AI model support"
|
||||
```
|
||||
|
||||
### Changeset vs Git Commit Messages
|
||||
|
||||
- **Changeset summary**: User-facing, goes in CHANGELOG.md
|
||||
- **Git commit**: Developer-facing, explains the technical change
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
# Changeset summary (user-facing)
|
||||
"Add support for custom Ollama models"
|
||||
|
||||
# Git commit message (developer-facing)
|
||||
"feat(models): implement custom Ollama model validation
|
||||
|
||||
- Add model validation for custom Ollama endpoints
|
||||
- Update configuration schema to support custom models
|
||||
- Add tests for new validation logic"
|
||||
```
|
||||
|
||||
## 🔧 Development Setup
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Node.js 18+
|
||||
- npm or yarn
|
||||
|
||||
### Environment Setup
|
||||
|
||||
1. **Copy environment template**:
|
||||
|
||||
```bash
|
||||
cp .env.example .env
|
||||
```
|
||||
|
||||
2. **Add your API keys** (for testing AI features):
|
||||
```bash
|
||||
ANTHROPIC_API_KEY=your_key_here
|
||||
OPENAI_API_KEY=your_key_here
|
||||
# Add others as needed
|
||||
```
|
||||
|
||||
### Running Tests
|
||||
|
||||
```bash
|
||||
# Run all tests
|
||||
npm test
|
||||
|
||||
# Run tests in watch mode
|
||||
npm run test:watch
|
||||
|
||||
# Run with coverage
|
||||
npm run test:coverage
|
||||
|
||||
# Run E2E tests
|
||||
npm run test:e2e
|
||||
```
|
||||
|
||||
### Code Formatting
|
||||
|
||||
We use Prettier for consistent formatting:
|
||||
|
||||
```bash
|
||||
# Check formatting
|
||||
npm run format-check
|
||||
|
||||
# Fix formatting
|
||||
npm run format
|
||||
```
|
||||
|
||||
## 📝 PR Guidelines
|
||||
|
||||
### Before Submitting
|
||||
|
||||
- [ ] **Target the `next` branch**
|
||||
- [ ] **Test everything locally**
|
||||
- [ ] **Run the full test suite**
|
||||
- [ ] **Check code formatting**
|
||||
- [ ] **Create a changeset** (if needed)
|
||||
- [ ] **Re-read your changes** - ensure they're clean and well-thought-out
|
||||
|
||||
### PR Description Template
|
||||
|
||||
```markdown
|
||||
## Description
|
||||
|
||||
Brief description of what this PR does.
|
||||
|
||||
## Type of Change
|
||||
|
||||
- [ ] Bug fix
|
||||
- [ ] New feature
|
||||
- [ ] Breaking change
|
||||
- [ ] Documentation update
|
||||
|
||||
## Testing
|
||||
|
||||
- [ ] I have tested this locally
|
||||
- [ ] All existing tests pass
|
||||
- [ ] I have added tests for new functionality
|
||||
|
||||
## Changeset
|
||||
|
||||
- [ ] I have created a changeset (or this change doesn't need one)
|
||||
|
||||
## Additional Notes
|
||||
|
||||
Any additional context or notes for reviewers.
|
||||
```
|
||||
|
||||
### What We Look For
|
||||
|
||||
✅ **Good PRs**:
|
||||
|
||||
- Clear, focused changes
|
||||
- Comprehensive testing
|
||||
- Good commit messages
|
||||
- Proper changeset (when needed)
|
||||
- Self-reviewed code
|
||||
|
||||
❌ **Avoid**:
|
||||
|
||||
- Massive PRs that change everything
|
||||
- Untested code
|
||||
- Formatting issues
|
||||
- Missing changesets for user-facing changes
|
||||
- AI-generated code that wasn't reviewed
|
||||
|
||||
## 🏗️ Project Structure
|
||||
|
||||
```
|
||||
claude-task-master/
|
||||
├── bin/ # CLI executables
|
||||
├── mcp-server/ # MCP server implementation
|
||||
├── scripts/ # Core task management logic
|
||||
├── src/ # Shared utilities, providers, and well-refactored code (we are slowly moving everything here)
|
||||
├── tests/ # Test files
|
||||
├── docs/ # Documentation
|
||||
├── .cursor/ # Cursor IDE rules and configuration
|
||||
└── assets/ # Assets like rules and configuration for all IDEs
|
||||
```
|
||||
|
||||
### Key Areas for Contribution
|
||||
|
||||
- **CLI Commands**: `scripts/modules/commands.js`
|
||||
- **MCP Tools**: `mcp-server/src/tools/`
|
||||
- **Core Logic**: `scripts/modules/task-manager/`
|
||||
- **AI Providers**: `src/ai-providers/`
|
||||
- **Tests**: `tests/`
|
||||
|
||||
## 🐛 Reporting Issues
|
||||
|
||||
### Bug Reports
|
||||
|
||||
Include:
|
||||
|
||||
- Task Master version
|
||||
- Node.js version
|
||||
- Operating system
|
||||
- Steps to reproduce
|
||||
- Expected vs actual behavior
|
||||
- Error messages/logs
|
||||
|
||||
### Feature Requests
|
||||
|
||||
Include:
|
||||
|
||||
- Clear description of the feature
|
||||
- Use case/motivation
|
||||
- Proposed implementation (if you have ideas)
|
||||
- Willingness to contribute
|
||||
|
||||
## 💬 Getting Help
|
||||
|
||||
- **Discord**: [Join our community](https://discord.gg/taskmasterai)
|
||||
- **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues)
|
||||
- **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions)
|
||||
|
||||
## 📄 License
|
||||
|
||||
By contributing, you agree that your contributions will be licensed under the same license as the project (MIT with Commons Clause).
|
||||
|
||||
---
|
||||
|
||||
**Thank you for contributing to Task Master!** 🎉
|
||||
|
||||
Your contributions help make AI-driven development more accessible and efficient for everyone.
|
||||
Some files were not shown because too many files have changed in this diff.