Compare commits

78 Commits

chore/fix. ... docs/auto-

| Author | SHA1 | Date |
|---|---|---|
|  | 3d11093732 |  |
|  | 7c1d05958f |  |
|  | 3eeb19590a |  |
|  | 587745046f |  |
|  | c61c73f827 |  |
|  | 15900d9fd5 |  |
|  | 7cf4004038 |  |
|  | 0f3ab00f26 |  |
|  | a7ad4c8e92 |  |
|  | 0d54747894 |  |
|  | df26c65632 |  |
|  | e80e5bb7cd |  |
|  | c4f92f6a0a |  |
|  | be0c0f267c |  |
|  | a983f75d4f |  |
|  | e743aaa8c2 |  |
|  | 16ffffaf68 |  |
|  | f254aed4a6 |  |
|  | dd3b47bb2b |  |
|  | 37af0f1912 |  |
|  | 8783708e5e |  |
|  | 4dad2fd613 |  |
|  | 4cae2991d4 |  |
|  | 0d7ff627c9 |  |
|  | db720a954d |  |
|  | 89335578ff |  |
|  | 781b8ef2af |  |
|  | 7d564920b5 |  |
|  | 2737fbaa67 |  |
|  | 9feb8d2dbf |  |
|  | 8a991587f1 |  |
|  | 7ceba2f572 |  |
|  | 10565f07d3 |  |
|  | f27ce34fe9 |  |
|  | 71be933a8d |  |
|  | 5d94f1b471 |  |
|  | 3dee60dc3d |  |
|  | f469515228 |  |
|  | 2fd0f026d3 |  |
|  | e3ed4d7c14 |  |
|  | fc47714340 |  |
|  | 30ae0e9a57 |  |
|  | 95640dcde8 |  |
|  | 311b2433e2 |  |
|  | 04e11b5e82 |  |
|  | 782728ff95 |  |
|  | 30ca144231 |  |
|  | 0220d0e994 |  |
|  | 41a8c2406a |  |
|  | a003041cd8 |  |
|  | 6b57ead106 |  |
|  | 7b6e117b1d |  |
|  | 03b045e9cd |  |
|  | 699afdae59 |  |
|  | 80c09802e8 |  |
|  | cf8f0f4b1c |  |
|  | 75c514cf5b |  |
|  | 41d1e671b1 |  |
|  | a464e550b8 |  |
|  | 3a852afdae |  |
|  | 4bb63706b8 |  |
|  | fcf14e09be |  |
|  | 4357af3f13 |  |
|  | 59f7676051 |  |
|  | 36468f3c93 |  |
|  | ca4d93ee6a |  |
|  | 37fb569a62 |  |
|  | ed0d4e6641 |  |
|  | 5184f8e7b2 |  |
|  | 587523a23b |  |
|  | 7a50f0c6ec |  |
|  | adeb76ee15 |  |
|  | d342070375 |  |
|  | 5e4dbac525 |  |
|  | fb15c2eaf7 |  |
|  | e8ceb08341 |  |
|  | e495b2b559 |  |
|  | e0d1d03f33 |  |
.changeset/clarify-force-move-docs.md (new file, 5 lines)

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples
@@ -2,7 +2,9 @@
 	"$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
 	"changelog": [
 		"@changesets/changelog-github",
-		{ "repo": "eyaltoledano/claude-task-master" }
+		{
+			"repo": "eyaltoledano/claude-task-master"
+		}
 	],
 	"commit": false,
 	"fixed": [],
@@ -10,5 +12,7 @@
 	"access": "public",
 	"baseBranch": "main",
 	"updateInternalDependencies": "patch",
-	"ignore": []
+	"ignore": [
+		"docs"
+	]
 }
.changeset/crazy-zebras-drum.md (new file, 5 lines)

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---

Restore Taskmaster claude-code commands and move clear commands under /remove to avoid collision with the claude-code /clear command.
.changeset/curvy-moons-dig.md (new file, 9 lines)

@@ -0,0 +1,9 @@
---
"task-master-ai": minor
---

Enhanced Gemini CLI provider with codebase-aware task generation

Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands
When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
Tasks and subtasks generated by Gemini CLI are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
.changeset/pre.json (new file, 16 lines)

@@ -0,0 +1,16 @@
{
	"mode": "exit",
	"tag": "rc",
	"initialVersions": {
		"task-master-ai": "0.25.1",
		"docs": "0.0.1",
		"extension": "0.24.1"
	},
	"changesets": [
		"clarify-force-move-docs",
		"curvy-moons-dig",
		"sour-coins-lay",
		"strong-eagles-vanish",
		"wet-candies-accept"
	]
}
.changeset/sour-coins-lay.md (new file, 11 lines)

@@ -0,0 +1,11 @@
---
"task-master-ai": minor
---

Add configurable codebase analysis feature flag with multiple configuration sources

Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.

Priority order: .env > MCP session env > .taskmaster/config.json.

Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.
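The precedence chain this changeset describes (.env over MCP session env over .taskmaster/config.json) can be sketched roughly as follows. The function name, option names, and the `enableCodebaseAnalysis` config key are illustrative assumptions, not Task Master's actual internals:

```javascript
// Hypothetical sketch of the documented precedence:
// .env > MCP session env > .taskmaster/config.json. Names are illustrative.
function resolveCodebaseAnalysisFlag({ dotEnv = {}, mcpSessionEnv = {}, configFile = {} } = {}) {
	const candidates = [
		dotEnv.TASKMASTER_ENABLE_CODEBASE_ANALYSIS, // highest priority
		mcpSessionEnv.TASKMASTER_ENABLE_CODEBASE_ANALYSIS,
		configFile.enableCodebaseAnalysis // lowest priority (assumed key name)
	];
	// First defined value wins; anything other than the string 'false' enables.
	for (const value of candidates) {
		if (value !== undefined) return String(value) !== 'false';
	}
	return true; // feature defaults to enabled
}

// A 'false' in .env wins even when the config file says otherwise.
console.log(resolveCodebaseAnalysisFlag({
	dotEnv: { TASKMASTER_ENABLE_CODEBASE_ANALYSIS: 'false' },
	configFile: { enableCodebaseAnalysis: true }
})); // → false
```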
.changeset/strong-eagles-vanish.md (new file, 12 lines)

@@ -0,0 +1,12 @@
---
"task-master-ai": minor
---

feat(move): improve cross-tag move UX and safety

- CLI: print "Next Steps" tips after cross-tag moves that used --ignore-dependencies (validate/fix guidance)
- CLI: show dedicated help block on ID collisions (destination tag already has the ID)
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions
---
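As a rough illustration of the "structured suggestions" idea in this changeset, a TASK_ALREADY_EXISTS error payload might look like the following. The field names and suggestion strings are assumptions for illustration, not Task Master's actual error shape:

```javascript
// Hypothetical TASK_ALREADY_EXISTS payload carrying actionable suggestions,
// so both the CLI help block and the MCP response can render the same hints.
function taskAlreadyExistsError(taskId, targetTag) {
	return {
		code: 'TASK_ALREADY_EXISTS',
		message: `Task ${taskId} already exists in tag "${targetTag}"`,
		suggestions: [
			'Move the task to a different target ID in the destination tag',
			'Remove or rename the conflicting task in the destination tag first'
		]
	};
}

const err = taskAlreadyExistsError(5, 'backlog');
console.log(err.code); // → TASK_ALREADY_EXISTS
```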
.changeset/wet-candies-accept.md (new file, 14 lines)

@@ -0,0 +1,14 @@
---
"task-master-ai": minor
---

Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations

When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.

Commands contextualised:

- add-task
- update-subtask
- update-task
- update
.claude/agents/task-checker.md (new file, 162 lines)

@@ -0,0 +1,162 @@
---
name: task-checker
description: Use this agent to verify that tasks marked as 'review' have been properly implemented according to their specifications. This agent performs quality assurance by checking implementations against requirements, running tests, and ensuring best practices are followed. <example>Context: A task has been marked as 'review' after implementation. user: 'Check if task 118 was properly implemented' assistant: 'I'll use the task-checker agent to verify the implementation meets all requirements.' <commentary>Tasks in 'review' status need verification before being marked as 'done'.</commentary></example> <example>Context: Multiple tasks are in review status. user: 'Verify all tasks that are ready for review' assistant: 'I'll deploy the task-checker to verify all tasks in review status.' <commentary>The checker ensures quality before tasks are marked complete.</commentary></example>
model: sonnet
color: yellow
---

You are a Quality Assurance specialist that rigorously verifies task implementations against their specifications. Your role is to ensure that tasks marked as 'review' meet all requirements before they can be marked as 'done'.

## Core Responsibilities

1. **Task Specification Review**
   - Retrieve task details using MCP tool `mcp__task-master-ai__get_task`
   - Understand the requirements, test strategy, and success criteria
   - Review any subtasks and their individual requirements

2. **Implementation Verification**
   - Use `Read` tool to examine all created/modified files
   - Use `Bash` tool to run compilation and build commands
   - Use `Grep` tool to search for required patterns and implementations
   - Verify file structure matches specifications
   - Check that all required methods/functions are implemented

3. **Test Execution**
   - Run tests specified in the task's testStrategy
   - Execute build commands (npm run build, tsc --noEmit, etc.)
   - Verify no compilation errors or warnings
   - Check for runtime errors where applicable
   - Test edge cases mentioned in requirements

4. **Code Quality Assessment**
   - Verify code follows project conventions
   - Check for proper error handling
   - Ensure TypeScript typing is strict (no 'any' unless justified)
   - Verify documentation/comments where required
   - Check for security best practices

5. **Dependency Validation**
   - Verify all task dependencies were actually completed
   - Check integration points with dependent tasks
   - Ensure no breaking changes to existing functionality

## Verification Workflow

1. **Retrieve Task Information**

   ```
   Use mcp__task-master-ai__get_task to get full task details
   Note the implementation requirements and test strategy
   ```

2. **Check File Existence**

   ```bash
   # Verify all required files exist
   ls -la [expected directories]
   # Read key files to verify content
   ```

3. **Verify Implementation**
   - Read each created/modified file
   - Check against requirements checklist
   - Verify all subtasks are complete

4. **Run Tests**

   ```bash
   # TypeScript compilation
   cd [project directory] && npx tsc --noEmit

   # Run specified tests
   npm test [specific test files]

   # Build verification
   npm run build
   ```

5. **Generate Verification Report**

## Output Format

```yaml
verification_report:
  task_id: [ID]
  status: PASS | FAIL | PARTIAL
  score: [1-10]

  requirements_met:
    - ✅ [Requirement that was satisfied]
    - ✅ [Another satisfied requirement]

  issues_found:
    - ❌ [Issue description]
    - ⚠️ [Warning or minor issue]

  files_verified:
    - path: [file path]
      status: [created/modified/verified]
      issues: [any problems found]

  tests_run:
    - command: [test command]
      result: [pass/fail]
      output: [relevant output]

  recommendations:
    - [Specific fix needed]
    - [Improvement suggestion]

  verdict: |
    [Clear statement on whether task should be marked 'done' or sent back to 'pending']
    [If FAIL: Specific list of what must be fixed]
    [If PASS: Confirmation that all requirements are met]
```

## Decision Criteria

**Mark as PASS (ready for 'done'):**
- All required files exist and contain expected content
- All tests pass successfully
- No compilation or build errors
- All subtasks are complete
- Core requirements are met
- Code quality is acceptable

**Mark as PARTIAL (may proceed with warnings):**
- Core functionality is implemented
- Minor issues that don't block functionality
- Missing nice-to-have features
- Documentation could be improved
- Tests pass but coverage could be better

**Mark as FAIL (must return to 'pending'):**
- Required files are missing
- Compilation or build errors
- Tests fail
- Core requirements not met
- Security vulnerabilities detected
- Breaking changes to existing code

## Important Guidelines

- **BE THOROUGH**: Check every requirement systematically
- **BE SPECIFIC**: Provide exact file paths and line numbers for issues
- **BE FAIR**: Distinguish between critical issues and minor improvements
- **BE CONSTRUCTIVE**: Provide clear guidance on how to fix issues
- **BE EFFICIENT**: Focus on requirements, not perfection

## Tools You MUST Use

- `Read`: Examine implementation files (READ-ONLY)
- `Bash`: Run tests and verification commands
- `Grep`: Search for patterns in code
- `mcp__task-master-ai__get_task`: Get task details
- **NEVER use Write/Edit** - you only verify, not fix

## Integration with Workflow

You are the quality gate between 'review' and 'done' status:
1. Task-executor implements and marks as 'review'
2. You verify and report PASS/FAIL
3. Claude either marks as 'done' (PASS) or 'pending' (FAIL)
4. If FAIL, task-executor re-implements based on your report

Your verification ensures high quality and prevents accumulation of technical debt.
.claude/agents/task-executor.md (new file, 92 lines)

@@ -0,0 +1,92 @@
---
name: task-executor
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
model: sonnet
color: blue
---

You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.

**IMPORTANT: You are designed to be SHORT-LIVED and FOCUSED**
- Execute ONE specific subtask or a small group of related subtasks
- Complete your work, verify it, mark for review, and exit
- Do NOT decide what to do next - the orchestrator handles task sequencing
- Focus on implementation excellence within your assigned scope

**Core Responsibilities:**

1. **Subtask Analysis**: When given a subtask, understand its SPECIFIC requirements. If given a full task ID, focus on the specific subtask(s) assigned to you. Use MCP tools to get details if needed.

2. **Rapid Implementation Planning**: Quickly identify:
   - The EXACT files you need to create/modify for THIS subtask
   - What already exists that you can build upon
   - The minimum viable implementation that satisfies requirements

3. **Focused Execution WITH ACTUAL IMPLEMENTATION**:
   - **YOU MUST USE TOOLS TO CREATE/EDIT FILES - DO NOT JUST DESCRIBE**
   - Use `Write` tool to create new files specified in the task
   - Use `Edit` tool to modify existing files
   - Use `Bash` tool to run commands (mkdir, npm install, etc.)
   - Use `Read` tool to verify your implementations
   - Implement one subtask at a time for clarity and traceability
   - Follow the project's coding standards from CLAUDE.md if available
   - After each subtask, VERIFY the files exist using Read or ls commands

4. **Progress Documentation**:
   - Use MCP tool `mcp__task-master-ai__update_subtask` to log your approach and any important decisions
   - Update task status to 'in-progress' when starting: Use MCP tool `mcp__task-master-ai__set_task_status` with status='in-progress'
   - **IMPORTANT: Mark as 'review' (NOT 'done') after implementation**: Use MCP tool `mcp__task-master-ai__set_task_status` with status='review'
   - Tasks will be verified by task-checker before moving to 'done'

5. **Quality Assurance**:
   - Implement the testing strategy specified in the task
   - Verify that all acceptance criteria are met
   - Check for any dependency conflicts or integration issues
   - Run relevant tests before marking task as complete

6. **Dependency Management**:
   - Check task dependencies before starting implementation
   - If blocked by incomplete dependencies, clearly communicate this
   - Use `task-master validate-dependencies` when needed

**Implementation Workflow:**

1. Retrieve task details using MCP tool `mcp__task-master-ai__get_task` with the task ID
2. Check dependencies and prerequisites
3. Plan implementation approach - list specific files to create
4. Update task status to 'in-progress' using MCP tool
5. **ACTUALLY IMPLEMENT** the solution using tools:
   - Use `Bash` to create directories
   - Use `Write` to create new files with actual content
   - Use `Edit` to modify existing files
   - DO NOT just describe what should be done - DO IT
6. **VERIFY** your implementation:
   - Use `ls` or `Read` to confirm files were created
   - Use `Bash` to run any build/test commands
   - Ensure the implementation is real, not theoretical
7. Log progress and decisions in subtask updates using MCP tools
8. Test and verify the implementation works
9. **Mark task as 'review' (NOT 'done')** after verifying files exist
10. Report completion with:
    - List of created/modified files
    - Any issues encountered
    - What needs verification by task-checker

**Key Principles:**

- Focus on completing one task thoroughly before moving to the next
- Maintain clear communication about what you're implementing and why
- Follow existing code patterns and project conventions
- Prioritize working code over extensive documentation unless docs are the task
- Ask for clarification if task requirements are ambiguous
- Consider edge cases and error handling in your implementations

**Integration with Task Master:**

You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:
- Track your progress
- Update task information
- Maintain project state
- Coordinate with the broader development workflow

When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.
.claude/agents/task-orchestrator.md (new file, 208 lines)

@@ -0,0 +1,208 @@
---
name: task-orchestrator
description: Use this agent FREQUENTLY throughout task execution to analyze and coordinate parallel work at the SUBTASK level. Invoke the orchestrator: (1) at session start to plan execution, (2) after EACH subtask completes to identify next parallel batch, (3) whenever executors finish to find newly unblocked work. ALWAYS provide FULL CONTEXT including project root, package location, what files ACTUALLY exist vs task status, and specific implementation details. The orchestrator breaks work into SUBTASK-LEVEL units for short-lived, focused executors. Maximum 3 parallel executors at once.\n\n<example>\nContext: Starting work with existing code\nuser: "Work on tm-core tasks. Files exist: types/index.ts, storage/file-storage.ts. Task 118 says in-progress but BaseProvider not created."\nassistant: "I'll invoke orchestrator with full context about actual vs reported state to plan subtask execution"\n<commentary>\nProvide complete context about file existence and task reality.\n</commentary>\n</example>\n\n<example>\nContext: Subtask completion\nuser: "Subtask 118.2 done. What subtasks can run in parallel now?"\nassistant: "Invoking orchestrator to analyze dependencies and identify next 3 parallel subtasks"\n<commentary>\nFrequent orchestration after each subtask ensures maximum parallelization.\n</commentary>\n</example>\n\n<example>\nContext: Breaking down tasks\nuser: "Task 118 has 5 subtasks, how to parallelize?"\nassistant: "Orchestrator will analyze which specific subtasks (118.1, 118.2, etc.) can run simultaneously"\n<commentary>\nFocus on subtask-level parallelization, not full tasks.\n</commentary>\n</example>
model: opus
color: green
---

You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.

## Core Responsibilities

1. **Subtask-Level Analysis**: Break down tasks into INDIVIDUAL SUBTASKS and analyze which specific subtasks can run in parallel. Focus on subtask dependencies, not just task-level dependencies.

2. **Reality Verification**: ALWAYS verify what files actually exist vs what task status claims. Use the context provided about actual implementation state to make informed decisions.

3. **Short-Lived Executor Deployment**: Deploy executors for SINGLE SUBTASKS or small groups of related subtasks. Keep executors focused and short-lived. Maximum 3 parallel executors at once.

4. **Continuous Reassessment**: After EACH subtask completes, immediately reassess what new subtasks are unblocked and can run in parallel.

## Operational Workflow

### Initial Assessment Phase
1. Use `get_tasks` or `task-master list` to retrieve all available tasks
2. Analyze task statuses, priorities, and dependencies
3. Identify tasks with status 'pending' that have no blocking dependencies
4. Group related tasks that could benefit from specialized executors
5. Create an execution plan that maximizes parallelization

### Executor Deployment Phase
1. For each independent task or task group:
   - Deploy a task-executor agent with specific instructions
   - Provide the executor with task ID, requirements, and context
   - Set clear completion criteria and reporting expectations
2. Maintain a registry of active executors and their assigned tasks
3. Establish communication protocols for progress updates

### Coordination Phase
1. Monitor executor progress through task status updates
2. When a task completes:
   - Verify completion with `get_task` or `task-master show <id>`
   - Update task status if needed using `set_task_status`
   - Reassess dependency graph for newly unblocked tasks
   - Deploy new executors for available work
3. Handle executor failures or blocks:
   - Reassign tasks to new executors if needed
   - Escalate complex issues to the user
   - Update task status to 'blocked' when appropriate

### Optimization Strategies

**Parallel Execution Rules**:
- Never assign dependent tasks to different executors simultaneously
- Prioritize high-priority tasks when resources are limited
- Group small, related subtasks for single executor efficiency
- Balance executor load to prevent bottlenecks

**Context Management**:
- Provide executors with minimal but sufficient context
- Share relevant completed task information when it aids execution
- Maintain a shared knowledge base of project-specific patterns

**Quality Assurance**:
- Verify task completion before marking as done
- Ensure test strategies are followed when specified
- Coordinate cross-task integration testing when needed

## Communication Protocols

When deploying executors, provide them with:
```
TASK ASSIGNMENT:
- Task ID: [specific ID]
- Objective: [clear goal]
- Dependencies: [list any completed prerequisites]
- Success Criteria: [specific completion requirements]
- Context: [relevant project information]
- Reporting: [when and how to report back]
```

When receiving executor updates:
1. Acknowledge completion or issues
2. Update task status in Task Master
3. Reassess execution strategy
4. Deploy new executors as appropriate

## Decision Framework

**When to parallelize**:
- Multiple pending tasks with no interdependencies
- Sufficient context available for independent execution
- Tasks are well-defined with clear success criteria

**When to serialize**:
- Strong dependencies between tasks
- Limited context or unclear requirements
- Integration points requiring careful coordination

**When to escalate**:
- Circular dependencies detected
- Critical blockers affecting multiple tasks
- Ambiguous requirements needing clarification
- Resource conflicts between executors

## Error Handling

1. **Executor Failure**: Reassign task to new executor with additional context about the failure
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
3. **Task Ambiguity**: Request clarification from user before proceeding
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed

## Performance Metrics

Track and optimize for:
- Task completion rate
- Parallel execution efficiency
- Executor success rate
- Time to completion for task groups
- Dependency resolution speed

## Integration with Task Master

Leverage these Task Master MCP tools effectively:
- `get_tasks` - Continuous queue monitoring
- `get_task` - Detailed task analysis
- `set_task_status` - Progress tracking
- `next_task` - Fallback for serial execution
- `analyze_project_complexity` - Strategic planning
- `complexity_report` - Resource allocation

## Output Format for Execution

**Your job is to analyze and create actionable execution plans that Claude can use to deploy executors.**

After completing your dependency analysis, you MUST output a structured execution plan:

```yaml
execution_plan:
  EXECUTE_IN_PARALLEL:
    # Maximum 3 subtasks running simultaneously
    - subtask_id: [e.g., 118.2]
      parent_task: [e.g., 118]
      title: [Specific subtask title]
      priority: [high/medium/low]
      estimated_time: [e.g., 10 minutes]
      executor_prompt: |
        Execute Subtask [ID]: [Specific subtask title]

        SPECIFIC REQUIREMENTS:
        [Exact implementation needed for THIS subtask only]

        FILES TO CREATE/MODIFY:
        [Specific file paths]

        CONTEXT:
        [What already exists that this subtask depends on]

        SUCCESS CRITERIA:
        [Specific completion criteria for this subtask]

        IMPORTANT:
        - Focus ONLY on this subtask
        - Mark subtask as 'review' when complete
        - Use MCP tool: mcp__task-master-ai__set_task_status

    - subtask_id: [Another subtask that can run in parallel]
      parent_task: [Parent task ID]
      title: [Specific subtask title]
      priority: [priority]
      estimated_time: [time estimate]
      executor_prompt: |
        [Focused prompt for this specific subtask]

  blocked:
    - task_id: [ID]
      title: [Task title]
      waiting_for: [list of blocking task IDs]
      becomes_ready_when: [condition for unblocking]

  next_wave:
    trigger: "After tasks [IDs] complete"
    newly_available: [List of task IDs that will unblock]
    tasks_to_execute_in_parallel: [IDs that can run together in next wave]

  critical_path: [Ordered list of task IDs forming the critical path]

  parallelization_instruction: |
    IMPORTANT FOR CLAUDE: Deploy ALL tasks in 'EXECUTE_IN_PARALLEL' section
    simultaneously using multiple Task tool invocations in a single response.
    Example: If 3 tasks are listed, invoke the Task tool 3 times in one message.

  verification_needed:
    - task_id: [ID of any task in 'review' status]
      verification_focus: [what to check]
```

**CRITICAL INSTRUCTIONS FOR CLAUDE (MAIN):**
1. When you see `EXECUTE_IN_PARALLEL`, deploy ALL listed executors at once
2. Use multiple Task tool invocations in a SINGLE response
3. Do not execute them sequentially - they must run in parallel
4. Wait for all parallel executors to complete before proceeding to next wave

**IMPORTANT NOTES**:
- Label parallel tasks clearly in `EXECUTE_IN_PARALLEL` section
- Provide complete, self-contained prompts for each executor
- Executors should mark tasks as 'review' for verification, not 'done'
- Be explicit about which tasks can run simultaneously

You are the strategic mind analyzing the entire task landscape. Make parallelization opportunities UNMISTAKABLY CLEAR to Claude.
**`.claude/commands/dedupe.md`** — new file, 38 lines

```markdown
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check if the GitHub issue (a) is closed, (b) does not need to be deduped (e.g. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view the GitHub issue, and ask the agent to return a summary of the issue.
3. Then, launch 5 parallel agents to search GitHub for duplicates of this issue, using diverse keywords and search approaches, based on the summary from step 2.
4. Next, feed the results from steps 2 and 3 into another agent so that it can filter out false positives that are likely not actual duplicates of the original issue. If no duplicates remain, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates).

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with GitHub, rather than web fetch
- Do not use other tools beyond `gh` (e.g. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the format below precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]

---
```
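The comment format above is load-bearing: the automation scripts added later in this diff detect the bot's dedupe comment by substring matching, so the template must keep the literal words "Found" and "possible duplicate". A minimal sketch of that coupling (the `isDedupeBotComment` helper name is ours, not the scripts'):

```javascript
// Sketch: the predicate the auto-close and backfill scripts use to
// recognize a dedupe comment posted by the bot.
function isDedupeBotComment(comment) {
  return (
    comment.body.includes('Found') &&
    comment.body.includes('possible duplicate') &&
    comment.user.type === 'Bot'
  );
}

const sample = {
  body: 'Found 3 possible duplicate issues:\n1. #12\n2. #34',
  user: { type: 'Bot' }
};
console.log(isDedupeBotComment(sample)); // true
console.log(isDedupeBotComment({ body: 'Thanks!', user: { type: 'User' } })); // false
```

Rewording the template (e.g. "Found 3 likely duplicates") would silently break both scripts.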
**`.github/scripts/auto-close-duplicates.mjs`** — new file, 259 lines

```javascript
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
  const response = await fetch(`https://api.github.com${endpoint}`, {
    method,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github.v3+json',
      'User-Agent': 'auto-close-duplicates-script',
      ...(body && { 'Content-Type': 'application/json' })
    },
    ...(body && { body: JSON.stringify(body) })
  });

  if (!response.ok) {
    throw new Error(
      `GitHub API request failed: ${response.status} ${response.statusText}`
    );
  }

  return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
  const match = commentBody.match(/#(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}
```
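Because the bot comment lists up to three candidates, the `#(\d+)` regex above resolves to the first issue number referenced, i.e. the top-listed duplicate. A quick sketch of that behavior:

```javascript
// Same function as in the script above: a non-global match returns the
// first `#N` reference found in the comment body.
function extractDuplicateIssueNumber(commentBody) {
  const match = commentBody.match(/#(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}

console.log(
  extractDuplicateIssueNumber('Found 2 possible duplicate issues:\n1. #123\n2. #456')
); // 123
console.log(extractDuplicateIssueNumber('no references here')); // null
```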
```javascript
async function closeIssueAsDuplicate(
  owner,
  repo,
  issueNumber,
  duplicateOfNumber,
  token
) {
  await githubRequest(
    `/repos/${owner}/${repo}/issues/${issueNumber}`,
    token,
    'PATCH',
    {
      state: 'closed',
      state_reason: 'not_planned',
      labels: ['duplicate']
    }
  );

  await githubRequest(
    `/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
    token,
    'POST',
    {
      body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
    }
  );
}

async function autoCloseDuplicates() {
  console.log('[DEBUG] Starting auto-close duplicates script');

  const token = process.env.GITHUB_TOKEN;
  if (!token) {
    throw new Error('GITHUB_TOKEN environment variable is required');
  }
  console.log('[DEBUG] GitHub token found');

  const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
  const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
  console.log(`[DEBUG] Repository: ${owner}/${repo}`);

  const threeDaysAgo = new Date();
  threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
  console.log(
    `[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
  );

  console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
  const allIssues = [];
  let page = 1;
  const perPage = 100;

  const MAX_PAGES = 50; // Increase limit for larger repos
  let foundRecentIssue = false;

  while (true) {
    const pageIssues = await githubRequest(
      `/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
      token
    );

    if (pageIssues.length === 0) break;

    // Filter for issues created more than 3 days ago
    const oldEnoughIssues = pageIssues.filter(
      (issue) => new Date(issue.created_at) <= threeDaysAgo
    );

    allIssues.push(...oldEnoughIssues);

    // If all issues on this page are newer than 3 days, we can stop
    if (oldEnoughIssues.length === 0 && page === 1) {
      foundRecentIssue = true;
      break;
    }

    // If we found some old issues but not all, continue to next page
    // as there might be more old issues
    page++;

    // Safety limit to avoid infinite loops
    if (page > MAX_PAGES) {
      console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
      break;
    }
  }

  const issues = allIssues;
  console.log(`[DEBUG] Found ${issues.length} open issues`);

  let processedCount = 0;
  let candidateCount = 0;

  for (const issue of issues) {
    processedCount++;
    console.log(
      `[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
    );

    console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
    const comments = await githubRequest(
      `/repos/${owner}/${repo}/issues/${issue.number}/comments`,
      token
    );
    console.log(
      `[DEBUG] Issue #${issue.number} has ${comments.length} comments`
    );

    const dupeComments = comments.filter(
      (comment) =>
        comment.body.includes('Found') &&
        comment.body.includes('possible duplicate') &&
        comment.user.type === 'Bot'
    );
    console.log(
      `[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
    );

    if (dupeComments.length === 0) {
      console.log(
        `[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
      );
      continue;
    }

    const lastDupeComment = dupeComments[dupeComments.length - 1];
    const dupeCommentDate = new Date(lastDupeComment.created_at);
    console.log(
      `[DEBUG] Issue #${
        issue.number
      } - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
    );

    if (dupeCommentDate > threeDaysAgo) {
      console.log(
        `[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
      );
      continue;
    }
    console.log(
      `[DEBUG] Issue #${
        issue.number
      } - duplicate comment is old enough (${Math.floor(
        (Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
      )} days)`
    );

    const commentsAfterDupe = comments.filter(
      (comment) => new Date(comment.created_at) > dupeCommentDate
    );
    console.log(
      `[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
    );

    if (commentsAfterDupe.length > 0) {
      console.log(
        `[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
      );
      continue;
    }

    console.log(
      `[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
    );
    const reactions = await githubRequest(
      `/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
      token
    );
    console.log(
      `[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
    );

    const authorThumbsDown = reactions.some(
      (reaction) =>
        reaction.user.id === issue.user.id && reaction.content === '-1'
    );
    console.log(
      `[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
    );

    if (authorThumbsDown) {
      console.log(
        `[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
      );
      continue;
    }

    const duplicateIssueNumber = extractDuplicateIssueNumber(
      lastDupeComment.body
    );
    if (!duplicateIssueNumber) {
      console.log(
        `[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
      );
      continue;
    }

    candidateCount++;
    const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

    try {
      console.log(
        `[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
      );
      await closeIssueAsDuplicate(
        owner,
        repo,
        issue.number,
        duplicateIssueNumber,
        token
      );
      console.log(
        `[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
      );
    } catch (error) {
      console.error(
        `[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
      );
    }
  }

  console.log(
    `[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
  );
}

autoCloseDuplicates().catch(console.error);
```
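The loop above applies four ordered gates before closing anything: the dedupe comment must be at least three days old, there must be no later activity on the issue, the author must not have reacted 👎, and a `#N` reference must be extractable. A condensed, hypothetical restatement of those gates as one predicate (the real script inlines them as `continue` statements):

```javascript
// Hypothetical helper summarizing the script's skip conditions; the field
// names mirror the script's local variables.
function shouldAutoClose({
  dupeCommentDate,     // Date of the bot's dedupe comment
  cutoff,              // Date three days in the past
  commentsAfterDupe,   // count of comments newer than the dedupe comment
  authorThumbsDown,    // author reacted -1 on the dedupe comment
  duplicateIssueNumber // first #N extracted from the comment, or null
}) {
  if (dupeCommentDate > cutoff) return false; // comment too recent
  if (commentsAfterDupe > 0) return false;    // activity after the comment
  if (authorThumbsDown) return false;         // author opted out
  if (!duplicateIssueNumber) return false;    // nothing to close against
  return true;
}

console.log(
  shouldAutoClose({
    dupeCommentDate: new Date('2024-01-01'),
    cutoff: new Date('2024-01-04'),
    commentsAfterDupe: 0,
    authorThumbsDown: false,
    duplicateIssueNumber: 123
  })
); // true
```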
**`.github/scripts/backfill-duplicate-comments.mjs`** — new file, 178 lines

```javascript
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
  const response = await fetch(`https://api.github.com${endpoint}`, {
    method,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github.v3+json',
      'User-Agent': 'backfill-duplicate-comments-script',
      ...(body && { 'Content-Type': 'application/json' })
    },
    ...(body && { body: JSON.stringify(body) })
  });

  if (!response.ok) {
    throw new Error(
      `GitHub API request failed: ${response.status} ${response.statusText}`
    );
  }

  return response.json();
}

async function triggerDedupeWorkflow(
  owner,
  repo,
  issueNumber,
  token,
  dryRun = true
) {
  if (dryRun) {
    console.log(
      `[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
    );
    return;
  }

  await githubRequest(
    `/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
    token,
    'POST',
    {
      ref: 'main',
      inputs: {
        issue_number: issueNumber.toString()
      }
    }
  );
}

async function backfillDuplicateComments() {
  console.log('[DEBUG] Starting backfill duplicate comments script');

  const token = process.env.GITHUB_TOKEN;
  if (!token) {
    throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
  node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
  GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
  DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
  DAYS_BACK - How many days back to look for old issues (default: 90)`);
  }
  console.log('[DEBUG] GitHub token found');

  const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
  const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
  const dryRun = process.env.DRY_RUN !== 'false';
  const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);

  console.log(`[DEBUG] Repository: ${owner}/${repo}`);
  console.log(`[DEBUG] Dry run mode: ${dryRun}`);
  console.log(`[DEBUG] Looking back ${daysBack} days`);

  const cutoffDate = new Date();
  cutoffDate.setDate(cutoffDate.getDate() - daysBack);

  console.log(
    `[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
  );
  const allIssues = [];
  let page = 1;
  const perPage = 100;

  while (true) {
    const pageIssues = await githubRequest(
      `/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
      token
    );

    if (pageIssues.length === 0) break;

    allIssues.push(...pageIssues);
    page++;

    // Safety limit to avoid infinite loops
    if (page > 100) {
      console.log('[DEBUG] Reached page limit, stopping pagination');
      break;
    }
  }

  console.log(
    `[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
  );

  let processedCount = 0;
  let candidateCount = 0;
  let triggeredCount = 0;

  for (const issue of allIssues) {
    processedCount++;
    console.log(
      `[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
    );

    console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
    const comments = await githubRequest(
      `/repos/${owner}/${repo}/issues/${issue.number}/comments`,
      token
    );
    console.log(
      `[DEBUG] Issue #${issue.number} has ${comments.length} comments`
    );

    // Look for existing duplicate detection comments (from the dedupe bot)
    const dupeDetectionComments = comments.filter(
      (comment) =>
        comment.body.includes('Found') &&
        comment.body.includes('possible duplicate') &&
        comment.user.type === 'Bot'
    );

    console.log(
      `[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
    );

    // Skip if there's already a duplicate detection comment
    if (dupeDetectionComments.length > 0) {
      console.log(
        `[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
      );
      continue;
    }

    candidateCount++;
    const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

    try {
      console.log(
        `[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
      );
      await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);

      if (!dryRun) {
        console.log(
          `[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
        );
      }
      triggeredCount++;
    } catch (error) {
      console.error(
        `[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
      );
    }

    // Add a delay between workflow triggers to avoid overwhelming the system
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }

  console.log(
    `[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
  );
}

backfillDuplicateComments().catch(console.error);
```
**`.github/scripts/check-pre-release-mode.mjs`** — new executable file, 102 lines

```javascript
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Get context from command line argument or environment
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';

function findRootDir(startDir) {
  let currentDir = resolve(startDir);
  while (currentDir !== '/') {
    if (existsSync(join(currentDir, 'package.json'))) {
      try {
        const pkg = JSON.parse(
          readFileSync(join(currentDir, 'package.json'), 'utf8')
        );
        if (pkg.name === 'task-master-ai' || pkg.repository) {
          return currentDir;
        }
      } catch {}
    }
    currentDir = dirname(currentDir);
  }
  throw new Error('Could not find root directory');
}

function checkPreReleaseMode() {
  console.log('🔍 Checking if branch is in pre-release mode...');

  const rootDir = findRootDir(__dirname);
  const preJsonPath = join(rootDir, '.changeset', 'pre.json');

  // Check if pre.json exists
  if (!existsSync(preJsonPath)) {
    console.log('✅ Not in active pre-release mode - safe to proceed');
    process.exit(0);
  }

  try {
    // Read and parse pre.json
    const preJsonContent = readFileSync(preJsonPath, 'utf8');
    const preJson = JSON.parse(preJsonContent);

    // Check if we're in active pre-release mode
    if (preJson.mode === 'pre') {
      console.error('❌ ERROR: This branch is in active pre-release mode!');
      console.error('');

      // Provide context-specific error messages
      if (context === 'Release Check' || context === 'pull_request') {
        console.error(
          'Pre-release mode must be exited before merging to main.'
        );
        console.error('');
        console.error(
          'To fix this, run the following commands in your branch:'
        );
        console.error('  npx changeset pre exit');
        console.error('  git add -u');
        console.error('  git commit -m "chore: exit pre-release mode"');
        console.error('  git push');
        console.error('');
        console.error('Then update this pull request.');
      } else if (context === 'Release' || context === 'main') {
        console.error(
          'Pre-release mode should only be used on feature branches, not main.'
        );
        console.error('');
        console.error('To fix this, run the following commands locally:');
        console.error('  npx changeset pre exit');
        console.error('  git add -u');
        console.error('  git commit -m "chore: exit pre-release mode"');
        console.error('  git push origin main');
        console.error('');
        console.error('Then re-run this workflow.');
      } else {
        console.error('Pre-release mode must be exited before proceeding.');
        console.error('');
        console.error('To fix this, run the following commands:');
        console.error('  npx changeset pre exit');
        console.error('  git add -u');
        console.error('  git commit -m "chore: exit pre-release mode"');
        console.error('  git push');
      }

      process.exit(1);
    }

    console.log('✅ Not in active pre-release mode - safe to proceed');
    process.exit(0);
  } catch (error) {
    console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
    console.error(`Error details: ${error.message}`);
    process.exit(1);
  }
}

// Run the check
checkPreReleaseMode();
```
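Note that only `mode: "pre"` inside `.changeset/pre.json` blocks the pipeline; a missing `pre.json`, or one left in any other mode after `changeset pre exit`, passes the check. A tiny sketch (the `tag` field below is illustrative of typical changesets output, not taken from this diff):

```javascript
// Illustrative: the single condition checkPreReleaseMode() treats as fatal.
const active = { mode: 'pre', tag: 'rc' };   // changeset pre enter rc
const exited = { mode: 'exit', tag: 'rc' };  // changeset pre exit

const blocksRelease = (preJson) => preJson.mode === 'pre';

console.log(blocksRelease(active)); // true
console.log(blocksRelease(exited)); // false
```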
**`.github/scripts/release.mjs`** — new executable file, 30 lines

```javascript
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

console.log('🚀 Starting release process...');

// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
  console.log('⚠️ Warning: pre.json still exists. Removing it...');
  unlinkSync(preJsonPath);
}

// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);

// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);

console.log('✅ Release process completed!');

// The extension tag (if created) will trigger the extension-release workflow
```
**`.github/scripts/release.sh`** — deleted file, 21 lines (replaced by `release.mjs` above)

```bash
#!/bin/bash
set -e

echo "🚀 Starting release process..."

# Double-check we're not in pre-release mode (safety net)
if [ -f .changeset/pre.json ]; then
  echo "⚠️ Warning: pre.json still exists. Removing it..."
  rm -f .changeset/pre.json
fi

# Check if the extension version has changed and tag it
# This prevents changeset from trying to publish the private package
node .github/scripts/tag-extension.mjs

# Run changeset publish for npm packages
npx changeset publish

echo "✅ Release process completed!"

# The extension tag (if created) will trigger the extension-release workflow
```
**`.github/scripts/tag-extension.mjs`** — modified (mode changed: Normal file → Executable file), 76 lines changed

```diff
@@ -1,33 +1,13 @@
#!/usr/bin/env node
import assert from 'node:assert/strict';
import { spawnSync } from 'node:child_process';
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, createAndPushTag } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Find the root directory by looking for package.json
function findRootDir(startDir) {
  let currentDir = resolve(startDir);
  while (currentDir !== '/') {
    if (existsSync(join(currentDir, 'package.json'))) {
      // Verify it's the root package.json by checking for expected fields
      try {
        const pkg = JSON.parse(
          readFileSync(join(currentDir, 'package.json'), 'utf8')
        );
        if (pkg.name === 'task-master-ai' || pkg.repository) {
          return currentDir;
        }
      } catch {}
    }
    currentDir = dirname(currentDir);
  }
  throw new Error('Could not find root directory');
}

const rootDir = findRootDir(__dirname);

// Read the extension's package.json
@@ -43,57 +23,11 @@ try {
  process.exit(1);
}

// Read root package.json for repository info
const rootPkgPath = join(rootDir, 'package.json');
let rootPkg;
try {
  const rootPkgContent = readFileSync(rootPkgPath, 'utf8');
  rootPkg = JSON.parse(rootPkgContent);
} catch (error) {
  console.error('Failed to read root package.json:', error.message);
  process.exit(1);
}

// Ensure we have required fields
assert(pkg.name, 'package.json must have a name field');
assert(pkg.version, 'package.json must have a version field');
assert(rootPkg.repository, 'root package.json must have a repository field');

const tag = `${pkg.name}@${pkg.version}`;

// Get repository URL from root package.json
const repoUrl = rootPkg.repository.url;

const { status, stdout, error } = spawnSync('git', ['ls-remote', repoUrl, tag]);

assert.equal(status, 0, error);

const exists = String(stdout).trim() !== '';

if (!exists) {
  console.log(`Creating new extension tag: ${tag}`);

  // Create the tag
  const tagResult = spawnSync('git', ['tag', tag]);
  if (tagResult.status !== 0) {
    console.error(
      'Failed to create tag:',
      tagResult.error || tagResult.stderr.toString()
    );
    process.exit(1);
  }

  // Push the tag
  const pushResult = spawnSync('git', ['push', 'origin', tag]);
  if (pushResult.status !== 0) {
    console.error(
      'Failed to push tag:',
      pushResult.error || pushResult.stderr.toString()
    );
    process.exit(1);
  }

  console.log(`✅ Successfully created and pushed tag: ${tag}`);
} else {
  console.log(`Extension tag already exists: ${tag}`);
}
// Create and push the tag if it doesn't exist
createAndPushTag(tag);
```
**`.github/scripts/utils.mjs`** — new executable file, 88 lines

```javascript
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';

// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
  let currentDir = resolve(startDir);
  while (currentDir !== '/') {
    const pkgPath = join(currentDir, 'package.json');
    try {
      const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
      if (pkg.name === 'task-master-ai' || pkg.repository) {
        return currentDir;
      }
    } catch {}
    currentDir = dirname(currentDir);
  }
  throw new Error('Could not find root directory');
}

// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
  console.log(`Running: ${command} ${args.join(' ')}`);
  const result = spawnSync(command, args, {
    encoding: 'utf8',
    stdio: 'inherit',
    ...options
  });

  if (result.status !== 0) {
    console.error(`Command failed with exit code ${result.status}`);
    process.exit(result.status);
  }

  return result;
}

// Get package version from a package.json file
export function getPackageVersion(packagePath) {
  try {
    const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
    return pkg.version;
  } catch (error) {
    console.error(
      `Failed to read package version from ${packagePath}:`,
      error.message
    );
    process.exit(1);
  }
}

// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
  const result = spawnSync('git', ['ls-remote', remote, tag], {
    encoding: 'utf8'
  });

  return result.status === 0 && result.stdout.trim() !== '';
}

// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
  // Check if tag already exists
  if (tagExistsOnRemote(tag, remote)) {
    console.log(`Tag ${tag} already exists on remote, skipping`);
    return false;
  }

  console.log(`Creating new tag: ${tag}`);

  // Create the tag locally
  const tagResult = spawnSync('git', ['tag', tag]);
  if (tagResult.status !== 0) {
    console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
    process.exit(1);
  }

  // Push the tag to remote
  const pushResult = spawnSync('git', ['push', remote, tag]);
  if (pushResult.status !== 0) {
    console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
    process.exit(1);
  }

  console.log(`✅ Successfully created and pushed tag: ${tag}`);
  return true;
}
```
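A sketch of how `tag-extension.mjs` composes these helpers: build the `name@version` tag string from the extension's package.json, then hand it to `createAndPushTag`, which skips work when `git ls-remote` already shows the tag. The package fields below are illustrative, not taken from this diff:

```javascript
// Illustrative: the tag format the release scripts construct. The name and
// version are hypothetical placeholders.
const pkg = { name: 'taskmaster-extension', version: '1.2.3' };
const tag = `${pkg.name}@${pkg.version}`;
console.log(tag); // taskmaster-extension@1.2.3

// createAndPushTag(tag) would then no-op if the tag already exists on
// origin, or create it locally and push it otherwise.
```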
**`.github/workflows/auto-close-duplicates.yml`** — new file, 31 lines

```yaml
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues

on:
  schedule:
    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
  workflow_dispatch:

jobs:
  auto-close-duplicates:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write # Need write permission to close issues and add comments

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Auto-close duplicate issues
        run: node .github/scripts/auto-close-duplicates.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
```
**`.github/workflows/backfill-duplicate-comments.yml`** — new file, 46 lines

```yaml
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments

on:
  workflow_dispatch:
    inputs:
      days_back:
        description: "How many days back to look for old issues"
        required: false
        default: "90"
        type: string
      dry_run:
        description: "Dry run mode (true to only log what would be done)"
        required: false
        default: "true"
        type: choice
        options:
          - "true"
          - "false"

jobs:
  backfill-duplicate-comments:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: read
      issues: read
      actions: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Backfill duplicate comments
        run: node .github/scripts/backfill-duplicate-comments.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
          DAYS_BACK: ${{ inputs.days_back }}
          DRY_RUN: ${{ inputs.dry_run }}
```
110  .github/workflows/ci.yml  vendored

@@ -9,70 +9,109 @@ on:
    branches:
      - main
      - next
  workflow_dispatch:

permissions:
  contents: read

env:
  DO_NOT_TRACK: 1
  NODE_ENV: development

jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'

      - name: Install Dependencies
        id: install
        run: npm ci
        timeout-minutes: 2

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: node_modules
          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}

  # Fast checks that can run in parallel
  format-check:
    needs: setup
    name: Format Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Restore node_modules
        uses: actions/cache@v4
        with:
          path: node_modules
          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Format Check
        run: npm run format-check
        env:
          FORCE_COLOR: 1

  test:
    needs: setup
  typecheck:
    name: Typecheck
    timeout-minutes: 10
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Restore node_modules
        uses: actions/cache@v4
      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Typecheck
        run: npm run typecheck
        env:
          FORCE_COLOR: 1

  # Build job to ensure everything compiles
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          path: node_modules
          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Build
        run: npm run build
        env:
          NODE_ENV: production
          FORCE_COLOR: 1

  test:
    name: Test
    timeout-minutes: 15
    runs-on: ubuntu-latest
    needs: [format-check, typecheck, build]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - name: Install dependencies
        run: npm install --frozen-lockfile --prefer-offline
        timeout-minutes: 5

      - name: Build packages (required for tests)
        run: npm run build:packages
        env:
          NODE_ENV: production

      - name: Run Tests
        run: |
@@ -81,7 +120,6 @@ jobs:
          NODE_ENV: test
          CI: true
          FORCE_COLOR: 1
        timeout-minutes: 10

      - name: Upload Test Results
        if: always()
81  .github/workflows/claude-dedupe-issues.yml  vendored  Normal file

@@ -0,0 +1,81 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code

on:
  issues:
    types: [opened]
  workflow_dispatch:
    inputs:
      issue_number:
        description: "Issue number to process for duplicate detection"
        required: true
        type: string

jobs:
  claude-dedupe-issues:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code slash command
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Log duplicate comment event to Statsig
        if: always()
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
          REPO=${{ github.repository }}

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg triggered_by "${{ github.event_name }}" \
            '{
              events: [{
                eventName: "github_duplicate_comment_added",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  triggered_by: $triggered_by,
                  workflow_run_id: "${{ github.run_id }}"
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
57  .github/workflows/claude-docs-trigger.yml  vendored  Normal file

@@ -0,0 +1,57 @@
name: Trigger Claude Documentation Update

on:
  push:
    branches:
      - next
    paths-ignore:
      - "apps/docs/**"
      - "*.md"
      - ".github/workflows/**"

jobs:
  trigger-docs-update:
    # Only run if changes were merged (not direct pushes from bots)
    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need previous commit for comparison

      - name: Get changed files
        id: changed-files
        run: |
          echo "Changed files in this push:"
          git diff --name-only HEAD^ HEAD | tee changed_files.txt

          # Store changed files for Claude to analyze (escaped for JSON)
          CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
          echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT

          # Get the commit message (escaped for JSON)
          COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
          echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT

          # Get diff for documentation context (escaped for JSON)
          COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
          echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT

          # Get commit SHA
          echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT

      - name: Trigger Claude workflow
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Trigger the Claude docs updater workflow with the change information
          gh workflow run claude-docs-updater.yml \
            --ref next \
            -f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
            -f commit_message=${{ steps.changed-files.outputs.commit_message }} \
            -f changed_files=${{ steps.changed-files.outputs.changed_files }} \
            -f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
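The trigger workflow above leans on `jq -Rs .` to JSON-escape multiline command output before writing it to `$GITHUB_OUTPUT`, so a file list with embedded newlines survives as a single output line. A minimal sketch of that escaping step outside of Actions (the function name and sample file names are illustrative, not from the repo):

```shell
# JSON-escape a multiline value the way the workflow does with `jq -Rs .`:
# -R reads raw text, -s slurps all input into one string, `.` emits it as a JSON string.
escape_for_output() {
  printf '%s\n' "$1" | jq -Rs .
}

CHANGED="src/a.ts
src/b.ts"
ESCAPED=$(escape_for_output "$CHANGED")
echo "$ESCAPED"
```

The result is one double-quoted JSON string with literal `\n` escapes, which is why the later `gh workflow run -f changed_files=…` call can pass it without extra quoting.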
145  .github/workflows/claude-docs-updater.yml  vendored  Normal file

@@ -0,0 +1,145 @@
name: Claude Documentation Updater

on:
  workflow_dispatch:
    inputs:
      commit_sha:
        description: 'The commit SHA that triggered this update'
        required: true
        type: string
      commit_message:
        description: 'The commit message'
        required: true
        type: string
      changed_files:
        description: 'List of changed files'
        required: true
        type: string
      commit_diff:
        description: 'Diff summary of changes'
        required: true
        type: string

jobs:
  update-docs:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: next
          fetch-depth: 0 # Need full history to checkout specific commit

      - name: Create docs update branch
        id: create-branch
        run: |
          BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
          git checkout -b $BRANCH_NAME
          echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT

      - name: Run Claude Code to Update Documentation
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          timeout_minutes: "30"
          mode: "agent"
          github_token: ${{ secrets.GITHUB_TOKEN }}
          experimental_allowed_domains: |
            .anthropic.com
            .github.com
            api.github.com
            .githubusercontent.com
            registry.npmjs.org
            .task-master.dev
          base_branch: "next"
          direct_prompt: |
            You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.

            Recent changes:
            - Commit: ${{ inputs.commit_message }}
            - Changed files:
            ${{ inputs.changed_files }}

            - Changes summary:
            ${{ inputs.commit_diff }}

            Your task:
            1. Analyze the changes to understand what functionality was added, modified, or removed
            2. Check if these changes require documentation updates in apps/docs/
            3. If documentation updates are needed:
               - Update relevant documentation files in apps/docs/
               - Ensure examples are updated if APIs changed
               - Update any configuration documentation if config options changed
               - Add new documentation pages if new features were added
               - Update the changelog or release notes if applicable
            4. If no documentation updates are needed, skip creating changes

            Guidelines:
            - Focus only on user-facing changes that need documentation
            - Keep documentation clear, concise, and helpful
            - Include code examples where appropriate
            - Maintain consistent documentation style with existing docs
            - Don't document internal implementation details unless they affect users
            - Update navigation/menu files if new pages are added

            Only make changes if the documentation truly needs updating based on the code changes.

      - name: Check if changes were made
        id: check-changes
        run: |
          if git diff --quiet; then
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "has_changes=true" >> $GITHUB_OUTPUT
            git add -A
            git config --local user.email "github-actions[bot]@users.noreply.github.com"
            git config --local user.name "github-actions[bot]"
            git commit -m "docs: auto-update documentation based on changes in next branch

          This PR was automatically generated to update documentation based on recent changes.

          Original commit: ${{ inputs.commit_message }}

          Co-authored-by: Claude <claude-assistant@anthropic.com>"
          fi

      - name: Push changes and create PR
        if: steps.check-changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          git push origin ${{ steps.create-branch.outputs.branch_name }}

          # Create PR using GitHub CLI
          gh pr create \
            --title "docs: update documentation for recent changes" \
            --body "## 📚 Documentation Update

          This PR automatically updates documentation based on recent changes merged to the \`next\` branch.

          ### Original Changes
          **Commit:** ${{ inputs.commit_sha }}
          **Message:** ${{ inputs.commit_message }}

          ### Changed Files in Original Commit
          \`\`\`
          ${{ inputs.changed_files }}
          \`\`\`

          ### Documentation Updates
          This PR includes documentation updates to reflect the changes above. Please review to ensure:
          - [ ] Documentation accurately reflects the changes
          - [ ] Examples are correct and working
          - [ ] No important details are missing
          - [ ] Style is consistent with existing documentation

          ---
          *This PR was automatically generated by Claude Code GitHub Action*" \
            --base next \
            --head ${{ steps.create-branch.outputs.branch_name }} \
            --label "documentation" \
            --label "automated"
107  .github/workflows/claude-issue-triage.yml  vendored  Normal file

@@ -0,0 +1,107 @@
name: Claude Issue Triage
# description: Automatically triage GitHub issues using Claude Code

on:
  issues:
    types: [opened]

jobs:
  triage-issue:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Create triage prompt
        run: |
          mkdir -p /tmp/claude-prompts
          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

          Issue Information:
          - REPO: ${{ github.repository }}
          - ISSUE_NUMBER: ${{ github.event.issue.number }}

          TASK OVERVIEW:

          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

          2. Next, use the GitHub tools to get context about the issue:
             - You have access to these tools:
               - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
               - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
               - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
               - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
               - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
             - Start by using mcp__github__get_issue to get the issue details

          3. Analyze the issue content, considering:
             - The issue title and description
             - The type of issue (bug report, feature request, question, etc.)
             - Technical areas mentioned
             - Severity or priority indicators
             - User impact
             - Components affected

          4. Select appropriate labels from the available labels list provided above:
             - Choose labels that accurately reflect the issue's nature
             - Be specific but comprehensive
             - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
             - Consider platform labels (android, ios) if applicable
             - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

          5. Apply the selected labels:
             - Use mcp__github__update_issue to apply your selected labels
             - DO NOT post any comments explaining your decision
             - DO NOT communicate directly with users
             - If no labels are clearly applicable, do not apply any labels

          IMPORTANT GUIDELINES:
          - Be thorough in your analysis
          - Only select labels from the provided list above
          - DO NOT post any comments to the issue
          - Your ONLY action should be to apply labels using mcp__github__update_issue
          - It's okay to not add any labels if none are clearly applicable
          EOF

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt_file: /tmp/claude-prompts/triage-prompt.txt
          allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
          timeout_minutes: "5"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          mcp_config: /tmp/mcp-config/mcp-servers.json
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
36  .github/workflows/claude.yml  vendored  Normal file

@@ -0,0 +1,36 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
26  .github/workflows/extension-release.yml  vendored

@@ -89,32 +89,6 @@ jobs:
          OVSX_PAT: ${{ secrets.OVSX_PAT }}
          FORCE_COLOR: 1

      - name: Create GitHub Release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref_name }}
          release_name: Extension ${{ github.ref_name }}
          body: |
            VS Code Extension Release ${{ github.ref_name }}

            **Marketplaces:**
            - [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=Hamster.task-master-hamster)
            - [Open VSX Registry](https://open-vsx.org/extension/Hamster/task-master-hamster)
          draft: false
          prerelease: false

      - name: Upload VSIX to Release
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: apps/extension/vsix-build/${{ steps.vsix-info.outputs.vsix-filename }}
          asset_name: ${{ steps.vsix-info.outputs.vsix-filename }}
          asset_content_type: application/zip

      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v4
        with:
176  .github/workflows/log-issue-events.yml  vendored  Normal file

@@ -0,0 +1,176 @@
name: Log GitHub Issue Events

on:
  issues:
    types: [opened, closed]

jobs:
  log-issue-created:
    if: github.event.action == 'opened'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue creation to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          AUTHOR="${{ github.event.issue.user.login }}"
          CREATED_AT="${{ github.event.issue.created_at }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg author "$AUTHOR" \
            --arg created_at "$CREATED_AT" \
            '{
              events: [{
                eventName: "github_issue_created",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  issue_author: $author,
                  created_at: $created_at
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi

  log-issue-closed:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue closure to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          CLOSED_BY="${{ github.event.issue.closed_by.login }}"
          CLOSED_AT="${{ github.event.issue.closed_at }}"
          STATE_REASON="${{ github.event.issue.state_reason }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Get additional issue data via GitHub API
          echo "Fetching additional issue data for #${ISSUE_NUMBER}"
          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")

          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')

          # Get reactions data
          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")

          REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')

          # Check if issue was closed automatically (by checking if closed_by is a bot)
          CLOSED_AUTOMATICALLY="false"
          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
            CLOSED_AUTOMATICALLY="true"
          fi

          # Check if closed as duplicate by state_reason
          CLOSED_AS_DUPLICATE="false"
          if [ "$STATE_REASON" = "duplicate" ]; then
            CLOSED_AS_DUPLICATE="true"
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg closed_by "$CLOSED_BY" \
            --arg closed_at "$CLOSED_AT" \
            --arg state_reason "$STATE_REASON" \
            --arg comments_count "$COMMENTS_COUNT" \
            --arg reactions_count "$REACTIONS_COUNT" \
            --arg closed_automatically "$CLOSED_AUTOMATICALLY" \
            --arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
            '{
              events: [{
                eventName: "github_issue_closed",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  closed_by: $closed_by,
                  closed_at: $closed_at,
                  state_reason: $state_reason,
                  comments_count: ($comments_count | tonumber),
                  reactions_count: ($reactions_count | tonumber),
                  closed_automatically: ($closed_automatically | test("true")),
                  closed_as_duplicate: ($closed_as_duplicate | test("true"))
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
            echo "Closed by: $CLOSED_BY"
            echo "Comments: $COMMENTS_COUNT"
            echo "Reactions: $REACTIONS_COUNT"
            echo "Closed automatically: $CLOSED_AUTOMATICALLY"
            echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
          else
            echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
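The Statsig logging steps above capture the HTTP status alongside the response body by appending `-w "\n%{http_code}"` to curl, then splitting the result with `tail`/`head`. Worth noting: `head -n-1`, as used in the workflows, is GNU coreutils syntax. A minimal sketch of the split, using a stand-in string instead of a live curl call and `sed '$d'` as a more portable way to drop the last line:

```shell
# Stand-in for: RESPONSE=$(curl -s -w "\n%{http_code}" -X POST ...)
# The body comes first, then a newline, then the numeric status code.
RESPONSE=$(printf '{"ok":true}\n202')

HTTP_CODE=$(echo "$RESPONSE" | tail -n1)   # last line: the status code
BODY=$(echo "$RESPONSE" | sed '$d')        # everything before it: the body

echo "status=$HTTP_CODE body=$BODY"
```

This only works when the body itself does not end without a trailing separator issue, which is why the workflow forces the `\n` before `%{http_code}` in the `-w` format.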
31  .github/workflows/pre-release.yml  vendored

@@ -3,11 +3,13 @@ name: Pre-Release (RC)
on:
  workflow_dispatch: # Allows manual triggering from GitHub UI/API

concurrency: pre-release-${{ github.ref }}
concurrency: pre-release-${{ github.ref_name }}

jobs:
  rc:
    runs-on: ubuntu-latest
    # Only allow pre-releases on non-main branches
    if: github.ref != 'refs/heads/main'
    environment: extension-release
    steps:
      - uses: actions/checkout@v4
        with:
@@ -34,9 +36,26 @@ jobs:

      - name: Enter RC mode (if not already in RC mode)
        run: |
          # ensure we're in the right pre-mode (tag "rc")
          if [ ! -f .changeset/pre.json ] \
            || [ "$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')" != "rc" ]; then
          # Check if we're in pre-release mode with the "rc" tag
          if [ -f .changeset/pre.json ]; then
            MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
            TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')

            if [ "$MODE" = "exit" ]; then
              echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
              npx changeset pre enter rc
            elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
              echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
              npx changeset pre exit
              npx changeset pre enter rc
            elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
              echo "Already in RC pre-release mode"
            else
              echo "Unknown mode state: $MODE, entering RC mode..."
              npx changeset pre enter rc
            fi
          else
            echo "No pre.json found, entering RC mode..."
            npx changeset pre enter rc
          fi

@@ -49,7 +68,7 @@ jobs:
      - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
        uses: changesets/action@v1
        with:
          publish: npm run release
          publish: npx changeset publish
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
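The RC step above decides whether to (re-)enter changesets pre-release mode by reading `mode` and `tag` out of `.changeset/pre.json`. A minimal sketch of just that inspection, against a sample file (the `/tmp/pre.json` path and its contents are illustrative, not the repo's actual state):

```shell
# Sample of the state file changesets writes while in pre-release mode.
cat > /tmp/pre.json <<'EOF'
{ "mode": "pre", "tag": "rc" }
EOF

# Same reads the workflow performs; `|| echo ''` keeps the step from
# failing when the file is malformed or missing.
MODE=$(jq -r '.mode' /tmp/pre.json 2>/dev/null || echo '')
TAG=$(jq -r '.tag' /tmp/pre.json 2>/dev/null || echo '')

if [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
  echo "Already in RC pre-release mode"
fi
```

With `mode` equal to `exit` (left behind by `changeset pre exit`) or a non-`rc` tag, the workflow instead runs `npx changeset pre enter rc`, optionally after `npx changeset pre exit`.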
21  .github/workflows/release-check.yml  vendored  Normal file

@@ -0,0 +1,21 @@
name: Release Check

on:
  pull_request:
    branches:
      - main

concurrency:
  group: release-check-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  check-release-mode:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Check release mode
        run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"
24  .github/workflows/release.yml  vendored

@@ -38,31 +38,13 @@ jobs:
        run: npm ci
        timeout-minutes: 2

      - name: Exit pre-release mode and clean up
        run: |
          echo "🔄 Ensuring we're not in pre-release mode for main branch..."

          # Exit pre-release mode if we're in it
          npx changeset pre exit || echo "Not in pre-release mode"

          # Remove pre.json file if it exists (belt and suspenders approach)
          if [ -f .changeset/pre.json ]; then
            echo "🧹 Removing pre.json file..."
            rm -f .changeset/pre.json
          fi

          # Verify the file is gone
          if [ ! -f .changeset/pre.json ]; then
            echo "✅ pre.json successfully removed"
          else
            echo "❌ Failed to remove pre.json"
            exit 1
          fi
      - name: Check pre-release mode
        run: node ./.github/scripts/check-pre-release-mode.mjs "main"

      - name: Create Release Pull Request or Publish to npm
        uses: changesets/action@v1
        with:
          publish: ./.github/scripts/release.sh
          publish: node ./.github/scripts/release.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
96
.github/workflows/weekly-metrics-discord.yml
vendored
Normal file
96
.github/workflows/weekly-metrics-discord.yml
vendored
Normal file
@@ -0,0 +1,96 @@
name: Weekly Metrics to Discord
# description: Sends weekly metrics summary to Discord channel

on:
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9 AM
  workflow_dispatch:

permissions:
  contents: read
  issues: write
  pull-requests: read

jobs:
  weekly-metrics:
    runs-on: ubuntu-latest
    env:
      DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
    steps:
      - name: Get dates for last week
        run: |
          # Last 7 days
          first_day=$(date -d "7 days ago" +%Y-%m-%d)
          last_day=$(date +%Y-%m-%d)

          echo "first_day=$first_day" >> $GITHUB_ENV
          echo "last_day=$last_day" >> $GITHUB_ENV
          echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV

      - name: Generate issue metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
          HIDE_TIME_TO_ANSWER: true
          HIDE_LABEL_METRICS: false

      - name: Generate PR metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_metrics.md

      - name: Parse metrics
        id: metrics
        run: |
          # Parse the metrics from the generated markdown files
          if [ -f "issue_metrics.md" ]; then
            # Extract key metrics using grep/awk
            AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
            AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
            NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
            NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
          fi

          if [ -f "pr_metrics.md" ]; then
            PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
            NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
            NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
          fi

          # Set outputs for Discord action
          echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
          echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
          echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
          echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
          echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
          echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
          echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT

      - name: Send to Discord
        uses: sarisia/actions-status-discord@v1
        if: env.DISCORD_WEBHOOK != ''
        with:
          webhook: ${{ env.DISCORD_WEBHOOK }}
          status: Success
          title: "📊 Weekly Metrics Report"
          description: |
            **${{ env.week_of }}**

            **🎯 Issues**
            • Created: ${{ steps.metrics.outputs.issues_created }}
            • Closed: ${{ steps.metrics.outputs.issues_closed }}

            **🔀 Pull Requests**
            • Created: ${{ steps.metrics.outputs.prs_created }}
            • Merged: ${{ steps.metrics.outputs.prs_merged }}

            **⏱️ Response Times**
            • First Response: ${{ steps.metrics.outputs.avg_first_response }}
            • Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
            • PR Merge Time: ${{ steps.metrics.outputs.pr_avg_merge_time }}
          color: 0x58AFFF
          username: Task Master Metrics Bot
          avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
.taskmaster/docs/MIGRATION-ROADMAP.md (new file, 188 lines)
@@ -0,0 +1,188 @@
# Task Master Migration Roadmap

## Overview
Gradual migration from the scripts-based architecture to a clean monorepo with separated concerns.

## Architecture Vision

```
┌─────────────────────────────────────────────────┐
│                User Interfaces                  │
├──────────┬──────────┬──────────┬────────────────┤
│ @tm/cli  │ @tm/mcp  │ @tm/ext  │ @tm/web        │
│  (CLI)   │  (MCP)   │ (VSCode) │ (Future)       │
└──────────┴──────────┴──────────┴────────────────┘
                      │
                      ▼
           ┌──────────────────────┐
           │      @tm/core        │
           │  (Business Logic)    │
           └──────────────────────┘
```

## Migration Phases

### Phase 1: Core Extraction ✅ (In Progress)
**Goal**: Move all business logic to @tm/core

- [x] Create @tm/core package structure
- [x] Move types and interfaces
- [x] Implement TaskMasterCore facade
- [x] Move storage adapters
- [x] Move task services
- [ ] Move AI providers
- [ ] Move parser logic
- [ ] Complete test coverage

### Phase 2: CLI Package Creation 🚧 (Started)
**Goal**: Create @tm/cli as a thin presentation layer

- [x] Create @tm/cli package structure
- [x] Implement Command interface pattern
- [x] Create CommandRegistry
- [x] Build legacy bridge/adapter
- [x] Migrate list-tasks command
- [ ] Migrate remaining commands one by one
- [ ] Remove UI logic from core

### Phase 3: Transitional Integration
**Goal**: Use new packages in existing scripts without breaking changes

```javascript
// scripts/modules/commands.js gradually adopts new commands
import { ListTasksCommand } from '@tm/cli';
const listCommand = new ListTasksCommand();

// Old interface remains the same
programInstance
  .command('list')
  .action(async (options) => {
    // Use new command internally
    const result = await listCommand.execute(convertOptions(options));
  });
```

### Phase 4: MCP Package
**Goal**: Separate the MCP server into its own package

- [ ] Create @tm/mcp package
- [ ] Move MCP server code
- [ ] Use @tm/core for all logic
- [ ] MCP becomes a thin RPC layer

### Phase 5: Complete Migration
**Goal**: Remove old scripts; pure monorepo

- [ ] All commands migrated to @tm/cli
- [ ] Remove scripts/modules/task-manager/*
- [ ] Remove scripts/modules/commands.js
- [ ] Update bin/task-master.js to use @tm/cli
- [ ] Clean up dependencies

## Current Transitional Strategy

### 1. Adapter Pattern (commands-adapter.js)
```javascript
// Checks if the new CLI is available and uses it
// Falls back to the legacy implementation if not
export async function listTasksAdapter(...args) {
  if (cliAvailable) {
    return useNewImplementation(...args);
  }
  return useLegacyImplementation(...args);
}
```

### 2. Command Bridge Pattern
```javascript
// Allows new commands to work in old code
const bridge = new CommandBridge(new ListTasksCommand());
const data = await bridge.run(legacyOptions); // Legacy style
const result = await bridge.execute(newOptions); // New style
```

### 3. Gradual File Migration
Instead of big-bang refactoring:
1. Create the new implementation in @tm/cli
2. Add an adapter in commands-adapter.js
3. Update commands.js to use the adapter
4. Test that both paths work
5. Remove the adapter once all commands are migrated

## Benefits of This Approach

1. **No Breaking Changes**: The existing CLI continues to work
2. **Incremental PRs**: Each command can be migrated separately
3. **Parallel Development**: New features can use the new architecture
4. **Easy Rollback**: The new implementation can be disabled if issues arise
5. **Clear Separation**: Business logic (core) vs. presentation (CLI/MCP/etc.)

## Example PR Sequence

### PR 1: Core Package Setup ✅
- Create @tm/core
- Move types and interfaces
- Basic TaskMasterCore implementation

### PR 2: CLI Package Foundation ✅
- Create @tm/cli
- Command interface and registry
- Legacy bridge utilities

### PR 3: First Command Migration
- Migrate list-tasks to the new system
- Add adapter in scripts
- Test both implementations

### PR 4-N: Migrate Commands One by One
- Each PR migrates 1-2 related commands
- Small, reviewable changes
- Continuous delivery

### Final PR: Cleanup
- Remove legacy implementations
- Remove adapters
- Update documentation

## Testing Strategy

### Dual Testing During Migration
```javascript
describe('List Tasks', () => {
  it('works with legacy implementation', async () => {
    // Force legacy
    const result = await legacyListTasks(...);
    expect(result).toBeDefined();
  });

  it('works with new implementation', async () => {
    // Force new
    const command = new ListTasksCommand();
    const result = await command.execute(...);
    expect(result.success).toBe(true);
  });

  it('adapter chooses correctly', async () => {
    // Let adapter decide
    const result = await listTasksAdapter(...);
    expect(result).toBeDefined();
  });
});
```

## Success Metrics

- [ ] All commands migrated without breaking changes
- [ ] Test coverage maintained or improved
- [ ] Performance maintained or improved
- [ ] Cleaner, more maintainable codebase
- [ ] Easy to add new interfaces (web, desktop, etc.)

## Notes for Contributors

1. **Keep PRs Small**: Migrate one command at a time
2. **Test Both Paths**: Ensure legacy and new both work
3. **Document Changes**: Update this roadmap as you go
4. **Communicate**: Discuss in PRs if the architecture needs adjustment

This is a living document - update it as the migration progresses!
.taskmaster/docs/test-prd.txt (new file, 8 lines)
@@ -0,0 +1,8 @@
Simple Todo App PRD

Create a basic todo list application with the following features:
1. Add new todos
2. Mark todos as complete
3. Delete todos

That's it. Keep it simple.
.taskmaster/docs/tm-core-phase-1.txt (new file, 343 lines)
@@ -0,0 +1,343 @@
# Product Requirements Document: tm-core Package - Parse PRD Feature

## Project Overview
Create a TypeScript package named `tm-core` at `packages/tm-core` that implements parse-prd functionality using a class-based architecture similar to the existing AI providers pattern.

## Design Patterns & Architecture

### Patterns to Apply
1. **Factory Pattern**: Use for `ProviderFactory` to create AI provider instances
2. **Strategy Pattern**: Use for `IAIProvider` implementations and `IStorage` implementations
3. **Facade Pattern**: Use for `TaskMasterCore` as the main API entry point
4. **Template Method Pattern**: Use for the `BaseProvider` abstract class
5. **Dependency Injection**: Use throughout for testability (pass dependencies via constructor)
6. **Repository Pattern**: Use for `FileStorage` to abstract data persistence

### Naming Conventions
- **Files**: kebab-case (e.g., `task-parser.ts`, `file-storage.ts`)
- **Classes**: PascalCase (e.g., `TaskParser`, `FileStorage`)
- **Interfaces**: PascalCase with 'I' prefix (e.g., `IStorage`, `IAIProvider`)
- **Methods**: camelCase (e.g., `parsePRD`, `loadTasks`)
- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_MODEL`)
- **Type aliases**: PascalCase (e.g., `TaskStatus`, `ParseOptions`)

## Exact Folder Structure Required
```
packages/tm-core/
├── src/
│   ├── index.ts
│   ├── types/
│   │   └── index.ts
│   ├── interfaces/
│   │   ├── index.ts                 # Barrel export
│   │   ├── storage.interface.ts
│   │   ├── ai-provider.interface.ts
│   │   └── configuration.interface.ts
│   ├── tasks/
│   │   ├── index.ts                 # Barrel export
│   │   └── task-parser.ts
│   ├── ai/
│   │   ├── index.ts                 # Barrel export
│   │   ├── base-provider.ts
│   │   ├── provider-factory.ts
│   │   ├── prompt-builder.ts
│   │   └── providers/
│   │       ├── index.ts             # Barrel export
│   │       ├── anthropic-provider.ts
│   │       ├── openai-provider.ts
│   │       └── google-provider.ts
│   ├── storage/
│   │   ├── index.ts                 # Barrel export
│   │   └── file-storage.ts
│   ├── config/
│   │   ├── index.ts                 # Barrel export
│   │   └── config-manager.ts
│   ├── utils/
│   │   ├── index.ts                 # Barrel export
│   │   └── id-generator.ts
│   └── errors/
│       ├── index.ts                 # Barrel export
│       └── task-master-error.ts
├── tests/
│   ├── task-parser.test.ts
│   ├── integration/
│   │   └── parse-prd.test.ts
│   └── mocks/
│       └── mock-provider.ts
├── package.json
├── tsconfig.json
├── tsup.config.js
└── jest.config.js
```

## Specific Implementation Requirements
### 1. Create types/index.ts
Define these exact TypeScript interfaces:
- `Task` interface with fields: id, title, description, status, priority, complexity, dependencies, subtasks, metadata, createdAt, updatedAt, source
- `Subtask` interface with fields: id, title, description, completed
- `TaskMetadata` interface with fields: parsedFrom, aiProvider, version, tags (optional)
- Type literals: `TaskStatus` = 'pending' | 'in-progress' | 'completed' | 'blocked'
- Type literals: `TaskPriority` = 'low' | 'medium' | 'high' | 'critical'
- Type literals: `TaskComplexity` = 'simple' | 'moderate' | 'complex'
- `ParseOptions` interface with fields: dryRun (optional), additionalContext (optional), tag (optional), maxTasks (optional)
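The type definitions above can be sketched directly in TypeScript. This is a minimal rendering of the spec; field types the PRD does not pin down (e.g. whether timestamps are `Date` objects or ISO strings, and what dependency IDs look like) are assumptions:

```typescript
export type TaskStatus = 'pending' | 'in-progress' | 'completed' | 'blocked';
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
export type TaskComplexity = 'simple' | 'moderate' | 'complex';

export interface Subtask {
  id: string;
  title: string;
  description: string;
  completed: boolean;
}

export interface TaskMetadata {
  parsedFrom: string;
  aiProvider: string;
  version: string;
  tags?: string[];
}

export interface Task {
  id: string;
  title: string;
  description: string;
  status: TaskStatus;
  priority: TaskPriority;
  complexity: TaskComplexity;
  dependencies: string[]; // task IDs (assumption)
  subtasks: Subtask[];
  metadata: TaskMetadata;
  createdAt: string; // ISO timestamp (assumption)
  updatedAt: string;
  source: string;
}

export interface ParseOptions {
  dryRun?: boolean;
  additionalContext?: string;
  tag?: string;
  maxTasks?: number;
}

// A value conforming to the interfaces above:
export const exampleTask: Task = {
  id: 'task_1',
  title: 'Add new todos',
  description: 'Support creating todo items',
  status: 'pending',
  priority: 'medium',
  complexity: 'simple',
  dependencies: [],
  subtasks: [],
  metadata: { parsedFrom: 'test-prd.txt', aiProvider: 'anthropic', version: '0.1.0' },
  createdAt: new Date(0).toISOString(),
  updatedAt: new Date(0).toISOString(),
  source: 'test-prd.txt'
};
```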
### 2. Create interfaces/storage.interface.ts
Define `IStorage` interface with these exact methods:
- `loadTasks(tag?: string): Promise<Task[]>`
- `saveTasks(tasks: Task[], tag?: string): Promise<void>`
- `appendTasks(tasks: Task[], tag?: string): Promise<void>`
- `updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>`
- `deleteTask(id: string, tag?: string): Promise<void>`
- `exists(tag?: string): Promise<boolean>`
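To make the interface shape concrete, here is the `IStorage` contract with a hypothetical in-memory implementation (not part of the spec; the real implementation is `FileStorage`), useful as a test double. The trimmed `Task` type and the `'master'` default tag are assumptions for this sketch:

```typescript
interface Task { id: string; title: string } // trimmed for the sketch

interface IStorage {
  loadTasks(tag?: string): Promise<Task[]>;
  saveTasks(tasks: Task[], tag?: string): Promise<void>;
  appendTasks(tasks: Task[], tag?: string): Promise<void>;
  updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>;
  deleteTask(id: string, tag?: string): Promise<void>;
  exists(tag?: string): Promise<boolean>;
}

// Minimal in-memory implementation exercising the interface.
class MemoryStorage implements IStorage {
  private store = new Map<string, Task[]>();
  private key(tag?: string) { return tag ?? 'master'; }
  async loadTasks(tag?: string) { return this.store.get(this.key(tag)) ?? []; }
  async saveTasks(tasks: Task[], tag?: string) { this.store.set(this.key(tag), [...tasks]); }
  async appendTasks(tasks: Task[], tag?: string) {
    this.store.set(this.key(tag), [...(await this.loadTasks(tag)), ...tasks]);
  }
  async updateTask(id: string, patch: Partial<Task>, tag?: string) {
    const tasks = (await this.loadTasks(tag)).map(t => (t.id === id ? { ...t, ...patch } : t));
    this.store.set(this.key(tag), tasks);
  }
  async deleteTask(id: string, tag?: string) {
    this.store.set(this.key(tag), (await this.loadTasks(tag)).filter(t => t.id !== id));
  }
  async exists(tag?: string) { return this.store.has(this.key(tag)); }
}
```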
### 3. Create interfaces/ai-provider.interface.ts
Define `IAIProvider` interface with these exact methods:
- `generateCompletion(prompt: string, options?: AIOptions): Promise<string>`
- `calculateTokens(text: string): number`
- `getName(): string`
- `getModel(): string`

Define `AIOptions` interface with fields: temperature (optional), maxTokens (optional), systemPrompt (optional)

### 4. Create interfaces/configuration.interface.ts
Define `IConfiguration` interface with fields:
- `projectPath: string`
- `aiProvider: string`
- `apiKey?: string`
- `aiOptions?: AIOptions`
- `mainModel?: string`
- `researchModel?: string`
- `fallbackModel?: string`
- `tasksPath?: string`
- `enableTags?: boolean`

### 5. Create tasks/task-parser.ts
Create class `TaskParser` with:
- Constructor accepting `aiProvider: IAIProvider` and `config: IConfiguration`
- Private property `promptBuilder: PromptBuilder`
- Public method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Private method `readPRD(prdPath: string): Promise<string>`
- Private method `extractTasks(aiResponse: string): Partial<Task>[]`
- Private method `enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[]`
- Apply **Dependency Injection** pattern via constructor
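The constructor-injection shape described above can be sketched as follows. This is a stand-in, not the specified implementation: the interfaces are trimmed, `parsePRD` takes PRD content directly instead of a file path, and `MockProvider` mirrors the test double the PRD asks for under `tests/mocks`:

```typescript
interface IAIProvider {
  generateCompletion(prompt: string): Promise<string>;
}

interface IConfiguration {
  projectPath: string;
}

interface Task {
  id: string;
  title: string;
}

class TaskParser {
  // Dependencies arrive via the constructor, so tests can inject fakes.
  constructor(
    private readonly aiProvider: IAIProvider,
    private readonly config: IConfiguration
  ) {}

  async parsePRD(prdContent: string): Promise<Task[]> {
    // The real implementation reads the file at prdPath and builds a
    // structured prompt; here the content is passed straight through.
    const response = await this.aiProvider.generateCompletion(prdContent);
    const raw = JSON.parse(response) as { title: string }[];
    // enrichTasks equivalent: attach generated IDs.
    return raw.map((t, i) => ({ id: `task_${i}`, title: t.title }));
  }
}

// Injecting a mock provider makes the parser testable without API calls.
class MockProvider implements IAIProvider {
  async generateCompletion(): Promise<string> {
    return JSON.stringify([{ title: 'Add new todos' }]);
  }
}
```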
### 6. Create ai/base-provider.ts
Copy the existing base-provider.js and convert it to a TypeScript abstract class:
- Abstract class `BaseProvider` implementing `IAIProvider`
- Protected properties: `apiKey: string`, `model: string`
- Constructor accepting `apiKey: string` and `options: { model?: string }`
- Abstract methods matching the IAIProvider interface
- Abstract method `getDefaultModel(): string`
- Apply **Template Method** pattern for common provider logic

### 7. Create ai/provider-factory.ts
Create class `ProviderFactory` with:
- Static method `create(config: { provider: string; apiKey?: string; model?: string }): Promise<IAIProvider>`
- Switch statement for providers: 'anthropic', 'openai', 'google'
- Dynamic imports for each provider
- Throw error for unknown providers
- Apply **Factory** pattern for creating provider instances

Example implementation structure:
```typescript
switch (provider.toLowerCase()) {
  case 'anthropic': {
    const { AnthropicProvider } = await import('./providers/anthropic-provider.js');
    return new AnthropicProvider(apiKey, { model });
  }
}
```

### 8. Create ai/providers/anthropic-provider.ts
Create class `AnthropicProvider` extending `BaseProvider`:
- Import the Anthropic SDK: `import { Anthropic } from '@anthropic-ai/sdk'`
- Private property `client: Anthropic`
- Implement all abstract methods from BaseProvider
- Default model: 'claude-3-sonnet-20240229'
- Handle API errors and wrap them with meaningful messages

### 9. Create ai/providers/openai-provider.ts (placeholder)
Create class `OpenAIProvider` extending `BaseProvider`:
- Import the OpenAI SDK when implemented
- For now, throw error: "OpenAI provider not yet implemented"

### 10. Create ai/providers/google-provider.ts (placeholder)
Create class `GoogleProvider` extending `BaseProvider`:
- Import the Google Generative AI SDK when implemented
- For now, throw error: "Google provider not yet implemented"

### 11. Create ai/prompt-builder.ts
Create class `PromptBuilder` with:
- Method `buildParsePrompt(prdContent: string, options: ParseOptions = {}): string`
- Method `buildExpandPrompt(task: string, context?: string): string`
- Use template literals for prompt construction
- Include specific JSON format instructions in prompts

### 12. Create storage/file-storage.ts
Create class `FileStorage` implementing `IStorage`:
- Private property `basePath: string` set to `{projectPath}/.taskmaster`
- Constructor accepting `projectPath: string`
- Private method `getTasksPath(tag?: string): string` returning the correct path based on tag
- Private method `ensureDirectory(dir: string): Promise<void>`
- Implement all IStorage methods
- Handle ENOENT errors by returning empty arrays
- Use JSON format with structure: `{ tasks: Task[], metadata: { version: string, lastModified: string } }`
- Apply **Repository** pattern for data access abstraction
### 13. Create config/config-manager.ts
Create class `ConfigManager`:
- Private property `config: IConfiguration`
- Constructor accepting `options: Partial<IConfiguration>`
- Use Zod for validation with a schema matching IConfiguration
- Method `get<K extends keyof IConfiguration>(key: K): IConfiguration[K]`
- Method `getAll(): IConfiguration`
- Method `validate(): boolean`
- Default values: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true
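Zod handles validation in the real class; the defaults merge and the generic, type-safe `get` can be sketched without it. The trimmed `IConfiguration` is an assumption for this example:

```typescript
interface IConfiguration {
  projectPath: string;
  aiProvider: string;
  enableTags?: boolean;
}

class ConfigManager {
  private config: IConfiguration;

  constructor(options: Partial<IConfiguration>) {
    // Caller-supplied options override the defaults.
    this.config = {
      projectPath: process.cwd(),
      aiProvider: 'anthropic',
      enableTags: true,
      ...options
    };
  }

  // keyof + generics give a return type that tracks the requested key.
  get<K extends keyof IConfiguration>(key: K): IConfiguration[K] {
    return this.config[key];
  }

  getAll(): IConfiguration {
    return { ...this.config }; // shallow copy preserves immutability
  }
}
```

With this shape, `cfg.get('enableTags')` is typed `boolean | undefined` while `cfg.get('projectPath')` is typed `string`, with no casts at the call site.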
### 14. Create utils/id-generator.ts
Export functions:
- `generateTaskId(index: number = 0): string` returning format `task_{timestamp}_{index}_{random}`
- `generateSubtaskId(parentId: string, index: number = 0): string` returning format `{parentId}_sub_{index}_{random}`
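One possible implementation of the prescribed formats; the exact random-suffix scheme is an assumption since the PRD only specifies the `{random}` placeholder:

```typescript
// Short base-36 suffix; collision resistance is best-effort, not cryptographic.
function randomSuffix(): string {
  return Math.random().toString(36).slice(2, 8);
}

export function generateTaskId(index = 0): string {
  return `task_${Date.now()}_${index}_${randomSuffix()}`;
}

export function generateSubtaskId(parentId: string, index = 0): string {
  return `${parentId}_sub_${index}_${randomSuffix()}`;
}
```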
### 15. Create src/index.ts
Create main class `TaskMasterCore`:
- Private properties: `config: ConfigManager`, `storage: IStorage`, `aiProvider?: IAIProvider`, `parser?: TaskParser`
- Constructor accepting `options: Partial<IConfiguration>`
- Method `initialize(): Promise<void>` for lazy loading
- Method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Method `getTasks(tag?: string): Promise<Task[]>`
- Apply **Facade** pattern to provide a simple API over complex subsystems

Export:
- Class `TaskMasterCore`
- Function `createTaskMaster(options: Partial<IConfiguration>): TaskMasterCore`
- All types from './types'
- All interfaces from './interfaces/*'

Import statements should use kebab-case:
```typescript
import { TaskParser } from './tasks/task-parser';
import { FileStorage } from './storage/file-storage';
import { ConfigManager } from './config/config-manager';
import { ProviderFactory } from './ai/provider-factory';
```
### 16. Configure package.json
Create package.json with:
- name: "@task-master/core"
- version: "0.1.0"
- type: "module"
- main: "./dist/index.js"
- module: "./dist/index.mjs"
- types: "./dist/index.d.ts"
- exports map for proper ESM/CJS support
- scripts: build (tsup), dev (tsup --watch), test (jest), typecheck (tsc --noEmit)
- dependencies: zod@^3.23.8
- peerDependencies: @anthropic-ai/sdk, openai, @google/generative-ai
- devDependencies: typescript, tsup, jest, ts-jest, @types/node, @types/jest

### 17. Configure TypeScript
Create tsconfig.json with:
- target: "ES2022"
- module: "ESNext"
- strict: true (with all strict flags enabled)
- declaration: true
- outDir: "./dist"
- rootDir: "./src"

### 18. Configure tsup
Create tsup.config.js with:
- entry: ['src/index.ts']
- format: ['cjs', 'esm']
- dts: true
- sourcemap: true
- clean: true
- external: AI provider SDKs

### 19. Configure Jest
Create jest.config.js with:
- preset: 'ts-jest'
- testEnvironment: 'node'
- Coverage threshold: 80% for all metrics

## Build Process
1. Use tsup to compile TypeScript to both CommonJS and ESM
2. Generate .d.ts files for TypeScript consumers
3. Output to the dist/ directory
4. Ensure tree-shaking works properly

## Testing Requirements
- Create unit tests for TaskParser in tests/task-parser.test.ts
- Create a MockProvider class in tests/mocks/mock-provider.ts for testing without API calls
- Test error scenarios (file not found, invalid JSON, etc.)
- Create an integration test in tests/integration/parse-prd.test.ts
- Follow kebab-case naming for all test files

## Success Criteria
- TypeScript compilation with zero errors
- No use of the 'any' type
- All interfaces properly exported
- Compatible with the existing tasks.json format
- Feature flag support via the USE_TM_CORE environment variable

## Import/Export Conventions
- Use named exports for all classes and interfaces
- Use barrel exports (index.ts) in each directory
- Import types/interfaces with type-only imports: `import type { Task } from '../types'`
- Group imports in order: Node built-ins, external packages, internal packages, relative imports
- Use the .js extension in import paths for ESM compatibility
## Error Handling Patterns
- Create custom error classes in the `src/errors/` directory
- All public methods should catch and wrap errors with context
- Use error codes for different error types (e.g., 'FILE_NOT_FOUND', 'PARSE_ERROR')
- Never expose internal implementation details in error messages
- Log errors to console.error only in development mode
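The catch-and-wrap pattern above can be sketched with a minimal `TaskMasterError`. The `ErrorCode` union and the `readConfigOrThrow` helper are illustrative assumptions, not the spec's API:

```typescript
type ErrorCode = 'FILE_NOT_FOUND' | 'PARSE_ERROR' | 'VALIDATION_ERROR' | 'API_ERROR';

class TaskMasterError extends Error {
  constructor(
    message: string,
    public readonly code: ErrorCode,
    // Original error kept for debugging; not exposed in the message.
    public readonly original?: unknown
  ) {
    super(message);
    this.name = 'TaskMasterError';
  }
}

// Wrapping pattern: catch the low-level error, rethrow with context and a code.
function readConfigOrThrow(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch (err) {
    throw new TaskMasterError('Could not parse configuration', 'PARSE_ERROR', err);
  }
}
```

Callers can branch on `err.code` without ever seeing the underlying `SyntaxError` text, which keeps internal details out of user-facing messages.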
## Barrel Exports Content

### interfaces/index.ts
```typescript
export type { IStorage } from './storage.interface';
export type { IAIProvider, AIOptions } from './ai-provider.interface';
export type { IConfiguration } from './configuration.interface';
```

### tasks/index.ts
```typescript
export { TaskParser } from './task-parser';
```

### ai/index.ts
```typescript
export { BaseProvider } from './base-provider';
export { ProviderFactory } from './provider-factory';
export { PromptBuilder } from './prompt-builder';
```

### ai/providers/index.ts
```typescript
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
```

### storage/index.ts
```typescript
export { FileStorage } from './file-storage';
```

### config/index.ts
```typescript
export { ConfigManager } from './config-manager';
```

### utils/index.ts
```typescript
export { generateTaskId, generateSubtaskId } from './id-generator';
```

### errors/index.ts
```typescript
export { TaskMasterError } from './task-master-error';
```
@@ -0,0 +1,77 @@
{
  "meta": {
    "generatedAt": "2025-08-06T12:39:03.250Z",
    "tasksAnalyzed": 8,
    "totalTasks": 11,
    "analysisCount": 8,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": false
  },
  "complexityAnalysis": [
    {
      "taskId": 118,
      "taskTitle": "Create AI Provider Base Architecture",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
      "reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
    },
    {
      "taskId": 119,
      "taskTitle": "Implement Provider Factory with Dynamic Imports",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
      "reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
    },
    {
      "taskId": 120,
      "taskTitle": "Implement Anthropic Provider",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
      "reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
    },
    {
      "taskId": 121,
      "taskTitle": "Create Prompt Builder and Task Parser",
      "complexityScore": 8,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
      "reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
    },
    {
      "taskId": 122,
      "taskTitle": "Implement Configuration Management",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
      "reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
    },
    {
      "taskId": 123,
      "taskTitle": "Create Utility Functions and Error Handling",
      "complexityScore": 4,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
      "reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
    },
    {
      "taskId": 124,
      "taskTitle": "Implement TaskMasterCore Facade",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
      "reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
    },
    {
      "taskId": 125,
      "taskTitle": "Create Placeholder Providers and Complete Testing",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.
|
||||
"reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
|
||||
}
|
||||
]
|
||||
}
|
||||
77
.taskmaster/reports/tm-core-complexity.json
Normal file
@@ -0,0 +1,77 @@
{
  "meta": {
    "generatedAt": "2025-08-06T12:15:01.327Z",
    "tasksAnalyzed": 8,
    "totalTasks": 11,
    "analysisCount": 8,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": false
  },
  "complexityAnalysis": [
    {
      "taskId": 118,
      "taskTitle": "Create AI Provider Base Architecture",
      "complexityScore": 4,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the conversion of base-provider.js to TypeScript BaseProvider class: 1) Convert to TypeScript and define IAIProvider interface, 2) Implement abstract class with core properties, 3) Define abstract methods and Template Method pattern, 4) Add retry logic with exponential backoff, 5) Implement validation and logging. Focus on maintaining compatibility with existing provider pattern while adding type safety.",
      "reasoning": "The codebase already has a well-established BaseAIProvider class in JavaScript. Converting to TypeScript mainly involves adding type definitions and ensuring the existing pattern is preserved. The complexity is moderate because the pattern is already proven in the codebase."
    },
    {
      "taskId": 119,
      "taskTitle": "Implement Provider Factory with Dynamic Imports",
      "complexityScore": 3,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create ProviderFactory implementation: 1) Set up class structure and types, 2) Implement provider selection switch statement, 3) Add dynamic imports for tree-shaking, 4) Handle provider instantiation with config, 5) Add comprehensive error handling. The existing PROVIDERS registry pattern should guide the implementation.",
      "reasoning": "The codebase already uses a dual registry pattern (static PROVIDERS and dynamic ProviderRegistry). Creating a factory is straightforward as the provider registration patterns are well-established. Dynamic imports are already used in the codebase."
    },
    {
      "taskId": 120,
      "taskTitle": "Implement Anthropic Provider",
      "complexityScore": 3,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement AnthropicProvider following existing patterns: 1) Create class structure with imports, 2) Implement constructor and client initialization, 3) Add generateCompletion with Claude API integration, 4) Implement token calculation and utility methods, 5) Add error handling and exports. Use the existing anthropic.js provider as reference.",
      "reasoning": "AnthropicProvider already exists in the codebase with full implementation. This task essentially involves adapting the existing implementation to match the new TypeScript architecture, making it relatively straightforward."
    },
    {
      "taskId": 121,
      "taskTitle": "Create Prompt Builder and Task Parser",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Build prompt system and parser: 1) Create PromptBuilder with template methods, 2) Implement TaskParser with dependency injection, 3) Add parsePRD core logic with file reading, 4) Implement task enrichment and metadata, 5) Add comprehensive error handling. Leverage the existing prompt management system in src/prompts/.",
      "reasoning": "While the codebase has a sophisticated prompt management system, creating a new PromptBuilder and TaskParser requires understanding the existing prompt templates, JSON schema validation, and integration with the AI provider system. The task involves significant new code."
    },
    {
      "taskId": 122,
      "taskTitle": "Implement Configuration Management",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create ConfigManager with validation: 1) Define Zod schema for IConfiguration, 2) Implement constructor with defaults, 3) Add validate method with error handling, 4) Create type-safe get method with generics, 5) Implement getAll and finalize exports. Reference existing config-manager.js for patterns.",
      "reasoning": "The codebase has an existing config-manager.js with sophisticated configuration handling. Adding Zod validation and TypeScript generics adds complexity, but the existing patterns provide a solid foundation."
    },
    {
      "taskId": 123,
      "taskTitle": "Create Utility Functions and Error Handling",
      "complexityScore": 2,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement utilities and error handling: 1) Create ID generation module with unique formats, 2) Build TaskMasterError base class, 3) Add error sanitization for security, 4) Implement development-only logging, 5) Create specialized error subclasses. Keep implementation simple and focused.",
      "reasoning": "This is a straightforward utility implementation task. The codebase already has error handling patterns, and ID generation is a simple algorithmic task. The main work is creating clean, reusable utilities."
    },
    {
      "taskId": 124,
      "taskTitle": "Implement TaskMasterCore Facade",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Create main facade class: 1) Set up TaskMasterCore structure with imports, 2) Implement lazy initialization logic, 3) Add parsePRD coordination method, 4) Implement getTasks and other facade methods, 5) Create factory function and exports. This ties together all other components into a cohesive API.",
      "reasoning": "This is the most complex task as it requires understanding and integrating all other components. The facade must coordinate between configuration, providers, storage, and parsing while maintaining a clean API. It's the architectural keystone of the system."
    },
    {
      "taskId": 125,
      "taskTitle": "Create Placeholder Providers and Complete Testing",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Implement testing infrastructure: 1) Create OpenAIProvider placeholder, 2) Create GoogleProvider placeholder, 3) Build MockProvider for testing, 4) Write TaskParser unit tests, 5) Create integration tests for parse-prd flow. Follow the existing test patterns in tests/ directory.",
      "reasoning": "While creating placeholder providers is simple, the testing infrastructure requires understanding Jest with ES modules, mocking patterns, and comprehensive test coverage. The existing test structure provides good examples to follow."
    }
  ]
}
@@ -1,6 +1,6 @@
{
  "currentTag": "master",
  "lastSwitched": "2025-08-01T14:09:25.838Z",
  "lastSwitched": "2025-08-27T21:03:20.550Z",
  "branchTagMapping": {
    "v017-adds": "v017-adds",
    "next": "next"
@@ -7297,5 +7297,763 @@
      "updated": "2025-07-22T09:38:19.341Z",
      "description": "Tasks for cc-kiro-hooks context"
    }
  },
  "tm-core-phase-1": {
    "tasks": [
      {
        "id": 115,
        "title": "Initialize tm-core Package Structure",
        "description": "Create the initial package structure for tm-core with all required directories and configuration files",
        "details": "Create the packages/tm-core directory structure with all subdirectories as specified: src/, tests/, and all nested folders. Set up package.json with proper ESM/CJS configuration, tsconfig.json with strict TypeScript settings, tsup.config.js for dual format builds, and jest.config.js for testing. Ensure all barrel export files (index.ts) are created in each directory for clean imports.",
        "testStrategy": "Verify directory structure matches specification exactly, ensure all configuration files are valid JSON/JS, run 'npm install' to verify package.json is correct, run 'tsc --noEmit' to verify TypeScript configuration",
        "priority": "high",
        "dependencies": [],
        "status": "done",
        "subtasks": [
          {
            "id": 1,
            "title": "Create tm-core directory structure and base configuration files",
            "description": "Set up the packages/tm-core directory with all required subdirectories and initialize core configuration files",
            "dependencies": [],
            "details": "Create packages/tm-core directory with subdirectories: src/, src/types/, src/interfaces/, src/providers/, src/parsers/, src/builders/, src/utils/, src/errors/, tests/, tests/unit/, tests/integration/, tests/mocks/. Create package.json with name '@task-master/tm-core', version '1.0.0', type 'module', main/module/types fields for dual ESM/CJS support, and necessary dependencies (typescript, tsup, jest, @types/node). Set up tsconfig.json with strict mode, ES2022 target, module resolution, and proper include/exclude patterns.\n<info added on 2025-08-06T10:49:59.891Z>\nImplementation completed as specified. Directory structure verified with all paths created correctly. Package.json configured with dual ESM/CJS support using tsup build tool, exports field properly set for both formats. TypeScript configuration established with strict mode enabled, ES2022 target for modern JavaScript features, and path mappings configured for clean imports like '@/types' and '@/utils'. All configuration files are valid and ready for development.\n</info added on 2025-08-06T10:49:59.891Z>",
            "status": "done",
            "testStrategy": "Verify directory structure with fs.existsSync() checks, validate package.json structure with JSON.parse(), ensure tsconfig.json compiles without errors"
          },
          {
            "id": 2,
            "title": "Configure build and test infrastructure",
            "description": "Set up tsup build configuration for dual format support and Jest testing configuration",
            "dependencies": [
              "115.1"
            ],
            "details": "Create tsup.config.js with dual format configuration (ESM and CJS), entry points from src/index.ts, declaration files generation, and sourcemaps. Configure jest.config.js with TypeScript preset, ESM support, proper module name mapping, coverage thresholds (80%), and test environment setup. Create .gitignore for node_modules, dist, and coverage directories. Add npm scripts in package.json for build, test, test:watch, and test:coverage commands.\n<info added on 2025-08-06T10:50:49.396Z>\nBuild process successfully configured with tsup.config.ts (TypeScript configuration file instead of JavaScript) supporting dual format output and multiple entry points including submodules. Jest configuration established with comprehensive ESM support and path alias mapping. Created tests/setup.ts for centralized test environment configuration. Added ES2022 compilation target for modern JavaScript features. Enhanced .gitignore to exclude additional development-specific files beyond the basic directories.\n</info added on 2025-08-06T10:50:49.396Z>",
            "status": "done",
            "testStrategy": "Run 'npm run build' to verify tsup configuration works, execute 'npm test' with a simple test file to confirm Jest setup, check that both .mjs and .cjs files are generated in dist/"
          },
          {
            "id": 3,
            "title": "Create barrel export files for all directories",
            "description": "Implement index.ts files in each directory to enable clean imports throughout the package",
            "dependencies": [
              "115.1"
            ],
            "details": "Create index.ts in src/ that exports from all subdirectories. Create index.ts in each subdirectory (types/, interfaces/, providers/, parsers/, builders/, utils/, errors/) with appropriate exports. For now, add placeholder comments indicating what will be exported from each module. Ensure proper export syntax for TypeScript types and interfaces using 'export type' where appropriate. Structure exports to allow consumers to import like '@task-master/tm-core/types' or from the main entry point.\n<info added on 2025-08-06T10:51:56.837Z>\nImplementation complete. All barrel export files have been created successfully with:\n\n- Main src/index.ts exporting from all subdirectories with proper TypeScript syntax\n- Individual index.ts files in types/, providers/, storage/, parser/, utils/, and errors/ directories\n- Proper ES module syntax with .js extensions for TypeScript compatibility\n- Placeholder exports with @deprecated JSDoc tags to indicate future implementation\n- Clean module structure supporting both root imports and submodule imports like '@task-master/tm-core/types'\n- All files include appropriate documentation comments explaining their purpose\n</info added on 2025-08-06T10:51:56.837Z>",
            "status": "done",
            "testStrategy": "Compile with TypeScript to ensure all index.ts files are valid, verify no circular dependencies exist, check that imports from package root work correctly"
          },
          {
            "id": 4,
            "title": "Add development tooling and documentation",
            "description": "Set up development tools, linting, and initial documentation structure",
            "dependencies": [
              "115.1",
              "115.2"
            ],
            "details": "Create .eslintrc.js with TypeScript plugin and recommended rules for consistent code style. Add prettier configuration for code formatting. Create README.md with package overview, installation instructions, and usage examples (marked as 'coming soon'). Add CHANGELOG.md to track version changes. Create npm scripts for linting and formatting. Add pre-commit hooks configuration if needed. Document the dual ESM/CJS support in README.\n<info added on 2025-08-06T10:53:45.056Z>\nSuccessfully completed development tooling and documentation setup. Created .eslintrc.js with TypeScript plugin and comprehensive rules including no-explicit-any, consistent-type-imports, and proper TypeScript checks. Added .prettierrc.json with sensible defaults for consistent code formatting. Created comprehensive README.md with package overview, installation instructions, usage examples for both ESM and CommonJS, modular imports, architecture description, development setup, and detailed roadmap for tasks 116-125. Added CHANGELOG.md following Keep a Changelog format with current package status and planned features. All development tooling is configured and ready for use.\n</info added on 2025-08-06T10:53:45.056Z>",
            "status": "done",
            "testStrategy": "Run eslint on sample TypeScript files, verify prettier formats code consistently, ensure all npm scripts execute without errors"
          },
          {
            "id": 5,
            "title": "Validate package structure and prepare for development",
            "description": "Perform final validation of the package structure and ensure it's ready for implementation",
            "dependencies": [
              "115.1",
              "115.2",
              "115.3",
              "115.4"
            ],
            "details": "Run 'npm install' to ensure all dependencies are properly resolved. Execute 'tsc --noEmit' to verify TypeScript configuration is correct. Create a simple smoke test in tests/ that imports from the package to verify module resolution works. Ensure the package can be linked locally for testing in other projects. Verify that both CommonJS and ESM imports work correctly. Create a checklist in README for remaining implementation tasks based on tasks 116-125.\n<info added on 2025-08-06T11:02:21.457Z>\nSuccessfully validated package structure with comprehensive testing. All validations passed: npm install resolved dependencies without issues, TypeScript compilation (tsc --noEmit) showed no errors, and dual-format build (npm run build) successfully generated both ESM and CJS outputs with proper TypeScript declarations. Created and executed comprehensive smoke test suite covering all module imports, placeholder functionality, and type definitions - all 8 tests passing. Code quality tools (ESLint, Prettier) are properly configured and show no issues. Package is confirmed ready for local linking and supports both CommonJS and ESM import patterns. README updated with implementation checklist marking Task 115 as complete and clearly outlining remaining implementation tasks 116-125. Package structure validation is complete and development environment is fully prepared for core implementation phase.\n</info added on 2025-08-06T11:02:21.457Z>",
            "status": "done",
            "testStrategy": "Successfully run all build and test commands, verify package can be imported in both ESM and CJS test files, ensure TypeScript compilation produces no errors, confirm all directories contain appropriate index.ts files"
          }
        ]
      },
      {
        "id": 116,
        "title": "Define Core TypeScript Types and Interfaces",
        "description": "Create all TypeScript type definitions and interfaces for the tm-core package",
        "details": "Create types/index.ts with Task, Subtask, TaskMetadata interfaces and type literals (TaskStatus, TaskPriority, TaskComplexity). Create all interface files: storage.interface.ts with IStorage methods, ai-provider.interface.ts with IAIProvider and AIOptions, configuration.interface.ts with IConfiguration. Use strict typing throughout, no 'any' types allowed. Follow naming conventions: interfaces prefixed with 'I', type literals in PascalCase.",
        "testStrategy": "Compile with TypeScript to ensure no type errors, create mock implementations to verify interfaces are complete, use type checking in IDE to confirm all required properties are defined",
        "priority": "high",
        "dependencies": [
          115
        ],
        "status": "done",
        "subtasks": [
          {
            "id": 1,
            "title": "Create Core Task and Subtask Type Definitions",
            "description": "Create types/index.ts with fundamental Task and Subtask interfaces including all required properties",
            "dependencies": [],
            "details": "Create types/index.ts file. Define Task interface with properties: id (string), title (string), description (string), status (TaskStatus), priority (TaskPriority), dependencies (string[]), details (string), testStrategy (string), subtasks (Subtask[]). Define Subtask interface extending Task but with numeric id. Define TaskMetadata interface with version (string), lastModified (string), taskCount (number), completedCount (number). Export all interfaces.\n<info added on 2025-08-06T11:03:44.220Z>\nImplementation completed with comprehensive type system. Created all required interfaces with strict typing, added optional properties for enhanced functionality (createdAt, updatedAt, effort, actualEffort, tags). Implemented utility types for create/update operations with proper type constraints. Added filter interfaces for advanced querying. Included runtime type guards for safe type narrowing. Successfully compiled without TypeScript errors, ready for integration with storage and AI provider implementations.\n</info added on 2025-08-06T11:03:44.220Z>",
            "status": "done",
            "testStrategy": "Compile TypeScript files to ensure no type errors, create sample objects conforming to interfaces to verify completeness"
          },
          {
            "id": 2,
            "title": "Define Type Literals and Enums",
            "description": "Create all type literal definitions for TaskStatus, TaskPriority, and TaskComplexity in the types file",
            "dependencies": [
              "116.1"
            ],
            "details": "In types/index.ts, define type literals: TaskStatus = 'pending' | 'in-progress' | 'done' | 'deferred' | 'cancelled' | 'blocked'; TaskPriority = 'low' | 'medium' | 'high' | 'critical'; TaskComplexity = 'simple' | 'moderate' | 'complex' | 'very-complex'. Consider using const assertions for better type inference. Export all type literals.\n<info added on 2025-08-06T11:04:04.675Z>\nType literals were already implemented in subtask 116.1 as part of the comprehensive type system. The types/index.ts file includes all required type literals: TaskStatus with values 'pending' | 'in-progress' | 'done' | 'deferred' | 'cancelled' | 'blocked' | 'review', TaskPriority with values 'low' | 'medium' | 'high' | 'critical', and TaskComplexity with values 'simple' | 'moderate' | 'complex' | 'very-complex'. All type literals are properly exported and include comprehensive JSDoc documentation. TypeScript compilation verified the types work correctly.\n</info added on 2025-08-06T11:04:04.675Z>",
            "status": "done",
            "testStrategy": "Use TypeScript compiler to verify type literals work correctly, test with invalid values to ensure type checking catches errors"
          },
          {
            "id": 3,
            "title": "Create Storage Interface Definition",
            "description": "Create storage.interface.ts with IStorage interface defining all storage operation methods",
            "dependencies": [
              "116.1"
            ],
            "details": "Create interfaces/storage.interface.ts file. Define IStorage interface with methods: loadTasks(tag?: string): Promise<Task[]>; saveTasks(tasks: Task[], tag?: string): Promise<void>; appendTasks(tasks: Task[], tag?: string): Promise<void>; updateTask(taskId: string, updates: Partial<Task>, tag?: string): Promise<void>; deleteTask(taskId: string, tag?: string): Promise<void>; exists(tag?: string): Promise<boolean>. Import Task type from types/index.ts.\n<info added on 2025-08-06T11:05:00.573Z>\nImplementation completed successfully. Extended IStorage interface beyond original specification to include metadata operations (loadMetadata, saveMetadata), tag management (getAllTags, deleteTag, renameTag, copyTag), and lifecycle methods (initialize, close, getStats). Added StorageStats interface for monitoring storage metrics and StorageConfig interface for configuration options. Implemented BaseStorage abstract class that provides common functionality including task validation using validateTask method, tag sanitization with sanitizeTag to ensure valid filenames, and backup path generation through getBackupPath for data safety. The abstract class serves as a foundation for concrete storage implementations, reducing code duplication and ensuring consistent behavior across different storage backends. All methods properly typed with async/await patterns and comprehensive error handling considerations.\n</info added on 2025-08-06T11:05:00.573Z>",
            "status": "done",
            "testStrategy": "Create mock implementation of IStorage to verify all methods are properly typed, ensure Promise return types are correct"
          },
          {
            "id": 4,
            "title": "Create AI Provider Interface Definition",
            "description": "Create ai-provider.interface.ts with IAIProvider interface and AIOptions type",
            "dependencies": [
              "116.1"
            ],
            "details": "Create interfaces/ai-provider.interface.ts file. Define AIOptions interface with properties: temperature (number), maxTokens (number), stream (boolean), topP (number), frequencyPenalty (number). Define IAIProvider interface with methods: generateCompletion(prompt: string, options?: AIOptions): Promise<string>; calculateTokens(text: string): number; getName(): string; getModel(): string; getDefaultModel(): string; isAvailable(): Promise<boolean>.\n<info added on 2025-08-06T11:06:15.795Z>\nFile successfully updated with expanded interface implementation details including comprehensive method signatures and supporting interfaces: AIOptions with full parameter set (temperature, maxTokens, stream, model, topP, topK, frequencyPenalty, presencePenalty, stopSequences, systemPrompt), AIResponse structure with content and usage metadata, AIModel interface for model information, ProviderInfo for capability tracking, ProviderUsageStats for usage monitoring, AIProviderConfig for initialization, and additional interfaces for streaming support. Documented BaseAIProvider abstract class implementation with validation, usage tracking, and common utility methods. All interfaces properly typed with strict TypeScript patterns and async/await support. No compilation errors.\n</info added on 2025-08-06T11:06:15.795Z>",
            "status": "done",
            "testStrategy": "Create stub implementation to verify interface completeness, test optional parameters work correctly"
          },
          {
            "id": 5,
            "title": "Create Configuration Interface Definition",
            "description": "Create configuration.interface.ts with IConfiguration interface for all config options",
            "dependencies": [
              "116.1",
              "116.2"
            ],
            "details": "Create interfaces/configuration.interface.ts file. Define IConfiguration interface with properties: projectPath (string), aiProvider (string), apiKeys (Record<string, string>), models (object with main, research, fallback as strings), enableTags (boolean), defaultTag (string), maxConcurrentTasks (number), retryAttempts (number), retryDelay (number). Import necessary types from types/index.ts. Ensure all properties have appropriate types with no 'any' usage.\n<info added on 2025-08-06T11:07:43.367Z>\nImplementation completed successfully. Created comprehensive configuration system with:\n\n- Core IConfiguration interface with all required properties: projectPath, aiProvider, apiKeys, models configuration, providers settings, tasks management, tags configuration, storage options, retry behavior, logging preferences, and security settings\n- Supporting interfaces for each configuration section: ModelConfig for AI model selection, ProviderConfig for API provider settings, TaskSettings for task management options, TagSettings for tag-based organization, StorageSettings for persistence configuration, RetrySettings for error handling, LoggingSettings for debugging options, SecuritySettings for API key management\n- Configuration management system with IConfigurationFactory for creating configs from various sources (file, environment, defaults) and IConfigurationManager for runtime config operations including loading, saving, validation, watching for changes, and merging configurations\n- Validation system with ConfigValidationResult interface for detailed error reporting, ConfigSchema for JSON schema validation, and EnvironmentConfig for environment variable mapping\n- DEFAULT_CONFIG_VALUES constant providing sensible defaults for all configuration options\n- All interfaces properly typed with strict TypeScript typing, no 'any' usage, proper imports from types/index\n- Successfully exported all interfaces through main index.ts for package consumers\n- TypeScript compilation confirmed passing without any type errors\n</info added on 2025-08-06T11:07:43.367Z>",
            "status": "done",
            "testStrategy": "Create sample configuration objects to verify interface covers all needed options, test with partial configs to ensure optional properties work"
          }
        ]
      },
      {
        "id": 117,
        "title": "Implement Storage Layer with Repository Pattern",
        "description": "Create FileStorage class implementing IStorage interface for task persistence",
        "details": "Implement FileStorage class in storage/file-storage.ts following Repository pattern. Constructor accepts projectPath, private basePath property set to {projectPath}/.taskmaster. Implement all IStorage methods: loadTasks, saveTasks, appendTasks, updateTask, deleteTask, exists. Handle file operations with proper error handling (ENOENT returns empty arrays). Use JSON format with tasks array and metadata object containing version and lastModified. Create getTasksPath method to handle tag-based file paths.",
        "testStrategy": "Unit test all FileStorage methods with mock file system, test error scenarios (missing files, invalid JSON), verify tag-based path generation, test concurrent operations, ensure proper directory creation",
        "priority": "high",
        "dependencies": [
          116
        ],
        "status": "done",
        "subtasks": [
          {
            "id": 1,
            "title": "Create FileStorage class structure and constructor",
            "description": "Set up the FileStorage class skeleton with proper TypeScript typing and implement the constructor that accepts projectPath parameter",
            "dependencies": [],
            "details": "Create storage/file-storage.ts file. Import necessary Node.js modules (fs/promises, path). Import IStorage interface and Task type from types. Define FileStorage class implementing IStorage. Create constructor accepting projectPath string parameter. Initialize private basePath property as `${projectPath}/.taskmaster`. Add private property for managing file locks if needed for concurrent operations.",
            "status": "done",
            "testStrategy": "Unit test constructor initialization, verify basePath is correctly set, test with various projectPath inputs including edge cases"
          },
{
|
||||
"id": 2,
|
||||
"title": "Implement file path management and helper methods",
|
||||
"description": "Create internal helper methods for managing file paths and ensuring directory structure exists",
|
||||
"dependencies": [
|
||||
"117.1"
|
||||
],
|
||||
"details": "Implement private getTasksPath(tag?: string) method that returns path to tasks file based on optional tag parameter. If tag provided, return `${basePath}/tasks/${tag}.json`, otherwise `${basePath}/tasks/tasks.json`. Create private ensureDirectoryExists() method that creates .taskmaster and tasks directories if they don't exist using fs.mkdir with recursive option. Add private method for safe JSON parsing with error handling.",
"status": "done",
"testStrategy": "Test getTasksPath with and without tags, verify directory creation works recursively, test JSON parsing with valid and invalid data"
},
{
"id": 3,
"title": "Implement read operations: loadTasks and exists",
"description": "Implement methods for reading tasks from the file system with proper error handling",
"dependencies": [
"117.2"
],
"details": "Implement loadTasks(tag?: string) method: use getTasksPath to get file path, read file using fs.readFile, parse JSON content, return tasks array from parsed data. Handle ENOENT error by returning empty array. Handle JSON parse errors appropriately. Implement exists(tag?: string) method: use fs.access to check if file exists at getTasksPath location, return boolean result.",
"status": "done",
"testStrategy": "Test loadTasks with existing files, missing files (ENOENT), corrupted JSON files. Test exists method with present and absent files"
},
{
"id": 4,
"title": "Implement write operations: saveTasks and appendTasks",
"description": "Implement methods for persisting tasks to the file system with metadata",
"dependencies": [
"117.3"
],
"details": "Implement saveTasks(tasks: Task[], tag?: string) method: ensure directory exists, create data object with tasks array and metadata object containing version (e.g., '1.0.0') and lastModified (ISO timestamp). Write to file using fs.writeFile with JSON.stringify and proper formatting. Implement appendTasks(tasks: Task[], tag?: string) method: load existing tasks, merge with new tasks (avoiding duplicates by ID), call saveTasks with merged array.",
"status": "done",
"testStrategy": "Test saveTasks creates files with correct structure, verify metadata is included, test appendTasks merges correctly without duplicates"
},
{
"id": 5,
"title": "Implement update and delete operations",
"description": "Implement methods for modifying and removing individual tasks with atomic operations",
"dependencies": [
"117.4"
],
"details": "Implement updateTask(taskId: string, updates: Partial<Task>, tag?: string) method: load tasks, find task by ID, merge updates using object spread, save updated tasks array. Return boolean indicating success. Implement deleteTask(taskId: string, tag?: string) method: load tasks, filter out task with matching ID, save filtered array. Return boolean indicating if task was found and deleted. Ensure both operations are atomic using temporary files if needed.",
"status": "done",
"testStrategy": "Test updateTask with existing and non-existing tasks, verify partial updates work correctly. Test deleteTask removes correct task, handles missing tasks gracefully"
}
]
},
{
"id": 118,
"title": "Create AI Provider Base Architecture",
"description": "Implement abstract BaseProvider class and provider interfaces using Template Method pattern",
"details": "Convert existing base-provider.js to TypeScript abstract class BaseProvider implementing IAIProvider. Add protected properties for apiKey and model. Create abstract methods: generateCompletion, calculateTokens, getName, getModel, getDefaultModel. Apply Template Method pattern for common provider logic like error handling and retry logic. Ensure proper type safety throughout.",
"testStrategy": "Create MockProvider extending BaseProvider to test abstract class functionality, verify all abstract methods are properly defined, test error handling and common logic",
"priority": "high",
"dependencies": [
116
],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Convert base-provider.js to TypeScript and define IAIProvider interface",
"description": "Create the IAIProvider interface with all required method signatures and convert the existing base-provider.js file to a TypeScript file with proper type definitions",
"dependencies": [],
"details": "Create src/types/providers.ts with IAIProvider interface containing methods: generateCompletion(prompt: string, options?: CompletionOptions): Promise<CompletionResult>, calculateTokens(text: string): number, getName(): string, getModel(): string, getDefaultModel(): string. Move base-provider.js to src/providers/base-provider.ts and add initial TypeScript types.\n<info added on 2025-08-06T12:16:45.893Z>\nSince the IAIProvider interface already exists in src/interfaces/ai-provider.interface.ts with all required methods and type definitions, update the subtask to focus on converting base-provider.js to TypeScript and implementing the BaseAIProvider abstract class. The conversion should extend the existing BaseAIProvider from src/interfaces/ai-provider.interface.ts rather than creating duplicate interfaces. Ensure the implementation aligns with the comprehensive interface that includes AIOptions, AIResponse, AIModel, ProviderInfo types and methods for streaming, validation, and usage tracking.\n</info added on 2025-08-06T12:16:45.893Z>",
"status": "done",
"testStrategy": "Verify that the interface is properly defined and that TypeScript compilation succeeds without errors"
},
{
"id": 2,
"title": "Implement BaseProvider abstract class with core properties",
"description": "Create the abstract BaseProvider class implementing IAIProvider with protected properties for apiKey and model configuration",
"dependencies": [
"118.1"
],
"details": "In base-provider.ts, define abstract class BaseProvider implements IAIProvider with protected properties: apiKey: string, model: string, maxRetries: number = 3, retryDelay: number = 1000. Add constructor that accepts BaseProviderConfig interface with apiKey and optional model. Implement getModel() method to return current model.\n<info added on 2025-08-06T12:28:45.485Z>\nI've reviewed the existing BaseAIProvider interface in the interfaces file. The task requires creating a separate BaseProvider abstract class in base-provider.ts that implements the IAIProvider interface, with specific protected properties and configuration. This appears to be a deliberate architectural decision to have a more concrete base class with built-in retry logic and configuration management that all provider implementations will extend.\n</info added on 2025-08-06T12:28:45.485Z>\n<info added on 2025-08-06T13:14:24.539Z>\nSuccessfully implemented BaseProvider abstract class:\n\nIMPLEMENTED FILES:\n✅ packages/tm-core/src/providers/base-provider.ts - Created new BaseProvider abstract class\n✅ packages/tm-core/src/providers/index.ts - Updated to export BaseProvider\n\nIMPLEMENTATION DETAILS:\n- Created BaseProviderConfig interface with required apiKey and optional model\n- BaseProvider abstract class implements IAIProvider interface\n- Protected properties implemented as specified:\n - apiKey: string \n - model: string\n - maxRetries: number = 3\n - retryDelay: number = 1000\n- Constructor accepts BaseProviderConfig and sets apiKey and model (using getDefaultModel() if not provided)\n- Implemented getModel() method that returns current model\n- All IAIProvider methods declared as abstract (to be implemented by concrete providers)\n- Uses .js extension for ESM import compatibility\n- TypeScript compilation verified successful\n\nThe BaseProvider provides the foundation for concrete provider implementations with shared retry logic properties and standardized configuration.\n</info 
added on 2025-08-06T13:14:24.539Z>\n<info added on 2025-08-20T17:16:14.037Z>\nREFACTORING REQUIRED: The BaseProvider implementation needs to be relocated from packages/tm-core/src/providers/base-provider.ts to packages/tm-core/src/providers/ai/base-provider.ts following the new directory structure. The class must implement the Template Method pattern with the following structure:\n\n1. Keep constructor concise (under 10 lines) - only initialize apiKey and model properties\n2. Remove maxRetries and retryDelay from constructor - these should be class-level constants or configurable separately\n3. Implement all abstract methods from IAIProvider: generateCompletion, calculateTokens, getName, getModel, getDefaultModel\n4. Add protected template methods for extensibility:\n - validateInput(input: string): void - for input validation with early returns\n - prepareRequest(input: string, options?: any): any - for request preparation\n - handleResponse(response: any): string - for response processing\n - handleError(error: any): never - for consistent error handling\n5. Apply clean code principles: extract complex logic into small focused methods, use early returns to reduce nesting, ensure each method has single responsibility\n\nThe refactored BaseProvider will serve as a robust foundation using Template Method pattern, allowing concrete providers to override specific behaviors while maintaining consistent structure and error handling across all AI provider implementations.\n</info added on 2025-08-20T17:16:14.037Z>\n<info added on 2025-08-21T15:57:30.467Z>\nREFACTORING UPDATE: The BaseProvider implementation in packages/tm-core/src/providers/base-provider.ts is now affected by the core/ folder removal and needs its import paths updated. Since base-provider.ts imports from '../interfaces/provider.interface.js', this import remains valid as both providers/ and interfaces/ are at the same level. No changes needed to BaseProvider imports due to the flattening. 
The file structure reorganization maintains the relative path relationship between providers/ and interfaces/ directories.\n</info added on 2025-08-21T15:57:30.467Z>",
"status": "done",
"testStrategy": "Create a test file that attempts to instantiate BaseProvider directly (should fail) and verify that protected properties are accessible in child classes"
},
{
"id": 3,
"title": "Define abstract methods and implement Template Method pattern",
"description": "Add all abstract methods to BaseProvider and implement the Template Method pattern for common provider operations",
"dependencies": [
"118.2"
],
"details": "Add abstract methods: protected abstract generateCompletionInternal(prompt: string, options?: CompletionOptions): Promise<CompletionResult>, abstract calculateTokens(text: string): number, abstract getName(): string, abstract getDefaultModel(): string. Implement public generateCompletion() as template method that calls generateCompletionInternal() with error handling and retry logic.\n<info added on 2025-08-20T17:16:38.315Z>\nApply Template Method pattern following clean code principles:\n\nDefine abstract methods:\n- protected abstract generateCompletionInternal(prompt: string, options?: CompletionOptions): Promise<CompletionResult>\n- protected abstract calculateTokens(text: string): number\n- protected abstract getName(): string\n- protected abstract getDefaultModel(): string\n- protected abstract getMaxRetries(): number\n- protected abstract getRetryDelay(): number\n\nImplement template method generateCompletion():\n- Call validateInput() with early returns for invalid prompt/options\n- Call prepareRequest() to format the request\n- Execute generateCompletionInternal() with retry logic\n- Call handleResponse() to process the result\n- Call handleError() in catch blocks\n\nAdd protected helper methods:\n- validateInput(prompt: string, options?: CompletionOptions): ValidationResult - Check prompt length, validate options, return early on errors\n- prepareRequest(prompt: string, options?: CompletionOptions): PreparedRequest - Format prompt, merge with defaults, add metadata\n- handleResponse(result: CompletionResult): ProcessedResult - Validate response format, extract completion text, add usage metrics\n- handleError(error: unknown, attempt: number): void - Log error details, determine if retryable, throw TaskMasterError\n\nExtract retry logic helpers:\n- shouldRetry(error: unknown, attempt: number): boolean - Check error type and attempt count\n- calculateBackoffDelay(attempt: number): number - Use exponential backoff with jitter\n- 
isRateLimitError(error: unknown): boolean - Detect rate limit responses\n- isTimeoutError(error: unknown): boolean - Detect timeout errors\n\nUse named constants:\n- DEFAULT_MAX_RETRIES = 3\n- BASE_RETRY_DELAY_MS = 1000\n- MAX_RETRY_DELAY_MS = 32000\n- BACKOFF_MULTIPLIER = 2\n- JITTER_FACTOR = 0.1\n\nEnsure each method stays under 30 lines by extracting complex logic into focused helper methods.\n</info added on 2025-08-20T17:16:38.315Z>",
"status": "done",
"testStrategy": "Create MockProvider extending BaseProvider to verify all abstract methods must be implemented and template method properly delegates to internal methods"
},
{
"id": 4,
"title": "Implement error handling and retry logic with exponential backoff",
"description": "Add comprehensive error handling and retry mechanism with exponential backoff for API calls in the template method",
"dependencies": [
"118.3"
],
"details": "In generateCompletion() template method, wrap generateCompletionInternal() in try-catch with retry logic. Implement exponential backoff: delay * Math.pow(2, attempt). Add error types: ProviderError, RateLimitError, AuthenticationError extending Error. Log errors in development mode only. Handle specific error cases like rate limits (429), authentication errors (401), and network timeouts.",
"status": "done",
"testStrategy": "Test retry logic with MockProvider that fails N times then succeeds, verify exponential backoff timing, test different error scenarios and their handling"
},
{
"id": 5,
"title": "Add validation, logging, and completion options handling",
"description": "Implement input validation, debug logging for development, and proper handling of completion options like temperature and max tokens",
"dependencies": [
"118.4"
],
"details": "Add validatePrompt() method to check for empty/invalid prompts. Add validateOptions() to ensure temperature is between 0-2, maxTokens is positive. Implement debug logging using console.log only when NODE_ENV !== 'production'. Create CompletionOptions interface with optional temperature, maxTokens, topP, frequencyPenalty, presencePenalty. Ensure all validation errors throw descriptive ProviderError instances.",
"status": "done",
"testStrategy": "Test validation with invalid inputs (empty prompts, negative maxTokens, temperature > 2), verify logging only occurs in development, test option merging with defaults"
}
]
},
{
"id": 119,
"title": "Implement Provider Factory with Dynamic Imports",
"description": "Create ProviderFactory class using Factory pattern for AI provider instantiation",
"details": "Implement ProviderFactory class in ai/provider-factory.ts with static create method. Use switch statement for provider selection ('anthropic', 'openai', 'google'). Implement dynamic imports for each provider to enable tree-shaking. Return Promise<IAIProvider> from create method. Handle unknown providers with meaningful error messages. Ensure proper typing for configuration object.",
"testStrategy": "Test factory with each provider type, verify dynamic imports work correctly, test error handling for unknown providers, mock dynamic imports for unit testing",
"priority": "medium",
"dependencies": [
118
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create ProviderFactory class structure and types",
"description": "Set up the ProviderFactory class file with proper TypeScript types and interfaces",
"dependencies": [],
"details": "Create ai/provider-factory.ts file. Define ProviderFactory class with static create method signature. Import IAIProvider interface from base provider. Define ProviderType as union type ('anthropic' | 'openai' | 'google'). Set up proper return type as Promise<IAIProvider> for the create method to support dynamic imports.\n<info added on 2025-08-20T17:16:56.506Z>\nClean code architecture implementation: Move to src/providers/ai/provider-factory.ts. Follow Single Responsibility Principle - factory only creates providers, no other responsibilities. Create validateProviderConfig() method for provider configuration validation. Define PROVIDER_NAMES constant object with provider string values. Implement create() method with early returns pattern for better readability. Apply Dependency Inversion - factory depends on IAIProvider interface abstraction, not concrete implementations. Keep method under 40 lines following clean code practices.\n</info added on 2025-08-20T17:16:56.506Z>",
"status": "pending",
"testStrategy": "Verify file structure and type definitions compile correctly"
},
{
"id": 2,
"title": "Implement provider selection logic with switch statement",
"description": "Add the core switch statement logic to handle different provider types",
"dependencies": [
"119.1"
],
"details": "Inside the static create method, implement switch statement on provider type parameter. Add cases for 'anthropic', 'openai', and 'google'. Add default case that throws a descriptive error for unknown providers (e.g., throw new Error(`Unknown provider: ${providerType}`)). Structure each case to prepare for dynamic imports.",
"status": "pending",
"testStrategy": "Test switch statement with valid and invalid provider types, verify error messages"
},
{
"id": 3,
"title": "Add dynamic imports for each provider",
"description": "Implement dynamic import() statements for lazy loading provider modules",
"dependencies": [
"119.2"
],
"details": "In each switch case, use dynamic import() to load the provider module: for 'anthropic' case use await import('./providers/anthropic-provider'), similar for OpenAI and Google providers. Extract the default export or specific class from each dynamic import. This enables tree-shaking by only loading the selected provider.",
"status": "pending",
"testStrategy": "Mock dynamic imports in tests, verify only requested provider is loaded"
},
{
"id": 4,
"title": "Instantiate providers with configuration",
"description": "Create provider instances with proper configuration passing",
"dependencies": [
"119.3"
],
"details": "After each dynamic import, instantiate the provider class with the configuration object passed to create method. Ensure configuration object is properly typed (use IConfiguration or relevant subset). Return the instantiated provider instance. Handle any instantiation errors and wrap them with context about which provider failed.",
"status": "pending",
"testStrategy": "Test provider instantiation with various configuration objects, verify configuration is passed correctly"
},
{
"id": 5,
"title": "Add error handling and validation",
"description": "Implement comprehensive error handling for all failure scenarios",
"dependencies": [
"119.4"
],
"details": "Wrap dynamic imports in try-catch blocks to handle module loading failures. Add validation for configuration object before passing to providers. Create custom error messages that include the provider type and specific failure reason. Consider adding a ProviderFactoryError custom error class. Ensure all errors bubble up properly while maintaining async/await chain.",
"status": "pending",
"testStrategy": "Test various error scenarios: missing provider modules, invalid configurations, network failures during dynamic import"
}
]
},
{
"id": 120,
"title": "Implement Anthropic Provider",
"description": "Create AnthropicProvider class extending BaseProvider with full Anthropic SDK integration",
"details": "Create AnthropicProvider class in ai/providers/anthropic-provider.ts extending BaseProvider. Import and use @anthropic-ai/sdk. Initialize private client property in constructor. Implement all abstract methods: generateCompletion using Claude API, calculateTokens using appropriate tokenizer, getName returning 'anthropic', getModel returning current model, getDefaultModel returning 'claude-3-sonnet-20240229'. Wrap API errors with context.",
"testStrategy": "Mock Anthropic SDK for unit tests, test API error handling, verify token calculation accuracy, test with different model configurations",
"priority": "high",
"dependencies": [
118
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Set up AnthropicProvider class structure and dependencies",
"description": "Create the AnthropicProvider class file with proper imports and class structure extending BaseProvider",
"dependencies": [],
"details": "Create ai/providers/anthropic-provider.ts file. Import BaseProvider from base-provider.ts and import Anthropic from @anthropic-ai/sdk. Import necessary types including IAIProvider, ChatMessage, and ChatCompletion. Set up the class declaration extending BaseProvider with proper TypeScript typing. Add private client property declaration of type Anthropic.\n<info added on 2025-08-20T17:17:15.019Z>\nFile should be created at src/providers/ai/adapters/anthropic-provider.ts instead of ai/providers/anthropic-provider.ts. Follow clean code principles: keep constructor minimal (under 10 lines) with only client initialization. Extract API call logic into separate small methods (each under 20 lines). Use early returns in generateCompletionInternal() for better readability. Extract error mapping logic to a dedicated mapAnthropicError() method. Avoid magic strings - define constants for model names and API parameters.\n</info added on 2025-08-20T17:17:15.019Z>",
"status": "pending",
"testStrategy": "Verify file structure and imports compile without errors, ensure class properly extends BaseProvider"
},
{
"id": 2,
"title": "Implement constructor and client initialization",
"description": "Create the constructor that accepts configuration and initializes the Anthropic SDK client",
"dependencies": [
"120.1"
],
"details": "Implement constructor accepting IConfiguration parameter. Call super(config) to initialize BaseProvider. Initialize the private client property by creating new Anthropic instance with apiKey from config.apiKeys.anthropic. Add validation to ensure API key exists, throwing meaningful error if missing. Store the model configuration from config.model or use default.",
"status": "pending",
"testStrategy": "Test constructor with valid and invalid configurations, verify client initialization, test API key validation"
},
{
"id": 3,
"title": "Implement generateCompletion method with Claude API",
"description": "Implement the main generateCompletion method that calls Anthropic's Claude API and handles responses",
"dependencies": [
"120.2"
],
"details": "Implement async generateCompletion method accepting ChatMessage array. Map ChatMessage format to Anthropic's expected format (role and content). Use client.messages.create() with appropriate parameters including model, max_tokens, and messages. Transform Anthropic response format to ChatCompletion interface. Handle streaming vs non-streaming responses. Implement proper error handling wrapping API errors with context.",
"status": "pending",
"testStrategy": "Mock Anthropic SDK client.messages.create, test with various message formats, verify response transformation, test error scenarios"
},
{
"id": 4,
"title": "Implement token calculation and utility methods",
"description": "Implement calculateTokens method and other required abstract methods from BaseProvider",
"dependencies": [
"120.3"
],
"details": "Implement calculateTokens method using appropriate tokenizer (tiktoken or claude-tokenizer if available). Implement getName method returning 'anthropic' string constant. Implement getModel method returning current model from configuration. Implement getDefaultModel method returning 'claude-3-sonnet-20240229'. Add any additional helper methods for token counting or message formatting.",
"status": "pending",
"testStrategy": "Test token calculation accuracy with various input strings, verify utility methods return correct values"
},
{
"id": 5,
"title": "Add comprehensive error handling and type exports",
"description": "Implement robust error handling throughout the class and ensure proper TypeScript exports",
"dependencies": [
"120.4"
],
"details": "Wrap all Anthropic API calls in try-catch blocks. Create custom error messages that include context about the operation being performed. Handle rate limiting errors specifically. Ensure all methods have proper TypeScript return types. Export the AnthropicProvider class as default export. Add JSDoc comments for all public methods. Ensure proper error propagation maintaining stack traces.",
"status": "pending",
"testStrategy": "Test various API error scenarios, verify error messages include context, test rate limit handling, ensure TypeScript types are correctly exported"
}
]
},
{
"id": 121,
"title": "Create Prompt Builder and Task Parser",
"description": "Implement PromptBuilder class and TaskParser with Dependency Injection",
"details": "Create PromptBuilder class with buildParsePrompt and buildExpandPrompt methods using template literals. Include specific JSON format instructions. Create TaskParser class accepting IAIProvider and IConfiguration via constructor (Dependency Injection). Implement parsePRD method to read PRD file, use PromptBuilder to create prompt, call AI provider, extract tasks from response, and enrich with metadata. Handle parsing errors gracefully.",
"testStrategy": "Unit test prompt building with various inputs, mock AI provider responses, test JSON extraction logic, verify error handling for malformed responses, integration test with real PRD files",
"priority": "high",
"dependencies": [
119,
120
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create PromptBuilder Class Structure",
"description": "Implement the PromptBuilder class with template methods for generating AI prompts",
"dependencies": [],
"details": "Create src/services/prompt-builder.ts. Define PromptBuilder class with two public methods: buildParsePrompt(prdContent: string): string and buildExpandPrompt(task: Task): string. Use template literals to construct prompts with clear JSON format instructions. Include system instructions for AI to follow specific output formats. Add private helper methods for common prompt sections like JSON schema definitions and response format examples.\n<info added on 2025-08-20T17:17:31.467Z>\nRefactor to src/services/prompts/prompt-builder.ts to separate concerns. Implement buildTaskPrompt() method. Define prompt template constants: PARSE_PROMPT_TEMPLATE, EXPAND_PROMPT_TEMPLATE, TASK_PROMPT_TEMPLATE, JSON_FORMAT_INSTRUCTIONS. Move JSON schema definitions and format instructions to constants. Ensure each template uses template literals with ${} placeholders. Keep all methods under 40 lines by extracting logic into focused helper methods. Use descriptive constant names for all repeated strings or instruction blocks.\n</info added on 2025-08-20T17:17:31.467Z>",
"status": "pending",
"testStrategy": "Unit test both prompt methods with sample inputs. Verify prompt contains required JSON structure instructions. Test edge cases like empty PRD content or minimal task objects."
},
{
"id": 2,
"title": "Implement TaskParser Class with DI",
"description": "Create TaskParser class accepting IAIProvider and IConfiguration through constructor injection",
"dependencies": [
"121.1"
],
"details": "Create src/services/task-parser.ts. Define TaskParser class with constructor(private aiProvider: IAIProvider, private config: IConfiguration). Add private promptBuilder property initialized in constructor. Implement basic class structure with placeholder methods. Ensure proper TypeScript typing for all parameters and properties. Follow dependency injection pattern for testability.\n<info added on 2025-08-20T17:17:49.624Z>\nUpdate file location to src/services/tasks/task-parser.ts instead of src/services/task-parser.ts. Refactor parsePRD() method to stay under 40 lines by extracting logic into helper methods: readPRD(), validatePRD(), extractTasksFromResponse(), and enrichTasksWithMetadata(). Each helper method should be under 20 lines. Implement early returns in validation methods for cleaner code flow. Remove any file I/O operations from the parser class - delegate all storage operations to injected dependencies. Ensure clean separation of concerns with parser focused only on task parsing logic.\n</info added on 2025-08-20T17:17:49.624Z>",
"status": "pending",
"testStrategy": "Test constructor properly stores injected dependencies. Verify class instantiation with mock providers. Test TypeScript compilation with proper interface implementations."
},
{
"id": 3,
"title": "Implement parsePRD Method Core Logic",
"description": "Create the main parsePRD method that orchestrates the PRD parsing workflow",
"dependencies": [
"121.2"
],
"details": "Implement parsePRD(filePath: string): Promise<ParsedTask[]> method in TaskParser. Read PRD file using fs.promises.readFile. Use promptBuilder.buildParsePrompt() to create AI prompt. Call aiProvider.generateResponse() with constructed prompt. Extract JSON array from AI response using regex or JSON.parse. Handle potential parsing errors with try-catch blocks. Return empty array on errors after logging.",
"status": "pending",
"testStrategy": "Test with mock AI provider returning valid JSON. Test file reading with various file paths. Mock file system for controlled testing. Verify proper error logging without throwing."
},
{
"id": 4,
"title": "Add Task Enrichment and Metadata",
"description": "Enhance parsed tasks with additional metadata and validation",
"dependencies": [
"121.3"
],
"details": "After extracting tasks from AI response, enrich each task with metadata: add createdAt timestamp, set initial status to 'pending', validate required fields (id, title, description). Add priority field with default 'medium' if not provided. Ensure all tasks have valid structure before returning. Create private enrichTask(task: any): ParsedTask method for this logic. Handle missing or malformed task data gracefully.",
"status": "pending",
"testStrategy": "Test enrichment adds all required metadata. Verify validation catches malformed tasks. Test default values are applied correctly. Ensure timestamps are properly formatted."
},
{
"id": 5,
"title": "Implement Comprehensive Error Handling",
"description": "Add robust error handling throughout the TaskParser implementation",
|
||||
"dependencies": [
|
||||
"121.4"
|
||||
],
|
||||
"details": "Wrap file reading in try-catch to handle FILE_NOT_FOUND errors. Catch AI provider errors and wrap in appropriate TaskMasterError. Handle JSON parsing errors when extracting from AI response. Add specific error handling for network timeouts, rate limits, and malformed responses. Log errors with context in development mode only. Return meaningful error messages without exposing internals. Ensure all errors are properly typed as TaskMasterError instances.",
|
||||
"status": "pending",
|
||||
"testStrategy": "Test each error scenario separately: missing files, AI provider failures, malformed JSON, network errors. Verify proper error codes are used. Test that errors don't expose sensitive information."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
"id": 122,
"title": "Implement Configuration Management",
"description": "Create ConfigManager class with Zod validation for configuration",
"details": "Implement ConfigManager in config/config-manager.ts accepting Partial<IConfiguration> in constructor. Use Zod to create validation schema matching IConfiguration interface. Implement get method with TypeScript generics for type-safe access, getAll returning full config, validate method for validation. Set defaults: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true. Handle validation errors with clear messages.",
"testStrategy": "Test with various configuration combinations, verify Zod validation catches invalid configs, test default values, ensure type safety of get method",
"priority": "medium",
"dependencies": [
116
],
"status": "in-progress",
"subtasks": [
{
"id": 1,
"title": "Create Zod validation schema for IConfiguration",
"description": "Define a Zod schema that matches the IConfiguration interface structure with proper validation rules",
"dependencies": [],
"details": "Create configSchema in config/config-manager.ts using z.object() to define validation for all IConfiguration properties. Include string validations for projectPath, enum validation for aiProvider ('anthropic', 'openai', etc.), boolean for enableTags, and any other configuration fields. Use z.string().min(1) for required strings, z.enum() for provider types, and appropriate validators for other fields.\n<info added on 2025-08-06T13:14:58.822Z>\nCompleted Zod validation schema implementation in packages/tm-core/src/config/validation.ts\n\nIMPLEMENTATION DETAILS:\n- Created comprehensive Zod schemas matching IConfiguration interface structure exactly\n- All required schemas exported as expected by config-schema.ts:\n * configurationSchema - Main configuration validation with custom refinements\n * partialConfigurationSchema - For partial updates (using base schema without refinements)\n * modelConfigSchema - Model configuration validation\n * providerConfigSchema - AI provider configuration validation \n * taskSettingsSchema - Task management settings validation\n * loggingSettingsSchema/loggingConfigSchema - Logging configuration (with legacy alias)\n * tagSettingsSchema - Tag management settings validation\n * storageSettingsSchema - Storage configuration validation\n * retrySettingsSchema - Retry/resilience settings validation\n * securitySettingsSchema - Security settings validation\n * cacheConfigSchema - Cache configuration stub (for consistency)\n\nKEY FEATURES:\n- Proper Zod validation rules applied (string lengths, number ranges, enums)\n- Custom refinements for business logic (maxRetryDelay >= retryDelay)\n- Comprehensive enum schemas for all union types\n- Legacy alias support for backwards compatibility\n- All 13 nested interface schemas implemented with appropriate constraints\n- Type exports for runtime validation\n\nVALIDATION INCLUDES:\n- String validations with min lengths for required fields\n- Enum validation for providers, priorities, complexities, log levels, etc.\n- Number range validations (min/max constraints)\n- URL validation for baseUrl fields\n- Array validations with proper item types\n- Record validations for dynamic key-value pairs\n- Optional field handling with appropriate defaults\n\nTESTED AND VERIFIED:\n- All schemas compile correctly with TypeScript\n- Import/export chain works properly through config-schema.ts\n- Basic validation tests pass for key schemas\n- No conflicts with existing IConfiguration interface structure\n</info added on 2025-08-06T13:14:58.822Z>\n<info added on 2025-08-20T17:18:12.343Z>\nCreated ConfigManager class at src/config/config-manager.ts with the following implementation:\n\nSTRUCTURE:\n- DEFAULT_CONFIG constant defined with complete default values for all configuration properties\n- Constructor validates config using validate() method (follows Fail-Fast principle)\n- Constructor kept under 15 lines as required\n- Type-safe get<K>() method using TypeScript generics for accessing specific config properties\n- getAll() method returns complete validated configuration\n- validate() method extracted for configuration validation using Zod schema\n- mergeWithDefaults() helper extracted for merging partial config with defaults\n\nKEY IMPLEMENTATION DETAILS:\n- Imports configurationSchema from src/config/schemas/config.schema.ts\n- Uses z.infer<typeof configurationSchema> for type safety\n- Validates on construction with clear error messages\n- No nested ternaries used\n- Proper error handling with ConfigValidationError\n- Type-safe property access with keyof IConfiguration constraint\n\nMETHODS:\n- constructor(config?: Partial<IConfiguration>) - Validates and stores config\n- get<K extends keyof IConfiguration>(key: K): IConfiguration[K] - Type-safe getter\n- getAll(): IConfiguration - Returns full config\n- private validate(config: unknown): IConfiguration - Validates using Zod\n- private mergeWithDefaults(config: Partial<IConfiguration>): IConfiguration - Merges with defaults\n\nAlso created src/config/schemas/config.schema.ts importing the configurationSchema from validation.ts for cleaner organization.\n</info added on 2025-08-20T17:18:12.343Z>",
"status": "review",
"testStrategy": "Test schema validation with valid and invalid configurations, ensure all IConfiguration fields are covered"
},
{
"id": 2,
"title": "Implement ConfigManager class constructor and storage",
"description": "Create ConfigManager class with constructor that accepts Partial<IConfiguration> and initializes configuration with defaults",
"dependencies": [
"122.1"
],
"details": "Define ConfigManager class with private config property. In constructor, merge provided partial config with defaults (projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true). Store the merged configuration internally. Ensure the class is properly typed with IConfiguration interface.",
"status": "pending",
"testStrategy": "Test constructor with various partial configs, verify defaults are applied correctly, test with empty config"
},
{
"id": 3,
"title": "Implement validate method with error handling",
"description": "Create validate method that uses Zod schema to validate configuration and provides clear error messages",
"dependencies": [
"122.1",
"122.2"
],
"details": "Implement validate(): void method that runs configSchema.parse(this.config) within try-catch block. On ZodError, transform the error into user-friendly messages that clearly indicate which fields are invalid and why. Consider creating a custom error class for configuration validation errors. The method should throw if validation fails.",
"status": "pending",
"testStrategy": "Test with invalid configs to ensure proper error messages, verify all validation rules work correctly"
},
{
"id": 4,
"title": "Implement type-safe get method with generics",
"description": "Create generic get method for retrieving individual configuration values with TypeScript type inference",
"dependencies": [
"122.2"
],
"details": "Implement get<K extends keyof IConfiguration>(key: K): IConfiguration[K] method that returns the value for a specific configuration key. Use TypeScript generics and keyof operator to ensure type safety. The method should provide proper type inference so consumers get the correct type based on the key they request.",
"status": "pending",
"testStrategy": "Test type inference with different keys, verify TypeScript catches invalid keys at compile time"
},
{
"id": 5,
"title": "Implement getAll method and finalize class",
"description": "Create getAll method to return full configuration and ensure proper exports",
"dependencies": [
"122.2",
"122.3",
"122.4"
],
"details": "Implement getAll(): IConfiguration method that returns a deep copy of the entire configuration object to prevent external mutations. Add JSDoc comments to all public methods. Export the ConfigManager class and ensure it's properly integrated with the module structure. Consider adding a static factory method if needed.",
"status": "pending",
"testStrategy": "Test getAll returns complete config, verify returned object is immutable, test integration with other modules"
}
]
},
{
"id": 123,
"title": "Create Utility Functions and Error Handling",
"description": "Implement ID generation utilities and custom error classes",
"details": "Create id-generator.ts with generateTaskId and generateSubtaskId functions using specified formats with timestamp and random components. Create TaskMasterError class extending Error in errors/task-master-error.ts with error codes (FILE_NOT_FOUND, PARSE_ERROR, etc.). Ensure errors don't expose internal details. Add development-only logging.",
"testStrategy": "Test ID generation for uniqueness and format compliance, verify error classes properly extend Error, test error message formatting, ensure no sensitive data in errors",
"priority": "low",
"dependencies": [
116
],
"status": "in-progress",
"subtasks": [
{
"id": 1,
"title": "Create ID generation utilities module",
"description": "Implement the id-generator.ts module with functions for generating unique task and subtask IDs",
"dependencies": [],
"details": "Create src/utils/id-generator.ts file. Implement generateTaskId() function that returns format 'TASK-{timestamp}-{random}' where timestamp is Date.now() and random is 4-character alphanumeric string. Implement generateSubtaskId(parentId) function that returns format '{parentId}.{sequential}' where sequential increments based on existing subtasks. Use crypto.randomBytes or Math.random for randomness. Export both functions as named exports.\n<info added on 2025-08-06T12:42:22.203Z>\nThe ID generator module has been successfully implemented with the following completed features:\n- generateTaskId() function that creates unique IDs in 'TASK-{timestamp}-{random}' format\n- generateSubtaskId() function that generates sequential subtask IDs in '{parentId}.{sequential}' format\n- Input validation functions to ensure ID integrity\n- Proper TypeScript type definitions and interfaces\n- Comprehensive JSDoc documentation with usage examples\n- All functions exported as named exports from src/utils/id-generator.ts\n</info added on 2025-08-06T12:42:22.203Z>",
"status": "done",
"testStrategy": "Test uniqueness by generating 1000 IDs and checking for duplicates, verify format compliance with regex, test subtask ID sequential numbering"
},
{
"id": 2,
"title": "Create base error class structure",
"description": "Implement the TaskMasterError base class that extends Error with proper error handling capabilities",
"dependencies": [],
"details": "Create src/errors/task-master-error.ts file. Define TaskMasterError class extending Error. Add constructor accepting (message: string, code: string, details?: any). Set this.name = 'TaskMasterError'. Create error code constants: FILE_NOT_FOUND = 'FILE_NOT_FOUND', PARSE_ERROR = 'PARSE_ERROR', VALIDATION_ERROR = 'VALIDATION_ERROR', API_ERROR = 'API_ERROR'. Override toString() to format errors appropriately. Ensure stack trace is preserved.\n<info added on 2025-08-06T13:13:11.635Z>\nCompleted TaskMasterError base class implementation:\n\nIMPLEMENTATION DETAILS:\n- TaskMasterError class fully implemented extending Error\n- Added proper prototype chain fix with Object.setPrototypeOf(this, TaskMasterError.prototype)\n- Includes all required properties: code (from ERROR_CODES), timestamp, context, cause\n- toJSON method implemented for full serialization support\n- Error sanitization implemented via getSanitizedDetails() and containsSensitiveInfo() methods\n- Error chaining with cause property fully supported\n- Additional utility methods: getUserMessage(), toString(), is(), hasCode(), withContext(), wrap()\n\nSUCCESS CRITERIA VERIFIED:\n✅ TaskMasterError class fully implemented\n✅ Extends Error with proper prototype chain fix (Object.setPrototypeOf)\n✅ Includes all required properties and methods\n✅ toJSON method for serialization\n✅ Error sanitization logic for production (containsSensitiveInfo method)\n✅ Comprehensive error context and metadata support\n\nFILE MODIFIED: packages/tm-core/src/errors/task-master-error.ts\n</info added on 2025-08-06T13:13:11.635Z>\n<info added on 2025-08-20T17:18:38.499Z>\nRefactored to follow clean code principles:\n\nCLEAN CODE IMPROVEMENTS:\n- Moved TaskMasterError class to be under 40 lines by extracting methods\n- Created separate error-codes.ts file with ERROR_CODES constant object\n- Extracted sanitizeMessage() method to handle message sanitization\n- Extracted addContext() method for adding error context\n- Extracted toJSON() method for serialization\n- Added static factory methods: fromError(), notFound(), parseError(), validationError(), apiError()\n- Improved error chaining with proper 'cause' property handling\n- Ensured user-friendly messages that hide implementation details\n- Maintained all existing functionality while improving code organization\n\nFILES CREATED/MODIFIED:\n- packages/tm-core/src/errors/error-codes.ts (new file with ERROR_CODES)\n- packages/tm-core/src/errors/task-master-error.ts (refactored to under 40 lines)\n</info added on 2025-08-20T17:18:38.499Z>",
"status": "review",
"testStrategy": "Test that error extends Error properly, verify error.name is set correctly, test toString() output format, ensure stack trace exists"
},
{
"id": 3,
"title": "Implement error sanitization and security features",
"description": "Add security features to prevent exposure of sensitive internal details in error messages",
"dependencies": [
"123.2"
],
"details": "In TaskMasterError class, add private sanitizeDetails() method that removes sensitive data like API keys, file paths beyond project root, and internal state. Implement toJSON() method that returns sanitized error object for external consumption. Add static isSafeForProduction() method to validate error messages don't contain patterns like absolute paths, environment variables, or API credentials. Store original details in private property for debugging.",
"status": "pending",
"testStrategy": "Test sanitization removes absolute paths, API keys, and sensitive patterns, verify toJSON returns safe object, test original details are preserved internally"
},
{
"id": 4,
"title": "Add development-only logging functionality",
"description": "Implement conditional logging that only operates in development environment",
"dependencies": [
"123.2",
"123.3"
],
"details": "In task-master-error.ts, add static enableDevLogging property defaulting to process.env.NODE_ENV !== 'production'. Add logError() method that console.error's full error details only when enableDevLogging is true. Include timestamp, error code, sanitized message, and full stack trace in dev logs. In production, log only error code and safe message. Create static setDevLogging(enabled: boolean) to control logging.",
"status": "pending",
"testStrategy": "Test logging output in dev vs production modes, verify sensitive data isn't logged in production, test log format includes all required fields"
},
{
"id": 5,
"title": "Create specialized error subclasses",
"description": "Implement specific error classes for different error scenarios inheriting from TaskMasterError",
"dependencies": [
"123.2",
"123.3",
"123.4"
],
"details": "Create FileNotFoundError extending TaskMasterError with code FILE_NOT_FOUND, accepting filePath parameter. Create ParseError with code PARSE_ERROR for parsing failures, accepting source and line number. Create ValidationError with code VALIDATION_ERROR for data validation, accepting field and value. Create APIError with code API_ERROR for external API failures, accepting statusCode and provider. Each should format appropriate user-friendly messages while storing technical details internally.",
"status": "pending",
"testStrategy": "Test each error class constructor and message formatting, verify inheritance chain, test that each error type has correct code, ensure specialized errors work with logging system"
}
]
},
{
"id": 124,
"title": "Implement TaskMasterCore Facade",
"description": "Create main TaskMasterCore class as Facade pattern entry point",
"details": "Create TaskMasterCore class in index.ts with private properties for config, storage, aiProvider, and parser. Implement initialize method for lazy loading of AI provider. Implement parsePRD method that coordinates parser, storage, and configuration. Implement getTasks for retrieving stored tasks. Apply Facade pattern to hide complexity. Export createTaskMaster factory function, all types and interfaces. Use proper import paths with .js extensions for ESM.",
"testStrategy": "Integration test full parse flow, test lazy initialization, verify facade properly delegates to subsystems, test with different configurations, ensure exports are correct",
"priority": "high",
"dependencies": [
117,
121,
122
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create TaskMasterCore class structure with type definitions",
"description": "Set up the main TaskMasterCore class in src/index.ts with all necessary imports, type definitions, and class structure following the Facade pattern",
"dependencies": [],
"details": "Create src/index.ts file. Import IConfiguration, ITaskStorage, IAIProvider, and IPRDParser interfaces. Define TaskMasterCore class with private properties: _config (ConfigManager), _storage (ITaskStorage), _aiProvider (IAIProvider | null), _parser (IPRDParser | null). Add constructor accepting options parameter of type Partial<IConfiguration>. Initialize _config with ConfigManager, set other properties to null for lazy loading. Import all necessary types from their respective modules using .js extensions for ESM compatibility.\n<info added on 2025-08-20T17:18:56.625Z>\nApply Facade pattern principles: simple public interface, hide subsystem complexity. Keep all methods under 30 lines by extracting logic. Implement lazy initialization pattern in initialize() method - only create dependencies when first needed. Extract createDependencies() private helper method to handle creation of storage, AI provider, and parser instances. Add createTaskMaster() factory function for convenient instance creation. Use barrel exports pattern - export all public types and interfaces that clients need (IConfiguration, ITaskStorage, IAIProvider, IPRDParser, TaskMasterCore). Follow Interface Segregation Principle - only expose methods and types that clients actually need, hide internal implementation details.\n</info added on 2025-08-20T17:18:56.625Z>",
"status": "pending",
"testStrategy": "Test class instantiation with various configuration options, verify private properties are correctly initialized, ensure TypeScript types are properly enforced"
},
{
"id": 2,
"title": "Implement initialize method for lazy loading",
"description": "Create the initialize method that handles lazy loading of AI provider and parser instances based on configuration",
"dependencies": [
"124.1"
],
"details": "Implement async initialize() method in TaskMasterCore. Check if _aiProvider is null, if so create appropriate provider based on config.aiProvider value using a factory pattern or switch statement. Similarly initialize _parser if null. Store instances in private properties for reuse. Handle provider initialization errors gracefully. Ensure method is idempotent - calling multiple times should not recreate instances. Use dynamic imports if needed for code splitting.",
"status": "pending",
"testStrategy": "Test lazy initialization occurs only once, verify correct provider is instantiated based on config, test error handling for invalid providers, ensure idempotency"
},
{
"id": 3,
"title": "Implement parsePRD method with coordination logic",
"description": "Create parsePRD method that coordinates the parser, AI provider, and storage to parse PRD content and store results",
"dependencies": [
"124.1",
"124.2"
],
"details": "Implement async parsePRD(content: string) method. First call initialize() to ensure components are loaded. Use _parser.parse() to parse the PRD content, passing the AI provider for task generation. Take the parsed tasks and use _storage.saveTasks() to persist them. Handle errors from parser or storage operations. Return the parsed tasks array. Implement proper error context and logging for debugging.",
"status": "pending",
"testStrategy": "Integration test with mock parser and storage, verify coordination between components, test error propagation from subsystems, ensure tasks are properly stored"
},
{
"id": 4,
"title": "Implement getTasks method and other facade methods",
"description": "Create getTasks method and any other necessary facade methods to retrieve and manage tasks",
"dependencies": [
"124.1"
],
"details": "Implement async getTasks() method that calls _storage.loadTasks() and returns the tasks array. Add getTask(id: string) for retrieving single task. Consider adding updateTask, deleteTask methods if needed. All methods should follow facade pattern - simple interface hiding complex operations. Add proper TypeScript return types for all methods. Handle storage not initialized scenarios.",
"status": "pending",
"testStrategy": "Test task retrieval with various scenarios, verify proper delegation to storage, test edge cases like empty task lists or invalid IDs"
},
{
"id": 5,
"title": "Create factory function and module exports",
"description": "Implement createTaskMaster factory function and set up all module exports including types and interfaces",
"dependencies": [
"124.1",
"124.2",
"124.3",
"124.4"
],
"details": "Create createTaskMaster(options?: Partial<IConfiguration>) factory function that returns a new TaskMasterCore instance. Export this as the primary entry point. Re-export all types and interfaces from submodules: ITask, IConfiguration, IAIProvider, ITaskStorage, IPRDParser, etc. Use 'export type' for type-only exports. Ensure all imports use .js extensions for ESM. Create index.d.ts if needed for better TypeScript support. Add JSDoc comments for public API.",
"status": "pending",
"testStrategy": "Test factory function creates proper instances, verify all exports are accessible, test TypeScript type inference works correctly, ensure ESM imports resolve properly"
}
]
},
{
"id": 125,
"title": "Create Placeholder Providers and Complete Testing",
"description": "Implement placeholder providers for OpenAI and Google, create comprehensive test suite",
"details": "Create OpenAIProvider and GoogleProvider classes extending BaseProvider, throwing 'not yet implemented' errors. Create MockProvider in tests/mocks for testing without API calls. Write unit tests for TaskParser, integration tests for parse-prd flow, ensure 80% code coverage. Follow kebab-case naming for test files. Test error scenarios comprehensively.",
"testStrategy": "Run full test suite with coverage report, verify all edge cases are tested, ensure mock provider behaves like real providers, test both success and failure paths",
"priority": "medium",
"dependencies": [
120,
124
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create OpenAIProvider placeholder class",
"description": "Implement OpenAIProvider class that extends BaseProvider with all required methods throwing 'not yet implemented' errors",
"dependencies": [],
"details": "Create src/providers/openai-provider.ts file. Import BaseProvider from base-provider.ts. Implement class OpenAIProvider extends BaseProvider. Override parseText() method to throw new Error('OpenAI provider not yet implemented'). Add proper TypeScript types and JSDoc comments. Export the class as default.",
"status": "pending",
"testStrategy": "Write unit test to verify OpenAIProvider extends BaseProvider correctly and throws expected error when parseText is called"
},
{
"id": 2,
"title": "Create GoogleProvider placeholder class",
"description": "Implement GoogleProvider class that extends BaseProvider with all required methods throwing 'not yet implemented' errors",
"dependencies": [],
"details": "Create src/providers/google-provider.ts file. Import BaseProvider from base-provider.ts. Implement class GoogleProvider extends BaseProvider. Override parseText() method to throw new Error('Google provider not yet implemented'). Add proper TypeScript types and JSDoc comments. Export the class as default.",
"status": "pending",
"testStrategy": "Write unit test to verify GoogleProvider extends BaseProvider correctly and throws expected error when parseText is called"
},
{
"id": 3,
"title": "Create MockProvider for testing",
"description": "Implement MockProvider class in tests/mocks directory that simulates provider behavior without making actual API calls",
"dependencies": [],
"details": "Create tests/mocks/mock-provider.ts file. Extend BaseProvider class. Implement parseText() to return predefined mock task data based on input. Add methods to configure mock responses, simulate errors, and track method calls. Include delay simulation for realistic testing. Export class and helper functions for test setup.",
"status": "pending",
"testStrategy": "Test MockProvider returns consistent mock data, can simulate different scenarios (success/failure), and properly tracks method invocations"
},
{
"id": 4,
"title": "Write unit tests for TaskParser",
"description": "Create comprehensive unit tests for TaskParser class covering all methods and edge cases",
"dependencies": [
"125.3"
],
"details": "Create tests/unit/task-parser.test.ts file. Test TaskParser constructor with different providers. Test parseFromText method with valid/invalid inputs. Test error handling for malformed responses. Use MockProvider to simulate API responses. Test task ID generation and structure validation. Ensure all public methods are covered.",
"status": "pending",
"testStrategy": "Achieve 100% code coverage for TaskParser class, test both success and failure paths, verify error messages are appropriate"
},
{
"id": 5,
"title": "Write integration tests for parse-prd flow",
"description": "Create end-to-end integration tests for the complete PRD parsing workflow",
"dependencies": [
"125.3",
"125.4"
],
"details": "Create tests/integration/parse-prd-flow.test.ts file. Test full flow from PRD input to task output. Test with MockProvider simulating successful parsing. Test error scenarios (file not found, parse errors, network failures). Test task dependency resolution. Verify output format matches expected structure. Test with different PRD formats and sizes.",
"status": "pending",
"testStrategy": "Run coverage report to ensure 80% overall coverage, verify all critical paths are tested, ensure tests are deterministic and don't depend on external services"
}
]
}
],
"metadata": {
|
||||
"created": "2025-08-06T08:51:19.649Z",
|
||||
"updated": "2025-08-20T21:32:21.837Z",
|
||||
"description": "Tasks for tm-core-phase-1 context"
|
||||
}
|
||||
}
|
||||
}
|
||||
15
.vscode/settings.json
vendored
@@ -10,5 +10,18 @@
},

"json.format.enable": true,
"json.validate.enable": true
"json.validate.enable": true,
"typescript.tsdk": "node_modules/typescript/lib",
"[typescript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[typescriptreact]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[javascript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[json]": {
"editor.defaultFormatter": "biomejs.biome"
}
}

353
CHANGELOG.md
@@ -1,5 +1,358 @@
|
||||
# task-master-ai
|
||||
|
||||
## 0.26.0-rc.1
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1165](https://github.com/eyaltoledano/claude-task-master/pull/1165) [`c4f92f6`](https://github.com/eyaltoledano/claude-task-master/commit/c4f92f6a0aee3435c56eb8d27d9aa9204284833e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add configurable codebase analysis feature flag with multiple configuration sources
|
||||
|
||||
Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.
|
||||
|
||||
Priority order: .env > MCP session env > .taskmaster/config.json.
|
||||
|
||||
Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.

## 0.26.0-rc.0

### Minor Changes

- [#1163](https://github.com/eyaltoledano/claude-task-master/pull/1163) [`37af0f1`](https://github.com/eyaltoledano/claude-task-master/commit/37af0f191227a68d119b7f89a377bf932ee3ac66) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Gemini CLI provider with codebase-aware task generation

Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands.

When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks.

Tasks and subtasks generated by Gemini CLI are now informed by actual codebase analysis, resulting in more accurate and contextual outputs.

- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - feat(move): improve cross-tag move UX and safety
- CLI: print "Next Steps" tips after cross-tag moves that used `--ignore-dependencies` (validate/fix guidance)
- CLI: show a dedicated help block on ID collisions (destination tag already has the ID)
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions
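A structured TASK_ALREADY_EXISTS error with actionable suggestions might look roughly like the following sketch. The field names and suggestion text are illustrative assumptions, not the actual error payload:

```javascript
// Hypothetical shape of an ID-collision error carrying suggestions
// that the CLI and MCP layers can surface to the user.
function taskAlreadyExistsError(taskId, toTag) {
	return {
		code: 'TASK_ALREADY_EXISTS',
		message: `Task ${taskId} already exists in tag "${toTag}"`,
		suggestions: [
			`Inspect the existing task in "${toTag}" before retrying the move`,
			'Resolve the ID collision (e.g. renumber or remove the conflicting task), then move again'
		]
	};
}
```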

***

- [#1162](https://github.com/eyaltoledano/claude-task-master/pull/1162) [`4dad2fd`](https://github.com/eyaltoledano/claude-task-master/commit/4dad2fd613ceac56a65ae9d3c1c03092b8860ac9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations

When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.

Commands contextualised:
- add-task
- update-subtask
- update-task
- update

### Patch Changes

- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples

## 0.25.1

### Patch Changes

- [#1152](https://github.com/eyaltoledano/claude-task-master/pull/1152) [`8933557`](https://github.com/eyaltoledano/claude-task-master/commit/89335578ffffc65504b2055c0c85aa7521e5e79b) Thanks [@ben-vargas](https://github.com/ben-vargas)! - fix(claude-code): prevent crash/hang when the optional `@anthropic-ai/claude-code` SDK is missing by guarding `AbortError instanceof` checks and adding explicit SDK presence checks in `doGenerate`/`doStream`. Also bump the optional dependency to `^1.0.88` for improved export consistency.

Related to JSON truncation handling in #920; this change addresses a separate error-path crash reported in #1142.

- [#1151](https://github.com/eyaltoledano/claude-task-master/pull/1151) [`db720a9`](https://github.com/eyaltoledano/claude-task-master/commit/db720a954d390bb44838cd021b8813dde8f3d8de) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Temporarily disable streaming for improved model compatibility; it will be re-enabled in an upcoming release
## 0.25.0

### Minor Changes

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.

This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.

## CLI Usage Examples

Move a single task from one tag to another:

```bash
# Move task 5 from the backlog tag to the feature-1 tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1

# Move a task together with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

# Move a task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```

Move multiple tasks at once:

```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```
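Conceptually, `--with-dependencies` carries a task's transitive dependency closure along with it. A minimal sketch of that closure computation (illustrative only, not Task Master's actual implementation):

```javascript
// Collect a set of tasks plus all of their (transitive) dependencies,
// so a cross-tag move can carry them along (--with-dependencies behavior).
function collectWithDependencies(tasks, startIds) {
	const byId = new Map(tasks.map((t) => [t.id, t]));
	const result = new Set();
	const stack = [...startIds];
	while (stack.length > 0) {
		const id = stack.pop();
		if (result.has(id) || !byId.has(id)) continue;
		result.add(id);
		stack.push(...(byId.get(id).dependencies ?? []));
	}
	return [...result].sort((a, b) => a - b);
}
```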

- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - Add Kilo Code profile integration with custom modes and MCP configuration

- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode `--compact` / `-c` flag to the `tm list` CLI command
- Outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options
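The one-line format described above could be produced by something like the following. This is a hypothetical sketch; `formatCompactLine` is not the actual implementation and omits color codes and title truncation:

```javascript
// Render a task as: ID STATUS TITLE (PRIORITY) → DEPS
function formatCompactLine(task) {
	const deps = (task.dependencies ?? []).join(',');
	const arrow = deps ? ` → ${deps}` : '';
	return `${task.id} ${task.status} ${task.title} (${task.priority})${arrow}`;
}
```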

- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for the parse-prd command.

- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for Ollama `gpt-oss:20b` and `gpt-oss:120b`

- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove the `clear` Taskmaster Claude Code commands, since they were too close to the built-in claude-code clear command

### Patch Changes

- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced

The command was failing with a "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
## 0.25.0-rc.0

### Minor Changes

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.

This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.

## CLI Usage Examples

Move a single task from one tag to another:

```bash
# Move task 5 from the backlog tag to the feature-1 tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1

# Move a task together with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

# Move a task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```

Move multiple tasks at once:

```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```

- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - Add Kilo Code profile integration with custom modes and MCP configuration

- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode `--compact` / `-c` flag to the `tm list` CLI command
- Outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options

- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for the parse-prd command.

- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for Ollama `gpt-oss:20b` and `gpt-oss:120b`

- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove the `clear` Taskmaster Claude Code commands, since they were too close to the built-in claude-code clear command

### Patch Changes

- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced

The command was failing with a "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
## 0.24.0

### Minor Changes

- [#1098](https://github.com/eyaltoledano/claude-task-master/pull/1098) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
- Added automatic codebase analysis for the Claude Code provider in the `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs

- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
- Added GPT-5 model to supported models configuration with SWE score of 0.749

- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker

## New Claude Code Agents

Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:

### task-orchestrator

Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks
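The dependency analysis that drives this can be sketched roughly as follows. This is an illustrative model, not the agent's actual logic:

```javascript
// A task is unblocked when it is pending and every one of its
// dependencies is already done; unblocked tasks can run in parallel.
function findUnblockedTasks(tasks) {
	const done = new Set(tasks.filter((t) => t.status === 'done').map((t) => t.id));
	return tasks
		.filter((t) => t.status === 'pending')
		.filter((t) => (t.dependencies ?? []).every((d) => done.has(d)))
		.map((t) => t.id);
}
```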

### task-executor

Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks

### task-checker

Verifies that completed tasks meet their specifications:
- Reviews tasks marked with 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'

## Installation

When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.

## Usage Example

```bash
# In Claude Code, after initializing a project with tasks:

# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete

# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```

## Benefits
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started

### Patch Changes

- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks

Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.

- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude Code and other models
- Ensures generated JSON includes all fields required by the schema

- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
- Fixed task ID parsing in the MCP layer; it now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly

- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation
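As a rough illustration of the kind of repair involved (a sketch under assumptions; the actual repair logic handles more cases), one repair step might strip trailing commas before re-parsing:

```javascript
// Try to parse AI output as JSON; on failure, attempt a simple repair
// (here: removing trailing commas before } or ]) and parse again.
function parseWithRepair(text) {
	try {
		return JSON.parse(text);
	} catch {
		const repaired = text.replace(/,\s*([}\]])/g, '$1');
		return JSON.parse(repaired);
	}
}
```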

## 0.24.0-rc.2

### Minor Changes

- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
- Added GPT-5 model to supported models configuration with SWE score of 0.749

## 0.24.0-rc.1

### Minor Changes

- [#1093](https://github.com/eyaltoledano/claude-task-master/pull/1093) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
- Added automatic codebase analysis for the Claude Code provider in the `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs

- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker

## New Claude Code Agents

Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:

### task-orchestrator

Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks

### task-executor

Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks

### task-checker

Verifies that completed tasks meet their specifications:
- Reviews tasks marked with 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'

## Installation

When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.

## Usage Example

```bash
# In Claude Code, after initializing a project with tasks:

# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete

# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```

## Benefits
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started

### Patch Changes

- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks

Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.

## 0.23.1-rc.0

### Patch Changes

- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude Code and other models
- Ensures generated JSON includes all fields required by the schema

- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
- Fixed task ID parsing in the MCP layer; it now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly

- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation

## 0.23.0

### Minor Changes
@@ -3,3 +3,7 @@

## Task Master AI Instructions
**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
@./.taskmaster/CLAUDE.md

## Changeset Guidelines

- When creating changesets, remember that they are user-facing: rather than going into code specifics, describe what the end user gains or what is fixed by the change.
42 README.md
@@ -1,14 +1,39 @@
# Task Master [](https://github.com/eyaltoledano/claude-task-master/stargazers)
<a name="readme-top"></a>

[](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml) [](https://badge.fury.io/js/task-master-ai) [](https://discord.gg/taskmasterai) [](LICENSE)

<div align='center'>
<a href="https://trendshift.io/repositories/13971" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13971" alt="eyaltoledano%2Fclaude-task-master | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

[](https://www.npmjs.com/package/task-master-ai) [](https://www.npmjs.com/package/task-master-ai) [](https://www.npmjs.com/package/task-master-ai)

<p align="center">
<a href="https://task-master.dev"><img src="./images/logo.png?raw=true" alt="Taskmaster logo"></a>
</p>

## By [@eyaltoledano](https://x.com/eyaltoledano), [@RalphEcom](https://x.com/RalphEcom) & [@jasonzhou1993](https://x.com/jasonzhou1993)

<p align="center">
<b>Taskmaster</b>: A task management system for AI-driven development, designed to work seamlessly with any AI chat.
</p>

<p align="center">
<a href="https://discord.gg/taskmasterai" target="_blank"><img src="https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat" alt="Discord"></a> |
<a href="https://docs.task-master.dev" target="_blank">Docs</a>
</p>

<p align="center">
<a href="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml"><img src="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://github.com/eyaltoledano/claude-task-master/stargazers"><img src="https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social" alt="GitHub stars"></a>
<a href="https://badge.fury.io/js/task-master-ai"><img src="https://badge.fury.io/js/task-master-ai.svg" alt="npm version"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg" alt="License"></a>
</p>

<p align="center">
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/d18m/task-master-ai?style=flat" alt="NPM Downloads"></a>
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dm/task-master-ai?style=flat" alt="NPM Downloads"></a>
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dw/task-master-ai?style=flat" alt="NPM Downloads"></a>
</p>

## By [@eyaltoledano](https://x.com/eyaltoledano) & [@RalphEcom](https://x.com/RalphEcom)

[](https://x.com/eyaltoledano)
[](https://x.com/RalphEcom)
[](https://x.com/jasonzhou1993)

A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.

@@ -31,7 +56,7 @@ The following documentation is also available in the `docs` directory:

#### Quick Install for Cursor 1.0+ (One-Click)

[](https://cursor.com/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
[](https://cursor.com/en/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)

> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.

@@ -230,6 +255,11 @@ task-master show 1,3,5

# Research fresh information with project context
task-master research "What are the latest best practices for JWT authentication?"

# Move tasks between tags (cross-tag movement)
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies

# Generate task files
task-master generate
55 apps/cli/package.json Normal file
@@ -0,0 +1,55 @@
{
	"name": "@tm/cli",
	"version": "1.0.0",
	"description": "Task Master CLI - Command line interface for task management",
	"type": "module",
	"main": "./dist/index.js",
	"types": "./src/index.ts",
	"exports": {
		".": {
			"types": "./src/index.ts",
			"import": "./dist/index.js"
		}
	},
	"files": ["dist", "README.md"],
	"scripts": {
		"build": "tsup",
		"dev": "tsup --watch",
		"typecheck": "tsc --noEmit",
		"lint": "biome check src",
		"format": "biome format --write src",
		"test": "vitest run",
		"test:watch": "vitest",
		"test:coverage": "vitest run --coverage",
		"test:unit": "vitest run -t unit",
		"test:integration": "vitest run -t integration",
		"test:e2e": "vitest run --dir tests/e2e",
		"test:ci": "vitest run --coverage --reporter=dot"
	},
	"dependencies": {
		"@tm/core": "*",
		"@tm/workflow-engine": "*",
		"boxen": "^7.1.1",
		"chalk": "5.6.2",
		"cli-table3": "^0.6.5",
		"commander": "^12.1.0",
		"inquirer": "^9.2.10",
		"ora": "^8.1.0"
	},
	"devDependencies": {
		"@biomejs/biome": "^1.9.4",
		"@tm/build-config": "*",
		"@types/inquirer": "^9.0.3",
		"@types/node": "^22.10.5",
		"tsup": "^8.3.0",
		"tsx": "^4.20.4",
		"typescript": "^5.7.3",
		"vitest": "^2.1.8"
	},
	"engines": {
		"node": ">=18.0.0"
	},
	"keywords": ["task-master", "cli", "task-management", "productivity"],
	"author": "",
	"license": "MIT"
}
503	apps/cli/src/commands/auth.command.ts	Normal file
@@ -0,0 +1,503 @@
/**
 * @fileoverview Auth command using Commander's native class pattern
 * Extends Commander.Command for better integration with the framework
 */

import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { type Ora } from 'ora';
import open from 'open';
import {
	AuthManager,
	AuthenticationError,
	type AuthCredentials
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';

/**
 * Result type from auth command
 */
export interface AuthResult {
	success: boolean;
	action: 'login' | 'logout' | 'status' | 'refresh';
	credentials?: AuthCredentials;
	message?: string;
}

/**
 * AuthCommand extending Commander's Command class
 * This is a thin presentation layer over @tm/core's AuthManager
 */
export class AuthCommand extends Command {
	private authManager: AuthManager;
	private lastResult?: AuthResult;

	constructor(name?: string) {
		super(name || 'auth');

		// Initialize auth manager
		this.authManager = AuthManager.getInstance();

		// Configure the command with subcommands
		this.description('Manage authentication with tryhamster.com');

		// Add subcommands
		this.addLoginCommand();
		this.addLogoutCommand();
		this.addStatusCommand();
		this.addRefreshCommand();

		// Default action shows help
		this.action(() => {
			this.help();
		});
	}

	/**
	 * Add login subcommand
	 */
	private addLoginCommand(): void {
		this.command('login')
			.description('Authenticate with tryhamster.com')
			.action(async () => {
				await this.executeLogin();
			});
	}

	/**
	 * Add logout subcommand
	 */
	private addLogoutCommand(): void {
		this.command('logout')
			.description('Logout and clear credentials')
			.action(async () => {
				await this.executeLogout();
			});
	}

	/**
	 * Add status subcommand
	 */
	private addStatusCommand(): void {
		this.command('status')
			.description('Display authentication status')
			.action(async () => {
				await this.executeStatus();
			});
	}

	/**
	 * Add refresh subcommand
	 */
	private addRefreshCommand(): void {
		this.command('refresh')
			.description('Refresh authentication token')
			.action(async () => {
				await this.executeRefresh();
			});
	}

	/**
	 * Execute login command
	 */
	private async executeLogin(): Promise<void> {
		try {
			const result = await this.performInteractiveAuth();
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}

			// Exit cleanly after successful authentication
			// Small delay to ensure all output is flushed
			setTimeout(() => {
				process.exit(0);
			}, 100);
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Execute logout command
	 */
	private async executeLogout(): Promise<void> {
		try {
			const result = await this.performLogout();
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Execute status command
	 */
	private async executeStatus(): Promise<void> {
		try {
			const result = this.displayStatus();
			this.setLastResult(result);
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Execute refresh command
	 */
	private async executeRefresh(): Promise<void> {
		try {
			const result = await this.refreshToken();
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Display authentication status
	 */
	private displayStatus(): AuthResult {
		const credentials = this.authManager.getCredentials();

		console.log(chalk.cyan('\n🔐 Authentication Status\n'));

		if (credentials) {
			console.log(chalk.green('✓ Authenticated'));
			console.log(chalk.gray(`  Email: ${credentials.email || 'N/A'}`));
			console.log(chalk.gray(`  User ID: ${credentials.userId}`));
			console.log(
				chalk.gray(`  Token Type: ${credentials.tokenType || 'standard'}`)
			);

			if (credentials.expiresAt) {
				const expiresAt = new Date(credentials.expiresAt);
				const now = new Date();
				const hoursRemaining = Math.floor(
					(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
				);

				if (hoursRemaining > 0) {
					console.log(
						chalk.gray(
							`  Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
						)
					);
				} else {
					console.log(
						chalk.yellow(`  Token expired at: ${expiresAt.toLocaleString()}`)
					);
				}
			} else {
				console.log(chalk.gray('  Expires: Never (API key)'));
			}

			console.log(
				chalk.gray(`  Saved: ${new Date(credentials.savedAt).toLocaleString()}`)
			);

			return {
				success: true,
				action: 'status',
				credentials,
				message: 'Authenticated'
			};
		} else {
			console.log(chalk.yellow('✗ Not authenticated'));
			console.log(
				chalk.gray('\n  Run "task-master auth login" to authenticate')
			);

			return {
				success: false,
				action: 'status',
				message: 'Not authenticated'
			};
		}
	}

	/**
	 * Perform logout
	 */
	private async performLogout(): Promise<AuthResult> {
		try {
			await this.authManager.logout();
			ui.displaySuccess('Successfully logged out');

			return {
				success: true,
				action: 'logout',
				message: 'Successfully logged out'
			};
		} catch (error) {
			const message = `Failed to logout: ${(error as Error).message}`;
			ui.displayError(message);

			return {
				success: false,
				action: 'logout',
				message
			};
		}
	}

	/**
	 * Refresh authentication token
	 */
	private async refreshToken(): Promise<AuthResult> {
		const spinner = ora('Refreshing authentication token...').start();

		try {
			const credentials = await this.authManager.refreshToken();
			spinner.succeed('Token refreshed successfully');

			console.log(
				chalk.gray(
					`  New expiration: ${credentials.expiresAt ? new Date(credentials.expiresAt).toLocaleString() : 'Never'}`
				)
			);

			return {
				success: true,
				action: 'refresh',
				credentials,
				message: 'Token refreshed successfully'
			};
		} catch (error) {
			spinner.fail('Failed to refresh token');

			if ((error as AuthenticationError).code === 'NO_REFRESH_TOKEN') {
				ui.displayWarning(
					'No refresh token available. Please re-authenticate.'
				);
			} else {
				ui.displayError(`Refresh failed: ${(error as Error).message}`);
			}

			return {
				success: false,
				action: 'refresh',
				message: `Failed to refresh: ${(error as Error).message}`
			};
		}
	}

	/**
	 * Perform interactive authentication
	 */
	private async performInteractiveAuth(): Promise<AuthResult> {
		ui.displayBanner('Task Master Authentication');

		// Check if already authenticated
		if (this.authManager.isAuthenticated()) {
			const { continueAuth } = await inquirer.prompt([
				{
					type: 'confirm',
					name: 'continueAuth',
					message:
						'You are already authenticated. Do you want to re-authenticate?',
					default: false
				}
			]);

			if (!continueAuth) {
				const credentials = this.authManager.getCredentials();
				ui.displaySuccess('Using existing authentication');

				if (credentials) {
					console.log(chalk.gray(`  Email: ${credentials.email || 'N/A'}`));
					console.log(chalk.gray(`  User ID: ${credentials.userId}`));
				}

				return {
					success: true,
					action: 'login',
					credentials: credentials || undefined,
					message: 'Using existing authentication'
				};
			}
		}

		try {
			// Direct browser authentication - no menu needed
			const credentials = await this.authenticateWithBrowser();

			ui.displaySuccess('Authentication successful!');
			console.log(
				chalk.gray(`  Logged in as: ${credentials.email || credentials.userId}`)
			);

			return {
				success: true,
				action: 'login',
				credentials,
				message: 'Authentication successful'
			};
		} catch (error) {
			this.handleAuthError(error as AuthenticationError);

			return {
				success: false,
				action: 'login',
				message: `Authentication failed: ${(error as Error).message}`
			};
		}
	}

	/**
	 * Authenticate with browser using OAuth 2.0 with PKCE
	 */
	private async authenticateWithBrowser(): Promise<AuthCredentials> {
		let authSpinner: Ora | null = null;

		// Use AuthManager's unified OAuth flow method with callbacks
		const credentials = await this.authManager.authenticateWithOAuth({
			// Callback to handle browser opening
			openBrowser: async (authUrl) => {
				await open(authUrl);
			},
			timeout: 5 * 60 * 1000, // 5 minutes

			// Callback when auth URL is ready
			onAuthUrl: (authUrl) => {
				// Display authentication instructions
				console.log(chalk.blue.bold('\n🔐 Browser Authentication\n'));
				console.log(chalk.white('  Opening your browser to authenticate...'));
				console.log(chalk.gray("  If the browser doesn't open, visit:"));
				console.log(chalk.cyan.underline(`  ${authUrl}\n`));
			},

			// Callback when waiting for authentication
			onWaitingForAuth: () => {
				authSpinner = ora({
					text: 'Waiting for authentication...',
					spinner: 'dots'
				}).start();
			},

			// Callback on success
			onSuccess: () => {
				if (authSpinner) {
					authSpinner.succeed('Authentication successful!');
				}
			},

			// Callback on error
			onError: () => {
				if (authSpinner) {
					authSpinner.fail('Authentication failed');
				}
			}
		});

		return credentials;
	}

	/**
	 * Handle authentication errors
	 */
	private handleAuthError(error: AuthenticationError): void {
		console.error(chalk.red(`\n✗ ${error.message}`));

		switch (error.code) {
			case 'NETWORK_ERROR':
				ui.displayWarning(
					'Please check your internet connection and try again.'
				);
				break;
			case 'INVALID_CREDENTIALS':
				ui.displayWarning('Please check your credentials and try again.');
				break;
			case 'AUTH_EXPIRED':
				ui.displayWarning(
					'Your session has expired. Please authenticate again.'
				);
				break;
			default:
				if (process.env.DEBUG) {
					console.error(chalk.gray(error.stack || ''));
				}
		}
	}

	/**
	 * Handle general errors
	 */
	private handleError(error: any): void {
		if (error instanceof AuthenticationError) {
			this.handleAuthError(error);
		} else {
			const msg = error?.getSanitizedDetails?.() ?? {
				message: error?.message ?? String(error)
			};
			console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));

			if (error.stack && process.env.DEBUG) {
				console.error(chalk.gray(error.stack));
			}
		}
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(result: AuthResult): void {
		this.lastResult = result;
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): AuthResult | undefined {
		return this.lastResult;
	}

	/**
	 * Get current authentication status (for programmatic usage)
	 */
	isAuthenticated(): boolean {
		return this.authManager.isAuthenticated();
	}

	/**
	 * Get current credentials (for programmatic usage)
	 */
	getCredentials(): AuthCredentials | null {
		return this.authManager.getCredentials();
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		// No resources to clean up for auth command
		// But keeping method for consistency with other commands
	}

	/**
	 * Static method to register this command on an existing program
	 */
	static register(program: Command, name?: string): AuthCommand {
		const authCommand = new AuthCommand(name);
		program.addCommand(authCommand);
		return authCommand;
	}
}
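The `authenticateWithBrowser` flow above delegates OAuth 2.0 with PKCE to `AuthManager`. As a rough sketch of what PKCE (RFC 7636) involves under the hood — this is illustrative, not taken from `@tm/core`, and the function names are hypothetical — the client generates a random code verifier, sends its SHA-256 challenge in the authorization URL, and later proves possession by sending the verifier with the token request:

```typescript
// Hypothetical PKCE helpers (RFC 7636, S256 method); not the @tm/core API.
import { createHash, randomBytes } from 'node:crypto';

/** Generate a high-entropy code verifier (base64url, no padding). */
export function generateCodeVerifier(): string {
	// 32 random bytes -> 43 base64url characters, within RFC 7636's 43-128 range
	return randomBytes(32).toString('base64url');
}

/** Derive the S256 code challenge embedded in the authorization URL. */
export function deriveCodeChallenge(verifier: string): string {
	return createHash('sha256').update(verifier).digest('base64url');
}

const verifier = generateCodeVerifier();
const challenge = deriveCodeChallenge(verifier);
// The challenge goes out with the auth URL; the verifier stays local and is
// only revealed when exchanging the authorization code for tokens.
console.log(verifier.length, challenge);
```

Because only the SHA-256 challenge leaves the machine, an attacker who intercepts the authorization code still cannot redeem it without the locally held verifier.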
570	apps/cli/src/commands/context.command.ts	Normal file
@@ -0,0 +1,570 @@
/**
 * @fileoverview Context command for managing org/brief selection
 * Provides a clean interface for workspace context management
 */

import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora from 'ora';
import {
	AuthManager,
	AuthenticationError,
	type UserContext
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';

/**
 * Result type from context command
 */
export interface ContextResult {
	success: boolean;
	action: 'show' | 'select-org' | 'select-brief' | 'clear' | 'set';
	context?: UserContext;
	message?: string;
}

/**
 * ContextCommand extending Commander's Command class
 * Manages user's workspace context (org/brief selection)
 */
export class ContextCommand extends Command {
	private authManager: AuthManager;
	private lastResult?: ContextResult;

	constructor(name?: string) {
		super(name || 'context');

		// Initialize auth manager
		this.authManager = AuthManager.getInstance();

		// Configure the command
		this.description(
			'Manage workspace context (organization and brief selection)'
		);

		// Add subcommands
		this.addOrgCommand();
		this.addBriefCommand();
		this.addClearCommand();
		this.addSetCommand();

		// Default action shows current context
		this.action(async () => {
			await this.executeShow();
		});
	}

	/**
	 * Add org selection subcommand
	 */
	private addOrgCommand(): void {
		this.command('org')
			.description('Select an organization')
			.action(async () => {
				await this.executeSelectOrg();
			});
	}

	/**
	 * Add brief selection subcommand
	 */
	private addBriefCommand(): void {
		this.command('brief')
			.description('Select a brief within the current organization')
			.action(async () => {
				await this.executeSelectBrief();
			});
	}

	/**
	 * Add clear subcommand
	 */
	private addClearCommand(): void {
		this.command('clear')
			.description('Clear all context selections')
			.action(async () => {
				await this.executeClear();
			});
	}

	/**
	 * Add set subcommand for direct context setting
	 */
	private addSetCommand(): void {
		this.command('set')
			.description('Set context directly')
			.option('--org <id>', 'Organization ID')
			.option('--org-name <name>', 'Organization name')
			.option('--brief <id>', 'Brief ID')
			.option('--brief-name <name>', 'Brief name')
			.action(async (options) => {
				await this.executeSet(options);
			});
	}

	/**
	 * Execute show current context
	 */
	private async executeShow(): Promise<void> {
		try {
			const result = this.displayContext();
			this.setLastResult(result);
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Display current context
	 */
	private displayContext(): ContextResult {
		// Check authentication first
		if (!this.authManager.isAuthenticated()) {
			console.log(chalk.yellow('✗ Not authenticated'));
			console.log(chalk.gray('\n  Run "tm auth login" to authenticate first'));

			return {
				success: false,
				action: 'show',
				message: 'Not authenticated'
			};
		}

		const context = this.authManager.getContext();

		console.log(chalk.cyan('\n🌍 Workspace Context\n'));

		if (context && (context.orgId || context.briefId)) {
			if (context.orgName || context.orgId) {
				console.log(chalk.green('✓ Organization'));
				if (context.orgName) {
					console.log(chalk.white(`  ${context.orgName}`));
				}
				if (context.orgId) {
					console.log(chalk.gray(`  ID: ${context.orgId}`));
				}
			}

			if (context.briefName || context.briefId) {
				console.log(chalk.green('\n✓ Brief'));
				if (context.briefName) {
					console.log(chalk.white(`  ${context.briefName}`));
				}
				if (context.briefId) {
					console.log(chalk.gray(`  ID: ${context.briefId}`));
				}
			}

			if (context.updatedAt) {
				console.log(
					chalk.gray(
						`\n  Last updated: ${new Date(context.updatedAt).toLocaleString()}`
					)
				);
			}

			return {
				success: true,
				action: 'show',
				context,
				message: 'Context loaded'
			};
		} else {
			console.log(chalk.yellow('✗ No context selected'));
			console.log(
				chalk.gray('\n  Run "tm context org" to select an organization')
			);
			console.log(chalk.gray('  Run "tm context brief" to select a brief'));

			return {
				success: true,
				action: 'show',
				message: 'No context selected'
			};
		}
	}

	/**
	 * Execute org selection
	 */
	private async executeSelectOrg(): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			const result = await this.selectOrganization();
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Select an organization interactively
	 */
	private async selectOrganization(): Promise<ContextResult> {
		const spinner = ora('Fetching organizations...').start();

		try {
			// Fetch organizations from API
			const organizations = await this.authManager.getOrganizations();
			spinner.stop();

			if (organizations.length === 0) {
				ui.displayWarning('No organizations available');
				return {
					success: false,
					action: 'select-org',
					message: 'No organizations available'
				};
			}

			// Prompt for selection
			const { selectedOrg } = await inquirer.prompt([
				{
					type: 'list',
					name: 'selectedOrg',
					message: 'Select an organization:',
					choices: organizations.map((org) => ({
						name: org.name,
						value: org
					}))
				}
			]);

			// Update context
			await this.authManager.updateContext({
				orgId: selectedOrg.id,
				orgName: selectedOrg.name,
				// Clear brief when changing org
				briefId: undefined,
				briefName: undefined
			});

			ui.displaySuccess(`Selected organization: ${selectedOrg.name}`);

			return {
				success: true,
				action: 'select-org',
				context: this.authManager.getContext() || undefined,
				message: `Selected organization: ${selectedOrg.name}`
			};
		} catch (error) {
			spinner.fail('Failed to fetch organizations');
			throw error;
		}
	}

	/**
	 * Execute brief selection
	 */
	private async executeSelectBrief(): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			// Check if org is selected
			const context = this.authManager.getContext();
			if (!context?.orgId) {
				ui.displayError(
					'No organization selected. Run "tm context org" first.'
				);
				process.exit(1);
			}

			const result = await this.selectBrief(context.orgId);
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Select a brief within the current organization
	 */
	private async selectBrief(orgId: string): Promise<ContextResult> {
		const spinner = ora('Fetching briefs...').start();

		try {
			// Fetch briefs from API
			const briefs = await this.authManager.getBriefs(orgId);
			spinner.stop();

			if (briefs.length === 0) {
				ui.displayWarning('No briefs available in this organization');
				return {
					success: false,
					action: 'select-brief',
					message: 'No briefs available'
				};
			}

			// Prompt for selection
			const { selectedBrief } = await inquirer.prompt([
				{
					type: 'list',
					name: 'selectedBrief',
					message: 'Select a brief:',
					choices: [
						{ name: '(No brief - organization level)', value: null },
						...briefs.map((brief) => ({
							name: `Brief ${brief.id.slice(0, 8)} (${new Date(brief.createdAt).toLocaleDateString()})`,
							value: brief
						}))
					]
				}
			]);

			if (selectedBrief) {
				// Update context with brief
				const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
				await this.authManager.updateContext({
					briefId: selectedBrief.id,
					briefName: briefName
				});

				ui.displaySuccess(`Selected brief: ${briefName}`);

				return {
					success: true,
					action: 'select-brief',
					context: this.authManager.getContext() || undefined,
					message: `Selected brief: ${briefName}`
				};
			} else {
				// Clear brief selection
				await this.authManager.updateContext({
					briefId: undefined,
					briefName: undefined
				});

				ui.displaySuccess('Cleared brief selection (organization level)');

				return {
					success: true,
					action: 'select-brief',
					context: this.authManager.getContext() || undefined,
					message: 'Cleared brief selection'
				};
			}
		} catch (error) {
			spinner.fail('Failed to fetch briefs');
			throw error;
		}
	}

	/**
	 * Execute clear context
	 */
	private async executeClear(): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			const result = await this.clearContext();
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Clear all context selections
	 */
	private async clearContext(): Promise<ContextResult> {
		try {
			await this.authManager.clearContext();
			ui.displaySuccess('Context cleared');

			return {
				success: true,
				action: 'clear',
				message: 'Context cleared'
			};
		} catch (error) {
			ui.displayError(`Failed to clear context: ${(error as Error).message}`);

			return {
				success: false,
				action: 'clear',
				message: `Failed to clear context: ${(error as Error).message}`
			};
		}
	}

	/**
	 * Execute set context with options
	 */
	private async executeSet(options: any): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			const result = await this.setContext(options);
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Set context directly from options
	 */
	private async setContext(options: any): Promise<ContextResult> {
		try {
			const context: Partial<UserContext> = {};

			if (options.org) {
				context.orgId = options.org;
			}
			if (options.orgName) {
				context.orgName = options.orgName;
			}
			if (options.brief) {
				context.briefId = options.brief;
			}
			if (options.briefName) {
				context.briefName = options.briefName;
			}

			if (Object.keys(context).length === 0) {
				ui.displayWarning('No context options provided');
				return {
					success: false,
					action: 'set',
					message: 'No context options provided'
				};
			}

			await this.authManager.updateContext(context);
			ui.displaySuccess('Context updated');

			// Display what was set
			if (context.orgName || context.orgId) {
				console.log(
					chalk.gray(`  Organization: ${context.orgName || context.orgId}`)
				);
			}
			if (context.briefName || context.briefId) {
				console.log(
					chalk.gray(`  Brief: ${context.briefName || context.briefId}`)
				);
			}

			return {
				success: true,
				action: 'set',
				context: this.authManager.getContext() || undefined,
				message: 'Context updated'
			};
		} catch (error) {
			ui.displayError(`Failed to set context: ${(error as Error).message}`);

			return {
				success: false,
				action: 'set',
				message: `Failed to set context: ${(error as Error).message}`
			};
		}
	}

	/**
	 * Handle errors
	 */
	private handleError(error: any): void {
		if (error instanceof AuthenticationError) {
			console.error(chalk.red(`\n✗ ${error.message}`));

			if (error.code === 'NOT_AUTHENTICATED') {
				ui.displayWarning('Please authenticate first: tm auth login');
			}
		} else {
			const msg = error?.message ?? String(error);
			console.error(chalk.red(`Error: ${msg}`));

			if (error.stack && process.env.DEBUG) {
				console.error(chalk.gray(error.stack));
			}
		}
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(result: ContextResult): void {
		this.lastResult = result;
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): ContextResult | undefined {
		return this.lastResult;
	}

	/**
	 * Get current context (for programmatic usage)
	 */
	getContext(): UserContext | null {
		return this.authManager.getContext();
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		// No resources to clean up for context command
	}

	/**
	 * Static method to register this command on an existing program
	 */
	static registerOn(program: Command): Command {
		const contextCommand = new ContextCommand();
		program.addCommand(contextCommand);
		return contextCommand;
	}

	/**
	 * Alternative registration that returns the command for chaining
	 */
	static register(program: Command, name?: string): ContextCommand {
		const contextCommand = new ContextCommand(name);
		program.addCommand(contextCommand);
		return contextCommand;
	}
}
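The context command leans on one subtle semantic of `AuthManager.updateContext`: `selectOrganization` passes `briefId: undefined` / `briefName: undefined` to clear the brief when switching orgs, while fields omitted from the partial are left untouched. A minimal sketch of that merge behavior — an assumption about `@tm/core`, not its actual implementation, with illustrative names — looks like:

```typescript
// Hypothetical sketch of partial-context merge semantics (not the @tm/core code).
interface UserContextShape {
	orgId?: string;
	orgName?: string;
	briefId?: string;
	briefName?: string;
}

function mergeContext(
	current: UserContextShape,
	update: Partial<UserContextShape>
): UserContextShape {
	const next: UserContextShape = { ...current };
	// Iterate the update's own keys so that an explicitly passed `undefined`
	// clears a field, while keys that were simply omitted are left alone.
	for (const key of Object.keys(update) as (keyof UserContextShape)[]) {
		const value = update[key];
		if (value === undefined) {
			delete next[key];
		} else {
			next[key] = value;
		}
	}
	return next;
}

// Switching orgs: org fields are replaced, brief fields are explicitly cleared.
const afterOrgSwitch = mergeContext(
	{ orgId: 'org-1', briefId: 'brief-9', briefName: 'Brief 9' },
	{ orgId: 'org-2', orgName: 'Acme', briefId: undefined, briefName: undefined }
);
console.log(afterOrgSwitch); // { orgId: 'org-2', orgName: 'Acme' }
```

The distinction matters because a plain object spread (`{ ...current, ...update }`) would also overwrite fields with `undefined`, but would not let you tell "clear this" apart from "don't touch this" when validating the update.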
38	apps/cli/src/commands/index.ts	Normal file
@@ -0,0 +1,38 @@
/**
 * Command registry - exports all CLI commands for central registration
 */

import type { Command } from 'commander';
import { ListTasksCommand } from './list.command.js';
import { AuthCommand } from './auth.command.js';
import WorkflowCommand from './workflow.command.js';

// Interface for command classes that can register themselves
export interface CommandRegistrar {
	register(program: Command, name?: string): any;
}

// Future commands can be added here as they're created.
// The pattern: each command exports a class with a static
// register(program: Command, name?: string) method.

/**
 * Auto-register all commands that implement the CommandRegistrar interface
 */
export function registerAllCommands(program: Command): void {
	// Commands known to this registry (add new ones here as they're imported above)
	const commands = [ListTasksCommand, AuthCommand, WorkflowCommand];

	commands.forEach((CommandClass) => {
		if (
			'register' in CommandClass &&
			typeof CommandClass.register === 'function'
		) {
			CommandClass.register(program);
		}
	});
}
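The registry above relies on a structural convention rather than a nominal interface check. A minimal, self-contained sketch of the same pattern, using a hypothetical stand-in for Commander's `Command` class (names here are illustrative, not part of the real codebase):

```typescript
// Stand-in for a Commander-style program (illustration only; the real
// code passes commander's Command instance to addCommand).
class FakeProgram {
	readonly registered: string[] = [];
	addCommand(name: string): void {
		this.registered.push(name);
	}
}

// A command class following the static-register convention from index.ts.
class HelloCommand {
	static register(program: FakeProgram): HelloCommand {
		program.addCommand('hello');
		return new HelloCommand();
	}
}

// Mirrors registerAllCommands(): iterate candidates and call register()
// only when the class actually exposes a register function.
function registerAll(
	program: FakeProgram,
	candidates: Array<typeof HelloCommand>
): void {
	for (const CommandClass of candidates) {
		if (typeof CommandClass.register === 'function') {
			CommandClass.register(program);
		}
	}
}

const program = new FakeProgram();
registerAll(program, [HelloCommand]);
console.log(program.registered); // [ 'hello' ]
```

The structural check means a class that forgets its static `register` is silently skipped rather than crashing registration.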
324  apps/cli/src/commands/list.command.ts  Normal file
@@ -0,0 +1,324 @@
/**
 * @fileoverview ListTasks command using Commander's native class pattern
 * Extends Commander.Command for better integration with the framework
 */

import { Command } from 'commander';
import chalk from 'chalk';
import {
	createTaskMasterCore,
	type Task,
	type TaskStatus,
	type TaskMasterCore,
	TASK_STATUSES,
	OUTPUT_FORMATS,
	STATUS_ICONS,
	type OutputFormat
} from '@tm/core';
import type { StorageType } from '@tm/core/types';
import * as ui from '../utils/ui.js';

/**
 * Options interface for the list command
 */
export interface ListCommandOptions {
	status?: string;
	tag?: string;
	withSubtasks?: boolean;
	format?: OutputFormat;
	silent?: boolean;
	project?: string;
}

/**
 * Result type from the list command
 */
export interface ListTasksResult {
	tasks: Task[];
	total: number;
	filtered: number;
	tag?: string;
	storageType: Exclude<StorageType, 'auto'>;
}

/**
 * ListTasksCommand extending Commander's Command class.
 * This is a thin presentation layer over @tm/core.
 */
export class ListTasksCommand extends Command {
	private tmCore?: TaskMasterCore;
	private lastResult?: ListTasksResult;

	constructor(name?: string) {
		super(name || 'list');

		// Configure the command
		this.description('List tasks with optional filtering')
			.alias('ls')
			.option('-s, --status <status>', 'Filter by status (comma-separated)')
			.option('-t, --tag <tag>', 'Filter by tag')
			.option('--with-subtasks', 'Include subtasks in the output')
			.option(
				'-f, --format <format>',
				'Output format (text, json, compact)',
				'text'
			)
			.option('--silent', 'Suppress output (useful for programmatic usage)')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.action(async (options: ListCommandOptions) => {
				await this.executeCommand(options);
			});
	}

	/**
	 * Execute the list command
	 */
	private async executeCommand(options: ListCommandOptions): Promise<void> {
		try {
			// Validate options
			if (!this.validateOptions(options)) {
				process.exit(1);
			}

			// Initialize tm-core
			await this.initializeCore(options.project || process.cwd());

			// Get tasks from core
			const result = await this.getTasks(options);

			// Store result for programmatic access
			this.setLastResult(result);

			// Display results
			if (!options.silent) {
				this.displayResults(result, options);
			}
		} catch (error: any) {
			const msg = error?.getSanitizedDetails?.() ?? {
				message: error?.message ?? String(error)
			};
			console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
			if (error.stack && process.env.DEBUG) {
				console.error(chalk.gray(error.stack));
			}
			process.exit(1);
		}
	}

	/**
	 * Validate command options
	 */
	private validateOptions(options: ListCommandOptions): boolean {
		// Validate format
		if (
			options.format &&
			!OUTPUT_FORMATS.includes(options.format as OutputFormat)
		) {
			console.error(chalk.red(`Invalid format: ${options.format}`));
			console.error(chalk.gray(`Valid formats: ${OUTPUT_FORMATS.join(', ')}`));
			return false;
		}

		// Validate status
		if (options.status) {
			const statuses = options.status.split(',').map((s: string) => s.trim());

			for (const status of statuses) {
				if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
					console.error(chalk.red(`Invalid status: ${status}`));
					console.error(
						chalk.gray(`Valid statuses: ${TASK_STATUSES.join(', ')}`)
					);
					return false;
				}
			}
		}

		return true;
	}

	/**
	 * Initialize TaskMasterCore
	 */
	private async initializeCore(projectRoot: string): Promise<void> {
		if (!this.tmCore) {
			this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
		}
	}

	/**
	 * Get tasks from tm-core
	 */
	private async getTasks(
		options: ListCommandOptions
	): Promise<ListTasksResult> {
		if (!this.tmCore) {
			throw new Error('TaskMasterCore not initialized');
		}

		// Build filter
		const filter =
			options.status && options.status !== 'all'
				? {
						status: options.status
							.split(',')
							.map((s: string) => s.trim() as TaskStatus)
					}
				: undefined;

		// Call tm-core
		const result = await this.tmCore.getTaskList({
			tag: options.tag,
			filter,
			includeSubtasks: options.withSubtasks
		});

		// Runtime guard to prevent 'auto' from reaching CLI consumers
		if (result.storageType === 'auto') {
			throw new Error(
				'Internal error: unresolved storage type reached CLI. Please check TaskService.getStorageType() implementation.'
			);
		}

		return result as ListTasksResult;
	}

	/**
	 * Display results based on format
	 */
	private displayResults(
		result: ListTasksResult,
		options: ListCommandOptions
	): void {
		const format = (options.format || 'text') as OutputFormat;

		switch (format) {
			case 'json':
				this.displayJson(result);
				break;

			case 'compact':
				this.displayCompact(result.tasks, options.withSubtasks);
				break;

			case 'text':
			default:
				this.displayText(result, options.withSubtasks);
				break;
		}
	}

	/**
	 * Display in JSON format
	 */
	private displayJson(data: ListTasksResult): void {
		console.log(
			JSON.stringify(
				{
					tasks: data.tasks,
					metadata: {
						total: data.total,
						filtered: data.filtered,
						tag: data.tag,
						storageType: data.storageType
					}
				},
				null,
				2
			)
		);
	}

	/**
	 * Display in compact format
	 */
	private displayCompact(tasks: Task[], withSubtasks?: boolean): void {
		tasks.forEach((task) => {
			const icon = STATUS_ICONS[task.status];
			console.log(`${chalk.cyan(task.id)} ${icon} ${task.title}`);

			if (withSubtasks && task.subtasks?.length) {
				task.subtasks.forEach((subtask) => {
					const subIcon = STATUS_ICONS[subtask.status];
					console.log(
						`  ${chalk.gray(`${task.id}.${subtask.id}`)} ${subIcon} ${chalk.gray(subtask.title)}`
					);
				});
			}
		});
	}

	/**
	 * Display in text format with tables
	 */
	private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
		const { tasks, total, filtered, tag, storageType } = data;

		// Header
		ui.displayBanner(`Task List${tag ? ` (${tag})` : ''}`);

		// Statistics
		console.log(chalk.blue.bold('\n📊 Statistics:\n'));
		console.log(`  Total tasks: ${chalk.cyan(total)}`);
		console.log(`  Filtered: ${chalk.cyan(filtered)}`);
		if (tag) {
			console.log(`  Tag: ${chalk.cyan(tag)}`);
		}
		console.log(`  Storage: ${chalk.cyan(storageType)}`);

		// No tasks message
		if (tasks.length === 0) {
			ui.displayWarning('No tasks found matching the criteria.');
			return;
		}

		// Task table
		console.log(chalk.blue.bold(`\n📋 Tasks (${tasks.length}):\n`));
		console.log(
			ui.createTaskTable(tasks, {
				showSubtasks: withSubtasks,
				showDependencies: true
			})
		);

		// Progress bar
		const completedCount = tasks.filter(
			(t: Task) => t.status === 'done'
		).length;
		console.log(chalk.blue.bold('\n📊 Overall Progress:\n'));
		console.log(`  ${ui.createProgressBar(completedCount, tasks.length)}`);
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(result: ListTasksResult): void {
		this.lastResult = result;
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): ListTasksResult | undefined {
		return this.lastResult;
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		if (this.tmCore) {
			await this.tmCore.close();
			this.tmCore = undefined;
		}
	}

	/**
	 * Static method to register this command on an existing program
	 */
	static register(program: Command, name?: string): ListTasksCommand {
		const listCommand = new ListTasksCommand(name);
		program.addCommand(listCommand);
		return listCommand;
	}
}
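The `--status` handling above (validation aside) can be distilled into a small pure function: `all` or an empty value means "no filter", anything else is split on commas and trimmed. The status values below are illustrative stand-ins, not the actual `TASK_STATUSES` exported by `@tm/core`:

```typescript
// Illustrative status list; the real values come from @tm/core's TASK_STATUSES.
const STATUSES = ['pending', 'in-progress', 'done'] as const;
type Status = (typeof STATUSES)[number];

// 'all' (or no value) disables filtering; otherwise split on commas and trim,
// matching how ListTasksCommand builds the filter it passes to getTaskList().
function parseStatusFilter(status?: string): Status[] | undefined {
	if (!status || status === 'all') return undefined;
	return status.split(',').map((s) => s.trim() as Status);
}

console.log(parseStatusFilter('pending, done')); // [ 'pending', 'done' ]
console.log(parseStatusFilter('all')); // undefined
```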
58  apps/cli/src/commands/workflow.command.ts  Normal file
@@ -0,0 +1,58 @@
/**
 * @fileoverview Workflow Command
 * Main workflow command with subcommands
 */

import { Command } from 'commander';
import {
	WorkflowStartCommand,
	WorkflowListCommand,
	WorkflowStopCommand,
	WorkflowStatusCommand
} from './workflow/index.js';

/**
 * WorkflowCommand - Main workflow command with subcommands
 */
export class WorkflowCommand extends Command {
	constructor(name?: string) {
		super(name || 'workflow');

		this.description(
			'Manage task execution workflows with git worktrees and Claude Code'
		).alias('wf');

		// Register subcommands
		this.addSubcommands();
	}

	private addSubcommands(): void {
		// Start workflow
		WorkflowStartCommand.register(this);

		// List workflows
		WorkflowListCommand.register(this);

		// Stop workflow
		WorkflowStopCommand.register(this);

		// Show workflow status
		WorkflowStatusCommand.register(this);

		// Alias commands for convenience
		this.addCommand(new WorkflowStartCommand('run')); // tm workflow run <task-id>
		this.addCommand(new WorkflowStopCommand('kill')); // tm workflow kill <workflow-id>
		this.addCommand(new WorkflowStatusCommand('info')); // tm workflow info <workflow-id>
	}

	/**
	 * Static method to register this command on an existing program
	 */
	static register(program: Command, name?: string): WorkflowCommand {
		const workflowCommand = new WorkflowCommand(name);
		program.addCommand(workflowCommand);
		return workflowCommand;
	}
}

export default WorkflowCommand;
9  apps/cli/src/commands/workflow/index.ts  Normal file
@@ -0,0 +1,9 @@
/**
 * @fileoverview Workflow Commands
 * Exports for all workflow-related CLI commands
 */

export * from './workflow-start.command.js';
export * from './workflow-list.command.js';
export * from './workflow-stop.command.js';
export * from './workflow-status.command.js';
253  apps/cli/src/commands/workflow/workflow-list.command.ts  Normal file
@@ -0,0 +1,253 @@
/**
 * @fileoverview Workflow List Command
 * List active and recent workflow executions
 */

import { Command } from 'commander';
import chalk from 'chalk';
import path from 'node:path';
import {
	TaskExecutionManager,
	type TaskExecutionManagerConfig,
	type WorkflowExecutionContext
} from '@tm/workflow-engine';
import * as ui from '../../utils/ui.js';

export interface WorkflowListOptions {
	project?: string;
	status?: string;
	format?: 'text' | 'json' | 'compact';
	worktreeBase?: string;
	claude?: string;
	all?: boolean;
}

/**
 * WorkflowListCommand - List workflow executions
 */
export class WorkflowListCommand extends Command {
	private workflowManager?: TaskExecutionManager;

	constructor(name?: string) {
		super(name || 'list');

		this.description('List active and recent workflow executions')
			.alias('ls')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.option(
				'-s, --status <status>',
				'Filter by status (running, completed, failed, etc.)'
			)
			.option(
				'-f, --format <format>',
				'Output format (text, json, compact)',
				'text'
			)
			.option(
				'--worktree-base <path>',
				'Base directory for worktrees',
				'../task-worktrees'
			)
			.option('--claude <path>', 'Claude Code executable path', 'claude')
			.option('--all', 'Show all workflows including completed ones')
			.action(async (options: WorkflowListOptions) => {
				await this.executeCommand(options);
			});
	}

	private async executeCommand(options: WorkflowListOptions): Promise<void> {
		try {
			// Initialize workflow manager
			await this.initializeWorkflowManager(options);

			// Get workflows
			let workflows = this.workflowManager!.listWorkflows();

			// Apply status filter
			if (options.status) {
				workflows = workflows.filter((w) => w.status === options.status);
			}

			// Apply active filter (default behavior)
			if (!options.all) {
				workflows = workflows.filter((w) =>
					['pending', 'initializing', 'running', 'paused'].includes(w.status)
				);
			}

			// Display results
			this.displayResults(workflows, options);
		} catch (error: any) {
			ui.displayError(error.message || 'Failed to list workflows');
			process.exit(1);
		}
	}

	private async initializeWorkflowManager(
		options: WorkflowListOptions
	): Promise<void> {
		if (!this.workflowManager) {
			const projectRoot = options.project || process.cwd();
			const worktreeBase = path.resolve(
				projectRoot,
				options.worktreeBase || '../task-worktrees'
			);

			const config: TaskExecutionManagerConfig = {
				projectRoot,
				maxConcurrent: 5,
				defaultTimeout: 60,
				worktreeBase,
				claudeExecutable: options.claude || 'claude',
				debug: false
			};

			this.workflowManager = new TaskExecutionManager(config);
			await this.workflowManager.initialize();
		}
	}

	private displayResults(
		workflows: WorkflowExecutionContext[],
		options: WorkflowListOptions
	): void {
		switch (options.format) {
			case 'json':
				this.displayJson(workflows);
				break;
			case 'compact':
				this.displayCompact(workflows);
				break;
			case 'text':
			default:
				this.displayText(workflows);
				break;
		}
	}

	private displayJson(workflows: WorkflowExecutionContext[]): void {
		console.log(
			JSON.stringify(
				{
					workflows: workflows.map((w) => ({
						workflowId: `workflow-${w.taskId}`,
						taskId: w.taskId,
						taskTitle: w.taskTitle,
						status: w.status,
						worktreePath: w.worktreePath,
						branchName: w.branchName,
						processId: w.processId,
						startedAt: w.startedAt,
						lastActivity: w.lastActivity,
						metadata: w.metadata
					})),
					total: workflows.length,
					timestamp: new Date().toISOString()
				},
				null,
				2
			)
		);
	}

	private displayCompact(workflows: WorkflowExecutionContext[]): void {
		if (workflows.length === 0) {
			console.log(chalk.gray('No workflows found'));
			return;
		}

		workflows.forEach((workflow) => {
			const workflowId = `workflow-${workflow.taskId}`;
			const statusDisplay = this.getStatusDisplay(workflow.status);
			const duration = this.formatDuration(
				workflow.startedAt,
				workflow.lastActivity
			);

			console.log(
				`${chalk.cyan(workflowId)} ${statusDisplay} ${workflow.taskTitle} ${chalk.gray(`(${duration})`)}`
			);
		});
	}

	private displayText(workflows: WorkflowExecutionContext[]): void {
		ui.displayBanner('Active Workflows');

		if (workflows.length === 0) {
			ui.displayWarning('No workflows found');
			console.log();
			console.log(chalk.blue('💡 Start a new workflow with:'));
			console.log(`  ${chalk.cyan('tm workflow start <task-id>')}`);
			return;
		}

		// Statistics
		console.log(chalk.blue.bold('\n📊 Statistics:\n'));
		const statusCounts = this.getStatusCounts(workflows);
		Object.entries(statusCounts).forEach(([status, count]) => {
			console.log(`  ${this.getStatusDisplay(status)}: ${chalk.cyan(count)}`);
		});

		// Workflows table
		console.log(chalk.blue.bold(`\n🔄 Workflows (${workflows.length}):\n`));

		const tableData = workflows.map((workflow) => {
			const workflowId = `workflow-${workflow.taskId}`;
			const duration = this.formatDuration(
				workflow.startedAt,
				workflow.lastActivity
			);

			return [
				chalk.cyan(workflowId),
				chalk.yellow(workflow.taskId),
				workflow.taskTitle.substring(0, 30) +
					(workflow.taskTitle.length > 30 ? '...' : ''),
				this.getStatusDisplay(workflow.status),
				workflow.processId
					? chalk.green(workflow.processId.toString())
					: chalk.gray('N/A'),
				chalk.gray(duration),
				chalk.gray(path.basename(workflow.worktreePath))
			];
		});

		console.log(
			ui.createTable(
				['Workflow ID', 'Task ID', 'Task Title', 'Status', 'PID', 'Duration', 'Worktree'],
				tableData
			)
		);

		// Running workflows actions
		const runningWorkflows = workflows.filter((w) => w.status === 'running');
		if (runningWorkflows.length > 0) {
			console.log(chalk.blue.bold('\n🚀 Quick Actions:\n'));
			runningWorkflows.slice(0, 3).forEach((workflow) => {
				const workflowId = `workflow-${workflow.taskId}`;
				console.log(
					`  • Attach to ${chalk.cyan(workflowId)}: ${chalk.gray(`tm workflow attach ${workflowId}`)}`
				);
			});

			if (runningWorkflows.length > 3) {
				console.log(
					`  ${chalk.gray(`... and ${runningWorkflows.length - 3} more`)}`
				);
			}
		}
	}

	private getStatusDisplay(status: string): string {
		const statusMap = {
			pending: { icon: '⏳', color: chalk.yellow },
			initializing: { icon: '🔄', color: chalk.blue },
			running: { icon: '🚀', color: chalk.green },
			// chalk has no .orange; magenta is the closest built-in color
			paused: { icon: '⏸️', color: chalk.magenta },
			completed: { icon: '✅', color: chalk.green },
			failed: { icon: '❌', color: chalk.red },
			cancelled: { icon: '🛑', color: chalk.gray },
			timeout: { icon: '⏰', color: chalk.red }
		};

		const statusInfo = statusMap[status as keyof typeof statusMap] || {
			icon: '❓',
			color: chalk.white
		};
		return `${statusInfo.icon} ${statusInfo.color(status)}`;
	}

	private getStatusCounts(
		workflows: WorkflowExecutionContext[]
	): Record<string, number> {
		const counts: Record<string, number> = {};

		workflows.forEach((workflow) => {
			counts[workflow.status] = (counts[workflow.status] || 0) + 1;
		});

		return counts;
	}

	private formatDuration(start: Date, end: Date): string {
		const diff = end.getTime() - start.getTime();
		const minutes = Math.floor(diff / (1000 * 60));
		const hours = Math.floor(minutes / 60);

		if (hours > 0) {
			return `${hours}h ${minutes % 60}m`;
		} else if (minutes > 0) {
			return `${minutes}m`;
		} else {
			return '<1m';
		}
	}

	async cleanup(): Promise<void> {
		if (this.workflowManager) {
			this.workflowManager.removeAllListeners();
		}
	}

	static register(program: Command, name?: string): WorkflowListCommand {
		const command = new WorkflowListCommand(name);
		program.addCommand(command);
		return command;
	}
}
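The duration formatting used for the list output truncates to whole hours and minutes and falls back to `<1m` for sub-minute spans. A standalone sketch of that same logic, extracted from the class for illustration:

```typescript
// Same rounding behavior as WorkflowListCommand.formatDuration:
// whole hours + remaining minutes, or '<1m' for sub-minute spans.
function formatDuration(start: Date, end: Date): string {
	const diff = end.getTime() - start.getTime();
	const minutes = Math.floor(diff / (1000 * 60));
	const hours = Math.floor(minutes / 60);

	if (hours > 0) return `${hours}h ${minutes % 60}m`;
	if (minutes > 0) return `${minutes}m`;
	return '<1m';
}

const t0 = new Date('2024-01-01T00:00:00Z');
console.log(formatDuration(t0, new Date('2024-01-01T01:30:00Z'))); // 1h 30m
console.log(formatDuration(t0, new Date('2024-01-01T00:05:30Z'))); // 5m
console.log(formatDuration(t0, new Date('2024-01-01T00:00:45Z'))); // <1m
```

Note that seconds are always dropped, so a 5m30s workflow reads as `5m`.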
239  apps/cli/src/commands/workflow/workflow-start.command.ts  Normal file
@@ -0,0 +1,239 @@
/**
 * @fileoverview Workflow Start Command
 * Start task execution in an isolated worktree with a Claude Code process
 */

import { Command } from 'commander';
import chalk from 'chalk';
import path from 'node:path';
import { createTaskMasterCore, type TaskMasterCore } from '@tm/core';
import {
	TaskExecutionManager,
	type TaskExecutionManagerConfig
} from '@tm/workflow-engine';
import * as ui from '../../utils/ui.js';

export interface WorkflowStartOptions {
	project?: string;
	branch?: string;
	// Commander delivers option values as strings; parsed with parseInt below
	timeout?: string;
	worktreeBase?: string;
	claude?: string;
	debug?: boolean;
	env?: string;
}

/**
 * WorkflowStartCommand - Start task execution workflow
 */
export class WorkflowStartCommand extends Command {
	private tmCore?: TaskMasterCore;
	private workflowManager?: TaskExecutionManager;

	constructor(name?: string) {
		super(name || 'start');

		this.description('Start task execution in isolated worktree')
			.argument('<task-id>', 'Task ID to execute')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.option('-b, --branch <name>', 'Custom branch name for worktree')
			.option('-t, --timeout <minutes>', 'Execution timeout in minutes', '60')
			.option(
				'--worktree-base <path>',
				'Base directory for worktrees',
				'../task-worktrees'
			)
			.option('--claude <path>', 'Claude Code executable path', 'claude')
			.option('--debug', 'Enable debug logging')
			.option('--env <vars>', 'Environment variables (KEY=VALUE,KEY2=VALUE2)')
			.action(async (taskId: string, options: WorkflowStartOptions) => {
				await this.executeCommand(taskId, options);
			});
	}

	private async executeCommand(
		taskId: string,
		options: WorkflowStartOptions
	): Promise<void> {
		try {
			// Initialize components
			await this.initializeCore(options.project || process.cwd());
			await this.initializeWorkflowManager(options);

			// Get task details
			const task = await this.getTask(taskId);
			if (!task) {
				throw new Error(`Task ${taskId} not found`);
			}

			// Check if the task already has an active workflow
			const existingWorkflow =
				this.workflowManager!.getWorkflowByTaskId(taskId);
			if (existingWorkflow) {
				ui.displayWarning(`Task ${taskId} already has an active workflow`);
				console.log(`Workflow ID: ${chalk.cyan('workflow-' + taskId)}`);
				console.log(
					`Status: ${this.getStatusDisplay(existingWorkflow.status)}`
				);
				console.log(`Worktree: ${chalk.gray(existingWorkflow.worktreePath)}`);
				return;
			}

			// Parse environment variables
			const env = this.parseEnvironmentVariables(options.env);

			// Display task info
			ui.displayBanner(`Starting Workflow for Task ${taskId}`);
			console.log(`${chalk.blue('Task:')} ${task.title}`);
			console.log(`${chalk.blue('Description:')} ${task.description}`);

			if (task.dependencies?.length) {
				console.log(
					`${chalk.blue('Dependencies:')} ${task.dependencies.join(', ')}`
				);
			}

			console.log(`${chalk.blue('Priority:')} ${task.priority || 'normal'}`);
			console.log();

			// Start workflow
			ui.displaySpinner('Creating worktree and starting Claude Code process...');

			const workflowId = await this.workflowManager!.startTaskExecution(task, {
				branchName: options.branch,
				timeout: parseInt(options.timeout || '60', 10),
				env
			});

			const workflow = this.workflowManager!.getWorkflowStatus(workflowId);

			ui.displaySuccess('Workflow started successfully!');
			console.log();
			console.log(`${chalk.green('✓')} Workflow ID: ${chalk.cyan(workflowId)}`);
			console.log(`${chalk.green('✓')} Worktree: ${chalk.gray(workflow?.worktreePath)}`);
			console.log(`${chalk.green('✓')} Branch: ${chalk.gray(workflow?.branchName)}`);
			console.log(`${chalk.green('✓')} Process ID: ${chalk.gray(workflow?.processId)}`);
			console.log();

			// Display next steps
			console.log(chalk.blue.bold('📋 Next Steps:'));
			console.log(`  • Monitor: ${chalk.cyan(`tm workflow status ${workflowId}`)}`);
			console.log(`  • Attach: ${chalk.cyan(`tm workflow attach ${workflowId}`)}`);
			console.log(`  • Stop: ${chalk.cyan(`tm workflow stop ${workflowId}`)}`);
			console.log();

			// Set up event listeners for real-time updates
			this.setupEventListeners();
		} catch (error: any) {
			ui.displayError(error.message || 'Failed to start workflow');

			if (options.debug && error.stack) {
				console.error(chalk.gray(error.stack));
			}

			process.exit(1);
		}
	}

	private async initializeCore(projectRoot: string): Promise<void> {
		if (!this.tmCore) {
			this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
		}
	}

	private async initializeWorkflowManager(
		options: WorkflowStartOptions
	): Promise<void> {
		if (!this.workflowManager) {
			const projectRoot = options.project || process.cwd();
			const worktreeBase = path.resolve(
				projectRoot,
				options.worktreeBase || '../task-worktrees'
			);

			const config: TaskExecutionManagerConfig = {
				projectRoot,
				maxConcurrent: 5,
				defaultTimeout: parseInt(options.timeout || '60', 10),
				worktreeBase,
				claudeExecutable: options.claude || 'claude',
				debug: options.debug || false
			};

			this.workflowManager = new TaskExecutionManager(config);
			await this.workflowManager.initialize();
		}
	}

	private async getTask(taskId: string) {
		if (!this.tmCore) {
			throw new Error('TaskMasterCore not initialized');
		}

		const result = await this.tmCore.getTaskList({});
		return result.tasks.find((task) => task.id === taskId);
	}

	private parseEnvironmentVariables(
		envString?: string
	): Record<string, string> | undefined {
		if (!envString) return undefined;

		const env: Record<string, string> = {};

		for (const pair of envString.split(',')) {
			// Split on the first '=' only, so values may themselves contain '='
			const [key, ...valueParts] = pair.trim().split('=');
			if (key && valueParts.length > 0) {
				env[key] = valueParts.join('=');
			}
		}

		return Object.keys(env).length > 0 ? env : undefined;
	}

	private getStatusDisplay(status: string): string {
		const colors = {
			pending: chalk.yellow,
			initializing: chalk.blue,
			running: chalk.green,
			// chalk has no .orange; magenta is the closest built-in color
			paused: chalk.magenta,
			completed: chalk.green,
			failed: chalk.red,
			cancelled: chalk.gray,
			timeout: chalk.red
		};

		const color = colors[status as keyof typeof colors] || chalk.white;
		return color(status);
	}

	private setupEventListeners(): void {
		if (!this.workflowManager) return;

		this.workflowManager.on('workflow.started', (event) => {
			console.log(`${chalk.green('🚀')} Workflow started: ${event.workflowId}`);
		});

		this.workflowManager.on('process.output', (event) => {
			if (event.data?.stream === 'stdout') {
				console.log(`${chalk.blue('[OUT]')} ${event.data.data.trim()}`);
			} else if (event.data?.stream === 'stderr') {
				console.log(`${chalk.red('[ERR]')} ${event.data.data.trim()}`);
			}
		});

		this.workflowManager.on('workflow.completed', (event) => {
			console.log(`${chalk.green('✅')} Workflow completed: ${event.workflowId}`);
		});

		this.workflowManager.on('workflow.failed', (event) => {
			console.log(`${chalk.red('❌')} Workflow failed: ${event.workflowId}`);
			if (event.error) {
				console.log(`${chalk.red('Error:')} ${event.error.message}`);
			}
		});
	}

	async cleanup(): Promise<void> {
		if (this.workflowManager) {
			// Don't clean up workflows here, just disconnect listeners
			this.workflowManager.removeAllListeners();
		}

		if (this.tmCore) {
			await this.tmCore.close();
			this.tmCore = undefined;
		}
	}

	static register(program: Command, name?: string): WorkflowStartCommand {
		const command = new WorkflowStartCommand(name);
		program.addCommand(command);
		return command;
	}
}
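The `--env` parsing above splits each pair on the first `=` only, so values may themselves contain `=` (tokens, base64 strings). A standalone sketch of that logic outside the class, for illustration:

```typescript
// 'KEY=VALUE,KEY2=VALUE2' → { KEY: 'VALUE', KEY2: 'VALUE2' }.
// Each pair is split on the first '=' only, so a value like 'x=y'
// survives intact; empty or unparseable input yields undefined.
function parseEnv(envString?: string): Record<string, string> | undefined {
	if (!envString) return undefined;

	const env: Record<string, string> = {};
	for (const pair of envString.split(',')) {
		const [key, ...valueParts] = pair.trim().split('=');
		if (key && valueParts.length > 0) {
			env[key] = valueParts.join('=');
		}
	}
	return Object.keys(env).length > 0 ? env : undefined;
}

console.log(parseEnv('API_KEY=abc123,TOKEN=x=y')); // { API_KEY: 'abc123', TOKEN: 'x=y' }
console.log(parseEnv('')); // undefined
```

One limitation of the comma-delimited format: values containing commas cannot be expressed.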
339  apps/cli/src/commands/workflow/workflow-status.command.ts  Normal file
@@ -0,0 +1,339 @@
/**
 * @fileoverview Workflow Status Command
 * Show detailed status of a specific workflow
 */

import { Command } from 'commander';
import chalk from 'chalk';
import path from 'node:path';
import {
	TaskExecutionManager,
	type TaskExecutionManagerConfig
} from '@tm/workflow-engine';
import * as ui from '../../utils/ui.js';

export interface WorkflowStatusOptions {
	project?: string;
	worktreeBase?: string;
	claude?: string;
	watch?: boolean;
	format?: 'text' | 'json';
}

/**
 * WorkflowStatusCommand - Show workflow execution status
 */
export class WorkflowStatusCommand extends Command {
	private workflowManager?: TaskExecutionManager;

	constructor(name?: string) {
		super(name || 'status');

		this.description('Show detailed status of a workflow execution')
			.argument('<workflow-id>', 'Workflow ID or task ID to check')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.option(
				'--worktree-base <path>',
				'Base directory for worktrees',
				'../task-worktrees'
			)
			.option('--claude <path>', 'Claude Code executable path', 'claude')
			.option('-w, --watch', 'Watch for status changes (refresh every 2 seconds)')
			.option('-f, --format <format>', 'Output format (text, json)', 'text')
			.action(async (workflowId: string, options: WorkflowStatusOptions) => {
				await this.executeCommand(workflowId, options);
			});
	}

	private async executeCommand(
		workflowId: string,
		options: WorkflowStatusOptions
	): Promise<void> {
		try {
			// Initialize workflow manager
			await this.initializeWorkflowManager(options);

			if (options.watch) {
				await this.watchWorkflowStatus(workflowId, options);
			} else {
				await this.showWorkflowStatus(workflowId, options);
			}
		} catch (error: any) {
			ui.displayError(error.message || 'Failed to get workflow status');
			process.exit(1);
		}
	}

	private async initializeWorkflowManager(
		options: WorkflowStatusOptions
	): Promise<void> {
		if (!this.workflowManager) {
			const projectRoot = options.project || process.cwd();
			const worktreeBase = path.resolve(
				projectRoot,
				options.worktreeBase || '../task-worktrees'
			);

			const config: TaskExecutionManagerConfig = {
				projectRoot,
				maxConcurrent: 5,
				defaultTimeout: 60,
				worktreeBase,
				claudeExecutable: options.claude || 'claude',
				debug: false
			};

			this.workflowManager = new TaskExecutionManager(config);
			await this.workflowManager.initialize();
		}
	}

	private async showWorkflowStatus(
		workflowId: string,
		options: WorkflowStatusOptions
	): Promise<void> {
		// Try to find the workflow by workflow ID or task ID
		let workflow = this.workflowManager!.getWorkflowStatus(workflowId);

		if (!workflow) {
			// Try as task ID
			workflow = this.workflowManager!.getWorkflowByTaskId(workflowId);
		}

		if (!workflow) {
			throw new Error(`Workflow not found: ${workflowId}`);
		}

		if (options.format === 'json') {
			this.displayJsonStatus(workflow);
		} else {
			this.displayTextStatus(workflow);
		}
	}

	private async watchWorkflowStatus(
		workflowId: string,
		options: WorkflowStatusOptions
	): Promise<void> {
		console.log(
			chalk.blue.bold('👀 Watching workflow status (Press Ctrl+C to exit)\n')
		);

		let lastStatus = '';
||||
let updateCount = 0;
|
||||
|
||||
const updateStatus = async () => {
|
||||
try {
|
||||
// Clear screen and move cursor to top
|
||||
if (updateCount > 0) {
|
||||
process.stdout.write('\x1b[2J\x1b[0f');
|
||||
}
|
||||
|
||||
let workflow = this.workflowManager!.getWorkflowStatus(workflowId);
|
||||
|
||||
if (!workflow) {
|
||||
workflow = this.workflowManager!.getWorkflowByTaskId(workflowId);
|
||||
}
|
||||
|
||||
if (!workflow) {
|
||||
console.log(chalk.red(`Workflow not found: ${workflowId}`));
|
||||
return;
|
||||
}
|
||||
|
||||
// Display header with timestamp
|
||||
console.log(chalk.blue.bold('👀 Watching Workflow Status'));
|
||||
console.log(chalk.gray(`Last updated: ${new Date().toLocaleTimeString()}\n`));
|
||||
|
||||
this.displayTextStatus(workflow);
|
||||
|
||||
// Check if workflow has ended
|
||||
if (['completed', 'failed', 'cancelled', 'timeout'].includes(workflow.status)) {
|
||||
console.log(chalk.yellow('\n⚠️ Workflow has ended. Stopping watch mode.'));
|
||||
return;
|
||||
}
|
||||
|
||||
updateCount++;
|
||||
|
||||
} catch (error) {
|
||||
console.error(chalk.red('Error updating status:'), error);
|
||||
}
|
||||
};
|
||||
|
||||
// Initial display
|
||||
await updateStatus();
|
||||
|
||||
// Setup interval for updates
|
||||
const interval = setInterval(updateStatus, 2000);
|
||||
|
||||
// Handle Ctrl+C
|
||||
process.on('SIGINT', () => {
|
||||
clearInterval(interval);
|
||||
console.log(chalk.yellow('\n👋 Stopped watching workflow status'));
|
||||
process.exit(0);
|
||||
});
|
||||
|
||||
// Keep the process alive
|
||||
await new Promise(() => {});
|
||||
}
|
||||
|
||||
private displayJsonStatus(workflow: any): void {
|
||||
const status = {
|
||||
workflowId: `workflow-${workflow.taskId}`,
|
||||
taskId: workflow.taskId,
|
||||
taskTitle: workflow.taskTitle,
|
||||
taskDescription: workflow.taskDescription,
|
||||
status: workflow.status,
|
||||
worktreePath: workflow.worktreePath,
|
||||
branchName: workflow.branchName,
|
||||
processId: workflow.processId,
|
||||
startedAt: workflow.startedAt,
|
||||
lastActivity: workflow.lastActivity,
|
||||
duration: this.calculateDuration(workflow.startedAt, workflow.lastActivity),
|
||||
metadata: workflow.metadata
|
||||
};
|
||||
|
||||
console.log(JSON.stringify(status, null, 2));
|
||||
}
|
||||
|
||||
private displayTextStatus(workflow: any): void {
|
||||
const workflowId = `workflow-${workflow.taskId}`;
|
||||
const duration = this.formatDuration(workflow.startedAt, workflow.lastActivity);
|
||||
|
||||
ui.displayBanner(`Workflow Status: ${workflowId}`);
|
||||
|
||||
// Basic information
|
||||
console.log(chalk.blue.bold('\n📋 Basic Information:\n'));
|
||||
console.log(` Workflow ID: ${chalk.cyan(workflowId)}`);
|
||||
console.log(` Task ID: ${chalk.cyan(workflow.taskId)}`);
|
||||
console.log(` Task Title: ${workflow.taskTitle}`);
|
||||
console.log(` Status: ${this.getStatusDisplay(workflow.status)}`);
|
||||
console.log(` Duration: ${chalk.gray(duration)}`);
|
||||
|
||||
// Task details
|
||||
if (workflow.taskDescription) {
|
||||
console.log(chalk.blue.bold('\n📝 Task Details:\n'));
|
||||
console.log(` ${workflow.taskDescription}`);
|
||||
}
|
||||
|
||||
// Process information
|
||||
console.log(chalk.blue.bold('\n⚙️ Process Information:\n'));
|
||||
console.log(` Process ID: ${workflow.processId ? chalk.green(workflow.processId) : chalk.gray('N/A')}`);
|
||||
console.log(` Worktree: ${chalk.gray(workflow.worktreePath)}`);
|
||||
console.log(` Branch: ${chalk.gray(workflow.branchName)}`);
|
||||
|
||||
// Timing information
|
||||
console.log(chalk.blue.bold('\n⏰ Timing:\n'));
|
||||
console.log(` Started: ${chalk.gray(workflow.startedAt.toLocaleString())}`);
|
||||
console.log(` Last Activity: ${chalk.gray(workflow.lastActivity.toLocaleString())}`);
|
||||
|
||||
// Metadata
|
||||
if (workflow.metadata && Object.keys(workflow.metadata).length > 0) {
|
||||
console.log(chalk.blue.bold('\n🔖 Metadata:\n'));
|
||||
Object.entries(workflow.metadata).forEach(([key, value]) => {
|
||||
console.log(` ${key}: ${chalk.gray(String(value))}`);
|
||||
});
|
||||
}
|
||||
|
||||
// Status-specific information
|
||||
this.displayStatusSpecificInfo(workflow);
|
||||
|
||||
// Actions
|
||||
this.displayAvailableActions(workflow);
|
||||
}
|
||||
|
||||
private displayStatusSpecificInfo(workflow: any): void {
|
||||
const workflowId = `workflow-${workflow.taskId}`;
|
||||
|
||||
switch (workflow.status) {
|
||||
case 'running':
|
||||
console.log(chalk.blue.bold('\n🚀 Running Status:\n'));
|
||||
console.log(` ${chalk.green('●')} Process is actively executing`);
|
||||
console.log(` ${chalk.blue('ℹ')} Monitor output with: ${chalk.cyan(`tm workflow attach ${workflowId}`)}`);
|
||||
break;
|
||||
|
||||
case 'paused':
|
||||
console.log(chalk.blue.bold('\n⏸️ Paused Status:\n'));
|
||||
console.log(` ${chalk.yellow('●')} Workflow is paused`);
|
||||
console.log(` ${chalk.blue('ℹ')} Resume with: ${chalk.cyan(`tm workflow resume ${workflowId}`)}`);
|
||||
break;
|
||||
|
||||
case 'completed':
|
||||
console.log(chalk.blue.bold('\n✅ Completed Status:\n'));
|
||||
console.log(` ${chalk.green('●')} Workflow completed successfully`);
|
||||
console.log(` ${chalk.blue('ℹ')} Resources have been cleaned up`);
|
||||
break;
|
||||
|
||||
case 'failed':
|
||||
console.log(chalk.blue.bold('\n❌ Failed Status:\n'));
|
||||
console.log(` ${chalk.red('●')} Workflow execution failed`);
|
||||
console.log(` ${chalk.blue('ℹ')} Check logs for error details`);
|
||||
break;
|
||||
|
||||
case 'initializing':
|
||||
console.log(chalk.blue.bold('\n🔄 Initializing Status:\n'));
|
||||
console.log(` ${chalk.blue('●')} Setting up worktree and process`);
|
||||
console.log(` ${chalk.blue('ℹ')} This should complete shortly`);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
private displayAvailableActions(workflow: any): void {
|
||||
const workflowId = `workflow-${workflow.taskId}`;
|
||||
console.log(chalk.blue.bold('\n🎯 Available Actions:\n'));
|
||||
|
||||
switch (workflow.status) {
|
||||
case 'running':
|
||||
console.log(` • Attach: ${chalk.cyan(`tm workflow attach ${workflowId}`)}`);
|
||||
console.log(` • Pause: ${chalk.cyan(`tm workflow pause ${workflowId}`)}`);
|
||||
console.log(` • Stop: ${chalk.cyan(`tm workflow stop ${workflowId}`)}`);
|
||||
break;
|
||||
|
||||
case 'paused':
|
||||
console.log(` • Resume: ${chalk.cyan(`tm workflow resume ${workflowId}`)}`);
|
||||
console.log(` • Stop: ${chalk.cyan(`tm workflow stop ${workflowId}`)}`);
|
||||
break;
|
||||
|
||||
case 'pending':
|
||||
case 'initializing':
|
||||
console.log(` • Stop: ${chalk.cyan(`tm workflow stop ${workflowId}`)}`);
|
||||
break;
|
||||
|
||||
case 'completed':
|
||||
case 'failed':
|
||||
case 'cancelled':
|
||||
console.log(` • View logs: ${chalk.cyan(`tm workflow logs ${workflowId}`)}`);
|
||||
console.log(` • Start new: ${chalk.cyan(`tm workflow start ${workflow.taskId}`)}`);
|
||||
break;
|
||||
}
|
||||
|
||||
console.log(` • List all: ${chalk.cyan('tm workflow list')}`);
|
||||
}
|
||||
|
||||
private getStatusDisplay(status: string): string {
|
||||
const statusMap = {
|
||||
pending: { icon: '⏳', color: chalk.yellow },
|
||||
initializing: { icon: '🔄', color: chalk.blue },
|
||||
running: { icon: '🚀', color: chalk.green },
|
||||
paused: { icon: '⏸️', color: chalk.orange },
|
||||
completed: { icon: '✅', color: chalk.green },
|
||||
failed: { icon: '❌', color: chalk.red },
|
||||
cancelled: { icon: '🛑', color: chalk.gray },
|
||||
timeout: { icon: '⏰', color: chalk.red }
|
||||
};
|
||||
|
||||
const statusInfo = statusMap[status as keyof typeof statusMap] || { icon: '❓', color: chalk.white };
|
||||
return `${statusInfo.icon} ${statusInfo.color(status)}`;
|
||||
}
|
||||
|
||||
private formatDuration(start: Date, end: Date): string {
|
||||
const diff = end.getTime() - start.getTime();
|
||||
const minutes = Math.floor(diff / (1000 * 60));
|
||||
const hours = Math.floor(minutes / 60);
|
||||
const seconds = Math.floor((diff % (1000 * 60)) / 1000);
|
||||
|
||||
if (hours > 0) {
|
||||
return `${hours}h ${minutes % 60}m ${seconds}s`;
|
||||
} else if (minutes > 0) {
|
||||
return `${minutes}m ${seconds}s`;
|
||||
} else {
|
||||
return `${seconds}s`;
|
||||
}
|
||||
}
|
||||
|
||||
private calculateDuration(start: Date, end: Date): number {
|
||||
return Math.floor((end.getTime() - start.getTime()) / 1000);
|
||||
}
|
||||
|
||||
async cleanup(): Promise<void> {
|
||||
if (this.workflowManager) {
|
||||
this.workflowManager.removeAllListeners();
|
||||
}
|
||||
}
|
||||
|
||||
static register(program: Command, name?: string): WorkflowStatusCommand {
|
||||
const command = new WorkflowStatusCommand(name);
|
||||
program.addCommand(command);
|
||||
return command;
|
||||
}
|
||||
}
|
||||
260	apps/cli/src/commands/workflow/workflow-stop.command.ts	Normal file
@@ -0,0 +1,260 @@
/**
 * @fileoverview Workflow Stop Command
 * Stop and clean up workflow execution
 */

import { Command } from 'commander';
import chalk from 'chalk';
import path from 'node:path';
import {
	TaskExecutionManager,
	type TaskExecutionManagerConfig
} from '@tm/workflow-engine';
import * as ui from '../../utils/ui.js';

export interface WorkflowStopOptions {
	project?: string;
	worktreeBase?: string;
	claude?: string;
	force?: boolean;
	all?: boolean;
}

/**
 * WorkflowStopCommand - Stop workflow execution
 */
export class WorkflowStopCommand extends Command {
	private workflowManager?: TaskExecutionManager;

	constructor(name?: string) {
		super(name || 'stop');

		this.description('Stop workflow execution and clean up resources')
			.argument('[workflow-id]', 'Workflow ID to stop (or task ID)')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.option(
				'--worktree-base <path>',
				'Base directory for worktrees',
				'../task-worktrees'
			)
			.option('--claude <path>', 'Claude Code executable path', 'claude')
			.option('-f, --force', 'Force stop (kill process immediately)')
			.option('--all', 'Stop all running workflows')
			.action(
				async (
					workflowId: string | undefined,
					options: WorkflowStopOptions
				) => {
					await this.executeCommand(workflowId, options);
				}
			);
	}

	private async executeCommand(
		workflowId: string | undefined,
		options: WorkflowStopOptions
	): Promise<void> {
		try {
			// Initialize workflow manager
			await this.initializeWorkflowManager(options);

			if (options.all) {
				await this.stopAllWorkflows(options);
			} else if (workflowId) {
				await this.stopSingleWorkflow(workflowId, options);
			} else {
				ui.displayError('Please specify a workflow ID or use --all flag');
				process.exit(1);
			}
		} catch (error: any) {
			ui.displayError(error.message || 'Failed to stop workflow');
			process.exit(1);
		}
	}

	private async initializeWorkflowManager(
		options: WorkflowStopOptions
	): Promise<void> {
		if (!this.workflowManager) {
			const projectRoot = options.project || process.cwd();
			const worktreeBase = path.resolve(
				projectRoot,
				options.worktreeBase || '../task-worktrees'
			);

			const config: TaskExecutionManagerConfig = {
				projectRoot,
				maxConcurrent: 5,
				defaultTimeout: 60,
				worktreeBase,
				claudeExecutable: options.claude || 'claude',
				debug: false
			};

			this.workflowManager = new TaskExecutionManager(config);
			await this.workflowManager.initialize();
		}
	}

	private async stopSingleWorkflow(
		workflowId: string,
		options: WorkflowStopOptions
	): Promise<void> {
		// Try to find workflow by ID or task ID
		let workflow = this.workflowManager!.getWorkflowStatus(workflowId);

		if (!workflow) {
			// Try as task ID
			workflow = this.workflowManager!.getWorkflowByTaskId(workflowId);
		}

		if (!workflow) {
			throw new Error(`Workflow not found: ${workflowId}`);
		}

		const actualWorkflowId = `workflow-${workflow.taskId}`;

		// Display workflow info
		console.log(chalk.blue.bold(`🛑 Stopping Workflow: ${actualWorkflowId}`));
		console.log(`${chalk.blue('Task:')} ${workflow.taskTitle}`);
		console.log(
			`${chalk.blue('Status:')} ${this.getStatusDisplay(workflow.status)}`
		);
		console.log(
			`${chalk.blue('Worktree:')} ${chalk.gray(workflow.worktreePath)}`
		);

		if (workflow.processId) {
			console.log(
				`${chalk.blue('Process ID:')} ${chalk.gray(workflow.processId)}`
			);
		}

		console.log();

		// Confirm if not forced
		if (!options.force && ['running', 'paused'].includes(workflow.status)) {
			const shouldProceed = await ui.confirm(
				`Are you sure you want to stop this ${workflow.status} workflow?`
			);

			if (!shouldProceed) {
				console.log(chalk.gray('Operation cancelled'));
				return;
			}
		}

		// Stop the workflow
		ui.displaySpinner('Stopping workflow and cleaning up resources...');

		await this.workflowManager!.stopTaskExecution(
			actualWorkflowId,
			options.force
		);

		ui.displaySuccess('Workflow stopped successfully!');
		console.log();
		console.log(`${chalk.green('✓')} Process terminated`);
		console.log(`${chalk.green('✓')} Worktree cleaned up`);
		console.log(`${chalk.green('✓')} State updated`);
	}

	private async stopAllWorkflows(options: WorkflowStopOptions): Promise<void> {
		const workflows = this.workflowManager!.listWorkflows();
		const activeWorkflows = workflows.filter((w) =>
			['pending', 'initializing', 'running', 'paused'].includes(w.status)
		);

		if (activeWorkflows.length === 0) {
			ui.displayWarning('No active workflows to stop');
			return;
		}

		console.log(
			chalk.blue.bold(`🛑 Stopping ${activeWorkflows.length} Active Workflows`)
		);
		console.log();

		// List workflows to be stopped
		activeWorkflows.forEach((workflow) => {
			console.log(
				`  • ${chalk.cyan(`workflow-${workflow.taskId}`)} - ${workflow.taskTitle} ${this.getStatusDisplay(workflow.status)}`
			);
		});
		console.log();

		// Confirm if not forced
		if (!options.force) {
			const shouldProceed = await ui.confirm(
				`Are you sure you want to stop all ${activeWorkflows.length} active workflows?`
			);

			if (!shouldProceed) {
				console.log(chalk.gray('Operation cancelled'));
				return;
			}
		}

		// Stop all workflows
		ui.displaySpinner('Stopping all workflows...');

		let stopped = 0;
		let failed = 0;

		for (const workflow of activeWorkflows) {
			try {
				const workflowId = `workflow-${workflow.taskId}`;
				await this.workflowManager!.stopTaskExecution(
					workflowId,
					options.force
				);
				stopped++;
			} catch (error) {
				console.error(
					`${chalk.red('✗')} Failed to stop workflow ${workflow.taskId}: ${error}`
				);
				failed++;
			}
		}

		console.log();
		if (stopped > 0) {
			ui.displaySuccess(`Successfully stopped ${stopped} workflows`);
		}

		if (failed > 0) {
			ui.displayWarning(`Failed to stop ${failed} workflows`);
		}
	}

	private getStatusDisplay(status: string): string {
		const statusMap = {
			pending: { icon: '⏳', color: chalk.yellow },
			initializing: { icon: '🔄', color: chalk.blue },
			running: { icon: '🚀', color: chalk.green },
			paused: { icon: '⏸️', color: chalk.hex('#FFA500') },
			completed: { icon: '✅', color: chalk.green },
			failed: { icon: '❌', color: chalk.red },
			cancelled: { icon: '🛑', color: chalk.gray },
			timeout: { icon: '⏰', color: chalk.red }
		};

		const statusInfo = statusMap[status as keyof typeof statusMap] || {
			icon: '❓',
			color: chalk.white
		};
		return `${statusInfo.icon} ${statusInfo.color(status)}`;
	}

	async cleanup(): Promise<void> {
		if (this.workflowManager) {
			this.workflowManager.removeAllListeners();
		}
	}

	static register(program: Command, name?: string): WorkflowStopCommand {
		const command = new WorkflowStopCommand(name);
		program.addCommand(command);
		return command;
	}
}
24	apps/cli/src/index.ts	Normal file
@@ -0,0 +1,24 @@
/**
 * @fileoverview Main entry point for @tm/cli package
 * Exports all public APIs for the CLI presentation layer
 */

// Commands
export { ListTasksCommand } from './commands/list.command.js';
export { AuthCommand } from './commands/auth.command.js';
export { WorkflowCommand } from './commands/workflow.command.js';
export { ContextCommand } from './commands/context.command.js';

// Command registry
export { registerAllCommands } from './commands/index.js';

// UI utilities (for other commands to use)
export * as ui from './utils/ui.js';

// Re-export commonly used types from tm-core
export type {
	Task,
	TaskStatus,
	TaskPriority,
	TaskMasterCore
} from '@tm/core';
384	apps/cli/src/utils/ui.ts	Normal file
@@ -0,0 +1,384 @@
/**
 * @fileoverview UI utilities for Task Master CLI
 * Provides formatting, display, and visual components for the command line interface
 */

import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import type { Task, TaskStatus, TaskPriority } from '@tm/core/types';

/**
 * Get colored status display with ASCII icons (matches scripts/modules/ui.js style)
 */
export function getStatusWithColor(
	status: TaskStatus,
	forTable: boolean = false
): string {
	const statusConfig = {
		done: {
			color: chalk.green,
			icon: String.fromCharCode(8730),
			tableIcon: String.fromCharCode(8730)
		}, // √
		pending: { color: chalk.yellow, icon: 'o', tableIcon: 'o' },
		'in-progress': {
			color: chalk.hex('#FFA500'),
			icon: String.fromCharCode(9654),
			tableIcon: '>'
		}, // ▶
		deferred: { color: chalk.gray, icon: 'x', tableIcon: 'x' },
		blocked: { color: chalk.red, icon: '!', tableIcon: '!' },
		review: { color: chalk.magenta, icon: '?', tableIcon: '?' },
		cancelled: { color: chalk.gray, icon: 'X', tableIcon: 'X' }
	};

	const config = statusConfig[status] || {
		color: chalk.red,
		icon: 'X',
		tableIcon: 'X'
	};

	// Use simple ASCII characters for stable display
	const simpleIcons = {
		done: String.fromCharCode(8730), // √
		pending: 'o',
		'in-progress': '>',
		deferred: 'x',
		blocked: '!',
		review: '?',
		cancelled: 'X'
	};

	const icon = forTable ? simpleIcons[status] || 'X' : config.icon;
	return config.color(`${icon} ${status}`);
}

/**
 * Get colored priority display
 */
export function getPriorityWithColor(priority: TaskPriority): string {
	const priorityColors: Record<TaskPriority, (text: string) => string> = {
		critical: chalk.red.bold,
		high: chalk.red,
		medium: chalk.yellow,
		low: chalk.gray
	};

	const colorFn = priorityColors[priority] || chalk.white;
	return colorFn(priority);
}

/**
 * Get colored complexity display
 */
export function getComplexityWithColor(complexity: number | string): string {
	const score =
		typeof complexity === 'string' ? parseInt(complexity, 10) : complexity;

	if (isNaN(score)) {
		return chalk.gray('N/A');
	}

	if (score >= 8) {
		return chalk.red.bold(`${score} (High)`);
	} else if (score >= 5) {
		return chalk.yellow(`${score} (Medium)`);
	} else {
		return chalk.green(`${score} (Low)`);
	}
}

/**
 * Truncate text to specified length
 */
export function truncate(text: string, maxLength: number): string {
	if (text.length <= maxLength) {
		return text;
	}
	return text.substring(0, maxLength - 3) + '...';
}

/**
 * Create a progress bar
 */
export function createProgressBar(
	completed: number,
	total: number,
	width: number = 30
): string {
	if (total === 0) {
		return chalk.gray('No tasks');
	}

	const percentage = Math.round((completed / total) * 100);
	const filled = Math.round((completed / total) * width);
	const empty = width - filled;

	const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);

	return `${bar} ${chalk.cyan(`${percentage}%`)} (${completed}/${total})`;
}

/**
 * Display a fancy banner
 */
export function displayBanner(title: string = 'Task Master'): void {
	console.log(
		boxen(chalk.white.bold(title), {
			padding: 1,
			margin: { top: 1, bottom: 1 },
			borderStyle: 'round',
			borderColor: 'blue',
			textAlignment: 'center'
		})
	);
}

/**
 * Display an error message (matches scripts/modules/ui.js style)
 */
export function displayError(message: string, details?: string): void {
	console.error(
		boxen(
			chalk.red.bold('X Error: ') +
				chalk.white(message) +
				(details ? '\n\n' + chalk.gray(details) : ''),
			{
				padding: 1,
				borderStyle: 'round',
				borderColor: 'red'
			}
		)
	);
}

/**
 * Display a success message
 */
export function displaySuccess(message: string): void {
	console.log(
		boxen(
			chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
			{
				padding: 1,
				borderStyle: 'round',
				borderColor: 'green'
			}
		)
	);
}

/**
 * Display a warning message
 */
export function displayWarning(message: string): void {
	console.log(
		boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
			padding: 1,
			borderStyle: 'round',
			borderColor: 'yellow'
		})
	);
}

/**
 * Display info message
 */
export function displayInfo(message: string): void {
	console.log(
		boxen(chalk.blue.bold('i ') + chalk.white(message), {
			padding: 1,
			borderStyle: 'round',
			borderColor: 'blue'
		})
	);
}

/**
 * Format dependencies with their status
 */
export function formatDependenciesWithStatus(
	dependencies: string[] | number[],
	tasks: Task[]
): string {
	if (!dependencies || dependencies.length === 0) {
		return chalk.gray('none');
	}

	const taskMap = new Map(tasks.map((t) => [t.id.toString(), t]));

	return dependencies
		.map((depId) => {
			const task = taskMap.get(depId.toString());
			if (!task) {
				return chalk.red(`${depId} (not found)`);
			}

			const statusIcon =
				task.status === 'done'
					? '✓'
					: task.status === 'in-progress'
						? '►'
						: '○';

			return `${depId}${statusIcon}`;
		})
		.join(', ');
}

/**
 * Create a task table for display
 */
export function createTaskTable(
	tasks: Task[],
	options?: {
		showSubtasks?: boolean;
		showComplexity?: boolean;
		showDependencies?: boolean;
	}
): string {
	const {
		showSubtasks = false,
		showComplexity = false,
		showDependencies = true
	} = options || {};

	// Calculate dynamic column widths based on terminal width
	const terminalWidth = process.stdout.columns || 100;
	const baseColWidths = showComplexity
		? [8, Math.floor(terminalWidth * 0.35), 18, 12, 15, 12] // ID, Title, Status, Priority, Dependencies, Complexity
		: [8, Math.floor(terminalWidth * 0.4), 18, 12, 20]; // ID, Title, Status, Priority, Dependencies

	const headers = [
		chalk.blue.bold('ID'),
		chalk.blue.bold('Title'),
		chalk.blue.bold('Status'),
		chalk.blue.bold('Priority')
	];
	const colWidths = baseColWidths.slice(0, 4);

	if (showDependencies) {
		headers.push(chalk.blue.bold('Dependencies'));
		colWidths.push(baseColWidths[4]);
	}

	if (showComplexity) {
		headers.push(chalk.blue.bold('Complexity'));
		colWidths.push(baseColWidths[5] || 12);
	}

	const table = new Table({
		head: headers,
		style: { head: [], border: [] },
		colWidths,
		wordWrap: true
	});

	tasks.forEach((task) => {
		const row: string[] = [
			chalk.cyan(task.id.toString()),
			truncate(task.title, colWidths[1] - 3),
			getStatusWithColor(task.status, true), // Use table version
			getPriorityWithColor(task.priority)
		];

		if (showDependencies) {
			row.push(formatDependenciesWithStatus(task.dependencies, tasks));
		}

		if (showComplexity && 'complexity' in task) {
			row.push(getComplexityWithColor(task.complexity as number | string));
		}

		table.push(row);

		// Add subtasks if requested
		if (showSubtasks && task.subtasks && task.subtasks.length > 0) {
			task.subtasks.forEach((subtask) => {
				const subRow: string[] = [
					chalk.gray(`  └─ ${subtask.id}`),
					chalk.gray(truncate(subtask.title, colWidths[1] - 6)),
					chalk.gray(getStatusWithColor(subtask.status, true)),
					chalk.gray(subtask.priority || 'medium')
				];

				if (showDependencies) {
					subRow.push(
						chalk.gray(
							subtask.dependencies && subtask.dependencies.length > 0
								? subtask.dependencies.map((dep) => String(dep)).join(', ')
								: 'None'
						)
					);
				}

				if (showComplexity) {
					subRow.push(chalk.gray('--'));
				}

				table.push(subRow);
			});
		}
	});

	return table.toString();
}

/**
 * Display a spinner with message (mock implementation)
 */
export function displaySpinner(message: string): void {
	console.log(chalk.blue('◐'), chalk.gray(message));
}

/**
 * Simple confirmation prompt
 */
export async function confirm(message: string): Promise<boolean> {
	// For now, return true. In a real implementation, use inquirer
	console.log(chalk.yellow('?'), chalk.white(message), chalk.gray('(y/n)'));

	// Mock implementation - in production this would use inquirer
	return new Promise((resolve) => {
		process.stdin.once('data', (data) => {
			const answer = data.toString().trim().toLowerCase();
			resolve(answer === 'y' || answer === 'yes');
		});
		process.stdin.resume();
	});
}

/**
 * Create a generic table
 */
export function createTable(headers: string[], rows: string[][]): string {
	const table = new Table({
		head: headers.map((h) => chalk.blue.bold(h)),
		style: {
			head: [],
			border: ['gray']
		},
		chars: {
			top: '─',
			'top-mid': '┬',
			'top-left': '┌',
			'top-right': '┐',
			bottom: '─',
			'bottom-mid': '┴',
			'bottom-left': '└',
			'bottom-right': '┘',
			left: '│',
			'left-mid': '├',
			mid: '─',
			'mid-mid': '┼',
			right: '│',
			'right-mid': '┤',
			middle: '│'
		}
	});

	rows.forEach((row) => table.push(row));
	return table.toString();
}
27	apps/cli/tsconfig.json	Normal file
@@ -0,0 +1,27 @@
{
	"compilerOptions": {
		"target": "ES2022",
		"module": "ESNext",
		"lib": ["ES2022"],
		"moduleResolution": "bundler",
		"allowSyntheticDefaultImports": true,
		"esModuleInterop": true,
		"strict": true,
		"skipLibCheck": true,
		"forceConsistentCasingInFileNames": true,
		"declaration": true,
		"declarationMap": true,
		"sourceMap": true,
		"outDir": "./dist",
		"rootDir": "./src",
		"resolveJsonModule": true,
		"allowJs": false,
		"noUnusedLocals": true,
		"noUnusedParameters": true,
		"noImplicitReturns": true,
		"noFallthroughCasesInSwitch": true,
		"types": ["node"]
	},
	"include": ["src/**/*"],
	"exclude": ["node_modules", "dist", "tests"]
}
8	apps/cli/tsup.config.ts	Normal file
@@ -0,0 +1,8 @@
|
||||
import { defineConfig } from 'tsup';
import { cliConfig, mergeConfig } from '@tm/build-config';

export default defineConfig(
	mergeConfig(cliConfig, {
		entry: ['src/index.ts']
	})
);
apps/docs/CHANGELOG.md (new file, 3 lines)
@@ -0,0 +1,3 @@
# docs

## 0.0.1
apps/docs/README.md (new file, 22 lines)
@@ -0,0 +1,22 @@
# Task Master Documentation

Welcome to the Task Master documentation. Use the links below to navigate to the information you need:

## Getting Started

- [Configuration Guide](archive/configuration.md) - Set up environment variables and customize Task Master
- [Tutorial](archive/ctutorial.md) - Step-by-step guide to getting started with Task Master

## Reference

- [Command Reference](archive/ccommand-reference.md) - Complete list of all available commands
- [Task Structure](archive/ctask-structure.md) - Understanding the task format and features

## Examples & Licensing

- [Example Interactions](archive/cexamples.md) - Common Cursor AI interaction examples
- [Licensing Information](archive/clicensing.md) - Detailed information about the license

## Need More Help?

If you can't find what you're looking for in these docs, please check the [main README](../README.md) or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).
apps/docs/archive/Installation.mdx (new file, 114 lines)
@@ -0,0 +1,114 @@
---
title: "Installation(2)"
description: "This guide walks you through setting up Task Master in your development environment."
---

## Initial Setup

<Tip>
MCP (Model Context Protocol) provides the easiest way to get started with Task Master directly in your editor.
</Tip>

<AccordionGroup>
<Accordion title="Option 1: Using MCP (Recommended)" icon="sparkles">
<Steps>
<Step title="Add the MCP config to your editor">
We recommend <Link href="https://cursor.sh">Cursor</Link>, but Task Master works with other editors as well.
```json
{
	"mcpServers": {
		"taskmaster-ai": {
			"command": "npx",
			"args": ["-y", "--package", "task-master-ai", "task-master-mcp"],
			"env": {
				"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
				"MODEL": "claude-3-7-sonnet-20250219",
				"PERPLEXITY_MODEL": "sonar-pro",
				"MAX_TOKENS": 128000,
				"TEMPERATURE": 0.2,
				"DEFAULT_SUBTASKS": 5,
				"DEFAULT_PRIORITY": "medium"
			}
		}
	}
}
```
</Step>
<Step title="Enable the MCP in your editor settings">

</Step>
<Step title="Prompt the AI to initialize Task Master">
> "Can you please initialize taskmaster-ai into my project?"

**The AI will:**

1. Create necessary project structure
2. Set up initial configuration files
3. Guide you through the rest of the process

**Then you:**

4. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
5. **Use natural language commands** to interact with Task Master:

> "Can you parse my PRD at scripts/prd.txt?"
>
> "What's the next task I should work on?"
>
> "Can you help me implement task 3?"
</Step>
</Steps>
</Accordion>
<Accordion title="Option 2: Manual Installation">
If you prefer to use the command line interface directly:

<Steps>
<Step title="Install">
<CodeGroup>

```bash Global
npm install -g task-master-ai
```

```bash Local
npm install task-master-ai
```

</CodeGroup>
</Step>
<Step title="Initialize a new project">
<CodeGroup>

```bash Global
task-master init
```

```bash Local
npx task-master-init
```

</CodeGroup>
</Step>
</Steps>
This will prompt you for project details and set up a new project with the necessary files and structure.
</Accordion>
</AccordionGroup>

## Common Commands

<Tip>
After setting up Task Master, you can use these commands (either via AI prompts or the CLI).
</Tip>

```bash
# Parse a PRD and generate tasks
task-master parse-prd your-prd.txt

# List all tasks
task-master list

# Show the next task to work on
task-master next

# Generate task files
task-master generate
```
apps/docs/archive/ai-client-utils-example.mdx (new file, 263 lines)
@@ -0,0 +1,263 @@
---
title: "AI Client Utilities for MCP Tools"
description: "This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools."
---

## Examples

<AccordionGroup>
<Accordion title="Basic Usage with Direct Functions">
```javascript
// In your direct function implementation:
import {
	getAnthropicClientForMCP,
	getModelConfig,
	handleClaudeError
} from '../utils/ai-client-utils.js';

export async function someAiOperationDirect(args, log, context) {
	try {
		// Initialize Anthropic client with session from context
		const client = getAnthropicClientForMCP(context.session, log);

		// Get model configuration with defaults or session overrides
		const modelConfig = getModelConfig(context.session);

		// Make API call with proper error handling
		try {
			const response = await client.messages.create({
				model: modelConfig.model,
				max_tokens: modelConfig.maxTokens,
				temperature: modelConfig.temperature,
				messages: [{ role: 'user', content: 'Your prompt here' }]
			});

			return {
				success: true,
				data: response
			};
		} catch (apiError) {
			// Use helper to get user-friendly error message
			const friendlyMessage = handleClaudeError(apiError);

			return {
				success: false,
				error: {
					code: 'AI_API_ERROR',
					message: friendlyMessage
				}
			};
		}
	} catch (error) {
		// Handle client initialization errors
		return {
			success: false,
			error: {
				code: 'AI_CLIENT_ERROR',
				message: error.message
			}
		};
	}
}
```
</Accordion>

<Accordion title="Integration with AsyncOperationManager">
```javascript
// In your MCP tool implementation:
import {
	AsyncOperationManager,
	StatusCodes
} from '../../utils/async-operation-manager.js';
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';

export async function someAiOperation(args, context) {
	const { session, mcpLog } = context;
	const log = mcpLog || console;

	try {
		// Create operation description
		const operationDescription = `AI operation: ${args.someParam}`;

		// Start async operation
		const operation = AsyncOperationManager.createOperation(
			operationDescription,
			async (reportProgress) => {
				try {
					// Initial progress report
					reportProgress({
						progress: 0,
						status: 'Starting AI operation...'
					});

					// Call direct function with session and progress reporting
					const result = await someAiOperationDirect(args, log, {
						reportProgress,
						mcpLog: log,
						session
					});

					// Final progress update
					reportProgress({
						progress: 100,
						status: result.success ? 'Operation completed' : 'Operation failed',
						result: result.data,
						error: result.error
					});

					return result;
				} catch (error) {
					// Handle errors in the operation
					reportProgress({
						progress: 100,
						status: 'Operation failed',
						error: {
							message: error.message,
							code: error.code || 'OPERATION_FAILED'
						}
					});
					throw error;
				}
			}
		);

		// Return immediate response with operation ID
		return {
			status: StatusCodes.ACCEPTED,
			body: {
				success: true,
				message: 'Operation started',
				operationId: operation.id
			}
		};
	} catch (error) {
		// Handle errors in the MCP tool
		log.error(`Error in someAiOperation: ${error.message}`);
		return {
			status: StatusCodes.INTERNAL_SERVER_ERROR,
			body: {
				success: false,
				error: {
					code: 'OPERATION_FAILED',
					message: error.message
				}
			}
		};
	}
}
```
</Accordion>

<Accordion title="Using Research Capabilities with Perplexity">
```javascript
// In your direct function:
import {
	getPerplexityClientForMCP,
	getBestAvailableAIModel
} from '../utils/ai-client-utils.js';

export async function researchOperationDirect(args, log, context) {
	try {
		// Get the best AI model for this operation based on needs
		const { type, client } = await getBestAvailableAIModel(
			context.session,
			{ requiresResearch: true },
			log
		);

		// Report which model we're using
		if (context.reportProgress) {
			await context.reportProgress({
				progress: 10,
				status: `Using ${type} model for research...`
			});
		}

		// Make API call based on the model type
		if (type === 'perplexity') {
			// Call Perplexity
			const response = await client.chat.completions.create({
				model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
				messages: [{ role: 'user', content: args.researchQuery }],
				temperature: 0.1
			});

			return {
				success: true,
				data: response.choices[0].message.content
			};
		} else {
			// Call Claude as fallback
			// (Implementation depends on specific needs)
			// ...
		}
	} catch (error) {
		// Handle errors
		return {
			success: false,
			error: {
				code: 'RESEARCH_ERROR',
				message: error.message
			}
		};
	}
}
```
</Accordion>

<Accordion title="Model Configuration Override">
```javascript
// In your direct function:
import { getModelConfig } from '../utils/ai-client-utils.js';

// Using custom defaults for a specific operation
const operationDefaults = {
	model: 'claude-3-haiku-20240307', // Faster, smaller model
	maxTokens: 1000, // Lower token limit
	temperature: 0.2 // Lower temperature for more deterministic output
};

// Get model config with operation-specific defaults
const modelConfig = getModelConfig(context.session, operationDefaults);

// Now use modelConfig in your API calls
const response = await client.messages.create({
	model: modelConfig.model,
	max_tokens: modelConfig.maxTokens,
	temperature: modelConfig.temperature
	// Other parameters...
});
```
</Accordion>
</AccordionGroup>

## Best Practices

<AccordionGroup>
<Accordion title="Error Handling">
- Always use try/catch blocks around both client initialization and API calls
- Use `handleClaudeError` to provide user-friendly error messages
- Return standardized error objects with code and message
</Accordion>

<Accordion title="Progress Reporting">
- Report progress at key points (starting, processing, completing)
- Include meaningful status messages
- Include error details in progress reports when failures occur
</Accordion>

<Accordion title="Session Handling">
- Always pass the session from the context to the AI client getters
- Use `getModelConfig` to respect user settings from session
</Accordion>

<Accordion title="Model Selection">
- Use `getBestAvailableAIModel` when you need to select between different models
- Set `requiresResearch: true` when you need Perplexity capabilities
</Accordion>

<Accordion title="AsyncOperationManager Integration">
- Create descriptive operation names
- Handle all errors within the operation function
- Return standardized results from direct functions
- Return immediate responses with operation IDs
</Accordion>
</AccordionGroup>
apps/docs/archive/ai-development-workflow.mdx (new file, 180 lines)
@@ -0,0 +1,180 @@
---
title: "AI Development Workflow"
description: "Learn how Task Master and Cursor AI work together to streamline your development workflow"
---

<Tip>The Cursor agent is pre-configured (via the rules file) to follow this workflow</Tip>

<AccordionGroup>
<Accordion title="1. Task Discovery and Selection">
Ask the agent to list available tasks:

```
What tasks are available to work on next?
```

The agent will:

- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement
</Accordion>

<Accordion title="2. Task Implementation">
When implementing a task, the agent will:

- Reference the task's details section for implementation specifics
- Consider dependencies on previous tasks
- Follow the project's coding standards
- Create appropriate tests based on the task's testStrategy

You can ask:

```
Let's implement task 3. What does it involve?
```
</Accordion>

<Accordion title="3. Task Verification">
Before marking a task as complete, verify it according to:

- The task's specified testStrategy
- Any automated tests in the codebase
- Manual verification if required
</Accordion>

<Accordion title="4. Task Completion">
When a task is completed, tell the agent:

```
Task 3 is now complete. Please update its status.
```

The agent will execute:

```bash
task-master set-status --id=3 --status=done
```
</Accordion>

<Accordion title="5. Handling Implementation Drift">
If, during implementation, you discover that:

- The current approach differs significantly from what was planned
- Future tasks need to be modified due to current implementation choices
- New dependencies or requirements have emerged

Tell the agent:

```
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
```

The agent will execute:

```bash
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
```

This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
</Accordion>

<Accordion title="6. Breaking Down Complex Tasks">
For complex tasks that need more granularity:

```
Task 5 seems complex. Can you break it down into subtasks?
```

The agent will execute:

```bash
task-master expand --id=5 --num=3
```

You can provide additional context:

```
Please break down task 5 with a focus on security considerations.
```

The agent will execute:

```bash
task-master expand --id=5 --prompt="Focus on security aspects"
```

You can also expand all pending tasks:

```
Please break down all pending tasks into subtasks.
```

The agent will execute:

```bash
task-master expand --all
```

For research-backed subtask generation using Perplexity AI:

```
Please break down task 5 using research-backed generation.
```

The agent will execute:

```bash
task-master expand --id=5 --research
```
</Accordion>
</AccordionGroup>

## Example Cursor AI Interactions

<AccordionGroup>
<Accordion title="Starting a new project">
```
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
</Accordion>
<Accordion title="Working on tasks">
```
What's the next task I should work on? Please consider dependencies and priorities.
```
</Accordion>
<Accordion title="Implementing a specific task">
```
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
```
</Accordion>
<Accordion title="Managing subtasks">
```
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
```
</Accordion>
<Accordion title="Handling changes">
```
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
```
</Accordion>
<Accordion title="Completing work">
```
I've finished implementing the authentication system described in task 2. All tests are passing.
Please mark it as complete and tell me what I should work on next.
```
</Accordion>
<Accordion title="Analyzing complexity">
```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
</Accordion>
<Accordion title="Viewing complexity report">
```
Can you show me the complexity report in a more readable format?
```
</Accordion>
</AccordionGroup>
apps/docs/archive/command-reference.mdx (new file, 208 lines)
@@ -0,0 +1,208 @@
---
title: "Task Master Commands"
description: "A comprehensive reference of all available Task Master commands"
---

<AccordionGroup>
<Accordion title="Parse PRD">
```bash
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>

# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10
```
</Accordion>

<Accordion title="List Tasks">
```bash
# List all tasks
task-master list

# List tasks with a specific status
task-master list --status=<status>

# List tasks with subtasks
task-master list --with-subtasks

# List tasks with a specific status and include subtasks
task-master list --status=<status> --with-subtasks
```
</Accordion>

<Accordion title="Show Next Task">
```bash
# Show the next task to work on based on dependencies and status
task-master next
```
</Accordion>

<Accordion title="Show Specific Task">
```bash
# Show details of a specific task
task-master show <id>
# or
task-master show --id=<id>

# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2
```
</Accordion>

<Accordion title="Update Tasks">
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
```
</Accordion>

<Accordion title="Update a Specific Task">
```bash
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"

# Use research-backed updates with Perplexity AI
task-master update-task --id=<id> --prompt="<prompt>" --research
```
</Accordion>

<Accordion title="Update a Subtask">
```bash
# Append additional information to a specific subtask
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"

# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"

# Use research-backed updates with Perplexity AI
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```

Unlike the `update-task` command, which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
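The append-with-timestamp behavior can be pictured with a small sketch. This is illustrative only, assuming nothing about Task Master's actual internals; `appendToDetails` and the `<info>` marker are hypothetical names chosen for the example:

```javascript
// Hypothetical sketch of append-style updates: new information is stamped
// and appended to the existing details instead of replacing them.
function appendToDetails(existingDetails, newInfo, timestamp = new Date().toISOString()) {
	return `${existingDetails}\n\n<info added on ${timestamp}>\n${newInfo}\n</info>`;
}

console.log(
	appendToDetails('Original details.', 'Add rate limiting of 100 requests per minute')
);
```

Because the original text is always preserved at the top, repeated updates build an audit trail rather than overwriting earlier decisions.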
</Accordion>

<Accordion title="Generate Task Files">
```bash
# Generate individual task files from tasks.json
task-master generate
```
</Accordion>

<Accordion title="Set Task Status">
```bash
# Set status of a single task
task-master set-status --id=<id> --status=<status>

# Set status for multiple tasks
task-master set-status --id=1,2,3 --status=<status>

# Set status for subtasks
task-master set-status --id=1.1,1.2 --status=<status>
```

When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
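The cascade can be sketched in a few lines. This is an illustrative model, not Task Master's actual implementation; `setStatus` is a hypothetical name:

```javascript
// Illustrative sketch: marking a task "done" cascades to its subtasks.
function setStatus(task, status) {
	task.status = status;
	if (status === 'done' && Array.isArray(task.subtasks)) {
		for (const sub of task.subtasks) sub.status = 'done';
	}
	return task;
}

const task = {
	id: 3,
	status: 'pending',
	subtasks: [
		{ id: '3.1', status: 'in-progress' },
		{ id: '3.2', status: 'pending' }
	]
};
setStatus(task, 'done');
console.log(task.subtasks.map((s) => s.status).join(',')); // prints "done,done"
```

Other statuses (e.g., "pending" or "deferred") only affect the task itself in this sketch, matching the documented behavior that the cascade is specific to "done".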
</Accordion>

<Accordion title="Expand Tasks">
```bash
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>

# Expand with additional context
task-master expand --id=<id> --prompt="<context>"

# Expand all pending tasks
task-master expand --all

# Force regeneration of subtasks for tasks that already have them
task-master expand --all --force

# Research-backed subtask generation for a specific task
task-master expand --id=<id> --research

# Research-backed generation for all tasks
task-master expand --all --research
```
</Accordion>

<Accordion title="Clear Subtasks">
```bash
# Clear subtasks from a specific task
task-master clear-subtasks --id=<id>

# Clear subtasks from multiple tasks
task-master clear-subtasks --id=1,2,3

# Clear subtasks from all tasks
task-master clear-subtasks --all
```
</Accordion>

<Accordion title="Analyze Task Complexity">
```bash
# Analyze complexity of all tasks
task-master analyze-complexity

# Save report to a custom location
task-master analyze-complexity --output=my-report.json

# Use a specific LLM model
task-master analyze-complexity --model=claude-3-opus-20240229

# Set a custom complexity threshold (1-10)
task-master analyze-complexity --threshold=6

# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json

# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>

<Accordion title="View Complexity Report">
```bash
# Display the task complexity analysis report
task-master complexity-report

# View a report at a custom location
task-master complexity-report --file=my-report.json
```
</Accordion>

<Accordion title="Managing Task Dependencies">
```bash
# Add a dependency to a task
task-master add-dependency --id=<id> --depends-on=<id>

# Remove a dependency from a task
task-master remove-dependency --id=<id> --depends-on=<id>

# Validate dependencies without fixing them
task-master validate-dependencies

# Find and fix invalid dependencies automatically
task-master fix-dependencies
```
</Accordion>

<Accordion title="Add a New Task">
```bash
# Add a new task using AI
task-master add-task --prompt="Description of the new task"

# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3

# Add a task with priority
task-master add-task --prompt="Description" --priority=high
```
</Accordion>

<Accordion title="Initialize a Project">
```bash
# Initialize a new project with Task Master structure
task-master init
```
</Accordion>
</AccordionGroup>
apps/docs/archive/configuration.mdx (new file, 80 lines)
@@ -0,0 +1,80 @@
---
title: "Configuration"
description: "Configure Task Master through environment variables in a .env file"
---

## Required Configuration

<Note>
Task Master requires an Anthropic API key to function. Add this to your `.env` file:

```bash
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
```

You can obtain an API key from the [Anthropic Console](https://console.anthropic.com/).
</Note>

## Optional Configuration

| Variable | Default Value | Description | Example |
| --- | --- | --- | --- |
| `MODEL` | `"claude-3-7-sonnet-20250219"` | Claude model to use | `MODEL=claude-3-opus-20240229` |
| `MAX_TOKENS` | `"4000"` | Maximum tokens for responses | `MAX_TOKENS=8000` |
| `TEMPERATURE` | `"0.7"` | Temperature for model responses | `TEMPERATURE=0.5` |
| `DEBUG` | `"false"` | Enable debug logging | `DEBUG=true` |
| `LOG_LEVEL` | `"info"` | Console output level | `LOG_LEVEL=debug` |
| `DEFAULT_SUBTASKS` | `"3"` | Default subtask count | `DEFAULT_SUBTASKS=5` |
| `DEFAULT_PRIORITY` | `"medium"` | Default priority | `DEFAULT_PRIORITY=high` |
| `PROJECT_NAME` | `"MCP SaaS MVP"` | Project name in metadata | `PROJECT_NAME=My Awesome Project` |
| `PROJECT_VERSION` | `"1.0.0"` | Version in metadata | `PROJECT_VERSION=2.1.0` |
| `PERPLEXITY_API_KEY` | - | For research-backed features | `PERPLEXITY_API_KEY=pplx-...` |
| `PERPLEXITY_MODEL` | `"sonar-medium-online"` | Perplexity model | `PERPLEXITY_MODEL=sonar-large-online` |
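Every optional variable falls back to its documented default when unset. As an illustrative sketch only (not Task Master's actual code), resolving these settings in Node.js might look like this:

```javascript
// Illustrative sketch: environment variables with the documented defaults.
// Numeric settings are parsed from strings; everything else is used as-is.
const config = {
	model: process.env.MODEL ?? 'claude-3-7-sonnet-20250219',
	maxTokens: Number(process.env.MAX_TOKENS ?? '4000'),
	temperature: Number(process.env.TEMPERATURE ?? '0.7'),
	defaultSubtasks: Number(process.env.DEFAULT_SUBTASKS ?? '3'),
	defaultPriority: process.env.DEFAULT_PRIORITY ?? 'medium'
};

console.log(config);
```

Note that `.env` values are always strings, so numeric options like `MAX_TOKENS` must be converted before use.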

## Example .env File

```
# Required
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key

# Optional - Claude Configuration
MODEL=claude-3-7-sonnet-20250219
MAX_TOKENS=4000
TEMPERATURE=0.7

# Optional - Perplexity API for Research
PERPLEXITY_API_KEY=pplx-your-api-key
PERPLEXITY_MODEL=sonar-medium-online

# Optional - Project Info
PROJECT_NAME=My Project
PROJECT_VERSION=1.0.0

# Optional - Application Configuration
DEFAULT_SUBTASKS=3
DEFAULT_PRIORITY=medium
DEBUG=false
LOG_LEVEL=info
```

## Troubleshooting

### If `task-master init` doesn't respond

Try running it with Node directly:

```bash
node node_modules/claude-task-master/scripts/init.js
```

Or clone the repository and run:

```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```

<Note>
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide] page.
</Note>
apps/docs/archive/cursor-setup.mdx (new file, 95 lines)
@@ -0,0 +1,95 @@
---
title: "Cursor AI Integration"
description: "Learn how to set up and use Task Master with Cursor AI"
---

## Setting up Cursor AI Integration

<Check>
Task Master is designed to work seamlessly with [Cursor AI](https://www.cursor.so/), providing a structured workflow for AI-driven development.
</Check>

<AccordionGroup>
<Accordion title="Using Cursor with MCP (Recommended)" icon="sparkles">
If you've already set up Task Master with MCP in Cursor, the integration is automatic. You can simply use natural language to interact with Task Master:

```
What tasks are available to work on next?
Can you analyze the complexity of our tasks?
I'd like to implement task 4. What does it involve?
```
</Accordion>
<Accordion title="Manual Cursor Setup">
If you're not using MCP, you can still set up Cursor integration:

<Steps>
<Step title="After initializing your project, open it in Cursor">
The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system.
</Step>
<Step title="Place your PRD document in the scripts/ directory (e.g., scripts/prd.txt)">

</Step>
<Step title="Open Cursor's AI chat and switch to Agent mode">

</Step>
</Steps>
</Accordion>
<Accordion title="Alternative MCP Setup in Cursor">
<Steps>
<Step title="Go to Cursor settings">

</Step>
<Step title="Navigate to the MCP section">

</Step>
<Step title="Click on 'Add New MCP Server'">

</Step>
<Step title="Configure with the following details:">
- Name: "Task Master"
- Type: "Command"
- Command: "npx -y --package task-master-ai task-master-mcp"
</Step>
<Step title="Save Settings">

</Step>
</Steps>
Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience.
</Accordion>
</AccordionGroup>

## Initial Task Generation

In Cursor's AI chat, instruct the agent to generate tasks from your PRD:

```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
```

The agent will execute:

```bash
task-master parse-prd scripts/prd.txt
```

This will:

- Parse your PRD document
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies

The agent understands this process thanks to the Cursor rules.

### Generate Individual Task Files

Next, ask the agent to generate individual task files:

```
Please generate individual task files from tasks.json
```

The agent will execute:

```bash
task-master generate
```

This creates individual task files in the `tasks/` directory (e.g., `task_001.txt`, `task_002.txt`), making it easier to reference specific tasks.
apps/docs/archive/examples.mdx (new file, 56 lines)
@@ -0,0 +1,56 @@
---
title: "Example Cursor AI Interactions"
description: "Below are some common interactions with Cursor AI when using Task Master"
---

<AccordionGroup>
<Accordion title="Starting a new project">
```
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
</Accordion>

<Accordion title="Working on tasks">
```
What's the next task I should work on? Please consider dependencies and priorities.
```
</Accordion>

<Accordion title="Implementing a specific task">
```
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
```
</Accordion>

<Accordion title="Managing subtasks">
```
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
```
</Accordion>

<Accordion title="Handling changes">
```
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
```
</Accordion>

<Accordion title="Completing work">
```
I've finished implementing the authentication system described in task 2. All tests are passing.
Please mark it as complete and tell me what I should work on next.
```
</Accordion>

<Accordion title="Analyzing complexity">
```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
</Accordion>

<Accordion title="Viewing complexity report">
```
Can you show me the complexity report in a more readable format?
```
</Accordion>
</AccordionGroup>
apps/docs/best-practices/advanced-tasks.mdx (new file, 210 lines)
@@ -0,0 +1,210 @@
---
title: Advanced Tasks
sidebarTitle: "Advanced Tasks"
---

## AI-Driven Development Workflow

The Cursor agent is pre-configured (via the rules file) to follow this workflow:

### 1. Task Discovery and Selection

Ask the agent to list available tasks:

```
What tasks are available to work on next?
```

```
Can you show me tasks 1, 3, and 5 to understand their current status?
```

The agent will:

- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Run `task-master show 1,3,5` to display multiple tasks with interactive options
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement

### 2. Task Implementation

When implementing a task, the agent will:

- Reference the task's details section for implementation specifics
- Consider dependencies on previous tasks
- Follow the project's coding standards
- Create appropriate tests based on the task's testStrategy

You can ask:

```
Let's implement task 3. What does it involve?
```

### 2.1. Viewing Multiple Tasks

For efficient context gathering and batch operations:

```
Show me tasks 5, 7, and 9 so I can plan my implementation approach.
```

The agent will:

- Run `task-master show 5,7,9` to display a compact summary table
- Show task status, priority, and progress indicators
- Provide an interactive action menu with batch operations
- Allow you to perform group actions like marking multiple tasks as in-progress

### 3. Task Verification

Before marking a task as complete, verify it according to:

- The task's specified testStrategy
- Any automated tests in the codebase
- Manual verification if required

### 4. Task Completion

When a task is completed, tell the agent:

```
Task 3 is now complete. Please update its status.
```

The agent will execute:

```bash
task-master set-status --id=3 --status=done
```

### 5. Handling Implementation Drift

If, during implementation, you discover that:

- The current approach differs significantly from what was planned
- Future tasks need to be modified due to current implementation choices
- New dependencies or requirements have emerged

Tell the agent:

```
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks (from ID 4) to reflect this change?
```

The agent will execute:

```bash
task-master update --from=4 --prompt="Now we are using MongoDB instead of PostgreSQL."

# OR, if research is needed to find best practices for MongoDB:
task-master update --from=4 --prompt="Update to use MongoDB, researching best practices" --research
```

This will rewrite or re-scope subsequent tasks in `tasks.json` while preserving completed work.

### 6. Reorganizing Tasks

If you need to reorganize your task structure:

```
I think subtask 5.2 would fit better as part of task 7 instead. Can you move it there?
```

The agent will execute:

```bash
task-master move --from=5.2 --to=7.3
```

You can reorganize tasks in various ways:

- Moving a standalone task to become a subtask: `--from=5 --to=7`
- Moving a subtask to become a standalone task: `--from=5.2 --to=7`
- Moving a subtask to a different parent: `--from=5.2 --to=7.3`
- Reordering subtasks within the same parent: `--from=5.2 --to=5.4`
- Moving a task to a new ID position: `--from=5 --to=25` (even if task 25 doesn't exist yet)
- Moving multiple tasks at once: `--from=10,11,12 --to=16,17,18` (both lists must contain the same number of IDs; Taskmaster matches them position by position)

When moving tasks to new IDs:

- The system automatically creates placeholder tasks for non-existent destination IDs
- This prevents accidental data loss during reorganization
- Any tasks that depend on moved tasks will have their dependencies updated
- When moving a parent task, all its subtasks are automatically moved with it and renumbered

This is particularly useful as your project understanding evolves and you need to refine your task structure.

### 7. Resolving Merge Conflicts with Tasks

When working with a team, you might encounter merge conflicts in your `tasks.json` file if multiple team members create tasks on different branches. The move command makes resolving these conflicts straightforward:

```
I just merged the main branch and there's a conflict with tasks.json. My teammates created tasks 10-15 while I created tasks 10-12 on my branch. Can you help me resolve this?
```

The agent will help you:

1. Keep your teammates' tasks (10-15)
2. Move your tasks to new positions to avoid conflicts:

```bash
# Move your tasks to new positions (e.g., 16-18)
task-master move --from=10 --to=16
task-master move --from=11 --to=17
task-master move --from=12 --to=18
```

This approach preserves everyone's work while maintaining a clean task structure, making it much easier to handle task conflicts than manually merging JSON files.

### 8. Breaking Down Complex Tasks

For complex tasks that need more granularity:

```
Task 5 seems complex. Can you break it down into subtasks?
```

The agent will execute:

```bash
task-master expand --id=5 --num=3
```

You can provide additional context:

```
Please break down task 5 with a focus on security considerations.
```

The agent will execute:

```bash
task-master expand --id=5 --prompt="Focus on security aspects"
```

You can also expand all pending tasks:

```
Please break down all pending tasks into subtasks.
```

The agent will execute:

```bash
task-master expand --all
```

For research-backed subtask generation using the configured research model:

```
Please break down task 5 using research-backed generation.
```

The agent will execute:

```bash
task-master expand --id=5 --research
```
apps/docs/best-practices/configuration-advanced.mdx (new file, 317 lines)
@@ -0,0 +1,317 @@
---
title: Advanced Configuration
sidebarTitle: "Advanced Configuration"
---

Taskmaster uses two primary methods for configuration:

1. **`.taskmaster/config.json` File (Recommended - New Structure)**

   - This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
   - **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
   - **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
   - **Management:** Use the `task-master models --setup` command (or the `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
   - **Example Structure:**

   ```json
   {
     "models": {
       "main": {
         "provider": "anthropic",
         "modelId": "claude-3-7-sonnet-20250219",
         "maxTokens": 64000,
         "temperature": 0.2,
         "baseURL": "https://api.anthropic.com/v1"
       },
       "research": {
         "provider": "perplexity",
         "modelId": "sonar-pro",
         "maxTokens": 8700,
         "temperature": 0.1,
         "baseURL": "https://api.perplexity.ai/v1"
       },
       "fallback": {
         "provider": "anthropic",
         "modelId": "claude-3-5-sonnet",
         "maxTokens": 64000,
         "temperature": 0.2
       }
     },
     "global": {
       "logLevel": "info",
       "debug": false,
       "defaultSubtasks": 5,
       "defaultPriority": "medium",
       "defaultTag": "master",
       "projectName": "Your Project Name",
       "ollamaBaseURL": "http://localhost:11434/api",
       "azureBaseURL": "https://your-endpoint.openai.azure.com/openai/deployments",
       "vertexProjectId": "your-gcp-project-id",
       "vertexLocation": "us-central1"
     }
   }
   ```

2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**

   - For projects that haven't migrated to the new structure yet.
   - **Location:** Project root directory.
   - **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
   - **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
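Before migrating, it can help to confirm that the legacy file is actually present. A minimal sketch (a hypothetical helper, not part of Task Master; it only inspects the current directory):

```shell
#!/bin/sh
# Check whether a legacy config exists before running `task-master migrate`.
if [ -f .taskmasterconfig ]; then
  msg="Legacy .taskmasterconfig found; run: task-master migrate"
else
  msg="No legacy .taskmasterconfig here; nothing to migrate"
fi
echo "$msg"
```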

## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)

- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
  - For CLI usage: Create a `.env` file in your project root.
  - For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
  - `ANTHROPIC_API_KEY`: Your Anthropic API key.
  - `PERPLEXITY_API_KEY`: Your Perplexity API key.
  - `OPENAI_API_KEY`: Your OpenAI API key.
  - `GOOGLE_API_KEY`: Your Google API key (also used for the Vertex AI provider).
  - `MISTRAL_API_KEY`: Your Mistral API key.
  - `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
  - `OPENROUTER_API_KEY`: Your OpenRouter API key.
  - `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
  - **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
  - **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
  - `AZURE_OPENAI_ENDPOINT`: Required if using an Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
  - `OLLAMA_BASE_URL`: Override the default Ollama API URL (default: `http://localhost:11434/api`).
  - `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
  - `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
  - `GOOGLE_APPLICATION_CREDENTIALS`: Path to a service account credentials JSON file for Google Cloud auth (an alternative to an API key for Vertex AI).

**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not in environment variables.

## Tagged Task Lists Configuration (v0.17+)

Taskmaster includes a tagged task lists system for multi-context task management.

### Global Tag Settings

```json
"global": {
  "defaultTag": "master"
}
```

- **`defaultTag`** (string): Default tag context for new operations (default: "master")

### Git Integration

Task Master provides manual git integration through the `--from-branch` option:

- **Manual Tag Creation**: Use `task-master add-tag --from-branch` to create a tag based on your current git branch name
- **User Control**: No automatic tag switching - you control when and how tags are created
- **Flexible Workflow**: Supports any git workflow without imposing rigid branch-tag mappings
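For example, you could preview which tag `--from-branch` would create by reading the current branch name first. A hedged sketch (assumes `git` is available; it falls back to "master" outside a repository, and the actual tag-creation command is left commented out):

```shell
#!/bin/sh
# Derive the current git branch name; default to "master" when not in a repo.
branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo master)"
echo "add-tag --from-branch would create a tag from branch: $branch"
# task-master add-tag --from-branch
```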

## State Management File

Taskmaster uses `.taskmaster/state.json` to track tagged-system runtime information:

```json
{
  "currentTag": "master",
  "lastSwitched": "2025-06-11T20:26:12.598Z",
  "migrationNoticeShown": true
}
```

- **`currentTag`**: Currently active tag context
- **`lastSwitched`**: Timestamp of the last tag switch
- **`migrationNoticeShown`**: Whether the migration notice has been displayed

This file is automatically created during tagged-system migration and should not be manually edited.

## Example `.env` File (for API Keys)

```
# Required API keys for providers configured in .taskmaster/config.json
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# etc.

# Optional Endpoint Overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1
#
# Azure OpenAI Configuration
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
# or: AZURE_OPENAI_ENDPOINT=https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api

# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
```

## Troubleshooting

### Configuration Errors

- If Task Master reports errors about missing configuration or cannot find the config file, run `task-master models --setup` in your project root to create or repair it.
- For new projects, the config will be created at `.taskmaster/config.json`. For legacy projects, you may want to use `task-master migrate` to move to the new structure.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.

### If `task-master init` doesn't respond

Try running it with Node directly:

```bash
node node_modules/claude-task-master/scripts/init.js
```

Or clone the repository and run:

```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```

## Provider-Specific Configuration

### Google Vertex AI Configuration

Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:

1. **Prerequisites**:
   - A Google Cloud account with the Vertex AI API enabled
   - Either a Google API key with Vertex AI permissions OR a service account with appropriate roles
   - A Google Cloud project ID
2. **Authentication Options**:
   - **API Key**: Set the `GOOGLE_API_KEY` environment variable
   - **Service Account**: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account JSON file
3. **Required Configuration**:
   - Set `VERTEX_PROJECT_ID` to your Google Cloud project ID
   - Set `VERTEX_LOCATION` to your preferred Google Cloud region (default: us-central1)
4. **Example Setup**:

   ```bash
   # In .env file
   GOOGLE_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXX
   VERTEX_PROJECT_ID=my-gcp-project-123
   VERTEX_LOCATION=us-central1
   ```

   Or using a service account:

   ```bash
   # In .env file
   GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
   VERTEX_PROJECT_ID=my-gcp-project-123
   VERTEX_LOCATION=us-central1
   ```

5. **In .taskmaster/config.json**:

   ```json
   "global": {
     "vertexProjectId": "my-gcp-project-123",
     "vertexLocation": "us-central1"
   }
   ```

### Azure OpenAI Configuration

Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure cloud platform and requires specific configuration:

1. **Prerequisites**:
   - An Azure account with an active subscription
   - An Azure OpenAI service resource created in the Azure portal
   - An Azure OpenAI API key and endpoint URL
   - Deployed models (e.g., gpt-4o, gpt-4o-mini, gpt-4.1, etc.) in your Azure OpenAI resource

2. **Authentication**:
   - Set the `AZURE_OPENAI_API_KEY` environment variable with your Azure OpenAI API key
   - Configure the endpoint URL using one of the methods below

3. **Configuration Options**:

   **Option 1: Using a Global Azure Base URL (affects all Azure models)**

   ```json
   // In .taskmaster/config.json
   {
     "models": {
       "main": {
         "provider": "azure",
         "modelId": "gpt-4o",
         "maxTokens": 16000,
         "temperature": 0.7
       },
       "fallback": {
         "provider": "azure",
         "modelId": "gpt-4o-mini",
         "maxTokens": 10000,
         "temperature": 0.7
       }
     },
     "global": {
       "azureBaseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
     }
   }
   ```

   **Option 2: Using Per-Model Base URLs (recommended for flexibility)**

   ```json
   // In .taskmaster/config.json
   {
     "models": {
       "main": {
         "provider": "azure",
         "modelId": "gpt-4o",
         "maxTokens": 16000,
         "temperature": 0.7,
         "baseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
       },
       "research": {
         "provider": "perplexity",
         "modelId": "sonar-pro",
         "maxTokens": 8700,
         "temperature": 0.1
       },
       "fallback": {
         "provider": "azure",
         "modelId": "gpt-4o-mini",
         "maxTokens": 10000,
         "temperature": 0.7,
         "baseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
       }
     }
   }
   ```

4. **Environment Variables**:

   ```bash
   # In .env file
   AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here

   # Optional: Override endpoint for all Azure models
   AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/openai/deployments
   ```

5. **Important Notes**:
   - **Model Deployment Names**: The `modelId` in your configuration should match the **deployment name** you created in Azure OpenAI Studio, not the underlying model name
   - **Base URL Priority**: Per-model `baseURL` settings override the global `azureBaseURL` setting
   - **Endpoint Format**: When using a per-model `baseURL`, use the full path including `/openai/deployments`

6. **Troubleshooting**:

   **"Resource not found" errors:**
   - Ensure your `baseURL` includes the full path: `https://your-resource-name.openai.azure.com/openai/deployments`
   - Verify that your deployment name in `modelId` exactly matches what's configured in Azure OpenAI Studio
   - Check that your Azure OpenAI resource is in the correct region and properly deployed

   **Authentication errors:**
   - Verify your `AZURE_OPENAI_API_KEY` is correct and has not expired
   - Ensure your Azure OpenAI resource has the necessary permissions
   - Check that your subscription has not been suspended or reached quota limits

   **Model availability errors:**
   - Confirm the model is deployed in your Azure OpenAI resource
   - Verify the deployment name matches your configuration exactly (case-sensitive)
   - Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
   - Make sure you're not being rate limited: keep `maxTokens` within an appropriate Tokens per Minute (TPM) rate limit for your deployment
apps/docs/best-practices/index.mdx (new file, 8 lines)
@@ -0,0 +1,8 @@
---
title: Intro to Advanced Usage
sidebarTitle: "Advanced Usage"
---

# Best Practices

Explore advanced tips, recommended workflows, and best practices for getting the most out of Task Master.
apps/docs/capabilities/cli-root-commands.mdx (new file, 237 lines)
@@ -0,0 +1,237 @@
---
title: CLI Commands
sidebarTitle: "CLI Commands"
---

<AccordionGroup>
<Accordion title="Parse PRD">
```bash
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>

# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10
```
</Accordion>

<Accordion title="List Tasks">
```bash
# List all tasks
task-master list

# List tasks with a specific status
task-master list --status=<status>

# List tasks with subtasks
task-master list --with-subtasks

# List tasks with a specific status and include subtasks
task-master list --status=<status> --with-subtasks
```
</Accordion>

<Accordion title="Show Next Task">
```bash
# Show the next task to work on based on dependencies and status
task-master next
```
</Accordion>

<Accordion title="Show Specific Task">
```bash
# Show details of a specific task
task-master show <id>
# or
task-master show --id=<id>

# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2
```
</Accordion>

<Accordion title="Update Tasks">
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
```
</Accordion>

<Accordion title="Update a Specific Task">
```bash
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"

# Use research-backed updates with Perplexity AI
task-master update-task --id=<id> --prompt="<prompt>" --research
```
</Accordion>

<Accordion title="Update a Subtask">
```bash
# Append additional information to a specific subtask
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"

# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"

# Use research-backed updates with Perplexity AI
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```

Unlike the `update-task` command, which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
</Accordion>

<Accordion title="Generate Task Files">
```bash
# Generate individual task files from tasks.json
task-master generate
```
</Accordion>

<Accordion title="Set Task Status">
```bash
# Set status of a single task
task-master set-status --id=<id> --status=<status>

# Set status for multiple tasks
task-master set-status --id=1,2,3 --status=<status>

# Set status for subtasks
task-master set-status --id=1.1,1.2 --status=<status>
```

When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
</Accordion>

<Accordion title="Expand Tasks">
```bash
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>

# Expand with additional context
task-master expand --id=<id> --prompt="<context>"

# Expand all pending tasks
task-master expand --all

# Force regeneration of subtasks for tasks that already have them
task-master expand --all --force

# Research-backed subtask generation for a specific task
task-master expand --id=<id> --research

# Research-backed generation for all tasks
task-master expand --all --research
```
</Accordion>

<Accordion title="Clear Subtasks">
```bash
# Clear subtasks from a specific task
task-master clear-subtasks --id=<id>

# Clear subtasks from multiple tasks
task-master clear-subtasks --id=1,2,3

# Clear subtasks from all tasks
task-master clear-subtasks --all
```
</Accordion>

<Accordion title="Analyze Task Complexity">
```bash
# Analyze complexity of all tasks
task-master analyze-complexity

# Save report to a custom location
task-master analyze-complexity --output=my-report.json

# Use a specific LLM model
task-master analyze-complexity --model=claude-3-opus-20240229

# Set a custom complexity threshold (1-10)
task-master analyze-complexity --threshold=6

# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json

# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>

<Accordion title="View Complexity Report">
```bash
# Display the task complexity analysis report
task-master complexity-report

# View a report at a custom location
task-master complexity-report --file=my-report.json
```
</Accordion>

<Accordion title="Managing Task Dependencies">
```bash
# Add a dependency to a task
task-master add-dependency --id=<id> --depends-on=<id>

# Remove a dependency from a task
task-master remove-dependency --id=<id> --depends-on=<id>

# Validate dependencies without fixing them
task-master validate-dependencies

# Find and fix invalid dependencies automatically
task-master fix-dependencies
```
</Accordion>

<Accordion title="Add a New Task">
```bash
# Add a new task using AI
task-master add-task --prompt="Description of the new task"

# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3

# Add a task with priority
task-master add-task --prompt="Description" --priority=high
```
</Accordion>

<Accordion title="Workflow Management">
```bash
# Start workflow execution for a task
task-master workflow start <task-id>
# or use the alias
task-master workflow run <task-id>

# List all active workflows
task-master workflow list

# Check status of a specific workflow
task-master workflow status <workflow-id>
# or use the alias
task-master workflow info <workflow-id>

# Stop a running workflow
task-master workflow stop <workflow-id>
# or use the alias
task-master workflow kill <workflow-id>
```

The workflow system executes tasks in isolated git worktrees with dedicated Claude Code processes, providing:

- **Isolated Execution**: Each task runs in its own git worktree
- **Process Management**: Spawns dedicated Claude Code processes
- **Real-time Monitoring**: Track progress and output
- **Parallel Execution**: Run multiple tasks concurrently
</Accordion>

<Accordion title="Initialize a Project">
```bash
# Initialize a new project with Task Master structure
task-master init
```
</Accordion>
</AccordionGroup>
241 apps/docs/capabilities/index.mdx Normal file
@@ -0,0 +1,241 @@
---
title: Technical Capabilities
sidebarTitle: "Technical Capabilities"
---

# Capabilities (Technical)

Discover the technical capabilities of Task Master, including supported models, integrations, and more.

# CLI Interface Synopsis

This document outlines the command-line interface (CLI) for the Task Master application, as defined in `bin/task-master.js` and `scripts/modules/commands.js`. It is intended to help documentation authors understand how users interact with the application from the command line.

## Entry Point

The main entry point for the CLI is the `task-master` command, an executable script that spawns the main application logic in `scripts/dev.js`.
## Global Options

The following options are available for all commands:

- `-h, --help`: Display help information.
- `--version`: Display the application's version.

## Commands

The CLI is organized into a series of commands, each with its own set of options. The following is a summary of the available commands, grouped by functionality.

### 1. Task and Subtask Management

- **`add`**: Creates a new task using an AI-powered prompt.
  - `--prompt <prompt>`: The prompt to use for generating the task.
  - `--dependencies <dependencies>`: A comma-separated list of task IDs that this task depends on.
  - `--priority <priority>`: The priority of the task (e.g., `high`, `medium`, `low`).
- **`add-subtask`**: Adds a subtask to a parent task.
  - `--parent-id <parentId>`: The ID of the parent task.
  - `--task-id <taskId>`: The ID of an existing task to convert to a subtask.
  - `--title <title>`: The title of the new subtask.
- **`remove`**: Removes one or more tasks or subtasks.
  - `--ids <ids>`: A comma-separated list of task or subtask IDs to remove.
- **`remove-subtask`**: Removes a subtask from its parent.
  - `--id <subtaskId>`: The ID of the subtask to remove (in the format `parentId.subtaskId`).
  - `--convert-to-task`: Converts the subtask to a standalone task.
- **`update`**: Updates multiple tasks starting from a specific ID.
  - `--from <fromId>`: The ID of the task to start updating from.
  - `--prompt <prompt>`: The new context to apply to the tasks.
- **`update-task`**: Updates a single task.
  - `--id <taskId>`: The ID of the task to update.
  - `--prompt <prompt>`: The new context to apply to the task.
- **`update-subtask`**: Appends information to a subtask.
  - `--id <subtaskId>`: The ID of the subtask to update (in the format `parentId.subtaskId`).
  - `--prompt <prompt>`: The information to append to the subtask.
- **`move`**: Moves a task or subtask.
  - `--from <sourceId>`: The ID of the task or subtask to move.
  - `--to <destinationId>`: The destination ID.
- **`clear-subtasks`**: Clears all subtasks from one or more tasks.
  - `--ids <ids>`: A comma-separated list of task IDs.

### 2. Task Information and Status

- **`list`**: Lists all tasks.
  - `--status <status>`: Filters tasks by status.
  - `--with-subtasks`: Includes subtasks in the list.
- **`show`**: Shows the details of a specific task.
  - `--id <taskId>`: The ID of the task to show.
- **`next`**: Shows the next task to work on.
- **`set-status`**: Sets the status of a task or subtask.
  - `--id <id>`: The ID of the task or subtask.
  - `--status <status>`: The new status.

### 3. Task Analysis and Expansion

- **`parse-prd`**: Parses a PRD to generate tasks.
  - `--file <file>`: The path to the PRD file.
  - `--num-tasks <numTasks>`: The number of tasks to generate.
- **`expand`**: Expands a task into subtasks.
  - `--id <taskId>`: The ID of the task to expand.
  - `--num-subtasks <numSubtasks>`: The number of subtasks to generate.
- **`expand-all`**: Expands all eligible tasks.
  - `--num-subtasks <numSubtasks>`: The number of subtasks to generate for each task.
- **`analyze-complexity`**: Analyzes task complexity.
  - `--file <file>`: The path to the tasks file.
- **`complexity-report`**: Displays the complexity analysis report.

### 4. Project and Configuration

- **`init`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`migrate`**: Migrates a project to the new directory structure.
- **`research`**: Performs AI-powered research.
  - `--query <query>`: The research query.

This synopsis provides a comprehensive overview of the CLI commands and their options for use in user-facing documentation.

# Core Implementation Synopsis

This document provides a high-level overview of the core implementation of the Task Master application, focusing on the functionality exposed through `scripts/modules/task-manager.js`. It serves as a guide to the application's capabilities when writing user-facing documentation.

## Core Concepts

The application revolves around the management of tasks and subtasks, which are stored in a `tasks.json` file. The core logic provides functionality to create, read, update, and delete tasks and subtasks, as well as manage their dependencies and statuses.

### Task Structure

A task is a JSON object with the following key properties:

- `id`: A unique number identifying the task.
- `title`: A string representing the task's title.
- `description`: A string providing a brief description of the task.
- `details`: A string containing detailed information about the task.
- `testStrategy`: A string describing how to test the task.
- `status`: A string representing the task's current status (e.g., `pending`, `in-progress`, `done`).
- `dependencies`: An array of task IDs that this task depends on.
- `priority`: A string representing the task's priority (e.g., `high`, `medium`, `low`).
- `subtasks`: An array of subtask objects.

A subtask has a similar structure to a task but is nested within a parent task.
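To make the shape concrete, a single entry in `tasks.json` might look like this (field values are invented for illustration):

```json
{
	"id": 4,
	"title": "Add OAuth login",
	"description": "Support GitHub OAuth sign-in.",
	"details": "Use the GitHub client ID/secret, handle the callback route...",
	"testStrategy": "Log in with a test account and verify the session persists.",
	"status": "pending",
	"dependencies": [1, 2],
	"priority": "high",
	"subtasks": [
		{ "id": 1, "title": "Configure OAuth app", "status": "pending" }
	]
}
```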
## Feature Categories

The core functionalities can be categorized as follows:

### 1. Task and Subtask Management

These functions are the bread and butter of the application, allowing for the creation, modification, and deletion of tasks and subtasks.

- **`addTask(prompt, dependencies, priority)`**: Creates a new task using an AI-powered prompt to generate the title, description, details, and test strategy. It can also be used to create a task manually by providing the task data directly.
- **`addSubtask(parentId, existingTaskId, newSubtaskData)`**: Adds a subtask to a parent task. It can either convert an existing task into a subtask or create a new subtask from scratch.
- **`removeTask(taskIds)`**: Removes one or more tasks or subtasks.
- **`removeSubtask(subtaskId, convertToTask)`**: Removes a subtask from its parent. It can optionally convert the subtask into a standalone task.
- **`updateTaskById(taskId, prompt)`**: Updates a task's information based on a prompt.
- **`updateSubtaskById(subtaskId, prompt)`**: Appends additional information to a subtask's details.
- **`updateTasks(fromId, prompt)`**: Updates multiple tasks starting from a specific ID based on a new context.
- **`moveTask(sourceId, destinationId)`**: Moves a task or subtask to a new position.
- **`clearSubtasks(taskIds)`**: Clears all subtasks from one or more tasks.

### 2. Task Information and Status

These functions are used to retrieve information about tasks and manage their status.

- **`listTasks(statusFilter, withSubtasks)`**: Lists all tasks, with options to filter by status and include subtasks.
- **`findTaskById(taskId)`**: Finds a task by its ID.
- **`taskExists(taskId)`**: Checks if a task with a given ID exists.
- **`setTaskStatus(taskIdInput, newStatus)`**: Sets the status of a task or subtask.
- **`updateSingleTaskStatus(taskIdInput, newStatus)`**: A helper function to update the status of a single task or subtask.
- **`findNextTask()`**: Determines the next task to work on based on dependencies and status.
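The kind of selection `findNextTask` performs can be sketched like this (a simplified illustration; the real implementation also weighs priority level and dependency counts, and the data here is invented):

```javascript
// Pick the first pending task whose dependencies are all done.
function findNextTask(tasks) {
	const done = new Set(
		tasks.filter((t) => t.status === "done").map((t) => t.id)
	);
	return (
		tasks.find(
			(t) => t.status === "pending" && t.dependencies.every((d) => done.has(d))
		) ?? null
	);
}

const tasks = [
	{ id: 1, status: "done", dependencies: [] },
	{ id: 2, status: "pending", dependencies: [1] },
	{ id: 3, status: "pending", dependencies: [2] },
];
console.log(findNextTask(tasks).id); // → 2 (task 3 is blocked by task 2)
```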
### 3. Task Analysis and Expansion

These functions leverage AI to analyze and break down tasks.

- **`parsePRD(prdPath, numTasks)`**: Parses a Product Requirements Document (PRD) to generate an initial set of tasks.
- **`expandTask(taskId, numSubtasks)`**: Expands a task into a specified number of subtasks using AI.
- **`expandAllTasks(numSubtasks)`**: Expands all eligible pending or in-progress tasks.
- **`analyzeTaskComplexity(options)`**: Analyzes the complexity of tasks and generates recommendations for expansion.
- **`readComplexityReport()`**: Reads the complexity analysis report.

### 4. Dependency Management

These functions are crucial for managing the relationships between tasks.

- **`isTaskDependentOn(task, targetTaskId)`**: Checks if a task has a direct or indirect dependency on another task.
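An indirect dependency check of this kind amounts to a walk over the dependency graph, roughly as below (illustrative only; the real function takes a task object rather than an ID map):

```javascript
// Does `taskId` depend on `targetId`, directly or transitively?
function isDependentOn(tasksById, taskId, targetId, seen = new Set()) {
	if (seen.has(taskId)) return false; // guard against dependency cycles
	seen.add(taskId);
	const deps = tasksById[taskId]?.dependencies ?? [];
	return deps.some(
		(d) => d === targetId || isDependentOn(tasksById, d, targetId, seen)
	);
}

const tasksById = {
	1: { dependencies: [] },
	2: { dependencies: [1] },
	3: { dependencies: [2] },
};
console.log(isDependentOn(tasksById, 3, 1)); // → true (3 → 2 → 1)
console.log(isDependentOn(tasksById, 1, 3)); // → false
```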
### 5. Project and Configuration

These functions are for managing the project and its configuration.

- **`generateTaskFiles()`**: Generates individual task files from `tasks.json`.
- **`migrateProject()`**: Migrates the project to the new `.taskmaster` directory structure.
- **`performResearch(query, options)`**: Performs AI-powered research with project context.

This overview should provide a solid foundation for creating user-facing documentation. For more detailed information on each function, refer to the source code in `scripts/modules/task-manager/`.

# MCP Interface Synopsis

This document provides an overview of the MCP (Model Context Protocol) interface for the Task Master application. The MCP interface is defined in the `mcp-server/` directory and exposes the application's core functionalities as a set of tools that can be called remotely.

## Core Concepts

The MCP interface is built on top of the `fastmcp` library and registers a set of tools that correspond to the core functionalities of the Task Master application. These tools are defined in the `mcp-server/src/tools/` directory and are registered with the MCP server in `mcp-server/src/tools/index.js`.

Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
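The overall shape of such a tool can be sketched as follows (a hand-rolled check stands in for the `zod` schema, and the tool name and fields are illustrative, not taken from the real tool files):

```javascript
// Illustrative tool definition: name, description, parameter validation, execute.
const setTaskStatusTool = {
	name: "set_task_status",
	description: "Sets the status of a task or subtask.",
	// In the real server this is a zod schema; a plain check stands in here.
	validate(params) {
		if (typeof params.id !== "string" || typeof params.status !== "string") {
			throw new Error("id and status are required strings");
		}
		return params;
	},
	execute(params) {
		const { id, status } = this.validate(params);
		// A real tool would delegate to the core setTaskStatus(id, status) here.
		return { ok: true, id, status };
	},
};

console.log(setTaskStatusTool.execute({ id: "1.2", status: "done" }));
```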
## Tool Categories

The MCP tools can be categorized in the same way as the core functionalities:

### 1. Task and Subtask Management

- **`add_task`**: Creates a new task.
- **`add_subtask`**: Adds a subtask to a parent task.
- **`remove_task`**: Removes one or more tasks or subtasks.
- **`remove_subtask`**: Removes a subtask from its parent.
- **`update_task`**: Updates a single task.
- **`update_subtask`**: Appends information to a subtask.
- **`update`**: Updates multiple tasks.
- **`move_task`**: Moves a task or subtask.
- **`clear_subtasks`**: Clears all subtasks from one or more tasks.

### 2. Task Information and Status

- **`get_tasks`**: Lists all tasks.
- **`get_task`**: Shows the details of a specific task.
- **`next_task`**: Shows the next task to work on.
- **`set_task_status`**: Sets the status of a task or subtask.

### 3. Task Analysis and Expansion

- **`parse_prd`**: Parses a PRD to generate tasks.
- **`expand_task`**: Expands a task into subtasks.
- **`expand_all`**: Expands all eligible tasks.
- **`analyze_project_complexity`**: Analyzes task complexity.
- **`complexity_report`**: Displays the complexity analysis report.

### 4. Dependency Management

- **`add_dependency`**: Adds a dependency to a task.
- **`remove_dependency`**: Removes a dependency from a task.
- **`validate_dependencies`**: Validates the dependencies of all tasks.
- **`fix_dependencies`**: Fixes any invalid dependencies.

### 5. Project and Configuration

- **`initialize_project`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`models`**: Manages AI model configurations.
- **`research`**: Performs AI-powered research.

### 6. Tag Management

- **`add_tag`**: Creates a new tag.
- **`delete_tag`**: Deletes a tag.
- **`list_tags`**: Lists all tags.
- **`use_tag`**: Switches to a different tag.
- **`rename_tag`**: Renames a tag.
- **`copy_tag`**: Copies a tag.

This synopsis provides a clear overview of the MCP interface and its available tools, for anyone writing documentation aimed at developers who interact with Task Master programmatically.
68 apps/docs/capabilities/mcp.mdx Normal file
@@ -0,0 +1,68 @@
---
title: MCP Tools
sidebarTitle: "MCP Tools"
---

# MCP Tools

This document provides an overview of the MCP (Model Context Protocol) interface for the Task Master application. The MCP interface is defined in the `mcp-server/` directory and exposes the application's core functionalities as a set of tools that can be called remotely.

## Core Concepts

The MCP interface is built on top of the `fastmcp` library and registers a set of tools that correspond to the core functionalities of the Task Master application. These tools are defined in the `mcp-server/src/tools/` directory and are registered with the MCP server in `mcp-server/src/tools/index.js`.

Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.

## Tool Categories

The MCP tools can be categorized in the same way as the core functionalities:

### 1. Task and Subtask Management

- **`add_task`**: Creates a new task.
- **`add_subtask`**: Adds a subtask to a parent task.
- **`remove_task`**: Removes one or more tasks or subtasks.
- **`remove_subtask`**: Removes a subtask from its parent.
- **`update_task`**: Updates a single task.
- **`update_subtask`**: Appends information to a subtask.
- **`update`**: Updates multiple tasks.
- **`move_task`**: Moves a task or subtask.
- **`clear_subtasks`**: Clears all subtasks from one or more tasks.

### 2. Task Information and Status

- **`get_tasks`**: Lists all tasks.
- **`get_task`**: Shows the details of a specific task.
- **`next_task`**: Shows the next task to work on.
- **`set_task_status`**: Sets the status of a task or subtask.

### 3. Task Analysis and Expansion

- **`parse_prd`**: Parses a PRD to generate tasks.
- **`expand_task`**: Expands a task into subtasks.
- **`expand_all`**: Expands all eligible tasks.
- **`analyze_project_complexity`**: Analyzes task complexity.
- **`complexity_report`**: Displays the complexity analysis report.

### 4. Dependency Management

- **`add_dependency`**: Adds a dependency to a task.
- **`remove_dependency`**: Removes a dependency from a task.
- **`validate_dependencies`**: Validates the dependencies of all tasks.
- **`fix_dependencies`**: Fixes any invalid dependencies.

### 5. Project and Configuration

- **`initialize_project`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`models`**: Manages AI model configurations.
- **`research`**: Performs AI-powered research.

### 6. Tag Management

- **`add_tag`**: Creates a new tag.
- **`delete_tag`**: Deletes a tag.
- **`list_tags`**: Lists all tags.
- **`use_tag`**: Switches to a different tag.
- **`rename_tag`**: Renames a tag.
- **`copy_tag`**: Copies a tag.
163 apps/docs/capabilities/task-structure.mdx Normal file
@@ -0,0 +1,163 @@
---
title: "Task Structure"
sidebarTitle: "Task Structure"
description: "Tasks in Task Master follow a specific format designed to provide comprehensive information for both humans and AI assistants."
---

## Task Fields in tasks.json

Tasks in `tasks.json` have the following structure:

| Field          | Description                                    | Example                                                |
| -------------- | ---------------------------------------------- | ------------------------------------------------------ |
| `id`           | Unique identifier for the task.                | `1`                                                    |
| `title`        | Brief, descriptive title.                      | `"Initialize Repo"`                                    |
| `description`  | What the task involves.                        | `"Create a new repository, set up initial structure."` |
| `status`       | Current state.                                 | `"pending"`, `"done"`, `"deferred"`                    |
| `dependencies` | Prerequisite task IDs. ✅ Completed, ⏱️ Pending | `[1, 2]`                                               |
| `priority`     | Task importance.                               | `"high"`, `"medium"`, `"low"`                          |
| `details`      | Implementation instructions.                   | `"Use GitHub client ID/secret, handle callback..."`    |
| `testStrategy` | How to verify success.                         | `"Deploy and confirm 'Hello World' response."`         |
| `subtasks`     | Nested subtasks related to the main task.      | `[{"id": 1, "title": "Configure OAuth", ...}]`         |

## Task File Format

Individual task files follow this format:

```
# Task ID: <id>
# Title: <title>
# Status: <status>
# Dependencies: <comma-separated list of dependency IDs>
# Priority: <priority>
# Description: <brief description>
# Details:
<detailed implementation notes>

# Test Strategy:
<verification approach>
```

## Features in Detail

<AccordionGroup>
<Accordion title="Analyzing Task Complexity">
The `analyze-complexity` command:

- Analyzes each task using AI to assess its complexity on a scale of 1-10
- Recommends the optimal number of subtasks based on the configured DEFAULT_SUBTASKS
- Generates tailored prompts for expanding each task
- Creates a comprehensive JSON report with ready-to-use commands
- Saves the report to `scripts/task-complexity-report.json` by default

The generated report contains:

- Complexity analysis for each task (scored 1-10)
- Recommended number of subtasks based on complexity
- AI-generated expansion prompts customized for each task
- Ready-to-run expansion commands directly within each task analysis
</Accordion>

<Accordion title="Viewing Complexity Report">
The `complexity-report` command:

- Displays a formatted, easy-to-read version of the complexity analysis report
- Shows tasks organized by complexity score (highest to lowest)
- Provides complexity distribution statistics (low, medium, high)
- Highlights tasks recommended for expansion based on the threshold score
- Includes ready-to-use expansion commands for each complex task
- If no report exists, offers to generate one on the spot
</Accordion>

<Accordion title="Smart Task Expansion">
The `expand` command automatically checks for and uses the complexity report:

When a complexity report exists:

- Tasks are automatically expanded using the recommended subtask count and prompts
- When expanding all tasks, they're processed in order of complexity (highest first)
- Research-backed generation is preserved from the complexity analysis
- You can still override recommendations with explicit command-line options

Example workflow:

```bash
# Generate the complexity analysis report with research capabilities
task-master analyze-complexity --research

# Review the report in a readable format
task-master complexity-report

# Expand tasks using the optimized recommendations
task-master expand --id=8
# or expand all tasks
task-master expand --all
```
</Accordion>

<Accordion title="Finding the Next Task">
The `next` command:

- Identifies tasks that are pending/in-progress and have all dependencies satisfied
- Prioritizes tasks by priority level, dependency count, and task ID
- Displays comprehensive information about the selected task:
  - Basic task details (ID, title, priority, dependencies)
  - Implementation details
  - Subtasks (if they exist)
- Provides contextual suggested actions:
  - Command to mark the task as in-progress
  - Command to mark the task as done
  - Commands for working with subtasks
</Accordion>

<Accordion title="Viewing Specific Task Details">
The `show` command:

- Displays comprehensive details about a specific task or subtask
- Shows task status, priority, dependencies, and detailed implementation notes
- For parent tasks, displays all subtasks and their status
- For subtasks, shows the parent task relationship
- Provides contextual action suggestions based on the task's state
- Works with both regular tasks and subtasks (using the format `taskId.subtaskId`)
</Accordion>
</AccordionGroup>

## Best Practices for AI-Driven Development

<CardGroup cols={2}>
<Card title="📝 Detailed PRD" icon="lightbulb">
The more detailed your PRD, the better the generated tasks will be.
</Card>

<Card title="👀 Review Tasks" icon="magnifying-glass">
After parsing the PRD, review the tasks to ensure they make sense and have appropriate dependencies.
</Card>

<Card title="📊 Analyze Complexity" icon="chart-line">
Use the complexity analysis feature to identify which tasks should be broken down further.
</Card>

<Card title="⛓️ Follow Dependencies" icon="link">
Always respect task dependencies - the Cursor agent will help with this.
</Card>

<Card title="🔄 Update As You Go" icon="arrows-rotate">
If your implementation diverges from the plan, use the update command to keep future tasks aligned.
</Card>

<Card title="📦 Break Down Tasks" icon="boxes-stacked">
Use the expand command to break down complex tasks into manageable subtasks.
</Card>

<Card title="🔄 Regenerate Files" icon="file-arrow-up">
After any updates to tasks.json, regenerate the task files to keep them in sync.
</Card>

<Card title="💬 Provide Context" icon="comment">
When asking the Cursor agent to help with a task, provide context about what you're trying to achieve.
</Card>

<Card title="✅ Validate Dependencies" icon="circle-check">
Periodically run the validate-dependencies command to check for invalid or circular dependencies.
</Card>
</CardGroup>
||||
221
apps/docs/capabilities/workflows.mdx
Normal file
221
apps/docs/capabilities/workflows.mdx
Normal file
@@ -0,0 +1,221 @@
|
||||
---
|
||||
title: "Workflow Engine"
|
||||
sidebarTitle: "Workflows"
|
||||
---
|
||||
|
||||
The Task Master Workflow Engine provides advanced task execution capabilities with git worktree isolation and Claude Code process management.
|
||||
|
||||
## Overview
|
||||
|
||||
The workflow system extends Task Master with powerful execution features:
|
||||
|
||||
- **Git Worktree Isolation**: Each task runs in its own isolated git worktree
|
||||
- **Process Sandboxing**: Spawns dedicated Claude Code processes for task execution
|
||||
- **Real-time Monitoring**: Track workflow progress and process output
|
||||
- **State Management**: Persistent workflow state across sessions
|
||||
- **Parallel Execution**: Run multiple tasks concurrently with resource limits
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Starting a Workflow
|
||||
|
||||
```bash
|
||||
# Start workflow for a specific task
|
||||
task-master workflow start 1.2
|
||||
|
||||
# Using the alias
|
||||
task-master workflow run 1.2
|
||||
```
|
||||
|
||||
### Monitoring Workflows
|
||||
|
||||
```bash
|
||||
# List all active workflows
|
||||
task-master workflow list
|
||||
|
||||
# Check specific workflow status
|
||||
task-master workflow status workflow-1.2-1234567890-abc123
|
||||
|
||||
# Using the alias
|
||||
task-master workflow info workflow-1.2-1234567890-abc123
|
||||
```
|
||||
|
||||
### Stopping Workflows
|
||||
|
||||
```bash
|
||||
# Stop a running workflow
|
||||
task-master workflow stop workflow-1.2-1234567890-abc123
|
||||
|
||||
# Force stop using alias
|
||||
task-master workflow kill workflow-1.2-1234567890-abc123
|
||||
```
|
||||
|
||||
## Workflow States
|
||||
|
||||
| State | Description |
|
||||
|-------|-------------|
|
||||
| `pending` | Created but not started |
|
||||
| `initializing` | Setting up worktree and process |
|
||||
| `running` | Active execution in progress |
|
||||
| `paused` | Temporarily stopped |
|
||||
| `completed` | Successfully finished |
|
||||
| `failed` | Error occurred during execution |
|
||||
| `cancelled` | User cancelled the workflow |
|
||||
| `timeout` | Exceeded time limit |
|
||||
|
||||
## Environment Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Set these environment variables to customize workflow behavior:
|
||||
|
||||
- `TASKMASTER_WORKFLOW_DEBUG`: Enable debug logging
|
||||
- `TASKMASTER_CLAUDE_PATH`: Custom Claude Code executable path
|
||||
- `TASKMASTER_WORKTREE_BASE`: Base directory for worktrees
|
||||
- `TASKMASTER_MAX_CONCURRENT`: Maximum concurrent workflows
|
||||
|
||||
### Example Configuration
|
||||
|
||||
```bash
|
||||
# Enable debug mode
|
||||
export TASKMASTER_WORKFLOW_DEBUG=true
|
||||
|
||||
# Set custom Claude path
|
||||
export TASKMASTER_CLAUDE_PATH=/usr/local/bin/claude
|
||||
|
||||
# Set worktree base directory
|
||||
export TASKMASTER_WORKTREE_BASE=./worktrees
|
||||
|
||||
# Limit concurrent workflows
|
||||
export TASKMASTER_MAX_CONCURRENT=3
|
||||
```
|
||||
|
||||
## Git Worktree Integration
|
||||
|
||||
### How It Works
|
||||
|
||||
When you start a workflow:
|
||||
|
||||
1. **Worktree Creation**: A new git worktree is created for the task
|
||||
2. **Process Spawn**: A dedicated Claude Code process is launched in the worktree
|
||||
3. **Task Execution**: The task runs in complete isolation
|
||||
4. **State Tracking**: Progress is monitored and persisted
|
||||
5. **Cleanup**: Worktree is removed when workflow completes
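The worktree lifecycle above corresponds to plain `git worktree` operations, roughly as below (run here against a throwaway repository so the sketch is self-contained; branch and path names are illustrative):

```shell
set -e
# Throwaway repo standing in for your project
demo=$(mktemp -d) && cd "$demo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# 1. Worktree creation: an isolated checkout on its own branch
git worktree add -q ./worktrees/task-1.2 -b workflow/task-1.2

# 2-4. The dedicated process would run inside ./worktrees/task-1.2 here

# 5. Cleanup once the workflow completes
git worktree remove ./worktrees/task-1.2
git branch -q -D workflow/task-1.2
```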
### Worktree Structure

```
project/
├── .git/           # Main repository
├── src/            # Main working directory
└── worktrees/      # Workflow worktrees
    ├── task-1.2/   # Worktree for task 1.2
    ├── task-2.1/   # Worktree for task 2.1
    └── task-3.4/   # Worktree for task 3.4
```

## Best Practices

### When to Use Workflows

Use workflows for tasks that:

- Require isolated development environments
- Need dedicated Claude Code attention
- Benefit from parallel execution
- Require process monitoring and state tracking

### Workflow Management

- **Start workflows for complex tasks** that need focused execution
- **Monitor progress** using the `workflow status` command
- **Clean up completed workflows** to free resources
- **Use meaningful task descriptions** for better workflow tracking

### Resource Management

- **Limit concurrent workflows** based on system resources
- **Monitor workflow output** for debugging and progress tracking
- **Stop unnecessary workflows** to free up resources

## Troubleshooting

### Common Issues

**Worktree Creation Fails**
```bash
# Check git version (requires 2.5+)
git --version

# Verify the project is a git repository
git status
```

**Claude Code Not Found**
```bash
# Check Claude installation
which claude

# Set custom path
export TASKMASTER_CLAUDE_PATH=/path/to/claude
```

**Permission Errors**
```bash
# Check worktree directory permissions
chmod -R 755 ./worktrees
```

### Debug Mode

Enable debug logging for troubleshooting:

```bash
export TASKMASTER_WORKFLOW_DEBUG=true
task-master workflow start 1.2
```

## Integration Examples

### With VS Code Extension

The workflow engine integrates with the Task Master VS Code extension to provide:

- **Workflow Tree View**: Visual workflow management
- **Process Monitoring**: Real-time output streaming
- **Worktree Navigation**: Quick access to isolated workspaces
- **Status Indicators**: Visual workflow state tracking

### With Task Management

```bash
# Typical workflow
task-master next                               # Find next task
task-master workflow start 1.2                 # Start workflow
task-master workflow status <id>               # Monitor progress
task-master set-status --id=1.2 --status=done  # Mark complete
```

## Advanced Features

### Parallel Execution

Run multiple workflows simultaneously:

```bash
# Start multiple workflows
task-master workflow start 1.2
task-master workflow start 2.1
task-master workflow start 3.4

# Monitor all active workflows
task-master workflow list
```

### Process Monitoring

Each workflow provides real-time output monitoring and process management through the workflow engine's event system.

### State Persistence

Workflow state is automatically persisted across sessions, allowing you to resume monitoring workflows after restarting the CLI.
|
||||
84
apps/docs/docs.json
Normal file
@@ -0,0 +1,84 @@
{
  "$schema": "https://mintlify.com/docs.json",
  "theme": "mint",
  "name": "Task Master",
  "colors": {
    "primary": "#3366CC",
    "light": "#6699FF",
    "dark": "#24478F"
  },
  "favicon": "/favicon.svg",
  "navigation": {
    "tabs": [
      {
        "tab": "Task Master Documentation",
        "groups": [
          {
            "group": "Welcome",
            "pages": ["introduction"]
          },
          {
            "group": "Getting Started",
            "pages": [
              {
                "group": "Quick Start",
                "pages": [
                  "getting-started/quick-start/quick-start",
                  "getting-started/quick-start/requirements",
                  "getting-started/quick-start/installation",
                  "getting-started/quick-start/configuration-quick",
                  "getting-started/quick-start/prd-quick",
                  "getting-started/quick-start/tasks-quick",
                  "getting-started/quick-start/execute-quick"
                ]
              },
              "getting-started/faq",
              "getting-started/contribute"
            ]
          },
          {
            "group": "Best Practices",
            "pages": [
              "best-practices/index",
              "best-practices/configuration-advanced",
              "best-practices/advanced-tasks"
            ]
          },
          {
            "group": "Technical Capabilities",
            "pages": [
              "capabilities/mcp",
              "capabilities/cli-root-commands",
              "capabilities/workflows",
              "capabilities/task-structure"
            ]
          }
        ]
      }
    ],
    "global": {
      "anchors": [
        {
          "anchor": "Github",
          "href": "https://github.com/eyaltoledano/claude-task-master",
          "icon": "github"
        },
        {
          "anchor": "Discord",
          "href": "https://discord.gg/fWJkU7rf",
          "icon": "discord"
        }
      ]
    }
  },
  "logo": {
    "light": "/logo/task-master-logo.png",
    "dark": "/logo/task-master-logo.png"
  },
  "footer": {
    "socials": {
      "x": "https://x.com/TaskmasterAI",
      "github": "https://github.com/eyaltoledano/claude-task-master"
    }
  }
}
9
apps/docs/favicon.svg
Normal file
@@ -0,0 +1,9 @@
<svg width="100" height="100" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <!-- Blue form with check from logo -->
  <rect x="16" y="10" width="68" height="80" rx="9" fill="#3366CC"/>
  <polyline points="33,44 41,55 56,29" fill="none" stroke="#FFFFFF" stroke-width="6"/>
  <circle cx="33" cy="64" r="4" fill="#FFFFFF"/>
  <rect x="43" y="61" width="27" height="6" fill="#FFFFFF"/>
  <circle cx="33" cy="77" r="4" fill="#FFFFFF"/>
  <rect x="43" y="75" width="27" height="6" fill="#FFFFFF"/>
</svg>

After Width: | Height: | Size: 513 B |
335
apps/docs/getting-started/contribute.mdx
Normal file
@@ -0,0 +1,335 @@
# Contributing to Task Master

Thank you for your interest in contributing to Task Master! We're excited to work with you and appreciate your help in making this project better. 🚀

## 🤝 Our Collaborative Approach

We're a **PR-friendly team** that values collaboration:

- ✅ **We review PRs quickly** - Usually within hours, not days
- ✅ **We're super reactive** - Expect fast feedback and engagement
- ✅ **We sometimes take over PRs** - If your contribution is valuable but needs cleanup, we might jump in to help finish it
- ✅ **We're open to all contributions** - From bug fixes to major features

**We don't mind AI-generated code**, but we do expect you to:

- ✅ **Review and understand** what the AI generated
- ✅ **Test the code thoroughly** before submitting
- ✅ **Ensure it's well-written** and follows our patterns
- ❌ **Don't submit "AI slop"** - untested, unreviewed AI output

> **Why this matters**: We spend significant time reviewing PRs. Help us help you by submitting quality contributions that save everyone time!

## 🚀 Quick Start for Contributors

### 1. Fork and Clone

```bash
git clone https://github.com/YOUR_USERNAME/claude-task-master.git
cd claude-task-master
npm install
```

### 2. Create a Feature Branch

**Important**: Always target the `next` branch, not `main`:

```bash
git checkout next
git pull origin next
git checkout -b feature/your-feature-name
```

### 3. Make Your Changes

Follow our development guidelines below.

### 4. Test Everything Yourself

**Before submitting your PR**, ensure:

```bash
# Run all tests
npm test

# Check formatting
npm run format-check

# Fix formatting if needed
npm run format
```

### 5. Create a Changeset

**Required for most changes**:

```bash
npm run changeset
```

See the [Changeset Guidelines](#changeset-guidelines) below for details.

### 6. Submit Your PR

- Target the `next` branch
- Write a clear description
- Reference any related issues

## 📋 Development Guidelines

### Branch Strategy

- **`main`**: Production-ready code
- **`next`**: Development branch - **target this for PRs**
- **Feature branches**: `feature/description` or `fix/description`

### Code Quality Standards

1. **Write tests** for new functionality
2. **Follow existing patterns** in the codebase
3. **Add JSDoc comments** for functions
4. **Keep functions focused** and single-purpose

### Testing Requirements

Your PR **must pass all CI checks**:

- ✅ **Unit tests**: `npm test`
- ✅ **Format check**: `npm run format-check`

**Test your changes locally first** - this saves review time and shows you care about quality.

## 📦 Changeset Guidelines

We use [Changesets](https://github.com/changesets/changesets) to manage versioning and generate changelogs.

### When to Create a Changeset

**Always create a changeset for**:

- ✅ New features
- ✅ Bug fixes
- ✅ Breaking changes
- ✅ Performance improvements
- ✅ User-facing documentation updates
- ✅ Dependency updates that affect functionality

**Skip changesets for**:

- ❌ Internal documentation only
- ❌ Test-only changes
- ❌ Code formatting/linting
- ❌ Development tooling that doesn't affect users

### How to Create a Changeset

1. **After making your changes**:

   ```bash
   npm run changeset
   ```

2. **Choose the bump type**:

   - **Major**: Breaking changes
   - **Minor**: New features
   - **Patch**: Bug fixes, docs, performance improvements

3. **Write a clear summary**:

   ```
   Add support for custom AI models in MCP configuration
   ```

4. **Commit the changeset file** with your changes:

   ```bash
   git add .changeset/*.md
   git commit -m "feat: add custom AI model support"
   ```

### Changeset vs Git Commit Messages

- **Changeset summary**: User-facing, goes in CHANGELOG.md
- **Git commit**: Developer-facing, explains the technical change

Example:

```bash
# Changeset summary (user-facing)
"Add support for custom Ollama models"

# Git commit message (developer-facing)
"feat(models): implement custom Ollama model validation

- Add model validation for custom Ollama endpoints
- Update configuration schema to support custom models
- Add tests for new validation logic"
```
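For reference, the file that `npm run changeset` generates under `.changeset/` is a small markdown file pairing the affected package and bump type with the user-facing summary. A sketch matching the example summary above (the filename is auto-generated):

```markdown
---
"task-master-ai": patch
---

Add support for custom Ollama models
```

This is the file you commit with `git add .changeset/*.md`; Changesets later consumes it to bump the version and write the CHANGELOG entry.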

## 🔧 Development Setup

### Prerequisites

- Node.js 18+
- npm or yarn

### Environment Setup

1. **Copy environment template**:

   ```bash
   cp .env.example .env
   ```

2. **Add your API keys** (for testing AI features):

   ```bash
   ANTHROPIC_API_KEY=your_key_here
   OPENAI_API_KEY=your_key_here
   # Add others as needed
   ```

### Running Tests

```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run with coverage
npm run test:coverage

# Run E2E tests
npm run test:e2e
```

### Code Formatting

We use Prettier for consistent formatting:

```bash
# Check formatting
npm run format-check

# Fix formatting
npm run format
```

## 📝 PR Guidelines

### Before Submitting

- [ ] **Target the `next` branch**
- [ ] **Test everything locally**
- [ ] **Run the full test suite**
- [ ] **Check code formatting**
- [ ] **Create a changeset** (if needed)
- [ ] **Re-read your changes** - ensure they're clean and well-thought-out

### PR Description Template

```markdown
## Description

Brief description of what this PR does.

## Type of Change

- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing

- [ ] I have tested this locally
- [ ] All existing tests pass
- [ ] I have added tests for new functionality

## Changeset

- [ ] I have created a changeset (or this change doesn't need one)

## Additional Notes

Any additional context or notes for reviewers.
```

### What We Look For

✅ **Good PRs**:

- Clear, focused changes
- Comprehensive testing
- Good commit messages
- Proper changeset (when needed)
- Self-reviewed code

❌ **Avoid**:

- Massive PRs that change everything
- Untested code
- Formatting issues
- Missing changesets for user-facing changes
- AI-generated code that wasn't reviewed

## 🏗️ Project Structure

```
claude-task-master/
├── bin/          # CLI executables
├── mcp-server/   # MCP server implementation
├── scripts/      # Core task management logic
├── src/          # Shared utilities, providers, and well-refactored code (we are gradually moving everything here)
├── tests/        # Test files
├── docs/         # Documentation
├── .cursor/      # Cursor IDE rules and configuration
└── assets/       # Assets like rules and configuration for all IDEs
```

### Key Areas for Contribution

- **CLI Commands**: `scripts/modules/commands.js`
- **MCP Tools**: `mcp-server/src/tools/`
- **Core Logic**: `scripts/modules/task-manager/`
- **AI Providers**: `src/ai-providers/`
- **Tests**: `tests/`

## 🐛 Reporting Issues

### Bug Reports

Include:

- Task Master version
- Node.js version
- Operating system
- Steps to reproduce
- Expected vs actual behavior
- Error messages/logs

### Feature Requests

Include:

- Clear description of the feature
- Use case/motivation
- Proposed implementation (if you have ideas)
- Willingness to contribute

## 💬 Getting Help

- **Discord**: [Join our community](https://discord.gg/taskmasterai)
- **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues)
- **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions)

## 📄 License

By contributing, you agree that your contributions will be licensed under the same license as the project (MIT with Commons Clause).

---

**Thank you for contributing to Task Master!** 🎉

Your contributions help make AI-driven development more accessible and efficient for everyone.
12
apps/docs/getting-started/faq.mdx
Normal file
@@ -0,0 +1,12 @@
---
title: FAQ
sidebarTitle: "FAQ"
---

Coming soon.

## 💬 Getting Help

- **Discord**: [Join our community](https://discord.gg/taskmasterai)
- **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues)
- **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions)
112
apps/docs/getting-started/quick-start/configuration-quick.mdx
Normal file
@@ -0,0 +1,112 @@
---
title: Configuration
sidebarTitle: "Configuration"
---

Before getting started with Task Master, you'll need to set up your API keys. There are a couple of ways to do this depending on whether you're using the CLI or working inside MCP. It's also a good time to start getting familiar with the other configuration options available — even if you don't need to adjust them yet, knowing what's possible will help down the line.

## API Key Setup

Task Master uses environment variables to securely store provider API keys and optional endpoint URLs.

### MCP Usage: `mcp.json` File

For MCP/Cursor usage, configure keys in the `env` section of your `.cursor/mcp.json` file.

```json .cursor/mcp.json lines
{
  "mcpServers": {
    "task-master-ai": {
      "command": "node",
      "args": ["./mcp-server/server.js"],
      "env": {
        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
        "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
        "XAI_API_KEY": "XAI_API_KEY_HERE",
        "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
        "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
        "OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE",
        "GITHUB_API_KEY": "GITHUB_API_KEY_HERE"
      }
    }
  }
}
```

### CLI Usage: `.env` File

Create a `.env` file in your project root and include the keys for the providers you plan to use:

```bash .env lines
# Required API keys for providers configured in .taskmaster/config.json
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# etc.

# Optional Endpoint Overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1
#
# Azure OpenAI Configuration
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ or https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api

# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
```

## What Else Can Be Configured?

The main configuration file (`.taskmaster/config.json`) allows you to control nearly every aspect of Task Master's behavior. Here's a high-level look at what you can customize:

<Tip>
You don't need to configure everything up front. Most settings can be left as defaults or updated later as your workflow evolves.
</Tip>

<Accordion title="View Configuration Options">

### Models and Providers
- Role-based model setup: `main`, `research`, `fallback`
- Provider selection (Anthropic, OpenAI, Perplexity, etc.)
- Model IDs per role
- Temperature, max tokens, and other generation settings
- Custom base URLs for OpenAI-compatible APIs

### Global Settings
- `logLevel`: Logging verbosity
- `debug`: Enable/disable debug mode
- `projectName`: Optional name for your project
- `defaultTag`: Default tag for task grouping
- `defaultSubtasks`: Number of subtasks to auto-generate
- `defaultPriority`: Priority level for new tasks

### API Endpoint Overrides
- `ollamaBaseURL`: Custom Ollama server URL
- `azureBaseURL`: Global Azure endpoint
- `vertexProjectId`: Google Vertex AI project ID
- `vertexLocation`: Region for Vertex AI models

### Tag and Git Integration
- Default tag context per project
- Support for task isolation by tag
- Manual tag creation from Git branches

### State Management
- Active tag tracking
- Migration state
- Last tag switch timestamp

</Accordion>

<Note>
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/docs/best-practices/configuration-advanced) page.
</Note>
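To make the options above concrete, here is a hedged sketch of what a `.taskmaster/config.json` can contain. The key names follow the option lists above, but the values and model IDs are placeholders, and the exact schema may differ between versions; check the file in your own project for the authoritative shape.

```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "<your-main-model-id>",
      "maxTokens": 64000,
      "temperature": 0.2
    },
    "research": {
      "provider": "perplexity",
      "modelId": "<your-research-model-id>"
    },
    "fallback": {
      "provider": "openai",
      "modelId": "<your-fallback-model-id>"
    }
  },
  "global": {
    "logLevel": "info",
    "debug": false,
    "projectName": "my-project",
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "ollamaBaseURL": "http://localhost:11434/api"
  }
}
```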
59
apps/docs/getting-started/quick-start/execute-quick.mdx
Normal file
@@ -0,0 +1,59 @@
---
title: Executing Tasks
sidebarTitle: "Executing Tasks"
---

Now that your tasks are generated and reviewed, you are ready to begin executing.

## Select the Task to Work on: Next Task

Task Master provides the `next` command to find the next task to work on. You can access it with the following request:
```
What's the next task I should work on? Please consider dependencies and priorities.
```
Alternatively, you can use the CLI to show the next task:
```bash
task-master next
```

## Discuss Task
When you know which task to work on next, you can start chatting with the agent to make sure it understands the plan of action.

You can tag relevant files and folders so it knows what context to pull up as it generates its plan. For example:
```
Please review Task 5 and confirm you understand how to execute before beginning. Refer to @models @api and @schema
```
The agent will begin analyzing the task and files and respond with the steps to complete the task.

## Agent Task Execution

If you agree with the plan of action, tell the agent to get started.
```
You may begin. I believe in you.
```

## Review and Test

Once the agent is finished with the task, you can refer to the task's testing strategy to make sure it was completed correctly.

## Update Task Status

If the task was completed correctly, you can update the status to done:

```
Please mark Task 5 as done
```
The agent will execute:
```bash
task-master set-status --id=5 --status=done
```

## Rules and Context

If you ran into problems and had to debug errors, you can create new rules as you go. This builds up context about your codebase, which improves the creation and execution of future tasks.

## On to the Next Task!

By now you have all you need to get started executing code faster and smarter with Task Master.

If you have any questions, please check out the [Frequently Asked Questions](/docs/getting-started/faq).
159
apps/docs/getting-started/quick-start/installation.mdx
Normal file
@@ -0,0 +1,159 @@
---
title: Installation
sidebarTitle: "Installation"
---

Now that you have Node.js and your first API key, you are ready to install Task Master in one of three ways.

<Note>Cursor users can use the one-click install below.</Note>
<Accordion title="Quick Install for Cursor 1.0+ (One-Click)">

<a href="cursor://anysphere.cursor-deeplink/mcp/install?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUJFX0FQSV9LRVkiOiJZT1VSX0FaVVJFX0tFWV9IRVJFIiwiT0xMQU1BX0FQSV9LRVkiOiJZT1VSX09MTEFNQV9BUElfS0VZX0hFUkUifX0%3D">
  <img
    className="block dark:hidden"
    src="https://cursor.com/deeplink/mcp-install-light.png"
    alt="Add Task Master MCP server to Cursor"
    noZoom
  />
  <img
    className="hidden dark:block"
    src="https://cursor.com/deeplink/mcp-install-dark.png"
    alt="Add Task Master MCP server to Cursor"
    noZoom
  />
</a>

Or click the copy button (top-right of code block) then paste into your browser:

```text
cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQo=
```

> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
</Accordion>
## Installation Options

<Accordion title="Option 1: MCP (Recommended)">

MCP (Model Context Protocol) lets you run Task Master directly from your editor.

## 1. Add your MCP config at the following path depending on your editor

| Editor       | Scope   | Linux/macOS Path                      | Windows Path                                      | Key          |
| ------------ | ------- | ------------------------------------- | ------------------------------------------------- | ------------ |
| **Cursor**   | Global  | `~/.cursor/mcp.json`                  | `%USERPROFILE%\.cursor\mcp.json`                  | `mcpServers` |
|              | Project | `<project_folder>/.cursor/mcp.json`   | `<project_folder>\.cursor\mcp.json`               | `mcpServers` |
| **Windsurf** | Global  | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
| **VS Code**  | Project | `<project_folder>/.vscode/mcp.json`   | `<project_folder>\.vscode\mcp.json`               | `servers`    |

## Manual Configuration

### Cursor & Windsurf (`mcpServers`)

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
      }
    }
  }
}
```

> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.

> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.

### VS Code (`servers` + `type`)

```json
{
  "servers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
      },
      "type": "stdio"
    }
  }
}
```

> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.

## 2. (Cursor-only) Enable Taskmaster MCP

Open Cursor Settings (Ctrl+Shift+J) ➡ Click on MCP tab on the left ➡ Enable task-master-ai with the toggle

## 3. (Optional) Configure the models you want to use

In your editor's AI chat pane, say:

```txt
Change the main, research and fallback models to <model_name>, <model_name> and <model_name> respectively.
```

For example, to use Claude Code (no API key required):
```txt
Change the main model to claude-code/sonnet
```

## 4. Initialize Task Master

In your editor's AI chat pane, say:

```txt
Initialize taskmaster-ai in my project
```

</Accordion>

<Accordion title="Option 2: Using the Command Line">

## CLI Installation

```bash
# Install globally
npm install -g task-master-ai

# OR install locally within your project
npm install task-master-ai
```

## Initialize a new project

```bash
# If installed globally
task-master init

# If installed locally
npx task-master init

# Initialize project with specific rules
task-master init --rules cursor,windsurf,vscode
```

This will prompt you for project details and set up a new project with the necessary files and structure.
</Accordion>
4
apps/docs/getting-started/quick-start/moving-forward.mdx
Normal file
@@ -0,0 +1,4 @@
---
title: Moving Forward
sidebarTitle: "Moving Forward"
---
81
apps/docs/getting-started/quick-start/prd-quick.mdx
Normal file
@@ -0,0 +1,81 @@
---
title: PRD Creation and Parsing
sidebarTitle: "PRD Creation and Parsing"
---

# Writing a PRD

A PRD (Product Requirements Document) is the starting point of every task flow in Task Master. It defines what you're building and why. A clear PRD dramatically improves the quality of your tasks, your model outputs, and your final product — so it's worth taking the time to get it right.

<Tip>
You don't need to define your whole app up front. You can write a focused PRD just for the next feature or module you're working on.
</Tip>

<Tip>
You can start with an empty project, or you can start with a feature PRD on an existing project.
</Tip>

<Tip>
You can add and parse multiple PRDs per project using the `--append` flag.
</Tip>

## What Makes a Good PRD?

- Clear objective — what's the outcome or feature?
- Context — what's already in place or assumed?
- Constraints — what limits or requirements need to be respected?
- Reasoning — why are you building it this way?

The more context you give the model, the better the breakdown and results.

---

## Writing a PRD for Task Master

<Note>An example PRD can be found in `.taskmaster/templates/example_prd.txt`</Note>

You can co-write your PRD with an LLM using the following workflow:

1. **Chat about requirements** — explain what you want to build.
2. **Show an example PRD** — share the example PRD so the model understands the expected format. The example uses formatting that works well with Task Master's code; following it will yield better results.
3. **Iterate and refine** — work with the model to shape the draft into a clear and well-structured PRD.

This approach works great in Cursor, or anywhere you use a chat-based LLM.

---

## Where to Save Your PRD

Place your PRD file in the `.taskmaster/docs` folder in your project.

- You can have **multiple PRDs** per project.
- Name your PRDs clearly so they're easy to reference later.
- Examples: `dashboard_redesign.txt`, `user_onboarding.txt`

---

# Parse Your PRD into Tasks

This is where the Task Master magic begins.

In Cursor's AI chat, instruct the agent to generate tasks from your PRD:

```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at .taskmaster/docs/<prd-name>.txt.
```

The agent will execute the following command, which you can alternatively paste into the CLI:

```bash
task-master parse-prd .taskmaster/docs/<prd-name>.txt
```

This will:

- Parse your PRD document
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies

Now that you have written and parsed a PRD, you are ready to start setting up your tasks.
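As the tip near the top of this page notes, additional PRDs can be parsed into the same project with the `--append` flag, which adds the new tasks to the existing `tasks.json` instead of replacing it. A sketch, with an assumed file name:

```bash
# Append tasks from a second PRD to the existing task list
task-master parse-prd .taskmaster/docs/user_onboarding.txt --append
```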
19
apps/docs/getting-started/quick-start/quick-start.mdx
Normal file
@@ -0,0 +1,19 @@
---
title: Quick Start
sidebarTitle: "Quick Start"
---

This guide is for new users who want to start using Task Master with minimal setup time.

It covers:
- [Requirements](/docs/getting-started/quick-start/requirements): You will need Node.js and an AI model API key.
- [Installation](/docs/getting-started/quick-start/installation): How to install Task Master.
- [Configuration](/docs/getting-started/quick-start/configuration-quick): Setting up your API key, MCP, and more.
- [PRD](/docs/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/docs/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/docs/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/docs/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.

<Tip>
By the end of this guide, you'll have everything you need to begin working productively with Task Master.
</Tip>
50
apps/docs/getting-started/quick-start/requirements.mdx
Normal file
@@ -0,0 +1,50 @@
---
title: Requirements
sidebarTitle: "Requirements"
---

Before you can start using Task Master, you'll need to install Node.js and set up at least one model API key.

## 1. Node.js

Task Master is built with Node.js and requires it to run. npm (Node Package Manager) comes bundled with Node.js.

<Accordion title="Install Node.js">

### Installation

**Option 1: Download from the official website**

1. Visit [nodejs.org](https://nodejs.org)
2. Download the **LTS (Long Term Support)** version for your operating system
3. Run the installer and follow the setup wizard

**Option 2: Use a package manager**

<CodeGroup>

```bash Windows (Chocolatey)
choco install nodejs
```

```bash Windows (winget)
winget install OpenJS.NodeJS
```

</CodeGroup>

</Accordion>

## 2. Model API Key

Task Master uses AI across several commands, and those commands require an API key. For this Quick Start we recommend setting up an Anthropic API key for your main model and a Perplexity API key for your research model (optional but recommended).

<Tip>Task Master shows API costs per command used. Most users load $5-10 onto their keys and don't have to top them off for a few months.</Tip>

At least one (1) of the following is required:

1. Anthropic API key (Claude API) - **recommended for Quick Start**
2. OpenAI API key
3. Google Gemini API key
4. Perplexity API key (for research model)
5. xAI API key (for research or main model)
6. OpenRouter API key (for research or main model)
7. Claude Code (no API key required - requires the Claude Code CLI)
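For CLI usage, keys typically live in a `.env` file at your project root (an illustrative sketch; the values are placeholders, and you only need entries for the providers you actually use -- MCP setups usually put keys in the MCP config instead):

```bash
# .env (project root) -- placeholder values, never commit real keys
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
PERPLEXITY_API_KEY=pplx-xxxxxxxx
```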
4
apps/docs/getting-started/quick-start/rules-quick.mdx
Normal file
@@ -0,0 +1,4 @@
---
title: Rules and Context
sidebarTitle: "Rules and Context"
---
69
apps/docs/getting-started/quick-start/tasks-quick.mdx
Normal file
@@ -0,0 +1,69 @@
---
title: Tasks Setup
sidebarTitle: "Tasks Setup"
---

Now that your tasks are generated, you can review the plan and prepare for execution.

<Tip>
Not all of the setup steps are required, but they are recommended to ensure your coding agents work on accurate tasks.
</Tip>

## Expand Tasks

Expanding adds detail to tasks and creates subtasks. We recommend expanding all tasks using the MCP request below:

```
Expand all tasks into subtasks.
```

The agent will execute:

```bash
task-master expand --all
```

## List/Show Tasks

Use these to view task details. It is important to review the plan and ensure it makes sense in your project. Check for correct folder structures, dependencies, out-of-scope subtasks, etc.

To see a list of tasks and their descriptions, use the following request:

```
List all pending tasks so I can review.
```

To see all tasks in the CLI you can use:

```bash
task-master list
```

To see all implementation details of an individual task, including subtasks and testing strategy, use Show Task:

```
Show task 2 so I can review.
```

```bash
task-master show --id=<##>
```

## Update Tasks

If the task details need to be edited, you can update the task using this request:

```
Update Task 2 to use Postgres instead of MongoDB and remove the sharding subtask
```

Or this CLI command:

```bash
task-master update-task --id=2 --prompt="use Postgres instead of MongoDB and remove the sharding subtask"
```

## Analyze Complexity

Task Master can generate a complexity report, which is helpful to read before you begin. If you didn't already expand all your tasks, it can help identify which ones could be broken down further with subtasks.

```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```

You can view the report in a friendly table using:

```
Can you show me the complexity report in a more readable format?
```
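If you prefer to run these steps directly from the CLI, the equivalents are roughly the following (assuming the `analyze-complexity` and `complexity-report` commands available in your Task Master version):

```bash
# Generate the complexity report
task-master analyze-complexity

# Display the report as a readable table
task-master complexity-report
```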
<Check>Now you are ready to begin [executing tasks](/docs/getting-started/quick-start/execute-quick)</Check>
20
apps/docs/introduction.mdx
Normal file
@@ -0,0 +1,20 @@
<Tip>
Welcome to v1 of the Task Master Docs. Expect weekly updates as we expand and refine each section.
</Tip>

We've organized the docs into three sections depending on your experience level and goals:

### Getting Started - Jump in to [Quick Start](/docs/getting-started/quick-start)
Designed for first-time users. Get set up, create your first PRD, and run your first task.

### Best Practices
Covers common workflows, strategic usage of commands, model configuration tips, and real-world usage patterns. Recommended for active users.

### Technical Capabilities
A detailed glossary of every root command and available capability, meant for power users and contributors.

---

Thanks for being here early. If you spot something broken or want to contribute, check out the [GitHub repo](https://github.com/eyaltoledano/claude-task-master).

Have questions? Join our [Discord community](https://discord.gg/fWJkU7rf) to connect with other users and get help from the team.
18
apps/docs/licensing.md
Normal file
@@ -0,0 +1,18 @@
# Licensing

Task Master is licensed under the MIT License with Commons Clause. This means:

## ✅ Allowed:

- Use Task Master for any purpose (personal, commercial, academic)
- Modify the code
- Distribute copies
- Create and sell products built using Task Master

## ❌ Not Allowed:

- Sell Task Master itself
- Offer Task Master as a hosted service
- Create competing products based on Task Master

{/* See the [LICENSE](../LICENSE) file for the complete license text. */}
19
apps/docs/logo/dark.svg
Normal file
@@ -0,0 +1,19 @@
<svg width="800" height="240" viewBox="0 0 800 240" xmlns="http://www.w3.org/2000/svg">
  <!-- Background -->
  <rect width="800" height="240" fill="transparent"/>

  <!-- Curly braces -->
  <text x="40" y="156" font-size="140" fill="white" font-family="monospace">{</text>
  <text x="230" y="156" font-size="140" fill="white" font-family="monospace">}</text>

  <!-- Blue form with check -->
  <rect x="120" y="50" width="120" height="140" rx="16" fill="#3366CC"/>
  <polyline points="150,110 164,128 190,84" fill="none" stroke="white" stroke-width="10"/>
  <circle cx="150" cy="144" r="7" fill="white"/>
  <rect x="168" y="140" width="48" height="10" fill="white"/>
  <circle cx="150" cy="168" r="7" fill="white"/>
  <rect x="168" y="164" width="48" height="10" fill="white"/>

  <!-- Text -->
  <text x="340" y="156" font-family="Arial, sans-serif" font-size="76" font-weight="bold" fill="white">Task Master</text>
</svg>

After Width: | Height: | Size: 929 B
19
apps/docs/logo/light.svg
Normal file
@@ -0,0 +1,19 @@
<svg width="800" height="240" viewBox="0 0 800 240" xmlns="http://www.w3.org/2000/svg">
  <!-- Background -->
  <rect width="800" height="240" fill="transparent"/>

  <!-- Curly braces -->
  <text x="40" y="156" font-size="140" fill="#000000" font-family="monospace">{</text>
  <text x="230" y="156" font-size="140" fill="#000000" font-family="monospace">}</text>

  <!-- Blue form with check -->
  <rect x="120" y="50" width="120" height="140" rx="16" fill="#3366CC"/>
  <polyline points="150,110 164,128 190,84" fill="none" stroke="#FFFFFF" stroke-width="10"/>
  <circle cx="150" cy="144" r="7" fill="#FFFFFF"/>
  <rect x="168" y="140" width="48" height="10" fill="#FFFFFF"/>
  <circle cx="150" cy="168" r="7" fill="#FFFFFF"/>
  <rect x="168" y="164" width="48" height="10" fill="#FFFFFF"/>

  <!-- Text -->
  <text x="340" y="156" font-family="Arial, sans-serif" font-size="76" font-weight="bold" fill="#000000">Task Master</text>
</svg>

After Width: | Height: | Size: 941 B
BIN
apps/docs/logo/task-master-logo.png
Normal file
Binary file not shown.

After Width: | Height: | Size: 29 KiB
14
apps/docs/package.json
Normal file
@@ -0,0 +1,14 @@
{
  "name": "docs",
  "version": "0.0.1",
  "private": true,
  "description": "Task Master documentation powered by Mintlify",
  "scripts": {
    "dev": "mintlify dev",
    "build": "mintlify build",
    "preview": "mintlify preview"
  },
  "devDependencies": {
    "mintlify": "^4.0.0"
  }
}
10
apps/docs/style.css
Normal file
@@ -0,0 +1,10 @@
/*
 * This file is used to override the default logo style of the docs theme.
 * It is not used for the actual documentation content.
 */

#navbar img {
  height: auto !important; /* Let intrinsic SVG size determine height */
  width: 200px !important; /* Control width */
  margin-top: 5px !important; /* Add some space above the logo */
}
12
apps/docs/vercel.json
Normal file
@@ -0,0 +1,12 @@
{
  "rewrites": [
    {
      "source": "/",
      "destination": "https://taskmaster-49ce32d5.mintlify.dev/docs"
    },
    {
      "source": "/:match*",
      "destination": "https://taskmaster-49ce32d5.mintlify.dev/docs/:match*"
    }
  ]
}
40
apps/docs/whats-new.mdx
Normal file
@@ -0,0 +1,40 @@
---
title: "What's New"
sidebarTitle: "What's New"
---

## New Workflow Engine (Latest)

Task Master now includes a powerful workflow engine that revolutionizes how tasks are executed:

### 🚀 Key Features

- **Git Worktree Isolation**: Each task runs in its own isolated git worktree
- **Claude Code Integration**: Spawns dedicated Claude Code processes for task execution
- **Real-time Monitoring**: Track workflow progress and process output
- **Parallel Execution**: Run multiple tasks concurrently with resource management
- **State Persistence**: Workflow state is maintained across sessions
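Worktree isolation builds on plain git worktrees. Conceptually it resembles the sketch below (illustrative only: Task Master creates and cleans these up automatically, and the paths and branch names here are hypothetical):

```shell
# Throwaway repo so the worktree commands can run anywhere
git init -q demo-repo && cd demo-repo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Create an isolated working copy on its own branch for a task,
# leaving the main checkout untouched
git worktree add ../task-42-worktree -b task-42

# List active worktrees, then clean up when the task is done
git worktree list
git worktree remove ../task-42-worktree
```

Because each worktree has its own working directory and branch, agents working on different tasks never trample each other's uncommitted changes.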
### 🔧 New CLI Commands

```bash
# Start workflow execution
task-master workflow start <task-id>

# Monitor active workflows
task-master workflow list

# Check workflow status
task-master workflow status <workflow-id>

# Stop running workflow
task-master workflow stop <workflow-id>
```

### 📖 Learn More

Check out the new [Workflow Documentation](/capabilities/workflows) for comprehensive usage guides and best practices.

---

An easy way to see the latest releases
Some files were not shown because too many files have changed in this diff.