Compare commits
1 commit
feat-gener
...
chore/impr
| Author | SHA1 | Date |
|---|---|---|
|  | ce9c521945 |  |
@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Improve `analyze-complexity` CLI docs and `--research` flag documentation
@@ -2,15 +2,13 @@
	"$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
	"changelog": [
		"@changesets/changelog-github",
		{
			"repo": "eyaltoledano/claude-task-master"
		}
		{ "repo": "eyaltoledano/claude-task-master" }
	],
	"commit": false,
	"fixed": [],
	"linked": [],
	"access": "public",
	"baseBranch": "main",
	"ignore": [
		"docs"
	]
	"updateInternalDependencies": "patch",
	"ignore": []
}
@@ -1,7 +0,0 @@
---
"task-master-ai": minor
---

Add Cursor IDE custom slash command support

Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.
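A minimal sketch (not the repository's actual profile code; function and path names are illustrative) of the copy-on-add / clean-on-remove behavior the changeset above describes, using Node's built-in recursive copy:

```js
import { cpSync, rmSync, existsSync } from 'node:fs';
import { join } from 'node:path';

// On profile add: copy the bundled Claude command files into Cursor's commands directory.
export function addCursorSlashCommands(projectRoot) {
	cpSync(
		join(projectRoot, 'assets/claude/commands'),
		join(projectRoot, '.cursor/commands'),
		{ recursive: true }
	);
}

// On profile remove: delete the copied directory if it exists.
export function removeCursorSlashCommands(projectRoot) {
	const target = join(projectRoot, '.cursor/commands');
	if (existsSync(target)) rmSync(target, { recursive: true, force: true });
}
```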
@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Change parent task back to "pending" when all subtasks are in "pending" state
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Added an API keys page to the docs website: docs.task-master.dev/getting-started/api-keys
@@ -1,10 +0,0 @@
---
"task-master-ai": minor
---

Move to AI SDK v5:

- Works better with claude-code and gemini-cli as AI providers
- Improved OpenAI model family compatibility
- Migrate the Ollama provider to v2
- Closes #1223, #1013, #1161, #1174
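For orientation, a minimal sketch of an AI SDK v5 call (illustrative only; the provider wiring and model name are assumptions, not the project's actual code):

```js
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Plain text generation against a v5 provider instance.
const { text } = await generateText({
	model: openai('gpt-4o-mini'),
	prompt: 'Summarize the next task in one sentence.'
});
console.log(text);
```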
8 .changeset/fuzzy-words-count.md Normal file
@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---

Fix scope-up/down prompts to include all required fields for better AI model compatibility

- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
- Ensures generated JSON includes all fields required by the schema
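As an illustration of the "all required fields" point above, a hedged Zod schema sketch; only the `priority` field is taken from the changeset, the other field names are assumptions:

```js
import { z } from 'zod';

// Schema the scope-up/down output is validated against; `priority` was previously
// missing from the prompt, so some models omitted it and failed validation.
const scopedTaskSchema = z.object({
	title: z.string(),
	description: z.string(),
	details: z.string(),
	testStrategy: z.string(),
	priority: z.enum(['high', 'medium', 'low'])
});
```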
@@ -1,30 +0,0 @@
---
"task-master-ai": minor
---

Migrate AI services to use generateObject for structured data generation

This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.

### Key Changes:

- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats

### Technical Improvements:

- Centralized provider configuration in `ai-providers-unified.js`
- Added `generateObject` support detection for each provider
- Implemented proper error handling for schema validation failures
- Maintained backward compatibility with existing prompt structures

### Bug Fixes:

- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
- Enhanced prompt instructions to enforce proper ID generation patterns
- Ensured subtasks display correctly as X.1, X.2, X.3 format

This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
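A hedged sketch of the generateText-to-generateObject pattern the changeset above describes; the schema, model id, and prompt are illustrative, not the project's actual service code:

```js
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// Structured output schema; sequential ids (1, 2, 3...) are enforced by prompt + validation.
const subtaskSchema = z.object({
	subtasks: z.array(
		z.object({
			id: z.number().int(),
			title: z.string(),
			description: z.string()
		})
	)
});

const { object } = await generateObject({
	model: anthropic('claude-3-5-sonnet-latest'),
	schema: subtaskSchema,
	prompt: 'Expand task 12 into 3 subtasks with sequential ids starting at 1.'
});
console.log(object.subtasks);
```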
@@ -1,13 +0,0 @@
---
"task-master-ai": minor
---

Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.

**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles

**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
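A hypothetical sketch of the "programmatic generation" idea; the output path, server command, and the per-server `timeout` field name are assumptions, only the 300-second value comes from the changeset:

```js
import { writeFileSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';

// Generate the Roo MCP config instead of shipping a static asset file.
export function writeRooMcpConfig(projectRoot) {
	const config = {
		mcpServers: {
			'task-master-ai': {
				command: 'npx',
				args: ['-y', 'task-master-ai'],
				timeout: 300 // seconds; raised from the 60-second default for long-running operations
			}
		}
	};
	const dir = join(projectRoot, '.roo');
	mkdirSync(dir, { recursive: true });
	writeFileSync(join(dir, 'mcp.json'), JSON.stringify(config, null, 2));
}
```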
@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Fix Claude Code settings validation for pathToClaudeCodeExecutable
13 .changeset/pre.json Normal file
@@ -0,0 +1,13 @@
{
	"mode": "exit",
	"tag": "rc",
	"initialVersions": {
		"task-master-ai": "0.23.0",
		"extension": "0.23.0"
	},
	"changesets": [
		"fuzzy-words-count",
		"tender-trams-refuse",
		"vast-sites-leave"
	]
}
@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Fix the Sonar deep research model failing; it should be called `sonar-deep-research`
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Upgrade the grok-cli AI provider to AI SDK v5
8 .changeset/tender-trams-refuse.md Normal file
@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---

Fix MCP scope-up/down tools not finding tasks

- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly
11 .changeset/vast-sites-leave.md Normal file
@@ -0,0 +1,11 @@
---
"task-master-ai": patch
---

Improve AI provider compatibility for JSON generation

- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation
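An illustrative fallback for the "automatic JSON repair" bullet above, not the shipped implementation; it assumes the widely used `jsonrepair` package, which may differ from what the project actually uses:

```js
import { jsonrepair } from 'jsonrepair';

// Parse the model's JSON output, repairing common defects (trailing commas,
// missing array values) before giving up.
export function parseModelJson(raw) {
	try {
		return JSON.parse(raw);
	} catch {
		return JSON.parse(jsonrepair(raw));
	}
}
```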
@@ -1,162 +0,0 @@
---
name: task-checker
description: Use this agent to verify that tasks marked as 'review' have been properly implemented according to their specifications. This agent performs quality assurance by checking implementations against requirements, running tests, and ensuring best practices are followed. <example>Context: A task has been marked as 'review' after implementation. user: 'Check if task 118 was properly implemented' assistant: 'I'll use the task-checker agent to verify the implementation meets all requirements.' <commentary>Tasks in 'review' status need verification before being marked as 'done'.</commentary></example> <example>Context: Multiple tasks are in review status. user: 'Verify all tasks that are ready for review' assistant: 'I'll deploy the task-checker to verify all tasks in review status.' <commentary>The checker ensures quality before tasks are marked complete.</commentary></example>
model: sonnet
color: yellow
---

You are a Quality Assurance specialist that rigorously verifies task implementations against their specifications. Your role is to ensure that tasks marked as 'review' meet all requirements before they can be marked as 'done'.

## Core Responsibilities

1. **Task Specification Review**
   - Retrieve task details using MCP tool `mcp__task-master-ai__get_task`
   - Understand the requirements, test strategy, and success criteria
   - Review any subtasks and their individual requirements

2. **Implementation Verification**
   - Use `Read` tool to examine all created/modified files
   - Use `Bash` tool to run compilation and build commands
   - Use `Grep` tool to search for required patterns and implementations
   - Verify file structure matches specifications
   - Check that all required methods/functions are implemented

3. **Test Execution**
   - Run tests specified in the task's testStrategy
   - Execute build commands (npm run build, tsc --noEmit, etc.)
   - Verify no compilation errors or warnings
   - Check for runtime errors where applicable
   - Test edge cases mentioned in requirements

4. **Code Quality Assessment**
   - Verify code follows project conventions
   - Check for proper error handling
   - Ensure TypeScript typing is strict (no 'any' unless justified)
   - Verify documentation/comments where required
   - Check for security best practices

5. **Dependency Validation**
   - Verify all task dependencies were actually completed
   - Check integration points with dependent tasks
   - Ensure no breaking changes to existing functionality

## Verification Workflow

1. **Retrieve Task Information**

   ```
   Use mcp__task-master-ai__get_task to get full task details
   Note the implementation requirements and test strategy
   ```

2. **Check File Existence**

   ```bash
   # Verify all required files exist
   ls -la [expected directories]
   # Read key files to verify content
   ```

3. **Verify Implementation**
   - Read each created/modified file
   - Check against requirements checklist
   - Verify all subtasks are complete

4. **Run Tests**

   ```bash
   # TypeScript compilation
   cd [project directory] && npx tsc --noEmit

   # Run specified tests
   npm test [specific test files]

   # Build verification
   npm run build
   ```

5. **Generate Verification Report**

## Output Format

```yaml
verification_report:
  task_id: [ID]
  status: PASS | FAIL | PARTIAL
  score: [1-10]

  requirements_met:
    - ✅ [Requirement that was satisfied]
    - ✅ [Another satisfied requirement]

  issues_found:
    - ❌ [Issue description]
    - ⚠️ [Warning or minor issue]

  files_verified:
    - path: [file path]
      status: [created/modified/verified]
      issues: [any problems found]

  tests_run:
    - command: [test command]
      result: [pass/fail]
      output: [relevant output]

  recommendations:
    - [Specific fix needed]
    - [Improvement suggestion]

  verdict: |
    [Clear statement on whether task should be marked 'done' or sent back to 'pending']
    [If FAIL: Specific list of what must be fixed]
    [If PASS: Confirmation that all requirements are met]
```

## Decision Criteria

**Mark as PASS (ready for 'done'):**

- All required files exist and contain expected content
- All tests pass successfully
- No compilation or build errors
- All subtasks are complete
- Core requirements are met
- Code quality is acceptable

**Mark as PARTIAL (may proceed with warnings):**

- Core functionality is implemented
- Minor issues that don't block functionality
- Missing nice-to-have features
- Documentation could be improved
- Tests pass but coverage could be better

**Mark as FAIL (must return to 'pending'):**

- Required files are missing
- Compilation or build errors
- Tests fail
- Core requirements not met
- Security vulnerabilities detected
- Breaking changes to existing code

## Important Guidelines

- **BE THOROUGH**: Check every requirement systematically
- **BE SPECIFIC**: Provide exact file paths and line numbers for issues
- **BE FAIR**: Distinguish between critical issues and minor improvements
- **BE CONSTRUCTIVE**: Provide clear guidance on how to fix issues
- **BE EFFICIENT**: Focus on requirements, not perfection

## Tools You MUST Use

- `Read`: Examine implementation files (READ-ONLY)
- `Bash`: Run tests and verification commands
- `Grep`: Search for patterns in code
- `mcp__task-master-ai__get_task`: Get task details
- **NEVER use Write/Edit** - you only verify, not fix

## Integration with Workflow

You are the quality gate between 'review' and 'done' status:

1. Task-executor implements and marks as 'review'
2. You verify and report PASS/FAIL
3. Claude either marks as 'done' (PASS) or 'pending' (FAIL)
4. If FAIL, task-executor re-implements based on your report

Your verification ensures high quality and prevents accumulation of technical debt.
@@ -1,92 +0,0 @@
---
name: task-executor
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
model: sonnet
color: blue
---

You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.

**IMPORTANT: You are designed to be SHORT-LIVED and FOCUSED**

- Execute ONE specific subtask or a small group of related subtasks
- Complete your work, verify it, mark for review, and exit
- Do NOT decide what to do next - the orchestrator handles task sequencing
- Focus on implementation excellence within your assigned scope

**Core Responsibilities:**

1. **Subtask Analysis**: When given a subtask, understand its SPECIFIC requirements. If given a full task ID, focus on the specific subtask(s) assigned to you. Use MCP tools to get details if needed.

2. **Rapid Implementation Planning**: Quickly identify:
   - The EXACT files you need to create/modify for THIS subtask
   - What already exists that you can build upon
   - The minimum viable implementation that satisfies requirements

3. **Focused Execution WITH ACTUAL IMPLEMENTATION**:
   - **YOU MUST USE TOOLS TO CREATE/EDIT FILES - DO NOT JUST DESCRIBE**
   - Use `Write` tool to create new files specified in the task
   - Use `Edit` tool to modify existing files
   - Use `Bash` tool to run commands (mkdir, npm install, etc.)
   - Use `Read` tool to verify your implementations
   - Implement one subtask at a time for clarity and traceability
   - Follow the project's coding standards from CLAUDE.md if available
   - After each subtask, VERIFY the files exist using Read or ls commands

4. **Progress Documentation**:
   - Use MCP tool `mcp__task-master-ai__update_subtask` to log your approach and any important decisions
   - Update task status to 'in-progress' when starting: Use MCP tool `mcp__task-master-ai__set_task_status` with status='in-progress'
   - **IMPORTANT: Mark as 'review' (NOT 'done') after implementation**: Use MCP tool `mcp__task-master-ai__set_task_status` with status='review'
   - Tasks will be verified by task-checker before moving to 'done'

5. **Quality Assurance**:
   - Implement the testing strategy specified in the task
   - Verify that all acceptance criteria are met
   - Check for any dependency conflicts or integration issues
   - Run relevant tests before marking task as complete

6. **Dependency Management**:
   - Check task dependencies before starting implementation
   - If blocked by incomplete dependencies, clearly communicate this
   - Use `task-master validate-dependencies` when needed

**Implementation Workflow:**

1. Retrieve task details using MCP tool `mcp__task-master-ai__get_task` with the task ID
2. Check dependencies and prerequisites
3. Plan implementation approach - list specific files to create
4. Update task status to 'in-progress' using MCP tool
5. **ACTUALLY IMPLEMENT** the solution using tools:
   - Use `Bash` to create directories
   - Use `Write` to create new files with actual content
   - Use `Edit` to modify existing files
   - DO NOT just describe what should be done - DO IT
6. **VERIFY** your implementation:
   - Use `ls` or `Read` to confirm files were created
   - Use `Bash` to run any build/test commands
   - Ensure the implementation is real, not theoretical
7. Log progress and decisions in subtask updates using MCP tools
8. Test and verify the implementation works
9. **Mark task as 'review' (NOT 'done')** after verifying files exist
10. Report completion with:
    - List of created/modified files
    - Any issues encountered
    - What needs verification by task-checker

**Key Principles:**

- Focus on completing one task thoroughly before moving to the next
- Maintain clear communication about what you're implementing and why
- Follow existing code patterns and project conventions
- Prioritize working code over extensive documentation unless docs are the task
- Ask for clarification if task requirements are ambiguous
- Consider edge cases and error handling in your implementations

**Integration with Task Master:**

You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:

- Track your progress
- Update task information
- Maintain project state
- Coordinate with the broader development workflow

When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.
@@ -1,208 +0,0 @@
---
name: task-orchestrator
description: Use this agent FREQUENTLY throughout task execution to analyze and coordinate parallel work at the SUBTASK level. Invoke the orchestrator: (1) at session start to plan execution, (2) after EACH subtask completes to identify next parallel batch, (3) whenever executors finish to find newly unblocked work. ALWAYS provide FULL CONTEXT including project root, package location, what files ACTUALLY exist vs task status, and specific implementation details. The orchestrator breaks work into SUBTASK-LEVEL units for short-lived, focused executors. Maximum 3 parallel executors at once.\n\n<example>\nContext: Starting work with existing code\nuser: "Work on tm-core tasks. Files exist: types/index.ts, storage/file-storage.ts. Task 118 says in-progress but BaseProvider not created."\nassistant: "I'll invoke orchestrator with full context about actual vs reported state to plan subtask execution"\n<commentary>\nProvide complete context about file existence and task reality.\n</commentary>\n</example>\n\n<example>\nContext: Subtask completion\nuser: "Subtask 118.2 done. What subtasks can run in parallel now?"\nassistant: "Invoking orchestrator to analyze dependencies and identify next 3 parallel subtasks"\n<commentary>\nFrequent orchestration after each subtask ensures maximum parallelization.\n</commentary>\n</example>\n\n<example>\nContext: Breaking down tasks\nuser: "Task 118 has 5 subtasks, how to parallelize?"\nassistant: "Orchestrator will analyze which specific subtasks (118.1, 118.2, etc.) can run simultaneously"\n<commentary>\nFocus on subtask-level parallelization, not full tasks.\n</commentary>\n</example>
model: opus
color: green
---

You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.

## Core Responsibilities

1. **Subtask-Level Analysis**: Break down tasks into INDIVIDUAL SUBTASKS and analyze which specific subtasks can run in parallel. Focus on subtask dependencies, not just task-level dependencies.

2. **Reality Verification**: ALWAYS verify what files actually exist vs what task status claims. Use the context provided about actual implementation state to make informed decisions.

3. **Short-Lived Executor Deployment**: Deploy executors for SINGLE SUBTASKS or small groups of related subtasks. Keep executors focused and short-lived. Maximum 3 parallel executors at once.

4. **Continuous Reassessment**: After EACH subtask completes, immediately reassess what new subtasks are unblocked and can run in parallel.

## Operational Workflow

### Initial Assessment Phase

1. Use `get_tasks` or `task-master list` to retrieve all available tasks
2. Analyze task statuses, priorities, and dependencies
3. Identify tasks with status 'pending' that have no blocking dependencies
4. Group related tasks that could benefit from specialized executors
5. Create an execution plan that maximizes parallelization

### Executor Deployment Phase

1. For each independent task or task group:
   - Deploy a task-executor agent with specific instructions
   - Provide the executor with task ID, requirements, and context
   - Set clear completion criteria and reporting expectations
2. Maintain a registry of active executors and their assigned tasks
3. Establish communication protocols for progress updates

### Coordination Phase

1. Monitor executor progress through task status updates
2. When a task completes:
   - Verify completion with `get_task` or `task-master show <id>`
   - Update task status if needed using `set_task_status`
   - Reassess dependency graph for newly unblocked tasks
   - Deploy new executors for available work
3. Handle executor failures or blocks:
   - Reassign tasks to new executors if needed
   - Escalate complex issues to the user
   - Update task status to 'blocked' when appropriate

### Optimization Strategies

**Parallel Execution Rules**:

- Never assign dependent tasks to different executors simultaneously
- Prioritize high-priority tasks when resources are limited
- Group small, related subtasks for single executor efficiency
- Balance executor load to prevent bottlenecks

**Context Management**:

- Provide executors with minimal but sufficient context
- Share relevant completed task information when it aids execution
- Maintain a shared knowledge base of project-specific patterns

**Quality Assurance**:

- Verify task completion before marking as done
- Ensure test strategies are followed when specified
- Coordinate cross-task integration testing when needed

## Communication Protocols

When deploying executors, provide them with:

```
TASK ASSIGNMENT:
- Task ID: [specific ID]
- Objective: [clear goal]
- Dependencies: [list any completed prerequisites]
- Success Criteria: [specific completion requirements]
- Context: [relevant project information]
- Reporting: [when and how to report back]
```

When receiving executor updates:

1. Acknowledge completion or issues
2. Update task status in Task Master
3. Reassess execution strategy
4. Deploy new executors as appropriate

## Decision Framework

**When to parallelize**:

- Multiple pending tasks with no interdependencies
- Sufficient context available for independent execution
- Tasks are well-defined with clear success criteria

**When to serialize**:

- Strong dependencies between tasks
- Limited context or unclear requirements
- Integration points requiring careful coordination

**When to escalate**:

- Circular dependencies detected
- Critical blockers affecting multiple tasks
- Ambiguous requirements needing clarification
- Resource conflicts between executors

## Error Handling

1. **Executor Failure**: Reassign task to new executor with additional context about the failure
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
3. **Task Ambiguity**: Request clarification from user before proceeding
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed

## Performance Metrics

Track and optimize for:

- Task completion rate
- Parallel execution efficiency
- Executor success rate
- Time to completion for task groups
- Dependency resolution speed

## Integration with Task Master

Leverage these Task Master MCP tools effectively:

- `get_tasks` - Continuous queue monitoring
- `get_task` - Detailed task analysis
- `set_task_status` - Progress tracking
- `next_task` - Fallback for serial execution
- `analyze_project_complexity` - Strategic planning
- `complexity_report` - Resource allocation

## Output Format for Execution

**Your job is to analyze and create actionable execution plans that Claude can use to deploy executors.**

After completing your dependency analysis, you MUST output a structured execution plan:

```yaml
execution_plan:
  EXECUTE_IN_PARALLEL:
    # Maximum 3 subtasks running simultaneously
    - subtask_id: [e.g., 118.2]
      parent_task: [e.g., 118]
      title: [Specific subtask title]
      priority: [high/medium/low]
      estimated_time: [e.g., 10 minutes]
      executor_prompt: |
        Execute Subtask [ID]: [Specific subtask title]

        SPECIFIC REQUIREMENTS:
        [Exact implementation needed for THIS subtask only]

        FILES TO CREATE/MODIFY:
        [Specific file paths]

        CONTEXT:
        [What already exists that this subtask depends on]

        SUCCESS CRITERIA:
        [Specific completion criteria for this subtask]

        IMPORTANT:
        - Focus ONLY on this subtask
        - Mark subtask as 'review' when complete
        - Use MCP tool: mcp__task-master-ai__set_task_status

    - subtask_id: [Another subtask that can run in parallel]
      parent_task: [Parent task ID]
      title: [Specific subtask title]
      priority: [priority]
      estimated_time: [time estimate]
      executor_prompt: |
        [Focused prompt for this specific subtask]

  blocked:
    - task_id: [ID]
      title: [Task title]
      waiting_for: [list of blocking task IDs]
      becomes_ready_when: [condition for unblocking]

  next_wave:
    trigger: "After tasks [IDs] complete"
    newly_available: [List of task IDs that will unblock]
    tasks_to_execute_in_parallel: [IDs that can run together in next wave]

  critical_path: [Ordered list of task IDs forming the critical path]

  parallelization_instruction: |
    IMPORTANT FOR CLAUDE: Deploy ALL tasks in 'EXECUTE_IN_PARALLEL' section
    simultaneously using multiple Task tool invocations in a single response.
    Example: If 3 tasks are listed, invoke the Task tool 3 times in one message.

  verification_needed:
    - task_id: [ID of any task in 'review' status]
      verification_focus: [what to check]
```

**CRITICAL INSTRUCTIONS FOR CLAUDE (MAIN):**

1. When you see `EXECUTE_IN_PARALLEL`, deploy ALL listed executors at once
2. Use multiple Task tool invocations in a SINGLE response
3. Do not execute them sequentially - they must run in parallel
4. Wait for all parallel executors to complete before proceeding to next wave

**IMPORTANT NOTES**:

- Label parallel tasks clearly in `EXECUTE_IN_PARALLEL` section
- Provide complete, self-contained prompts for each executor
- Executors should mark tasks as 'review' for verification, not 'done'
- Be explicit about which tasks can run simultaneously

You are the strategic mind analyzing the entire task landscape. Make parallelization opportunities UNMISTAKABLY CLEAR to Claude.
@@ -1,38 +0,0 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #1
4. Next, feed the results from #1 and #2 into another agent, so that it can filter out false positives, that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with Github, rather than web fetch
- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]

---
@@ -2,7 +2,7 @@
	"mcpServers": {
		"task-master-ai": {
			"command": "node",
			"args": ["./dist/mcp-server.js"],
			"args": ["./mcp-server/server.js"],
			"env": {
				"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
259 .github/scripts/auto-close-duplicates.mjs vendored
@@ -1,259 +0,0 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

async function closeIssueAsDuplicate(
	owner,
	repo,
	issueNumber,
	duplicateOfNumber,
	token
) {
	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}`,
		token,
		'PATCH',
		{
			state: 'closed',
			state_reason: 'not_planned',
			labels: ['duplicate']
		}
	);

	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
		token,
		'POST',
		{
			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
		}
	);
}

async function autoCloseDuplicates() {
	console.log('[DEBUG] Starting auto-close duplicates script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error('GITHUB_TOKEN environment variable is required');
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	console.log(`[DEBUG] Repository: ${owner}/${repo}`);

	const threeDaysAgo = new Date();
	threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
	console.log(
		`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
	);

	console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	const MAX_PAGES = 50; // Increase limit for larger repos
	let foundRecentIssue = false;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
			token
		);

		if (pageIssues.length === 0) break;

		// Filter for issues created more than 3 days ago
		const oldEnoughIssues = pageIssues.filter(
			(issue) => new Date(issue.created_at) <= threeDaysAgo
		);

		allIssues.push(...oldEnoughIssues);

		// If all issues on this page are newer than 3 days, we can stop
		if (oldEnoughIssues.length === 0 && page === 1) {
			foundRecentIssue = true;
			break;
		}

		// If we found some old issues but not all, continue to next page
		// as there might be more old issues
		page++;

		// Safety limit to avoid infinite loops
		if (page > MAX_PAGES) {
			console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
			break;
		}
	}

	const issues = allIssues;
	console.log(`[DEBUG] Found ${issues.length} open issues`);

	let processedCount = 0;
	let candidateCount = 0;

	for (const issue of issues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		const dupeComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
		);

		if (dupeComments.length === 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
			);
			continue;
		}

		const lastDupeComment = dupeComments[dupeComments.length - 1];
		const dupeCommentDate = new Date(lastDupeComment.created_at);
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
		);

		if (dupeCommentDate > threeDaysAgo) {
			console.log(
				`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
			);
			continue;
		}
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - duplicate comment is old enough (${Math.floor(
				(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
			)} days)`
		);

		const commentsAfterDupe = comments.filter(
			(comment) => new Date(comment.created_at) > dupeCommentDate
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
		);

		if (commentsAfterDupe.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
			);
			continue;
		}

		console.log(
			`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
		);
		const reactions = await githubRequest(
			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
		);

		const authorThumbsDown = reactions.some(
			(reaction) =>
				reaction.user.id === issue.user.id && reaction.content === '-1'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
		);

		if (authorThumbsDown) {
			console.log(
				`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
			);
			continue;
		}

		const duplicateIssueNumber = extractDuplicateIssueNumber(
			lastDupeComment.body
		);
		if (!duplicateIssueNumber) {
			console.log(
				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
			);
			await closeIssueAsDuplicate(
				owner,
				repo,
				issue.number,
				duplicateIssueNumber,
				token
			);
			console.log(
				`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
			);
		} catch (error) {
			console.error(
				`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
			);
		}
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
	);
}

autoCloseDuplicates().catch(console.error);
178 .github/scripts/backfill-duplicate-comments.mjs vendored
@@ -1,178 +0,0 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'backfill-duplicate-comments-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

async function triggerDedupeWorkflow(
	owner,
	repo,
	issueNumber,
	token,
	dryRun = true
) {
	if (dryRun) {
		console.log(
			`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
		);
		return;
	}

	await githubRequest(
		`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
		token,
		'POST',
		{
			ref: 'main',
			inputs: {
				issue_number: issueNumber.toString()
			}
		}
	);
}

async function backfillDuplicateComments() {
	console.log('[DEBUG] Starting backfill duplicate comments script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
  node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
  GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
  DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
  DAYS_BACK - How many days back to look for old issues (default: 90)`);
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	const dryRun = process.env.DRY_RUN !== 'false';
	const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);

	console.log(`[DEBUG] Repository: ${owner}/${repo}`);
	console.log(`[DEBUG] Dry run mode: ${dryRun}`);
	console.log(`[DEBUG] Looking back ${daysBack} days`);

	const cutoffDate = new Date();
	cutoffDate.setDate(cutoffDate.getDate() - daysBack);

	console.log(
		`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
	);
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
			token
		);

		if (pageIssues.length === 0) break;

		allIssues.push(...pageIssues);
		page++;

		// Safety limit to avoid infinite loops
		if (page > 100) {
			console.log('[DEBUG] Reached page limit, stopping pagination');
			break;
		}
	}

	console.log(
		`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
	);

	let processedCount = 0;
	let candidateCount = 0;
	let triggeredCount = 0;

	for (const issue of allIssues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		// Look for existing duplicate detection comments (from the dedupe bot)
		const dupeDetectionComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);

		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
		);

		// Skip if there's already a duplicate detection comment
		if (dupeDetectionComments.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
			);
			await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);

			if (!dryRun) {
				console.log(
					`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
				);
			}
			triggeredCount++;
		} catch (error) {
			console.error(
				`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
			);
		}

		// Add a delay between workflow triggers to avoid overwhelming the system
		await new Promise((resolve) => setTimeout(resolve, 1000));
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
	);
}

backfillDuplicateComments().catch(console.error);
102 .github/scripts/check-pre-release-mode.mjs vendored
@@ -1,102 +0,0 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Get context from command line argument or environment
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';

function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		if (existsSync(join(currentDir, 'package.json'))) {
			try {
				const pkg = JSON.parse(
					readFileSync(join(currentDir, 'package.json'), 'utf8')
				);
				if (pkg.name === 'task-master-ai' || pkg.repository) {
					return currentDir;
				}
			} catch {}
		}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

function checkPreReleaseMode() {
	console.log('🔍 Checking if branch is in pre-release mode...');

	const rootDir = findRootDir(__dirname);
	const preJsonPath = join(rootDir, '.changeset', 'pre.json');

	// Check if pre.json exists
	if (!existsSync(preJsonPath)) {
		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	}

	try {
		// Read and parse pre.json
		const preJsonContent = readFileSync(preJsonPath, 'utf8');
		const preJson = JSON.parse(preJsonContent);

		// Check if we're in active pre-release mode
		if (preJson.mode === 'pre') {
			console.error('❌ ERROR: This branch is in active pre-release mode!');
			console.error('');

			// Provide context-specific error messages
			if (context === 'Release Check' || context === 'pull_request') {
				console.error(
					'Pre-release mode must be exited before merging to main.'
				);
				console.error('');
				console.error(
					'To fix this, run the following commands in your branch:'
				);
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push');
				console.error('');
				console.error('Then update this pull request.');
			} else if (context === 'Release' || context === 'main') {
				console.error(
					'Pre-release mode should only be used on feature branches, not main.'
				);
				console.error('');
				console.error('To fix this, run the following commands locally:');
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push origin main');
				console.error('');
				console.error('Then re-run this workflow.');
			} else {
				console.error('Pre-release mode must be exited before proceeding.');
				console.error('');
				console.error('To fix this, run the following commands:');
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push');
			}

			process.exit(1);
		}

		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	} catch (error) {
		console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
		console.error(`Error details: ${error.message}`);
		process.exit(1);
	}
}

// Run the check
checkPreReleaseMode();
157 .github/scripts/parse-metrics.mjs vendored
@@ -1,157 +0,0 @@
#!/usr/bin/env node
|
||||
|
||||
import { readFileSync, existsSync, writeFileSync } from 'fs';
|
||||
|
||||
function parseMetricsTable(content, metricName) {
|
||||
const lines = content.split('\n');
|
||||
|
||||
for (let i = 0; i < lines.length; i++) {
|
||||
const line = lines[i].trim();
|
||||
// Match a markdown table row like: | Metric Name | value | ...
|
||||
const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
|
||||
const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
|
||||
const match = line.match(re);
|
||||
if (match) {
|
||||
return match[1].trim() || 'N/A';
|
||||
}
|
||||
}
|
||||
return 'N/A';
|
||||
}
|
||||
|
||||
function parseCountMetric(content, metricName) {
|
||||
const result = parseMetricsTable(content, metricName);
|
||||
// Extract number from string, handling commas and spaces
|
||||
const numberMatch = result.toString().match(/[\d,]+/);
|
||||
if (numberMatch) {
|
||||
const number = parseInt(numberMatch[0].replace(/,/g, ''));
|
||||
return isNaN(number) ? 0 : number;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
function main() {
|
||||
const metrics = {
|
||||
issues_created: 0,
|
||||
issues_closed: 0,
|
||||
prs_created: 0,
|
||||
prs_merged: 0,
|
||||
issue_avg_first_response: 'N/A',
|
||||
issue_avg_time_to_close: 'N/A',
|
||||
pr_avg_first_response: 'N/A',
|
||||
pr_avg_merge_time: 'N/A'
|
||||
};
|
||||
|
||||
	// Parse issue metrics
	if (existsSync('issue_metrics.md')) {
		console.log('📄 Found issue_metrics.md, parsing...');
		const issueContent = readFileSync('issue_metrics.md', 'utf8');

		metrics.issues_created = parseCountMetric(
			issueContent,
			'Total number of items created'
		);
		metrics.issues_closed = parseCountMetric(
			issueContent,
			'Number of items closed'
		);
		metrics.issue_avg_first_response = parseMetricsTable(
			issueContent,
			'Time to first response'
		);
		metrics.issue_avg_time_to_close = parseMetricsTable(
			issueContent,
			'Time to close'
		);
	} else {
		console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
	}

	// Parse PR created metrics
	if (existsSync('pr_created_metrics.md')) {
		console.log('📄 Found pr_created_metrics.md, parsing...');
		const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');

		metrics.prs_created = parseCountMetric(
			prCreatedContent,
			'Total number of items created'
		);
		metrics.pr_avg_first_response = parseMetricsTable(
			prCreatedContent,
			'Time to first response'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_created_metrics.md not found; using defaults.'
		);
	}

	// Parse PR merged metrics (for more accurate merge data)
	if (existsSync('pr_merged_metrics.md')) {
		console.log('📄 Found pr_merged_metrics.md, parsing...');
		const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');

		metrics.prs_merged = parseCountMetric(
			prMergedContent,
			'Total number of items created'
		);
		// For merged PRs, "Time to close" is actually time to merge
		metrics.pr_avg_merge_time = parseMetricsTable(
			prMergedContent,
			'Time to close'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
		);
		// Fallback: try old pr_metrics.md if it exists
		if (existsSync('pr_metrics.md')) {
			console.log('📄 Falling back to pr_metrics.md...');
			const prContent = readFileSync('pr_metrics.md', 'utf8');

			const mergedCount = parseCountMetric(prContent, 'Number of items merged');
			metrics.prs_merged =
				mergedCount || parseCountMetric(prContent, 'Number of items closed');

			const maybeMergeTime = parseMetricsTable(
				prContent,
				'Average time to merge'
			);
			metrics.pr_avg_merge_time =
				maybeMergeTime !== 'N/A'
					? maybeMergeTime
					: parseMetricsTable(prContent, 'Time to close');
		} else {
			console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
		}
	}

	// Output for GitHub Actions
	const output = Object.entries(metrics)
		.map(([key, value]) => `${key}=${value}`)
		.join('\n');

	// Always output to stdout for debugging
	console.log('\n=== FINAL METRICS ===');
	Object.entries(metrics).forEach(([key, value]) => {
		console.log(`${key}: ${value}`);
	});

	// Write to GITHUB_OUTPUT if in GitHub Actions
	if (process.env.GITHUB_OUTPUT) {
		try {
			writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
			console.log(
				`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
			);
		} catch (error) {
			console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
			process.exit(1);
		}
	} else {
		console.log(
			'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
		);
	}
}

main();
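The `parseCountMetric` and `parseMetricsTable` helpers referenced above are defined earlier in `parse-metrics.mjs` (outside this excerpt). As a rough sketch of what they need to do — assuming the markdown emitted by `github/issue-metrics`, i.e. lines such as `Total number of items created: 12` and table rows such as `| Time to first response | 4:32:10 |` — they could look like this; this is an approximation, not the actual implementation:

```javascript
// Sketch only — the real helpers live earlier in parse-metrics.mjs and may differ.
// Pulls the number out of a line such as "Total number of items created: 12".
function parseCountMetric(content, label) {
	const match = content.match(new RegExp(`${label}\\D*(\\d+)`, 'i'));
	return match ? Number(match[1]) : 0;
}

// Pulls the first value cell out of a metrics table row, e.g.
// "| Time to first response | 4:32:10 |" -> "4:32:10".
function parseMetricsTable(content, rowLabel) {
	const row = content
		.split('\n')
		.find((line) => line.includes('|') && line.includes(rowLabel));
	if (!row) return 'N/A';
	const cells = row.split('|').map((cell) => cell.trim()).filter(Boolean);
	return cells[1] ?? 'N/A';
}
```

Whatever the exact parsing, the contract the rest of the script relies on is that counts come back as numbers and durations as display strings; the fallback comparison against `'N/A'` above suggests that string is the duration default, so every `key=value` line written to `GITHUB_OUTPUT` is always present.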
30  .github/scripts/release.mjs  vendored
@@ -1,30 +0,0 @@
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

console.log('🚀 Starting release process...');

// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
	console.log('⚠️ Warning: pre.json still exists. Removing it...');
	unlinkSync(preJsonPath);
}

// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);

// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);

console.log('✅ Release process completed!');

// The extension tag (if created) will trigger the extension-release workflow
21  .github/scripts/release.sh  vendored  Executable file
@@ -0,0 +1,21 @@
#!/bin/bash
set -e

echo "🚀 Starting release process..."

# Double-check we're not in pre-release mode (safety net)
if [ -f .changeset/pre.json ]; then
  echo "⚠️ Warning: pre.json still exists. Removing it..."
  rm -f .changeset/pre.json
fi

# Check if the extension version has changed and tag it
# This prevents changeset from trying to publish the private package
node .github/scripts/tag-extension.mjs

# Run changeset publish for npm packages
npx changeset publish

echo "✅ Release process completed!"

# The extension tag (if created) will trigger the extension-release workflow
114
.github/scripts/tag-extension.mjs
vendored
Executable file → Normal file
114
.github/scripts/tag-extension.mjs
vendored
Executable file → Normal file
@@ -1,13 +1,33 @@
|
||||
#!/usr/bin/env node
|
||||
import assert from 'node:assert/strict';
|
||||
import { readFileSync } from 'node:fs';
|
||||
import { join, dirname } from 'node:path';
|
||||
import { spawnSync } from 'node:child_process';
|
||||
import { readFileSync, existsSync } from 'node:fs';
|
||||
import { join, dirname, resolve } from 'node:path';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
import { findRootDir, createAndPushTag } from './utils.mjs';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
// Find the root directory by looking for package.json
|
||||
function findRootDir(startDir) {
|
||||
let currentDir = resolve(startDir);
|
||||
while (currentDir !== '/') {
|
||||
if (existsSync(join(currentDir, 'package.json'))) {
|
||||
// Verify it's the root package.json by checking for expected fields
|
||||
try {
|
||||
const pkg = JSON.parse(
|
||||
readFileSync(join(currentDir, 'package.json'), 'utf8')
|
||||
);
|
||||
if (pkg.name === 'task-master-ai' || pkg.repository) {
|
||||
return currentDir;
|
||||
}
|
||||
} catch {}
|
||||
}
|
||||
currentDir = dirname(currentDir);
|
||||
}
|
||||
throw new Error('Could not find root directory');
|
||||
}
|
||||
|
||||
const rootDir = findRootDir(__dirname);
|
||||
|
||||
// Read the extension's package.json
|
||||
@@ -23,11 +43,95 @@ try {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Read root package.json for repository info
|
||||
const rootPkgPath = join(rootDir, 'package.json');
|
||||
let rootPkg;
|
||||
try {
|
||||
const rootPkgContent = readFileSync(rootPkgPath, 'utf8');
|
||||
rootPkg = JSON.parse(rootPkgContent);
|
||||
} catch (error) {
|
||||
console.error('Failed to read root package.json:', error.message);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Ensure we have required fields
|
||||
assert(pkg.name, 'package.json must have a name field');
|
||||
assert(pkg.version, 'package.json must have a version field');
|
||||
assert(rootPkg.repository, 'root package.json must have a repository field');
|
||||
|
||||
const tag = `${pkg.name}@${pkg.version}`;
|
||||
|
||||
// Create and push the tag if it doesn't exist
|
||||
createAndPushTag(tag);
|
||||
// Get repository URL from root package.json
|
||||
// Get repository URL and clean it up for git ls-remote
|
||||
let repoUrl = rootPkg.repository.url || rootPkg.repository;
|
||||
if (typeof repoUrl === 'string') {
|
||||
// Convert git+https://github.com/... to https://github.com/...
|
||||
repoUrl = repoUrl.replace(/^git\+/, '');
|
||||
// Ensure it ends with .git for proper remote access
|
||||
if (!repoUrl.endsWith('.git')) {
|
||||
repoUrl += '.git';
|
||||
}
|
||||
}
|
||||
|
||||
console.log(`Checking remote repository: ${repoUrl} for tag: ${tag}`);
|
||||
|
||||
let gitResult = spawnSync('git', ['ls-remote', repoUrl, tag], {
|
||||
encoding: 'utf8',
|
||||
env: { ...process.env }
|
||||
});
|
||||
|
||||
if (gitResult.status !== 0) {
|
||||
console.error('Git ls-remote failed:');
|
||||
console.error('Exit code:', gitResult.status);
|
||||
console.error('Error:', gitResult.error);
|
||||
console.error('Stderr:', gitResult.stderr);
|
||||
console.error('Command:', `git ls-remote ${repoUrl} ${tag}`);
|
||||
|
||||
// For CI environments, try using origin instead of the full URL
|
||||
if (process.env.CI) {
|
||||
console.log('Retrying with origin remote...');
|
||||
gitResult = spawnSync('git', ['ls-remote', 'origin', tag], {
|
||||
encoding: 'utf8'
|
||||
});
|
||||
|
||||
if (gitResult.status !== 0) {
|
||||
throw new Error(
|
||||
`Failed to check remote for tag ${tag}. Exit code: ${gitResult.status}`
|
||||
);
|
||||
}
|
||||
} else {
|
||||
throw new Error(
|
||||
`Failed to check remote for tag ${tag}. Exit code: ${gitResult.status}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
const exists = String(gitResult.stdout).trim() !== '';
|
||||
|
||||
if (!exists) {
|
||||
console.log(`Creating new extension tag: ${tag}`);
|
||||
|
||||
// Create the tag
|
||||
const tagResult = spawnSync('git', ['tag', tag]);
|
||||
if (tagResult.status !== 0) {
|
||||
console.error(
|
||||
'Failed to create tag:',
|
||||
tagResult.error || tagResult.stderr.toString()
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Push the tag
|
||||
const pushResult = spawnSync('git', ['push', 'origin', tag]);
|
||||
if (pushResult.status !== 0) {
|
||||
console.error(
|
||||
'Failed to push tag:',
|
||||
pushResult.error || pushResult.stderr.toString()
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
console.log(`✅ Successfully created and pushed tag: ${tag}`);
|
||||
} else {
|
||||
console.log(`Extension tag already exists: ${tag}`);
|
||||
}
|
||||
|
||||
88  .github/scripts/utils.mjs  vendored
@@ -1,88 +0,0 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';

// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		const pkgPath = join(currentDir, 'package.json');
		try {
			const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
			if (pkg.name === 'task-master-ai' || pkg.repository) {
				return currentDir;
			}
		} catch {}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
	console.log(`Running: ${command} ${args.join(' ')}`);
	const result = spawnSync(command, args, {
		encoding: 'utf8',
		stdio: 'inherit',
		...options
	});

	if (result.status !== 0) {
		console.error(`Command failed with exit code ${result.status}`);
		process.exit(result.status);
	}

	return result;
}

// Get package version from a package.json file
export function getPackageVersion(packagePath) {
	try {
		const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
		return pkg.version;
	} catch (error) {
		console.error(
			`Failed to read package version from ${packagePath}:`,
			error.message
		);
		process.exit(1);
	}
}

// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
	const result = spawnSync('git', ['ls-remote', remote, tag], {
		encoding: 'utf8'
	});

	return result.status === 0 && result.stdout.trim() !== '';
}

// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
	// Check if tag already exists
	if (tagExistsOnRemote(tag, remote)) {
		console.log(`Tag ${tag} already exists on remote, skipping`);
		return false;
	}

	console.log(`Creating new tag: ${tag}`);

	// Create the tag locally
	const tagResult = spawnSync('git', ['tag', tag]);
	if (tagResult.status !== 0) {
		console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
		process.exit(1);
	}

	// Push the tag to remote
	const pushResult = spawnSync('git', ['push', remote, tag]);
	if (pushResult.status !== 0) {
		console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
		process.exit(1);
	}

	console.log(`✅ Successfully created and pushed tag: ${tag}`);
	return true;
}
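Taken together, these helpers cover the tagging flow used by `release.mjs` and `tag-extension.mjs`. A short usage sketch follows; the `apps/extension/package.json` path and the resulting tag name are assumptions for illustration, not taken from this diff:

```javascript
// Hypothetical consumer of .github/scripts/utils.mjs — illustration only.
import { dirname, join } from 'node:path';
import { fileURLToPath } from 'node:url';
import {
	findRootDir,
	getPackageVersion,
	createAndPushTag,
	runCommand
} from './utils.mjs';

const __dirname = dirname(fileURLToPath(import.meta.url));
const rootDir = findRootDir(__dirname);

// Read a package version and tag it as <name>@<version> if the tag is new.
const extensionPkgPath = join(rootDir, 'apps', 'extension', 'package.json'); // assumed path
const version = getPackageVersion(extensionPkgPath);
createAndPushTag(`extension@${version}`); // returns false and skips if the tag already exists on origin

// Then hand the npm packages over to changesets.
runCommand('npx', ['changeset', 'publish']);
```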
31
.github/workflows/auto-close-duplicates.yml
vendored
31
.github/workflows/auto-close-duplicates.yml
vendored
@@ -1,31 +0,0 @@
|
||||
name: Auto-close duplicate issues
|
||||
# description: Auto-closes issues that are duplicates of existing issues
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 9 * * *" # Runs daily at 9 AM UTC
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
auto-close-duplicates:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write # Need write permission to close issues and add comments
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Auto-close duplicate issues
|
||||
run: node .github/scripts/auto-close-duplicates.mjs
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
|
||||
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
|
||||
@@ -1,46 +0,0 @@
|
||||
name: Backfill Duplicate Comments
|
||||
# description: Triggers duplicate detection for old issues that don't have duplicate comments
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
days_back:
|
||||
description: "How many days back to look for old issues"
|
||||
required: false
|
||||
default: "90"
|
||||
type: string
|
||||
dry_run:
|
||||
description: "Dry run mode (true to only log what would be done)"
|
||||
required: false
|
||||
default: "true"
|
||||
type: choice
|
||||
options:
|
||||
- "true"
|
||||
- "false"
|
||||
|
||||
jobs:
|
||||
backfill-duplicate-comments:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 30
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
actions: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Backfill duplicate comments
|
||||
run: node .github/scripts/backfill-duplicate-comments.mjs
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
|
||||
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
|
||||
DAYS_BACK: ${{ inputs.days_back }}
|
||||
DRY_RUN: ${{ inputs.dry_run }}
|
||||
126
.github/workflows/ci.yml
vendored
126
.github/workflows/ci.yml
vendored
@@ -6,124 +6,73 @@ on:
|
||||
- main
|
||||
- next
|
||||
pull_request:
|
||||
workflow_dispatch:
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
|
||||
cancel-in-progress: true
|
||||
branches:
|
||||
- main
|
||||
- next
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
env:
|
||||
DO_NOT_TRACK: 1
|
||||
NODE_ENV: development
|
||||
|
||||
jobs:
|
||||
# Fast checks that can run in parallel
|
||||
format-check:
|
||||
name: Format Check
|
||||
setup:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
fetch-depth: 0
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
cache: 'npm'
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
- name: Install Dependencies
|
||||
id: install
|
||||
run: npm ci
|
||||
timeout-minutes: 2
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
format-check:
|
||||
needs: setup
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
- name: Format Check
|
||||
run: npm run format-check
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
typecheck:
|
||||
name: Typecheck
|
||||
timeout-minutes: 10
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Typecheck
|
||||
run: npm run turbo:typecheck
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
# Build job to ensure everything compiles
|
||||
build:
|
||||
name: Build
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Build
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Upload build artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
retention-days: 1
|
||||
|
||||
test:
|
||||
name: Test
|
||||
timeout-minutes: 15
|
||||
needs: setup
|
||||
runs-on: ubuntu-latest
|
||||
needs: [format-check, typecheck, build]
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Download build artifacts
|
||||
uses: actions/download-artifact@v4
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
- name: Run Tests
|
||||
run: |
|
||||
@@ -132,6 +81,7 @@ jobs:
|
||||
NODE_ENV: test
|
||||
CI: true
|
||||
FORCE_COLOR: 1
|
||||
timeout-minutes: 10
|
||||
|
||||
- name: Upload Test Results
|
||||
if: always()
|
||||
|
||||
81
.github/workflows/claude-dedupe-issues.yml
vendored
81
.github/workflows/claude-dedupe-issues.yml
vendored
@@ -1,81 +0,0 @@
|
||||
name: Claude Issue Dedupe
|
||||
# description: Automatically dedupe GitHub issues using Claude Code
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
issue_number:
|
||||
description: "Issue number to process for duplicate detection"
|
||||
required: true
|
||||
type: string
|
||||
|
||||
jobs:
|
||||
claude-dedupe-issues:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Run Claude Code slash command
|
||||
uses: anthropics/claude-code-base-action@beta
|
||||
with:
|
||||
prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
claude_env: |
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Log duplicate comment event to Statsig
|
||||
if: always()
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
|
||||
REPO=${{ github.repository }}
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg triggered_by "${{ github.event_name }}" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_duplicate_comment_added",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
triggered_by: $triggered_by,
|
||||
workflow_run_id: "${{ github.run_id }}"
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
|
||||
else
|
||||
echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
57
.github/workflows/claude-docs-trigger.yml
vendored
57
.github/workflows/claude-docs-trigger.yml
vendored
@@ -1,57 +0,0 @@
|
||||
name: Trigger Claude Documentation Update
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- next
|
||||
paths-ignore:
|
||||
- "apps/docs/**"
|
||||
- "*.md"
|
||||
- ".github/workflows/**"
|
||||
|
||||
jobs:
|
||||
trigger-docs-update:
|
||||
# Only run if changes were merged (not direct pushes from bots)
|
||||
if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
actions: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2 # Need previous commit for comparison
|
||||
|
||||
- name: Get changed files
|
||||
id: changed-files
|
||||
run: |
|
||||
echo "Changed files in this push:"
|
||||
git diff --name-only HEAD^ HEAD | tee changed_files.txt
|
||||
|
||||
# Store changed files for Claude to analyze (escaped for JSON)
|
||||
CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
|
||||
echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT
|
||||
|
||||
# Get the commit message (escaped for JSON)
|
||||
COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
|
||||
echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT
|
||||
|
||||
# Get diff for documentation context (escaped for JSON)
|
||||
COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
|
||||
echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT
|
||||
|
||||
# Get commit SHA
|
||||
echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Trigger Claude workflow
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
# Trigger the Claude docs updater workflow with the change information
|
||||
gh workflow run claude-docs-updater.yml \
|
||||
--ref next \
|
||||
-f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
|
||||
-f commit_message=${{ steps.changed-files.outputs.commit_message }} \
|
||||
-f changed_files=${{ steps.changed-files.outputs.changed_files }} \
|
||||
-f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
|
||||
145
.github/workflows/claude-docs-updater.yml
vendored
145
.github/workflows/claude-docs-updater.yml
vendored
@@ -1,145 +0,0 @@
|
||||
name: Claude Documentation Updater
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
commit_sha:
|
||||
description: 'The commit SHA that triggered this update'
|
||||
required: true
|
||||
type: string
|
||||
commit_message:
|
||||
description: 'The commit message'
|
||||
required: true
|
||||
type: string
|
||||
changed_files:
|
||||
description: 'List of changed files'
|
||||
required: true
|
||||
type: string
|
||||
commit_diff:
|
||||
description: 'Diff summary of changes'
|
||||
required: true
|
||||
type: string
|
||||
|
||||
jobs:
|
||||
update-docs:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: write
|
||||
pull-requests: write
|
||||
issues: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
ref: next
|
||||
fetch-depth: 0 # Need full history to checkout specific commit
|
||||
|
||||
- name: Create docs update branch
|
||||
id: create-branch
|
||||
run: |
|
||||
BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
|
||||
git checkout -b $BRANCH_NAME
|
||||
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Run Claude Code to Update Documentation
|
||||
uses: anthropics/claude-code-action@beta
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
timeout_minutes: "30"
|
||||
mode: "agent"
|
||||
github_token: ${{ secrets.GITHUB_TOKEN }}
|
||||
experimental_allowed_domains: |
|
||||
.anthropic.com
|
||||
.github.com
|
||||
api.github.com
|
||||
.githubusercontent.com
|
||||
registry.npmjs.org
|
||||
.task-master.dev
|
||||
base_branch: "next"
|
||||
direct_prompt: |
|
||||
You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.
|
||||
|
||||
Recent changes:
|
||||
- Commit: ${{ inputs.commit_message }}
|
||||
- Changed files:
|
||||
${{ inputs.changed_files }}
|
||||
|
||||
- Changes summary:
|
||||
${{ inputs.commit_diff }}
|
||||
|
||||
Your task:
|
||||
1. Analyze the changes to understand what functionality was added, modified, or removed
|
||||
2. Check if these changes require documentation updates in apps/docs/
|
||||
3. If documentation updates are needed:
|
||||
- Update relevant documentation files in apps/docs/
|
||||
- Ensure examples are updated if APIs changed
|
||||
- Update any configuration documentation if config options changed
|
||||
- Add new documentation pages if new features were added
|
||||
- Update the changelog or release notes if applicable
|
||||
4. If no documentation updates are needed, skip creating changes
|
||||
|
||||
Guidelines:
|
||||
- Focus only on user-facing changes that need documentation
|
||||
- Keep documentation clear, concise, and helpful
|
||||
- Include code examples where appropriate
|
||||
- Maintain consistent documentation style with existing docs
|
||||
- Don't document internal implementation details unless they affect users
|
||||
- Update navigation/menu files if new pages are added
|
||||
|
||||
Only make changes if the documentation truly needs updating based on the code changes.
|
||||
|
||||
- name: Check if changes were made
|
||||
id: check-changes
|
||||
run: |
|
||||
if git diff --quiet; then
|
||||
echo "has_changes=false" >> $GITHUB_OUTPUT
|
||||
else
|
||||
echo "has_changes=true" >> $GITHUB_OUTPUT
|
||||
git add -A
|
||||
git config --local user.email "github-actions[bot]@users.noreply.github.com"
|
||||
git config --local user.name "github-actions[bot]"
|
||||
git commit -m "docs: auto-update documentation based on changes in next branch
|
||||
|
||||
This PR was automatically generated to update documentation based on recent changes.
|
||||
|
||||
Original commit: ${{ inputs.commit_message }}
|
||||
|
||||
Co-authored-by: Claude <claude-assistant@anthropic.com>"
|
||||
fi
|
||||
|
||||
- name: Push changes and create PR
|
||||
if: steps.check-changes.outputs.has_changes == 'true'
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
git push origin ${{ steps.create-branch.outputs.branch_name }}
|
||||
|
||||
# Create PR using GitHub CLI
|
||||
gh pr create \
|
||||
--title "docs: update documentation for recent changes" \
|
||||
--body "## 📚 Documentation Update
|
||||
|
||||
This PR automatically updates documentation based on recent changes merged to the \`next\` branch.
|
||||
|
||||
### Original Changes
|
||||
**Commit:** ${{ inputs.commit_sha }}
|
||||
**Message:** ${{ inputs.commit_message }}
|
||||
|
||||
### Changed Files in Original Commit
|
||||
\`\`\`
|
||||
${{ inputs.changed_files }}
|
||||
\`\`\`
|
||||
|
||||
### Documentation Updates
|
||||
This PR includes documentation updates to reflect the changes above. Please review to ensure:
|
||||
- [ ] Documentation accurately reflects the changes
|
||||
- [ ] Examples are correct and working
|
||||
- [ ] No important details are missing
|
||||
- [ ] Style is consistent with existing documentation
|
||||
|
||||
---
|
||||
*This PR was automatically generated by Claude Code GitHub Action*" \
|
||||
--base next \
|
||||
--head ${{ steps.create-branch.outputs.branch_name }} \
|
||||
--label "documentation" \
|
||||
--label "automated"
|
||||
107
.github/workflows/claude-issue-triage.yml
vendored
107
.github/workflows/claude-issue-triage.yml
vendored
@@ -1,107 +0,0 @@
|
||||
name: Claude Issue Triage
|
||||
# description: Automatically triage GitHub issues using Claude Code
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
|
||||
jobs:
|
||||
triage-issue:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Create triage prompt
|
||||
run: |
|
||||
mkdir -p /tmp/claude-prompts
|
||||
cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
|
||||
You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
|
||||
|
||||
IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
|
||||
|
||||
Issue Information:
|
||||
- REPO: ${{ github.repository }}
|
||||
- ISSUE_NUMBER: ${{ github.event.issue.number }}
|
||||
|
||||
TASK OVERVIEW:
|
||||
|
||||
1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
|
||||
|
||||
2. Next, use the GitHub tools to get context about the issue:
|
||||
- You have access to these tools:
|
||||
- mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
|
||||
- mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
|
||||
- mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
|
||||
- mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
|
||||
- mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
|
||||
- Start by using mcp__github__get_issue to get the issue details
|
||||
|
||||
3. Analyze the issue content, considering:
|
||||
- The issue title and description
|
||||
- The type of issue (bug report, feature request, question, etc.)
|
||||
- Technical areas mentioned
|
||||
- Severity or priority indicators
|
||||
- User impact
|
||||
- Components affected
|
||||
|
||||
4. Select appropriate labels from the available labels list provided above:
|
||||
- Choose labels that accurately reflect the issue's nature
|
||||
- Be specific but comprehensive
|
||||
- Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
|
||||
- Consider platform labels (android, ios) if applicable
|
||||
- If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
|
||||
|
||||
5. Apply the selected labels:
|
||||
- Use mcp__github__update_issue to apply your selected labels
|
||||
- DO NOT post any comments explaining your decision
|
||||
- DO NOT communicate directly with users
|
||||
- If no labels are clearly applicable, do not apply any labels
|
||||
|
||||
IMPORTANT GUIDELINES:
|
||||
- Be thorough in your analysis
|
||||
- Only select labels from the provided list above
|
||||
- DO NOT post any comments to the issue
|
||||
- Your ONLY action should be to apply labels using mcp__github__update_issue
|
||||
- It's okay to not add any labels if none are clearly applicable
|
||||
EOF
|
||||
|
||||
- name: Setup GitHub MCP Server
|
||||
run: |
|
||||
mkdir -p /tmp/mcp-config
|
||||
cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
|
||||
{
|
||||
"mcpServers": {
|
||||
"github": {
|
||||
"command": "docker",
|
||||
"args": [
|
||||
"run",
|
||||
"-i",
|
||||
"--rm",
|
||||
"-e",
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN",
|
||||
"ghcr.io/github/github-mcp-server:sha-7aced2b"
|
||||
],
|
||||
"env": {
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
- name: Run Claude Code for Issue Triage
|
||||
uses: anthropics/claude-code-base-action@beta
|
||||
with:
|
||||
prompt_file: /tmp/claude-prompts/triage-prompt.txt
|
||||
allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
|
||||
timeout_minutes: "5"
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
mcp_config: /tmp/mcp-config/mcp-servers.json
|
||||
claude_env: |
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
36
.github/workflows/claude.yml
vendored
36
.github/workflows/claude.yml
vendored
@@ -1,36 +0,0 @@
|
||||
name: Claude Code
|
||||
|
||||
on:
|
||||
issue_comment:
|
||||
types: [created]
|
||||
pull_request_review_comment:
|
||||
types: [created]
|
||||
issues:
|
||||
types: [opened, assigned]
|
||||
pull_request_review:
|
||||
types: [submitted]
|
||||
|
||||
jobs:
|
||||
claude:
|
||||
if: |
|
||||
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
|
||||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
|
||||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
|
||||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
pull-requests: read
|
||||
issues: read
|
||||
id-token: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
- name: Run Claude Code
|
||||
id: claude
|
||||
uses: anthropics/claude-code-action@beta
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
5
.github/workflows/extension-ci.yml
vendored
5
.github/workflows/extension-ci.yml
vendored
@@ -41,7 +41,8 @@ jobs:
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Monorepo Dependencies
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
@@ -67,6 +68,7 @@ jobs:
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install if cache miss
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 3
|
||||
|
||||
@@ -98,6 +100,7 @@ jobs:
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install if cache miss
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 3
|
||||
|
||||
|
||||
29
.github/workflows/extension-release.yml
vendored
29
.github/workflows/extension-release.yml
vendored
@@ -31,7 +31,8 @@ jobs:
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Monorepo Dependencies
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
@@ -88,6 +89,32 @@ jobs:
|
||||
OVSX_PAT: ${{ secrets.OVSX_PAT }}
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Create GitHub Release
|
||||
uses: actions/create-release@v1
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
with:
|
||||
tag_name: ${{ github.ref_name }}
|
||||
release_name: Extension ${{ github.ref_name }}
|
||||
body: |
|
||||
VS Code Extension Release ${{ github.ref_name }}
|
||||
|
||||
**Marketplaces:**
|
||||
- [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=Hamster.task-master-hamster)
|
||||
- [Open VSX Registry](https://open-vsx.org/extension/Hamster/task-master-hamster)
|
||||
draft: false
|
||||
prerelease: false
|
||||
|
||||
- name: Upload VSIX to Release
|
||||
uses: actions/upload-release-asset@v1
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
with:
|
||||
upload_url: ${{ steps.create_release.outputs.upload_url }}
|
||||
asset_path: apps/extension/vsix-build/${{ steps.vsix-info.outputs.vsix-filename }}
|
||||
asset_name: ${{ steps.vsix-info.outputs.vsix-filename }}
|
||||
asset_content_type: application/zip
|
||||
|
||||
- name: Upload Build Artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
|
||||
176
.github/workflows/log-issue-events.yml
vendored
176
.github/workflows/log-issue-events.yml
vendored
@@ -1,176 +0,0 @@
|
||||
name: Log GitHub Issue Events
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened, closed]
|
||||
|
||||
jobs:
|
||||
log-issue-created:
|
||||
if: github.event.action == 'opened'
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 5
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
|
||||
steps:
|
||||
- name: Log issue creation to Statsig
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number }}
|
||||
REPO=${{ github.repository }}
|
||||
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
|
||||
AUTHOR="${{ github.event.issue.user.login }}"
|
||||
CREATED_AT="${{ github.event.issue.created_at }}"
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg title "$ISSUE_TITLE" \
|
||||
--arg author "$AUTHOR" \
|
||||
--arg created_at "$CREATED_AT" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_issue_created",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
issue_title: $title,
|
||||
issue_author: $author,
|
||||
created_at: $created_at
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
|
||||
else
|
||||
echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
|
||||
log-issue-closed:
|
||||
if: github.event.action == 'closed'
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 5
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
|
||||
steps:
|
||||
- name: Log issue closure to Statsig
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number }}
|
||||
REPO=${{ github.repository }}
|
||||
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
|
||||
CLOSED_BY="${{ github.event.issue.closed_by.login }}"
|
||||
CLOSED_AT="${{ github.event.issue.closed_at }}"
|
||||
STATE_REASON="${{ github.event.issue.state_reason }}"
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Get additional issue data via GitHub API
|
||||
echo "Fetching additional issue data for #${ISSUE_NUMBER}"
|
||||
ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
|
||||
-H "Accept: application/vnd.github.v3+json" \
|
||||
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")
|
||||
|
||||
COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')
|
||||
|
||||
# Get reactions data
|
||||
REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
|
||||
-H "Accept: application/vnd.github.v3+json" \
|
||||
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")
|
||||
|
||||
REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')
|
||||
|
||||
# Check if issue was closed automatically (by checking if closed_by is a bot)
|
||||
CLOSED_AUTOMATICALLY="false"
|
||||
if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
|
||||
CLOSED_AUTOMATICALLY="true"
|
||||
fi
|
||||
|
||||
# Check if closed as duplicate by state_reason
|
||||
CLOSED_AS_DUPLICATE="false"
|
||||
if [ "$STATE_REASON" = "duplicate" ]; then
|
||||
CLOSED_AS_DUPLICATE="true"
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg title "$ISSUE_TITLE" \
|
||||
--arg closed_by "$CLOSED_BY" \
|
||||
--arg closed_at "$CLOSED_AT" \
|
||||
--arg state_reason "$STATE_REASON" \
|
||||
--arg comments_count "$COMMENTS_COUNT" \
|
||||
--arg reactions_count "$REACTIONS_COUNT" \
|
||||
--arg closed_automatically "$CLOSED_AUTOMATICALLY" \
|
||||
--arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_issue_closed",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
issue_title: $title,
|
||||
closed_by: $closed_by,
|
||||
closed_at: $closed_at,
|
||||
state_reason: $state_reason,
|
||||
comments_count: ($comments_count | tonumber),
|
||||
reactions_count: ($reactions_count | tonumber),
|
||||
closed_automatically: ($closed_automatically | test("true")),
|
||||
closed_as_duplicate: ($closed_as_duplicate | test("true"))
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
|
||||
echo "Closed by: $CLOSED_BY"
|
||||
echo "Comments: $COMMENTS_COUNT"
|
||||
echo "Reactions: $REACTIONS_COUNT"
|
||||
echo "Closed automatically: $CLOSED_AUTOMATICALLY"
|
||||
echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
|
||||
else
|
||||
echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
40
.github/workflows/pre-release.yml
vendored
40
.github/workflows/pre-release.yml
vendored
@@ -9,7 +9,6 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
# Only allow pre-releases on non-main branches
|
||||
if: github.ref != 'refs/heads/main'
|
||||
environment: extension-release
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
@@ -36,26 +35,9 @@ jobs:
|
||||
|
||||
- name: Enter RC mode (if not already in RC mode)
|
||||
run: |
|
||||
# Check if we're in pre-release mode with the "rc" tag
|
||||
if [ -f .changeset/pre.json ]; then
|
||||
MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
|
||||
TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
|
||||
|
||||
if [ "$MODE" = "exit" ]; then
|
||||
echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
|
||||
npx changeset pre enter rc
|
||||
elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
|
||||
echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
|
||||
npx changeset pre exit
|
||||
npx changeset pre enter rc
|
||||
elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
|
||||
echo "Already in RC pre-release mode"
|
||||
else
|
||||
echo "Unknown mode state: $MODE, entering RC mode..."
|
||||
npx changeset pre enter rc
|
||||
fi
|
||||
else
|
||||
echo "No pre.json found, entering RC mode..."
|
||||
# ensure we’re in the right pre-mode (tag "rc")
|
||||
if [ ! -f .changeset/pre.json ] \
|
||||
|| [ "$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')" != "rc" ]; then
|
||||
npx changeset pre enter rc
|
||||
fi
|
||||
|
||||
@@ -65,24 +47,10 @@ jobs:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
|
||||
- name: Run format
|
||||
run: npm run format
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Build packages
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Create Release Candidate Pull Request or Publish Release Candidate to npm
|
||||
uses: changesets/action@v1
|
||||
with:
|
||||
publish: npx changeset publish
|
||||
publish: npm run release
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
|
||||
25
.github/workflows/release-check.yml
vendored
25
.github/workflows/release-check.yml
vendored
@@ -5,10 +5,6 @@ on:
|
||||
branches:
|
||||
- main
|
||||
|
||||
concurrency:
|
||||
group: release-check-${{ github.head_ref }}
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
check-release-mode:
|
||||
runs-on: ubuntu-latest
|
||||
@@ -18,4 +14,23 @@ jobs:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Check release mode
|
||||
run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"
|
||||
run: |
|
||||
set -euo pipefail
|
||||
echo "🔍 Checking if branch is in pre-release mode..."
|
||||
|
||||
if [[ -f .changeset/pre.json ]]; then
|
||||
echo "❌ ERROR: This branch is in pre-release mode!"
|
||||
echo ""
|
||||
echo "Pre-release mode must be exited before merging to main."
|
||||
echo ""
|
||||
echo "To fix this, run the following commands in your branch:"
|
||||
echo " npx changeset pre exit"
|
||||
echo " git add -u"
|
||||
echo " git commit -m 'chore: exit pre-release mode'"
|
||||
echo " git push"
|
||||
echo ""
|
||||
echo "Then update this pull request."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "✅ Not in pre-release mode - PR can be merged"
|
||||
|
||||
31
.github/workflows/release.yml
vendored
31
.github/workflows/release.yml
vendored
@@ -22,7 +22,7 @@ jobs:
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
cache: 'npm'
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
@@ -39,21 +39,30 @@ jobs:
|
||||
timeout-minutes: 2
|
||||
|
||||
- name: Check pre-release mode
|
||||
run: node ./.github/scripts/check-pre-release-mode.mjs "main"
|
||||
run: |
|
||||
set -euo pipefail
|
||||
echo "🔍 Checking pre-release mode status..."
|
||||
if [[ -f .changeset/pre.json ]]; then
|
||||
echo "❌ ERROR: Main branch is in pre-release mode!"
|
||||
echo ""
|
||||
echo "Pre-release mode should only be used on feature branches, not main."
|
||||
echo ""
|
||||
echo "To fix this, run the following commands locally:"
|
||||
echo " npx changeset pre exit"
|
||||
echo " git add -u"
|
||||
echo " git commit -m 'chore: exit pre-release mode'"
|
||||
echo " git push origin main"
|
||||
echo ""
|
||||
echo "Then re-run this workflow."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Build packages
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
echo "✅ Not in pre-release mode - proceeding with release"
|
||||
|
||||
- name: Create Release Pull Request or Publish to npm
|
||||
uses: changesets/action@v1
|
||||
with:
|
||||
publish: node ./.github/scripts/release.mjs
|
||||
publish: ./.github/scripts/release.sh
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
|
||||
108
.github/workflows/weekly-metrics-discord.yml
vendored
108
.github/workflows/weekly-metrics-discord.yml
vendored
@@ -1,108 +0,0 @@
|
||||
name: Weekly Metrics to Discord
|
||||
# description: Sends weekly metrics summary to Discord channel
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 9 * * 1" # Every Monday at 9 AM
|
||||
workflow_dispatch:
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
pull-requests: read
|
||||
|
||||
jobs:
|
||||
weekly-metrics:
|
||||
runs-on: ubuntu-latest
|
||||
env:
|
||||
DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: '20'
|
||||
|
||||
- name: Get dates for last 14 days
|
||||
run: |
|
||||
set -Eeuo pipefail
|
||||
# Last 14 days
|
||||
first_day=$(date -d "14 days ago" +%Y-%m-%d)
|
||||
last_day=$(date +%Y-%m-%d)
|
||||
|
||||
echo "first_day=$first_day" >> $GITHUB_ENV
|
||||
echo "last_day=$last_day" >> $GITHUB_ENV
|
||||
echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
|
||||
echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
|
||||
|
||||
- name: Generate issue metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
|
||||
HIDE_TIME_TO_ANSWER: true
|
||||
HIDE_LABEL_METRICS: false
|
||||
OUTPUT_FILE: issue_metrics.md
|
||||
|
||||
- name: Generate PR created metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
|
||||
OUTPUT_FILE: pr_created_metrics.md
|
||||
|
||||
- name: Generate PR merged metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
|
||||
OUTPUT_FILE: pr_merged_metrics.md
|
||||
|
||||
- name: Debug generated metrics
|
||||
run: |
|
||||
set -Eeuo pipefail
|
||||
echo "Listing markdown files in workspace:"
|
||||
ls -la *.md || true
|
||||
for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
|
||||
if [ -f "$f" ]; then
|
||||
echo "== $f (first 10 lines) =="
|
||||
head -n 10 "$f"
|
||||
else
|
||||
echo "Missing $f"
|
||||
fi
|
||||
done
|
||||
|
||||
- name: Parse metrics
|
||||
id: metrics
|
||||
run: node .github/scripts/parse-metrics.mjs
|
||||
|
||||
- name: Send to Discord
|
||||
uses: sarisia/actions-status-discord@v1
|
||||
if: env.DISCORD_WEBHOOK != ''
|
||||
with:
|
||||
webhook: ${{ env.DISCORD_WEBHOOK }}
|
||||
status: Success
|
||||
title: "📊 Weekly Metrics Report"
|
||||
description: |
|
||||
**${{ env.week_of }}**
|
||||
*${{ env.date_range }}*
|
||||
|
||||
**🎯 Issues**
|
||||
• Created: ${{ steps.metrics.outputs.issues_created }}
|
||||
• Closed: ${{ steps.metrics.outputs.issues_closed }}
|
||||
• Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
|
||||
• Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
|
||||
|
||||
**🔀 Pull Requests**
|
||||
• Created: ${{ steps.metrics.outputs.prs_created }}
|
||||
• Merged: ${{ steps.metrics.outputs.prs_merged }}
|
||||
• Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
|
||||
• Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
|
||||
|
||||
**📈 Visual Analytics**
|
||||
https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
|
||||
color: 0x58AFFF
|
||||
username: Task Master Metrics Bot
|
||||
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
|
||||
3
.gitignore
vendored
3
.gitignore
vendored
@@ -94,6 +94,3 @@ apps/extension/.vscode-test/
|
||||
|
||||
# apps/extension
|
||||
apps/extension/vsix-build/
|
||||
|
||||
# turbo
|
||||
.turbo
|
||||
@@ -2,7 +2,7 @@
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
|
||||
@@ -1,6 +0,0 @@
|
||||
{
|
||||
"$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
|
||||
"defaultBranch": "main",
|
||||
"ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
|
||||
"ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
|
||||
}
|
||||
@@ -85,7 +85,7 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "your_key_here",
|
||||
"PERPLEXITY_API_KEY": "your_key_here",
|
||||
|
||||
@@ -2,8 +2,8 @@
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-sonnet-4-20250514",
|
||||
"maxTokens": 64000,
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 120000,
|
||||
"temperature": 0.2
|
||||
},
|
||||
"research": {
|
||||
@@ -14,8 +14,8 @@
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 120000,
|
||||
"modelId": "claude-3-5-sonnet-20241022",
|
||||
"maxTokens": 8192,
|
||||
"temperature": 0.2
|
||||
}
|
||||
},
|
||||
@@ -29,15 +29,9 @@
|
||||
"ollamaBaseURL": "http://localhost:11434/api",
|
||||
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
|
||||
"responseLanguage": "English",
|
||||
"enableCodebaseAnalysis": true,
|
||||
"userId": "1234567890",
|
||||
"azureBaseURL": "https://your-endpoint.azure.com/",
|
||||
"defaultTag": "master"
|
||||
},
|
||||
"claudeCode": {},
|
||||
"grokCli": {
|
||||
"timeout": 120000,
|
||||
"workingDirectory": null,
|
||||
"defaultModel": "grok-4-latest"
|
||||
}
|
||||
"claudeCode": {}
|
||||
}
|
||||
|
||||
@@ -1,188 +0,0 @@
# Task Master Migration Roadmap

## Overview
Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.

## Architecture Vision

```
┌─────────────────────────────────────────────────┐
│ User Interfaces                                 │
├──────────┬──────────┬──────────┬────────────────┤
│ @tm/cli  │ @tm/mcp  │ @tm/ext  │ @tm/web        │
│ (CLI)    │ (MCP)    │ (VSCode) │ (Future)       │
└──────────┴──────────┴──────────┴────────────────┘
                         │
                         ▼
              ┌──────────────────────┐
              │ @tm/core             │
              │ (Business Logic)     │
              └──────────────────────┘
```

## Migration Phases

### Phase 1: Core Extraction ✅ (In Progress)
**Goal**: Move all business logic to @tm/core

- [x] Create @tm/core package structure
- [x] Move types and interfaces
- [x] Implement TaskMasterCore facade (see the sketch after this list)
- [x] Move storage adapters
- [x] Move task services
- [ ] Move AI providers
- [ ] Move parser logic
- [ ] Complete test coverage

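A minimal sketch of what the Phase 1 facade could look like. The names and methods here are hypothetical — the roadmap does not pin down the @tm/core API — but they illustrate the intent: one entry point that owns storage and services, with no CLI or MCP concerns.

```javascript
// Hypothetical shape of the @tm/core facade (illustrative only, not the actual API).
export class TaskMasterCore {
	constructor({ storage, taskService }) {
		this.storage = storage; // e.g. file-based or API-backed storage adapter
		this.taskService = taskService; // pure business logic, no presentation code
	}

	async listTasks(filter = {}) {
		const tasks = await this.storage.loadTasks();
		return this.taskService.filter(tasks, filter);
	}

	async setTaskStatus(id, status) {
		const task = await this.storage.getTask(id);
		const updated = this.taskService.withStatus(task, status);
		await this.storage.saveTask(updated);
		return updated;
	}
}
```
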
### Phase 2: CLI Package Creation 🚧 (Started)
**Goal**: Create @tm/cli as a thin presentation layer

- [x] Create @tm/cli package structure
- [x] Implement Command interface pattern (see the sketch after this list)
- [x] Create CommandRegistry
- [x] Build legacy bridge/adapter
- [x] Migrate list-tasks command
- [ ] Migrate remaining commands one by one
- [ ] Remove UI logic from core

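A minimal sketch of the command pattern and registry mentioned in the checklist — again hypothetical shapes, not the actual @tm/cli code:

```javascript
// Hypothetical @tm/cli command pattern (illustrative only).
export class CommandRegistry {
	#commands = new Map();

	register(command) {
		this.#commands.set(command.name, command);
	}

	get(name) {
		const command = this.#commands.get(name);
		if (!command) throw new Error(`Unknown command: ${name}`);
		return command;
	}
}

// Each command is a thin presentation layer that delegates to @tm/core.
export class ListTasksCommand {
	name = 'list';

	async execute(options, core) {
		const tasks = await core.listTasks({ status: options.status });
		return { success: true, data: tasks };
	}
}
```

The apparent intent is that commands return plain data (a `{ success, data }` envelope, matching the tests later in this document) instead of printing; rendering stays in whichever interface invoked them.
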
### Phase 3: Transitional Integration
**Goal**: Use new packages in existing scripts without breaking changes

```javascript
// scripts/modules/commands.js gradually adopts new commands
import { ListTasksCommand } from '@tm/cli';
const listCommand = new ListTasksCommand();

// Old interface remains the same
programInstance
	.command('list')
	.action(async (options) => {
		// Use new command internally
		const result = await listCommand.execute(convertOptions(options));
	});
```

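`convertOptions` is left undefined in the snippet above; a plausible sketch (the flag names are assumptions) simply maps Commander-style options onto the new command's options object:

```javascript
// Hypothetical mapping from legacy Commander options to new command options.
function convertOptions(legacyOptions) {
	return {
		status: legacyOptions.status ?? 'all',
		withSubtasks: Boolean(legacyOptions.withSubtasks),
		file: legacyOptions.file // path to tasks.json, passed through unchanged
	};
}
```
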
### Phase 4: MCP Package
**Goal**: Separate MCP server as its own package

- [ ] Create @tm/mcp package
- [ ] Move MCP server code
- [ ] Use @tm/core for all logic
- [ ] MCP becomes a thin RPC layer

### Phase 5: Complete Migration
**Goal**: Remove old scripts, pure monorepo

- [ ] All commands migrated to @tm/cli
- [ ] Remove scripts/modules/task-manager/*
- [ ] Remove scripts/modules/commands.js
- [ ] Update bin/task-master.js to use @tm/cli
- [ ] Clean up dependencies

## Current Transitional Strategy

### 1. Adapter Pattern (commands-adapter.js)
```javascript
// Checks if new CLI is available and uses it
// Falls back to legacy implementation if not
export async function listTasksAdapter(...args) {
	if (cliAvailable) {
		return useNewImplementation(...args);
	}
	return useLegacyImplementation(...args);
}
```

### 2. Command Bridge Pattern
|
||||
```javascript
|
||||
// Allows new commands to work in old code
|
||||
const bridge = new CommandBridge(new ListTasksCommand());
|
||||
const data = await bridge.run(legacyOptions); // Legacy style
|
||||
const result = await bridge.execute(newOptions); // New style
|
||||
```
|
||||
|
||||
### 3. Gradual File Migration
|
||||
Instead of big-bang refactoring:
|
||||
1. Create new implementation in @tm/cli
|
||||
2. Add adapter in commands-adapter.js
|
||||
3. Update commands.js to use adapter
|
||||
4. Test both paths work
|
||||
5. Eventually remove adapter when all migrated
|
||||
|
||||
## Benefits of This Approach
|
||||
|
||||
1. **No Breaking Changes**: Existing CLI continues to work
|
||||
2. **Incremental PRs**: Each command can be migrated separately
|
||||
3. **Parallel Development**: New features can use new architecture
|
||||
4. **Easy Rollback**: Can disable new implementation if issues
|
||||
5. **Clear Separation**: Business logic (core) vs presentation (cli/mcp/etc)
|
||||
|
||||
## Example PR Sequence
|
||||
|
||||
### PR 1: Core Package Setup ✅
|
||||
- Create @tm/core
|
||||
- Move types and interfaces
|
||||
- Basic TaskMasterCore implementation
|
||||
|
||||
### PR 2: CLI Package Foundation ✅
|
||||
- Create @tm/cli
|
||||
- Command interface and registry
|
||||
- Legacy bridge utilities
|
||||
|
||||
### PR 3: First Command Migration
|
||||
- Migrate list-tasks to new system
|
||||
- Add adapter in scripts
|
||||
- Test both implementations
|
||||
|
||||
### PR 4-N: Migrate Commands One by One
|
||||
- Each PR migrates 1-2 related commands
|
||||
- Small, reviewable changes
|
||||
- Continuous delivery
|
||||
|
||||
### Final PR: Cleanup
|
||||
- Remove legacy implementations
|
||||
- Remove adapters
|
||||
- Update documentation
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
### Dual Testing During Migration
|
||||
```javascript
describe('List Tasks', () => {
	it('works with legacy implementation', async () => {
		// Force legacy (placeholder options for illustration)
		const result = await legacyListTasks({});
		expect(result).toBeDefined();
	});

	it('works with new implementation', async () => {
		// Force new
		const command = new ListTasksCommand();
		const result = await command.execute({});
		expect(result.success).toBe(true);
	});

	it('adapter chooses correctly', async () => {
		// Let adapter decide
		const result = await listTasksAdapter({});
		expect(result).toBeDefined();
	});
});
```

## Success Metrics

- [ ] All commands migrated without breaking changes
- [ ] Test coverage maintained or improved
- [ ] Performance maintained or improved
- [ ] Cleaner, more maintainable codebase
- [ ] Easy to add new interfaces (web, desktop, etc.)

## Notes for Contributors

1. **Keep PRs Small**: Migrate one command at a time
2. **Test Both Paths**: Ensure legacy and new both work
3. **Document Changes**: Update this roadmap as you go
4. **Communicate**: Discuss in PRs if architecture needs adjustment

This is a living document - update as the migration progresses!
@@ -1,91 +0,0 @@
<context>
# Overview
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.

We follow the Commander class pattern and reuse task retrieval from the `show` command flow. Scope is kept extremely minimal for the 1-hour hackathon timeline.

# Core Features
- `start` command (Commander class style)
- Hard-coded executor: `claude-code`
- Standardized prompt designed for minimal changes following existing patterns
- Shows claude-code output (no streaming)
- Git status check for success detection
- Auto-mark task done if successful

# User Experience
```
task-master start 12
```
1) Fetches Task #12 details
2) Builds standardized prompt with task context
3) Runs claude-code with the prompt
4) Shows output
5) Checks git status for changes
6) Auto-marks task done if changes detected
</context>

<PRD>
# Technical Architecture

- Command pattern:
  - Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) and task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)

- Task retrieval:
  - Use `@tm/core` via `createTaskMasterCore` to get task by ID
  - Extract: id, title, description, details

- Executor (ultra-simple approach):
  - Execute `claude "full prompt here"` command directly
  - The prompt tells Claude to first run `tm show <task_id>` to get task details
  - Then tells Claude to implement the code changes
  - This opens the Claude CLI interface naturally in the current terminal
  - No subprocess management needed - just execute the command

- Execution flow:
  1) Validate `<task_id>` exists; exit with error if not
  2) Build standardized prompt that includes instructions to run `tm show <task_id>`
  3) Execute `claude "prompt"` command directly in terminal
  4) Claude CLI opens, runs `tm show`, then implements changes
  5) After Claude session ends, run `git status --porcelain` to detect changes
  6) If changes detected, auto-run `task-master set-status --id=<task_id> --status=done`

- Success criteria:
  - Success = exit code 0 AND git shows modified/created files
  - Print changed file paths; warn if no changes detected

# Development Roadmap

MVP (ship in ~1 hour):
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
2) Validate task exists via tm-core
3) Build prompt that tells Claude to run `tm show <task_id>`, then implement
4) Execute `claude "prompt"` command, then check git status and auto-mark done

# Risks and Mitigations
- Executor availability: fail with a clear error if the `claude-code` provider is unavailable
- False success: the git-change heuristic is acceptable for a hackathon MVP

# Appendix

**Standardized Prompt Template:**
```
You are an AI coding assistant with access to this repository's codebase.

First, run this command to get the task details:
tm show <task_id>

Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>

Begin by running tm show <task_id> to understand what needs to be implemented.
```

**Key References:**
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
- Node.js `child_process.exec()` - For executing `claude "prompt"` command
</PRD>
@@ -1,8 +0,0 @@
Simple Todo App PRD

Create a basic todo list application with the following features:
1. Add new todos
2. Mark todos as complete
3. Delete todos

That's it. Keep it simple.
@@ -1,343 +0,0 @@
# Product Requirements Document: tm-core Package - Parse PRD Feature

## Project Overview
Create a TypeScript package named `tm-core` at `packages/tm-core` that implements parse-prd functionality using class-based architecture similar to the existing AI providers pattern.

## Design Patterns & Architecture

### Patterns to Apply
1. **Factory Pattern**: Use for `ProviderFactory` to create AI provider instances
2. **Strategy Pattern**: Use for `IAIProvider` implementations and `IStorage` implementations
3. **Facade Pattern**: Use for `TaskMasterCore` as the main API entry point
4. **Template Method Pattern**: Use for `BaseProvider` abstract class
5. **Dependency Injection**: Use throughout for testability (pass dependencies via constructor)
6. **Repository Pattern**: Use for `FileStorage` to abstract data persistence

### Naming Conventions
- **Files**: kebab-case (e.g., `task-parser.ts`, `file-storage.ts`)
- **Classes**: PascalCase (e.g., `TaskParser`, `FileStorage`)
- **Interfaces**: PascalCase with 'I' prefix (e.g., `IStorage`, `IAIProvider`)
- **Methods**: camelCase (e.g., `parsePRD`, `loadTasks`)
- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_MODEL`)
- **Type aliases**: PascalCase (e.g., `TaskStatus`, `ParseOptions`)

## Exact Folder Structure Required
```
packages/tm-core/
├── src/
│   ├── index.ts
│   ├── types/
│   │   └── index.ts
│   ├── interfaces/
│   │   ├── index.ts                 # Barrel export
│   │   ├── storage.interface.ts
│   │   ├── ai-provider.interface.ts
│   │   └── configuration.interface.ts
│   ├── tasks/
│   │   ├── index.ts                 # Barrel export
│   │   └── task-parser.ts
│   ├── ai/
│   │   ├── index.ts                 # Barrel export
│   │   ├── base-provider.ts
│   │   ├── provider-factory.ts
│   │   ├── prompt-builder.ts
│   │   └── providers/
│   │       ├── index.ts             # Barrel export
│   │       ├── anthropic-provider.ts
│   │       ├── openai-provider.ts
│   │       └── google-provider.ts
│   ├── storage/
│   │   ├── index.ts                 # Barrel export
│   │   └── file-storage.ts
│   ├── config/
│   │   ├── index.ts                 # Barrel export
│   │   └── config-manager.ts
│   ├── utils/
│   │   ├── index.ts                 # Barrel export
│   │   └── id-generator.ts
│   └── errors/
│       ├── index.ts                 # Barrel export
│       └── task-master-error.ts
├── tests/
│   ├── task-parser.test.ts
│   ├── integration/
│   │   └── parse-prd.test.ts
│   └── mocks/
│       └── mock-provider.ts
├── package.json
├── tsconfig.json
├── tsup.config.js
└── jest.config.js
```

## Specific Implementation Requirements

### 1. Create types/index.ts
Define these exact TypeScript interfaces:
- `Task` interface with fields: id, title, description, status, priority, complexity, dependencies, subtasks, metadata, createdAt, updatedAt, source
- `Subtask` interface with fields: id, title, description, completed
- `TaskMetadata` interface with fields: parsedFrom, aiProvider, version, tags (optional)
- Type literals: `TaskStatus` = 'pending' | 'in-progress' | 'completed' | 'blocked'
- Type literals: `TaskPriority` = 'low' | 'medium' | 'high' | 'critical'
- Type literals: `TaskComplexity` = 'simple' | 'moderate' | 'complex'
- `ParseOptions` interface with fields: dryRun (optional), additionalContext (optional), tag (optional), maxTasks (optional)

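A sketch of these shapes in TypeScript (field names and literals come from the list above; the concrete field types, such as string IDs and ISO-8601 date strings, are assumptions):

```typescript
export type TaskStatus = 'pending' | 'in-progress' | 'completed' | 'blocked';
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
export type TaskComplexity = 'simple' | 'moderate' | 'complex';

export interface Subtask {
	id: string;
	title: string;
	description: string;
	completed: boolean;
}

export interface TaskMetadata {
	parsedFrom: string;
	aiProvider: string;
	version: string;
	tags?: string[];
}

export interface Task {
	id: string;
	title: string;
	description: string;
	status: TaskStatus;
	priority: TaskPriority;
	complexity: TaskComplexity;
	dependencies: string[];
	subtasks: Subtask[];
	metadata: TaskMetadata;
	createdAt: string; // ISO-8601 timestamp (assumption)
	updatedAt: string; // ISO-8601 timestamp (assumption)
	source: string; // e.g. path of the PRD the task came from (assumption)
}

export interface ParseOptions {
	dryRun?: boolean;
	additionalContext?: string;
	tag?: string;
	maxTasks?: number;
}
```
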
### 2. Create interfaces/storage.interface.ts
Define `IStorage` interface with these exact methods:
- `loadTasks(tag?: string): Promise<Task[]>`
- `saveTasks(tasks: Task[], tag?: string): Promise<void>`
- `appendTasks(tasks: Task[], tag?: string): Promise<void>`
- `updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>`
- `deleteTask(id: string, tag?: string): Promise<void>`
- `exists(tag?: string): Promise<boolean>`

### 3. Create interfaces/ai-provider.interface.ts
Define `IAIProvider` interface with these exact methods:
- `generateCompletion(prompt: string, options?: AIOptions): Promise<string>`
- `calculateTokens(text: string): number`
- `getName(): string`
- `getModel(): string`

Define `AIOptions` interface with fields: temperature (optional), maxTokens (optional), systemPrompt (optional)

### 4. Create interfaces/configuration.interface.ts
Define `IConfiguration` interface with fields:
- `projectPath: string`
- `aiProvider: string`
- `apiKey?: string`
- `aiOptions?: AIOptions`
- `mainModel?: string`
- `researchModel?: string`
- `fallbackModel?: string`
- `tasksPath?: string`
- `enableTags?: boolean`

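Sections 2-4 rendered as TypeScript, combined here for readability (in the package, each interface lives in its own file as listed in the folder structure):

```typescript
import type { Task } from '../types/index.js';

export interface IStorage {
	loadTasks(tag?: string): Promise<Task[]>;
	saveTasks(tasks: Task[], tag?: string): Promise<void>;
	appendTasks(tasks: Task[], tag?: string): Promise<void>;
	updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>;
	deleteTask(id: string, tag?: string): Promise<void>;
	exists(tag?: string): Promise<boolean>;
}

export interface AIOptions {
	temperature?: number;
	maxTokens?: number;
	systemPrompt?: string;
}

export interface IAIProvider {
	generateCompletion(prompt: string, options?: AIOptions): Promise<string>;
	calculateTokens(text: string): number;
	getName(): string;
	getModel(): string;
}

export interface IConfiguration {
	projectPath: string;
	aiProvider: string;
	apiKey?: string;
	aiOptions?: AIOptions;
	mainModel?: string;
	researchModel?: string;
	fallbackModel?: string;
	tasksPath?: string;
	enableTags?: boolean;
}
```
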
### 5. Create tasks/task-parser.ts
Create class `TaskParser` with:
- Constructor accepting `aiProvider: IAIProvider` and `config: IConfiguration`
- Private property `promptBuilder: PromptBuilder`
- Public method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Private method `readPRD(prdPath: string): Promise<string>`
- Private method `extractTasks(aiResponse: string): Partial<Task>[]`
- Private method `enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[]`
- Apply **Dependency Injection** pattern via constructor

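A skeleton consistent with the bullets above. The parsing and enrichment details (JSON-array response, default status, metadata values) are assumptions for illustration, not the final implementation:

```typescript
import { promises as fs } from 'node:fs';
import type { IAIProvider } from '../interfaces/ai-provider.interface.js';
import type { IConfiguration } from '../interfaces/configuration.interface.js';
import type { ParseOptions, Task } from '../types/index.js';
import { PromptBuilder } from '../ai/prompt-builder.js';
import { generateTaskId } from '../utils/id-generator.js';

export class TaskParser {
	private promptBuilder = new PromptBuilder();

	// Dependencies are injected so tests can substitute a MockProvider;
	// config is kept for provider/model decisions in the full implementation.
	constructor(
		private readonly aiProvider: IAIProvider,
		private readonly config: IConfiguration
	) {}

	async parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]> {
		const prdContent = await this.readPRD(prdPath);
		const prompt = this.promptBuilder.buildParsePrompt(prdContent, options);
		const response = await this.aiProvider.generateCompletion(prompt);
		return this.enrichTasks(this.extractTasks(response), prdPath);
	}

	private readPRD(prdPath: string): Promise<string> {
		return fs.readFile(prdPath, 'utf8');
	}

	private extractTasks(aiResponse: string): Partial<Task>[] {
		// Assumes the prompt asked the model to reply with a JSON array of tasks.
		return JSON.parse(aiResponse) as Partial<Task>[];
	}

	private enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[] {
		const now = new Date().toISOString();
		return rawTasks.map((raw, index) => ({
			...(raw as Task),
			id: raw.id ?? generateTaskId(index),
			status: raw.status ?? 'pending',
			metadata: {
				parsedFrom: prdPath,
				aiProvider: this.aiProvider.getName(),
				version: '0.1.0'
			},
			createdAt: now,
			updatedAt: now,
			source: prdPath
		}));
	}
}
```
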
### 6. Create ai/base-provider.ts
Copy existing base-provider.js and convert to TypeScript abstract class:
- Abstract class `BaseProvider` implementing `IAIProvider`
- Protected properties: `apiKey: string`, `model: string`
- Constructor accepting `apiKey: string` and `options: { model?: string }`
- Abstract methods matching IAIProvider interface
- Abstract method `getDefaultModel(): string`
- Apply **Template Method** pattern for common provider logic

### 7. Create ai/provider-factory.ts
Create class `ProviderFactory` with:
- Static method `create(config: { provider: string; apiKey?: string; model?: string }): Promise<IAIProvider>`
- Switch statement for providers: 'anthropic', 'openai', 'google'
- Dynamic imports for each provider
- Throw error for unknown providers
- Apply **Factory** pattern for creating provider instances

Example implementation structure:
```typescript
switch (provider.toLowerCase()) {
	case 'anthropic': {
		const { AnthropicProvider } = await import('./providers/anthropic-provider.js');
		return new AnthropicProvider(apiKey, { model });
	}
}
```

### 8. Create ai/providers/anthropic-provider.ts
Create class `AnthropicProvider` extending `BaseProvider`:
- Import Anthropic SDK: `import { Anthropic } from '@anthropic-ai/sdk'`
- Private property `client: Anthropic`
- Implement all abstract methods from BaseProvider
- Default model: 'claude-3-sonnet-20240229'
- Handle API errors and wrap with meaningful messages

### 9. Create ai/providers/openai-provider.ts (placeholder)
Create class `OpenAIProvider` extending `BaseProvider`:
- Import OpenAI SDK when implemented
- For now, throw error: "OpenAI provider not yet implemented"

### 10. Create ai/providers/google-provider.ts (placeholder)
Create class `GoogleProvider` extending `BaseProvider`:
- Import Google Generative AI SDK when implemented
- For now, throw error: "Google provider not yet implemented"

### 11. Create ai/prompt-builder.ts
Create class `PromptBuilder` with:
- Method `buildParsePrompt(prdContent: string, options: ParseOptions = {}): string`
- Method `buildExpandPrompt(task: string, context?: string): string`
- Use template literals for prompt construction
- Include specific JSON format instructions in prompts

### 12. Create storage/file-storage.ts
Create class `FileStorage` implementing `IStorage`:
- Private property `basePath: string` set to `{projectPath}/.taskmaster`
- Constructor accepting `projectPath: string`
- Private method `getTasksPath(tag?: string): string` returning correct path based on tag
- Private method `ensureDirectory(dir: string): Promise<void>`
- Implement all IStorage methods
- Handle ENOENT errors by returning empty arrays
- Use JSON format with structure: `{ tasks: Task[], metadata: { version: string, lastModified: string } }`
- Apply **Repository** pattern for data access abstraction

### 13. Create config/config-manager.ts
Create class `ConfigManager`:
- Private property `config: IConfiguration`
- Constructor accepting `options: Partial<IConfiguration>`
- Use Zod for validation with schema matching IConfiguration
- Method `get<K extends keyof IConfiguration>(key: K): IConfiguration[K]`
- Method `getAll(): IConfiguration`
- Method `validate(): boolean`
- Default values: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true

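A sketch of how the Zod schema and the generic `get` accessor could fit together (the schema mirrors IConfiguration, and the defaults are the ones listed above):

```typescript
import { z } from 'zod';
import type { IConfiguration } from '../interfaces/configuration.interface.js';

// Zod schema mirroring IConfiguration (field list from section 4).
const configSchema = z.object({
	projectPath: z.string(),
	aiProvider: z.string(),
	apiKey: z.string().optional(),
	aiOptions: z
		.object({
			temperature: z.number().optional(),
			maxTokens: z.number().optional(),
			systemPrompt: z.string().optional()
		})
		.optional(),
	mainModel: z.string().optional(),
	researchModel: z.string().optional(),
	fallbackModel: z.string().optional(),
	tasksPath: z.string().optional(),
	enableTags: z.boolean().optional()
});

export class ConfigManager {
	private readonly config: IConfiguration;

	constructor(options: Partial<IConfiguration>) {
		// Defaults from the bullet list above; everything else comes from the caller.
		this.config = configSchema.parse({
			projectPath: process.cwd(),
			aiProvider: 'anthropic',
			enableTags: true,
			...options
		});
	}

	get<K extends keyof IConfiguration>(key: K): IConfiguration[K] {
		return this.config[key];
	}

	getAll(): IConfiguration {
		return { ...this.config };
	}

	validate(): boolean {
		return configSchema.safeParse(this.config).success;
	}
}
```
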
### 14. Create utils/id-generator.ts
Export functions:
- `generateTaskId(index: number = 0): string` returning format `task_{timestamp}_{index}_{random}`
- `generateSubtaskId(parentId: string, index: number = 0): string` returning format `{parentId}_sub_{index}_{random}`

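A minimal sketch matching those formats (the length of the random suffix is an assumption):

```typescript
// Format: task_{timestamp}_{index}_{random}
export function generateTaskId(index: number = 0): string {
	const random = Math.random().toString(36).slice(2, 8);
	return `task_${Date.now()}_${index}_${random}`;
}

// Format: {parentId}_sub_{index}_{random}
export function generateSubtaskId(parentId: string, index: number = 0): string {
	const random = Math.random().toString(36).slice(2, 8);
	return `${parentId}_sub_${index}_${random}`;
}
```
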
### 15. Create src/index.ts
Create main class `TaskMasterCore`:
- Private properties: `config: ConfigManager`, `storage: IStorage`, `aiProvider?: IAIProvider`, `parser?: TaskParser`
- Constructor accepting `options: Partial<IConfiguration>`
- Method `initialize(): Promise<void>` for lazy loading
- Method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Method `getTasks(tag?: string): Promise<Task[]>`
- Apply **Facade** pattern to provide simple API over complex subsystems

Export:
- Class `TaskMasterCore`
- Function `createTaskMaster(options: Partial<IConfiguration>): TaskMasterCore`
- All types from './types'
- All interfaces from './interfaces/*'

Import statements should use kebab-case:
```typescript
import { TaskParser } from './tasks/task-parser';
import { FileStorage } from './storage/file-storage';
import { ConfigManager } from './config/config-manager';
import { ProviderFactory } from './ai/provider-factory';
```

### 16. Configure package.json
Create package.json with:
- name: "@task-master/core"
- version: "0.1.0"
- type: "module"
- main: "./dist/index.js"
- module: "./dist/index.mjs"
- types: "./dist/index.d.ts"
- exports map for proper ESM/CJS support
- scripts: build (tsup), dev (tsup --watch), test (jest), typecheck (tsc --noEmit)
- dependencies: zod@^3.23.8
- peerDependencies: @anthropic-ai/sdk, openai, @google/generative-ai
- devDependencies: typescript, tsup, jest, ts-jest, @types/node, @types/jest

### 17. Configure TypeScript
Create tsconfig.json with:
- target: "ES2022"
- module: "ESNext"
- strict: true (with all strict flags enabled)
- declaration: true
- outDir: "./dist"
- rootDir: "./src"

### 18. Configure tsup
Create tsup.config.js with:
- entry: ['src/index.ts']
- format: ['cjs', 'esm']
- dts: true
- sourcemap: true
- clean: true
- external: AI provider SDKs

### 19. Configure Jest
Create jest.config.js with:
- preset: 'ts-jest'
- testEnvironment: 'node'
- Coverage threshold: 80% for all metrics

## Build Process
1. Use tsup to compile TypeScript to both CommonJS and ESM
2. Generate .d.ts files for TypeScript consumers
3. Output to dist/ directory
4. Ensure tree-shaking works properly

## Testing Requirements
- Create unit tests for TaskParser in tests/task-parser.test.ts
- Create MockProvider class in tests/mocks/mock-provider.ts for testing without API calls
- Test error scenarios (file not found, invalid JSON, etc.)
- Create integration test in tests/integration/parse-prd.test.ts
- Follow kebab-case naming for all test files

## Success Criteria
- TypeScript compilation with zero errors
- No use of 'any' type
- All interfaces properly exported
- Compatible with existing tasks.json format
- Feature flag support via USE_TM_CORE environment variable

## Import/Export Conventions
- Use named exports for all classes and interfaces
- Use barrel exports (index.ts) in each directory
- Import types/interfaces with type-only imports: `import type { Task } from '../types'`
- Group imports in order: Node built-ins, external packages, internal packages, relative imports
- Use .js extension in import paths for ESM compatibility

## Error Handling Patterns
- Create custom error classes in `src/errors/` directory
- All public methods should catch and wrap errors with context
- Use error codes for different error types (e.g., 'FILE_NOT_FOUND', 'PARSE_ERROR')
- Never expose internal implementation details in error messages
- Log errors to console.error only in development mode

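A sketch of the error base class implied by these rules; the subclass names are illustrative (only the two error codes above are specified):

```typescript
export class TaskMasterError extends Error {
	constructor(
		message: string,
		public readonly code: string = 'UNKNOWN_ERROR'
	) {
		super(message);
		this.name = 'TaskMasterError';
		// Log full details only in development; callers only see the sanitized message.
		if (process.env.NODE_ENV === 'development') {
			console.error(`[${this.code}] ${message}`);
		}
	}
}

// Illustrative specializations built on the codes listed above.
export class FileNotFoundError extends TaskMasterError {
	constructor(path: string) {
		super(`File not found: ${path}`, 'FILE_NOT_FOUND');
	}
}

export class ParseError extends TaskMasterError {
	constructor(detail: string) {
		super(`Failed to parse AI response: ${detail}`, 'PARSE_ERROR');
	}
}
```
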
## Barrel Exports Content

### interfaces/index.ts
```typescript
export type { IStorage } from './storage.interface';
export type { IAIProvider, AIOptions } from './ai-provider.interface';
export type { IConfiguration } from './configuration.interface';
```

### tasks/index.ts
```typescript
export { TaskParser } from './task-parser';
```

### ai/index.ts
```typescript
export { BaseProvider } from './base-provider';
export { ProviderFactory } from './provider-factory';
export { PromptBuilder } from './prompt-builder';
```

### ai/providers/index.ts
```typescript
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
```

### storage/index.ts
```typescript
export { FileStorage } from './file-storage';
```

### config/index.ts
```typescript
export { ConfigManager } from './config-manager';
```

### utils/index.ts
```typescript
export { generateTaskId, generateSubtaskId } from './id-generator';
```

### errors/index.ts
```typescript
export { TaskMasterError } from './task-master-error';
```
@@ -1,77 +0,0 @@
|
||||
{
|
||||
"meta": {
|
||||
"generatedAt": "2025-08-06T12:39:03.250Z",
|
||||
"tasksAnalyzed": 8,
|
||||
"totalTasks": 11,
|
||||
"analysisCount": 8,
|
||||
"thresholdScore": 5,
|
||||
"projectName": "Taskmaster",
|
||||
"usedResearch": false
|
||||
},
|
||||
"complexityAnalysis": [
|
||||
{
|
||||
"taskId": 118,
|
||||
"taskTitle": "Create AI Provider Base Architecture",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
|
||||
"reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
|
||||
},
|
||||
{
|
||||
"taskId": 119,
|
||||
"taskTitle": "Implement Provider Factory with Dynamic Imports",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
|
||||
"reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
|
||||
},
|
||||
{
|
||||
"taskId": 120,
|
||||
"taskTitle": "Implement Anthropic Provider",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
|
||||
"reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
|
||||
},
|
||||
{
|
||||
"taskId": 121,
|
||||
"taskTitle": "Create Prompt Builder and Task Parser",
|
||||
"complexityScore": 8,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
|
||||
"reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
|
||||
},
|
||||
{
|
||||
"taskId": 122,
|
||||
"taskTitle": "Implement Configuration Management",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
|
||||
"reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
|
||||
},
|
||||
{
|
||||
"taskId": 123,
|
||||
"taskTitle": "Create Utility Functions and Error Handling",
|
||||
"complexityScore": 4,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
|
||||
"reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
|
||||
},
|
||||
{
|
||||
"taskId": 124,
|
||||
"taskTitle": "Implement TaskMasterCore Facade",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
|
||||
"reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
|
||||
},
|
||||
{
|
||||
"taskId": 125,
|
||||
"taskTitle": "Create Placeholder Providers and Complete Testing",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.",
|
||||
"reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,77 +0,0 @@
|
||||
{
|
||||
"meta": {
|
||||
"generatedAt": "2025-08-06T12:15:01.327Z",
|
||||
"tasksAnalyzed": 8,
|
||||
"totalTasks": 11,
|
||||
"analysisCount": 8,
|
||||
"thresholdScore": 5,
|
||||
"projectName": "Taskmaster",
|
||||
"usedResearch": false
|
||||
},
|
||||
"complexityAnalysis": [
|
||||
{
|
||||
"taskId": 118,
|
||||
"taskTitle": "Create AI Provider Base Architecture",
|
||||
"complexityScore": 4,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the conversion of base-provider.js to TypeScript BaseProvider class: 1) Convert to TypeScript and define IAIProvider interface, 2) Implement abstract class with core properties, 3) Define abstract methods and Template Method pattern, 4) Add retry logic with exponential backoff, 5) Implement validation and logging. Focus on maintaining compatibility with existing provider pattern while adding type safety.",
|
||||
"reasoning": "The codebase already has a well-established BaseAIProvider class in JavaScript. Converting to TypeScript mainly involves adding type definitions and ensuring the existing pattern is preserved. The complexity is moderate because the pattern is already proven in the codebase."
|
||||
},
|
||||
{
|
||||
"taskId": 119,
|
||||
"taskTitle": "Implement Provider Factory with Dynamic Imports",
|
||||
"complexityScore": 3,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ProviderFactory implementation: 1) Set up class structure and types, 2) Implement provider selection switch statement, 3) Add dynamic imports for tree-shaking, 4) Handle provider instantiation with config, 5) Add comprehensive error handling. The existing PROVIDERS registry pattern should guide the implementation.",
|
||||
"reasoning": "The codebase already uses a dual registry pattern (static PROVIDERS and dynamic ProviderRegistry). Creating a factory is straightforward as the provider registration patterns are well-established. Dynamic imports are already used in the codebase."
|
||||
},
|
||||
{
|
||||
"taskId": 120,
|
||||
"taskTitle": "Implement Anthropic Provider",
|
||||
"complexityScore": 3,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement AnthropicProvider following existing patterns: 1) Create class structure with imports, 2) Implement constructor and client initialization, 3) Add generateCompletion with Claude API integration, 4) Implement token calculation and utility methods, 5) Add error handling and exports. Use the existing anthropic.js provider as reference.",
|
||||
"reasoning": "AnthropicProvider already exists in the codebase with full implementation. This task essentially involves adapting the existing implementation to match the new TypeScript architecture, making it relatively straightforward."
|
||||
},
|
||||
{
|
||||
"taskId": 121,
|
||||
"taskTitle": "Create Prompt Builder and Task Parser",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Build prompt system and parser: 1) Create PromptBuilder with template methods, 2) Implement TaskParser with dependency injection, 3) Add parsePRD core logic with file reading, 4) Implement task enrichment and metadata, 5) Add comprehensive error handling. Leverage the existing prompt management system in src/prompts/.",
|
||||
"reasoning": "While the codebase has a sophisticated prompt management system, creating a new PromptBuilder and TaskParser requires understanding the existing prompt templates, JSON schema validation, and integration with the AI provider system. The task involves significant new code."
|
||||
},
|
||||
{
|
||||
"taskId": 122,
|
||||
"taskTitle": "Implement Configuration Management",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ConfigManager with validation: 1) Define Zod schema for IConfiguration, 2) Implement constructor with defaults, 3) Add validate method with error handling, 4) Create type-safe get method with generics, 5) Implement getAll and finalize exports. Reference existing config-manager.js for patterns.",
|
||||
"reasoning": "The codebase has an existing config-manager.js with sophisticated configuration handling. Adding Zod validation and TypeScript generics adds complexity, but the existing patterns provide a solid foundation."
|
||||
},
|
||||
{
|
||||
"taskId": 123,
|
||||
"taskTitle": "Create Utility Functions and Error Handling",
|
||||
"complexityScore": 2,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement utilities and error handling: 1) Create ID generation module with unique formats, 2) Build TaskMasterError base class, 3) Add error sanitization for security, 4) Implement development-only logging, 5) Create specialized error subclasses. Keep implementation simple and focused.",
|
||||
"reasoning": "This is a straightforward utility implementation task. The codebase already has error handling patterns, and ID generation is a simple algorithmic task. The main work is creating clean, reusable utilities."
|
||||
},
|
||||
{
|
||||
"taskId": 124,
|
||||
"taskTitle": "Implement TaskMasterCore Facade",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create main facade class: 1) Set up TaskMasterCore structure with imports, 2) Implement lazy initialization logic, 3) Add parsePRD coordination method, 4) Implement getTasks and other facade methods, 5) Create factory function and exports. This ties together all other components into a cohesive API.",
|
||||
"reasoning": "This is the most complex task as it requires understanding and integrating all other components. The facade must coordinate between configuration, providers, storage, and parsing while maintaining a clean API. It's the architectural keystone of the system."
|
||||
},
|
||||
{
|
||||
"taskId": 125,
|
||||
"taskTitle": "Create Placeholder Providers and Complete Testing",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement testing infrastructure: 1) Create OpenAIProvider placeholder, 2) Create GoogleProvider placeholder, 3) Build MockProvider for testing, 4) Write TaskParser unit tests, 5) Create integration tests for parse-prd flow. Follow the existing test patterns in tests/ directory.",
|
||||
"reasoning": "While creating placeholder providers is simple, the testing infrastructure requires understanding Jest with ES modules, mocking patterns, and comprehensive test coverage. The existing test structure provides good examples to follow."
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,6 +1,6 @@
{
"currentTag": "master",
"lastSwitched": "2025-09-12T22:25:27.535Z",
"lastSwitched": "2025-08-01T14:09:25.838Z",
"branchTagMapping": {
"v017-adds": "v017-adds",
"next": "next"
@@ -1,34 +0,0 @@
# Task ID: 1
# Title: Create start command class structure
# Status: pending
# Dependencies: None
# Priority: high
# Description: Create the basic structure for the start command following the Commander class pattern
# Details:
Create a new file `apps/cli/src/commands/start.command.ts` based on the existing list.command.ts pattern. Implement the command class with proper command registration, description, and argument handling for the task_id parameter. The class should extend the base Command class and implement the required methods.

Example structure:
```typescript
import { Command } from 'commander';
import { BaseCommand } from './base.command';

export class StartCommand extends BaseCommand {
	public register(program: Command): void {
		program
			.command('start')
			.alias('tm start')
			.description('Start implementing a task using claude-code')
			.argument('<task_id>', 'ID of the task to start')
			.action(async (taskId: string) => {
				await this.execute(taskId);
			});
	}

	public async execute(taskId: string): Promise<void> {
		// Implementation will be added in subsequent tasks
	}
}
```

# Test Strategy:
Verify the command registers correctly by running the CLI with --help and checking that the start command appears with proper description and arguments. Test the basic structure by ensuring the command can be invoked without errors.
@@ -1,26 +0,0 @@
# Task ID: 2
# Title: Register start command in CLI
# Status: pending
# Dependencies: 7
# Priority: high
# Description: Register the start command in the CLI application
# Details:
Update the CLI application to register the new start command. This involves importing the StartCommand class and adding it to the commands array in the CLI initialization.

In `apps/cli/src/index.ts` or the appropriate file where commands are registered:

```typescript
import { StartCommand } from './commands/start.command';

// Add StartCommand to the commands array
const commands = [
	// ... existing commands
	new StartCommand(),
];

// Register all commands
commands.forEach(command => command.register(program));
```

# Test Strategy:
Verify the command is correctly registered by running the CLI with --help and checking that the start command appears in the list of available commands.
@@ -1,32 +0,0 @@
# Task ID: 3
# Title: Create standardized prompt builder
# Status: pending
# Dependencies: 1
# Priority: medium
# Description: Implement a function to build the standardized prompt for claude-code based on the task details
# Details:
Create a function in the StartCommand class that builds the standardized prompt according to the template provided in the PRD. The prompt should include instructions for Claude to first run `tm show <task_id>` to get task details, and then implement the required changes.

```typescript
private buildPrompt(taskId: string): string {
	return `You are an AI coding assistant with access to this repository's codebase.

First, run this command to get the task details:
tm show ${taskId}

Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>

Begin by running tm show ${taskId} to understand what needs to be implemented.`;
}
```
<info added on 2025-09-12T02:40:01.812Z>
The prompt builder function will handle task context retrieval by instructing Claude to use the task-master show command. This approach ensures Claude has access to all necessary task details before implementation begins. The command syntax "tm show ${taskId}" embedded in the prompt will direct Claude to first gather the complete task context, including description, requirements, and any existing implementation details, before proceeding with code changes.
</info added on 2025-09-12T02:40:01.812Z>

# Test Strategy:
Verify the prompt is correctly formatted by calling the function with a sample task ID and checking that the output matches the expected template with the task ID properly inserted.
@@ -1,36 +0,0 @@
# Task ID: 4
# Title: Implement claude-code executor
# Status: pending
# Dependencies: 3
# Priority: high
# Description: Add functionality to execute the claude-code command with the built prompt
# Details:
Implement the functionality to execute the claude command with the built prompt. This should use Node.js `child_process` (execSync) so the command runs directly in the terminal and the CLI waits for the Claude session to finish.

```typescript
import { execSync } from 'child_process';

// Inside the StartCommand class, called from execute() after task validation
private async executeClaude(prompt: string): Promise<void> {
	console.log('Starting claude-code to implement the task...');

	try {
		// Execute claude with the prompt (escape embedded double quotes)
		const claudeCommand = `claude "${prompt.replace(/"/g, '\\"')}"`;

		// execSync blocks until the Claude session completes; stdio is inherited
		// so the Claude CLI renders directly in the current terminal.
		execSync(claudeCommand, { stdio: 'inherit' });

		console.log('Claude session completed.');
	} catch (error) {
		console.error('Error executing claude-code:', (error as Error).message);
		process.exit(1);
	}
}
```

Then call this method from the execute method after building the prompt.

# Test Strategy:
Test by running the command with a valid task ID and verifying that the claude command is executed with the correct prompt. Check that the command handles errors appropriately if claude-code is not available.
@@ -1,49 +0,0 @@
# Task ID: 7
# Title: Integrate execution flow in start command
# Status: pending
# Dependencies: 3, 4
# Priority: high
# Description: Connect all the components to implement the complete execution flow for the start command
# Details:
Update the execute method in the StartCommand class to integrate all the components and implement the complete execution flow as described in the PRD:
1. Validate task exists
2. Build standardized prompt
3. Execute claude-code
4. Check git status for changes
5. Auto-mark task as done if changes detected

```typescript
public async execute(taskId: string): Promise<void> {
	// Validate task exists
	const core = await createTaskMasterCore();
	const task = await core.tasks.getById(parseInt(taskId, 10));

	if (!task) {
		console.error(`Task with ID ${taskId} not found`);
		process.exit(1);
	}

	// Build prompt
	const prompt = this.buildPrompt(taskId);

	// Execute claude-code
	await this.executeClaude(prompt);

	// Check git status
	const changedFiles = await this.checkGitChanges();

	if (changedFiles.length > 0) {
		console.log('\nChanges detected in the following files:');
		changedFiles.forEach(file => console.log(`- ${file}`));

		// Auto-mark task as done
		await this.markTaskAsDone(taskId);
		console.log(`\nTask ${taskId} completed successfully and marked as done.`);
	} else {
		console.warn('\nNo changes detected after claude-code execution. Task not marked as done.');
	}
}
```

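The `checkGitChanges` and `markTaskAsDone` helpers used above are not defined in this task; a minimal sketch of what they could look like, assuming `git status --porcelain` for change detection and the `task-master set-status` CLI for completion (per the PRD):

```typescript
private async checkGitChanges(): Promise<string[]> {
	const { execSync } = await import('node:child_process');
	const output = execSync('git status --porcelain', { encoding: 'utf8' });
	// Each porcelain line is "XY <path>"; keep only the path portion.
	return output
		.split('\n')
		.filter((line) => line.trim().length > 0)
		.map((line) => line.slice(3));
}

private async markTaskAsDone(taskId: string): Promise<void> {
	const { execSync } = await import('node:child_process');
	execSync(`task-master set-status --id=${taskId} --status=done`, { stdio: 'inherit' });
}
```
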
# Test Strategy:
Test the complete execution flow by running the start command with a valid task ID and verifying that all steps are executed correctly. Test with both scenarios: when changes are detected and when no changes are detected.
File diff suppressed because one or more lines are too long
15
.vscode/settings.json
vendored
@@ -10,18 +10,5 @@
},

"json.format.enable": true,
"json.validate.enable": true,
"typescript.tsdk": "node_modules/typescript/lib",
"[typescript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[typescriptreact]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[javascript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[json]": {
"editor.defaultFormatter": "biomejs.biome"
}
"json.validate.enable": true
}
484
CHANGELOG.md
@@ -1,489 +1,5 @@
|
||||
# task-master-ai
|
||||
|
||||
## 0.27.2
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1248](https://github.com/eyaltoledano/claude-task-master/pull/1248) [`044a7bf`](https://github.com/eyaltoledano/claude-task-master/commit/044a7bfc98049298177bc655cf341d7a8b6a0011) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix set-status for subtasks:
|
||||
- Parent tasks are now set as `done` when subtasks are all `done`
|
||||
- Parent tasks are now set as `in-progress` when at least one subtask is `in-progress` or `done`
|
||||
|
||||
## 0.27.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
|
||||
|
||||
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`c911608`](https://github.com/eyaltoledano/claude-task-master/commit/c911608f60454253f4e024b57ca84e5a5a53f65c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix Zed MCP configuration by adding required "source" property
|
||||
- Add "source": "custom" property to task-master-ai server in Zed settings.json
|
||||
|
||||
## 0.27.1-rc.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - One last testing final final
|
||||
|
||||
## 0.27.1-rc.0
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
|
||||
|
||||
## 0.27.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1220](https://github.com/eyaltoledano/claude-task-master/pull/1220) [`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - No longer need --package=task-master-ai in mcp server
|
||||
- A lot of users were having issues with Taskmaster and usually a simple fix was to remove --package from your mcp.json
|
||||
- we now bundle our whole package, so we no longer need the --package
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `task-master start` command for automated task execution with Claude Code
|
||||
- You can now start working on tasks directly by running `task-master start <task-id>` which will automatically launch Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
|
||||
- `task-master start` will automatically detect next-task when no ID is provided.
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Move from javascript to typescript, not a full refactor but we now have a typescript environment and are moving our javascript commands slowly into typescript
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.
|
||||
|
||||
## Setup Instructions
|
||||
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
|
||||
2. **Set the environment variable**:
|
||||
```bash
|
||||
export GROK_CLI_API_KEY="your-api-key-here"
|
||||
```
|
||||
3. **Configure Task Master to use Grok**:
|
||||
```bash
|
||||
task-master models --set-main grok-beta
|
||||
# or
|
||||
task-master models --set-research grok-beta
|
||||
# or
|
||||
task-master models --set-fallback grok-beta
|
||||
```
|
||||
|
||||
## Key Features
|
||||
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
|
||||
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
|
||||
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
|
||||
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
|
||||
|
||||
## Available Models
|
||||
- `grok-beta` - Latest Grok model with codebase context
|
||||
- `grok-vision-beta` - Grok with vision capabilities and codebase context
|
||||
|
||||
The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
|
||||
|
||||
## Credits
|
||||
|
||||
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.
|
||||
|
||||
- [#1225](https://github.com/eyaltoledano/claude-task-master/pull/1225) [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve taskmaster ai provider defaults
|
||||
- moving from main anthropic 3.7 to anthropic sonnet 4
|
||||
- moving from fallback anthropic 3.5 to anthropic 3.7
|
||||
|
||||
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.
|
||||
|
||||
## 0.27.0-rc.2
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
|
||||
|
||||
## 0.27.0-rc.1
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [`255b9f0`](https://github.com/eyaltoledano/claude-task-master/commit/255b9f0334555b0063280abde701445cd62fa11b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Testing one more pre-release iteration
|
||||
|
||||
## 0.27.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Test out the RC
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies [[`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee)]:
|
||||
- @tm/cli@0.27.0-rc.0
|
||||
|
||||
## 0.26.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1133](https://github.com/eyaltoledano/claude-task-master/pull/1133) [`df26c65`](https://github.com/eyaltoledano/claude-task-master/commit/df26c65632000874a73504963b08f18c46283144) Thanks [@neonwatty](https://github.com/neonwatty)! - Restore Taskmaster claude-code commands and move clear commands under /remove to avoid collision with the claude-code /clear command.
|
||||
|
||||
- [#1163](https://github.com/eyaltoledano/claude-task-master/pull/1163) [`37af0f1`](https://github.com/eyaltoledano/claude-task-master/commit/37af0f191227a68d119b7f89a377bf932ee3ac66) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Gemini CLI provider with codebase-aware task generation

Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands.

When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks.

Tasks and subtasks generated by Gemini CLI are now informed by actual codebase analysis, resulting in more accurate and contextual outputs.

- [#1165](https://github.com/eyaltoledano/claude-task-master/pull/1165) [`c4f92f6`](https://github.com/eyaltoledano/claude-task-master/commit/c4f92f6a0aee3435c56eb8d27d9aa9204284833e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add configurable codebase analysis feature flag with multiple configuration sources

Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.

Priority order: .env > MCP session env > .taskmaster/config.json.

Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.

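A minimal sketch of turning the feature off at the highest-priority level (the `.env` file); passing the same variable through the MCP server's `env` block, shown as a comment, is an assumption based on the priority order above:

```bash
# .env (project root): highest priority, overrides MCP session env and config.json
TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false

# Assumed equivalent via the MCP server config's "env" block:
#   "env": { "TASKMASTER_ENABLE_CODEBASE_ANALYSIS": "false" }
```
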
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - feat(move): improve cross-tag move UX and safety
|
||||
- CLI: print "Next Steps" tips after cross-tag moves that used --ignore-dependencies (validate/fix guidance)
|
||||
- CLI: show dedicated help block on ID collisions (destination tag already has the ID)
|
||||
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
|
||||
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
|
||||
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions
|
||||
|
||||
***
|
||||
|
||||
- [#1162](https://github.com/eyaltoledano/claude-task-master/pull/1162) [`4dad2fd`](https://github.com/eyaltoledano/claude-task-master/commit/4dad2fd613ceac56a65ae9d3c1c03092b8860ac9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations
|
||||
|
||||
When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.
|
||||
|
||||
Commands contextualised:
|
||||
- add-task
|
||||
- update-subtask
|
||||
- update-task
|
||||
- update
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples
|
||||
|
||||
## 0.26.0-rc.1
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1165](https://github.com/eyaltoledano/claude-task-master/pull/1165) [`c4f92f6`](https://github.com/eyaltoledano/claude-task-master/commit/c4f92f6a0aee3435c56eb8d27d9aa9204284833e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add configurable codebase analysis feature flag with multiple configuration sources
|
||||
|
||||
Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.
|
||||
|
||||
Priority order: .env > MCP session env > .taskmaster/config.json.
|
||||
|
||||
Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.
|
||||
|
||||
## 0.26.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1163](https://github.com/eyaltoledano/claude-task-master/pull/1163) [`37af0f1`](https://github.com/eyaltoledano/claude-task-master/commit/37af0f191227a68d119b7f89a377bf932ee3ac66) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Gemini CLI provider with codebase-aware task generation

Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands.

When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks.

Tasks and subtasks generated by Gemini CLI are now informed by actual codebase analysis, resulting in more accurate and contextual outputs.

- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - feat(move): improve cross-tag move UX and safety
|
||||
- CLI: print "Next Steps" tips after cross-tag moves that used --ignore-dependencies (validate/fix guidance)
|
||||
- CLI: show dedicated help block on ID collisions (destination tag already has the ID)
|
||||
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
|
||||
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
|
||||
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions
|
||||
|
||||
***
|
||||
|
||||
- [#1162](https://github.com/eyaltoledano/claude-task-master/pull/1162) [`4dad2fd`](https://github.com/eyaltoledano/claude-task-master/commit/4dad2fd613ceac56a65ae9d3c1c03092b8860ac9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations
|
||||
|
||||
When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.
|
||||
|
||||
Commands contextualised:
|
||||
- add-task
|
||||
- update-subtask
|
||||
- update-task
|
||||
- update
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples
|
||||
|
||||
## 0.25.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1152](https://github.com/eyaltoledano/claude-task-master/pull/1152) [`8933557`](https://github.com/eyaltoledano/claude-task-master/commit/89335578ffffc65504b2055c0c85aa7521e5e79b) Thanks [@ben-vargas](https://github.com/ben-vargas)! - fix(claude-code): prevent crash/hang when the optional `@anthropic-ai/claude-code` SDK is missing by guarding `AbortError instanceof` checks and adding explicit SDK presence checks in `doGenerate`/`doStream`. Also bump the optional dependency to `^1.0.88` for improved export consistency.
|
||||
|
||||
Related to JSON truncation handling in #920; this change addresses a separate error-path crash reported in #1142.
|
||||
|
||||
- [#1151](https://github.com/eyaltoledano/claude-task-master/pull/1151) [`db720a9`](https://github.com/eyaltoledano/claude-task-master/commit/db720a954d390bb44838cd021b8813dde8f3d8de) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Temporarily disable streaming for improved model compatibility - will be re-enabled in upcoming release
|
||||
|
||||
## 0.25.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.
|
||||
|
||||
This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.
|
||||
|
||||
## CLI Usage Examples
|
||||
|
||||
Move a single task from one tag to another:
|
||||
|
||||
```bash
# Move task 5 from the backlog tag to the feature-1 tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1

# Move task with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

# Move task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```

Move multiple tasks at once:

```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```
|
||||
|
||||
- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"
|
||||
|
||||
- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
- Outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just one line per task, making it much easier to quickly scan available tasks (see the sketch below)
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options

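A minimal sketch of the flag in use; the output lines are illustrative only, following the one-line format described above rather than copied from a real run:

```bash
task-master list --compact   # equivalent to: task-master list -c

# Illustrative output, one line per task (IDs, titles, and deps are made up):
# 5  pending      Add OAuth login flow      (high)    → 2, 3
# 6  in-progress  Refactor config loader    (medium)  → 5
#   6.1  pending  Extract env parsing       (medium)
# 7  done         Write PRD parser tests    (low)
```
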
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.
|
||||
|
||||
- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`
|
||||
|
||||
- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced
|
||||
|
||||
The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
|
||||
|
||||
## 0.25.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.
|
||||
|
||||
This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.
|
||||
|
||||
## CLI Usage Examples
|
||||
|
||||
Move a single task from one tag to another:
|
||||
|
||||
```bash
# Move task 5 from the backlog tag to the feature-1 tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1

# Move task with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

# Move task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```

Move multiple tasks at once:

```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```
|
||||
|
||||
- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - "Add Kilo Code profile integration with custom modes and MCP configuration"
|
||||
|
||||
- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
|
||||
- outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
|
||||
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
|
||||
- Color-coded status, priority, and dependencies
|
||||
- Smart title truncation and dependency abbreviation
|
||||
- Subtask support with indentation
|
||||
- Full backward compatibility with existing list options
|
||||
|
||||
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.
|
||||
|
||||
- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`
|
||||
|
||||
- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format
|
||||
|
||||
- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced
|
||||
|
||||
The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.
|
||||
|
||||
## 0.24.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1098](https://github.com/eyaltoledano/claude-task-master/pull/1098) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
|
||||
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
|
||||
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
|
||||
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
|
||||
|
||||
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
- Added GPT-5 model to supported models configuration with SWE score of 0.749

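A minimal sketch of switching to it, assuming the model is exposed under the id `gpt-5` in the supported-models configuration (the exact id may differ):

```bash
# Use GPT-5 as the main model
task-master models --set-main gpt-5

# Or keep your current main model and use it for research only
task-master models --set-research gpt-5
```
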
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
|
||||
|
||||
## New Claude Code Agents
|
||||
|
||||
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
|
||||
|
||||
### task-orchestrator
|
||||
|
||||
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
|
||||
- Analyzes task dependencies to identify parallelizable work
|
||||
- Deploys multiple task-executor agents for concurrent execution
|
||||
- Monitors task completion and updates the dependency graph
|
||||
- Automatically identifies and starts newly unblocked tasks
|
||||
|
||||
### task-executor
|
||||
|
||||
Handles the actual implementation of individual tasks:
|
||||
- Executes specific tasks identified by the orchestrator
|
||||
- Works on concrete implementation rather than planning
|
||||
- Updates task status and logs progress
|
||||
- Can work in parallel with other executors on independent tasks
|
||||
|
||||
### task-checker
|
||||
|
||||
Verifies that completed tasks meet their specifications:
|
||||
- Reviews tasks marked as 'review' status
|
||||
- Validates implementation against requirements
|
||||
- Runs tests and checks for best practices
|
||||
- Ensures quality before marking tasks as 'done'
|
||||
|
||||
## Installation

When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.

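A quick way to confirm the install after adding the profile; the individual agent file names below are assumptions, only the `.claude/agents/` location is documented above:

```bash
task-master rules add claude

# The agent definitions should now be present, e.g. (names illustrative):
ls .claude/agents/
# task-orchestrator.md  task-executor.md  task-checker.md
```
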
## Usage Example
|
||||
|
||||
```bash
# In Claude Code, after initializing a project with tasks:

# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete

# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```
|
||||
|
||||
## Benefits
|
||||
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
|
||||
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
|
||||
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
|
||||
- **Progress Tracking**: Real-time updates as tasks are completed
|
||||
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
|
||||
|
||||
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
|
||||
|
||||
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
|
||||
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
|
||||
- Ensures generated JSON includes all fields required by the schema
|
||||
|
||||
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
|
||||
- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
|
||||
- scope_up_task and scope_down_task MCP tools now work properly
|
||||
|
||||
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
|
||||
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
|
||||
- Removed nullable/default modifiers from Zod schemas for broader compatibility
|
||||
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
|
||||
- Perplexity now uses JSON mode for more reliable structured output
|
||||
- Post-processing handles default values separately from schema validation
|
||||
|
||||
## 0.24.0-rc.2
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
|
||||
- Added GPT-5 model to supported models configuration with SWE score of 0.749
|
||||
|
||||
## 0.24.0-rc.1
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1093](https://github.com/eyaltoledano/claude-task-master/pull/1093) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
|
||||
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
|
||||
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
|
||||
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
|
||||
|
||||
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
|
||||
|
||||
## New Claude Code Agents
|
||||
|
||||
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
|
||||
|
||||
### task-orchestrator
|
||||
|
||||
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
|
||||
- Analyzes task dependencies to identify parallelizable work
|
||||
- Deploys multiple task-executor agents for concurrent execution
|
||||
- Monitors task completion and updates the dependency graph
|
||||
- Automatically identifies and starts newly unblocked tasks
|
||||
|
||||
### task-executor
|
||||
|
||||
Handles the actual implementation of individual tasks:
|
||||
- Executes specific tasks identified by the orchestrator
|
||||
- Works on concrete implementation rather than planning
|
||||
- Updates task status and logs progress
|
||||
- Can work in parallel with other executors on independent tasks
|
||||
|
||||
### task-checker
|
||||
|
||||
Verifies that completed tasks meet their specifications:
|
||||
- Reviews tasks marked as 'review' status
|
||||
- Validates implementation against requirements
|
||||
- Runs tests and checks for best practices
|
||||
- Ensures quality before marking tasks as 'done'
|
||||
|
||||
## Installation
|
||||
|
||||
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to `.claude/agents/` directory.
|
||||
|
||||
## Usage Example
|
||||
|
||||
```bash
|
||||
# In Claude Code, after initializing a project with tasks:
|
||||
|
||||
# Use task-orchestrator to analyze and coordinate work
|
||||
# The orchestrator will:
|
||||
# 1. Check task dependencies
|
||||
# 2. Identify tasks that can run in parallel
|
||||
# 3. Deploy executors for available work
|
||||
# 4. Monitor progress and deploy new executors as tasks complete
|
||||
|
||||
# Use task-executor for specific task implementation
|
||||
# When the orchestrator identifies task 2.3 needs work:
|
||||
# The executor will implement that specific task
|
||||
```
|
||||
|
||||
## Benefits
|
||||
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
|
||||
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
|
||||
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
|
||||
- **Progress Tracking**: Real-time updates as tasks are completed
|
||||
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
|
||||
|
||||
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
|
||||
|
||||
## 0.23.1-rc.0
|
||||
|
||||
### Patch Changes
|
||||
|
||||
@@ -3,7 +3,3 @@
|
||||
## Task Master AI Instructions
|
||||
**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
|
||||
@./.taskmaster/CLAUDE.md
|
||||
|
||||
## Changeset Guidelines
|
||||
|
||||
- When creating changesets, remember that it's user-facing, meaning we don't have to get into the specifics of the code, but rather mention what the end-user is getting or fixing from this changeset.
|
||||
README.md
@@ -1,39 +1,14 @@
|
||||
<a name="readme-top"></a>
|
||||
# Task Master [](https://github.com/eyaltoledano/claude-task-master/stargazers)
|
||||
|
||||
<div align='center'>
|
||||
<a href="https://trendshift.io/repositories/13971" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13971" alt="eyaltoledano%2Fclaude-task-master | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
|
||||
</div>
|
||||
[](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml) [](https://badge.fury.io/js/task-master-ai) [](https://discord.gg/taskmasterai) [](LICENSE)
|
||||
|
||||
<p align="center">
|
||||
<a href="https://task-master.dev"><img src="./images/logo.png?raw=true" alt="Taskmaster logo"></a>
|
||||
</p>
|
||||
[](https://www.npmjs.com/package/task-master-ai) [](https://www.npmjs.com/package/task-master-ai) [](https://www.npmjs.com/package/task-master-ai)
|
||||
|
||||
<p align="center">
|
||||
<b>Taskmaster</b>: A task management system for AI-driven development, designed to work seamlessly with any AI chat.
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://discord.gg/taskmasterai" target="_blank"><img src="https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat" alt="Discord"></a> |
|
||||
<a href="https://docs.task-master.dev" target="_blank">Docs</a>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml"><img src="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
|
||||
<a href="https://github.com/eyaltoledano/claude-task-master/stargazers"><img src="https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social" alt="GitHub stars"></a>
|
||||
<a href="https://badge.fury.io/js/task-master-ai"><img src="https://badge.fury.io/js/task-master-ai.svg" alt="npm version"></a>
|
||||
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg" alt="License"></a>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/d18m/task-master-ai?style=flat" alt="NPM Downloads"></a>
|
||||
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dm/task-master-ai?style=flat" alt="NPM Downloads"></a>
|
||||
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dw/task-master-ai?style=flat" alt="NPM Downloads"></a>
|
||||
</p>
|
||||
|
||||
## By [@eyaltoledano](https://x.com/eyaltoledano) & [@RalphEcom](https://x.com/RalphEcom)
|
||||
## By [@eyaltoledano](https://x.com/eyaltoledano), [@RalphEcom](https://x.com/RalphEcom) & [@jasonzhou1993](https://x.com/jasonzhou1993)
|
||||
|
||||
[](https://x.com/eyaltoledano)
|
||||
[](https://x.com/RalphEcom)
|
||||
[](https://x.com/jasonzhou1993)
|
||||
|
||||
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.
|
||||
|
||||
@@ -56,23 +31,10 @@ The following documentation is also available in the `docs` directory:
|
||||
|
||||
#### Quick Install for Cursor 1.0+ (One-Click)
|
||||
|
||||
[](https://cursor.com/en/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
|
||||
[](https://cursor.com/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
|
||||
|
||||
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
|
||||
|
||||
#### Claude Code Quick Install
|
||||
|
||||
For Claude Code users:
|
||||
|
||||
```bash
|
||||
claude mcp add taskmaster-ai -- npx -y task-master-ai
|
||||
```
|
||||
|
||||
Don't forget to add your API keys to the configuration:
- in the root `.env` of your project (see the sketch below)
- in the `env` section of your MCP config for taskmaster-ai

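For the `.env` route, a minimal sketch; the key names match the MCP `env` block shown further down, and you only need the providers you actually use:

```bash
# .env (project root)
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY_HERE
PERPLEXITY_API_KEY=YOUR_PERPLEXITY_API_KEY_HERE
# ...add keys for any other providers you use
```
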
## Requirements
|
||||
|
||||
Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
|
||||
@@ -105,18 +67,17 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
|
||||
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
|
||||
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
|
||||
| **Q CLI** | Global | `~/.aws/amazonq/mcp.json` | | `mcpServers` |
|
||||
|
||||
##### Manual Configuration
|
||||
|
||||
###### Cursor & Windsurf & Q Developer CLI (`mcpServers`)
|
||||
###### Cursor & Windsurf (`mcpServers`)
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
@@ -136,7 +97,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
|
||||
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
|
||||
|
||||
> **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
|
||||
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
|
||||
|
||||
###### VS Code (`servers` + `type`)
|
||||
|
||||
@@ -145,7 +106,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
"servers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
@@ -269,11 +230,6 @@ task-master show 1,3,5
|
||||
# Research fresh information with project context
|
||||
task-master research "What are the latest best practices for JWT authentication?"
|
||||
|
||||
# Move tasks between tags (cross-tag movement)
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
|
||||
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies
|
||||
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
|
||||
|
||||
# Generate task files
|
||||
task-master generate
|
||||
|
||||
|
||||
@@ -1,27 +0,0 @@
|
||||
# @tm/cli
|
||||
|
||||
## null
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@null
|
||||
|
||||
## 0.27.0
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@0.26.1
|
||||
|
||||
## 0.27.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo
|
||||
|
||||
## 1.1.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`cd90b4d`](https://github.com/eyaltoledano/claude-task-master/commit/cd90b4d65fc2f04bdad9fb73aba320b58a124240) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo
|
||||
@@ -1,52 +0,0 @@
|
||||
{
|
||||
"name": "@tm/cli",
|
||||
"description": "Task Master CLI - Command line interface for task management",
|
||||
"type": "module",
|
||||
"private": true,
|
||||
"main": "./dist/index.js",
|
||||
"types": "./src/index.ts",
|
||||
"exports": {
|
||||
".": "./src/index.ts"
|
||||
},
|
||||
"files": ["dist", "README.md"],
|
||||
"scripts": {
|
||||
"typecheck": "tsc --noEmit",
|
||||
"lint": "biome check src",
|
||||
"format": "biome format --write src",
|
||||
"test": "vitest run",
|
||||
"test:watch": "vitest",
|
||||
"test:coverage": "vitest run --coverage",
|
||||
"test:unit": "vitest run -t unit",
|
||||
"test:integration": "vitest run -t integration",
|
||||
"test:e2e": "vitest run --dir tests/e2e",
|
||||
"test:ci": "vitest run --coverage --reporter=dot"
|
||||
},
|
||||
"dependencies": {
|
||||
"@tm/core": "*",
|
||||
"boxen": "^8.0.1",
|
||||
"chalk": "5.6.2",
|
||||
"cli-table3": "^0.6.5",
|
||||
"commander": "^12.1.0",
|
||||
"inquirer": "^12.5.0",
|
||||
"ora": "^8.2.0"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@biomejs/biome": "^1.9.4",
|
||||
"@types/inquirer": "^9.0.3",
|
||||
"@types/node": "^22.10.5",
|
||||
"tsx": "^4.20.4",
|
||||
"typescript": "^5.9.2",
|
||||
"vitest": "^2.1.8"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18.0.0"
|
||||
},
|
||||
"keywords": ["task-master", "cli", "task-management", "productivity"],
|
||||
"author": "",
|
||||
"license": "MIT",
|
||||
"typesVersions": {
|
||||
"*": {
|
||||
"*": ["src/*"]
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,514 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Auth command using Commander's native class pattern
|
||||
* Extends Commander.Command for better integration with the framework
|
||||
*/
|
||||
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import inquirer from 'inquirer';
|
||||
import ora, { type Ora } from 'ora';
|
||||
import open from 'open';
|
||||
import {
|
||||
AuthManager,
|
||||
AuthenticationError,
|
||||
type AuthCredentials
|
||||
} from '@tm/core/auth';
|
||||
import * as ui from '../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Result type from auth command
|
||||
*/
|
||||
export interface AuthResult {
|
||||
success: boolean;
|
||||
action: 'login' | 'logout' | 'status' | 'refresh';
|
||||
credentials?: AuthCredentials;
|
||||
message?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* AuthCommand extending Commander's Command class
|
||||
* This is a thin presentation layer over @tm/core's AuthManager
|
||||
*/
|
||||
export class AuthCommand extends Command {
|
||||
private authManager: AuthManager;
|
||||
private lastResult?: AuthResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'auth');
|
||||
|
||||
// Initialize auth manager
|
||||
this.authManager = AuthManager.getInstance();
|
||||
|
||||
// Configure the command with subcommands
|
||||
this.description('Manage authentication with tryhamster.com');
|
||||
|
||||
// Add subcommands
|
||||
this.addLoginCommand();
|
||||
this.addLogoutCommand();
|
||||
this.addStatusCommand();
|
||||
this.addRefreshCommand();
|
||||
|
||||
// Default action shows help
|
||||
this.action(() => {
|
||||
this.help();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add login subcommand
|
||||
*/
|
||||
private addLoginCommand(): void {
|
||||
this.command('login')
|
||||
.description('Authenticate with tryhamster.com')
|
||||
.action(async () => {
|
||||
await this.executeLogin();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add logout subcommand
|
||||
*/
|
||||
private addLogoutCommand(): void {
|
||||
this.command('logout')
|
||||
.description('Logout and clear credentials')
|
||||
.action(async () => {
|
||||
await this.executeLogout();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add status subcommand
|
||||
*/
|
||||
private addStatusCommand(): void {
|
||||
this.command('status')
|
||||
.description('Display authentication status')
|
||||
.action(async () => {
|
||||
await this.executeStatus();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Add refresh subcommand
|
||||
*/
|
||||
private addRefreshCommand(): void {
|
||||
this.command('refresh')
|
||||
.description('Refresh authentication token')
|
||||
.action(async () => {
|
||||
await this.executeRefresh();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute login command
|
||||
*/
|
||||
private async executeLogin(): Promise<void> {
|
||||
try {
|
||||
const result = await this.performInteractiveAuth();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Exit cleanly after successful authentication
|
||||
// Small delay to ensure all output is flushed
|
||||
setTimeout(() => {
|
||||
process.exit(0);
|
||||
}, 100);
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute logout command
|
||||
*/
|
||||
private async executeLogout(): Promise<void> {
|
||||
try {
|
||||
const result = await this.performLogout();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute status command
|
||||
*/
|
||||
private async executeStatus(): Promise<void> {
|
||||
try {
|
||||
const result = this.displayStatus();
|
||||
this.setLastResult(result);
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute refresh command
|
||||
*/
|
||||
private async executeRefresh(): Promise<void> {
|
||||
try {
|
||||
const result = await this.refreshToken();
|
||||
this.setLastResult(result);
|
||||
|
||||
if (!result.success) {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display authentication status
|
||||
*/
|
||||
private displayStatus(): AuthResult {
|
||||
const credentials = this.authManager.getCredentials();
|
||||
|
||||
console.log(chalk.cyan('\n🔐 Authentication Status\n'));
|
||||
|
||||
if (credentials) {
|
||||
console.log(chalk.green('✓ Authenticated'));
|
||||
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
|
||||
console.log(chalk.gray(` User ID: ${credentials.userId}`));
|
||||
console.log(
|
||||
chalk.gray(` Token Type: ${credentials.tokenType || 'standard'}`)
|
||||
);
|
||||
|
||||
if (credentials.expiresAt) {
|
||||
const expiresAt = new Date(credentials.expiresAt);
|
||||
const now = new Date();
|
||||
const hoursRemaining = Math.floor(
|
||||
(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
|
||||
);
|
||||
|
||||
if (hoursRemaining > 0) {
|
||||
console.log(
|
||||
chalk.gray(
|
||||
` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
|
||||
)
|
||||
);
|
||||
} else {
|
||||
console.log(
|
||||
chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
|
||||
);
|
||||
}
|
||||
} else {
|
||||
console.log(chalk.gray(' Expires: Never (API key)'));
|
||||
}
|
||||
|
||||
console.log(
|
||||
chalk.gray(` Saved: ${new Date(credentials.savedAt).toLocaleString()}`)
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'status',
|
||||
credentials,
|
||||
message: 'Authenticated'
|
||||
};
|
||||
} else {
|
||||
console.log(chalk.yellow('✗ Not authenticated'));
|
||||
console.log(
|
||||
chalk.gray('\n Run "task-master auth login" to authenticate')
|
||||
);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'status',
|
||||
message: 'Not authenticated'
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Perform logout
|
||||
*/
|
||||
private async performLogout(): Promise<AuthResult> {
|
||||
try {
|
||||
await this.authManager.logout();
|
||||
ui.displaySuccess('Successfully logged out');
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'logout',
|
||||
message: 'Successfully logged out'
|
||||
};
|
||||
} catch (error) {
|
||||
const message = `Failed to logout: ${(error as Error).message}`;
|
||||
ui.displayError(message);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'logout',
|
||||
message
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Refresh authentication token
|
||||
*/
|
||||
private async refreshToken(): Promise<AuthResult> {
|
||||
const spinner = ora('Refreshing authentication token...').start();
|
||||
|
||||
try {
|
||||
const credentials = await this.authManager.refreshToken();
|
||||
spinner.succeed('Token refreshed successfully');
|
||||
|
||||
console.log(
|
||||
chalk.gray(
|
||||
` New expiration: ${credentials.expiresAt ? new Date(credentials.expiresAt).toLocaleString() : 'Never'}`
|
||||
)
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'refresh',
|
||||
credentials,
|
||||
message: 'Token refreshed successfully'
|
||||
};
|
||||
} catch (error) {
|
||||
spinner.fail('Failed to refresh token');
|
||||
|
||||
if ((error as AuthenticationError).code === 'NO_REFRESH_TOKEN') {
|
||||
ui.displayWarning(
|
||||
'No refresh token available. Please re-authenticate.'
|
||||
);
|
||||
} else {
|
||||
ui.displayError(`Refresh failed: ${(error as Error).message}`);
|
||||
}
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'refresh',
|
||||
message: `Failed to refresh: ${(error as Error).message}`
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Perform interactive authentication
|
||||
*/
|
||||
private async performInteractiveAuth(): Promise<AuthResult> {
|
||||
ui.displayBanner('Task Master Authentication');
|
||||
|
||||
// Check if already authenticated
|
||||
if (this.authManager.isAuthenticated()) {
|
||||
const { continueAuth } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'continueAuth',
|
||||
message:
|
||||
'You are already authenticated. Do you want to re-authenticate?',
|
||||
default: false
|
||||
}
|
||||
]);
|
||||
|
||||
if (!continueAuth) {
|
||||
const credentials = this.authManager.getCredentials();
|
||||
ui.displaySuccess('Using existing authentication');
|
||||
|
||||
if (credentials) {
|
||||
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
|
||||
console.log(chalk.gray(` User ID: ${credentials.userId}`));
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'login',
|
||||
credentials: credentials || undefined,
|
||||
message: 'Using existing authentication'
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
try {
|
||||
// Direct browser authentication - no menu needed
|
||||
const credentials = await this.authenticateWithBrowser();
|
||||
|
||||
ui.displaySuccess('Authentication successful!');
|
||||
console.log(
|
||||
chalk.gray(` Logged in as: ${credentials.email || credentials.userId}`)
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'login',
|
||||
credentials,
|
||||
message: 'Authentication successful'
|
||||
};
|
||||
} catch (error) {
|
||||
this.handleAuthError(error as AuthenticationError);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
action: 'login',
|
||||
message: `Authentication failed: ${(error as Error).message}`
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Authenticate with browser using OAuth 2.0 with PKCE
|
||||
*/
|
||||
private async authenticateWithBrowser(): Promise<AuthCredentials> {
|
||||
let authSpinner: Ora | null = null;
|
||||
|
||||
try {
|
||||
// Use AuthManager's new unified OAuth flow method with callbacks
|
||||
const credentials = await this.authManager.authenticateWithOAuth({
|
||||
// Callback to handle browser opening
|
||||
openBrowser: async (authUrl) => {
|
||||
await open(authUrl);
|
||||
},
|
||||
timeout: 5 * 60 * 1000, // 5 minutes
|
||||
|
||||
// Callback when auth URL is ready
|
||||
onAuthUrl: (authUrl) => {
|
||||
// Display authentication instructions
|
||||
console.log(chalk.blue.bold('\n🔐 Browser Authentication\n'));
|
||||
console.log(chalk.white(' Opening your browser to authenticate...'));
|
||||
console.log(chalk.gray(" If the browser doesn't open, visit:"));
|
||||
console.log(chalk.cyan.underline(` ${authUrl}\n`));
|
||||
},
|
||||
|
||||
// Callback when waiting for authentication
|
||||
onWaitingForAuth: () => {
|
||||
authSpinner = ora({
|
||||
text: 'Waiting for authentication...',
|
||||
spinner: 'dots'
|
||||
}).start();
|
||||
},
|
||||
|
||||
// Callback on success
|
||||
onSuccess: () => {
|
||||
if (authSpinner) {
|
||||
authSpinner.succeed('Authentication successful!');
|
||||
}
|
||||
},
|
||||
|
||||
// Callback on error
|
||||
onError: () => {
|
||||
if (authSpinner) {
|
||||
authSpinner.fail('Authentication failed');
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
return credentials;
|
||||
} catch (error) {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle authentication errors
|
||||
*/
|
||||
private handleAuthError(error: AuthenticationError): void {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
switch (error.code) {
|
||||
case 'NETWORK_ERROR':
|
||||
ui.displayWarning(
|
||||
'Please check your internet connection and try again.'
|
||||
);
|
||||
break;
|
||||
case 'INVALID_CREDENTIALS':
|
||||
ui.displayWarning('Please check your credentials and try again.');
|
||||
break;
|
||||
case 'AUTH_EXPIRED':
|
||||
ui.displayWarning(
|
||||
'Your session has expired. Please authenticate again.'
|
||||
);
|
||||
break;
|
||||
default:
|
||||
if (process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack || ''));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle general errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
this.handleAuthError(error);
|
||||
} else {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(result: AuthResult): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): AuthResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current authentication status (for programmatic usage)
|
||||
*/
|
||||
isAuthenticated(): boolean {
|
||||
return this.authManager.isAuthenticated();
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current credentials (for programmatic usage)
|
||||
*/
|
||||
getCredentials(): AuthCredentials | null {
|
||||
return this.authManager.getCredentials();
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
// No resources to clean up for auth command
|
||||
// But keeping method for consistency with other commands
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
* This is for gradual migration - allows commands.js to use this
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const authCommand = new AuthCommand();
|
||||
program.addCommand(authCommand);
|
||||
return authCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
* Can also configure the command name if needed
|
||||
*/
|
||||
static register(program: Command, name?: string): AuthCommand {
|
||||
const authCommand = new AuthCommand(name);
|
||||
program.addCommand(authCommand);
|
||||
return authCommand;
|
||||
}
|
||||
}
|
||||
@@ -1,713 +0,0 @@
/**
 * @fileoverview Context command for managing org/brief selection
 * Provides a clean interface for workspace context management
 */

import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { Ora } from 'ora';
import {
	AuthManager,
	AuthenticationError,
	type UserContext
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';

/**
 * Result type from context command
 */
export interface ContextResult {
	success: boolean;
	action: 'show' | 'select-org' | 'select-brief' | 'clear' | 'set';
	context?: UserContext;
	message?: string;
}

/**
 * ContextCommand extending Commander's Command class
 * Manages user's workspace context (org/brief selection)
 */
export class ContextCommand extends Command {
	private authManager: AuthManager;
	private lastResult?: ContextResult;

	constructor(name?: string) {
		super(name || 'context');

		// Initialize auth manager
		this.authManager = AuthManager.getInstance();

		// Configure the command
		this.description(
			'Manage workspace context (organization and brief selection)'
		);

		// Add subcommands
		this.addOrgCommand();
		this.addBriefCommand();
		this.addClearCommand();
		this.addSetCommand();

		// Accept optional positional argument for brief ID or Hamster URL
		this.argument('[briefOrUrl]', 'Brief ID or Hamster brief URL');

		// Default action: if an argument is provided, resolve and set context; else show
		this.action(async (briefOrUrl?: string) => {
			if (briefOrUrl && briefOrUrl.trim().length > 0) {
				await this.executeSetFromBriefInput(briefOrUrl.trim());
				return;
			}
			await this.executeShow();
		});
	}

	/**
	 * Add org selection subcommand
	 */
	private addOrgCommand(): void {
		this.command('org')
			.description('Select an organization')
			.action(async () => {
				await this.executeSelectOrg();
			});
	}

	/**
	 * Add brief selection subcommand
	 */
	private addBriefCommand(): void {
		this.command('brief')
			.description('Select a brief within the current organization')
			.action(async () => {
				await this.executeSelectBrief();
			});
	}

	/**
	 * Add clear subcommand
	 */
	private addClearCommand(): void {
		this.command('clear')
			.description('Clear all context selections')
			.action(async () => {
				await this.executeClear();
			});
	}

	/**
	 * Add set subcommand for direct context setting
	 */
	private addSetCommand(): void {
		this.command('set')
			.description('Set context directly')
			.option('--org <id>', 'Organization ID')
			.option('--org-name <name>', 'Organization name')
			.option('--brief <id>', 'Brief ID')
			.option('--brief-name <name>', 'Brief name')
			.action(async (options) => {
				await this.executeSet(options);
			});
	}

	/**
	 * Execute show current context
	 */
	private async executeShow(): Promise<void> {
		try {
			const result = this.displayContext();
			this.setLastResult(result);
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Display current context
	 */
	private displayContext(): ContextResult {
		// Check authentication first
		if (!this.authManager.isAuthenticated()) {
			console.log(chalk.yellow('✗ Not authenticated'));
			console.log(chalk.gray('\n Run "tm auth login" to authenticate first'));

			return {
				success: false,
				action: 'show',
				message: 'Not authenticated'
			};
		}

		const context = this.authManager.getContext();

		console.log(chalk.cyan('\n🌍 Workspace Context\n'));

		if (context && (context.orgId || context.briefId)) {
			if (context.orgName || context.orgId) {
				console.log(chalk.green('✓ Organization'));
				if (context.orgName) {
					console.log(chalk.white(` ${context.orgName}`));
				}
				if (context.orgId) {
					console.log(chalk.gray(` ID: ${context.orgId}`));
				}
			}

			if (context.briefName || context.briefId) {
				console.log(chalk.green('\n✓ Brief'));
				if (context.briefName) {
					console.log(chalk.white(` ${context.briefName}`));
				}
				if (context.briefId) {
					console.log(chalk.gray(` ID: ${context.briefId}`));
				}
			}

			if (context.updatedAt) {
				console.log(
					chalk.gray(
						`\n Last updated: ${new Date(context.updatedAt).toLocaleString()}`
					)
				);
			}

			return {
				success: true,
				action: 'show',
				context,
				message: 'Context loaded'
			};
		} else {
			console.log(chalk.yellow('✗ No context selected'));
			console.log(
				chalk.gray('\n Run "tm context org" to select an organization')
			);
			console.log(chalk.gray(' Run "tm context brief" to select a brief'));

			return {
				success: true,
				action: 'show',
				message: 'No context selected'
			};
		}
	}
	/**
	 * Execute org selection
	 */
	private async executeSelectOrg(): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			const result = await this.selectOrganization();
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Select an organization interactively
	 */
	private async selectOrganization(): Promise<ContextResult> {
		const spinner = ora('Fetching organizations...').start();

		try {
			// Fetch organizations from API
			const organizations = await this.authManager.getOrganizations();
			spinner.stop();

			if (organizations.length === 0) {
				ui.displayWarning('No organizations available');
				return {
					success: false,
					action: 'select-org',
					message: 'No organizations available'
				};
			}

			// Prompt for selection
			const { selectedOrg } = await inquirer.prompt([
				{
					type: 'list',
					name: 'selectedOrg',
					message: 'Select an organization:',
					choices: organizations.map((org) => ({
						name: org.name,
						value: org
					}))
				}
			]);

			// Update context
			await this.authManager.updateContext({
				orgId: selectedOrg.id,
				orgName: selectedOrg.name,
				// Clear brief when changing org
				briefId: undefined,
				briefName: undefined
			});

			ui.displaySuccess(`Selected organization: ${selectedOrg.name}`);

			return {
				success: true,
				action: 'select-org',
				context: this.authManager.getContext() || undefined,
				message: `Selected organization: ${selectedOrg.name}`
			};
		} catch (error) {
			spinner.fail('Failed to fetch organizations');
			throw error;
		}
	}

	/**
	 * Execute brief selection
	 */
	private async executeSelectBrief(): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			// Check if org is selected
			const context = this.authManager.getContext();
			if (!context?.orgId) {
				ui.displayError(
					'No organization selected. Run "tm context org" first.'
				);
				process.exit(1);
			}

			const result = await this.selectBrief(context.orgId);
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Select a brief within the current organization
	 */
	private async selectBrief(orgId: string): Promise<ContextResult> {
		const spinner = ora('Fetching briefs...').start();

		try {
			// Fetch briefs from API
			const briefs = await this.authManager.getBriefs(orgId);
			spinner.stop();

			if (briefs.length === 0) {
				ui.displayWarning('No briefs available in this organization');
				return {
					success: false,
					action: 'select-brief',
					message: 'No briefs available'
				};
			}

			// Prompt for selection
			const { selectedBrief } = await inquirer.prompt([
				{
					type: 'list',
					name: 'selectedBrief',
					message: 'Select a brief:',
					choices: [
						{ name: '(No brief - organization level)', value: null },
						...briefs.map((brief) => ({
							name: `Brief ${brief.id} (${new Date(brief.createdAt).toLocaleDateString()})`,
							value: brief
						}))
					]
				}
			]);

			if (selectedBrief) {
				// Update context with brief
				const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
				await this.authManager.updateContext({
					briefId: selectedBrief.id,
					briefName: briefName
				});

				ui.displaySuccess(`Selected brief: ${briefName}`);

				return {
					success: true,
					action: 'select-brief',
					context: this.authManager.getContext() || undefined,
					message: `Selected brief: ${selectedBrief.name}`
				};
			} else {
				// Clear brief selection
				await this.authManager.updateContext({
					briefId: undefined,
					briefName: undefined
				});

				ui.displaySuccess('Cleared brief selection (organization level)');

				return {
					success: true,
					action: 'select-brief',
					context: this.authManager.getContext() || undefined,
					message: 'Cleared brief selection'
				};
			}
		} catch (error) {
			spinner.fail('Failed to fetch briefs');
			throw error;
		}
	}
	/**
	 * Execute clear context
	 */
	private async executeClear(): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			const result = await this.clearContext();
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Clear all context selections
	 */
	private async clearContext(): Promise<ContextResult> {
		try {
			await this.authManager.clearContext();
			ui.displaySuccess('Context cleared');

			return {
				success: true,
				action: 'clear',
				message: 'Context cleared'
			};
		} catch (error) {
			ui.displayError(`Failed to clear context: ${(error as Error).message}`);

			return {
				success: false,
				action: 'clear',
				message: `Failed to clear context: ${(error as Error).message}`
			};
		}
	}

	/**
	 * Execute set context with options
	 */
	private async executeSet(options: any): Promise<void> {
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			const result = await this.setContext(options);
			this.setLastResult(result);

			if (!result.success) {
				process.exit(1);
			}
		} catch (error: any) {
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Execute setting context from a brief ID or Hamster URL
	 */
	private async executeSetFromBriefInput(briefOrUrl: string): Promise<void> {
		let spinner: Ora | undefined;
		try {
			// Check authentication
			if (!this.authManager.isAuthenticated()) {
				ui.displayError('Not authenticated. Run "tm auth login" first.');
				process.exit(1);
			}

			spinner = ora('Resolving brief...');
			spinner.start();

			// Extract brief ID
			const briefId = this.extractBriefId(briefOrUrl);
			if (!briefId) {
				spinner.fail('Could not extract a brief ID from the provided input');
				ui.displayError(
					`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
				);
				process.exit(1);
			}

			// Fetch brief and resolve its organization
			const brief = await this.authManager.getBrief(briefId);
			if (!brief) {
				spinner.fail('Brief not found or you do not have access');
				process.exit(1);
			}

			// Fetch org to get a friendly name (optional)
			let orgName: string | undefined;
			try {
				const org = await this.authManager.getOrganization(brief.accountId);
				orgName = org?.name;
			} catch {
				// Non-fatal if org lookup fails
			}

			// Update context: set org and brief
			const briefName = `Brief ${brief.id.slice(0, 8)}`;
			await this.authManager.updateContext({
				orgId: brief.accountId,
				orgName,
				briefId: brief.id,
				briefName
			});

			spinner.succeed('Context set from brief');
			console.log(
				chalk.gray(
					` Organization: ${orgName || brief.accountId}\n Brief: ${briefName}`
				)
			);

			this.setLastResult({
				success: true,
				action: 'set',
				context: this.authManager.getContext() || undefined,
				message: 'Context set from brief'
			});
		} catch (error: any) {
			try {
				if (spinner?.isSpinning) spinner.stop();
			} catch {}
			this.handleError(error);
			process.exit(1);
		}
	}
	/**
	 * Extract a brief ID from raw input (ID or Hamster URL)
	 */
	private extractBriefId(input: string): string | null {
		const raw = input?.trim() ?? '';
		if (!raw) return null;

		const parseUrl = (s: string): URL | null => {
			try {
				return new URL(s);
			} catch {}
			try {
				return new URL(`https://${s}`);
			} catch {}
			return null;
		};

		const fromParts = (path: string): string | null => {
			const parts = path.split('/').filter(Boolean);
			const briefsIdx = parts.lastIndexOf('briefs');
			const candidate =
				briefsIdx >= 0 && parts.length > briefsIdx + 1
					? parts[briefsIdx + 1]
					: parts[parts.length - 1];
			return candidate?.trim() || null;
		};

		// 1) URL (absolute or scheme‑less)
		const url = parseUrl(raw);
		if (url) {
			const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
			const candidate = (qId || fromParts(url.pathname)) ?? null;
			if (candidate) {
				// Light sanity check; let API be the final validator
				if (this.isLikelyId(candidate) || candidate.length >= 8)
					return candidate;
			}
		}

		// 2) Looks like a path without scheme
		if (raw.includes('/')) {
			const candidate = fromParts(raw);
			if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
				return candidate;
			}
		}

		// 3) Fallback: raw token
		return raw;
	}

	/**
	 * Heuristic to check if a string looks like a brief ID (UUID-like)
	 */
	private isLikelyId(value: string): boolean {
		const uuidRegex =
			/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
		const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i; // ULID
		const slugRegex = /^[A-Za-z0-9_-]{16,}$/; // general token
		return (
			uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
		);
	}

	/**
	 * Set context directly from options
	 */
	private async setContext(options: any): Promise<ContextResult> {
		try {
			const context: Partial<UserContext> = {};

			if (options.org) {
				context.orgId = options.org;
			}
			if (options.orgName) {
				context.orgName = options.orgName;
			}
			if (options.brief) {
				context.briefId = options.brief;
			}
			if (options.briefName) {
				context.briefName = options.briefName;
			}

			if (Object.keys(context).length === 0) {
				ui.displayWarning('No context options provided');
				return {
					success: false,
					action: 'set',
					message: 'No context options provided'
				};
			}

			await this.authManager.updateContext(context);
			ui.displaySuccess('Context updated');

			// Display what was set
			if (context.orgName || context.orgId) {
				console.log(
					chalk.gray(` Organization: ${context.orgName || context.orgId}`)
				);
			}
			if (context.briefName || context.briefId) {
				console.log(
					chalk.gray(` Brief: ${context.briefName || context.briefId}`)
				);
			}

			return {
				success: true,
				action: 'set',
				context: this.authManager.getContext() || undefined,
				message: 'Context updated'
			};
		} catch (error) {
			ui.displayError(`Failed to set context: ${(error as Error).message}`);

			return {
				success: false,
				action: 'set',
				message: `Failed to set context: ${(error as Error).message}`
			};
		}
	}

	/**
	 * Handle errors
	 */
	private handleError(error: any): void {
		if (error instanceof AuthenticationError) {
			console.error(chalk.red(`\n✗ ${error.message}`));

			if (error.code === 'NOT_AUTHENTICATED') {
				ui.displayWarning('Please authenticate first: tm auth login');
			}
		} else {
			const msg = error?.message ?? String(error);
			console.error(chalk.red(`Error: ${msg}`));

			if (error.stack && process.env.DEBUG) {
				console.error(chalk.gray(error.stack));
			}
		}
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(result: ContextResult): void {
		this.lastResult = result;
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): ContextResult | undefined {
		return this.lastResult;
	}

	/**
	 * Get current context (for programmatic usage)
	 */
	getContext(): UserContext | null {
		return this.authManager.getContext();
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		// No resources to clean up for context command
	}

	/**
	 * Static method to register this command on an existing program
	 */
	static registerOn(program: Command): Command {
		const contextCommand = new ContextCommand();
		program.addCommand(contextCommand);
		return contextCommand;
	}

	/**
	 * Alternative registration that returns the command for chaining
	 */
	static register(program: Command, name?: string): ContextCommand {
		const contextCommand = new ContextCommand(name);
		program.addCommand(contextCommand);
		return contextCommand;
	}
}
@@ -1,488 +0,0 @@
/**
 * @fileoverview ListTasks command using Commander's native class pattern
 * Extends Commander.Command for better integration with the framework
 */

import { Command } from 'commander';
import chalk from 'chalk';
import {
	createTaskMasterCore,
	type Task,
	type TaskStatus,
	type TaskMasterCore,
	TASK_STATUSES,
	OUTPUT_FORMATS,
	STATUS_ICONS,
	type OutputFormat
} from '@tm/core';
import type { StorageType } from '@tm/core/types';
import * as ui from '../utils/ui.js';
import {
	displayHeader,
	displayDashboards,
	calculateTaskStatistics,
	calculateSubtaskStatistics,
	calculateDependencyStatistics,
	getPriorityBreakdown,
	displayRecommendedNextTask,
	getTaskDescription,
	displaySuggestedNextSteps,
	type NextTaskInfo
} from '../ui/index.js';

/**
 * Options interface for the list command
 */
export interface ListCommandOptions {
	status?: string;
	tag?: string;
	withSubtasks?: boolean;
	format?: OutputFormat;
	silent?: boolean;
	project?: string;
}

/**
 * Result type from list command
 */
export interface ListTasksResult {
	tasks: Task[];
	total: number;
	filtered: number;
	tag?: string;
	storageType: Exclude<StorageType, 'auto'>;
}

/**
 * ListTasksCommand extending Commander's Command class
 * This is a thin presentation layer over @tm/core
 */
export class ListTasksCommand extends Command {
	private tmCore?: TaskMasterCore;
	private lastResult?: ListTasksResult;

	constructor(name?: string) {
		super(name || 'list');

		// Configure the command
		this.description('List tasks with optional filtering')
			.alias('ls')
			.option('-s, --status <status>', 'Filter by status (comma-separated)')
			.option('-t, --tag <tag>', 'Filter by tag')
			.option('--with-subtasks', 'Include subtasks in the output')
			.option(
				'-f, --format <format>',
				'Output format (text, json, compact)',
				'text'
			)
			.option('--silent', 'Suppress output (useful for programmatic usage)')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.action(async (options: ListCommandOptions) => {
				await this.executeCommand(options);
			});
	}

	/**
	 * Execute the list command
	 */
	private async executeCommand(options: ListCommandOptions): Promise<void> {
		try {
			// Validate options
			if (!this.validateOptions(options)) {
				process.exit(1);
			}

			// Initialize tm-core
			await this.initializeCore(options.project || process.cwd());

			// Get tasks from core
			const result = await this.getTasks(options);

			// Store result for programmatic access
			this.setLastResult(result);

			// Display results
			if (!options.silent) {
				this.displayResults(result, options);
			}
		} catch (error: any) {
			const msg = error?.getSanitizedDetails?.() ?? {
				message: error?.message ?? String(error)
			};
			console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
			if (error.stack && process.env.DEBUG) {
				console.error(chalk.gray(error.stack));
			}
			process.exit(1);
		}
	}
	/**
	 * Validate command options
	 */
	private validateOptions(options: ListCommandOptions): boolean {
		// Validate format
		if (
			options.format &&
			!OUTPUT_FORMATS.includes(options.format as OutputFormat)
		) {
			console.error(chalk.red(`Invalid format: ${options.format}`));
			console.error(chalk.gray(`Valid formats: ${OUTPUT_FORMATS.join(', ')}`));
			return false;
		}

		// Validate status
		if (options.status) {
			const statuses = options.status.split(',').map((s: string) => s.trim());

			for (const status of statuses) {
				if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
					console.error(chalk.red(`Invalid status: ${status}`));
					console.error(
						chalk.gray(`Valid statuses: ${TASK_STATUSES.join(', ')}`)
					);
					return false;
				}
			}
		}

		return true;
	}

	/**
	 * Initialize TaskMasterCore
	 */
	private async initializeCore(projectRoot: string): Promise<void> {
		if (!this.tmCore) {
			this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
		}
	}

	/**
	 * Get tasks from tm-core
	 */
	private async getTasks(
		options: ListCommandOptions
	): Promise<ListTasksResult> {
		if (!this.tmCore) {
			throw new Error('TaskMasterCore not initialized');
		}

		// Build filter
		const filter =
			options.status && options.status !== 'all'
				? {
						status: options.status
							.split(',')
							.map((s: string) => s.trim() as TaskStatus)
					}
				: undefined;

		// Call tm-core
		const result = await this.tmCore.getTaskList({
			tag: options.tag,
			filter,
			includeSubtasks: options.withSubtasks
		});

		return result as ListTasksResult;
	}

	/**
	 * Display results based on format
	 */
	private displayResults(
		result: ListTasksResult,
		options: ListCommandOptions
	): void {
		const format = (options.format || 'text') as OutputFormat | 'text';

		switch (format) {
			case 'json':
				this.displayJson(result);
				break;

			case 'compact':
				this.displayCompact(result.tasks, options.withSubtasks);
				break;

			case 'text':
			default:
				this.displayText(result, options.withSubtasks);
				break;
		}
	}

	/**
	 * Display in JSON format
	 */
	private displayJson(data: ListTasksResult): void {
		console.log(
			JSON.stringify(
				{
					tasks: data.tasks,
					metadata: {
						total: data.total,
						filtered: data.filtered,
						tag: data.tag,
						storageType: data.storageType
					}
				},
				null,
				2
			)
		);
	}

	/**
	 * Display in compact format
	 */
	private displayCompact(tasks: Task[], withSubtasks?: boolean): void {
		tasks.forEach((task) => {
			const icon = STATUS_ICONS[task.status];
			console.log(`${chalk.cyan(task.id)} ${icon} ${task.title}`);

			if (withSubtasks && task.subtasks?.length) {
				task.subtasks.forEach((subtask) => {
					const subIcon = STATUS_ICONS[subtask.status];
					console.log(
						` ${chalk.gray(`${task.id}.${subtask.id}`)} ${subIcon} ${chalk.gray(subtask.title)}`
					);
				});
			}
		});
	}
	/**
	 * Display in text format with tables
	 */
	private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
		const { tasks, tag } = data;

		// Get file path for display
		const filePath = this.tmCore ? `.taskmaster/tasks/tasks.json` : undefined;

		// Display header without banner (banner already shown by main CLI)
		displayHeader({
			tag: tag || 'master',
			filePath: filePath
		});

		// No tasks message
		if (tasks.length === 0) {
			ui.displayWarning('No tasks found matching the criteria.');
			return;
		}

		// Calculate statistics
		const taskStats = calculateTaskStatistics(tasks);
		const subtaskStats = calculateSubtaskStatistics(tasks);
		const depStats = calculateDependencyStatistics(tasks);
		const priorityBreakdown = getPriorityBreakdown(tasks);

		// Find next task following the same logic as findNextTask
		const nextTask = this.findNextTask(tasks);

		// Display dashboard boxes
		displayDashboards(
			taskStats,
			subtaskStats,
			priorityBreakdown,
			depStats,
			nextTask
		);

		// Task table - no title, just show the table directly
		console.log(
			ui.createTaskTable(tasks, {
				showSubtasks: withSubtasks,
				showDependencies: true,
				showComplexity: true // Enable complexity column
			})
		);

		// Display recommended next task section immediately after table
		if (nextTask) {
			// Find the full task object to get description
			const fullTask = tasks.find((t) => String(t.id) === String(nextTask.id));
			const description = fullTask ? getTaskDescription(fullTask) : undefined;

			displayRecommendedNextTask({
				...nextTask,
				status: 'pending', // Next task is typically pending
				description
			});
		} else {
			displayRecommendedNextTask(undefined);
		}

		// Display suggested next steps at the end
		displaySuggestedNextSteps();
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(result: ListTasksResult): void {
		this.lastResult = result;
	}
	/**
	 * Find the next task to work on
	 * Implements the same logic as scripts/modules/task-manager/find-next-task.js
	 */
	private findNextTask(tasks: Task[]): NextTaskInfo | undefined {
		const priorityValues: Record<string, number> = {
			critical: 4,
			high: 3,
			medium: 2,
			low: 1
		};

		// Build set of completed task IDs (including subtasks)
		const completedIds = new Set<string>();
		tasks.forEach((t) => {
			if (t.status === 'done' || t.status === 'completed') {
				completedIds.add(String(t.id));
			}
			if (t.subtasks) {
				t.subtasks.forEach((st) => {
					if (st.status === 'done' || st.status === 'completed') {
						completedIds.add(`${t.id}.${st.id}`);
					}
				});
			}
		});

		// First, look for eligible subtasks in in-progress parent tasks
		const candidateSubtasks: NextTaskInfo[] = [];

		tasks
			.filter(
				(t) => t.status === 'in-progress' && t.subtasks && t.subtasks.length > 0
			)
			.forEach((parent) => {
				parent.subtasks!.forEach((st) => {
					const stStatus = (st.status || 'pending').toLowerCase();
					if (stStatus !== 'pending' && stStatus !== 'in-progress') return;

					// Check if dependencies are satisfied
					const fullDeps =
						st.dependencies?.map((d) => {
							// Handle both numeric and string IDs
							if (typeof d === 'string' && d.includes('.')) {
								return d;
							}
							return `${parent.id}.${d}`;
						}) ?? [];

					const depsSatisfied =
						fullDeps.length === 0 ||
						fullDeps.every((depId) => completedIds.has(String(depId)));

					if (depsSatisfied) {
						candidateSubtasks.push({
							id: `${parent.id}.${st.id}`,
							title: st.title || `Subtask ${st.id}`,
							priority: st.priority || parent.priority || 'medium',
							dependencies: fullDeps.map((d) => String(d))
						});
					}
				});
			});

		if (candidateSubtasks.length > 0) {
			// Sort by priority, then by dependencies count, then by ID
			candidateSubtasks.sort((a, b) => {
				const pa = priorityValues[a.priority || 'medium'] ?? 2;
				const pb = priorityValues[b.priority || 'medium'] ?? 2;
				if (pb !== pa) return pb - pa;

				const depCountA = a.dependencies?.length || 0;
				const depCountB = b.dependencies?.length || 0;
				if (depCountA !== depCountB) return depCountA - depCountB;

				return String(a.id).localeCompare(String(b.id));
			});
			return candidateSubtasks[0];
		}

		// Fall back to finding eligible top-level tasks
		const eligibleTasks = tasks.filter((task) => {
			// Skip non-eligible statuses
			const status = (task.status || 'pending').toLowerCase();
			if (status !== 'pending' && status !== 'in-progress') return false;

			// Check dependencies
			const deps = task.dependencies || [];
			const depsSatisfied =
				deps.length === 0 ||
				deps.every((depId) => completedIds.has(String(depId)));

			return depsSatisfied;
		});

		if (eligibleTasks.length === 0) return undefined;

		// Sort eligible tasks
		eligibleTasks.sort((a, b) => {
			// Priority (higher first)
			const pa = priorityValues[a.priority || 'medium'] ?? 2;
			const pb = priorityValues[b.priority || 'medium'] ?? 2;
			if (pb !== pa) return pb - pa;

			// Dependencies count (fewer first)
			const depCountA = a.dependencies?.length || 0;
			const depCountB = b.dependencies?.length || 0;
			if (depCountA !== depCountB) return depCountA - depCountB;

			// ID (lower first)
			return Number(a.id) - Number(b.id);
		});

		const nextTask = eligibleTasks[0];
		return {
			id: nextTask.id,
			title: nextTask.title,
			priority: nextTask.priority,
			dependencies: nextTask.dependencies?.map((d) => String(d))
		};
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): ListTasksResult | undefined {
		return this.lastResult;
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		if (this.tmCore) {
			await this.tmCore.close();
			this.tmCore = undefined;
		}
	}

	/**
	 * Static method to register this command on an existing program
	 * This is for gradual migration - allows commands.js to use this
	 */
	static registerOn(program: Command): Command {
		const listCommand = new ListTasksCommand();
		program.addCommand(listCommand);
		return listCommand;
	}

	/**
	 * Alternative registration that returns the command for chaining
	 * Can also configure the command name if needed
	 */
	static register(program: Command, name?: string): ListTasksCommand {
		const listCommand = new ListTasksCommand(name);
		program.addCommand(listCommand);
		return listCommand;
	}
}
@@ -1,318 +0,0 @@
/**
 * @fileoverview SetStatusCommand using Commander's native class pattern
 * Extends Commander.Command for better integration with the framework
 */

import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import {
	createTaskMasterCore,
	type TaskMasterCore,
	type TaskStatus
} from '@tm/core';
import type { StorageType } from '@tm/core/types';

/**
 * Valid task status values for validation
 */
const VALID_TASK_STATUSES: TaskStatus[] = [
	'pending',
	'in-progress',
	'done',
	'deferred',
	'cancelled',
	'blocked',
	'review'
];

/**
 * Options interface for the set-status command
 */
export interface SetStatusCommandOptions {
	id?: string;
	status?: TaskStatus;
	format?: 'text' | 'json';
	silent?: boolean;
	project?: string;
}

/**
 * Result type from set-status command
 */
export interface SetStatusResult {
	success: boolean;
	updatedTasks: Array<{
		taskId: string;
		oldStatus: TaskStatus;
		newStatus: TaskStatus;
	}>;
	storageType: Exclude<StorageType, 'auto'>;
}

/**
 * SetStatusCommand extending Commander's Command class
 * This is a thin presentation layer over @tm/core
 */
export class SetStatusCommand extends Command {
	private tmCore?: TaskMasterCore;
	private lastResult?: SetStatusResult;

	constructor(name?: string) {
		super(name || 'set-status');

		// Configure the command
		this.description('Update the status of one or more tasks')
			.requiredOption(
				'-i, --id <id>',
				'Task ID(s) to update (comma-separated for multiple, supports subtasks like 5.2)'
			)
			.requiredOption(
				'-s, --status <status>',
				`New status (${VALID_TASK_STATUSES.join(', ')})`
			)
			.option('-f, --format <format>', 'Output format (text, json)', 'text')
			.option('--silent', 'Suppress output (useful for programmatic usage)')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.action(async (options: SetStatusCommandOptions) => {
				await this.executeCommand(options);
			});
	}

	/**
	 * Execute the set-status command
	 */
	private async executeCommand(
		options: SetStatusCommandOptions
	): Promise<void> {
		try {
			// Validate required options
			if (!options.id) {
				console.error(chalk.red('Error: Task ID is required. Use -i or --id'));
				process.exit(1);
			}

			if (!options.status) {
				console.error(
					chalk.red('Error: Status is required. Use -s or --status')
				);
				process.exit(1);
			}

			// Validate status
			if (!VALID_TASK_STATUSES.includes(options.status)) {
				console.error(
					chalk.red(
						`Error: Invalid status "${options.status}". Valid options: ${VALID_TASK_STATUSES.join(', ')}`
					)
				);
				process.exit(1);
			}

			// Initialize TaskMaster core
			this.tmCore = await createTaskMasterCore({
				projectPath: options.project || process.cwd()
			});

			// Parse task IDs (handle comma-separated values)
			const taskIds = options.id.split(',').map((id) => id.trim());

			// Update each task
			const updatedTasks: Array<{
				taskId: string;
				oldStatus: TaskStatus;
				newStatus: TaskStatus;
			}> = [];

			for (const taskId of taskIds) {
				try {
					const result = await this.tmCore.updateTaskStatus(
						taskId,
						options.status
					);
					updatedTasks.push({
						taskId: result.taskId,
						oldStatus: result.oldStatus,
						newStatus: result.newStatus
					});
				} catch (error) {
					const errorMessage =
						error instanceof Error ? error.message : String(error);

					if (!options.silent) {
						console.error(
							chalk.red(`Failed to update task ${taskId}: ${errorMessage}`)
						);
					}
					if (options.format === 'json') {
						console.log(
							JSON.stringify({
								success: false,
								error: errorMessage,
								taskId,
								timestamp: new Date().toISOString()
							})
						);
					}
					process.exit(1);
				}
			}

			// Store result for potential reuse
			this.lastResult = {
				success: true,
				updatedTasks,
				storageType: this.tmCore.getStorageType() as Exclude<
					StorageType,
					'auto'
				>
			};

			// Display results
			this.displayResults(this.lastResult, options);
		} catch (error) {
			const errorMessage =
				error instanceof Error ? error.message : 'Unknown error occurred';

			if (!options.silent) {
				console.error(chalk.red(`Error: ${errorMessage}`));
			}

			if (options.format === 'json') {
				console.log(JSON.stringify({ success: false, error: errorMessage }));
			}

			process.exit(1);
		} finally {
			// Clean up resources
			if (this.tmCore) {
				await this.tmCore.close();
			}
		}
	}
	/**
	 * Display results based on format
	 */
	private displayResults(
		result: SetStatusResult,
		options: SetStatusCommandOptions
	): void {
		const format = options.format || 'text';

		switch (format) {
			case 'json':
				console.log(JSON.stringify(result, null, 2));
				break;

			case 'text':
			default:
				if (!options.silent) {
					this.displayTextResults(result);
				}
				break;
		}
	}

	/**
	 * Display results in text format
	 */
	private displayTextResults(result: SetStatusResult): void {
		if (result.updatedTasks.length === 1) {
			// Single task update
			const update = result.updatedTasks[0];
			console.log(
				boxen(
					chalk.white.bold(`✅ Successfully updated task ${update.taskId}`) +
						'\n\n' +
						`${chalk.blue('From:')} ${this.getStatusDisplay(update.oldStatus)}\n` +
						`${chalk.blue('To:')} ${this.getStatusDisplay(update.newStatus)}`,
					{
						padding: 1,
						borderColor: 'green',
						borderStyle: 'round',
						margin: { top: 1 }
					}
				)
			);
		} else {
			// Multiple task updates
			console.log(
				boxen(
					chalk.white.bold(
						`✅ Successfully updated ${result.updatedTasks.length} tasks`
					) +
						'\n\n' +
						result.updatedTasks
							.map(
								(update) =>
									`${chalk.cyan(update.taskId)}: ${this.getStatusDisplay(update.oldStatus)} → ${this.getStatusDisplay(update.newStatus)}`
							)
							.join('\n'),
					{
						padding: 1,
						borderColor: 'green',
						borderStyle: 'round',
						margin: { top: 1 }
					}
				)
			);
		}

		// Show storage info
		console.log(chalk.gray(`\nUsing ${result.storageType} storage`));
	}

	/**
	 * Get colored status display
	 */
	private getStatusDisplay(status: TaskStatus): string {
		const statusColors: Record<TaskStatus, (text: string) => string> = {
			pending: chalk.yellow,
			'in-progress': chalk.blue,
			done: chalk.green,
			deferred: chalk.gray,
			cancelled: chalk.red,
			blocked: chalk.red,
			review: chalk.magenta,
			completed: chalk.green
		};

		const colorFn = statusColors[status] || chalk.white;
		return colorFn(status);
	}

	/**
	 * Get the last command result (useful for testing or chaining)
	 */
	getLastResult(): SetStatusResult | undefined {
		return this.lastResult;
	}

	/**
	 * Static method to register this command on an existing program
	 * This is for gradual migration - allows commands.js to use this
	 */
	static registerOn(program: Command): Command {
		const setStatusCommand = new SetStatusCommand();
		program.addCommand(setStatusCommand);
		return setStatusCommand;
	}

	/**
	 * Alternative registration that returns the command for chaining
	 * Can also configure the command name if needed
	 */
	static register(program: Command, name?: string): SetStatusCommand {
		const setStatusCommand = new SetStatusCommand(name);
		program.addCommand(setStatusCommand);
		return setStatusCommand;
	}
}

/**
 * Factory function to create and configure the set-status command
 */
export function createSetStatusCommand(): SetStatusCommand {
	return new SetStatusCommand();
}
@@ -1,343 +0,0 @@
/**
 * @fileoverview ShowCommand using Commander's native class pattern
 * Extends Commander.Command for better integration with the framework
 */

import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
import type { StorageType } from '@tm/core/types';
import * as ui from '../utils/ui.js';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';

/**
 * Options interface for the show command
 */
export interface ShowCommandOptions {
	id?: string;
	status?: string;
	format?: 'text' | 'json';
	silent?: boolean;
	project?: string;
}

/**
 * Result type from show command
 */
export interface ShowTaskResult {
	task: Task | null;
	found: boolean;
	storageType: Exclude<StorageType, 'auto'>;
}

/**
 * Result type for multiple tasks
 */
export interface ShowMultipleTasksResult {
	tasks: Task[];
	notFound: string[];
	storageType: Exclude<StorageType, 'auto'>;
}

/**
 * ShowCommand extending Commander's Command class
 * This is a thin presentation layer over @tm/core
 */
export class ShowCommand extends Command {
	private tmCore?: TaskMasterCore;
	private lastResult?: ShowTaskResult | ShowMultipleTasksResult;

	constructor(name?: string) {
		super(name || 'show');

		// Configure the command
		this.description('Display detailed information about one or more tasks')
			.argument('[id]', 'Task ID(s) to show (comma-separated for multiple)')
			.option(
				'-i, --id <id>',
				'Task ID(s) to show (comma-separated for multiple)'
			)
			.option('-s, --status <status>', 'Filter subtasks by status')
			.option('-f, --format <format>', 'Output format (text, json)', 'text')
			.option('--silent', 'Suppress output (useful for programmatic usage)')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.action(
				async (taskId: string | undefined, options: ShowCommandOptions) => {
					await this.executeCommand(taskId, options);
				}
			);
	}

	/**
	 * Execute the show command
	 */
	private async executeCommand(
		taskId: string | undefined,
		options: ShowCommandOptions
	): Promise<void> {
		try {
			// Validate options
			if (!this.validateOptions(options)) {
				process.exit(1);
			}

			// Initialize tm-core
			await this.initializeCore(options.project || process.cwd());

			// Get the task ID from argument or option
			const idArg = taskId || options.id;
			if (!idArg) {
				console.error(chalk.red('Error: Please provide a task ID'));
				process.exit(1);
			}

			// Check if multiple IDs are provided (comma-separated)
			const taskIds = idArg
				.split(',')
				.map((id) => id.trim())
				.filter((id) => id.length > 0);

			// Get tasks from core
			const result =
				taskIds.length > 1
					? await this.getMultipleTasks(taskIds, options)
					: await this.getSingleTask(taskIds[0], options);

			// Store result for programmatic access
			this.setLastResult(result);

			// Display results
			if (!options.silent) {
				this.displayResults(result, options);
			}
		} catch (error: any) {
			const msg = error?.getSanitizedDetails?.() ?? {
				message: error?.message ?? String(error)
			};
			console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
			if (error.stack && process.env.DEBUG) {
				console.error(chalk.gray(error.stack));
			}
			process.exit(1);
		}
	}

	/**
	 * Validate command options
	 */
	private validateOptions(options: ShowCommandOptions): boolean {
		// Validate format
		if (options.format && !['text', 'json'].includes(options.format)) {
			console.error(chalk.red(`Invalid format: ${options.format}`));
			console.error(chalk.gray(`Valid formats: text, json`));
			return false;
		}

		return true;
	}

	/**
	 * Initialize TaskMasterCore
	 */
	private async initializeCore(projectRoot: string): Promise<void> {
		if (!this.tmCore) {
			this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
		}
	}

	/**
	 * Get a single task from tm-core
	 */
	private async getSingleTask(
		taskId: string,
		_options: ShowCommandOptions
	): Promise<ShowTaskResult> {
		if (!this.tmCore) {
			throw new Error('TaskMasterCore not initialized');
		}

		// Get the task
		const task = await this.tmCore.getTask(taskId);

		// Get storage type
		const storageType = this.tmCore.getStorageType();

		return {
			task,
			found: task !== null,
			storageType: storageType as Exclude<StorageType, 'auto'>
		};
	}

	/**
	 * Get multiple tasks from tm-core
	 */
	private async getMultipleTasks(
		taskIds: string[],
		_options: ShowCommandOptions
	): Promise<ShowMultipleTasksResult> {
		if (!this.tmCore) {
			throw new Error('TaskMasterCore not initialized');
		}

		const tasks: Task[] = [];
		const notFound: string[] = [];

		// Get each task individually
		for (const taskId of taskIds) {
			const task = await this.tmCore.getTask(taskId);
			if (task) {
				tasks.push(task);
			} else {
				notFound.push(taskId);
			}
		}

		// Get storage type
		const storageType = this.tmCore.getStorageType();

		return {
			tasks,
			notFound,
			storageType: storageType as Exclude<StorageType, 'auto'>
		};
	}
	/**
	 * Display results based on format
	 */
	private displayResults(
		result: ShowTaskResult | ShowMultipleTasksResult,
		options: ShowCommandOptions
	): void {
		const format = options.format || 'text';

		switch (format) {
			case 'json':
				this.displayJson(result);
				break;

			case 'text':
			default:
				if ('task' in result) {
					// Single task result
					this.displaySingleTask(result, options);
				} else {
					// Multiple tasks result
					this.displayMultipleTasks(result, options);
				}
				break;
		}
	}

	/**
	 * Display in JSON format
	 */
	private displayJson(result: ShowTaskResult | ShowMultipleTasksResult): void {
		console.log(JSON.stringify(result, null, 2));
	}

	/**
	 * Display a single task in text format
	 */
	private displaySingleTask(
		result: ShowTaskResult,
		options: ShowCommandOptions
	): void {
		if (!result.found || !result.task) {
			console.log(
				boxen(chalk.yellow(`Task not found!`), {
					padding: { top: 0, bottom: 0, left: 1, right: 1 },
					borderColor: 'yellow',
					borderStyle: 'round',
					margin: { top: 1 }
				})
			);
			return;
		}

		// Use the global task details display function
		displayTaskDetails(result.task, {
			statusFilter: options.status,
			showSuggestedActions: true
		});
	}

	/**
	 * Display multiple tasks in text format
	 */
	private displayMultipleTasks(
		result: ShowMultipleTasksResult,
		_options: ShowCommandOptions
	): void {
		// Header
		ui.displayBanner(`Tasks (${result.tasks.length} found)`);

		if (result.notFound.length > 0) {
			console.log(chalk.yellow(`\n⚠ Not found: ${result.notFound.join(', ')}`));
		}

		if (result.tasks.length === 0) {
			ui.displayWarning('No tasks found matching the criteria.');
			return;
		}

		// Task table
		console.log(chalk.blue.bold(`\n📋 Tasks:\n`));
		console.log(
			ui.createTaskTable(result.tasks, {
				showSubtasks: true,
				showDependencies: true
			})
		);

		console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(
		result: ShowTaskResult | ShowMultipleTasksResult
	): void {
		this.lastResult = result;
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): ShowTaskResult | ShowMultipleTasksResult | undefined {
		return this.lastResult;
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		if (this.tmCore) {
			await this.tmCore.close();
			this.tmCore = undefined;
		}
	}

	/**
	 * Static method to register this command on an existing program
	 * This is for gradual migration - allows commands.js to use this
	 */
	static registerOn(program: Command): Command {
		const showCommand = new ShowCommand();
		program.addCommand(showCommand);
		return showCommand;
	}

	/**
	 * Alternative registration that returns the command for chaining
	 * Can also configure the command name if needed
	 */
	static register(program: Command, name?: string): ShowCommand {
		const showCommand = new ShowCommand(name);
		program.addCommand(showCommand);
		return showCommand;
	}
}
@@ -1,512 +0,0 @@
/**
 * @fileoverview StartCommand using Commander's native class pattern
 * Extends Commander.Command for better integration with the framework
 * This is a thin presentation layer over @tm/core's TaskExecutionService
 */

import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import ora, { type Ora } from 'ora';
import { spawn } from 'child_process';
import {
	createTaskMasterCore,
	type TaskMasterCore,
	type StartTaskResult as CoreStartTaskResult
} from '@tm/core';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
import * as ui from '../utils/ui.js';

/**
 * CLI-specific options interface for the start command
 */
export interface StartCommandOptions {
	id?: string;
	format?: 'text' | 'json';
	project?: string;
	dryRun?: boolean;
	force?: boolean;
	noStatusUpdate?: boolean;
}

/**
 * CLI-specific result type from start command
 * Extends the core result with CLI-specific display information
 */
export interface StartCommandResult extends CoreStartTaskResult {
	storageType?: string;
}

/**
 * StartCommand extending Commander's Command class
 * This is a thin presentation layer over @tm/core's TaskExecutionService
 */
export class StartCommand extends Command {
	private tmCore?: TaskMasterCore;
	private lastResult?: StartCommandResult;

	constructor(name?: string) {
		super(name || 'start');

		// Configure the command
		this.description(
			'Start working on a task by launching claude-code with context'
		)
			.argument('[id]', 'Task ID to start working on')
			.option('-i, --id <id>', 'Task ID to start working on')
			.option('-f, --format <format>', 'Output format (text, json)', 'text')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.option(
				'--dry-run',
				'Show what would be executed without launching claude-code'
			)
			.option(
				'--force',
				'Force start even if another task is already in-progress'
			)
			.option(
				'--no-status-update',
				'Do not automatically update task status to in-progress'
			)
			.action(
				async (taskId: string | undefined, options: StartCommandOptions) => {
					await this.executeCommand(taskId, options);
				}
			);
	}

	/**
	 * Execute the start command
	 */
	private async executeCommand(
		taskId: string | undefined,
		options: StartCommandOptions
	): Promise<void> {
		let spinner: Ora | null = null;

		try {
			// Validate options
			if (!this.validateOptions(options)) {
				process.exit(1);
			}

			// Initialize tm-core with spinner
			spinner = ora('Initializing Task Master...').start();
			await this.initializeCore(options.project || process.cwd());
			spinner.succeed('Task Master initialized');

			// Get the task ID from argument or option, or find next available task
			const idArg = taskId || options.id || null;
			let targetTaskId = idArg;

			if (!targetTaskId) {
				spinner = ora('Finding next available task...').start();
				targetTaskId = await this.performGetNextTask();
				if (targetTaskId) {
					spinner.succeed(`Found next task: #${targetTaskId}`);
				} else {
					spinner.fail('No available tasks found');
				}
			}

			if (!targetTaskId) {
				ui.displayError('No task ID provided and no available tasks found');
				process.exit(1);
			}

			// Show pre-launch message (no spinner needed, it's just display)
			if (!options.dryRun) {
				await this.showPreLaunchMessage(targetTaskId);
			}

			// Use tm-core's startTask method with spinner
			spinner = ora('Preparing task execution...').start();
			const coreResult = await this.performStartTask(targetTaskId, options);

			if (coreResult.started) {
				spinner.succeed(
					options.dryRun
						? 'Dry run completed'
						: 'Task prepared - launching Claude...'
				);
			} else {
				spinner.fail('Task execution failed');
			}

			// Execute command if we have one and it's not a dry run
			if (!options.dryRun && coreResult.command) {
				// Stop any remaining spinners before launching Claude
				if (spinner && !spinner.isSpinning) {
					// Clear the line to make room for Claude
					console.log();
				}
				await this.executeChildProcess(coreResult.command);
			}

			// Convert core result to CLI result with storage type
			const result: StartCommandResult = {
				...coreResult,
				storageType: this.tmCore?.getStorageType()
			};

			// Store result for programmatic access
			this.setLastResult(result);

			// Display results (only for dry run or if execution failed)
			if (options.dryRun || !coreResult.started) {
				this.displayResults(result, options);
			}
		} catch (error: any) {
			if (spinner) {
				spinner.fail('Operation failed');
			}
			this.handleError(error);
			process.exit(1);
		}
	}
/**
|
||||
* Validate command options
|
||||
*/
|
||||
private validateOptions(options: StartCommandOptions): boolean {
|
||||
// Validate format
|
||||
if (options.format && !['text', 'json'].includes(options.format)) {
|
||||
console.error(chalk.red(`Invalid format: ${options.format}`));
|
||||
console.error(chalk.gray(`Valid formats: text, json`));
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize TaskMasterCore
|
||||
*/
|
||||
private async initializeCore(projectRoot: string): Promise<void> {
|
||||
if (!this.tmCore) {
|
||||
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the next available task using tm-core
|
||||
*/
|
||||
private async performGetNextTask(): Promise<string | null> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
return this.tmCore.getNextAvailableTask();
|
||||
}
|
||||
|
||||
/**
|
||||
* Show pre-launch message using tm-core data
|
||||
*/
|
||||
private async showPreLaunchMessage(targetTaskId: string): Promise<void> {
|
||||
if (!this.tmCore) return;
|
||||
|
||||
const { task, subtask, subtaskId } =
|
||||
await this.tmCore.getTaskWithSubtask(targetTaskId);
|
||||
if (task) {
|
||||
const workItemText = subtask
|
||||
? `Subtask #${task.id}.${subtaskId} - ${subtask.title}`
|
||||
: `Task #${task.id} - ${task.title}`;
|
||||
|
||||
console.log(
|
||||
chalk.green('🚀 Starting: ') + chalk.white.bold(workItemText)
|
||||
);
|
||||
console.log(chalk.gray('Launching Claude Code...'));
|
||||
console.log(); // Empty line
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Perform start task using tm-core business logic
|
||||
*/
|
||||
private async performStartTask(
|
||||
targetTaskId: string,
|
||||
options: StartCommandOptions
|
||||
): Promise<CoreStartTaskResult> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
|
||||
// Show spinner for status update if enabled
|
||||
let statusSpinner: Ora | null = null;
|
||||
if (!options.noStatusUpdate && !options.dryRun) {
|
||||
statusSpinner = ora('Updating task status to in-progress...').start();
|
||||
}
|
||||
|
||||
// Get execution command from tm-core (instead of executing directly)
|
||||
const result = await this.tmCore.startTask(targetTaskId, {
|
||||
dryRun: options.dryRun,
|
||||
force: options.force,
|
||||
updateStatus: !options.noStatusUpdate
|
||||
});
|
||||
|
||||
// Guard against a missing result before reading its properties
if (!result) {
throw new Error('Failed to start task - core result is undefined');
}

if (statusSpinner) {
if (result.started) {
statusSpinner.succeed('Task status updated');
} else {
statusSpinner.warn('Task status update skipped');
}
}
|
||||
|
||||
// Don't execute here - let the main executeCommand method handle it
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the child process directly in the main thread for better process control
|
||||
*/
|
||||
private async executeChildProcess(command: {
|
||||
executable: string;
|
||||
args: string[];
|
||||
cwd: string;
|
||||
}): Promise<void> {
|
||||
return new Promise((resolve, reject) => {
|
||||
// Don't show the full command with args as it can be very long
|
||||
console.log(chalk.green('🚀 Launching Claude Code...'));
|
||||
console.log(); // Add space before Claude takes over
|
||||
|
||||
const childProcess = spawn(command.executable, command.args, {
|
||||
cwd: command.cwd,
|
||||
stdio: 'inherit', // Inherit stdio from parent process
|
||||
shell: false
|
||||
});
|
||||
|
||||
childProcess.on('close', (code) => {
|
||||
if (code === 0) {
|
||||
resolve();
|
||||
} else {
|
||||
reject(new Error(`Process exited with code ${code}`));
|
||||
}
|
||||
});
|
||||
|
||||
childProcess.on('error', (error) => {
|
||||
reject(new Error(`Failed to spawn process: ${error.message}`));
|
||||
});
|
||||
|
||||
// Handle process termination signals gracefully
|
||||
const cleanup = () => {
|
||||
if (childProcess && !childProcess.killed) {
|
||||
childProcess.kill('SIGTERM');
|
||||
}
|
||||
};
|
||||
|
||||
process.on('SIGINT', cleanup);
|
||||
process.on('SIGTERM', cleanup);
|
||||
process.on('exit', cleanup);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Display results based on format
|
||||
*/
|
||||
private displayResults(
|
||||
result: StartCommandResult,
|
||||
options: StartCommandOptions
|
||||
): void {
|
||||
const format = options.format || 'text';
|
||||
|
||||
switch (format) {
|
||||
case 'json':
|
||||
this.displayJson(result);
|
||||
break;
|
||||
|
||||
case 'text':
|
||||
default:
|
||||
this.displayTextResult(result, options);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in JSON format
|
||||
*/
|
||||
private displayJson(result: StartCommandResult): void {
|
||||
console.log(JSON.stringify(result, null, 2));
|
||||
}
|
||||
|
||||
/**
|
||||
* Display result in text format
|
||||
*/
|
||||
private displayTextResult(
|
||||
result: StartCommandResult,
|
||||
options: StartCommandOptions
|
||||
): void {
|
||||
if (!result.found || !result.task) {
|
||||
console.log(
|
||||
boxen(chalk.yellow(`Task not found!`), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
})
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
const task = result.task;
|
||||
|
||||
if (options.dryRun) {
|
||||
// For dry run, show full details since Claude Code won't be launched
|
||||
let headerText = `Dry Run: Starting Task #${task.id} - ${task.title}`;
|
||||
|
||||
// If working on a specific subtask, highlight it in the header
|
||||
if (result.subtask && result.subtaskId) {
|
||||
headerText = `Dry Run: Starting Subtask #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
|
||||
}
|
||||
|
||||
displayTaskDetails(task, {
|
||||
customHeader: headerText,
|
||||
headerColor: 'yellow'
|
||||
});
|
||||
|
||||
// Show claude-code prompt
|
||||
if (result.executionOutput) {
|
||||
console.log(); // Empty line for spacing
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Claude-Code Prompt:') +
|
||||
'\n\n' +
|
||||
result.executionOutput,
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'cyan',
|
||||
width: process.stdout.columns * 0.95 || 100
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
console.log(); // Empty line for spacing
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.yellow(
|
||||
'🔍 Dry run - claude-code would be launched with the above prompt'
|
||||
),
|
||||
{
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round'
|
||||
}
|
||||
)
|
||||
);
|
||||
} else {
|
||||
// For actual execution, show minimal info since Claude Code will clear the terminal
|
||||
if (result.started) {
|
||||
// Determine what was worked on - task or subtask
|
||||
let workItemText = `Task: #${task.id} - ${task.title}`;
|
||||
let statusTarget = task.id;
|
||||
|
||||
if (result.subtask && result.subtaskId) {
|
||||
workItemText = `Subtask: #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
|
||||
statusTarget = `${task.id}.${result.subtaskId}`;
|
||||
}
|
||||
|
||||
// Post-execution message (shown after Claude Code exits)
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.green.bold('🎉 Task Session Complete!') +
|
||||
'\n\n' +
|
||||
chalk.white(workItemText) +
|
||||
'\n\n' +
|
||||
chalk.cyan('Next steps:') +
|
||||
'\n' +
|
||||
`• Run ${chalk.yellow('tm show ' + task.id)} to review task details\n` +
|
||||
`• Run ${chalk.yellow('tm set-status --id=' + statusTarget + ' --status=done')} when complete\n` +
|
||||
`• Run ${chalk.yellow('tm next')} to find the next available task\n` +
|
||||
`• Run ${chalk.yellow('tm start')} to begin the next task`,
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'green',
|
||||
width: process.stdout.columns * 0.95 || 100,
|
||||
margin: { top: 1 }
|
||||
}
|
||||
)
|
||||
);
|
||||
} else {
|
||||
// Error case
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.red(
|
||||
'❌ Failed to launch claude-code' +
|
||||
(result.error ? `\nError: ${result.error}` : '')
|
||||
),
|
||||
{
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'red',
|
||||
borderStyle: 'round'
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle general errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
|
||||
// Show stack trace in development mode or when DEBUG is set
|
||||
const isDevelopment = process.env.NODE_ENV !== 'production';
|
||||
if ((isDevelopment || process.env.DEBUG) && error.stack) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(result: StartCommandResult): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): StartCommandResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
if (this.tmCore) {
|
||||
await this.tmCore.close();
|
||||
this.tmCore = undefined;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Static method to register this command on an existing program
|
||||
*/
|
||||
static registerOn(program: Command): Command {
|
||||
const startCommand = new StartCommand();
|
||||
program.addCommand(startCommand);
|
||||
return startCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Alternative registration that returns the command for chaining
|
||||
*/
|
||||
static register(program: Command, name?: string): StartCommand {
|
||||
const startCommand = new StartCommand(name);
|
||||
program.addCommand(startCommand);
|
||||
return startCommand;
|
||||
}
|
||||
}
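// Illustrative wiring only (not part of the original file): how the two static
// registration helpers above are expected to be used from a CLI entry point.
// The `program` instance and the final parse call are assumptions.
//
//   import { Command } from 'commander';
//   const program = new Command();
//   StartCommand.register(program);            // registers under the default name 'start'
//   StartCommand.register(program, 'begin');   // or under a custom alias
//   await program.parseAsync(process.argv);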
|
||||
@@ -1,31 +0,0 @@
/**
* @fileoverview Main entry point for @tm/cli package
* Exports all public APIs for the CLI presentation layer
*/

// Commands
export { ListTasksCommand } from './commands/list.command.js';
export { ShowCommand } from './commands/show.command.js';
export { AuthCommand } from './commands/auth.command.js';
export { ContextCommand } from './commands/context.command.js';
export { StartCommand } from './commands/start.command.js';
export { SetStatusCommand } from './commands/set-status.command.js';

// UI utilities (for other commands to use)
export * as ui from './utils/ui.js';

// Auto-update utilities
export {
checkForUpdate,
performAutoUpdate,
displayUpgradeNotification,
compareVersions
} from './utils/auto-update.js';

// Re-export commonly used types from tm-core
export type {
Task,
TaskStatus,
TaskPriority,
TaskMasterCore
} from '@tm/core';
@@ -1,567 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Dashboard components for Task Master CLI
|
||||
* Displays project statistics and dependency information
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import type { Task, TaskPriority } from '@tm/core/types';
|
||||
|
||||
/**
|
||||
* Statistics for task collection
|
||||
*/
|
||||
export interface TaskStatistics {
|
||||
total: number;
|
||||
done: number;
|
||||
inProgress: number;
|
||||
pending: number;
|
||||
blocked: number;
|
||||
deferred: number;
|
||||
cancelled: number;
|
||||
review?: number;
|
||||
completionPercentage: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Statistics for dependencies
|
||||
*/
|
||||
export interface DependencyStatistics {
|
||||
tasksWithNoDeps: number;
|
||||
tasksReadyToWork: number;
|
||||
tasksBlockedByDeps: number;
|
||||
mostDependedOnTaskId?: number;
|
||||
mostDependedOnCount?: number;
|
||||
avgDependenciesPerTask: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Next task information
|
||||
*/
|
||||
export interface NextTaskInfo {
|
||||
id: string | number;
|
||||
title: string;
|
||||
priority?: TaskPriority;
|
||||
dependencies?: (string | number)[];
|
||||
complexity?: number | string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Status breakdown for progress bars
|
||||
*/
|
||||
export interface StatusBreakdown {
|
||||
'in-progress'?: number;
|
||||
pending?: number;
|
||||
blocked?: number;
|
||||
deferred?: number;
|
||||
cancelled?: number;
|
||||
review?: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a progress bar with color-coded status segments
|
||||
*/
|
||||
function createProgressBar(
|
||||
completionPercentage: number,
|
||||
width: number = 30,
|
||||
statusBreakdown?: StatusBreakdown
|
||||
): string {
|
||||
// If no breakdown provided, use simple green bar
|
||||
if (!statusBreakdown) {
|
||||
const filled = Math.round((completionPercentage / 100) * width);
|
||||
const empty = width - filled;
|
||||
return chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
|
||||
}
|
||||
|
||||
// Build the bar with different colored sections
|
||||
// Order matches the status display: Done, Cancelled, Deferred, In Progress, Review, Pending, Blocked
|
||||
let bar = '';
|
||||
let charsUsed = 0;
|
||||
|
||||
// 1. Green filled blocks for completed tasks (done)
|
||||
const completedChars = Math.round((completionPercentage / 100) * width);
|
||||
if (completedChars > 0) {
|
||||
bar += chalk.green('█').repeat(completedChars);
|
||||
charsUsed += completedChars;
|
||||
}
|
||||
|
||||
// 2. Gray filled blocks for cancelled (won't be done)
|
||||
if (statusBreakdown.cancelled && charsUsed < width) {
|
||||
const cancelledChars = Math.round(
|
||||
(statusBreakdown.cancelled / 100) * width
|
||||
);
|
||||
const actualChars = Math.min(cancelledChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.gray('█').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 3. Gray filled blocks for deferred (won't be done now)
|
||||
if (statusBreakdown.deferred && charsUsed < width) {
|
||||
const deferredChars = Math.round((statusBreakdown.deferred / 100) * width);
|
||||
const actualChars = Math.min(deferredChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.gray('█').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 4. Blue filled blocks for in-progress (actively working)
|
||||
if (statusBreakdown['in-progress'] && charsUsed < width) {
|
||||
const inProgressChars = Math.round(
|
||||
(statusBreakdown['in-progress'] / 100) * width
|
||||
);
|
||||
const actualChars = Math.min(inProgressChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.blue('█').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 5. Magenta empty blocks for review (almost done)
|
||||
if (statusBreakdown.review && charsUsed < width) {
|
||||
const reviewChars = Math.round((statusBreakdown.review / 100) * width);
|
||||
const actualChars = Math.min(reviewChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.magenta('░').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 6. Yellow empty blocks for pending (ready to start)
|
||||
if (statusBreakdown.pending && charsUsed < width) {
|
||||
const pendingChars = Math.round((statusBreakdown.pending / 100) * width);
|
||||
const actualChars = Math.min(pendingChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.yellow('░').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// 7. Red empty blocks for blocked (can't start yet)
|
||||
if (statusBreakdown.blocked && charsUsed < width) {
|
||||
const blockedChars = Math.round((statusBreakdown.blocked / 100) * width);
|
||||
const actualChars = Math.min(blockedChars, width - charsUsed);
|
||||
if (actualChars > 0) {
|
||||
bar += chalk.red('░').repeat(actualChars);
|
||||
charsUsed += actualChars;
|
||||
}
|
||||
}
|
||||
|
||||
// Fill any remaining space with yellow empty blocks (treated as pending)
|
||||
if (charsUsed < width) {
|
||||
bar += chalk.yellow('░').repeat(width - charsUsed);
|
||||
}
|
||||
|
||||
return bar;
|
||||
}
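// Illustrative example (assumed numbers, not from the original file): for a
// 30-character bar where 40% of tasks are done, 20% in progress and 40% pending,
// createProgressBar(40, 30, { 'in-progress': 20, pending: 40 }) renders
// 12 green blocks, 6 blue blocks and 12 yellow "pending" blocks.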
|
||||
|
||||
/**
|
||||
* Calculate task statistics from a list of tasks
|
||||
*/
|
||||
export function calculateTaskStatistics(tasks: Task[]): TaskStatistics {
|
||||
const stats: TaskStatistics = {
|
||||
total: tasks.length,
|
||||
done: 0,
|
||||
inProgress: 0,
|
||||
pending: 0,
|
||||
blocked: 0,
|
||||
deferred: 0,
|
||||
cancelled: 0,
|
||||
review: 0,
|
||||
completionPercentage: 0
|
||||
};
|
||||
|
||||
tasks.forEach((task) => {
|
||||
switch (task.status) {
|
||||
case 'done':
|
||||
stats.done++;
|
||||
break;
|
||||
case 'in-progress':
|
||||
stats.inProgress++;
|
||||
break;
|
||||
case 'pending':
|
||||
stats.pending++;
|
||||
break;
|
||||
case 'blocked':
|
||||
stats.blocked++;
|
||||
break;
|
||||
case 'deferred':
|
||||
stats.deferred++;
|
||||
break;
|
||||
case 'cancelled':
|
||||
stats.cancelled++;
|
||||
break;
|
||||
case 'review':
|
||||
stats.review = (stats.review || 0) + 1;
|
||||
break;
|
||||
}
|
||||
});
|
||||
|
||||
stats.completionPercentage =
|
||||
stats.total > 0 ? Math.round((stats.done / stats.total) * 100) : 0;
|
||||
|
||||
return stats;
|
||||
}
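// Illustrative example (hypothetical input): three tasks with statuses
// 'done', 'done' and 'pending' produce { total: 3, done: 2, pending: 1, ... }
// with completionPercentage rounded to 67.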
|
||||
|
||||
/**
|
||||
* Calculate subtask statistics from tasks
|
||||
*/
|
||||
export function calculateSubtaskStatistics(tasks: Task[]): TaskStatistics {
|
||||
const stats: TaskStatistics = {
|
||||
total: 0,
|
||||
done: 0,
|
||||
inProgress: 0,
|
||||
pending: 0,
|
||||
blocked: 0,
|
||||
deferred: 0,
|
||||
cancelled: 0,
|
||||
review: 0,
|
||||
completionPercentage: 0
|
||||
};
|
||||
|
||||
tasks.forEach((task) => {
|
||||
if (task.subtasks && task.subtasks.length > 0) {
|
||||
task.subtasks.forEach((subtask) => {
|
||||
stats.total++;
|
||||
switch (subtask.status) {
|
||||
case 'done':
|
||||
stats.done++;
|
||||
break;
|
||||
case 'in-progress':
|
||||
stats.inProgress++;
|
||||
break;
|
||||
case 'pending':
|
||||
stats.pending++;
|
||||
break;
|
||||
case 'blocked':
|
||||
stats.blocked++;
|
||||
break;
|
||||
case 'deferred':
|
||||
stats.deferred++;
|
||||
break;
|
||||
case 'cancelled':
|
||||
stats.cancelled++;
|
||||
break;
|
||||
case 'review':
|
||||
stats.review = (stats.review || 0) + 1;
|
||||
break;
|
||||
}
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
stats.completionPercentage =
|
||||
stats.total > 0 ? Math.round((stats.done / stats.total) * 100) : 0;
|
||||
|
||||
return stats;
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate dependency statistics
|
||||
*/
|
||||
export function calculateDependencyStatistics(
|
||||
tasks: Task[]
|
||||
): DependencyStatistics {
|
||||
const completedTaskIds = new Set(
|
||||
tasks.filter((t) => t.status === 'done').map((t) => t.id)
|
||||
);
|
||||
|
||||
const tasksWithNoDeps = tasks.filter(
|
||||
(t) =>
|
||||
t.status !== 'done' && (!t.dependencies || t.dependencies.length === 0)
|
||||
).length;
|
||||
|
||||
const tasksWithAllDepsSatisfied = tasks.filter(
|
||||
(t) =>
|
||||
t.status !== 'done' &&
|
||||
t.dependencies &&
|
||||
t.dependencies.length > 0 &&
|
||||
t.dependencies.every((depId) => completedTaskIds.has(depId))
|
||||
).length;
|
||||
|
||||
const tasksBlockedByDeps = tasks.filter(
|
||||
(t) =>
|
||||
t.status !== 'done' &&
|
||||
t.dependencies &&
|
||||
t.dependencies.length > 0 &&
|
||||
!t.dependencies.every((depId) => completedTaskIds.has(depId))
|
||||
).length;
|
||||
|
||||
// Calculate most depended-on task
|
||||
const dependencyCount: Record<string, number> = {};
|
||||
tasks.forEach((task) => {
|
||||
if (task.dependencies && task.dependencies.length > 0) {
|
||||
task.dependencies.forEach((depId) => {
|
||||
const key = String(depId);
|
||||
dependencyCount[key] = (dependencyCount[key] || 0) + 1;
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
let mostDependedOnTaskId: number | undefined;
|
||||
let mostDependedOnCount = 0;
|
||||
|
||||
for (const [taskId, count] of Object.entries(dependencyCount)) {
|
||||
if (count > mostDependedOnCount) {
|
||||
mostDependedOnCount = count;
|
||||
mostDependedOnTaskId = parseInt(taskId);
|
||||
}
|
||||
}
|
||||
|
||||
// Calculate average dependencies
|
||||
const totalDependencies = tasks.reduce(
|
||||
(sum, task) => sum + (task.dependencies ? task.dependencies.length : 0),
|
||||
0
|
||||
);
|
||||
const avgDependenciesPerTask =
|
||||
tasks.length > 0 ? totalDependencies / tasks.length : 0;
|
||||
|
||||
return {
|
||||
tasksWithNoDeps,
|
||||
tasksReadyToWork: tasksWithNoDeps + tasksWithAllDepsSatisfied,
|
||||
tasksBlockedByDeps,
|
||||
mostDependedOnTaskId,
|
||||
mostDependedOnCount,
|
||||
avgDependenciesPerTask
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get priority counts
|
||||
*/
|
||||
export function getPriorityBreakdown(
|
||||
tasks: Task[]
|
||||
): Record<TaskPriority, number> {
|
||||
const breakdown: Record<TaskPriority, number> = {
|
||||
critical: 0,
|
||||
high: 0,
|
||||
medium: 0,
|
||||
low: 0
|
||||
};
|
||||
|
||||
tasks.forEach((task) => {
|
||||
const priority = task.priority || 'medium';
|
||||
breakdown[priority]++;
|
||||
});
|
||||
|
||||
return breakdown;
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate status breakdown as percentages
|
||||
*/
|
||||
function calculateStatusBreakdown(stats: TaskStatistics): StatusBreakdown {
|
||||
if (stats.total === 0) return {};
|
||||
|
||||
return {
|
||||
'in-progress': (stats.inProgress / stats.total) * 100,
|
||||
pending: (stats.pending / stats.total) * 100,
|
||||
blocked: (stats.blocked / stats.total) * 100,
|
||||
deferred: (stats.deferred / stats.total) * 100,
|
||||
cancelled: (stats.cancelled / stats.total) * 100,
|
||||
review: ((stats.review || 0) / stats.total) * 100
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Format status counts in the correct order with colors
|
||||
* @param stats - The statistics object containing counts
|
||||
* @param isSubtask - Whether this is for subtasks (affects "Done" vs "Completed" label)
|
||||
*/
|
||||
function formatStatusLine(
|
||||
stats: TaskStatistics,
|
||||
isSubtask: boolean = false
|
||||
): string {
|
||||
const parts: string[] = [];
|
||||
|
||||
// Order: Done, Cancelled, Deferred, In Progress, Review, Pending, Blocked
|
||||
if (isSubtask) {
|
||||
parts.push(`Completed: ${chalk.green(`${stats.done}/${stats.total}`)}`);
|
||||
} else {
|
||||
parts.push(`Done: ${chalk.green(stats.done)}`);
|
||||
}
|
||||
|
||||
parts.push(`Cancelled: ${chalk.gray(stats.cancelled)}`);
|
||||
parts.push(`Deferred: ${chalk.gray(stats.deferred)}`);
|
||||
|
||||
// Add line break for second row
|
||||
const firstLine = parts.join(' ');
|
||||
parts.length = 0;
|
||||
|
||||
parts.push(`In Progress: ${chalk.blue(stats.inProgress)}`);
|
||||
parts.push(`Review: ${chalk.magenta(stats.review || 0)}`);
|
||||
parts.push(`Pending: ${chalk.yellow(stats.pending)}`);
|
||||
parts.push(`Blocked: ${chalk.red(stats.blocked)}`);
|
||||
|
||||
const secondLine = parts.join(' ');
|
||||
|
||||
return firstLine + '\n' + secondLine;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the project dashboard box
|
||||
*/
|
||||
export function displayProjectDashboard(
|
||||
taskStats: TaskStatistics,
|
||||
subtaskStats: TaskStatistics,
|
||||
priorityBreakdown: Record<TaskPriority, number>
|
||||
): string {
|
||||
// Calculate status breakdowns using the helper function
|
||||
const taskStatusBreakdown = calculateStatusBreakdown(taskStats);
|
||||
const subtaskStatusBreakdown = calculateStatusBreakdown(subtaskStats);
|
||||
|
||||
// Create progress bars with the breakdowns
|
||||
const taskProgressBar = createProgressBar(
|
||||
taskStats.completionPercentage,
|
||||
30,
|
||||
taskStatusBreakdown
|
||||
);
|
||||
const subtaskProgressBar = createProgressBar(
|
||||
subtaskStats.completionPercentage,
|
||||
30,
|
||||
subtaskStatusBreakdown
|
||||
);
|
||||
|
||||
const taskPercentage = `${taskStats.completionPercentage}% ${taskStats.done}/${taskStats.total}`;
|
||||
const subtaskPercentage = `${subtaskStats.completionPercentage}% ${subtaskStats.done}/${subtaskStats.total}`;
|
||||
|
||||
const content =
|
||||
chalk.white.bold('Project Dashboard') +
|
||||
'\n' +
|
||||
`Tasks Progress: ${taskProgressBar} ${chalk.yellow(taskPercentage)}\n` +
|
||||
formatStatusLine(taskStats, false) +
|
||||
'\n\n' +
|
||||
`Subtasks Progress: ${subtaskProgressBar} ${chalk.cyan(subtaskPercentage)}\n` +
|
||||
formatStatusLine(subtaskStats, true) +
|
||||
'\n\n' +
|
||||
chalk.cyan.bold('Priority Breakdown:') +
|
||||
'\n' +
|
||||
`${chalk.red('•')} ${chalk.white('High priority:')} ${priorityBreakdown.high}\n` +
|
||||
`${chalk.yellow('•')} ${chalk.white('Medium priority:')} ${priorityBreakdown.medium}\n` +
|
||||
`${chalk.green('•')} ${chalk.white('Low priority:')} ${priorityBreakdown.low}`;
|
||||
|
||||
return content;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the dependency dashboard box
|
||||
*/
|
||||
export function displayDependencyDashboard(
|
||||
depStats: DependencyStatistics,
|
||||
nextTask?: NextTaskInfo
|
||||
): string {
|
||||
const content =
|
||||
chalk.white.bold('Dependency Status & Next Task') +
|
||||
'\n' +
|
||||
chalk.cyan.bold('Dependency Metrics:') +
|
||||
'\n' +
|
||||
`${chalk.green('•')} ${chalk.white('Tasks with no dependencies:')} ${depStats.tasksWithNoDeps}\n` +
|
||||
`${chalk.green('•')} ${chalk.white('Tasks ready to work on:')} ${depStats.tasksReadyToWork}\n` +
|
||||
`${chalk.yellow('•')} ${chalk.white('Tasks blocked by dependencies:')} ${depStats.tasksBlockedByDeps}\n` +
|
||||
`${chalk.magenta('•')} ${chalk.white('Most depended-on task:')} ${
|
||||
depStats.mostDependedOnTaskId
|
||||
? chalk.cyan(
|
||||
`#${depStats.mostDependedOnTaskId} (${depStats.mostDependedOnCount} dependents)`
|
||||
)
|
||||
: chalk.gray('None')
|
||||
}\n` +
|
||||
`${chalk.blue('•')} ${chalk.white('Avg dependencies per task:')} ${depStats.avgDependenciesPerTask.toFixed(1)}\n\n` +
|
||||
chalk.cyan.bold('Next Task to Work On:') +
|
||||
'\n' +
|
||||
`ID: ${nextTask ? chalk.cyan(String(nextTask.id)) : chalk.gray('N/A')} - ${
|
||||
nextTask
|
||||
? chalk.white.bold(nextTask.title)
|
||||
: chalk.yellow('No task available')
|
||||
}\n` +
|
||||
`Priority: ${nextTask?.priority || chalk.gray('N/A')} Dependencies: ${
|
||||
nextTask?.dependencies?.length
|
||||
? chalk.cyan(nextTask.dependencies.join(', '))
|
||||
: chalk.gray('None')
|
||||
}\n` +
|
||||
`Complexity: ${nextTask?.complexity || chalk.gray('N/A')}`;
|
||||
|
||||
return content;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display dashboard boxes side by side or stacked
|
||||
*/
|
||||
export function displayDashboards(
|
||||
taskStats: TaskStatistics,
|
||||
subtaskStats: TaskStatistics,
|
||||
priorityBreakdown: Record<TaskPriority, number>,
|
||||
depStats: DependencyStatistics,
|
||||
nextTask?: NextTaskInfo
|
||||
): void {
|
||||
const projectDashboardContent = displayProjectDashboard(
|
||||
taskStats,
|
||||
subtaskStats,
|
||||
priorityBreakdown
|
||||
);
|
||||
const dependencyDashboardContent = displayDependencyDashboard(
|
||||
depStats,
|
||||
nextTask
|
||||
);
|
||||
|
||||
// Get terminal width
|
||||
const terminalWidth = process.stdout.columns || 80;
|
||||
const minDashboardWidth = 50;
|
||||
const minDependencyWidth = 50;
|
||||
const totalMinWidth = minDashboardWidth + minDependencyWidth + 4;
|
||||
|
||||
// If terminal is wide enough, show side by side
|
||||
if (terminalWidth >= totalMinWidth) {
|
||||
const halfWidth = Math.floor(terminalWidth / 2);
|
||||
const boxContentWidth = halfWidth - 4;
|
||||
|
||||
const dashboardBox = boxen(projectDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round',
|
||||
width: boxContentWidth,
|
||||
dimBorder: false
|
||||
});
|
||||
|
||||
const dependencyBox = boxen(dependencyDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'magenta',
|
||||
borderStyle: 'round',
|
||||
width: boxContentWidth,
|
||||
dimBorder: false
|
||||
});
|
||||
|
||||
// Create side-by-side layout
|
||||
const dashboardLines = dashboardBox.split('\n');
|
||||
const dependencyLines = dependencyBox.split('\n');
|
||||
const maxHeight = Math.max(dashboardLines.length, dependencyLines.length);
|
||||
|
||||
const combinedLines = [];
|
||||
for (let i = 0; i < maxHeight; i++) {
|
||||
const dashLine = i < dashboardLines.length ? dashboardLines[i] : '';
|
||||
const depLine = i < dependencyLines.length ? dependencyLines[i] : '';
|
||||
const paddedDashLine = dashLine.padEnd(halfWidth, ' ');
|
||||
combinedLines.push(paddedDashLine + depLine);
|
||||
}
|
||||
|
||||
console.log(combinedLines.join('\n'));
|
||||
} else {
|
||||
// Show stacked vertically
|
||||
const dashboardBox = boxen(projectDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 0, bottom: 1 }
|
||||
});
|
||||
|
||||
const dependencyBox = boxen(dependencyDashboardContent, {
|
||||
padding: 1,
|
||||
borderColor: 'magenta',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 0, bottom: 1 }
|
||||
});
|
||||
|
||||
console.log(dashboardBox);
|
||||
console.log(dependencyBox);
|
||||
}
|
||||
}
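// Illustrative behaviour (derived from the width constants above): with the
// 50 + 50 + 4 minimum, a terminal of 104 columns or more gets the side-by-side
// layout (each box floor(width / 2) - 4 columns wide); anything narrower falls
// back to the stacked layout.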
|
||||
@@ -1,45 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Task Master header component
|
||||
* Displays the banner, version, project info, and file path
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
|
||||
/**
|
||||
* Header configuration options
|
||||
*/
|
||||
export interface HeaderOptions {
|
||||
title?: string;
|
||||
tag?: string;
|
||||
filePath?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the Task Master header with project info
|
||||
*/
|
||||
export function displayHeader(options: HeaderOptions = {}): void {
|
||||
const { filePath, tag } = options;
|
||||
|
||||
// Display tag and file path info
|
||||
if (tag) {
// Both custom tags and the default 'master' tag render the same way
const tagInfo = `🏷 tag: ${chalk.cyan(tag)}`;

console.log(tagInfo);
|
||||
|
||||
if (filePath) {
|
||||
// Convert to absolute path if it's relative
|
||||
const absolutePath = filePath.startsWith('/')
|
||||
? filePath
|
||||
: `${process.cwd()}/${filePath}`;
|
||||
console.log(`Listing tasks from: ${chalk.dim(absolutePath)}`);
|
||||
}
|
||||
|
||||
console.log(); // Empty line for spacing
|
||||
}
|
||||
}
|
||||
@@ -1,9 +0,0 @@
/**
* @fileoverview UI components exports
*/

export * from './header.component.js';
export * from './dashboard.component.js';
export * from './next-task.component.js';
export * from './suggested-steps.component.js';
export * from './task-detail.component.js';
@@ -1,134 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Next task recommendation component
|
||||
* Displays detailed information about the recommended next task
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import type { Task } from '@tm/core/types';
|
||||
|
||||
/**
|
||||
* Next task display options
|
||||
*/
|
||||
export interface NextTaskDisplayOptions {
|
||||
id: string | number;
|
||||
title: string;
|
||||
priority?: string;
|
||||
status?: string;
|
||||
dependencies?: (string | number)[];
|
||||
description?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the recommended next task section
|
||||
*/
|
||||
export function displayRecommendedNextTask(
|
||||
task: NextTaskDisplayOptions | undefined
|
||||
): void {
|
||||
if (!task) {
|
||||
// If no task available, show a message
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.yellow(
|
||||
'No tasks available to work on. All tasks are either completed, blocked by dependencies, or in progress.'
|
||||
),
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'yellow',
|
||||
title: '⚠ NO TASKS AVAILABLE ⚠',
|
||||
titleAlignment: 'center'
|
||||
}
|
||||
)
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
// Build the content for the next task box
|
||||
const content = [];
|
||||
|
||||
// Task header with ID and title
|
||||
content.push(
|
||||
`🔥 ${chalk.hex('#FF8800').bold('Next Task to Work On:')} ${chalk.yellow(`#${task.id}`)}${chalk.hex('#FF8800').bold(` - ${task.title}`)}`
|
||||
);
|
||||
content.push('');
|
||||
|
||||
// Priority and Status line
|
||||
const statusLine = [];
|
||||
if (task.priority) {
|
||||
const priorityColor =
|
||||
task.priority === 'high'
|
||||
? chalk.red
|
||||
: task.priority === 'medium'
|
||||
? chalk.yellow
|
||||
: chalk.gray;
|
||||
statusLine.push(`Priority: ${priorityColor.bold(task.priority)}`);
|
||||
}
|
||||
if (task.status) {
|
||||
const statusDisplay =
|
||||
task.status === 'pending'
|
||||
? chalk.yellow('○ pending')
|
||||
: task.status === 'in-progress'
|
||||
? chalk.blue('▶ in-progress')
|
||||
: chalk.gray(task.status);
|
||||
statusLine.push(`Status: ${statusDisplay}`);
|
||||
}
|
||||
content.push(statusLine.join(' '));
|
||||
|
||||
// Dependencies
|
||||
const depsDisplay =
|
||||
!task.dependencies || task.dependencies.length === 0
|
||||
? chalk.gray('None')
|
||||
: chalk.cyan(task.dependencies.join(', '));
|
||||
content.push(`Dependencies: ${depsDisplay}`);
|
||||
|
||||
// Description if available
|
||||
if (task.description) {
|
||||
content.push('');
|
||||
content.push(`Description: ${chalk.white(task.description)}`);
|
||||
}
|
||||
|
||||
// Action commands
|
||||
content.push('');
|
||||
content.push(
|
||||
`${chalk.cyan('Start working:')} ${chalk.yellow(`task-master set-status --id=${task.id} --status=in-progress`)}`
|
||||
);
|
||||
content.push(
|
||||
`${chalk.cyan('View details:')} ${chalk.yellow(`task-master show ${task.id}`)}`
|
||||
);
|
||||
|
||||
// Display in a styled box with orange border
|
||||
console.log(
|
||||
boxen(content.join('\n'), {
|
||||
padding: 1,
|
||||
margin: { top: 1, bottom: 1 },
|
||||
borderStyle: 'round',
|
||||
borderColor: '#FFA500', // Orange color
|
||||
title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
|
||||
titleAlignment: 'center',
|
||||
width: process.stdout.columns * 0.97 || 100, // fall back when columns is unavailable (non-TTY)
|
||||
fullscreen: false
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get task description from the full task object
|
||||
*/
|
||||
export function getTaskDescription(task: Task): string | undefined {
|
||||
// Try to get description from the task
|
||||
// This could be from task.description or the first line of task.details
|
||||
if ('description' in task && task.description) {
|
||||
return task.description as string;
|
||||
}
|
||||
|
||||
if ('details' in task && task.details) {
|
||||
// Take first sentence or line from details
|
||||
const details = task.details as string;
|
||||
const firstLine = details.split('\n')[0];
|
||||
const firstSentence = firstLine.split('.')[0];
|
||||
return firstSentence;
|
||||
}
|
||||
|
||||
return undefined;
|
||||
}
|
||||
@@ -1,31 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Suggested next steps component
|
||||
* Displays helpful command suggestions at the end of the list
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
|
||||
/**
|
||||
* Display suggested next steps section
|
||||
*/
|
||||
export function displaySuggestedNextSteps(): void {
|
||||
const steps = [
|
||||
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master next')} to see what to work on next`,
|
||||
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks`,
|
||||
`${chalk.cyan('3.')} Run ${chalk.yellow('task-master set-status --id=<id> --status=done')} to mark a task as complete`
|
||||
];
|
||||
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Suggested Next Steps:') + '\n\n' + steps.join('\n'),
|
||||
{
|
||||
padding: 1,
|
||||
margin: { top: 0, bottom: 1 },
|
||||
borderStyle: 'round',
|
||||
borderColor: 'gray',
|
||||
width: process.stdout.columns * 0.97 || 100 // fall back when columns is unavailable (non-TTY)
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
@@ -1,335 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Task detail component for show command
|
||||
* Displays detailed task information in a structured format
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import Table from 'cli-table3';
|
||||
import { marked, MarkedExtension } from 'marked';
|
||||
import { markedTerminal } from 'marked-terminal';
|
||||
import type { Task } from '@tm/core/types';
|
||||
import { getStatusWithColor, getPriorityWithColor } from '../../utils/ui.js';
|
||||
|
||||
// Configure marked to use terminal renderer with subtle colors
|
||||
marked.use(
|
||||
markedTerminal({
|
||||
// More subtle colors that match the overall design
|
||||
code: (code: string) => {
|
||||
// Custom code block handler to preserve formatting
|
||||
return code
|
||||
.split('\n')
|
||||
.map((line) => ' ' + chalk.cyan(line))
|
||||
.join('\n');
|
||||
},
|
||||
blockquote: chalk.gray.italic,
|
||||
html: chalk.gray,
|
||||
heading: chalk.white.bold, // White bold for headings
|
||||
hr: chalk.gray,
|
||||
listitem: chalk.white, // White for list items
|
||||
paragraph: chalk.white, // White for paragraphs (default text color)
|
||||
strong: chalk.white.bold, // White bold for strong text
|
||||
em: chalk.white.italic, // White italic for emphasis
|
||||
codespan: chalk.cyan, // Cyan for inline code (no background)
|
||||
del: chalk.dim.strikethrough,
|
||||
link: chalk.blue,
|
||||
href: chalk.blue.underline,
|
||||
// Add more explicit code block handling
|
||||
showSectionPrefix: false,
|
||||
unescape: true,
|
||||
emoji: false,
|
||||
// Try to preserve whitespace in code blocks
|
||||
tab: 4,
|
||||
width: 120
|
||||
}) as MarkedExtension
|
||||
);
|
||||
|
||||
// Also set marked options to preserve whitespace
|
||||
marked.setOptions({
|
||||
breaks: true,
|
||||
gfm: true
|
||||
});
|
||||
|
||||
/**
|
||||
* Display the task header with tag
|
||||
*/
|
||||
export function displayTaskHeader(
|
||||
taskId: string | number,
|
||||
title: string
|
||||
): void {
|
||||
// Display task header box
|
||||
console.log(
|
||||
boxen(chalk.white.bold(`Task: #${taskId} - ${title}`), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display task properties in a table format
|
||||
*/
|
||||
export function displayTaskProperties(task: Task): void {
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
// Create table for task properties - simple 2-column layout
|
||||
const table = new Table({
|
||||
head: [],
|
||||
style: {
|
||||
head: [],
|
||||
border: ['grey']
|
||||
},
|
||||
colWidths: [
|
||||
Math.floor(terminalWidth * 0.2),
|
||||
Math.floor(terminalWidth * 0.8)
|
||||
],
|
||||
wordWrap: true
|
||||
});
|
||||
|
||||
const deps =
|
||||
task.dependencies && task.dependencies.length > 0
|
||||
? task.dependencies.map((d) => String(d)).join(', ')
|
||||
: 'None';
|
||||
|
||||
// Build the left column (labels) and right column (values)
|
||||
const labels = [
|
||||
chalk.cyan('ID:'),
|
||||
chalk.cyan('Title:'),
|
||||
chalk.cyan('Status:'),
|
||||
chalk.cyan('Priority:'),
|
||||
chalk.cyan('Dependencies:'),
|
||||
chalk.cyan('Complexity:'),
|
||||
chalk.cyan('Description:')
|
||||
].join('\n');
|
||||
|
||||
const values = [
|
||||
String(task.id),
|
||||
task.title,
|
||||
getStatusWithColor(task.status),
|
||||
getPriorityWithColor(task.priority),
|
||||
deps,
|
||||
'N/A',
|
||||
task.description || ''
|
||||
].join('\n');
|
||||
|
||||
table.push([labels, values]);
|
||||
|
||||
console.log(table.toString());
|
||||
}
|
||||
|
||||
/**
|
||||
* Display implementation details in a box
|
||||
*/
|
||||
export function displayImplementationDetails(details: string): void {
|
||||
// Handle all escaped characters properly
|
||||
const cleanDetails = details
|
||||
.replace(/\\n/g, '\n') // Convert \n to actual newlines
|
||||
.replace(/\\t/g, '\t') // Convert \t to actual tabs
|
||||
.replace(/\\"/g, '"') // Convert \" to actual quotes
|
||||
.replace(/\\\\/g, '\\'); // Convert \\ to single backslash
|
||||
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
|
||||
// Parse markdown to terminal-friendly format
|
||||
const markdownResult = marked(cleanDetails);
|
||||
const formattedDetails =
|
||||
typeof markdownResult === 'string' ? markdownResult.trim() : cleanDetails; // Fallback to original if Promise
|
||||
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Implementation Details:') + '\n\n' + formattedDetails,
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'cyan', // Changed to cyan to match the original
|
||||
width: terminalWidth // Fixed width to match the original
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display test strategy in a box
|
||||
*/
|
||||
export function displayTestStrategy(testStrategy: string): void {
|
||||
// Handle all escaped characters properly (same as implementation details)
|
||||
const cleanStrategy = testStrategy
|
||||
.replace(/\\n/g, '\n') // Convert \n to actual newlines
|
||||
.replace(/\\t/g, '\t') // Convert \t to actual tabs
|
||||
.replace(/\\"/g, '"') // Convert \" to actual quotes
|
||||
.replace(/\\\\/g, '\\'); // Convert \\ to single backslash
|
||||
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
|
||||
// Parse markdown to terminal-friendly format (same as implementation details)
|
||||
const markdownResult = marked(cleanStrategy);
|
||||
const formattedStrategy =
|
||||
typeof markdownResult === 'string' ? markdownResult.trim() : cleanStrategy; // Fallback to original if Promise
|
||||
|
||||
console.log(
|
||||
boxen(chalk.white.bold('Test Strategy:') + '\n\n' + formattedStrategy, {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'cyan', // Changed to cyan to match implementation details
|
||||
width: terminalWidth
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display subtasks in a table format
|
||||
*/
|
||||
export function displaySubtasks(
|
||||
subtasks: Array<{
|
||||
id: string | number;
|
||||
title: string;
|
||||
status: any;
|
||||
description?: string;
|
||||
dependencies?: string[];
|
||||
}>,
|
||||
parentId: string | number
|
||||
): void {
|
||||
const terminalWidth = process.stdout.columns * 0.95 || 100;
|
||||
// Display subtasks header
|
||||
console.log(
|
||||
boxen(chalk.magenta.bold('Subtasks'), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: 'magenta',
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1, bottom: 0 }
|
||||
})
|
||||
);
|
||||
|
||||
// Create subtasks table
|
||||
const table = new Table({
|
||||
head: [
|
||||
chalk.magenta.bold('ID'),
|
||||
chalk.magenta.bold('Status'),
|
||||
chalk.magenta.bold('Title'),
|
||||
chalk.magenta.bold('Deps')
|
||||
],
|
||||
style: {
|
||||
head: [],
|
||||
border: ['grey']
|
||||
},
|
||||
colWidths: [
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.15),
|
||||
Math.floor(terminalWidth * 0.6),
|
||||
Math.floor(terminalWidth * 0.15)
|
||||
],
|
||||
wordWrap: true
|
||||
});
|
||||
|
||||
subtasks.forEach((subtask) => {
|
||||
const subtaskId = `${parentId}.${subtask.id}`;
|
||||
|
||||
// Format dependencies
|
||||
const deps =
|
||||
subtask.dependencies && subtask.dependencies.length > 0
|
||||
? subtask.dependencies.join(', ')
|
||||
: 'None';
|
||||
|
||||
table.push([
|
||||
subtaskId,
|
||||
getStatusWithColor(subtask.status),
|
||||
subtask.title,
|
||||
deps
|
||||
]);
|
||||
});
|
||||
|
||||
console.log(table.toString());
|
||||
}
|
||||
|
||||
/**
|
||||
* Display suggested actions
|
||||
*/
|
||||
export function displaySuggestedActions(taskId: string | number): void {
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.white.bold('Suggested Actions:') +
|
||||
'\n\n' +
|
||||
`${chalk.cyan('1.')} Run ${chalk.yellow(`task-master set-status --id=${taskId} --status=in-progress`)} to start working\n` +
|
||||
`${chalk.cyan('2.')} Run ${chalk.yellow(`task-master expand --id=${taskId}`)} to break down into subtasks\n` +
|
||||
`${chalk.cyan('3.')} Run ${chalk.yellow(`task-master update-task --id=${taskId} --prompt="..."`)} to update details`,
|
||||
{
|
||||
padding: 1,
|
||||
margin: { top: 1 },
|
||||
borderStyle: 'round',
|
||||
borderColor: 'green',
|
||||
width: process.stdout.columns * 0.95 || 100
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display complete task details - used by both show and start commands
|
||||
*/
|
||||
export function displayTaskDetails(
|
||||
task: Task,
|
||||
options?: {
|
||||
statusFilter?: string;
|
||||
showSuggestedActions?: boolean;
|
||||
customHeader?: string;
|
||||
headerColor?: string;
|
||||
}
|
||||
): void {
|
||||
const {
|
||||
statusFilter,
|
||||
showSuggestedActions = false,
|
||||
customHeader,
|
||||
headerColor = 'blue'
|
||||
} = options || {};
|
||||
|
||||
// Display header - either custom or default
|
||||
if (customHeader) {
|
||||
console.log(
|
||||
boxen(chalk.white.bold(customHeader), {
|
||||
padding: { top: 0, bottom: 0, left: 1, right: 1 },
|
||||
borderColor: headerColor,
|
||||
borderStyle: 'round',
|
||||
margin: { top: 1 }
|
||||
})
|
||||
);
|
||||
} else {
|
||||
displayTaskHeader(task.id, task.title);
|
||||
}
|
||||
|
||||
// Display task properties in table format
|
||||
displayTaskProperties(task);
|
||||
|
||||
// Display implementation details if available
|
||||
if (task.details) {
|
||||
console.log(); // Empty line for spacing
|
||||
displayImplementationDetails(task.details);
|
||||
}
|
||||
|
||||
// Display test strategy if available
|
||||
if ('testStrategy' in task && task.testStrategy) {
|
||||
console.log(); // Empty line for spacing
|
||||
displayTestStrategy(task.testStrategy as string);
|
||||
}
|
||||
|
||||
// Display subtasks if available
|
||||
if (task.subtasks && task.subtasks.length > 0) {
|
||||
// Filter subtasks by status if provided
|
||||
const filteredSubtasks = statusFilter
|
||||
? task.subtasks.filter((sub) => sub.status === statusFilter)
|
||||
: task.subtasks;
|
||||
|
||||
if (filteredSubtasks.length === 0 && statusFilter) {
|
||||
console.log(); // Empty line for spacing
|
||||
console.log(chalk.gray(` No subtasks with status '${statusFilter}'`));
|
||||
} else if (filteredSubtasks.length > 0) {
|
||||
console.log(); // Empty line for spacing
|
||||
displaySubtasks(filteredSubtasks, task.id);
|
||||
}
|
||||
}
|
||||
|
||||
// Display suggested actions if requested
|
||||
if (showSuggestedActions) {
|
||||
console.log(); // Empty line for spacing
|
||||
displaySuggestedActions(task.id);
|
||||
}
|
||||
}
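// Illustrative usage (mirrors how the start command's dry-run path calls this):
//   displayTaskDetails(task, {
//     customHeader: `Dry Run: Starting Task #${task.id} - ${task.title}`,
//     headerColor: 'yellow'
//   });
// Omitting options falls back to the default blue "Task: #<id> - <title>" header.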
|
||||
@@ -1,9 +0,0 @@
/**
* @fileoverview Main UI exports
*/

// Export all components
export * from './components/index.js';

// Re-export existing UI utilities
export * from '../utils/ui.js';
@@ -1,248 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Auto-update utilities for task-master-ai CLI
|
||||
*/
|
||||
|
||||
import { spawn } from 'child_process';
|
||||
import https from 'https';
|
||||
import chalk from 'chalk';
|
||||
import ora from 'ora';
|
||||
import boxen from 'boxen';
|
||||
|
||||
export interface UpdateInfo {
|
||||
currentVersion: string;
|
||||
latestVersion: string;
|
||||
needsUpdate: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current version from build-time injected environment variable
|
||||
*/
|
||||
function getCurrentVersion(): string {
|
||||
// Version is injected at build time via TM_PUBLIC_VERSION
|
||||
const version = process.env.TM_PUBLIC_VERSION;
|
||||
if (version && version !== 'unknown') {
|
||||
return version;
|
||||
}
|
||||
|
||||
// Fallback for development or if injection failed
|
||||
console.warn('Could not read version from TM_PUBLIC_VERSION, using fallback');
|
||||
return '0.0.0';
|
||||
}
|
||||
|
||||
/**
|
||||
* Compare semantic versions with proper pre-release handling
|
||||
* @param v1 - First version
|
||||
* @param v2 - Second version
|
||||
* @returns -1 if v1 < v2, 0 if v1 = v2, 1 if v1 > v2
|
||||
*/
|
||||
export function compareVersions(v1: string, v2: string): number {
|
||||
const toParts = (v: string) => {
|
||||
const [core, pre = ''] = v.split('-', 2);
|
||||
const nums = core.split('.').map((n) => Number.parseInt(n, 10) || 0);
|
||||
return { nums, pre };
|
||||
};
|
||||
|
||||
const a = toParts(v1);
|
||||
const b = toParts(v2);
|
||||
const len = Math.max(a.nums.length, b.nums.length);
|
||||
|
||||
// Compare numeric parts
|
||||
for (let i = 0; i < len; i++) {
|
||||
const d = (a.nums[i] || 0) - (b.nums[i] || 0);
|
||||
if (d !== 0) return d < 0 ? -1 : 1;
|
||||
}
|
||||
|
||||
// Handle pre-release comparison
|
||||
if (a.pre && !b.pre) return -1; // prerelease < release
|
||||
if (!a.pre && b.pre) return 1; // release > prerelease
|
||||
if (a.pre === b.pre) return 0; // same or both empty
|
||||
return a.pre < b.pre ? -1 : 1; // basic prerelease tie-break
|
||||
}
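// Illustrative results (follow directly from the logic above):
//   compareVersions('1.2.0', '1.10.0')      // -1: numeric, not lexicographic
//   compareVersions('2.0.0-rc.1', '2.0.0')  // -1: prerelease < release
//   compareVersions('2.0.0', '2.0.0')       //  0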
|
||||
|
||||
/**
|
||||
* Check for newer version of task-master-ai
|
||||
*/
|
||||
export async function checkForUpdate(
|
||||
currentVersionOverride?: string
|
||||
): Promise<UpdateInfo> {
|
||||
const currentVersion = currentVersionOverride || getCurrentVersion();
|
||||
|
||||
return new Promise((resolve) => {
|
||||
const options = {
|
||||
hostname: 'registry.npmjs.org',
|
||||
path: '/task-master-ai',
|
||||
method: 'GET',
|
||||
headers: {
|
||||
Accept: 'application/vnd.npm.install-v1+json',
|
||||
'User-Agent': `task-master-ai/${currentVersion}`
|
||||
}
|
||||
};
|
||||
|
||||
const req = https.request(options, (res) => {
|
||||
let data = '';
|
||||
|
||||
res.on('data', (chunk) => {
|
||||
data += chunk;
|
||||
});
|
||||
|
||||
res.on('end', () => {
|
||||
try {
|
||||
if (res.statusCode !== 200)
|
||||
throw new Error(`npm registry status ${res.statusCode}`);
|
||||
const npmData = JSON.parse(data);
|
||||
const latestVersion = npmData['dist-tags']?.latest || currentVersion;
|
||||
|
||||
const needsUpdate =
|
||||
compareVersions(currentVersion, latestVersion) < 0;
|
||||
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion,
|
||||
needsUpdate
|
||||
});
|
||||
} catch (error) {
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion: currentVersion,
|
||||
needsUpdate: false
|
||||
});
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
req.on('error', () => {
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion: currentVersion,
|
||||
needsUpdate: false
|
||||
});
|
||||
});
|
||||
|
||||
req.setTimeout(3000, () => {
|
||||
req.destroy();
|
||||
resolve({
|
||||
currentVersion,
|
||||
latestVersion: currentVersion,
|
||||
needsUpdate: false
|
||||
});
|
||||
});
|
||||
|
||||
req.end();
|
||||
});
|
||||
}
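// Illustrative end-to-end flow (a sketch; error handling omitted) combining the
// helpers in this file:
//   const info = await checkForUpdate();
//   if (info.needsUpdate) {
//     displayUpgradeNotification(info.currentVersion, info.latestVersion);
//     await performAutoUpdate(info.latestVersion);
//   }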
|
||||
|
||||
/**
|
||||
* Display upgrade notification message
|
||||
*/
|
||||
export function displayUpgradeNotification(
|
||||
currentVersion: string,
|
||||
latestVersion: string
|
||||
) {
|
||||
const message = boxen(
|
||||
`${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}\n\n` +
|
||||
`Auto-updating to the latest version with new features and bug fixes...`,
|
||||
{
|
||||
padding: 1,
|
||||
margin: { top: 1, bottom: 1 },
|
||||
borderColor: 'yellow',
|
||||
borderStyle: 'round'
|
||||
}
|
||||
);
|
||||
|
||||
console.log(message);
|
||||
}
|
||||
|
||||
/**
|
||||
* Automatically update task-master-ai to the latest version
|
||||
*/
|
||||
export async function performAutoUpdate(
|
||||
latestVersion: string
|
||||
): Promise<boolean> {
|
||||
if (
|
||||
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1' ||
|
||||
process.env.CI ||
|
||||
process.env.NODE_ENV === 'test'
|
||||
) {
|
||||
const reason =
|
||||
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1'
|
||||
? 'TASKMASTER_SKIP_AUTO_UPDATE=1'
|
||||
: process.env.CI
|
||||
? 'CI environment'
|
||||
: 'NODE_ENV=test';
|
||||
console.log(chalk.dim(`Skipping auto-update (${reason})`));
|
||||
return false;
|
||||
}
|
||||
const spinner = ora({
|
||||
text: chalk.blue(
|
||||
`Updating task-master-ai to version ${chalk.green(latestVersion)}`
|
||||
),
|
||||
spinner: 'dots',
|
||||
color: 'blue'
|
||||
}).start();
|
||||
|
||||
return new Promise((resolve) => {
|
||||
const updateProcess = spawn(
|
||||
'npm',
|
||||
[
|
||||
'install',
|
||||
'-g',
|
||||
`task-master-ai@${latestVersion}`,
|
||||
'--no-fund',
|
||||
'--no-audit',
|
||||
'--loglevel=warn'
|
||||
],
|
||||
{
|
||||
stdio: ['ignore', 'pipe', 'pipe']
|
||||
}
|
||||
);
|
||||
|
||||
let errorOutput = '';
|
||||
|
||||
updateProcess.stdout.on('data', () => {
|
||||
// Update spinner text with progress
|
||||
spinner.text = chalk.blue(
|
||||
`Installing task-master-ai@${latestVersion}...`
|
||||
);
|
||||
});
|
||||
|
||||
updateProcess.stderr.on('data', (data) => {
|
||||
errorOutput += data.toString();
|
||||
});
|
||||
|
||||
updateProcess.on('close', (code) => {
|
||||
if (code === 0) {
|
||||
spinner.succeed(
|
||||
chalk.green(
|
||||
`Successfully updated to version ${chalk.bold(latestVersion)}`
|
||||
)
|
||||
);
|
||||
console.log(
|
||||
chalk.dim('Please restart your command to use the new version.')
|
||||
);
|
||||
resolve(true);
|
||||
} else {
|
||||
spinner.fail(chalk.red('Auto-update failed'));
|
||||
console.log(
|
||||
chalk.cyan(
|
||||
`Please run manually: npm install -g task-master-ai@${latestVersion}`
|
||||
)
|
||||
);
|
||||
if (errorOutput) {
|
||||
console.log(chalk.dim(`Error: ${errorOutput.trim()}`));
|
||||
}
|
||||
resolve(false);
|
||||
}
|
||||
});
|
||||
|
||||
updateProcess.on('error', (error) => {
|
||||
spinner.fail(chalk.red('Auto-update failed'));
|
||||
console.log(chalk.red('Error:'), error.message);
|
||||
console.log(
|
||||
chalk.cyan(
|
||||
`Please run manually: npm install -g task-master-ai@${latestVersion}`
|
||||
)
|
||||
);
|
||||
resolve(false);
|
||||
});
|
||||
});
|
||||
}
|
||||
@@ -1,362 +0,0 @@
|
||||
/**
|
||||
* @fileoverview UI utilities for Task Master CLI
|
||||
* Provides formatting, display, and visual components for the command line interface
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import Table from 'cli-table3';
|
||||
import type { Task, TaskStatus, TaskPriority } from '@tm/core/types';
|
||||
|
||||
/**
|
||||
* Get colored status display with ASCII icons (matches scripts/modules/ui.js style)
|
||||
*/
|
||||
export function getStatusWithColor(
|
||||
status: TaskStatus,
|
||||
forTable: boolean = false
|
||||
): string {
|
||||
const statusConfig = {
|
||||
done: {
|
||||
color: chalk.green,
|
||||
icon: '✓',
|
||||
tableIcon: '✓'
|
||||
},
|
||||
pending: {
|
||||
color: chalk.yellow,
|
||||
icon: '○',
|
||||
tableIcon: '○'
|
||||
},
|
||||
'in-progress': {
|
||||
color: chalk.hex('#FFA500'),
|
||||
icon: '▶',
|
||||
tableIcon: '▶'
|
||||
},
|
||||
deferred: {
|
||||
color: chalk.gray,
|
||||
icon: 'x',
|
||||
tableIcon: 'x'
|
||||
},
|
||||
review: {
|
||||
color: chalk.magenta,
|
||||
icon: '?',
|
||||
tableIcon: '?'
|
||||
},
|
||||
cancelled: {
|
||||
color: chalk.gray,
|
||||
icon: 'x',
|
||||
tableIcon: 'x'
|
||||
},
|
||||
blocked: {
|
||||
color: chalk.red,
|
||||
icon: '!',
|
||||
tableIcon: '!'
|
||||
},
|
||||
completed: {
|
||||
color: chalk.green,
|
||||
icon: '✓',
|
||||
tableIcon: '✓'
|
||||
}
|
||||
};
|
||||
|
||||
const config = statusConfig[status] || {
|
||||
color: chalk.red,
|
||||
icon: 'X',
|
||||
tableIcon: 'X'
|
||||
};
|
||||
|
||||
const icon = forTable ? config.tableIcon : config.icon;
|
||||
return config.color(`${icon} ${status}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get colored priority display
|
||||
*/
|
||||
export function getPriorityWithColor(priority: TaskPriority): string {
|
||||
const priorityColors: Record<TaskPriority, (text: string) => string> = {
|
||||
critical: chalk.red.bold,
|
||||
high: chalk.red,
|
||||
medium: chalk.yellow,
|
||||
low: chalk.gray
|
||||
};
|
||||
|
||||
const colorFn = priorityColors[priority] || chalk.white;
|
||||
return colorFn(priority);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get colored complexity display
|
||||
*/
|
||||
export function getComplexityWithColor(complexity: number | string): string {
|
||||
const score =
|
||||
typeof complexity === 'string' ? parseInt(complexity, 10) : complexity;
|
||||
|
||||
if (isNaN(score)) {
|
||||
return chalk.gray('N/A');
|
||||
}
|
||||
|
||||
if (score >= 8) {
|
||||
return chalk.red.bold(`${score} (High)`);
|
||||
} else if (score >= 5) {
|
||||
return chalk.yellow(`${score} (Medium)`);
|
||||
} else {
|
||||
return chalk.green(`${score} (Low)`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Truncate text to specified length
|
||||
*/
|
||||
export function truncate(text: string, maxLength: number): string {
|
||||
if (text.length <= maxLength) {
|
||||
return text;
|
||||
}
|
||||
return text.substring(0, maxLength - 3) + '...';
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a progress bar
|
||||
*/
|
||||
export function createProgressBar(
|
||||
completed: number,
|
||||
total: number,
|
||||
width: number = 30
|
||||
): string {
|
||||
if (total === 0) {
|
||||
return chalk.gray('No tasks');
|
||||
}
|
||||
|
||||
const percentage = Math.round((completed / total) * 100);
|
||||
const filled = Math.round((completed / total) * width);
|
||||
const empty = width - filled;
|
||||
|
||||
const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
|
||||
|
||||
return `${bar} ${chalk.cyan(`${percentage}%`)} (${completed}/${total})`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display a fancy banner
|
||||
*/
|
||||
export function displayBanner(title: string = 'Task Master'): void {
|
||||
console.log(
|
||||
boxen(chalk.white.bold(title), {
|
||||
padding: 1,
|
||||
margin: { top: 1, bottom: 1 },
|
||||
borderStyle: 'round',
|
||||
borderColor: 'blue',
|
||||
textAlignment: 'center'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display an error message (matches scripts/modules/ui.js style)
|
||||
*/
|
||||
export function displayError(message: string, details?: string): void {
|
||||
console.error(
|
||||
boxen(
|
||||
chalk.red.bold('X Error: ') +
|
||||
chalk.white(message) +
|
||||
(details ? '\n\n' + chalk.gray(details) : ''),
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'red'
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display a success message
|
||||
*/
|
||||
export function displaySuccess(message: string): void {
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'green'
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display a warning message
|
||||
*/
|
||||
export function displayWarning(message: string): void {
|
||||
console.log(
|
||||
boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'yellow'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Display info message
|
||||
*/
|
||||
export function displayInfo(message: string): void {
|
||||
console.log(
|
||||
boxen(chalk.blue.bold('i ') + chalk.white(message), {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'blue'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Format dependencies with their status
|
||||
*/
|
||||
export function formatDependenciesWithStatus(
|
||||
dependencies: string[] | number[],
|
||||
tasks: Task[]
|
||||
): string {
|
||||
if (!dependencies || dependencies.length === 0) {
|
||||
return chalk.gray('none');
|
||||
}
|
||||
|
||||
const taskMap = new Map(tasks.map((t) => [t.id.toString(), t]));
|
||||
|
||||
return dependencies
|
||||
.map((depId) => {
|
||||
const task = taskMap.get(depId.toString());
|
||||
if (!task) {
|
||||
return chalk.red(`${depId} (not found)`);
|
||||
}
|
||||
|
||||
const statusIcon =
|
||||
task.status === 'done'
|
||||
? '✓'
|
||||
: task.status === 'in-progress'
|
||||
? '►'
|
||||
: '○';
|
||||
|
||||
return `${depId}${statusIcon}`;
|
||||
})
|
||||
.join(', ');
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a task table for display
|
||||
*/
|
||||
export function createTaskTable(
|
||||
tasks: Task[],
|
||||
options?: {
|
||||
showSubtasks?: boolean;
|
||||
showComplexity?: boolean;
|
||||
showDependencies?: boolean;
|
||||
}
|
||||
): string {
|
||||
const {
|
||||
showSubtasks = false,
|
||||
showComplexity = false,
|
||||
showDependencies = true
|
||||
} = options || {};
|
||||
|
||||
// Calculate dynamic column widths based on terminal width
|
||||
const terminalWidth = process.stdout.columns * 0.9 || 100;
|
||||
// Adjust column widths to better match the original layout
|
||||
const baseColWidths = showComplexity
|
||||
? [
|
||||
Math.floor(terminalWidth * 0.06),
|
||||
Math.floor(terminalWidth * 0.4),
|
||||
Math.floor(terminalWidth * 0.15),
|
||||
Math.floor(terminalWidth * 0.12),
|
||||
Math.floor(terminalWidth * 0.2),
|
||||
Math.floor(terminalWidth * 0.12)
|
||||
] // ID, Title, Status, Priority, Dependencies, Complexity
|
||||
: [
|
||||
Math.floor(terminalWidth * 0.08),
|
||||
Math.floor(terminalWidth * 0.4),
|
||||
Math.floor(terminalWidth * 0.18),
|
||||
Math.floor(terminalWidth * 0.12),
|
||||
Math.floor(terminalWidth * 0.2)
|
||||
]; // ID, Title, Status, Priority, Dependencies
|
||||
|
||||
const headers = [
|
||||
chalk.blue.bold('ID'),
|
||||
chalk.blue.bold('Title'),
|
||||
chalk.blue.bold('Status'),
|
||||
chalk.blue.bold('Priority')
|
||||
];
|
||||
const colWidths = baseColWidths.slice(0, 4);
|
||||
|
||||
if (showDependencies) {
|
||||
headers.push(chalk.blue.bold('Dependencies'));
|
||||
colWidths.push(baseColWidths[4]);
|
||||
}
|
||||
|
||||
if (showComplexity) {
|
||||
headers.push(chalk.blue.bold('Complexity'));
|
||||
colWidths.push(baseColWidths[5] || 12);
|
||||
}
|
||||
|
||||
const table = new Table({
|
||||
head: headers,
|
||||
style: { head: [], border: [] },
|
||||
colWidths,
|
||||
wordWrap: true
|
||||
});
|
||||
|
||||
tasks.forEach((task) => {
|
||||
const row: string[] = [
|
||||
chalk.cyan(task.id.toString()),
|
||||
truncate(task.title, colWidths[1] - 3),
|
||||
getStatusWithColor(task.status, true), // Use table version
|
||||
getPriorityWithColor(task.priority)
|
||||
];
|
||||
|
||||
if (showDependencies) {
|
||||
// For table display, show simple format without status icons
|
||||
if (!task.dependencies || task.dependencies.length === 0) {
|
||||
row.push(chalk.gray('None'));
|
||||
} else {
|
||||
row.push(
|
||||
chalk.cyan(task.dependencies.map((d) => String(d)).join(', '))
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
if (showComplexity) {
|
||||
// Show N/A if no complexity score
|
||||
row.push(chalk.gray('N/A'));
|
||||
}
|
||||
|
||||
table.push(row);
|
||||
|
||||
// Add subtasks if requested
|
||||
if (showSubtasks && task.subtasks && task.subtasks.length > 0) {
|
||||
task.subtasks.forEach((subtask) => {
|
||||
const subRow: string[] = [
|
||||
chalk.gray(` └─ ${subtask.id}`),
|
||||
chalk.gray(truncate(subtask.title, colWidths[1] - 6)),
|
||||
chalk.gray(getStatusWithColor(subtask.status, true)),
|
||||
chalk.gray(subtask.priority || 'medium')
|
||||
];
|
||||
|
||||
if (showDependencies) {
|
||||
subRow.push(
|
||||
chalk.gray(
|
||||
subtask.dependencies && subtask.dependencies.length > 0
|
||||
? subtask.dependencies.map((dep) => String(dep)).join(', ')
|
||||
: 'None'
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
if (showComplexity) {
|
||||
subRow.push(chalk.gray('--'));
|
||||
}
|
||||
|
||||
table.push(subRow);
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
return table.toString();
|
||||
}
|
||||
@@ -1,36 +0,0 @@
|
||||
{
|
||||
"compilerOptions": {
|
||||
"target": "ES2022",
|
||||
"module": "NodeNext",
|
||||
"lib": ["ES2022"],
|
||||
"declaration": true,
|
||||
"declarationMap": true,
|
||||
"sourceMap": true,
|
||||
"outDir": "./dist",
|
||||
"baseUrl": ".",
|
||||
"rootDir": "./src",
|
||||
"strict": true,
|
||||
"noImplicitAny": true,
|
||||
"strictNullChecks": true,
|
||||
"strictFunctionTypes": true,
|
||||
"strictBindCallApply": true,
|
||||
"strictPropertyInitialization": true,
|
||||
"noImplicitThis": true,
|
||||
"alwaysStrict": true,
|
||||
"noUnusedLocals": true,
|
||||
"noUnusedParameters": true,
|
||||
"noImplicitReturns": true,
|
||||
"noFallthroughCasesInSwitch": true,
|
||||
"esModuleInterop": true,
|
||||
"skipLibCheck": true,
|
||||
"forceConsistentCasingInFileNames": true,
|
||||
"moduleResolution": "NodeNext",
|
||||
"moduleDetection": "force",
|
||||
"types": ["node"],
|
||||
"resolveJsonModule": true,
|
||||
"isolatedModules": true,
|
||||
"allowImportingTsExtensions": false
|
||||
},
|
||||
"include": ["src/**/*"],
|
||||
"exclude": ["node_modules", "dist", "tests", "**/*.test.ts", "**/*.spec.ts"]
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
# docs
|
||||
|
||||
## 0.0.4
|
||||
|
||||
## 0.0.3
|
||||
|
||||
## 0.0.2
|
||||
|
||||
## 0.0.1
|
||||
@@ -1,24 +0,0 @@
|
||||
# Task Master Documentation
|
||||
|
||||
Welcome to the Task Master documentation. This documentation site provides comprehensive guides for getting started with Task Master.
|
||||
|
||||
## Getting Started
|
||||
|
||||
- [Quick Start Guide](/getting-started/quick-start) - Complete setup and first-time usage guide
|
||||
- [Requirements](/getting-started/quick-start/requirements) - What you need to get started
|
||||
- [Installation](/getting-started/quick-start/installation) - How to install Task Master
|
||||
|
||||
## Core Capabilities
|
||||
|
||||
- [MCP Tools](/capabilities/mcp) - Model Control Protocol integration
|
||||
- [CLI Commands](/capabilities/cli-root-commands) - Command line interface reference
|
||||
- [Task Structure](/capabilities/task-structure) - Understanding tasks and subtasks
|
||||
|
||||
## Best Practices
|
||||
|
||||
- [Advanced Configuration](/best-practices/configuration-advanced) - Detailed configuration options
|
||||
- [Advanced Tasks](/best-practices/advanced-tasks) - Working with complex task structures
|
||||
|
||||
## Need More Help?
|
||||
|
||||
If you can't find what you're looking for in these docs, please check the root README.md or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).
|
||||
@@ -1,114 +0,0 @@
|
||||
---
|
||||
title: "Installation(2)"
|
||||
description: "This guide walks you through setting up Task Master in your development environment."
|
||||
---
|
||||
|
||||
## Initial Setup
|
||||
|
||||
<Tip>
|
||||
MCP (Model Control Protocol) provides the easiest way to get started with Task Master directly in your editor.
|
||||
</Tip>
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Option 1: Using MCP (Recommended)" icon="sparkles">
|
||||
<Steps>
|
||||
<Step title="Add the MCP config to your editor">
|
||||
<Link href="https://cursor.sh">Cursor</Link> is recommended, but it also works with other text editors
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"taskmaster-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
"MODEL": "claude-3-7-sonnet-20250219",
|
||||
"PERPLEXITY_MODEL": "sonar-pro",
|
||||
"MAX_TOKENS": 128000,
|
||||
"TEMPERATURE": 0.2,
|
||||
"DEFAULT_SUBTASKS": 5,
|
||||
"DEFAULT_PRIORITY": "medium"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
</Step>
|
||||
<Step title="Enable the MCP in your editor settings">
|
||||
|
||||
</Step>
|
||||
<Step title="Prompt the AI to initialize Task Master">
|
||||
> "Can you please initialize taskmaster-ai into my project?"
|
||||
|
||||
**The AI will:**
|
||||
|
||||
1. Create necessary project structure
|
||||
2. Set up initial configuration files
|
||||
3. Guide you through the rest of the process
|
||||
4. Guide you to place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
5. Let you **use natural language commands** to interact with Task Master:
|
||||
|
||||
> "Can you parse my PRD at scripts/prd.txt?"
|
||||
>
|
||||
> "What's the next task I should work on?"
|
||||
>
|
||||
> "Can you help me implement task 3?"
|
||||
</Step>
|
||||
</Steps>
|
||||
</Accordion>
|
||||
<Accordion title="Option 2: Manual Installation">
|
||||
If you prefer to use the command line interface directly:
|
||||
|
||||
<Steps>
|
||||
<Step title="Install">
|
||||
<CodeGroup>
|
||||
|
||||
```bash Global
|
||||
npm install -g task-master-ai
|
||||
```
|
||||
|
||||
|
||||
```bash Local
|
||||
npm install task-master-ai
|
||||
```
|
||||
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
<Step title="Initialize a new project">
|
||||
<CodeGroup>
|
||||
|
||||
```bash Global
|
||||
task-master init
|
||||
```
|
||||
|
||||
|
||||
```bash Local
|
||||
npx task-master-init
|
||||
```
|
||||
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
</Steps>
|
||||
This will prompt you for project details and set up a new project with the necessary files and structure.
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Common Commands
|
||||
|
||||
<Tip>
|
||||
After setting up Task Master, you can use these commands (either via AI prompts or CLI)
|
||||
</Tip>
|
||||
|
||||
```bash
|
||||
# Parse a PRD and generate tasks
|
||||
task-master parse-prd your-prd.txt
|
||||
|
||||
# List all tasks
|
||||
task-master list
|
||||
|
||||
# Show the next task to work on
|
||||
task-master next
|
||||
|
||||
# Generate task files
|
||||
task-master generate
```
|
||||
@@ -1,263 +0,0 @@
|
||||
---
|
||||
title: "AI Client Utilities for MCP Tools"
|
||||
description: "This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools."
|
||||
---
|
||||
## Examples
|
||||
<AccordionGroup>
|
||||
<Accordion title="Basic Usage with Direct Functions">
|
||||
```javascript
|
||||
// In your direct function implementation:
|
||||
import {
|
||||
getAnthropicClientForMCP,
|
||||
getModelConfig,
|
||||
handleClaudeError
|
||||
} from '../utils/ai-client-utils.js';
|
||||
|
||||
export async function someAiOperationDirect(args, log, context) {
|
||||
try {
|
||||
// Initialize Anthropic client with session from context
|
||||
const client = getAnthropicClientForMCP(context.session, log);
|
||||
|
||||
// Get model configuration with defaults or session overrides
|
||||
const modelConfig = getModelConfig(context.session);
|
||||
|
||||
// Make API call with proper error handling
|
||||
try {
|
||||
const response = await client.messages.create({
|
||||
model: modelConfig.model,
|
||||
max_tokens: modelConfig.maxTokens,
|
||||
temperature: modelConfig.temperature,
|
||||
messages: [{ role: 'user', content: 'Your prompt here' }]
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response
|
||||
};
|
||||
} catch (apiError) {
|
||||
// Use helper to get user-friendly error message
|
||||
const friendlyMessage = handleClaudeError(apiError);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'AI_API_ERROR',
|
||||
message: friendlyMessage
|
||||
}
|
||||
};
|
||||
}
|
||||
} catch (error) {
|
||||
// Handle client initialization errors
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'AI_CLIENT_ERROR',
|
||||
message: error.message
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Integration with AsyncOperationManager">
|
||||
```javascript
|
||||
// In your MCP tool implementation:
|
||||
import {
|
||||
AsyncOperationManager,
|
||||
StatusCodes
|
||||
} from '../../utils/async-operation-manager.js';
|
||||
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';
|
||||
|
||||
export async function someAiOperation(args, context) {
|
||||
const { session, mcpLog } = context;
|
||||
const log = mcpLog || console;
|
||||
|
||||
try {
|
||||
// Create operation description
|
||||
const operationDescription = `AI operation: ${args.someParam}`;
|
||||
|
||||
// Start async operation
|
||||
const operation = AsyncOperationManager.createOperation(
|
||||
operationDescription,
|
||||
async (reportProgress) => {
|
||||
try {
|
||||
// Initial progress report
|
||||
reportProgress({
|
||||
progress: 0,
|
||||
status: 'Starting AI operation...'
|
||||
});
|
||||
|
||||
// Call direct function with session and progress reporting
|
||||
const result = await someAiOperationDirect(args, log, {
|
||||
reportProgress,
|
||||
mcpLog: log,
|
||||
session
|
||||
});
|
||||
|
||||
// Final progress update
|
||||
reportProgress({
|
||||
progress: 100,
|
||||
status: result.success ? 'Operation completed' : 'Operation failed',
|
||||
result: result.data,
|
||||
error: result.error
|
||||
});
|
||||
|
||||
return result;
|
||||
} catch (error) {
|
||||
// Handle errors in the operation
|
||||
reportProgress({
|
||||
progress: 100,
|
||||
status: 'Operation failed',
|
||||
error: {
|
||||
message: error.message,
|
||||
code: error.code || 'OPERATION_FAILED'
|
||||
}
|
||||
});
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
// Return immediate response with operation ID
|
||||
return {
|
||||
status: StatusCodes.ACCEPTED,
|
||||
body: {
|
||||
success: true,
|
||||
message: 'Operation started',
|
||||
operationId: operation.id
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
// Handle errors in the MCP tool
|
||||
log.error(`Error in someAiOperation: ${error.message}`);
|
||||
return {
|
||||
status: StatusCodes.INTERNAL_SERVER_ERROR,
|
||||
body: {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'OPERATION_FAILED',
|
||||
message: error.message
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Using Research Capabilities with Perplexity">
|
||||
```javascript
|
||||
// In your direct function:
|
||||
import {
|
||||
getPerplexityClientForMCP,
|
||||
getBestAvailableAIModel
|
||||
} from '../utils/ai-client-utils.js';
|
||||
|
||||
export async function researchOperationDirect(args, log, context) {
|
||||
try {
|
||||
// Get the best AI model for this operation based on needs
|
||||
const { type, client } = await getBestAvailableAIModel(
|
||||
context.session,
|
||||
{ requiresResearch: true },
|
||||
log
|
||||
);
|
||||
|
||||
// Report which model we're using
|
||||
if (context.reportProgress) {
|
||||
await context.reportProgress({
|
||||
progress: 10,
|
||||
status: `Using ${type} model for research...`
|
||||
});
|
||||
}
|
||||
|
||||
// Make API call based on the model type
|
||||
if (type === 'perplexity') {
|
||||
// Call Perplexity
|
||||
const response = await client.chat.completions.create({
|
||||
model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
|
||||
messages: [{ role: 'user', content: args.researchQuery }],
|
||||
temperature: 0.1
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.choices[0].message.content
|
||||
};
|
||||
} else {
|
||||
// Call Claude as fallback
|
||||
// (Implementation depends on specific needs)
|
||||
// ...
|
||||
}
|
||||
} catch (error) {
|
||||
// Handle errors
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'RESEARCH_ERROR',
|
||||
message: error.message
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Model Configuration Override">
|
||||
```javascript
|
||||
// In your direct function:
|
||||
import { getModelConfig } from '../utils/ai-client-utils.js';
|
||||
|
||||
// Using custom defaults for a specific operation
|
||||
const operationDefaults = {
|
||||
model: 'claude-3-haiku-20240307', // Faster, smaller model
|
||||
maxTokens: 1000, // Lower token limit
|
||||
temperature: 0.2 // Lower temperature for more deterministic output
|
||||
};
|
||||
|
||||
// Get model config with operation-specific defaults
|
||||
const modelConfig = getModelConfig(context.session, operationDefaults);
|
||||
|
||||
// Now use modelConfig in your API calls
|
||||
const response = await client.messages.create({
|
||||
model: modelConfig.model,
|
||||
max_tokens: modelConfig.maxTokens,
|
||||
temperature: modelConfig.temperature
|
||||
// Other parameters...
|
||||
});
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Best Practices
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Error Handling">
|
||||
- Always use try/catch blocks around both client initialization and API calls
|
||||
- Use `handleClaudeError` to provide user-friendly error messages
|
||||
- Return standardized error objects with code and message
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Progress Reporting">
|
||||
- Report progress at key points (starting, processing, completing)
|
||||
- Include meaningful status messages
|
||||
- Include error details in progress reports when failures occur
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Session Handling">
|
||||
- Always pass the session from the context to the AI client getters
|
||||
- Use `getModelConfig` to respect user settings from session
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Model Selection">
|
||||
- Use `getBestAvailableAIModel` when you need to select between different models
|
||||
- Set `requiresResearch: true` when you need Perplexity capabilities
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="AsyncOperationManager Integration">
|
||||
- Create descriptive operation names
|
||||
- Handle all errors within the operation function
|
||||
- Return standardized results from direct functions
|
||||
- Return immediate responses with operation IDs
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
@@ -1,180 +0,0 @@
|
||||
---
|
||||
title: "AI Development Workflow"
|
||||
description: "Learn how Task Master and Cursor AI work together to streamline your development workflow"
|
||||
---
|
||||
|
||||
<Tip>The Cursor agent is pre-configured (via the rules file) to follow this workflow</Tip>
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="1. Task Discovery and Selection">
|
||||
Ask the agent to list available tasks:
|
||||
|
||||
```
|
||||
What tasks are available to work on next?
|
||||
```
|
||||
|
||||
The agent will:
|
||||
|
||||
- Run `task-master list` to see all tasks
|
||||
- Run `task-master next` to determine the next task to work on
|
||||
- Analyze dependencies to determine which tasks are ready to be worked on
|
||||
- Prioritize tasks based on priority level and ID order
|
||||
- Suggest the next task(s) to implement
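
A rough sketch of that selection logic is shown below, assuming the task fields used elsewhere in these docs (`status`, `priority`, `dependencies`). It is illustrative only, not the actual Task Master implementation:

```javascript
// Illustrative sketch of the "next task" rules described above.
function pickNextTask(tasks) {
  const done = new Set(
    tasks.filter((t) => t.status === 'done').map((t) => t.id)
  );
  const priorityRank = { high: 0, medium: 1, low: 2 };

  return tasks
    .filter((t) => t.status === 'pending')
    // A task is ready only when every dependency is already done
    .filter((t) => (t.dependencies || []).every((dep) => done.has(dep)))
    // Higher priority first, then lower ID
    .sort(
      (a, b) =>
        (priorityRank[a.priority] ?? 3) - (priorityRank[b.priority] ?? 3) ||
        a.id - b.id
    )[0];
}
```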
</Accordion>
|
||||
|
||||
<Accordion title="2. Task Implementation">
|
||||
When implementing a task, the agent will:
|
||||
|
||||
- Reference the task's details section for implementation specifics
|
||||
- Consider dependencies on previous tasks
|
||||
- Follow the project's coding standards
|
||||
- Create appropriate tests based on the task's testStrategy
|
||||
|
||||
You can ask:
|
||||
|
||||
```
|
||||
Let's implement task 3. What does it involve?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="3. Task Verification">
|
||||
Before marking a task as complete, verify it according to:
|
||||
|
||||
- The task's specified testStrategy
|
||||
- Any automated tests in the codebase
|
||||
- Manual verification if required
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="4. Task Completion">
|
||||
When a task is completed, tell the agent:
|
||||
|
||||
```
|
||||
Task 3 is now complete. Please update its status.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master set-status --id=3 --status=done
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="5. Handling Implementation Drift">
|
||||
If, during implementation, you discover that:
|
||||
|
||||
- The current approach differs significantly from what was planned
|
||||
- Future tasks need to be modified due to current implementation choices
|
||||
- New dependencies or requirements have emerged
|
||||
|
||||
Tell the agent:
|
||||
|
||||
```
|
||||
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
|
||||
```
|
||||
|
||||
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="6. Breaking Down Complex Tasks">
|
||||
For complex tasks that need more granularity:
|
||||
|
||||
```
|
||||
Task 5 seems complex. Can you break it down into subtasks?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --num=3
|
||||
```
|
||||
|
||||
You can provide additional context:
|
||||
|
||||
```
|
||||
Please break down task 5 with a focus on security considerations.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --prompt="Focus on security aspects"
|
||||
```
|
||||
|
||||
You can also expand all pending tasks:
|
||||
|
||||
```
|
||||
Please break down all pending tasks into subtasks.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --all
|
||||
```
|
||||
|
||||
For research-backed subtask generation using Perplexity AI:
|
||||
|
||||
```
|
||||
Please break down task 5 using research-backed generation.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --research
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Example Cursor AI Interactions
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Starting a new project">
|
||||
```
|
||||
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
|
||||
Can you help me parse it and set up the initial tasks?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Working on tasks">
|
||||
```
|
||||
What's the next task I should work on? Please consider dependencies and priorities.
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Implementing a specific task">
|
||||
```
|
||||
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Managing subtasks">
|
||||
```
|
||||
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Handling changes">
|
||||
```
|
||||
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Completing work">
|
||||
```
|
||||
I've finished implementing the authentication system described in task 2. All tests are passing.
|
||||
Please mark it as complete and tell me what I should work on next.
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Analyzing complexity">
|
||||
```
|
||||
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Viewing complexity report">
|
||||
```
|
||||
Can you show me the complexity report in a more readable format?
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
@@ -1,208 +0,0 @@
|
||||
---
|
||||
title: "Task Master Commands"
|
||||
description: "A comprehensive reference of all available Task Master commands"
|
||||
---
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Parse PRD">
|
||||
```bash
|
||||
# Parse a PRD file and generate tasks
|
||||
task-master parse-prd <prd-file.txt>
|
||||
|
||||
# Limit the number of tasks generated
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=10
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="List Tasks">
|
||||
```bash
|
||||
# List all tasks
|
||||
task-master list
|
||||
|
||||
# List tasks with a specific status
|
||||
task-master list --status=<status>
|
||||
|
||||
# List tasks with subtasks
|
||||
task-master list --with-subtasks
|
||||
|
||||
# List tasks with a specific status and include subtasks
|
||||
task-master list --status=<status> --with-subtasks
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Next Task">
|
||||
```bash
|
||||
# Show the next task to work on based on dependencies and status
|
||||
task-master next
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Specific Task">
|
||||
```bash
|
||||
# Show details of a specific task
|
||||
task-master show <id>
|
||||
# or
|
||||
task-master show --id=<id>
|
||||
|
||||
# View a specific subtask (e.g., subtask 2 of task 1)
|
||||
task-master show 1.2
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update Tasks">
|
||||
```bash
|
||||
# Update tasks from a specific ID and provide context
|
||||
task-master update --from=<id> --prompt="<prompt>"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Specific Task">
|
||||
```bash
|
||||
# Update a single task by ID with new information
|
||||
task-master update-task --id=<id> --prompt="<prompt>"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-task --id=<id> --prompt="<prompt>" --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Subtask">
|
||||
```bash
|
||||
# Append additional information to a specific subtask
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
|
||||
|
||||
# Example: Add details about API rate limiting to subtask 2 of task 5
|
||||
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
|
||||
```
|
||||
|
||||
Unlike the `update-task` command which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
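
Conceptually, the append behaves like the sketch below. The exact stored format is internal to Task Master, so the timestamp marker and field name here are illustrative:

```javascript
// Illustrative sketch only — the real on-disk format is managed by Task Master.
function appendSubtaskNote(subtask, note) {
  const stamp = new Date().toISOString();
  // Existing details are preserved; new information is appended with a timestamp
  subtask.details = (subtask.details || '') + `\n\n[${stamp}] ${note}`;
  return subtask;
}
```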
</Accordion>
|
||||
|
||||
<Accordion title="Generate Task Files">
|
||||
```bash
|
||||
# Generate individual task files from tasks.json
|
||||
task-master generate
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Set Task Status">
|
||||
```bash
|
||||
# Set status of a single task
|
||||
task-master set-status --id=<id> --status=<status>
|
||||
|
||||
# Set status for multiple tasks
|
||||
task-master set-status --id=1,2,3 --status=<status>
|
||||
|
||||
# Set status for subtasks
|
||||
task-master set-status --id=1.1,1.2 --status=<status>
|
||||
```
|
||||
|
||||
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
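
The cascade can be pictured like this (an illustrative sketch, not the actual implementation):

```javascript
// Marking a task "done" also marks each of its subtasks "done".
function setTaskDone(task) {
  task.status = 'done';
  for (const subtask of task.subtasks || []) {
    subtask.status = 'done';
  }
  return task;
}
```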
</Accordion>
|
||||
|
||||
<Accordion title="Expand Tasks">
|
||||
```bash
|
||||
# Expand a specific task with subtasks
|
||||
task-master expand --id=<id> --num=<number>
|
||||
|
||||
# Expand with additional context
|
||||
task-master expand --id=<id> --prompt="<context>"
|
||||
|
||||
# Expand all pending tasks
|
||||
task-master expand --all
|
||||
|
||||
# Force regeneration of subtasks for tasks that already have them
|
||||
task-master expand --all --force
|
||||
|
||||
# Research-backed subtask generation for a specific task
|
||||
task-master expand --id=<id> --research
|
||||
|
||||
# Research-backed generation for all tasks
|
||||
task-master expand --all --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Clear Subtasks">
|
||||
```bash
|
||||
# Clear subtasks from a specific task
|
||||
task-master clear-subtasks --id=<id>
|
||||
|
||||
# Clear subtasks from multiple tasks
|
||||
task-master clear-subtasks --id=1,2,3
|
||||
|
||||
# Clear subtasks from all tasks
|
||||
task-master clear-subtasks --all
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Analyze Task Complexity">
|
||||
```bash
|
||||
# Analyze complexity of all tasks
|
||||
task-master analyze-complexity
|
||||
|
||||
# Save report to a custom location
|
||||
task-master analyze-complexity --output=my-report.json
|
||||
|
||||
# Use a specific LLM model
|
||||
task-master analyze-complexity --model=claude-3-opus-20240229
|
||||
|
||||
# Set a custom complexity threshold (1-10)
|
||||
task-master analyze-complexity --threshold=6
|
||||
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use Perplexity AI for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="View Complexity Report">
|
||||
```bash
|
||||
# Display the task complexity analysis report
|
||||
task-master complexity-report
|
||||
|
||||
# View a report at a custom location
|
||||
task-master complexity-report --file=my-report.json
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Managing Task Dependencies">
|
||||
```bash
|
||||
# Add a dependency to a task
|
||||
task-master add-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Remove a dependency from a task
|
||||
task-master remove-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Validate dependencies without fixing them
|
||||
task-master validate-dependencies
|
||||
|
||||
# Find and fix invalid dependencies automatically
|
||||
task-master fix-dependencies
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Add a New Task">
|
||||
```bash
|
||||
# Add a new task using AI
|
||||
task-master add-task --prompt="Description of the new task"
|
||||
|
||||
# Add a task with dependencies
|
||||
task-master add-task --prompt="Description" --dependencies=1,2,3
|
||||
|
||||
# Add a task with priority
|
||||
task-master add-task --prompt="Description" --priority=high
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Initialize a Project">
|
||||
```bash
|
||||
# Initialize a new project with Task Master structure
|
||||
task-master init
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
@@ -1,80 +0,0 @@
|
||||
---
|
||||
title: "Configuration"
|
||||
description: "Configure Task Master through environment variables in a .env file"
|
||||
---
|
||||
|
||||
## Required Configuration
|
||||
|
||||
<Note>
|
||||
Task Master requires an Anthropic API key to function. Add this to your `.env` file:
|
||||
|
||||
```bash
|
||||
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
|
||||
```
|
||||
|
||||
You can obtain an API key from the [Anthropic Console](https://console.anthropic.com/).
|
||||
</Note>
|
||||
|
||||
## Optional Configuration
|
||||
|
||||
| Variable | Default Value | Description | Example |
|
||||
| --- | --- | --- | --- |
|
||||
| `MODEL` | `"claude-3-7-sonnet-20250219"` | Claude model to use | `MODEL=claude-3-opus-20240229` |
|
||||
| `MAX_TOKENS` | `"4000"` | Maximum tokens for responses | `MAX_TOKENS=8000` |
|
||||
| `TEMPERATURE` | `"0.7"` | Temperature for model responses | `TEMPERATURE=0.5` |
|
||||
| `DEBUG` | `"false"` | Enable debug logging | `DEBUG=true` |
|
||||
| `LOG_LEVEL` | `"info"` | Console output level | `LOG_LEVEL=debug` |
|
||||
| `DEFAULT_SUBTASKS` | `"3"` | Default subtask count | `DEFAULT_SUBTASKS=5` |
|
||||
| `DEFAULT_PRIORITY` | `"medium"` | Default priority | `DEFAULT_PRIORITY=high` |
|
||||
| `PROJECT_NAME` | `"MCP SaaS MVP"` | Project name in metadata | `PROJECT_NAME=My Awesome Project` |
|
||||
| `PROJECT_VERSION` | `"1.0.0"` | Version in metadata | `PROJECT_VERSION=2.1.0` |
|
||||
| `PERPLEXITY_API_KEY` | - | For research-backed features | `PERPLEXITY_API_KEY=pplx-...` |
|
||||
| `PERPLEXITY_MODEL` | `"sonar-medium-online"` | Perplexity model | `PERPLEXITY_MODEL=sonar-large-online` |
|
||||
|
||||
## Example .env File
|
||||
|
||||
```
|
||||
# Required
|
||||
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
|
||||
|
||||
# Optional - Claude Configuration
|
||||
MODEL=claude-3-7-sonnet-20250219
|
||||
MAX_TOKENS=4000
|
||||
TEMPERATURE=0.7
|
||||
|
||||
# Optional - Perplexity API for Research
|
||||
PERPLEXITY_API_KEY=pplx-your-api-key
|
||||
PERPLEXITY_MODEL=sonar-medium-online
|
||||
|
||||
# Optional - Project Info
|
||||
PROJECT_NAME=My Project
|
||||
PROJECT_VERSION=1.0.0
|
||||
|
||||
# Optional - Application Configuration
|
||||
DEFAULT_SUBTASKS=3
|
||||
DEFAULT_PRIORITY=medium
|
||||
DEBUG=false
|
||||
LOG_LEVEL=info
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### If `task-master init` doesn't respond:
|
||||
|
||||
Try running it with Node directly:
|
||||
|
||||
```bash
|
||||
node node_modules/claude-task-master/scripts/init.js
|
||||
```
|
||||
|
||||
Or clone the repository and run:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/eyaltoledano/claude-task-master.git
|
||||
cd claude-task-master
|
||||
node scripts/init.js
|
||||
```
|
||||
|
||||
<Note>
|
||||
For advanced configuration options and detailed customization, see the [Advanced Configuration](/best-practices/configuration-advanced) guide.
|
||||
</Note>
|
||||
@@ -1,95 +0,0 @@
|
||||
---
|
||||
title: "Cursor AI Integration"
|
||||
description: "Learn how to set up and use Task Master with Cursor AI"
|
||||
---
|
||||
|
||||
## Setting up Cursor AI Integration
|
||||
|
||||
<Check>
|
||||
Task Master is designed to work seamlessly with [Cursor AI](https://www.cursor.so/), providing a structured workflow for AI-driven development.
|
||||
</Check>
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Using Cursor with MCP (Recommended)" icon="sparkles">
|
||||
If you've already set up Task Master with MCP in Cursor, the integration is automatic. You can simply use natural language to interact with Task Master:
|
||||
|
||||
```
|
||||
What tasks are available to work on next?
|
||||
Can you analyze the complexity of our tasks?
|
||||
I'd like to implement task 4. What does it involve?
|
||||
```
|
||||
</Accordion>
|
||||
<Accordion title="Manual Cursor Setup">
|
||||
If you're not using MCP, you can still set up Cursor integration:
|
||||
|
||||
<Steps>
|
||||
<Step title="After initializing your project, open it in Cursor">
|
||||
The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
|
||||
</Step>
|
||||
<Step title="Place your PRD document in the scripts/ directory (e.g., scripts/prd.txt)">
|
||||
|
||||
</Step>
|
||||
<Step title="Open Cursor's AI chat and switch to Agent mode">
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
</Accordion>
|
||||
<Accordion title="Alternative MCP Setup in Cursor">
|
||||
<Steps>
|
||||
<Step title="Go to Cursor settings">
|
||||
|
||||
</Step>
|
||||
<Step title="Navigate to the MCP section">
|
||||
|
||||
</Step>
|
||||
<Step title="Click on 'Add New MCP Server'">
|
||||
|
||||
</Step>
|
||||
<Step title="Configure with the following details:">
|
||||
- Name: "Task Master"
|
||||
- Type: "Command"
|
||||
- Command: "npx -y task-master-ai"
|
||||
</Step>
|
||||
<Step title="Save Settings">
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience.
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Initial Task Generation
|
||||
|
||||
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:
|
||||
|
||||
```
|
||||
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master parse-prd scripts/prd.txt
|
||||
```
|
||||
|
||||
This will:
|
||||
|
||||
- Parse your PRD document
|
||||
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies (a sketch of a typical entry follows this list)
|
||||
- The agent will understand this process due to the Cursor rules
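
For orientation, an entry in the generated `tasks.json` looks roughly like the sketch below. The exact schema is produced by Task Master, so treat the field list as illustrative rather than exhaustive:

```javascript
// Illustrative shape of a single generated task.
const exampleTask = {
  id: 1,
  title: 'Set up project scaffolding',
  status: 'pending', // e.g. pending, in-progress, done
  priority: 'high',
  dependencies: [], // IDs of tasks that must be completed first
  details: 'Implementation notes derived from the PRD',
  testStrategy: 'How this task should be verified',
  subtasks: []
};
```
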
### Generate Individual Task Files
|
||||
|
||||
Next, ask the agent to generate individual task files:
|
||||
|
||||
```
|
||||
Please generate individual task files from tasks.json
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master generate
|
||||
```
|
||||
|
||||
This creates individual task files in the `tasks/` directory (e.g., `task_001.txt`, `task_002.txt`), making it easier to reference specific tasks.
|
||||
@@ -1,56 +0,0 @@
|
||||
---
|
||||
title: "Example Cursor AI Interactions"
|
||||
description: "Below are some common interactions with Cursor AI when using Task Master"
|
||||
---
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Starting a new project">
|
||||
```
|
||||
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
|
||||
Can you help me parse it and set up the initial tasks?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Working on tasks">
|
||||
```
|
||||
What's the next task I should work on? Please consider dependencies and priorities.
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Implementing a specific task">
|
||||
```
|
||||
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Managing subtasks">
|
||||
```
|
||||
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Handling changes">
|
||||
```
|
||||
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Completing work">
|
||||
```
|
||||
I've finished implementing the authentication system described in task 2. All tests are passing.
|
||||
Please mark it as complete and tell me what I should work on next.
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Analyzing complexity">
|
||||
```
|
||||
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Viewing complexity report">
|
||||
```
|
||||
Can you show me the complexity report in a more readable format?
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
@@ -1,210 +0,0 @@
|
||||
---
|
||||
title: Advanced Tasks
|
||||
sidebarTitle: "Advanced Tasks"
|
||||
---
|
||||
|
||||
## AI-Driven Development Workflow
|
||||
|
||||
The Cursor agent is pre-configured (via the rules file) to follow this workflow:
|
||||
|
||||
### 1. Task Discovery and Selection
|
||||
|
||||
Ask the agent to list available tasks:
|
||||
|
||||
```
|
||||
What tasks are available to work on next?
|
||||
```
|
||||
|
||||
```
|
||||
Can you show me tasks 1, 3, and 5 to understand their current status?
|
||||
```
|
||||
|
||||
The agent will:
|
||||
|
||||
- Run `task-master list` to see all tasks
|
||||
- Run `task-master next` to determine the next task to work on
|
||||
- Run `task-master show 1,3,5` to display multiple tasks with interactive options
|
||||
- Analyze dependencies to determine which tasks are ready to be worked on
|
||||
- Prioritize tasks based on priority level and ID order
|
||||
- Suggest the next task(s) to implement
|
||||
|
||||
### 2. Task Implementation
|
||||
|
||||
When implementing a task, the agent will:
|
||||
|
||||
- Reference the task's details section for implementation specifics
|
||||
- Consider dependencies on previous tasks
|
||||
- Follow the project's coding standards
|
||||
- Create appropriate tests based on the task's testStrategy
|
||||
|
||||
You can ask:
|
||||
|
||||
```
|
||||
Let's implement task 3. What does it involve?
|
||||
```
|
||||
|
||||
### 2.1. Viewing Multiple Tasks
|
||||
|
||||
For efficient context gathering and batch operations:
|
||||
|
||||
```
|
||||
Show me tasks 5, 7, and 9 so I can plan my implementation approach.
|
||||
```
|
||||
|
||||
The agent will:
|
||||
|
||||
- Run `task-master show 5,7,9` to display a compact summary table
|
||||
- Show task status, priority, and progress indicators
|
||||
- Provide an interactive action menu with batch operations
|
||||
- Allow you to perform group actions like marking multiple tasks as in-progress
|
||||
|
||||
### 3. Task Verification
|
||||
|
||||
Before marking a task as complete, verify it according to:
|
||||
|
||||
- The task's specified testStrategy
|
||||
- Any automated tests in the codebase
|
||||
- Manual verification if required
|
||||
|
||||
### 4. Task Completion
|
||||
|
||||
When a task is completed, tell the agent:
|
||||
|
||||
```
|
||||
Task 3 is now complete. Please update its status.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master set-status --id=3 --status=done
|
||||
```
|
||||
|
||||
### 5. Handling Implementation Drift
|
||||
|
||||
If, during implementation, you discover that:
|
||||
|
||||
- The current approach differs significantly from what was planned
|
||||
- Future tasks need to be modified due to current implementation choices
|
||||
- New dependencies or requirements have emerged
|
||||
|
||||
Tell the agent:
|
||||
|
||||
```
|
||||
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks (from ID 4) to reflect this change?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master update --from=4 --prompt="Now we are using MongoDB instead of PostgreSQL."
|
||||
|
||||
# OR, if research is needed to find best practices for MongoDB:
|
||||
task-master update --from=4 --prompt="Update to use MongoDB, researching best practices" --research
|
||||
```
|
||||
|
||||
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
|
||||
|
||||
### 6. Reorganizing Tasks
|
||||
|
||||
If you need to reorganize your task structure:
|
||||
|
||||
```
|
||||
I think subtask 5.2 would fit better as part of task 7 instead. Can you move it there?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master move --from=5.2 --to=7.3
|
||||
```
|
||||
|
||||
You can reorganize tasks in various ways:
|
||||
|
||||
- Moving a standalone task to become a subtask: `--from=5 --to=7`
|
||||
- Moving a subtask to become a standalone task: `--from=5.2 --to=7`
|
||||
- Moving a subtask to a different parent: `--from=5.2 --to=7.3`
|
||||
- Reordering subtasks within the same parent: `--from=5.2 --to=5.4`
|
||||
- Moving a task to a new ID position: `--from=5 --to=25` (even if task 25 doesn't exist yet)
|
||||
- Moving multiple tasks at once: `--from=10,11,12 --to=16,17,18` (both lists must contain the same number of IDs; Taskmaster maps each source ID to the destination ID in the same position)
|
||||
|
||||
When moving tasks to new IDs:
|
||||
|
||||
- The system automatically creates placeholder tasks for non-existent destination IDs
|
||||
- This prevents accidental data loss during reorganization
|
||||
- Any tasks that depend on moved tasks will have their dependencies updated
|
||||
- When moving a parent task, all its subtasks are automatically moved with it and renumbered
|
||||
|
||||
This is particularly useful as your project understanding evolves and you need to refine your task structure.
|
||||
|
||||
### 7. Resolving Merge Conflicts with Tasks
|
||||
|
||||
When working with a team, you might encounter merge conflicts in your tasks.json file if multiple team members create tasks on different branches. The move command makes resolving these conflicts straightforward:
|
||||
|
||||
```
|
||||
I just merged the main branch and there's a conflict with tasks.json. My teammates created tasks 10-15 while I created tasks 10-12 on my branch. Can you help me resolve this?
|
||||
```
|
||||
|
||||
The agent will help you:
|
||||
|
||||
1. Keep your teammates' tasks (10-15)
|
||||
2. Move your tasks to new positions to avoid conflicts:
|
||||
|
||||
```bash
|
||||
# Move your tasks to new positions (e.g., 16-18)
|
||||
task-master move --from=10 --to=16
|
||||
task-master move --from=11 --to=17
|
||||
task-master move --from=12 --to=18
|
||||
```
|
||||
|
||||
This approach preserves everyone's work while maintaining a clean task structure, making it much easier to handle task conflicts than trying to manually merge JSON files.
|
||||
|
||||
### 8. Breaking Down Complex Tasks
|
||||
|
||||
For complex tasks that need more granularity:
|
||||
|
||||
```
|
||||
Task 5 seems complex. Can you break it down into subtasks?
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --num=3
|
||||
```
|
||||
|
||||
You can provide additional context:
|
||||
|
||||
```
|
||||
Please break down task 5 with a focus on security considerations.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --prompt="Focus on security aspects"
|
||||
```
|
||||
|
||||
You can also expand all pending tasks:
|
||||
|
||||
```
|
||||
Please break down all pending tasks into subtasks.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --all
|
||||
```
|
||||
|
||||
For research-backed subtask generation using the configured research model:
|
||||
|
||||
```
|
||||
Please break down task 5 using research-backed generation.
|
||||
```
|
||||
|
||||
The agent will execute:
|
||||
|
||||
```bash
|
||||
task-master expand --id=5 --research
|
||||
```
|
||||
@@ -1,319 +0,0 @@
|
||||
---
|
||||
title: Advanced Configuration
|
||||
sidebarTitle: "Advanced Configuration"
|
||||
---
|
||||
|
||||
|
||||
Taskmaster uses two primary methods for configuration:
|
||||
|
||||
1. **`.taskmaster/config.json` File (Recommended - New Structure)**
|
||||
|
||||
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
|
||||
- **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
|
||||
- **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
|
||||
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
|
||||
- **Example Structure:**
|
||||
```json
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 64000,
|
||||
"temperature": 0.2,
|
||||
"baseURL": "https://api.anthropic.com/v1"
|
||||
},
|
||||
"research": {
|
||||
"provider": "perplexity",
|
||||
"modelId": "sonar-pro",
|
||||
"maxTokens": 8700,
|
||||
"temperature": 0.1,
|
||||
"baseURL": "https://api.perplexity.ai/v1"
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-5-sonnet",
|
||||
"maxTokens": 64000,
|
||||
"temperature": 0.2
|
||||
}
|
||||
},
|
||||
"global": {
|
||||
"logLevel": "info",
|
||||
"debug": false,
|
||||
"defaultSubtasks": 5,
|
||||
"defaultPriority": "medium",
|
||||
"defaultTag": "master",
|
||||
"projectName": "Your Project Name",
|
||||
"ollamaBaseURL": "http://localhost:11434/api",
|
||||
"azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
|
||||
"vertexProjectId": "your-gcp-project-id",
|
||||
"vertexLocation": "us-central1"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
|
||||
|
||||
- For projects that haven't migrated to the new structure yet.
|
||||
- **Location:** Project root directory.
|
||||
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
|
||||
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
|
||||
|
||||
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
|
||||
|
||||
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
|
||||
- **Location:**
|
||||
- For CLI usage: Create a `.env` file in your project root.
|
||||
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
|
||||
- **Required API Keys (Depending on configured providers):**
|
||||
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
|
||||
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
|
||||
- `OPENAI_API_KEY`: Your OpenAI API key.
|
||||
- `GOOGLE_API_KEY`: Your Google API key (also used for Vertex AI provider).
|
||||
- `MISTRAL_API_KEY`: Your Mistral API key.
|
||||
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
|
||||
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
|
||||
- `XAI_API_KEY`: Your X-AI API key.
|
||||
- **Optional Endpoint Overrides:**
|
||||
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
|
||||
- **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
|
||||
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
|
||||
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
|
||||
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
|
||||
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
|
||||
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
|
||||
- **Optional Auto-Update Control:**
|
||||
- `TASKMASTER_SKIP_AUTO_UPDATE`: Set to '1' to disable automatic updates. Also automatically disabled in CI environments (when `CI` environment variable is set).
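
The check is roughly equivalent to the sketch below (illustrative; the CLI performs it internally before attempting an update):

```javascript
// Illustrative sketch of when the auto-update is skipped.
function shouldSkipAutoUpdate() {
  return (
    process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1' ||
    !!process.env.CI ||
    process.env.NODE_ENV === 'test'
  );
}
```
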
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.
|
||||
|
||||
## Tagged Task Lists Configuration (v0.17+)
|
||||
|
||||
Taskmaster includes a tagged task lists system for multi-context task management.
|
||||
|
||||
### Global Tag Settings
|
||||
|
||||
```json
|
||||
"global": {
|
||||
"defaultTag": "master"
|
||||
}
|
||||
```
|
||||
|
||||
- **`defaultTag`** (string): Default tag context for new operations (default: "master")
|
||||
|
||||
### Git Integration
|
||||
|
||||
Task Master provides manual git integration through the `--from-branch` option:
|
||||
|
||||
- **Manual Tag Creation**: Use `task-master add-tag --from-branch` to create a tag based on your current git branch name
|
||||
- **User Control**: No automatic tag switching - you control when and how tags are created
|
||||
- **Flexible Workflow**: Supports any git workflow without imposing rigid branch-tag mappings
|
||||
|
||||

## State Management File

Taskmaster uses `.taskmaster/state.json` to track tagged system runtime information:

```json
{
  "currentTag": "master",
  "lastSwitched": "2025-06-11T20:26:12.598Z",
  "migrationNoticeShown": true
}
```

- **`currentTag`**: Currently active tag context
- **`lastSwitched`**: Timestamp of the last tag switch
- **`migrationNoticeShown`**: Whether the migration notice has been displayed

This file is automatically created during tagged system migration and should not be manually edited.

## Example `.env` File (for API Keys)

```
# Required API keys for providers configured in .taskmaster/config.json
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# etc.

# Optional Endpoint Overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1
#
# Azure OpenAI Configuration
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ or https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api

# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
```

## Troubleshooting

### Configuration Errors

- If Task Master reports errors about missing configuration or cannot find the config file, run `task-master models --setup` in your project root to create or repair the file (see the commands after this list).
- For new projects, the config will be created at `.taskmaster/config.json`. For legacy projects, you may want to run `task-master migrate` to move to the new structure.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.
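
For convenience, the two commands referenced above (run them from the project root):

```bash
# Interactively create or repair .taskmaster/config.json (model selections, etc.)
task-master models --setup

# Move a legacy .taskmasterconfig project to the new .taskmaster/ structure
task-master migrate
```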

### If `task-master init` doesn't respond:

Try running it with Node directly:

```bash
node node_modules/claude-task-master/scripts/init.js
```

Or clone the repository and run:

```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```

## Provider-Specific Configuration

### Google Vertex AI Configuration

Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:

1. **Prerequisites**:
   - A Google Cloud account with the Vertex AI API enabled
   - Either a Google API key with Vertex AI permissions OR a service account with appropriate roles
   - A Google Cloud project ID

2. **Authentication Options**:
   - **API Key**: Set the `GOOGLE_API_KEY` environment variable
   - **Service Account**: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account JSON file

3. **Required Configuration**:
   - Set `VERTEX_PROJECT_ID` to your Google Cloud project ID
   - Set `VERTEX_LOCATION` to your preferred Google Cloud region (default: us-central1)

4. **Example Setup**:

   ```bash
   # In .env file
   GOOGLE_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXX
   VERTEX_PROJECT_ID=my-gcp-project-123
   VERTEX_LOCATION=us-central1
   ```

   Or using a service account:

   ```bash
   # In .env file
   GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
   VERTEX_PROJECT_ID=my-gcp-project-123
   VERTEX_LOCATION=us-central1
   ```

5. **In .taskmaster/config.json**:

   ```json
   "global": {
     "vertexProjectId": "my-gcp-project-123",
     "vertexLocation": "us-central1"
   }
   ```

### Azure OpenAI Configuration

Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure cloud platform and requires specific configuration:

1. **Prerequisites**:
   - An Azure account with an active subscription
   - An Azure OpenAI service resource created in the Azure portal
   - An Azure OpenAI API key and endpoint URL
   - Deployed models (e.g., gpt-4o, gpt-4o-mini, gpt-4.1, etc.) in your Azure OpenAI resource

2. **Authentication**:
   - Set the `AZURE_OPENAI_API_KEY` environment variable with your Azure OpenAI API key
   - Configure the endpoint URL using one of the methods below

3. **Configuration Options**:

   **Option 1: Using Global Azure Base URL (affects all Azure models)**

   ```json
   // In .taskmaster/config.json
   {
     "models": {
       "main": {
         "provider": "azure",
         "modelId": "gpt-4o",
         "maxTokens": 16000,
         "temperature": 0.7
       },
       "fallback": {
         "provider": "azure",
         "modelId": "gpt-4o-mini",
         "maxTokens": 10000,
         "temperature": 0.7
       }
     },
     "global": {
       "azureBaseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
     }
   }
   ```

   **Option 2: Using Per-Model Base URLs (recommended for flexibility)**

   ```json
   // In .taskmaster/config.json
   {
     "models": {
       "main": {
         "provider": "azure",
         "modelId": "gpt-4o",
         "maxTokens": 16000,
         "temperature": 0.7,
         "baseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
       },
       "research": {
         "provider": "perplexity",
         "modelId": "sonar-pro",
         "maxTokens": 8700,
         "temperature": 0.1
       },
       "fallback": {
         "provider": "azure",
         "modelId": "gpt-4o-mini",
         "maxTokens": 10000,
         "temperature": 0.7,
         "baseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
       }
     }
   }
   ```

4. **Environment Variables**:

   ```bash
   # In .env file
   AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here

   # Optional: Override endpoint for all Azure models
   AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/openai/deployments
   ```

5. **Important Notes**:
   - **Model Deployment Names**: The `modelId` in your configuration should match the **deployment name** you created in Azure OpenAI Studio, not the underlying model name (see the sketch after these notes)
   - **Base URL Priority**: Per-model `baseURL` settings override the global `azureBaseURL` setting
   - **Endpoint Format**: When using a per-model `baseURL`, use the full path including `/openai/deployments`
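
   As an illustration, if your deployment in Azure OpenAI Studio were named `gpt4o-prod` (a hypothetical name), the `main` role would reference that deployment name rather than the model family:

   ```json
   "main": {
     "provider": "azure",
     "modelId": "gpt4o-prod",
     "maxTokens": 16000,
     "temperature": 0.7,
     "baseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
   }
   ```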

6. **Troubleshooting**:

   **"Resource not found" errors:**
   - Ensure your `baseURL` includes the full path: `https://your-resource-name.openai.azure.com/openai/deployments`
   - Verify that your deployment name in `modelId` exactly matches what's configured in Azure OpenAI Studio
   - Check that your Azure OpenAI resource is in the correct region and properly deployed

   **Authentication errors:**
   - Verify your `AZURE_OPENAI_API_KEY` is correct and has not expired
   - Ensure your Azure OpenAI resource has the necessary permissions
   - Check that your subscription has not been suspended or reached quota limits

   **Model availability errors:**
   - Confirm the model is deployed in your Azure OpenAI resource
   - Verify the deployment name matches your configuration exactly (case-sensitive)
   - Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
   - Ensure you're not being rate limited: keep `maxTokens` within the Tokens per Minute (TPM) rate limit configured for your deployment

@@ -1,8 +0,0 @@
---
title: Intro to Advanced Usage
sidebarTitle: "Advanced Usage"
---

# Best Practices

Explore advanced tips, recommended workflows, and best practices for getting the most out of Task Master.

@@ -1,209 +0,0 @@
---
title: CLI Commands
sidebarTitle: "CLI Commands"
---

<AccordionGroup>
<Accordion title="Parse PRD">
|
||||
```bash
|
||||
# Parse a PRD file and generate tasks
|
||||
task-master parse-prd <prd-file.txt>
|
||||
|
||||
# Limit the number of tasks generated
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=10
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="List Tasks">
|
||||
```bash
|
||||
# List all tasks
|
||||
task-master list
|
||||
|
||||
# List tasks with a specific status
|
||||
task-master list --status=<status>
|
||||
|
||||
# List tasks with subtasks
|
||||
task-master list --with-subtasks
|
||||
|
||||
# List tasks with a specific status and include subtasks
|
||||
task-master list --status=<status> --with-subtasks
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Next Task">
|
||||
```bash
|
||||
# Show the next task to work on based on dependencies and status
|
||||
task-master next
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Show Specific Task">
|
||||
```bash
|
||||
# Show details of a specific task
|
||||
task-master show <id>
|
||||
# or
|
||||
task-master show --id=<id>
|
||||
|
||||
# View a specific subtask (e.g., subtask 2 of task 1)
|
||||
task-master show 1.2
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update Tasks">
|
||||
```bash
|
||||
# Update tasks from a specific ID and provide context
|
||||
task-master update --from=<id> --prompt="<prompt>"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Specific Task">
|
||||
```bash
|
||||
# Update a single task by ID with new information
|
||||
task-master update-task --id=<id> --prompt="<prompt>"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-task --id=<id> --prompt="<prompt>" --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Update a Subtask">
|
||||
```bash
|
||||
# Append additional information to a specific subtask
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
|
||||
|
||||
# Example: Add details about API rate limiting to subtask 2 of task 5
|
||||
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
|
||||
```
|
||||
|
||||
Unlike the `update-task` command which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Generate Task Files">
|
||||
```bash
|
||||
# Generate individual task files from tasks.json
|
||||
task-master generate
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Set Task Status">
|
||||
```bash
|
||||
# Set status of a single task
|
||||
task-master set-status --id=<id> --status=<status>
|
||||
|
||||
# Set status for multiple tasks
|
||||
task-master set-status --id=1,2,3 --status=<status>
|
||||
|
||||
# Set status for subtasks
|
||||
task-master set-status --id=1.1,1.2 --status=<status>
|
||||
```
|
||||
|
||||
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Expand Tasks">
|
||||
```bash
|
||||
# Expand a specific task with subtasks
|
||||
task-master expand --id=<id> --num=<number>
|
||||
|
||||
# Expand with additional context
|
||||
task-master expand --id=<id> --prompt="<context>"
|
||||
|
||||
# Expand all pending tasks
|
||||
task-master expand --all
|
||||
|
||||
# Force regeneration of subtasks for tasks that already have them
|
||||
task-master expand --all --force
|
||||
|
||||
# Research-backed subtask generation for a specific task
|
||||
task-master expand --id=<id> --research
|
||||
|
||||
# Research-backed generation for all tasks
|
||||
task-master expand --all --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Clear Subtasks">
|
||||
```bash
|
||||
# Clear subtasks from a specific task
|
||||
task-master clear-subtasks --id=<id>
|
||||
|
||||
# Clear subtasks from multiple tasks
|
||||
task-master clear-subtasks --id=1,2,3
|
||||
|
||||
# Clear subtasks from all tasks
|
||||
task-master clear-subtasks --all
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Analyze Task Complexity">
|
||||
```bash
|
||||
# Analyze complexity of all tasks
|
||||
task-master analyze-complexity
|
||||
|
||||
# Save report to a custom location
|
||||
task-master analyze-complexity --output=my-report.json
|
||||
|
||||
# Use a specific LLM model
|
||||
task-master analyze-complexity --model=claude-3-opus-20240229
|
||||
|
||||
# Set a custom complexity threshold (1-10)
|
||||
task-master analyze-complexity --threshold=6
|
||||
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use your configured research model for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="View Complexity Report">
|
||||
```bash
|
||||
# Display the task complexity analysis report
|
||||
task-master complexity-report
|
||||
|
||||
# View a report at a custom location
|
||||
task-master complexity-report --file=my-report.json
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Managing Task Dependencies">
|
||||
```bash
|
||||
# Add a dependency to a task
|
||||
task-master add-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Remove a dependency from a task
|
||||
task-master remove-dependency --id=<id> --depends-on=<id>
|
||||
|
||||
# Validate dependencies without fixing them
|
||||
task-master validate-dependencies
|
||||
|
||||
# Find and fix invalid dependencies automatically
|
||||
task-master fix-dependencies
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Add a New Task">
|
||||
```bash
|
||||
# Add a new task using AI
|
||||
task-master add-task --prompt="Description of the new task"
|
||||
|
||||
# Add a task with dependencies
|
||||
task-master add-task --prompt="Description" --dependencies=1,2,3
|
||||
|
||||
# Add a task with priority
|
||||
task-master add-task --prompt="Description" --priority=high
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Initialize a Project">
|
||||
```bash
|
||||
# Initialize a new project with Task Master structure
|
||||
task-master init
|
||||
```
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||

@@ -1,241 +0,0 @@
---
title: Technical Capabilities
sidebarTitle: "Technical Capabilities"
---

# Capabilities (Technical)

Discover the technical capabilities of Task Master, including supported models, integrations, and more.

# CLI Interface Synopsis

This document outlines the command-line interface (CLI) for the Task Master application, as defined in `bin/task-master.js` and `scripts/modules/commands.js`. It is intended for documentation writers who need to understand how users interact with the application from the command line.

## Entry Point

The main entry point for the CLI is the `task-master` command, an executable script that spawns the main application logic in `scripts/dev.js`.

## Global Options

The following options are available for all commands:

- `-h, --help`: Display help information.
- `--version`: Display the application's version.

## Commands

The CLI is organized into a series of commands, each with its own set of options. The following is a summary of the available commands, categorized by functionality.

### 1. Task and Subtask Management

- **`add`**: Creates a new task using an AI-powered prompt.
  - `--prompt <prompt>`: The prompt to use for generating the task.
  - `--dependencies <dependencies>`: A comma-separated list of task IDs that this task depends on.
  - `--priority <priority>`: The priority of the task (e.g., `high`, `medium`, `low`).
- **`add-subtask`**: Adds a subtask to a parent task (see the example invocations after this list).
  - `--parent-id <parentId>`: The ID of the parent task.
  - `--task-id <taskId>`: The ID of an existing task to convert to a subtask.
  - `--title <title>`: The title of the new subtask.
- **`remove`**: Removes one or more tasks or subtasks.
  - `--ids <ids>`: A comma-separated list of task or subtask IDs to remove.
- **`remove-subtask`**: Removes a subtask from its parent.
  - `--id <subtaskId>`: The ID of the subtask to remove (in the format `parentId.subtaskId`).
  - `--convert-to-task`: Converts the subtask to a standalone task.
- **`update`**: Updates multiple tasks starting from a specific ID.
  - `--from <fromId>`: The ID of the task to start updating from.
  - `--prompt <prompt>`: The new context to apply to the tasks.
- **`update-task`**: Updates a single task.
  - `--id <taskId>`: The ID of the task to update.
  - `--prompt <prompt>`: The new context to apply to the task.
- **`update-subtask`**: Appends information to a subtask.
  - `--id <subtaskId>`: The ID of the subtask to update (in the format `parentId.subtaskId`).
  - `--prompt <prompt>`: The information to append to the subtask.
- **`move`**: Moves a task or subtask.
  - `--from <sourceId>`: The ID of the task or subtask to move.
  - `--to <destinationId>`: The destination ID.
- **`clear-subtasks`**: Clears all subtasks from one or more tasks.
  - `--ids <ids>`: A comma-separated list of task IDs.
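
A few illustrative invocations of the subtask and move commands above, using the option names from this synopsis (IDs are hypothetical; confirm exact flag spellings with `task-master --help`):

```bash
# Add a new subtask under task 5
task-master add-subtask --parent-id=5 --title="Write integration tests"

# Promote subtask 5.2 back to a standalone task
task-master remove-subtask --id=5.2 --convert-to-task

# Move task 8 to a new position
task-master move --from=8 --to=12
```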

### 2. Task Information and Status

- **`list`**: Lists all tasks.
  - `--status <status>`: Filters tasks by status.
  - `--with-subtasks`: Includes subtasks in the list.
- **`show`**: Shows the details of a specific task.
  - `--id <taskId>`: The ID of the task to show.
- **`next`**: Shows the next task to work on.
- **`set-status`**: Sets the status of a task or subtask.
  - `--id <id>`: The ID of the task or subtask.
  - `--status <status>`: The new status.

### 3. Task Analysis and Expansion

- **`parse-prd`**: Parses a PRD to generate tasks.
  - `--file <file>`: The path to the PRD file.
  - `--num-tasks <numTasks>`: The number of tasks to generate.
- **`expand`**: Expands a task into subtasks.
  - `--id <taskId>`: The ID of the task to expand.
  - `--num-subtasks <numSubtasks>`: The number of subtasks to generate.
- **`expand-all`**: Expands all eligible tasks.
  - `--num-subtasks <numSubtasks>`: The number of subtasks to generate for each task.
- **`analyze-complexity`**: Analyzes task complexity.
  - `--file <file>`: The path to the tasks file.
- **`complexity-report`**: Displays the complexity analysis report.

### 4. Project and Configuration

- **`init`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`migrate`**: Migrates a project to the new directory structure.
- **`research`**: Performs AI-powered research.
  - `--query <query>`: The research query.

This synopsis provides a comprehensive overview of the CLI commands and their options, which should be helpful for creating user-facing documentation.

# Core Implementation Synopsis

This document provides a high-level overview of the core implementation of the Task Master application, focusing on the functionalities exposed through `scripts/modules/task-manager.js`. It serves as a guide for understanding the application's capabilities when writing user-facing documentation.

## Core Concepts

The application revolves around the management of tasks and subtasks, which are stored in a `tasks.json` file. The core logic provides functionality to create, read, update, and delete tasks and subtasks, as well as to manage their dependencies and statuses.

### Task Structure

A task is a JSON object with the following key properties:

- `id`: A unique number identifying the task.
- `title`: A string representing the task's title.
- `description`: A string providing a brief description of the task.
- `details`: A string containing detailed information about the task.
- `testStrategy`: A string describing how to test the task.
- `status`: A string representing the task's current status (e.g., `pending`, `in-progress`, `done`).
- `dependencies`: An array of task IDs that this task depends on.
- `priority`: A string representing the task's priority (e.g., `high`, `medium`, `low`).
- `subtasks`: An array of subtask objects.

A subtask has a similar structure to a task but is nested within a parent task.
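
Putting those properties together, a single task entry looks roughly like this (the values are illustrative, not from a real project):

```json
{
  "id": 1,
  "title": "Implement user authentication",
  "description": "Add login, logout, and session handling",
  "details": "Use the existing API client and store sessions server-side.",
  "testStrategy": "Unit tests for the auth service plus one end-to-end login test",
  "status": "pending",
  "dependencies": [],
  "priority": "high",
  "subtasks": []
}
```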

## Feature Categories

The core functionalities can be categorized as follows:

### 1. Task and Subtask Management

These functions form the core of the application, allowing for the creation, modification, and deletion of tasks and subtasks.

- **`addTask(prompt, dependencies, priority)`**: Creates a new task using an AI-powered prompt to generate the title, description, details, and test strategy. It can also be used to create a task manually by providing the task data directly.
- **`addSubtask(parentId, existingTaskId, newSubtaskData)`**: Adds a subtask to a parent task. It can either convert an existing task into a subtask or create a new subtask from scratch.
- **`removeTask(taskIds)`**: Removes one or more tasks or subtasks.
- **`removeSubtask(subtaskId, convertToTask)`**: Removes a subtask from its parent. It can optionally convert the subtask into a standalone task.
- **`updateTaskById(taskId, prompt)`**: Updates a task's information based on a prompt.
- **`updateSubtaskById(subtaskId, prompt)`**: Appends additional information to a subtask's details.
- **`updateTasks(fromId, prompt)`**: Updates multiple tasks starting from a specific ID based on a new context.
- **`moveTask(sourceId, destinationId)`**: Moves a task or subtask to a new position.
- **`clearSubtasks(taskIds)`**: Clears all subtasks from one or more tasks.

### 2. Task Information and Status

These functions are used to retrieve information about tasks and manage their status.

- **`listTasks(statusFilter, withSubtasks)`**: Lists all tasks, with options to filter by status and include subtasks.
- **`findTaskById(taskId)`**: Finds a task by its ID.
- **`taskExists(taskId)`**: Checks if a task with a given ID exists.
- **`setTaskStatus(taskIdInput, newStatus)`**: Sets the status of a task or subtask.
- **`updateSingleTaskStatus(taskIdInput, newStatus)`**: A helper function to update the status of a single task or subtask.
- **`findNextTask()`**: Determines the next task to work on based on dependencies and status.

### 3. Task Analysis and Expansion

These functions leverage AI to analyze and break down tasks.

- **`parsePRD(prdPath, numTasks)`**: Parses a Product Requirements Document (PRD) to generate an initial set of tasks.
- **`expandTask(taskId, numSubtasks)`**: Expands a task into a specified number of subtasks using AI.
- **`expandAllTasks(numSubtasks)`**: Expands all eligible pending or in-progress tasks.
- **`analyzeTaskComplexity(options)`**: Analyzes the complexity of tasks and generates recommendations for expansion.
- **`readComplexityReport()`**: Reads the complexity analysis report.

### 4. Dependency Management

These functions are crucial for managing the relationships between tasks.

- **`isTaskDependentOn(task, targetTaskId)`**: Checks if a task has a direct or indirect dependency on another task.

### 5. Project and Configuration

These functions are for managing the project and its configuration.

- **`generateTaskFiles()`**: Generates individual task files from `tasks.json`.
- **`migrateProject()`**: Migrates the project to the new `.taskmaster` directory structure.
- **`performResearch(query, options)`**: Performs AI-powered research with project context.

This overview should provide a solid foundation for creating user-facing documentation. For more detailed information on each function, refer to the source code in `scripts/modules/task-manager/`.

# MCP Interface Synopsis

This document provides an overview of the MCP (Model Context Protocol) interface for the Task Master application. The MCP interface is defined in the `mcp-server/` directory and exposes the application's core functionalities as a set of tools that can be called remotely.

## Core Concepts

The MCP interface is built on top of the `fastmcp` library and registers a set of tools that correspond to the core functionalities of the Task Master application. These tools are defined in the `mcp-server/src/tools/` directory and are registered with the MCP server in `mcp-server/src/tools/index.js`.

Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.

## Tool Categories

The MCP tools can be categorized in the same way as the core functionalities:

### 1. Task and Subtask Management

- **`add_task`**: Creates a new task.
- **`add_subtask`**: Adds a subtask to a parent task.
- **`remove_task`**: Removes one or more tasks or subtasks.
- **`remove_subtask`**: Removes a subtask from its parent.
- **`update_task`**: Updates a single task.
- **`update_subtask`**: Appends information to a subtask.
- **`update`**: Updates multiple tasks.
- **`move_task`**: Moves a task or subtask.
- **`clear_subtasks`**: Clears all subtasks from one or more tasks.

### 2. Task Information and Status

- **`get_tasks`**: Lists all tasks.
- **`get_task`**: Shows the details of a specific task.
- **`next_task`**: Shows the next task to work on.
- **`set_task_status`**: Sets the status of a task or subtask.

### 3. Task Analysis and Expansion

- **`parse_prd`**: Parses a PRD to generate tasks.
- **`expand_task`**: Expands a task into subtasks.
- **`expand_all`**: Expands all eligible tasks.
- **`analyze_project_complexity`**: Analyzes task complexity.
- **`complexity_report`**: Displays the complexity analysis report.

### 4. Dependency Management

- **`add_dependency`**: Adds a dependency to a task.
- **`remove_dependency`**: Removes a dependency from a task.
- **`validate_dependencies`**: Validates the dependencies of all tasks.
- **`fix_dependencies`**: Fixes any invalid dependencies.

### 5. Project and Configuration

- **`initialize_project`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`models`**: Manages AI model configurations.
- **`research`**: Performs AI-powered research.

### 6. Tag Management

- **`add_tag`**: Creates a new tag.
- **`delete_tag`**: Deletes a tag.
- **`list_tags`**: Lists all tags.
- **`use_tag`**: Switches to a different tag.
- **`rename_tag`**: Renames a tag.
- **`copy_tag`**: Copies a tag.

This synopsis provides a clear overview of the MCP interface and its available tools, which will be valuable for anyone writing documentation for developers who need to interact with the Task Master application programmatically.