Compare commits
6 Commits
feature/cl ... v0.18.0

| Author | SHA1 | Date |
|---|---|---|
| | ef1deec947 | |
| | b40139ca05 | |
| | 8852831807 | |
| | 661d3e04ba | |
| | 0dba2cb2da | |
| | 9ee7a94056 | |
@@ -1,17 +0,0 @@
---
'task-master-ai': minor
---

Added comprehensive rule profile management:

**New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.

**Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.

**Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.

**Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.

**Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).

**Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.

**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.

This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.

- Resolves #338
@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Improves Amazon Bedrock support
@@ -1,11 +0,0 @@
---
"task-master-ai": patch
---

Improve provider validation system with clean constants structure

- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators

This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`
@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Fix a `task-master init` bug that occurred in certain environments
@@ -1,22 +0,0 @@
---
"task-master-ai": minor
---

Add Claude Code provider support

Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.

Key features:

- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
- Lazy loading ensures the provider only loads when requested
- Full integration with existing Task Master commands and workflows
- Comprehensive test coverage for reliability
- New `--claude-code` flag for the models command

Users can now configure Claude Code models with:

task-master models --set-main sonnet --claude-code
task-master models --set-research opus --claude-code

The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.
147 .claude/TM_COMMANDS_GUIDE.md Normal file
@@ -0,0 +1,147 @@
# Task Master Commands for Claude Code

Complete guide to using Task Master through Claude Code's slash commands.

## Overview

All Task Master functionality is available through the `/project:tm/` namespace with natural language support and intelligent features.

## Quick Start

```bash
# Install Task Master
/project:tm/setup/quick-install

# Initialize project
/project:tm/init/quick

# Parse requirements
/project:tm/parse-prd requirements.md

# Start working
/project:tm/next
```

## Command Structure

Commands are organized hierarchically to match Task Master's CLI:

- Main commands at `/project:tm/[command]`
- Subcommands for specific operations at `/project:tm/[command]/[subcommand]`
- Natural language arguments accepted throughout

## Complete Command Reference

### Setup & Configuration

- `/project:tm/setup/install` - Full installation guide
- `/project:tm/setup/quick-install` - One-line install
- `/project:tm/init` - Initialize project
- `/project:tm/init/quick` - Quick init with `-y`
- `/project:tm/models` - View AI config
- `/project:tm/models/setup` - Configure AI

### Task Generation

- `/project:tm/parse-prd` - Generate from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files

### Task Management

- `/project:tm/list` - List with natural language filters
- `/project:tm/list/with-subtasks` - Hierarchical view
- `/project:tm/list/by-status <status>` - Filter by status
- `/project:tm/show <id>` - Task details
- `/project:tm/add-task` - Create task
- `/project:tm/update` - Update tasks
- `/project:tm/remove-task` - Delete task

### Status Management

- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`

### Task Analysis

- `/project:tm/analyze-complexity` - AI analysis
- `/project:tm/complexity-report` - View report
- `/project:tm/expand <id>` - Break down task
- `/project:tm/expand/all` - Expand all complex tasks

### Dependencies

- `/project:tm/add-dependency` - Add dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check for issues
- `/project:tm/fix-dependencies` - Auto-fix issues

### Workflows

- `/project:tm/workflows/smart-flow` - Adaptive workflows
- `/project:tm/workflows/pipeline` - Chain commands
- `/project:tm/workflows/auto-implement` - AI implementation

### Utilities

- `/project:tm/status` - Project dashboard
- `/project:tm/next` - Next task recommendation
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/learn` - Interactive help

## Key Features

### Natural Language Support

All commands understand natural language:

```
/project:tm/list pending high priority
/project:tm/update mark 23 as done
/project:tm/add-task implement OAuth login
```

### Smart Context

Commands analyze project state and provide intelligent suggestions based on:

- Current task status
- Dependencies
- Team patterns
- Project phase

### Visual Enhancements

- Progress bars and indicators
- Status badges
- Organized displays
- Clear hierarchies

## Common Workflows

### Daily Development

```
/project:tm/workflows/smart-flow morning
/project:tm/next
/project:tm/set-status/to-in-progress <id>
/project:tm/set-status/to-done <id>
```

### Task Breakdown

```
/project:tm/show <id>
/project:tm/expand <id>
/project:tm/list/with-subtasks
```

### Sprint Planning

```
/project:tm/analyze-complexity
/project:tm/workflows/pipeline init → expand/all → status
```

## Migration from Old Commands

| Old | New |
|-----|-----|
| `/project:task-master:list` | `/project:tm/list` |
| `/project:task-master:complete` | `/project:tm/set-status/to-done` |
| `/project:workflows:auto-implement` | `/project:tm/workflows/auto-implement` |

## Tips

1. Use `/project:tm/` + Tab for command discovery
2. Natural language is supported everywhere
3. Commands provide smart defaults
4. Chain commands for automation
5. Check `/project:tm/learn` for interactive help
55 .claude/commands/tm/add-dependency/index.md Normal file
@@ -0,0 +1,55 @@
Add a dependency between tasks.

Arguments: $ARGUMENTS

Parse the task IDs to establish the dependency relationship.

## Adding Dependencies

Creates a dependency where one task must be completed before another can start.

## Argument Parsing

Parse natural language or IDs:

- "make 5 depend on 3" → task 5 depends on task 3
- "5 needs 3" → task 5 depends on task 3
- "5 3" → task 5 depends on task 3
- "5 after 3" → task 5 depends on task 3
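The phrasings above could be normalized with a small pattern matcher. A minimal sketch, assuming the listed forms are the only inputs; the function name and regexes are illustrative, not Task Master's actual parser:

```python
import re

def parse_dependency_args(text):
    """Normalize natural-language dependency phrasings to (task, depends_on).

    Handles the patterns listed above; returns None if nothing matches.
    """
    text = text.strip().lower()
    patterns = [
        r"make (\d+) depend on (\d+)",   # "make 5 depend on 3"
        r"(\d+) needs (\d+)",            # "5 needs 3"
        r"(\d+) after (\d+)",            # "5 after 3"
        r"(\d+) (\d+)",                  # bare "5 3"
    ]
    for pattern in patterns:
        m = re.fullmatch(pattern, text)
        if m:
            return int(m.group(1)), int(m.group(2))
    return None
```

Each pattern yields the same (dependent, prerequisite) ordering, so downstream code never has to care which phrasing the user typed.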
## Execution

```bash
task-master add-dependency --id=<task-id> --depends-on=<dependency-id>
```

## Validation

Before adding:

1. **Verify both tasks exist**
2. **Check for circular dependencies**
3. **Ensure the dependency makes logical sense**
4. **Warn if creating complex chains**
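The circular-dependency check amounts to a reachability test: adding the edge is unsafe if the dependent task is already reachable from the proposed prerequisite. A sketch under an assumed `dict`-of-lists graph shape:

```python
def would_create_cycle(deps, task, new_dep):
    """Check whether adding `new_dep` as a dependency of `task` creates a cycle.

    `deps` maps task id -> list of task ids it depends on.
    A cycle exists if `task` is already reachable from `new_dep`.
    """
    stack, seen = [new_dep], set()
    while stack:
        current = stack.pop()
        if current == task:
            return True          # found a path back to `task`: cycle
        if current in seen:
            continue
        seen.add(current)
        stack.extend(deps.get(current, []))
    return False
```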
## Smart Features

- Detect if the dependency already exists
- Suggest related dependencies
- Show impact on task flow
- Update task priorities if needed

## Post-Addition

After adding the dependency:

1. Show updated dependency graph
2. Identify any newly blocked tasks
3. Suggest task order changes
4. Update project timeline

## Example Flows

```
/project:tm/add-dependency 5 needs 3
→ Task #5 now depends on Task #3
→ Task #5 is now blocked until #3 completes
→ Suggested: Also consider if #5 needs #4
```
71 .claude/commands/tm/add-subtask/from-task.md Normal file
@@ -0,0 +1,71 @@
Convert an existing task into a subtask.

Arguments: $ARGUMENTS

Parse the parent ID and the task ID to convert.

## Task Conversion

Converts an existing standalone task into a subtask of another task.

## Argument Parsing

- "move task 8 under 5"
- "make 8 a subtask of 5"
- "nest 8 in 5"
- "5 8" → make task 8 a subtask of task 5

## Execution

```bash
task-master add-subtask --parent=<parent-id> --task-id=<task-to-convert>
```

## Pre-Conversion Checks

1. **Validation**
   - Both tasks exist and are valid
   - No circular parent relationships
   - Task isn't already a subtask
   - Logical hierarchy makes sense

2. **Impact Analysis**
   - Dependencies that will be affected
   - Tasks that depend on the converting task
   - Priority alignment needed
   - Status compatibility

## Conversion Process

1. Change the task ID from "8" to "5.1" (next available)
2. Update all dependency references
3. Inherit the parent's context where appropriate
4. Adjust priorities if needed
5. Update time estimates
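Steps 1-2 of the conversion can be sketched as a re-id plus a reference rewrite. The task-dict shape and function name below are assumptions for illustration:

```python
def convert_to_subtask(tasks, parent_id, task_id):
    """Re-id a standalone task as the parent's next subtask and patch references.

    `tasks` maps id -> {"deps": [...]}; ids are strings like "5" or "5.1".
    """
    existing = [t for t in tasks if t.startswith(parent_id + ".")]
    new_id = f"{parent_id}.{len(existing) + 1}"    # e.g. "8" -> "5.1"
    tasks[new_id] = tasks.pop(task_id)
    for task in tasks.values():                    # update all dependency references
        task["deps"] = [new_id if d == task_id else d for d in task["deps"]]
    return new_id
```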
## Smart Features

- Preserve task history
- Maintain dependencies
- Update all references
- Create a conversion log

## Example

```
/project:tm/add-subtask/from-task 5 8
→ Converting: Task #8 becomes subtask #5.1
→ Updated: 3 dependency references
→ Parent task #5 now has 1 subtask
→ Note: Subtask inherits parent's priority

Before: #8 "Implement validation" (standalone)
After:  #5.1 "Implement validation" (subtask of #5)
```

## Post-Conversion

- Show the new task hierarchy
- List updated dependencies
- Verify project integrity
- Suggest related conversions
76 .claude/commands/tm/add-subtask/index.md Normal file
@@ -0,0 +1,76 @@
Add a subtask to a parent task.

Arguments: $ARGUMENTS

Parse arguments to create a new subtask or convert an existing task.

## Adding Subtasks

Creates subtasks to break down complex parent tasks into manageable pieces.

## Argument Parsing

Flexible natural language:

- "add subtask to 5: implement login form"
- "break down 5 with: setup, implement, test"
- "subtask for 5: handle edge cases"
- "5: validate user input" → adds subtask to task 5
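The "parent: titles" form could be split into a parent ID and one or more subtask titles, with commas and "and" both treated as separators. A hypothetical sketch, not the real parser:

```python
def split_subtask_titles(spec):
    """Split "5: setup, implement, test" into (parent_id, [titles]).

    Commas and the word "and" both separate multiple subtask titles.
    """
    parent, _, rest = spec.partition(":")
    parts = [p.strip() for p in rest.replace(" and ", ",").split(",")]
    return parent.strip(), [p for p in parts if p]
```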
## Execution Modes

### 1. Create New Subtask

```bash
task-master add-subtask --parent=<id> --title="<title>" --description="<desc>"
```

### 2. Convert Existing Task

```bash
task-master add-subtask --parent=<id> --task-id=<existing-id>
```

## Smart Features

1. **Automatic Subtask Generation**
   - If the title contains "and" or commas, create multiple subtasks
   - Suggest common subtask patterns
   - Inherit the parent's context

2. **Intelligent Defaults**
   - Priority based on the parent
   - Appropriate time estimates
   - Logical dependencies between subtasks

3. **Validation**
   - Check parent task complexity
   - Warn if there are too many subtasks
   - Ensure the subtask makes sense

## Creation Process

1. Parse the parent task context
2. Generate a subtask with an ID like "5.1"
3. Set appropriate defaults
4. Link to the parent task
5. Update the parent's time estimate

## Example Flows

```
/project:tm/add-subtask to 5: implement user authentication
→ Created subtask #5.1: "implement user authentication"
→ Parent task #5 now has 1 subtask
→ Suggested next subtasks: tests, documentation

/project:tm/add-subtask 5: setup, implement, test
→ Created 3 subtasks:
  #5.1: setup
  #5.2: implement
  #5.3: test
```

## Post-Creation

- Show the updated task hierarchy
- Suggest logical next subtasks
- Update complexity estimates
- Recommend subtask order
78 .claude/commands/tm/add-task/index.md Normal file
@@ -0,0 +1,78 @@
Add new tasks with intelligent parsing and context awareness.

Arguments: $ARGUMENTS

## Smart Task Addition

Parse natural language to create well-structured tasks.

### 1. **Input Understanding**

I'll intelligently parse your request:

- Natural language → structured task
- Detect priority from keywords (urgent, ASAP, important)
- Infer dependencies from context
- Suggest complexity based on the description
- Determine task type (feature, bug, refactor, test, docs)
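The keyword-based detection above can be approximated with lookup tables. The keyword lists and defaults here are illustrative assumptions, not Task Master's actual heuristics:

```python
# Assumed keyword -> field mappings; real detection is AI-driven.
PRIORITY_KEYWORDS = {"urgent": "high", "asap": "high", "important": "high"}
TYPE_KEYWORDS = {"bug": "bug", "fix": "bug", "refactor": "refactor",
                 "test": "test", "docs": "docs"}

def infer_task_fields(text):
    """Guess priority and type from keywords in a free-text request."""
    words = text.lower().split()
    priority = next((p for w, p in PRIORITY_KEYWORDS.items() if w in words), "medium")
    task_type = next((t for w, t in TYPE_KEYWORDS.items() if w in words), "feature")
    return {"priority": priority, "type": task_type}
```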
### 2. **Smart Parsing Examples**

**"Add urgent task to fix login bug"**
→ Title: Fix login bug
→ Priority: high
→ Type: bug
→ Suggested complexity: medium

**"Create task for API documentation after task 23 is done"**
→ Title: API documentation
→ Dependencies: [23]
→ Type: documentation
→ Priority: medium

**"Need to refactor auth module - depends on 12 and 15, high complexity"**
→ Title: Refactor auth module
→ Dependencies: [12, 15]
→ Complexity: high
→ Type: refactor

### 3. **Context Enhancement**

Based on the current project state:

- Suggest related existing tasks
- Warn about potential conflicts
- Recommend dependencies
- Propose subtasks if complex

### 4. **Interactive Refinement**

```yaml
Task Preview:
─────────────
Title: [Extracted title]
Priority: [Inferred priority]
Dependencies: [Detected dependencies]
Complexity: [Estimated complexity]

Suggestions:
- Similar task #34 exists, consider as dependency?
- This seems complex, break into subtasks?
- Tasks #45-47 work on same module
```

### 5. **Validation & Creation**

Before creating:

- Validate that dependencies exist
- Check for duplicates
- Ensure logical ordering
- Verify task completeness

### 6. **Smart Defaults**

Intelligent defaults based on:

- Task type patterns
- Team conventions
- Historical data
- Current sprint/phase

Result: High-quality tasks from minimal input.
121 .claude/commands/tm/analyze-complexity/index.md Normal file
@@ -0,0 +1,121 @@
Analyze task complexity and generate expansion recommendations.

Arguments: $ARGUMENTS

Perform a deep analysis of task complexity across the project.

## Complexity Analysis

Uses AI to analyze tasks and recommend which ones need breakdown.

## Execution Options

```bash
task-master analyze-complexity [--research] [--threshold=5]
```

## Analysis Parameters

- `--research` → use the research AI for deeper analysis
- `--threshold=5` → only flag tasks above complexity 5
- Default: analyze all pending tasks

## Analysis Process

### 1. **Task Evaluation**

For each task, the AI evaluates:

- Technical complexity
- Time requirements
- Dependency complexity
- Risk factors
- Knowledge requirements

### 2. **Complexity Scoring**

Assigns a score from 1-10 based on:

- Implementation difficulty
- Integration challenges
- Testing requirements
- Unknown factors
- Technical debt risk
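The scores feed the banding used by the report below (high >7, medium 5-7, low <5). A sketch of that bucketing, with the data shape assumed for illustration:

```python
def bucket_by_complexity(scores, threshold=5):
    """Group task ids into the report's high/medium/low bands.

    `scores` maps task id -> 1-10 complexity score; tasks above 7 are
    flagged for immediate expansion, 5-7 for review, below 5 left alone.
    """
    bands = {"high": [], "medium": [], "low": []}
    for task_id, score in sorted(scores.items()):
        if score > 7:
            bands["high"].append(task_id)
        elif score >= threshold:
            bands["medium"].append(task_id)
        else:
            bands["low"].append(task_id)
    return bands
```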
### 3. **Recommendations**

For complex tasks:

- Suggest an expansion approach
- Recommend a subtask breakdown
- Identify risk areas
- Propose mitigation strategies

## Smart Analysis Features

1. **Pattern Recognition**
   - Similar task comparisons
   - Historical complexity accuracy
   - Team velocity consideration
   - Technology stack factors

2. **Contextual Factors**
   - Team expertise
   - Available resources
   - Timeline constraints
   - Business criticality

3. **Risk Assessment**
   - Technical risks
   - Timeline risks
   - Dependency risks
   - Knowledge gaps

## Output Format

```
Task Complexity Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

High Complexity Tasks (>7):
📍 #5 "Implement real-time sync" - Score: 9/10
   Factors: WebSocket complexity, state management, conflict resolution
   Recommendation: Expand into 5-7 subtasks
   Risks: Performance, data consistency

📍 #12 "Migrate database schema" - Score: 8/10
   Factors: Data migration, zero downtime, rollback strategy
   Recommendation: Expand into 4-5 subtasks
   Risks: Data loss, downtime

Medium Complexity Tasks (5-7):
📍 #23 "Add export functionality" - Score: 6/10
   Consider expansion if the timeline is tight

Low Complexity Tasks (<5):
✅ 15 tasks - No expansion needed

Summary:
- Expand immediately: 2 tasks
- Consider expanding: 5 tasks
- Keep as-is: 15 tasks
```

## Actionable Output

For each high-complexity task:

1. Complexity score with reasoning
2. Specific expansion suggestions
3. Risk mitigation approaches
4. Recommended subtask structure

## Integration

Results are:

- Saved to `.taskmaster/reports/complexity-analysis.md`
- Used by the expand command
- Inform sprint planning
- Guide resource allocation

## Next Steps

After analysis:

```
/project:tm/expand 5          # Expand specific task
/project:tm/expand/all        # Expand all recommended
/project:tm/complexity-report # View detailed report
```
93 .claude/commands/tm/clear-subtasks/all.md Normal file
@@ -0,0 +1,93 @@
Clear all subtasks from all tasks globally.

## Global Subtask Clearing

Removes all subtasks across the entire project. Use with extreme caution.

## Execution

```bash
task-master clear-subtasks --all
```

## Pre-Clear Analysis

1. **Project-Wide Summary**

   ```
   Global Subtask Summary
   ━━━━━━━━━━━━━━━━━━━━
   Total parent tasks: 12
   Total subtasks: 47
   - Completed: 15
   - In-progress: 8
   - Pending: 24

   Work at risk: ~120 hours
   ```

2. **Critical Warnings**
   - In-progress subtasks that will lose work
   - Completed subtasks with valuable history
   - Complex dependency chains
   - Integration test results

## Double Confirmation

```
⚠️ DESTRUCTIVE OPERATION WARNING ⚠️
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This will remove ALL 47 subtasks from your project
Including 8 in-progress and 15 completed subtasks

This action CANNOT be undone

Type 'CLEAR ALL SUBTASKS' to confirm:
```

## Smart Safeguards

- Require an explicit confirmation phrase
- Create an automatic backup
- Log all removed data
- Offer to export first
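The automatic-backup safeguard could be as simple as writing the subtasks to a dated JSON file before deletion, matching the `subtasks-YYYYMMDD.json` name shown in the report below. The function and path layout are illustrative assumptions:

```python
import datetime
import json
import pathlib

def backup_subtasks(subtasks, backup_dir=".taskmaster/backup"):
    """Write all subtasks to a dated JSON file before a destructive clear."""
    pathlib.Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().strftime("%Y%m%d")
    path = pathlib.Path(backup_dir) / f"subtasks-{stamp}.json"
    path.write_text(json.dumps(subtasks, indent=2))
    return str(path)
```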
## Use Cases

Valid reasons for a global clear:

- Project restructuring
- A major pivot in approach
- Starting a fresh breakdown
- Switching to a different task organization

## Process

1. Full project analysis
2. Create a backup file
3. Show the detailed impact
4. Require confirmation
5. Execute the removal
6. Generate a summary report

## Alternative Suggestions

Before clearing all:

- Export subtasks to a file
- Clear only pending subtasks
- Clear by task category
- Archive instead of delete

## Post-Clear Report

```
Global Subtask Clear Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Removed: 47 subtasks from 12 tasks
Backup saved: .taskmaster/backup/subtasks-20240115.json
Parent tasks updated: 12
Time estimates adjusted: Yes

Next steps:
- Review updated task list
- Re-expand complex tasks as needed
- Check project timeline
```
86 .claude/commands/tm/clear-subtasks/index.md Normal file
@@ -0,0 +1,86 @@
Clear all subtasks from a specific task.

Arguments: $ARGUMENTS (task ID)

Remove all subtasks from a parent task at once.

## Clearing Subtasks

Bulk removal of all subtasks from a parent task.

## Execution

```bash
task-master clear-subtasks --id=<task-id>
```

## Pre-Clear Analysis

1. **Subtask Summary**
   - Number of subtasks
   - Completion status of each
   - Work already done
   - Dependencies affected

2. **Impact Assessment**
   - Data that will be lost
   - Dependencies to be removed
   - Effect on the project timeline
   - Parent task implications

## Confirmation Required

```
Clear Subtasks Confirmation
━━━━━━━━━━━━━━━━━━━━━━━━━
Parent Task: #5 "Implement user authentication"
Subtasks to remove: 4
- #5.1 "Setup auth framework" (done)
- #5.2 "Create login form" (in-progress)
- #5.3 "Add validation" (pending)
- #5.4 "Write tests" (pending)

⚠️ This will permanently delete all subtask data
Continue? (y/n)
```

## Smart Features

- Option to convert subtasks to standalone tasks
- Back up task data before clearing
- Preserve completed work history
- Update the parent task appropriately

## Process

1. List all subtasks for confirmation
2. Check for in-progress work
3. Remove all subtasks
4. Update the parent task
5. Clean up dependencies

## Alternative Options

Suggest alternatives:

- Convert important subtasks to tasks
- Keep completed subtasks
- Archive instead of delete
- Export subtask data first

## Post-Clear

- Show the updated parent task
- Recalculate time estimates
- Update task complexity
- Suggest next steps

## Example

```
/project:tm/clear-subtasks 5
→ Found 4 subtasks to remove
→ Warning: Subtask #5.2 is in-progress
→ Cleared all subtasks from task #5
→ Updated parent task estimates
→ Suggestion: Consider re-expanding with a better breakdown
```
117 .claude/commands/tm/complexity-report/index.md Normal file
@@ -0,0 +1,117 @@
Display the task complexity analysis report.

Arguments: $ARGUMENTS

View the detailed complexity analysis generated by the analyze-complexity command.

## Viewing the Complexity Report

Shows a comprehensive task complexity analysis with actionable insights.

## Execution

```bash
task-master complexity-report [--file=<path>]
```

## Report Location

Default: `.taskmaster/reports/complexity-analysis.md`
Custom: specify with the `--file` parameter

## Report Contents

### 1. **Executive Summary**

```
Complexity Analysis Summary
━━━━━━━━━━━━━━━━━━━━━━━━
Analysis Date: 2024-01-15
Tasks Analyzed: 32
High Complexity: 5 (16%)
Medium Complexity: 12 (37%)
Low Complexity: 15 (47%)

Critical Findings:
- 5 tasks need immediate expansion
- 3 tasks have high technical risk
- 2 tasks block the critical path
```

### 2. **Detailed Task Analysis**

For each complex task:

- Complexity score breakdown
- Contributing factors
- Specific risks identified
- Expansion recommendations
- Similar completed tasks

### 3. **Risk Matrix**

Visual representation:

```
Risk vs Complexity Matrix
━━━━━━━━━━━━━━━━━━━━━━━
High Risk | #5(9) #12(8)  | #23(6)
Med Risk  | #34(7)        | #45(5) #67(5)
Low Risk  | #78(8)        | [15 tasks]
          | High Complex  | Med Complex
```

### 4. **Recommendations**

**Immediate Actions:**

1. Expand task #5 - critical path + high complexity
2. Expand task #12 - high risk + dependencies
3. Review task #34 - consider splitting

**Sprint Planning:**

- Don't schedule multiple high-complexity tasks together
- Ensure expertise is available for complex tasks
- Build in buffer time for unknowns

## Interactive Features

When viewing the report:

1. **Quick Actions**
   - Press 'e' to expand a task
   - Press 'd' for task details
   - Press 'r' to refresh the analysis

2. **Filtering**
   - View by complexity level
   - Filter by risk factors
   - Show only actionable items

3. **Export Options**
   - Markdown format
   - CSV for spreadsheets
   - JSON for tools

## Report Intelligence

- Compares with historical data
- Shows complexity trends
- Identifies patterns
- Suggests process improvements

## Integration

Use the report for:

- Sprint planning sessions
- Resource allocation
- Risk assessment
- Team discussions
- Client updates

## Example Usage

```
/project:tm/complexity-report
→ Opens latest analysis

/project:tm/complexity-report --file=archived/2024-01-01.md
→ View historical analysis

After viewing:
/project:tm/expand 5
→ Expand high-complexity task
```
51 .claude/commands/tm/expand/all.md Normal file
@@ -0,0 +1,51 @@
Expand all pending tasks that need subtasks.

## Bulk Task Expansion

Intelligently expands all tasks that would benefit from breakdown.

## Execution

```bash
task-master expand --all
```

## Smart Selection

Only expands tasks that:

- Are marked as pending
- Have high complexity (>5)
- Lack existing subtasks
- Would benefit from breakdown
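The first three selection criteria are mechanical filters; a sketch over an assumed task-dict shape (the fourth, "would benefit from breakdown", is the AI judgment and is omitted here):

```python
def expansion_candidates(tasks):
    """Select pending, high-complexity tasks with no existing subtasks.

    `tasks` is a list of dicts with "id", "status", "complexity",
    and "subtasks" keys; the shape is illustrative.
    """
    return [
        t["id"] for t in tasks
        if t["status"] == "pending"
        and t["complexity"] > 5
        and not t["subtasks"]
    ]
```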
## Expansion Process

1. **Analysis Phase**
   - Identify expansion candidates
   - Group related tasks
   - Plan the expansion strategy

2. **Batch Processing**
   - Expand tasks in logical order
   - Maintain consistency
   - Preserve relationships
   - Optimize for parallelism

3. **Quality Control**
   - Ensure subtask quality
   - Avoid over-decomposition
   - Maintain task coherence
   - Update dependencies

## Options

- Add `force` to expand all tasks regardless of complexity
- Add `research` for enhanced AI analysis

## Results

After bulk expansion:

- Summary of tasks expanded
- New subtask count
- Updated complexity metrics
- Suggested task order
49 .claude/commands/tm/expand/index.md Normal file
@@ -0,0 +1,49 @@
Break down a complex task into subtasks.

Arguments: $ARGUMENTS (task ID)

## Intelligent Task Expansion

Analyzes a task and creates detailed subtasks for better manageability.

## Execution

```bash
task-master expand --id=$ARGUMENTS
```

## Expansion Process

1. **Task Analysis**
   - Review task complexity
   - Identify components
   - Detect technical challenges
   - Estimate time requirements

2. **Subtask Generation**
   - Typically creates 3-7 subtasks
   - Each subtask takes 1-4 hours
   - Logical implementation order
   - Clear acceptance criteria

3. **Smart Breakdown**
   - Setup/configuration tasks
   - Core implementation
   - Testing components
   - Integration steps
   - Documentation updates

## Enhanced Features

Based on the task type:

- **Feature**: Setup → Implement → Test → Integrate
- **Bug Fix**: Reproduce → Diagnose → Fix → Verify
- **Refactor**: Analyze → Plan → Refactor → Validate
## Post-Expansion
|
||||
|
||||
After expansion:
|
||||
1. Show subtask hierarchy
|
||||
2. Update time estimates
|
||||
3. Suggest implementation order
|
||||
4. Highlight critical path
|
||||
81 .claude/commands/tm/fix-dependencies/index.md Normal file
@@ -0,0 +1,81 @@

Automatically fix dependency issues found during validation.

## Automatic Dependency Repair

Intelligently fixes common dependency problems while preserving project logic.

## Execution

```bash
task-master fix-dependencies
```

## What Gets Fixed

### 1. **Auto-Fixable Issues**
- Remove references to deleted tasks
- Break simple circular dependencies
- Remove self-dependencies
- Clean up duplicate dependencies

### 2. **Smart Resolutions**
- Reorder dependencies to maintain logic
- Suggest task merging for over-dependent tasks
- Flatten unnecessary dependency chains
- Remove redundant transitive dependencies

### 3. **Manual Review Required**
- Complex circular dependencies
- Critical path modifications
- Business logic dependencies
- High-impact changes

## Fix Process

1. **Analysis Phase**
   - Run validation check
   - Categorize issues by type
   - Determine fix strategy

2. **Execution Phase**
   - Apply automatic fixes
   - Log all changes made
   - Preserve task relationships

3. **Verification Phase**
   - Re-validate after fixes
   - Show before/after comparison
   - Highlight manual fixes needed

## Smart Features

- Preserves intended task flow
- Minimal disruption approach
- Creates fix history/log
- Suggests manual interventions

## Output Example

```
Dependency Auto-Fix Report
━━━━━━━━━━━━━━━━━━━━━━━━
Fixed Automatically:
✅ Removed 2 references to deleted tasks
✅ Resolved 1 self-dependency
✅ Cleaned 3 redundant dependencies

Manual Review Needed:
⚠️ Complex circular dependency: #12 → #15 → #18 → #12
   Suggestion: Make #15 not depend on #12
⚠️ Task #45 has 8 dependencies
   Suggestion: Break into subtasks

Run '/project:tm/validate-dependencies' to verify fixes
```

## Safety

- Preview mode available
- Rollback capability
- Change logging
- No data loss
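At the CLI level, the three-phase process above is a validate–fix–validate loop; a minimal sketch that only echoes the commands it would run (command names come from this file):

```shell
# Echo the validate -> fix -> re-validate sequence rather than executing it,
# so the plan can be reviewed before running against a real project.
fix_cycle() {
  for step in validate-dependencies fix-dependencies validate-dependencies; do
    echo "task-master $step"
  done
}

fix_cycle
```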
121 .claude/commands/tm/generate/index.md Normal file
@@ -0,0 +1,121 @@

Generate individual task files from tasks.json.

## Task File Generation

Creates separate markdown files for each task, perfect for AI agents or documentation.

## Execution

```bash
task-master generate
```

## What It Creates

For each task, generates a file like `task_001.txt`:

```
Task ID: 1
Title: Implement user authentication
Status: pending
Priority: high
Dependencies: []
Created: 2024-01-15
Complexity: 7

## Description
Create a secure user authentication system with login, logout, and session management.

## Details
- Use JWT tokens for session management
- Implement secure password hashing
- Add remember me functionality
- Include password reset flow

## Test Strategy
- Unit tests for auth functions
- Integration tests for login flow
- Security testing for vulnerabilities
- Performance tests for concurrent logins

## Subtasks
1.1 Setup authentication framework (pending)
1.2 Create login endpoints (pending)
1.3 Implement session management (pending)
1.4 Add password reset (pending)
```

## File Organization

Creates structure:
```
.taskmaster/
└── tasks/
    ├── task_001.txt
    ├── task_002.txt
    ├── task_003.txt
    └── ...
```

## Smart Features

1. **Consistent Formatting**
   - Standardized structure
   - Clear sections
   - AI-readable format
   - Markdown compatible

2. **Contextual Information**
   - Full task details
   - Related task references
   - Progress indicators
   - Implementation notes

3. **Incremental Updates**
   - Only regenerate changed tasks
   - Preserve custom additions
   - Track generation timestamp
   - Version control friendly

## Use Cases

- **AI Context**: Provide task context to AI assistants
- **Documentation**: Standalone task documentation
- **Archival**: Task history preservation
- **Sharing**: Send specific tasks to team members
- **Review**: Easier task review process

## Generation Options

Based on arguments:
- Filter by status
- Include/exclude completed
- Custom templates
- Different formats

## Post-Generation

```
Task File Generation Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━
Generated: 45 task files
Location: .taskmaster/tasks/
Total size: 156 KB

New files: 5
Updated files: 12
Unchanged: 28

Ready for:
- AI agent consumption
- Version control
- Team distribution
```

## Integration Benefits

- Git-trackable task history
- Easy task sharing
- AI tool compatibility
- Offline task access
- Backup redundancy
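Because each generated file carries a `Status:` line in the format shown above, ordinary text tools can filter them. A sketch against a simulated output directory (the real files would live under `.taskmaster/tasks/`; the temp dir and file contents here are illustrative):

```shell
# Simulate two generated task files, then list only the pending ones.
dir=$(mktemp -d)
printf 'Task ID: 1\nStatus: pending\n' > "$dir/task_001.txt"
printf 'Task ID: 2\nStatus: done\n'    > "$dir/task_002.txt"

# grep -l prints the names of files whose contents match.
grep -l 'Status: pending' "$dir"/task_*.txt
```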
81 .claude/commands/tm/help.md Normal file
@@ -0,0 +1,81 @@

Show help for Task Master commands.

Arguments: $ARGUMENTS

Display help for Task Master commands. If arguments are provided, show help for that specific command.

## Task Master Command Help

### Quick Navigation

Type `/project:tm/` and use tab completion to explore all commands.

### Command Categories

#### 🚀 Setup & Installation
- `/project:tm/setup/install` - Comprehensive installation guide
- `/project:tm/setup/quick-install` - One-line global install

#### 📋 Project Setup
- `/project:tm/init` - Initialize new project
- `/project:tm/init/quick` - Quick setup with auto-confirm
- `/project:tm/models` - View AI configuration
- `/project:tm/models/setup` - Configure AI providers

#### 🎯 Task Generation
- `/project:tm/parse-prd` - Generate tasks from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files

#### 📝 Task Management
- `/project:tm/list` - List tasks (natural language filters)
- `/project:tm/show <id>` - Display task details
- `/project:tm/add-task` - Create new task
- `/project:tm/update` - Update tasks naturally
- `/project:tm/next` - Get next task recommendation

#### 🔄 Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`

#### 🔍 Analysis & Breakdown
- `/project:tm/analyze-complexity` - Analyze task complexity
- `/project:tm/expand <id>` - Break down complex task
- `/project:tm/expand/all` - Expand all eligible tasks

#### 🔗 Dependencies
- `/project:tm/add-dependency` - Add task dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check for issues

#### 🤖 Workflows
- `/project:tm/workflows/smart-flow` - Intelligent workflows
- `/project:tm/workflows/pipeline` - Command chaining
- `/project:tm/workflows/auto-implement` - Auto-implementation

#### 📊 Utilities
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/status` - Project dashboard
- `/project:tm/learn` - Interactive learning

### Natural Language Examples

```
/project:tm/list pending high priority
/project:tm/update mark all API tasks as done
/project:tm/add-task create login system with OAuth
/project:tm/show current
```

### Getting Started

1. Install: `/project:tm/setup/quick-install`
2. Initialize: `/project:tm/init/quick`
3. Learn: `/project:tm/learn start`
4. Work: `/project:tm/workflows/smart-flow`

For detailed command info: `/project:tm/help <command-name>`
130 .claude/commands/tm/index.md Normal file
@@ -0,0 +1,130 @@

# Task Master Command Reference

Comprehensive command structure for Task Master integration with Claude Code.

## Command Organization

Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.

## Project Setup & Configuration

### `/project:tm/init`
- `index` - Initialize new project (handles PRD files intelligently)
- `quick` - Quick setup with auto-confirmation (-y flag)

### `/project:tm/models`
- `index` - View current AI model configuration
- `setup` - Interactive model configuration
- `set-main` - Set primary generation model
- `set-research` - Set research model
- `set-fallback` - Set fallback model

## Task Generation

### `/project:tm/parse-prd`
- `index` - Generate tasks from PRD document
- `with-research` - Enhanced parsing with research mode

### `/project:tm/generate`
- Create individual task files from tasks.json

## Task Management

### `/project:tm/list`
- `index` - Smart listing with natural language filters
- `with-subtasks` - Include subtasks in hierarchical view
- `by-status` - Filter by specific status

### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
- `to-review` - Submit for review
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task

### `/project:tm/sync-readme`
- Export tasks to README.md with formatting

### `/project:tm/update`
- `index` - Update tasks with natural language
- `from-id` - Update multiple tasks from a starting point
- `single` - Update specific task

### `/project:tm/add-task`
- `index` - Add new task with AI assistance

### `/project:tm/remove-task`
- `index` - Remove task with confirmation

## Subtask Management

### `/project:tm/add-subtask`
- `index` - Add new subtask to parent
- `from-task` - Convert existing task to subtask

### `/project:tm/remove-subtask`
- Remove subtask (with optional conversion)

### `/project:tm/clear-subtasks`
- `index` - Clear subtasks from specific task
- `all` - Clear all subtasks globally

## Task Analysis & Breakdown

### `/project:tm/analyze-complexity`
- Analyze and generate expansion recommendations

### `/project:tm/complexity-report`
- Display complexity analysis report

### `/project:tm/expand`
- `index` - Break down specific task
- `all` - Expand all eligible tasks
- `with-research` - Enhanced expansion

## Task Navigation

### `/project:tm/next`
- Intelligent next task recommendation

### `/project:tm/show`
- Display detailed task information

### `/project:tm/status`
- Comprehensive project dashboard

## Dependency Management

### `/project:tm/add-dependency`
- Add task dependency

### `/project:tm/remove-dependency`
- Remove task dependency

### `/project:tm/validate-dependencies`
- Check for dependency issues

### `/project:tm/fix-dependencies`
- Automatically fix dependency problems

## Usage Patterns

### Natural Language
Most commands accept natural language arguments:
```
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```

### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```

### Smart Defaults
Commands provide intelligent defaults and suggestions based on context.
50 .claude/commands/tm/init/index.md Normal file
@@ -0,0 +1,50 @@

Initialize a new Task Master project.

Arguments: $ARGUMENTS

Parse arguments to determine initialization preferences.

## Initialization Process

1. **Parse Arguments**
   - PRD file path (if provided)
   - Project name
   - Auto-confirm flag (-y)

2. **Project Setup**
   ```bash
   task-master init
   ```

3. **Smart Initialization**
   - Detect existing project files
   - Suggest project name from directory
   - Check for git repository
   - Verify AI provider configuration

## Configuration Options

Based on arguments:
- `quick` / `-y` → Skip confirmations
- `<file.md>` → Use as PRD after init
- `--name=<name>` → Set project name
- `--description=<desc>` → Set description

## Post-Initialization

After successful init:
1. Show project structure created
2. Verify AI models configured
3. Suggest next steps:
   - Parse PRD if available
   - Configure AI providers
   - Set up git hooks
   - Create first tasks

## Integration

If PRD file provided:
```
/project:tm/init my-prd.md
→ Automatically runs parse-prd after init
```
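The Configuration Options above amount to a mapping from slash-command arguments to `task-master init` flags; a hedged sketch of that mapping (the `-y`, `--name`, and `--description` spellings come from this file):

```shell
# Build a task-master init invocation from parsed slash-command arguments.
build_init() {
  local cmd="task-master init"
  for arg in "$@"; do
    case "$arg" in
      quick|-y) cmd="$cmd -y" ;;                       # skip confirmations
      --name=*|--description=*) cmd="$cmd $arg" ;;     # pass through as-is
    esac
  done
  echo "$cmd"
}

build_init quick --name=my-app
# → task-master init -y --name=my-app
```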
46 .claude/commands/tm/init/quick.md Normal file
@@ -0,0 +1,46 @@

Quick initialization with auto-confirmation.

Arguments: $ARGUMENTS

Initialize a Task Master project without prompts, accepting all defaults.

## Quick Setup

```bash
task-master init -y
```

## What It Does

1. Creates `.taskmaster/` directory structure
2. Initializes empty `tasks.json`
3. Sets up default configuration
4. Uses directory name as project name
5. Skips all confirmation prompts

## Smart Defaults

- Project name: Current directory name
- Description: "Task Master Project"
- Model config: Existing environment vars
- Task structure: Standard format

## Next Steps

After quick init:
1. Configure AI models if needed:
   ```
   /project:tm/models/setup
   ```

2. Parse PRD if available:
   ```
   /project:tm/parse-prd <file>
   ```

3. Or create first task:
   ```
   /project:tm/add-task create initial setup
   ```

Perfect for rapid project setup!
103 .claude/commands/tm/learn.md Normal file
@@ -0,0 +1,103 @@

Learn about Task Master capabilities through interactive exploration.

Arguments: $ARGUMENTS

## Interactive Task Master Learning

Based on your input, I'll help you discover capabilities:

### 1. **What are you trying to do?**

If $ARGUMENTS contains:
- "start" / "begin" → Show project initialization workflows
- "manage" / "organize" → Show task management commands
- "automate" / "auto" → Show automation workflows
- "analyze" / "report" → Show analysis tools
- "fix" / "problem" → Show troubleshooting commands
- "fast" / "quick" → Show efficiency shortcuts

### 2. **Intelligent Suggestions**

Based on your project state:

**No tasks yet?**
```
You'll want to start with:
1. /project:task-master:init <prd-file>
   → Creates tasks from requirements

2. /project:task-master:parse-prd <file>
   → Alternative task generation

Try: /project:task-master:init demo-prd.md
```

**Have tasks?**
Let me analyze what you might need...
- Many pending tasks? → Learn sprint planning
- Complex tasks? → Learn task expansion
- Daily work? → Learn workflow automation

### 3. **Command Discovery**

**By Category:**
- 📋 Task Management: list, show, add, update, complete
- 🔄 Workflows: auto-implement, sprint-plan, daily-standup
- 🛠️ Utilities: check-health, complexity-report, sync-memory
- 🔍 Analysis: validate-deps, show dependencies

**By Scenario:**
- "I want to see what to work on" → `/project:task-master:next`
- "I need to break this down" → `/project:task-master:expand <id>`
- "Show me everything" → `/project:task-master:status`
- "Just do it for me" → `/project:workflows:auto-implement`

### 4. **Power User Patterns**

**Command Chaining:**
```
/project:task-master:next
/project:task-master:start <id>
/project:workflows:auto-implement
```

**Smart Filters:**
```
/project:task-master:list pending high
/project:task-master:list blocked
/project:task-master:list 1-5 tree
```

**Automation:**
```
/project:workflows:pipeline init → expand-all → sprint-plan
```

### 5. **Learning Path**

Based on your experience level:

**Beginner Path:**
1. init → Create project
2. status → Understand state
3. next → Find work
4. complete → Finish task

**Intermediate Path:**
1. expand → Break down complex tasks
2. sprint-plan → Organize work
3. complexity-report → Understand difficulty
4. validate-deps → Ensure consistency

**Advanced Path:**
1. pipeline → Chain operations
2. smart-flow → Context-aware automation
3. Custom commands → Extend the system

### 6. **Try This Now**

Based on what you asked about, try:
[Specific command suggestion based on $ARGUMENTS]

Want to learn more about a specific command?
Type: /project:help <command-name>
39 .claude/commands/tm/list/by-status.md Normal file
@@ -0,0 +1,39 @@

List tasks filtered by a specific status.

Arguments: $ARGUMENTS

Parse the status from arguments and list only tasks matching that status.

## Status Options
- `pending` - Not yet started
- `in-progress` - Currently being worked on
- `done` - Completed
- `review` - Awaiting review
- `deferred` - Postponed
- `cancelled` - Cancelled

## Execution

Based on $ARGUMENTS, run:
```bash
task-master list --status=$ARGUMENTS
```

## Enhanced Display

For the filtered results:
- Group by priority within the status
- Show time in current status
- Highlight tasks approaching deadlines
- Display blockers and dependencies
- Suggest next actions for each status group

## Intelligent Insights

Based on the status filter:
- **Pending**: Show recommended start order
- **In-Progress**: Display idle time warnings
- **Done**: Show newly unblocked tasks
- **Review**: Indicate review duration
- **Deferred**: Show reactivation criteria
- **Cancelled**: Display impact analysis
43 .claude/commands/tm/list/index.md Normal file
@@ -0,0 +1,43 @@

List tasks with intelligent argument parsing.

Parse arguments to determine filters and display options:
- Status: pending, in-progress, done, review, deferred, cancelled
- Priority: high, medium, low (or priority:high)
- Special: subtasks, tree, dependencies, blocked
- IDs: Direct numbers (e.g., "1,3,5" or "1-5")
- Complex: "pending high" = pending AND high priority

Arguments: $ARGUMENTS

Let me parse your request intelligently:

1. **Detect Filter Intent**
   - If arguments contain status keywords → filter by status
   - If arguments contain priority → filter by priority
   - If arguments contain "subtasks" → include subtasks
   - If arguments contain "tree" → hierarchical view
   - If arguments contain numbers → show specific tasks
   - If arguments contain "blocked" → show blocked tasks only

2. **Smart Combinations**
   Examples of what I understand:
   - "pending high" → pending tasks with high priority
   - "done today" → tasks completed today
   - "blocked" → tasks with unmet dependencies
   - "1-5" → tasks 1 through 5
   - "subtasks tree" → hierarchical view with subtasks

3. **Execute Appropriate Query**
   Based on parsed intent, run the most specific task-master command

4. **Enhanced Display**
   - Group by relevant criteria
   - Show most important information first
   - Use visual indicators for quick scanning
   - Include relevant metrics

5. **Intelligent Suggestions**
   Based on what you're viewing, suggest next actions:
   - Many pending? → Suggest priority order
   - Many blocked? → Show dependency resolution
   - Looking at specific tasks? → Show related tasks
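The filter-intent detection above can be sketched as a keyword-to-flag mapper; `--status` and `--with-subtasks` appear elsewhere in these files, while the `--priority` flag spelling is an assumption:

```shell
# Map natural-language keywords to task-master list flags.
build_list() {
  local cmd="task-master list"
  for word in $1; do
    case "$word" in
      pending|in-progress|done|review|deferred|cancelled)
        cmd="$cmd --status=$word" ;;
      high|medium|low)
        cmd="$cmd --priority=$word" ;;   # flag spelling assumed
      subtasks)
        cmd="$cmd --with-subtasks" ;;
    esac
  done
  echo "$cmd"
}

build_list "pending high"
# → task-master list --status=pending --priority=high
```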
29 .claude/commands/tm/list/with-subtasks.md Normal file
@@ -0,0 +1,29 @@

List all tasks including their subtasks in a hierarchical view.

This command shows all tasks with their nested subtasks, providing a complete project overview.

## Execution

Run the Task Master list command with the subtasks flag:
```bash
task-master list --with-subtasks
```

## Enhanced Display

I'll organize the output to show:
- Parent tasks with clear indicators
- Nested subtasks with proper indentation
- Status badges for quick scanning
- Dependencies and blockers highlighted
- Progress indicators for tasks with subtasks

## Smart Filtering

Based on the task hierarchy:
- Show completion percentage for parent tasks
- Highlight blocked subtask chains
- Group by functional areas
- Indicate critical path items

This gives you a complete tree view of your project structure.
51 .claude/commands/tm/models/index.md Normal file
@@ -0,0 +1,51 @@

View current AI model configuration.

## Model Configuration Display

Shows the currently configured AI providers and models for Task Master.

## Execution

```bash
task-master models
```

## Information Displayed

1. **Main Provider**
   - Model ID and name
   - API key status (configured/missing)
   - Usage: Primary task generation

2. **Research Provider**
   - Model ID and name
   - API key status
   - Usage: Enhanced research mode

3. **Fallback Provider**
   - Model ID and name
   - API key status
   - Usage: Backup when main fails

## Visual Status

```
Task Master AI Model Configuration
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Main:     ✅ claude-3-5-sonnet (configured)
Research: ✅ perplexity-sonar (configured)
Fallback: ⚠️ Not configured (optional)

Available Models:
- claude-3-5-sonnet
- gpt-4-turbo
- gpt-3.5-turbo
- perplexity-sonar
```

## Next Actions

Based on configuration:
- If missing API keys → Suggest setup
- If no research model → Explain benefits
- If all configured → Show usage tips
51 .claude/commands/tm/models/setup.md Normal file
@@ -0,0 +1,51 @@

Run interactive setup to configure AI models.

## Interactive Model Configuration

Guides you through setting up AI providers for Task Master.

## Execution

```bash
task-master models --setup
```

## Setup Process

1. **Environment Check**
   - Detect existing API keys
   - Show current configuration
   - Identify missing providers

2. **Provider Selection**
   - Choose main provider (required)
   - Select research provider (recommended)
   - Configure fallback (optional)

3. **API Key Configuration**
   - Prompt for missing keys
   - Validate key format
   - Test connectivity
   - Save configuration

## Smart Recommendations

Based on your needs:
- **For best results**: Claude + Perplexity
- **Budget conscious**: GPT-3.5 + Perplexity
- **Maximum capability**: GPT-4 + Perplexity + Claude fallback

## Configuration Storage

Keys can be stored in:
1. Environment variables (recommended)
2. `.env` file in project
3. Global `.taskmaster/config`

## Post-Setup

After configuration:
- Test each provider
- Show usage examples
- Suggest next steps
- Verify parse-prd works
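For storage option 2 above (a project `.env` file), entries look roughly like the following; the exact key names depend on the providers you chose, so treat these names and value formats as assumptions to verify against your provider setup:

```shell
# Example .env entries (key names and value prefixes assumed; do not commit real keys)
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
PERPLEXITY_API_KEY=pplx-xxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxx
```

Environment variables with the same names take precedence in most setups, which is why option 1 is the recommended storage.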
66 .claude/commands/tm/next/index.md Normal file
@@ -0,0 +1,66 @@

Intelligently determine and prepare the next action based on comprehensive context.

This enhanced version of 'next' considers:
- Current task states
- Recent activity
- Time constraints
- Dependencies
- Your working patterns

Arguments: $ARGUMENTS

## Intelligent Next Action

### 1. **Context Gathering**
Let me analyze the current situation:
- Active tasks (in-progress)
- Recently completed tasks
- Blocked tasks
- Time since last activity
- Arguments provided: $ARGUMENTS

### 2. **Smart Decision Tree**

**If you have an in-progress task:**
- Has it been idle > 2 hours? → Suggest resuming or switching
- Near completion? → Show remaining steps
- Blocked? → Find alternative task

**If no in-progress tasks:**
- Unblocked high-priority tasks? → Start highest
- Complex tasks need breakdown? → Suggest expansion
- All tasks blocked? → Show dependency resolution

**Special arguments handling:**
- "quick" → Find task < 2 hours
- "easy" → Find low complexity task
- "important" → Find high priority regardless of complexity
- "continue" → Resume last worked task

### 3. **Preparation Workflow**

Based on selected task:
1. Show full context and history
2. Set up development environment
3. Run relevant tests
4. Open related files
5. Show similar completed tasks
6. Estimate completion time

### 4. **Alternative Suggestions**

Always provide options:
- Primary recommendation
- Quick alternative (< 1 hour)
- Strategic option (unblocks most tasks)
- Learning option (new technology/skill)

### 5. **Workflow Integration**

Seamlessly connect to:
- `/project:task-master:start [selected]`
- `/project:workflows:auto-implement`
- `/project:task-master:expand` (if complex)
- `/project:utils:complexity-report` (if unsure)

The goal: Zero friction from decision to implementation.
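The special-arguments handling described above is essentially a small dispatch table; a minimal sketch (the strategy names are illustrative labels for the behaviors listed in this file, not CLI flags):

```shell
# Map a special argument to a task-selection strategy label.
next_strategy() {
  case "$1" in
    quick)     echo "shortest-unblocked-task" ;;   # task < 2 hours
    easy)      echo "lowest-complexity-task" ;;
    important) echo "highest-priority-task" ;;
    continue)  echo "resume-last-worked-task" ;;
    *)         echo "default-recommendation" ;;
  esac
}

next_strategy quick
# → shortest-unblocked-task
```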
49 .claude/commands/tm/parse-prd/index.md Normal file
@@ -0,0 +1,49 @@

Parse a PRD document to generate tasks.

Arguments: $ARGUMENTS (PRD file path)

## Intelligent PRD Parsing

Analyzes your requirements document and generates a complete task breakdown.

## Execution

```bash
task-master parse-prd --input=$ARGUMENTS
```

## Parsing Process

1. **Document Analysis**
   - Extract key requirements
   - Identify technical components
   - Detect dependencies
   - Estimate complexity

2. **Task Generation**
   - Create 10-15 tasks by default
   - Include implementation tasks
   - Add testing tasks
   - Include documentation tasks
   - Set logical dependencies

3. **Smart Enhancements**
   - Group related functionality
   - Set appropriate priorities
   - Add acceptance criteria
   - Include test strategies

## Options

Parse arguments for modifiers:
- Number after filename → `--num-tasks`
- `research` → Use research mode
- `comprehensive` → Generate more tasks

## Post-Generation

After parsing:
1. Display task summary
2. Show dependency graph
3. Suggest task expansion for complex items
4. Recommend sprint planning
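The Options list above can be sketched as an argument-to-flag mapping. `--input`, `--research`, and `--num-tasks` come from these files; the task count used for `comprehensive` is an assumption:

```shell
# Build a parse-prd invocation from a PRD path plus modifier arguments.
build_parse_prd() {
  local file="$1"; shift
  local cmd="task-master parse-prd --input=$file"
  for arg in "$@"; do
    case "$arg" in
      research)      cmd="$cmd --research" ;;
      comprehensive) cmd="$cmd --num-tasks=20" ;;  # count assumed
      [0-9]*)        cmd="$cmd --num-tasks=$arg" ;;
    esac
  done
  echo "$cmd"
}

build_parse_prd prd.md 12 research
# → task-master parse-prd --input=prd.md --num-tasks=12 --research
```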
48 .claude/commands/tm/parse-prd/with-research.md Normal file
@@ -0,0 +1,48 @@
Parse a PRD with enhanced research mode for better task generation.

Arguments: $ARGUMENTS (PRD file path)

## Research-Enhanced Parsing

Uses the research AI provider (typically Perplexity) for more comprehensive task generation with current best practices.

## Execution

```bash
task-master parse-prd --input=$ARGUMENTS --research
```

## Research Benefits

1. **Current Best Practices**
   - Latest framework patterns
   - Security considerations
   - Performance optimizations
   - Accessibility requirements

2. **Technical Deep Dive**
   - Implementation approaches
   - Library recommendations
   - Architecture patterns
   - Testing strategies

3. **Comprehensive Coverage**
   - Edge case consideration
   - Error handling tasks
   - Monitoring setup
   - Deployment tasks

## Enhanced Output

Research mode typically:
- Generates more detailed tasks
- Includes industry standards
- Adds compliance considerations
- Suggests modern tooling

## When to Use

- New technology domains
- Complex requirements
- Regulatory compliance needed
- Best practices crucial
62 .claude/commands/tm/remove-dependency/index.md Normal file
@@ -0,0 +1,62 @@
Remove a dependency between tasks.

Arguments: $ARGUMENTS

Parse the task IDs to remove the dependency relationship.

## Removing Dependencies

Removes a dependency relationship, potentially unblocking tasks.

## Argument Parsing

Parse natural language or IDs:
- "remove dependency between 5 and 3"
- "5 no longer needs 3"
- "unblock 5 from 3"
- "5 3" → remove dependency of 5 on 3
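One way to sketch the ID extraction above in shell: pull out the digits in order and treat the first as the task, the second as its dependency. This is an illustrative assumption, not the real parser, and `REQUEST` is a hypothetical stand-in for `$ARGUMENTS`.

```shell
# Illustrative sketch only: extract two task IDs from a natural-language
# request. Assumes the first number is the task, the second its dependency.
REQUEST="remove dependency between 5 and 3"
set -- $(echo "$REQUEST" | grep -oE '[0-9]+')
CMD="task-master remove-dependency --id=$1 --depends-on=$2"
echo "$CMD"
# → task-master remove-dependency --id=5 --depends-on=3
```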
## Execution

```bash
task-master remove-dependency --id=<task-id> --depends-on=<dependency-id>
```

## Pre-Removal Checks

1. **Verify dependency exists**
2. **Check impact on task flow**
3. **Warn if it breaks logical sequence**
4. **Show what will be unblocked**

## Smart Analysis

Before removing:
- Show why dependency might have existed
- Check if removal makes tasks executable
- Verify no critical path disruption
- Suggest alternative dependencies

## Post-Removal

After removing:
1. Show updated task status
2. List newly unblocked tasks
3. Update project timeline
4. Suggest next actions

## Safety Features

- Confirm if removing critical dependency
- Show tasks that become immediately actionable
- Warn about potential issues
- Keep removal history

## Example

```
/project:tm/remove-dependency 5 from 3
→ Removed: Task #5 no longer depends on #3
→ Task #5 is now UNBLOCKED and ready to start
→ Warning: Consider if #5 still needs #2 completed first
```
84 .claude/commands/tm/remove-subtask/index.md Normal file
@@ -0,0 +1,84 @@
Remove a subtask from its parent task.

Arguments: $ARGUMENTS

Parse the subtask ID to remove, with the option to convert it to a standalone task.

## Removing Subtasks

Remove a subtask and optionally convert it back to a standalone task.

## Argument Parsing

- "remove subtask 5.1"
- "delete 5.1"
- "convert 5.1 to task" → remove and convert
- "5.1 standalone" → convert to standalone

## Execution Options

### 1. Delete Subtask
```bash
task-master remove-subtask --id=<parentId.subtaskId>
```

### 2. Convert to Standalone
```bash
task-master remove-subtask --id=<parentId.subtaskId> --convert
```

## Pre-Removal Checks

1. **Validate Subtask**
   - Verify subtask exists
   - Check completion status
   - Review dependencies

2. **Impact Analysis**
   - Other subtasks that depend on it
   - Parent task implications
   - Data that will be lost

## Removal Process

### For Deletion:
1. Confirm if subtask has work done
2. Update parent task estimates
3. Remove subtask and its data
4. Clean up dependencies

### For Conversion:
1. Assign new standalone task ID
2. Preserve all task data
3. Update dependency references
4. Maintain task history

## Smart Features

- Warn if subtask is in-progress
- Show impact on parent task
- Preserve important data
- Update related estimates

## Example Flows

```
/project:tm/remove-subtask 5.1
→ Warning: Subtask #5.1 is in-progress
→ This will delete all subtask data
→ Parent task #5 will be updated
Confirm deletion? (y/n)

/project:tm/remove-subtask 5.1 convert
→ Converting subtask #5.1 to standalone task #89
→ Preserved: All task data and history
→ Updated: 2 dependency references
→ New task #89 is now independent
```

## Post-Removal

- Update parent task status
- Recalculate estimates
- Show updated hierarchy
- Suggest next actions
107 .claude/commands/tm/remove-task/index.md Normal file
@@ -0,0 +1,107 @@
Remove a task permanently from the project.

Arguments: $ARGUMENTS (task ID)

Delete a task and handle all its relationships properly.

## Task Removal

Permanently removes a task while maintaining project integrity.

## Argument Parsing

- "remove task 5"
- "delete 5"
- "5" → remove task 5
- Can include "-y" for auto-confirm

## Execution

```bash
task-master remove-task --id=<id> [-y]
```

## Pre-Removal Analysis

1. **Task Details**
   - Current status
   - Work completed
   - Time invested
   - Associated data

2. **Relationship Check**
   - Tasks that depend on this
   - Dependencies this task has
   - Subtasks that will be removed
   - Blocking implications

3. **Impact Assessment**
   ```
   Task Removal Impact
   ━━━━━━━━━━━━━━━━━━
   Task: #5 "Implement authentication" (in-progress)
   Status: 60% complete (~8 hours work)

   Will affect:
   - 3 tasks depend on this (will be blocked)
   - Has 4 subtasks (will be deleted)
   - Part of critical path

   ⚠️ This action cannot be undone
   ```

## Smart Warnings

- Warn if task is in-progress
- Show dependent tasks that will be blocked
- Highlight if part of critical path
- Note any completed work being lost

## Removal Process

1. Show comprehensive impact
2. Require confirmation (unless -y)
3. Update dependent task references
4. Remove task and subtasks
5. Clean up orphaned dependencies
6. Log removal with timestamp

## Alternative Actions

Suggest before deletion:
- Mark as cancelled instead
- Convert to documentation
- Archive task data
- Transfer work to another task

## Post-Removal

- List affected tasks
- Show broken dependencies
- Update project statistics
- Suggest dependency fixes
- Recalculate timeline

## Example Flows

```
/project:tm/remove-task 5
→ Task #5 is in-progress with 8 hours logged
→ 3 other tasks depend on this
→ Suggestion: Mark as cancelled instead?
Remove anyway? (y/n)

/project:tm/remove-task 5 -y
→ Removed: Task #5 and 4 subtasks
→ Updated: 3 task dependencies
→ Warning: Tasks #7, #8, #9 now have a missing dependency
→ Run /project:tm/fix-dependencies to resolve
```

## Safety Features

- Confirmation required
- Impact preview
- Removal logging
- Suggest alternatives
- No cascade delete of dependents
55 .claude/commands/tm/set-status/to-cancelled.md Normal file
@@ -0,0 +1,55 @@
Cancel a task permanently.

Arguments: $ARGUMENTS (task ID)

## Cancelling a Task

This status indicates a task is no longer needed and won't be completed.

## Valid Reasons for Cancellation

- Requirements changed
- Feature deprecated
- Duplicate of another task
- Strategic pivot
- Technical approach invalidated

## Pre-Cancellation Checks

1. Confirm no critical dependencies
2. Check for partial implementation
3. Verify cancellation rationale
4. Document lessons learned

## Execution

```bash
task-master set-status --id=$ARGUMENTS --status=cancelled
```

## Cancellation Impact

When cancelling:
1. **Dependency Updates**
   - Notify dependent tasks
   - Update project scope
   - Recalculate timelines

2. **Clean-up Actions**
   - Remove related branches
   - Archive any work done
   - Update documentation
   - Close related issues

3. **Learning Capture**
   - Document why cancelled
   - Note what was learned
   - Update estimation models
   - Prevent future duplicates

## Historical Preservation

- Keep for reference
- Tag with cancellation reason
- Link to replacement if any
- Maintain audit trail
47 .claude/commands/tm/set-status/to-deferred.md Normal file
@@ -0,0 +1,47 @@
Defer a task for later consideration.

Arguments: $ARGUMENTS (task ID)

## Deferring a Task

This status indicates a task is valid but not currently actionable or prioritized.

## Valid Reasons for Deferral

- Waiting for external dependencies
- Reprioritized for future sprint
- Blocked by technical limitations
- Resource constraints
- Strategic timing considerations

## Execution

```bash
task-master set-status --id=$ARGUMENTS --status=deferred
```

## Deferral Management

When deferring:
1. **Document Reason**
   - Capture why it's being deferred
   - Set reactivation criteria
   - Note any partial work completed

2. **Impact Analysis**
   - Check dependent tasks
   - Update project timeline
   - Notify affected stakeholders

3. **Future Planning**
   - Set review reminders
   - Tag for specific milestone
   - Preserve context for reactivation
   - Link to blocking issues

## Smart Tracking

- Monitor deferral duration
- Alert when criteria met
- Prevent scope creep
- Regular review cycles
44 .claude/commands/tm/set-status/to-done.md Normal file
@@ -0,0 +1,44 @@
Mark a task as completed.

Arguments: $ARGUMENTS (task ID)

## Completing a Task

This command validates task completion and updates project state intelligently.

## Pre-Completion Checks

1. Verify test strategy was followed
2. Check if all subtasks are complete
3. Validate acceptance criteria met
4. Ensure code is committed

## Execution

```bash
task-master set-status --id=$ARGUMENTS --status=done
```

## Post-Completion Actions

1. **Update Dependencies**
   - Identify newly unblocked tasks
   - Update sprint progress
   - Recalculate project timeline

2. **Documentation**
   - Generate completion summary
   - Update CLAUDE.md with learnings
   - Log implementation approach

3. **Next Steps**
   - Show newly available tasks
   - Suggest logical next task
   - Update velocity metrics

## Celebration & Learning

- Show impact of completion
- Display unblocked work
- Recognize achievement
- Capture lessons learned
36 .claude/commands/tm/set-status/to-in-progress.md Normal file
@@ -0,0 +1,36 @@
Start working on a task by setting its status to in-progress.

Arguments: $ARGUMENTS (task ID)

## Starting Work on a Task

This command does more than just change the status: it prepares your environment for productive work.

## Pre-Start Checks

1. Verify dependencies are met
2. Check if another task is already in-progress
3. Ensure task details are complete
4. Validate test strategy exists

## Execution

```bash
task-master set-status --id=$ARGUMENTS --status=in-progress
```

## Environment Setup

After setting to in-progress:
1. Create/checkout appropriate git branch
2. Open relevant documentation
3. Set up test watchers if applicable
4. Display task details and acceptance criteria
5. Show similar completed tasks for reference

## Smart Suggestions

- Estimated completion time based on complexity
- Related files from similar tasks
- Potential blockers to watch for
- Recommended first steps
32 .claude/commands/tm/set-status/to-pending.md Normal file
@@ -0,0 +1,32 @@
Set a task's status to pending.

Arguments: $ARGUMENTS (task ID)

## Setting Task to Pending

This moves a task back to the pending state, useful for:
- Resetting erroneously started tasks
- Deferring work that was prematurely begun
- Reorganizing sprint priorities

## Execution

```bash
task-master set-status --id=$ARGUMENTS --status=pending
```

## Validation

Before setting to pending:
- Warn if task is currently in-progress
- Check if this will block other tasks
- Suggest documenting why it's being reset
- Preserve any work already done

## Smart Actions

After setting to pending:
- Update sprint planning if needed
- Notify about freed resources
- Suggest priority reassessment
- Log the status change with context
40 .claude/commands/tm/set-status/to-review.md Normal file
@@ -0,0 +1,40 @@
Set a task's status to review.

Arguments: $ARGUMENTS (task ID)

## Marking Task for Review

This status indicates work is complete but needs verification before final approval.

## When to Use Review Status

- Code complete but needs peer review
- Implementation done but needs testing
- Documentation written but needs proofreading
- Design complete but needs stakeholder approval

## Execution

```bash
task-master set-status --id=$ARGUMENTS --status=review
```

## Review Preparation

When setting to review:
1. **Generate Review Checklist**
   - Link to PR/MR if applicable
   - Highlight key changes
   - Note areas needing attention
   - Include test results

2. **Documentation**
   - Update task with review notes
   - Link relevant artifacts
   - Specify reviewers if known

3. **Smart Actions**
   - Create review reminders
   - Track review duration
   - Suggest reviewers based on expertise
   - Prepare rollback plan if needed
117 .claude/commands/tm/setup/install.md Normal file
@@ -0,0 +1,117 @@
Check if Task Master is installed and install it if needed.

This command helps you get Task Master set up globally on your system.

## Detection and Installation Process

1. **Check Current Installation**
   ```bash
   # Check if task-master command exists
   which task-master || echo "Task Master not found"

   # Check npm global packages
   npm list -g task-master-ai
   ```

2. **System Requirements Check**
   ```bash
   # Verify Node.js is installed
   node --version

   # Verify npm is installed
   npm --version

   # Check Node version (need 16+)
   node -p "parseInt(process.versions.node) >= 16 ? 'Node version OK' : 'Node 16+ required'"
   ```

3. **Install Task Master Globally**
   If not installed, run:
   ```bash
   npm install -g task-master-ai
   ```

4. **Verify Installation**
   ```bash
   # Check version
   task-master --version

   # Verify command is available
   which task-master
   ```

5. **Initial Setup**
   ```bash
   # Initialize in current directory
   task-master init
   ```

6. **Configure AI Provider**
   Ensure you have at least one AI provider API key set:
   ```bash
   # Check current configuration
   task-master models --status

   # If no API keys found, guide setup
   echo "You'll need at least one API key:"
   echo "- ANTHROPIC_API_KEY for Claude"
   echo "- OPENAI_API_KEY for GPT models"
   echo "- PERPLEXITY_API_KEY for research"
   echo ""
   echo "Set them in your shell profile or .env file"
   ```

7. **Quick Test**
   ```bash
   # Create a test PRD
   echo "Build a simple hello world API" > test-prd.txt

   # Try parsing it
   task-master parse-prd test-prd.txt -n 3
   ```

## Troubleshooting

If installation fails:

**Permission Errors:**
```bash
# Try with sudo (macOS/Linux)
sudo npm install -g task-master-ai

# Or fix npm permissions
npm config set prefix ~/.npm-global
export PATH=~/.npm-global/bin:$PATH
```

**Network Issues:**
```bash
# Use a different registry
npm install -g task-master-ai --registry https://registry.npmjs.org/
```

**Node Version Issues:**
```bash
# Install Node 18+ via nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install 18
nvm use 18
```

## Success Confirmation

Once installed, you should see:
```
✅ Task Master v0.16.2 (or higher) installed
✅ Command 'task-master' available globally
✅ AI provider configured
✅ Ready to use slash commands!

Try: /project:task-master:init your-prd.md
```

## Next Steps

After installation:
1. Run `/project:utils:check-health` to verify setup
2. Configure AI providers with `/project:task-master:models`
3. Start using Task Master commands!
22 .claude/commands/tm/setup/quick-install.md Normal file
@@ -0,0 +1,22 @@
Quick install Task Master globally if not already installed.

Execute this streamlined installation:

```bash
# Check and install in one command
task-master --version 2>/dev/null || npm install -g task-master-ai

# Verify installation
task-master --version

# Quick setup check
task-master models --status || echo "Note: You'll need to set up an AI provider API key"
```

If you see "command not found" after installation, you may need to:
1. Restart your terminal
2. Or add the npm global bin directory to PATH: `export PATH=$(npm prefix -g)/bin:$PATH`

Once installed, you can use all the Task Master commands!

Quick test: Run `/project:help` to see all available commands.
82 .claude/commands/tm/show/index.md Normal file
@@ -0,0 +1,82 @@
Show detailed task information with rich context and insights.

Arguments: $ARGUMENTS

## Enhanced Task Display

Parse arguments to determine what to show and how.

### 1. **Smart Task Selection**

Based on $ARGUMENTS:
- Number → Show specific task with full context
- "current" → Show active in-progress task(s)
- "next" → Show recommended next task
- "blocked" → Show all blocked tasks with reasons
- "critical" → Show critical path tasks
- Multiple IDs → Comparative view

### 2. **Contextual Information**

For each task, intelligently include:

**Core Details**
- Full task information (id, title, description, details)
- Current status with history
- Test strategy and acceptance criteria
- Priority and complexity analysis

**Relationships**
- Dependencies (what it needs)
- Dependents (what needs it)
- Parent/subtask hierarchy
- Related tasks (similar work)

**Time Intelligence**
- Created/updated timestamps
- Time in current status
- Estimated vs actual time
- Historical completion patterns

### 3. **Visual Enhancements**

```
📋 Task #45: Implement User Authentication
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Status: 🟡 in-progress (2 hours)
Priority: 🔴 High | Complexity: 73/100

Dependencies: ✅ #41, ✅ #42, ⏳ #43 (blocked)
Blocks: #46, #47, #52

Progress: ████████░░ 80% complete

Recent Activity:
- 2h ago: Status changed to in-progress
- 4h ago: Dependency #42 completed
- Yesterday: Task expanded with 3 subtasks
```
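The text progress bar used in displays like the one above can be rendered with a few lines of shell. A minimal pure-shell sketch, assuming a ten-character bar and an integer percentage:

```shell
# Sketch: render a "Progress:" bar from an integer percentage (illustrative).
pct=80
width=10
filled=$(( pct * width / 100 ))   # number of solid cells
bar=""
i=0
while [ "$i" -lt "$width" ]; do
  if [ "$i" -lt "$filled" ]; then bar="$bar█"; else bar="$bar░"; fi
  i=$(( i + 1 ))
done
echo "Progress: $bar ${pct}% complete"
# → Progress: ████████░░ 80% complete
```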
### 4. **Intelligent Insights**

Based on task analysis:
- **Risk Assessment**: Complexity vs time remaining
- **Bottleneck Analysis**: Is this blocking critical work?
- **Recommendation**: Suggested approach or concerns
- **Similar Tasks**: How others completed similar work

### 5. **Action Suggestions**

Context-aware next steps:
- If blocked → Show how to unblock
- If complex → Suggest expansion
- If in-progress → Show completion checklist
- If done → Show dependent tasks ready to start

### 6. **Multi-Task View**

When showing multiple tasks:
- Common dependencies
- Optimal completion order
- Parallel work opportunities
- Combined complexity analysis
64 .claude/commands/tm/status/index.md Normal file
@@ -0,0 +1,64 @@
Enhanced status command with comprehensive project insights.

Arguments: $ARGUMENTS

## Intelligent Status Overview

### 1. **Executive Summary**
Quick dashboard view:
- 🏃 Active work (in-progress tasks)
- 📊 Progress metrics (% complete, velocity)
- 🚧 Blockers and risks
- ⏱️ Time analysis (estimated vs actual)
- 🎯 Sprint/milestone progress

### 2. **Contextual Analysis**

Based on $ARGUMENTS, focus on:
- "sprint" → Current sprint progress and burndown
- "blocked" → Dependency chains and resolution paths
- "team" → Task distribution and workload
- "timeline" → Schedule adherence and projections
- "risk" → High complexity or overdue items

### 3. **Smart Insights**

**Workflow Health:**
- Idle tasks (in-progress > 24h without updates)
- Bottlenecks (multiple tasks waiting on same dependency)
- Quick wins (low complexity, high impact)

**Predictive Analytics:**
- Completion projections based on velocity
- Risk of missing deadlines
- Recommended task order for optimal flow

### 4. **Visual Intelligence**

Dynamic visualization based on data:
```
Sprint Progress: ████████░░ 80% (16/20 tasks)
Velocity Trend: ↗️ +15% this week
Blocked Tasks: 🔴 3 critical path items

Priority Distribution:
High:   ████████ 8 tasks (2 blocked)
Medium: ████░░░░ 4 tasks
Low:    ██░░░░░░ 2 tasks
```

### 5. **Actionable Recommendations**

Based on analysis:
1. **Immediate actions** (unblock critical path)
2. **Today's focus** (optimal task sequence)
3. **Process improvements** (recurring patterns)
4. **Resource needs** (skills, time, dependencies)

### 6. **Historical Context**

Compare to previous periods:
- Velocity changes
- Pattern recognition
- Improvement areas
- Success patterns to repeat
117 .claude/commands/tm/sync-readme/index.md Normal file
@@ -0,0 +1,117 @@
Export tasks to README.md with professional formatting.

Arguments: $ARGUMENTS

Generate a well-formatted README with current task information.

## README Synchronization

Creates or updates README.md with beautifully formatted task information.

## Argument Parsing

Optional filters:
- "pending" → Only pending tasks
- "with-subtasks" → Include subtask details
- "by-priority" → Group by priority
- "sprint" → Current sprint only

## Execution

```bash
task-master sync-readme [--with-subtasks] [--status=<status>]
```

## README Generation

### 1. **Project Header**
```markdown
# Project Name

## 📋 Task Progress

Last Updated: 2024-01-15 10:30 AM

### Summary
- Total Tasks: 45
- Completed: 15 (33%)
- In Progress: 5 (11%)
- Pending: 25 (56%)
```
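The Summary percentages above follow from nearest-integer rounding of each count against the total. A small illustrative sketch using the example counts (the `pct` helper is hypothetical, not part of the tool):

```shell
# Illustrative sketch: derive the Summary percentages with
# nearest-integer rounding. Counts are the example values above.
total=45
done_n=15
prog=5
pend=25
pct() { echo $(( ($1 * 100 + total / 2) / total )); }  # round to nearest %
echo "- Total Tasks: $total"
echo "- Completed: $done_n ($(pct "$done_n")%)"
echo "- In Progress: $prog ($(pct "$prog")%)"
echo "- Pending: $pend ($(pct "$pend")%)"
```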
### 2. **Task Sections**
Organized by status or priority:
- Progress indicators
- Task descriptions
- Dependencies noted
- Time estimates

### 3. **Visual Elements**
- Progress bars
- Status badges
- Priority indicators
- Completion checkmarks

## Smart Features

1. **Intelligent Grouping**
   - By feature area
   - By sprint/milestone
   - By assigned developer
   - By priority

2. **Progress Tracking**
   - Overall completion
   - Sprint velocity
   - Burndown indication
   - Time tracking

3. **Formatting Options**
   - GitHub-flavored markdown
   - Task checkboxes
   - Collapsible sections
   - Table format available

## Example Output

```markdown
## 🚀 Current Sprint

### In Progress
- [ ] 🔄 #5 **Implement user authentication** (60% complete)
  - Dependencies: API design (#3 ✅)
  - Subtasks: 4 (2 completed)
  - Est: 8h / Spent: 5h

### Pending (High Priority)
- [ ] ⚡ #8 **Create dashboard UI**
  - Blocked by: #5
  - Complexity: High
  - Est: 12h
```

## Customization

Based on arguments:
- Include/exclude sections
- Detail level control
- Custom grouping
- Filter by criteria

## Post-Sync

After generation:
1. Show diff preview
2. Backup existing README
3. Write new content
4. Commit reminder
5. Update timestamp

## Integration

Works well with:
- Git workflows
- CI/CD pipelines
- Project documentation
- Team updates
- Client reports
108 .claude/commands/tm/update/from-id.md Normal file
@@ -0,0 +1,108 @@
Update multiple tasks starting from a specific ID.

Arguments: $ARGUMENTS

Parse starting task ID and update context.

## Bulk Task Updates

Update multiple related tasks based on new requirements or context changes.

## Argument Parsing

- "from 5: add security requirements"
- "5 onwards: update API endpoints"
- "starting at 5: change to use new framework"

## Execution

```bash
task-master update --from=<id> --prompt="<context>"
```

## Update Process

### 1. **Task Selection**
Starting from specified ID:
- Include the task itself
- Include all dependent tasks
- Include related subtasks
- Smart boundary detection

### 2. **Context Application**
AI analyzes the update context and:
- Identifies what needs changing
- Maintains consistency
- Preserves completed work
- Updates related information

### 3. **Intelligent Updates**
- Modify descriptions appropriately
- Update test strategies
- Adjust time estimates
- Revise dependencies if needed

## Smart Features

1. **Scope Detection**
   - Find natural task groupings
   - Identify related features
   - Stop at logical boundaries
   - Avoid over-updating

2. **Consistency Maintenance**
   - Keep naming conventions
   - Preserve relationships
   - Update cross-references
   - Maintain task flow

3. **Change Preview**
   ```
   Bulk Update Preview
   ━━━━━━━━━━━━━━━━━━
   Starting from: Task #5
   Tasks to update: 8 tasks + 12 subtasks

   Context: "add security requirements"

   Changes will include:
   - Add security sections to descriptions
   - Update test strategies for security
   - Add security-related subtasks where needed
   - Adjust time estimates (+20% average)

   Continue? (y/n)
   ```

## Example Updates

```
/project:tm/update/from-id 5: change database to PostgreSQL
→ Analyzing impact starting from task #5
→ Found 6 related tasks to update
→ Updates will maintain consistency
→ Preview changes? (y/n)

Applied updates:
✓ Task #5: Updated connection logic references
✓ Task #6: Changed migration approach
✓ Task #7: Updated query syntax notes
✓ Task #8: Revised testing strategy
✓ Task #9: Updated deployment steps
✓ Task #12: Changed backup procedures
```

## Safety Features

- Preview all changes
- Selective confirmation
- Rollback capability
- Change logging
- Validation checks

## Post-Update

- Summary of changes
- Consistency verification
- Suggest review tasks
- Update timeline if needed
72
.claude/commands/tm/update/index.md
Normal file
@@ -0,0 +1,72 @@
Update tasks with intelligent field detection and bulk operations.

Arguments: $ARGUMENTS

## Intelligent Task Updates

Parse arguments to determine update intent and execute smartly.

### 1. **Natural Language Processing**

Understand update requests like:
- "mark 23 as done" → Update status to done
- "increase priority of 45" → Set priority to high
- "add dependency on 12 to task 34" → Add dependency
- "tasks 20-25 need review" → Bulk status update
- "all API tasks high priority" → Pattern-based update

### 2. **Smart Field Detection**

Automatically detect what to update:
- Status keywords: done, complete, start, pause, review
- Priority changes: urgent, high, low, deprioritize
- Dependency updates: depends on, blocks, after
- Assignment: assign to, owner, responsible
- Time: estimate, spent, deadline
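The keyword tables above map naturally onto a small lookup. As a rough sketch only — the keyword lists and field names below are illustrative, and the real detection in Task Master is AI-driven rather than a fixed table:

```python
# Illustrative keyword tables echoing the examples above; not Task Master's
# actual detection logic, which is AI-driven.
FIELD_KEYWORDS = {
    "status": ["done", "complete", "start", "pause", "review"],
    "priority": ["priority", "urgent", "high", "low", "deprioritize"],
    "dependencies": ["depends on", "dependency", "blocks", "after"],
    "assignee": ["assign to", "owner", "responsible"],
    "time": ["estimate", "spent", "deadline"],
}

def detect_fields(request: str) -> list[str]:
    """Return the fields a natural-language update request appears to target."""
    text = request.lower()
    return [field for field, words in FIELD_KEYWORDS.items()
            if any(word in text for word in words)]
```

For example, `detect_fields("mark 23 as done")` would flag only the status field, while a phrase mixing priority and time keywords would flag both.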

### 3. **Bulk Operations**

Support for multiple task updates:
```
Examples:
- "complete tasks 12, 15, 18"
- "all pending auth tasks to in-progress"
- "increase priority for tasks blocking 45"
- "defer all documentation tasks"
```
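Selectors like "tasks 12, 15, 18" or "tasks 20-25" can be expanded into flat ID lists with a short parser. This is only one possible sketch of that expansion, not the actual implementation:

```python
import re

def parse_task_ids(text: str) -> list[int]:
    """Expand ID mentions like '12, 15, 18' or '20-25' into a flat ID list.
    A sketch of a bulk-selector parser; the real parsing is AI-assisted."""
    ids: list[int] = []
    # Each match is either a single number or a "start-end" range.
    for start, end in re.findall(r"(\d+)(?:\s*-\s*(\d+))?", text):
        if end:
            ids.extend(range(int(start), int(end) + 1))
        else:
            ids.append(int(start))
    return ids
```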

### 4. **Contextual Validation**

Before updating, check:
- Status transitions are valid
- Dependencies don't create cycles
- Priority changes make sense
- Bulk updates won't break project flow
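The "status transitions are valid" check amounts to a transition table. The table below is hypothetical — Task Master's actual allowed transitions may differ — but it illustrates the shape of the check:

```python
# Hypothetical status-transition table for illustration only; Task Master's
# real set of valid transitions may differ.
VALID_TRANSITIONS = {
    "pending": {"in-progress", "deferred", "cancelled"},
    "in-progress": {"done", "review", "pending"},
    "review": {"done", "in-progress"},
    "done": set(),  # terminal state in this sketch
}

def can_transition(current: str, target: str) -> bool:
    """Return True if moving a task from `current` to `target` is allowed."""
    return target in VALID_TRANSITIONS.get(current, set())
```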

Show preview:
```
Update Preview:
─────────────────
Tasks to update: #23, #24, #25
Change: status → in-progress
Impact: Will unblock tasks #30, #31
Warning: Task #24 has unmet dependencies
```

### 5. **Smart Suggestions**

Based on update:
- Completing task? → Show newly unblocked tasks
- Changing priority? → Show impact on sprint
- Adding dependency? → Check for conflicts
- Bulk update? → Show summary of changes

### 6. **Workflow Integration**

After updates:
- Auto-update dependent task states
- Trigger status recalculation
- Update sprint/milestone progress
- Log changes with context

Result: Flexible, intelligent task updates with safety checks.
119
.claude/commands/tm/update/single.md
Normal file
@@ -0,0 +1,119 @@
Update a single specific task with new information.

Arguments: $ARGUMENTS

Parse task ID and update details.

## Single Task Update

Precisely update one task with AI assistance to maintain consistency.

## Argument Parsing

Natural language updates:
- "5: add caching requirement"
- "update 5 to include error handling"
- "task 5 needs rate limiting"
- "5 change priority to high"

## Execution

```bash
task-master update-task --id=<id> --prompt="<context>"
```

## Update Types

### 1. **Content Updates**
- Enhance description
- Add requirements
- Clarify details
- Update acceptance criteria

### 2. **Metadata Updates**
- Change priority
- Adjust time estimates
- Update complexity
- Modify dependencies

### 3. **Strategic Updates**
- Revise approach
- Change test strategy
- Update implementation notes
- Adjust subtask needs

## AI-Powered Updates

The AI:
1. **Understands Context**
   - Reads current task state
   - Identifies update intent
   - Maintains consistency
   - Preserves important info

2. **Applies Changes**
   - Updates relevant fields
   - Keeps style consistent
   - Adds without removing
   - Enhances clarity

3. **Validates Results**
   - Checks coherence
   - Verifies completeness
   - Maintains relationships
   - Suggests related updates

## Example Updates

```
/project:tm/update/single 5: add rate limiting
→ Updating Task #5: "Implement API endpoints"

Current: Basic CRUD endpoints
Adding: Rate limiting requirements

Updated sections:
✓ Description: Added rate limiting mention
✓ Details: Added specific limits (100/min)
✓ Test Strategy: Added rate limit tests
✓ Complexity: Increased from 5 to 6
✓ Time Estimate: Increased by 2 hours

Suggestion: Also update task #6 (API Gateway) for consistency?
```

## Smart Features

1. **Incremental Updates**
   - Adds without overwriting
   - Preserves work history
   - Tracks what changed
   - Shows diff view

2. **Consistency Checks**
   - Related task alignment
   - Subtask compatibility
   - Dependency validity
   - Timeline impact

3. **Update History**
   - Timestamp changes
   - Track who/what updated
   - Reason for update
   - Previous versions

## Field-Specific Updates

Quick syntax for specific fields:
- "5 priority:high" → Update priority only
- "5 add-time:4h" → Add to time estimate
- "5 status:review" → Change status
- "5 depends:3,4" → Add dependencies
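The quick syntax above follows a regular `<id> <field>:<value>` shape, so it can be split with a single pattern. A minimal sketch, treating the field names as illustrative rather than a fixed schema:

```python
import re

def parse_quick_update(arg: str):
    """Parse quick syntax like '5 priority:high' into (task_id, field, value).
    Returns None when the argument doesn't match the quick form."""
    m = re.match(r"(\d+)\s+([\w-]+):(\S+)", arg)
    if not m:
        return None
    task_id, field, value = m.groups()
    return int(task_id), field, value
```

A non-matching argument (e.g. free-form prose) would fall through to the natural-language path described earlier.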

## Post-Update

- Show updated task
- Highlight changes
- Check related tasks
- Update suggestions
- Timeline adjustments
97
.claude/commands/tm/utils/analyze.md
Normal file
@@ -0,0 +1,97 @@
Advanced project analysis with actionable insights and recommendations.

Arguments: $ARGUMENTS

## Comprehensive Project Analysis

Multi-dimensional analysis based on requested focus area.

### 1. **Analysis Modes**

Based on $ARGUMENTS:
- "velocity" → Sprint velocity and trends
- "quality" → Code quality metrics
- "risk" → Risk assessment and mitigation
- "dependencies" → Dependency graph analysis
- "team" → Workload and skill distribution
- "architecture" → System design coherence
- Default → Full spectrum analysis

### 2. **Velocity Analytics**

```
📊 Velocity Analysis
━━━━━━━━━━━━━━━━━━━
Current Sprint: 24 points/week ↗️ +20%
Rolling Average: 20 points/week
Efficiency: 85% (17/20 tasks on time)

Bottlenecks Detected:
- Code review delays (avg 4h wait)
- Test environment availability
- Dependency on external team

Recommendations:
1. Implement parallel review process
2. Add staging environment
3. Mock external dependencies
```

### 3. **Risk Assessment**

**Technical Risks**
- High complexity tasks without backup assignee
- Single points of failure in architecture
- Insufficient test coverage in critical paths
- Technical debt accumulation rate

**Project Risks**
- Critical path dependencies
- Resource availability gaps
- Deadline feasibility analysis
- Scope creep indicators

### 4. **Dependency Intelligence**

Visual dependency analysis:
```
Critical Path:
#12 → #15 → #23 → #45 → #50 (20 days)
     ↘ #24 → #46 ↗

Optimization: Parallelize #15 and #24
Time Saved: 3 days
```
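The critical path in a dependency graph is the longest duration-weighted chain. A simplified sketch over an assumed `{task_id: (duration_days, [dependency_ids])}` shape — not Task Master's internal data model:

```python
# Sketch of critical-path computation over a task dependency DAG, assuming a
# simplified {task_id: (duration_days, [dependency_ids])} model.
def critical_path(tasks):
    memo = {}

    def longest(tid):
        """Longest (duration, path) chain ending at task `tid`."""
        if tid not in memo:
            dur, deps = tasks[tid]
            best, path = 0, []
            for dep in deps:
                cand_len, cand_path = longest(dep)
                if cand_len > best:
                    best, path = cand_len, cand_path
            memo[tid] = (best + dur, path + [tid])
        return memo[tid]

    # The critical path is the longest chain over all tasks.
    return max((longest(t) for t in tasks), key=lambda x: x[0])
```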

### 5. **Quality Metrics**

**Code Quality**
- Test coverage trends
- Complexity scores
- Technical debt ratio
- Review feedback patterns

**Process Quality**
- Rework frequency
- Bug introduction rate
- Time to resolution
- Knowledge distribution

### 6. **Predictive Insights**

Based on patterns:
- Completion probability by deadline
- Resource needs projection
- Risk materialization likelihood
- Suggested interventions

### 7. **Executive Dashboard**

High-level summary with:
- Health score (0-100)
- Top 3 risks
- Top 3 opportunities
- Recommended actions
- Success probability

Result: Data-driven decisions with clear action paths.
71
.claude/commands/tm/validate-dependencies/index.md
Normal file
@@ -0,0 +1,71 @@
Validate all task dependencies for issues.

## Dependency Validation

Comprehensive check for dependency problems across the entire project.

## Execution

```bash
task-master validate-dependencies
```

## Validation Checks

1. **Circular Dependencies**
   - A depends on B, B depends on A
   - Complex circular chains
   - Self-dependencies

2. **Missing Dependencies**
   - References to non-existent tasks
   - Deleted task references
   - Invalid task IDs

3. **Logical Issues**
   - Completed tasks depending on pending
   - Cancelled tasks in dependency chains
   - Impossible sequences

4. **Complexity Warnings**
   - Over-complex dependency chains
   - Too many dependencies per task
   - Bottleneck tasks
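The first two check categories — self/circular references and missing tasks — can be sketched with a standard graph traversal. This is an illustration over a simplified `{task_id: [dependency_ids]}` model, not the command's actual implementation:

```python
def validate_dependencies(tasks):
    """Detect self, missing, and circular dependencies.
    `tasks` maps task id -> list of dependency ids (a simplified model)."""
    issues = []
    for tid, deps in tasks.items():
        for dep in deps:
            if dep == tid:
                issues.append(f"Task #{tid} depends on itself")
            elif dep not in tasks:
                issues.append(f"Task #{tid} depends on missing task #{dep}")

    # Depth-first search with coloring to find circular chains.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in tasks}

    def dfs(t):
        color[t] = GRAY
        for dep in tasks[t]:
            if dep not in tasks:
                continue  # already reported as missing
            if color[dep] == GRAY:
                issues.append(f"Circular dependency involving #{t} and #{dep}")
            elif color[dep] == WHITE:
                dfs(dep)
        color[t] = BLACK

    for t in tasks:
        if color[t] == WHITE:
            dfs(t)
    return issues
```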

## Smart Analysis

The validation provides:
- Visual dependency graph
- Critical path analysis
- Bottleneck identification
- Suggested optimizations

## Report Format

```
Dependency Validation Report
━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ No circular dependencies found
⚠️ 2 warnings found:
- Task #23 has 7 dependencies (consider breaking down)
- Task #45 blocks 5 other tasks (potential bottleneck)
❌ 1 error found:
- Task #67 depends on deleted task #66

Critical Path: #1 → #5 → #23 → #45 → #50 (15 days)
```

## Actionable Output

For each issue found:
- Clear description
- Impact assessment
- Suggested fix
- Command to resolve

## Next Steps

After validation:
- Run `/project:tm/fix-dependencies` to auto-fix
- Manually adjust problematic dependencies
- Rerun to verify fixes
97
.claude/commands/tm/workflows/auto-implement.md
Normal file
@@ -0,0 +1,97 @@
Enhanced auto-implementation with intelligent code generation and testing.

Arguments: $ARGUMENTS

## Intelligent Auto-Implementation

Advanced implementation with context awareness and quality checks.

### 1. **Pre-Implementation Analysis**

Before starting:
- Analyze task complexity and requirements
- Check codebase patterns and conventions
- Identify similar completed tasks
- Assess test coverage needs
- Detect potential risks

### 2. **Smart Implementation Strategy**

Based on task type and context:

**Feature Tasks**
1. Research existing patterns
2. Design component architecture
3. Implement with tests
4. Integrate with system
5. Update documentation

**Bug Fix Tasks**
1. Reproduce issue
2. Identify root cause
3. Implement minimal fix
4. Add regression tests
5. Verify side effects

**Refactoring Tasks**
1. Analyze current structure
2. Plan incremental changes
3. Maintain test coverage
4. Refactor step-by-step
5. Verify behavior unchanged

### 3. **Code Intelligence**

**Pattern Recognition**
- Learn from existing code
- Follow team conventions
- Use preferred libraries
- Match style guidelines

**Test-Driven Approach**
- Write tests first when possible
- Ensure comprehensive coverage
- Include edge cases
- Performance considerations

### 4. **Progressive Implementation**

Step-by-step with validation:
```
Step 1/5: Setting up component structure ✓
Step 2/5: Implementing core logic ✓
Step 3/5: Adding error handling ⚡ (in progress)
Step 4/5: Writing tests ⏳
Step 5/5: Integration testing ⏳

Current: Adding try-catch blocks and validation...
```

### 5. **Quality Assurance**

Automated checks:
- Linting and formatting
- Test execution
- Type checking
- Dependency validation
- Performance analysis

### 6. **Smart Recovery**

If issues arise:
- Diagnostic analysis
- Suggestion generation
- Fallback strategies
- Manual intervention points
- Learning from failures

### 7. **Post-Implementation**

After completion:
- Generate PR description
- Update documentation
- Log lessons learned
- Suggest follow-up tasks
- Update task relationships

Result: High-quality, production-ready implementations.
77
.claude/commands/tm/workflows/pipeline.md
Normal file
@@ -0,0 +1,77 @@
Execute a pipeline of commands based on a specification.

Arguments: $ARGUMENTS

## Command Pipeline Execution

Parse pipeline specification from arguments. Supported formats:

### Simple Pipeline
`init → expand-all → sprint-plan`

### Conditional Pipeline
`status → if:pending>10 → sprint-plan → else → next`

### Iterative Pipeline
`for:pending-tasks → expand → complexity-check`
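All three formats share one lexical structure: steps separated by the `→` arrow. A minimal tokenizer sketch (the `->` fallback is an assumption for keyboards without the arrow character; conditional and loop steps would still need their own interpretation downstream):

```python
def parse_pipeline(spec: str):
    """Split a pipeline spec like 'init → expand-all → sprint-plan' into steps.
    Accepts both the arrow used in the docs and a plain '->' fallback."""
    normalized = spec.replace("->", "→")
    return [step.strip() for step in normalized.split("→") if step.strip()]
```

Steps such as `if:pending>10` or `for:pending-tasks` come out as single tokens, leaving their `prefix:condition` syntax to a later interpretation pass.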

### Smart Pipeline Patterns

**1. Project Setup Pipeline**
```
init [prd] →
expand-all →
complexity-report →
sprint-plan →
show first-sprint
```

**2. Daily Work Pipeline**
```
standup →
if:in-progress → continue →
else → next → start
```

**3. Task Completion Pipeline**
```
complete [id] →
git-commit →
if:blocked-tasks-freed → show-freed →
next
```

**4. Quality Check Pipeline**
```
list in-progress →
for:each → check-idle-time →
if:idle>1day → prompt-update
```

### Pipeline Features

**Variables**
- Store results: `status → $count=pending-count`
- Use in conditions: `if:$count>10`
- Pass between commands: `expand $high-priority-tasks`

**Error Handling**
- On failure: `try:complete → catch:show-blockers`
- Skip on error: `optional:test-run`
- Retry logic: `retry:3:commit`

**Parallel Execution**
- Parallel branches: `[analyze | test | lint]`
- Join results: `parallel → join:report`

### Execution Flow

1. Parse pipeline specification
2. Validate command sequence
3. Execute with state passing
4. Handle conditions and loops
5. Aggregate results
6. Show summary

This enables complex workflows like:
`parse-prd → expand-all → filter:complex>70 → assign:senior → sprint-plan:weighted`
55
.claude/commands/tm/workflows/smart-flow.md
Normal file
@@ -0,0 +1,55 @@
Execute an intelligent workflow based on current project state and recent commands.

This command analyzes:
1. Recent commands you've run
2. Current project state
3. Time of day / day of week
4. Your working patterns

Arguments: $ARGUMENTS

## Intelligent Workflow Selection

Based on context, I'll determine the best workflow:

### Context Analysis
- Previous command executed
- Current task states
- Unfinished work from last session
- Your typical patterns

### Smart Execution

If last command was:
- `status` → Likely starting work → Run daily standup
- `complete` → Task finished → Find next task
- `list pending` → Planning → Suggest sprint planning
- `expand` → Breaking down work → Show complexity analysis
- `init` → New project → Show onboarding workflow

If no recent commands:
- Morning? → Daily standup workflow
- Many pending tasks? → Sprint planning
- Tasks blocked? → Dependency resolution
- Friday? → Weekly review
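The two heuristic lists above reduce to a dispatch table plus a fallback. The workflow names below are hypothetical labels for illustration — the real selection is context- and AI-driven, not a fixed mapping:

```python
# Illustrative dispatch table for the heuristics above; workflow names are
# hypothetical placeholders, not real Task Master commands.
NEXT_WORKFLOW = {
    "status": "daily-standup",
    "complete": "find-next-task",
    "list pending": "sprint-planning",
    "expand": "complexity-analysis",
    "init": "onboarding",
}

def pick_workflow(last_command=None, is_morning=False):
    """Pick a workflow from the last command, falling back to time-of-day."""
    if last_command in NEXT_WORKFLOW:
        return NEXT_WORKFLOW[last_command]
    return "daily-standup" if is_morning else "project-review"
```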

### Workflow Composition

I'll chain appropriate commands:
1. Analyze current state
2. Execute primary workflow
3. Suggest follow-up actions
4. Prepare environment for coding

### Learning Mode

This command learns from your patterns:
- Track command sequences
- Note time preferences
- Remember common workflows
- Adapt to your style

Example flows detected:
- Morning: standup → next → start
- After lunch: status → continue task
- End of day: complete → commit → status
@@ -26,6 +26,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
* `--description <text>`: `Provide a brief description for your project.`
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
* `--no-git`: `Skip initializing a Git repository entirely.`
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
* **Usage:** Run this once at the beginning of a new project.
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
@@ -36,6 +37,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `authorName`: `Author name.` (CLI: `--author <author>`)
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
* `noGit`: `Skip initializing a Git repository entirely. Default is false.` (CLI: `--no-git`)
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.
213
CHANGELOG.md
@@ -1,5 +1,218 @@
# task-master-ai

## 0.18.0

### Minor Changes

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`

  - For example:
  - `OPENAI_BASE_URL`

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Added comprehensive rule profile management:

  **New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
  **Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
  **Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
  **Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
  **Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
  **Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
  **Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.

  This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.

  - Resolves #338

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Make task-master more compatible with the "o" family models of OpenAI

  Now works well with:

  - o3
  - o3-mini
  - etc.
- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - - **Git Worktree Detection:**
    - Now properly skips Git initialization when inside existing Git worktree
    - Prevents accidental nested repository creation
  - **Flag System Overhaul:**
    - `--git`/`--no-git` controls repository initialization
    - `--aliases`/`--no-aliases` consistently manages shell alias creation
    - `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
    - `--dry-run` accurately previews all initialization behaviors
  - **GitTasks Functionality:**
    - New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
    - New `--no-git-tasks` flag excludes task files from Git (default behavior)
    - Supports both CLI and MCP interfaces with proper parameter passing

  **Implementation Details:**

  - Added explicit Git worktree detection before initialization
  - Refactored flag processing to ensure consistent behavior
  - Fixes #734

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code provider support

  Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.

  Key features:

  - New claude-code provider with support for opus and sonnet models
  - No API key required - uses local Claude Code CLI installation
  - Optional dependency - won't affect users who don't need Claude Code
  - Lazy loading ensures the provider only loads when requested
  - Full integration with existing Task Master commands and workflows
  - Comprehensive test coverage for reliability
  - New --claude-code flag for the models command

  Users can now configure Claude Code models with:
  task-master models --set-main sonnet --claude-code
  task-master models --set-research opus --claude-code

  The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.
### Patch Changes

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption

  - Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
  - Add new test section for feature-expand tag creation and testing during expand operations
  - Verify tag preservation during expand, force expand, and expand --all operations
  - Test that master tag remains intact while feature-expand tag receives subtasks correctly
  - Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
  - All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix Cursor deeplink installation by providing copy-paste instructions for GitHub compatibility

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Call rules interactive setup during init

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Update o3 model price

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improves Amazon Bedrock support

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with task creation/update where subtasks are being created like id: <parent_task>.<subtask> instead of just id: <subtask>

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixes issue with expand CLI command "Complexity report not found"

  - Closes #735
  - Closes #728

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Store tasks in Git by default

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure

  - **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
  - **Improved search UX**: Integrated search for better model discovery with real-time filtering
  - **Better organization**: Moved custom provider options to bottom of model selection with clear section separators

  This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix weird `task-master init` bug when using in certain environments

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Rename Roo Code Boomerang role to Orchestrator

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve MCP keys check in Cursor
## 0.18.0-rc.0

### Minor Changes

- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`

  - For example:
  - `OPENAI_BASE_URL`

- [#460](https://github.com/eyaltoledano/claude-task-master/pull/460) [`a09a2d0`](https://github.com/eyaltoledano/claude-task-master/commit/a09a2d0967a10276623e3f3ead3ed577c15ce62f) Thanks [@joedanz](https://github.com/joedanz)! - Added comprehensive rule profile management:

  **New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
  **Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
  **Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
  **Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
  **Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
  **Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
  **Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.

  This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.

  - Resolves #338

- [#804](https://github.com/eyaltoledano/claude-task-master/pull/804) [`1b8c320`](https://github.com/eyaltoledano/claude-task-master/commit/1b8c320c570473082f1eb4bf9628bff66e799092) Thanks [@ejones40](https://github.com/ejones40)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker
|
||||
|
||||
- [#743](https://github.com/eyaltoledano/claude-task-master/pull/743) [`a2a3229`](https://github.com/eyaltoledano/claude-task-master/commit/a2a3229fd01e24a5838f11a3938a77250101e184) Thanks [@joedanz](https://github.com/joedanz)! - - **Git Worktree Detection:**
|
||||
|
||||
- Now properly skips Git initialization when inside existing Git worktree
|
||||
- Prevents accidental nested repository creation
|
||||
- **Flag System Overhaul:**
|
||||
- `--git`/`--no-git` controls repository initialization
|
||||
- `--aliases`/`--no-aliases` consistently manages shell alias creation
|
||||
- `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
|
||||
- `--dry-run` accurately previews all initialization behaviors
|
||||
- **GitTasks Functionality:**
|
||||
- New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
|
||||
- New `--no-git-tasks` flag excludes task files from Git (default behavior)
|
||||
- Supports both CLI and MCP interfaces with proper parameter passing
|
||||
|
||||
**Implementation Details:**
|
||||
|
||||
- Added explicit Git worktree detection before initialization
|
||||
- Refactored flag processing to ensure consistent behavior
|
||||
- Fixes #734
|
||||
|
||||
- [#829](https://github.com/eyaltoledano/claude-task-master/pull/829) [`4b0c9d9`](https://github.com/eyaltoledano/claude-task-master/commit/4b0c9d9af62d00359fca3f43283cf33223d410bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code provider support
|
||||
|
||||
Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.
|
||||
|
||||
Key features:
|
||||
|
||||
- New claude-code provider with support for opus and sonnet models
|
||||
- No API key required - uses local Claude Code CLI installation
|
||||
- Optional dependency - won't affect users who don't need Claude Code
|
||||
- Lazy loading ensures the provider only loads when requested
|
||||
- Full integration with existing Task Master commands and workflows
|
||||
- Comprehensive test coverage for reliability
|
||||
- New --claude-code flag for the models command
|
||||
|
||||
Users can now configure Claude Code models with:
|
||||
task-master models --set-main sonnet --claude-code
|
||||
task-master models --set-research opus --claude-code
|
||||
|
||||
The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#827](https://github.com/eyaltoledano/claude-task-master/pull/827) [`5da5b59`](https://github.com/eyaltoledano/claude-task-master/commit/5da5b59bdeeb634dcb3adc7a9bc0fc37e004fa0c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption
|
||||
|
||||
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
|
||||
- Add new test section for feature-expand tag creation and testing during expand operations
|
||||
- Verify tag preservation during expand, force expand, and expand --all operations
|
||||
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
|
||||
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
|
||||
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected
|
||||
|
||||
- [#833](https://github.com/eyaltoledano/claude-task-master/pull/833) [`cf2c066`](https://github.com/eyaltoledano/claude-task-master/commit/cf2c06697a0b5b952fb6ca4b3c923e9892604d08) Thanks [@joedanz](https://github.com/joedanz)! - Call rules interactive setup during init
|
||||
|
||||
- [#826](https://github.com/eyaltoledano/claude-task-master/pull/826) [`7811227`](https://github.com/eyaltoledano/claude-task-master/commit/78112277b3caa4539e6e29805341a944799fb0e7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improves Amazon Bedrock support
|
||||
|
||||
- [#834](https://github.com/eyaltoledano/claude-task-master/pull/834) [`6483537`](https://github.com/eyaltoledano/claude-task-master/commit/648353794eb60d11ffceda87370a321ad310fbd7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with task creation/update where subtasks are being created like id: <parent_task>.<subtask> instead if just id: <subtask>
|
||||
|
||||
- [#835](https://github.com/eyaltoledano/claude-task-master/pull/835) [`727f1ec`](https://github.com/eyaltoledano/claude-task-master/commit/727f1ec4ebcbdd82547784c4c113b666af7e122e) Thanks [@joedanz](https://github.com/joedanz)! - Store tasks in Git by default
|
||||
|
||||
- [#822](https://github.com/eyaltoledano/claude-task-master/pull/822) [`1bd6d4f`](https://github.com/eyaltoledano/claude-task-master/commit/1bd6d4f2468070690e152e6e63e15a57bc550d90) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure
|
||||
|
||||
- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
|
||||
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
|
||||
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
|
||||
|
||||
This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`
|
||||
|
||||
- [#633](https://github.com/eyaltoledano/claude-task-master/pull/633) [`3a2325a`](https://github.com/eyaltoledano/claude-task-master/commit/3a2325a963fed82377ab52546eedcbfebf507a7e) Thanks [@nmarley](https://github.com/nmarley)! - Fix weird `task-master init` bug when using in certain environments
|
||||
|
||||
- [#831](https://github.com/eyaltoledano/claude-task-master/pull/831) [`b592dff`](https://github.com/eyaltoledano/claude-task-master/commit/b592dff8bc5c5d7966843fceaa0adf4570934336) Thanks [@joedanz](https://github.com/joedanz)! - Rename Roo Code Boomerang role to Orchestrator
|
||||
|
||||
- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve mcp keys check in cursor
|
||||
|
||||
## 0.17.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
@@ -94,6 +94,8 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
|
||||
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
|
||||
|
||||
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
|
||||
|
||||
###### VS Code (`servers` + `type`)
|
||||
|
||||
```json
|
||||
|
||||
@@ -9,32 +9,32 @@
|
||||
|
||||
**Architectural Design & Planning Role (Delegated Tasks):**
|
||||
|
||||
Your primary role when activated via `new_task` by the Boomerang orchestrator is to perform specific architectural, design, or planning tasks, focusing on the instructions provided in the delegation message and referencing the relevant `taskmaster-ai` task ID.
|
||||
Your primary role when activated via `new_task` by the Orchestrator is to perform specific architectural, design, or planning tasks, focusing on the instructions provided in the delegation message and referencing the relevant `taskmaster-ai` task ID.
|
||||
|
||||
1. **Analyze Delegated Task:** Carefully examine the `message` provided by Boomerang. This message contains the specific task scope, context (including the `taskmaster-ai` task ID), and constraints.
|
||||
1. **Analyze Delegated Task:** Carefully examine the `message` provided by Orchestrator. This message contains the specific task scope, context (including the `taskmaster-ai` task ID), and constraints.
|
||||
2. **Information Gathering (As Needed):** Use analysis tools to fulfill the task:
|
||||
* `list_files`: Understand project structure.
|
||||
* `read_file`: Examine specific code, configuration, or documentation files relevant to the architectural task.
|
||||
* `list_code_definition_names`: Analyze code structure and relationships.
|
||||
* `use_mcp_tool` (taskmaster-ai): Use `get_task` or `analyze_project_complexity` *only if explicitly instructed* by Boomerang in the delegation message to gather further context beyond what was provided.
|
||||
* `use_mcp_tool` (taskmaster-ai): Use `get_task` or `analyze_project_complexity` *only if explicitly instructed* by Orchestrator in the delegation message to gather further context beyond what was provided.
|
||||
3. **Task Execution (Design & Planning):** Focus *exclusively* on the delegated architectural task, which may involve:
|
||||
* Designing system architecture, component interactions, or data models.
|
||||
* Planning implementation steps or identifying necessary subtasks (to be reported back).
|
||||
* Analyzing technical feasibility, complexity, or potential risks.
|
||||
* Defining interfaces, APIs, or data contracts.
|
||||
* Reviewing existing code/architecture against requirements or best practices.
|
||||
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
||||
* Summary of design decisions, plans created, analysis performed, or subtasks identified.
|
||||
* Any relevant artifacts produced (e.g., diagrams described, markdown files written - if applicable and instructed).
|
||||
* Completion status (success, failure, needs review).
|
||||
* Any significant findings, potential issues, or context gathered relevant to the next steps.
|
||||
5. **Handling Issues:**
|
||||
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring further review (e.g., needing testing input, deeper debugging analysis), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
|
||||
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring further review (e.g., needing testing input, deeper debugging analysis), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
|
||||
* **Failure:** If the task fails (e.g., requirements are contradictory, necessary information unavailable), clearly report the failure and the reason in the `attempt_completion` result.
|
||||
6. **Taskmaster Interaction:**
|
||||
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||
7. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
||||
7. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
|
||||
**Context Reporting Strategy:**
|
||||
|
||||
@@ -42,17 +42,17 @@ context_reporting: |
|
||||
<thinking>
|
||||
Strategy:
|
||||
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
||||
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||
</thinking>
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
|
||||
- **Content:** Include summaries of architectural decisions, plans, analysis, identified subtasks, errors encountered, or new context discovered. Structure the `result` clearly.
|
||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
|
||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
|
||||
|
||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||
|
||||
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
||||
taskmaster_strategy:
|
||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||
initialization: |
|
||||
@@ -64,7 +64,7 @@ taskmaster_strategy:
|
||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||
if_uninitialized: |
|
||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
||||
if_ready: |
|
||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||
@@ -73,21 +73,21 @@ taskmaster_strategy:
|
||||
**Mode Collaboration & Triggers (Architect Perspective):**
|
||||
|
||||
mode_collaboration: |
|
||||
# Architect Mode Collaboration (Focus on receiving from Boomerang and reporting back)
|
||||
- Delegated Task Reception (FROM Boomerang via `new_task`):
|
||||
# Architect Mode Collaboration (Focus on receiving from Orchestrator and reporting back)
|
||||
- Delegated Task Reception (FROM Orchestrator via `new_task`):
|
||||
* Receive specific architectural/planning task instructions referencing a `taskmaster-ai` ID.
|
||||
* Analyze requirements, scope, and constraints provided by Boomerang.
|
||||
- Completion Reporting (TO Boomerang via `attempt_completion`):
|
||||
* Analyze requirements, scope, and constraints provided by Orchestrator.
|
||||
- Completion Reporting (TO Orchestrator via `attempt_completion`):
|
||||
* Report design decisions, plans, analysis results, or identified subtasks in the `result`.
|
||||
* Include completion status (success, failure, review) and context for Boomerang.
|
||||
* Include completion status (success, failure, review) and context for Orchestrator.
|
||||
* Signal completion of the *specific delegated architectural task*.
|
||||
|
||||
mode_triggers:
|
||||
# Conditions that might trigger a switch TO Architect mode (typically orchestrated BY Boomerang based on needs identified by other modes or the user)
|
||||
# Conditions that might trigger a switch TO Architect mode (typically orchestrated BY Orchestrator based on needs identified by other modes or the user)
|
||||
architect:
|
||||
- condition: needs_architectural_design # e.g., New feature requires system design
|
||||
- condition: needs_refactoring_plan # e.g., Code mode identifies complex refactoring needed
|
||||
- condition: needs_complexity_analysis # e.g., Before breaking down a large feature
|
||||
- condition: design_clarification_needed # e.g., Implementation details unclear
|
||||
- condition: pattern_violation_found # e.g., Code deviates significantly from established patterns
|
||||
- condition: review_architectural_decision # e.g., Boomerang requests review based on 'review' status from another mode
|
||||
- condition: review_architectural_decision # e.g., Orchestrator requests review based on 'review' status from another mode
|
||||
@@ -9,16 +9,16 @@
|
||||
|
||||
**Information Retrieval & Explanation Role (Delegated Tasks):**
|
||||
|
||||
Your primary role when activated via `new_task` by the Boomerang (orchestrator) mode is to act as a specialized technical assistant. Focus *exclusively* on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||
Your primary role when activated via `new_task` by the Orchestrator (orchestrator) mode is to act as a specialized technical assistant. Focus *exclusively* on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||
|
||||
1. **Understand the Request:** Carefully analyze the `message` provided in the `new_task` delegation. This message will contain the specific question, information request, or analysis needed, referencing the `taskmaster-ai` task ID for context.
|
||||
2. **Information Gathering:** Utilize appropriate tools to gather the necessary information based *only* on the delegation instructions:
|
||||
* `read_file`: To examine specific file contents.
|
||||
* `search_files`: To find patterns or specific text across the project.
|
||||
* `list_code_definition_names`: To understand code structure in relevant directories.
|
||||
* `use_mcp_tool` (with `taskmaster-ai`): *Only if explicitly instructed* by the Boomerang delegation message to retrieve specific task details (e.g., using `get_task`).
|
||||
* `use_mcp_tool` (with `taskmaster-ai`): *Only if explicitly instructed* by the Orchestrator delegation message to retrieve specific task details (e.g., using `get_task`).
|
||||
3. **Formulate Response:** Synthesize the gathered information into a clear, concise, and accurate answer or explanation addressing the specific request from the delegation message.
|
||||
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to process and potentially update `taskmaster-ai`. Include:
|
||||
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to process and potentially update `taskmaster-ai`. Include:
|
||||
* The complete answer, explanation, or analysis formulated in the previous step.
|
||||
* Completion status (success, failure - e.g., if information could not be found).
|
||||
* Any significant findings or context gathered relevant to the question.
|
||||
@@ -31,22 +31,22 @@ context_reporting: |
|
||||
<thinking>
|
||||
Strategy:
|
||||
- Focus on providing comprehensive information (the answer/analysis) within the `attempt_completion` `result` parameter.
|
||||
- Boomerang will use this information to potentially update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- Orchestrator will use this information to potentially update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- My role is to *report* accurately, not *log* directly to Taskmaster.
|
||||
</thinking>
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains the complete and accurate answer/analysis requested by Boomerang.
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains the complete and accurate answer/analysis requested by Orchestrator.
|
||||
- **Content:** Include the full answer, explanation, or analysis results. Cite sources if applicable. Structure the `result` clearly.
|
||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||
- **Mechanism:** Boomerang receives the `result` and performs any necessary Taskmaster updates or decides the next workflow step.
|
||||
- **Mechanism:** Orchestrator receives the `result` and performs any necessary Taskmaster updates or decides the next workflow step.
|
||||
|
||||
**Taskmaster Interaction:**
|
||||
|
||||
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Use (Rare & Specific):** Only use Taskmaster tools (`use_mcp_tool` with `taskmaster-ai`) if *explicitly instructed* by Boomerang within the `new_task` message, and *only* for retrieving information (e.g., `get_task`). Do not update Taskmaster status or content directly.
|
||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Use (Rare & Specific):** Only use Taskmaster tools (`use_mcp_tool` with `taskmaster-ai`) if *explicitly instructed* by Orchestrator within the `new_task` message, and *only* for retrieving information (e.g., `get_task`). Do not update Taskmaster status or content directly.
|
||||
|
||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||
|
||||
# Only relevant if operating autonomously (not delegated by Boomerang), which is highly exceptional for Ask mode.
|
||||
# Only relevant if operating autonomously (not delegated by Orchestrator), which is highly exceptional for Ask mode.
|
||||
taskmaster_strategy:
|
||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||
initialization: |
|
||||
@@ -58,7 +58,7 @@ taskmaster_strategy:
|
||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||
if_uninitialized: |
|
||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
||||
if_ready: |
|
||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context (again, very rare for Ask).
|
||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||
@@ -67,13 +67,13 @@ taskmaster_strategy:
|
||||
**Mode Collaboration & Triggers:**
|
||||
|
||||
mode_collaboration: |
|
||||
# Ask Mode Collaboration: Focuses on receiving tasks from Boomerang and reporting back findings.
|
||||
- Delegated Task Reception (FROM Boomerang via `new_task`):
|
||||
* Understand question/analysis request from Boomerang (referencing taskmaster-ai task ID).
|
||||
# Ask Mode Collaboration: Focuses on receiving tasks from Orchestrator and reporting back findings.
|
||||
- Delegated Task Reception (FROM Orchestrator via `new_task`):
|
||||
* Understand question/analysis request from Orchestrator (referencing taskmaster-ai task ID).
|
||||
* Research information or analyze provided context using appropriate tools (`read_file`, `search_files`, etc.) as instructed.
|
||||
* Formulate answers/explanations strictly within the subtask scope.
|
||||
* Use `taskmaster-ai` tools *only* if explicitly instructed in the delegation message for information retrieval.
|
||||
- Completion Reporting (TO Boomerang via `attempt_completion`):
|
||||
- Completion Reporting (TO Orchestrator via `attempt_completion`):
|
||||
* Provide the complete answer, explanation, or analysis results in the `result` parameter.
|
||||
* Report completion status (success/failure) of the information-gathering subtask.
|
||||
* Cite sources or relevant context found.
|
||||
|
||||
@@ -9,22 +9,22 @@
|
||||
|
||||
**Execution Role (Delegated Tasks):**
|
||||
|
||||
Your primary role is to **execute** tasks delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||
Your primary role is to **execute** tasks delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||
|
||||
1. **Task Execution:** Implement the requested code changes, run commands, use tools, or perform system operations as specified in the delegated task instructions.
|
||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
||||
* Outcome of commands/tool usage.
|
||||
* Summary of code changes made or system operations performed.
|
||||
* Completion status (success, failure, needs review).
|
||||
* Any significant findings, errors encountered, or context gathered.
|
||||
* Links to commits or relevant code sections if applicable.
|
||||
3. **Handling Issues:**
|
||||
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring review (architectural, testing, debugging), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
|
||||
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring review (architectural, testing, debugging), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
|
||||
* **Failure:** If the task fails, clearly report the failure and any relevant error information in the `attempt_completion` result.
|
||||
4. **Taskmaster Interaction:**
|
||||
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
||||
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
|
||||
**Context Reporting Strategy:**
|
||||
|
||||
@@ -32,17 +32,17 @@ context_reporting: |
|
||||
<thinking>
|
||||
Strategy:
|
||||
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
||||
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||
</thinking>
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
|
||||
- **Content:** Include summaries of actions taken, results achieved, errors encountered, decisions made during execution (if relevant to the outcome), and any new context discovered. Structure the `result` clearly.
|
||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
|
||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
|
||||
|
||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||
|
||||
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
||||
taskmaster_strategy:
|
||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||
initialization: |
|
||||
@@ -54,7 +54,7 @@ taskmaster_strategy:
|
||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||
if_uninitialized: |
|
||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
||||
if_ready: |
|
||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||
|
||||
@@ -9,29 +9,29 @@
|
||||
|
||||
**Execution Role (Delegated Tasks):**
|
||||
|
||||
Your primary role is to **execute diagnostic tasks** delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||
Your primary role is to **execute diagnostic tasks** delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||
|
||||
1. **Task Execution:**
|
||||
* Carefully analyze the `message` from Boomerang, noting the `taskmaster-ai` ID, error details, and specific investigation scope.
|
||||
* Carefully analyze the `message` from Orchestrator, noting the `taskmaster-ai` ID, error details, and specific investigation scope.
|
||||
* Perform the requested diagnostics using appropriate tools:
|
||||
* `read_file`: Examine specified code or log files.
|
||||
* `search_files`: Locate relevant code, errors, or patterns.
|
||||
* `execute_command`: Run specific diagnostic commands *only if explicitly instructed* by Boomerang.
|
||||
* `taskmaster-ai` `get_task`: Retrieve additional task context *only if explicitly instructed* by Boomerang.
|
||||
* `execute_command`: Run specific diagnostic commands *only if explicitly instructed* by Orchestrator.
|
||||
* `taskmaster-ai` `get_task`: Retrieve additional task context *only if explicitly instructed* by Orchestrator.
|
||||
* Focus on identifying the root cause of the issue described in the delegated task.
|
||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
||||
* Summary of diagnostic steps taken and findings (e.g., identified root cause, affected areas).
|
||||
* Recommended next steps (e.g., specific code changes for Code mode, further tests for Test mode).
|
||||
* Completion status (success, failure, needs review). Reference the original `taskmaster-ai` task ID.
|
||||
* Any significant context gathered during the investigation.
|
||||
* **Crucially:** Execute *only* the delegated diagnostic task. Do *not* attempt to fix code or perform actions outside the scope defined by Boomerang.
|
||||
* **Crucially:** Execute *only* the delegated diagnostic task. Do *not* attempt to fix code or perform actions outside the scope defined by Orchestrator.
|
||||
3. **Handling Issues:**
|
||||
* **Needs Review:** If the root cause is unclear, requires architectural input, or needs further specialized testing, set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
|
||||
* **Needs Review:** If the root cause is unclear, requires architectural input, or needs further specialized testing, set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
|
||||
* **Failure:** If the diagnostic task cannot be completed (e.g., required files missing, commands fail), clearly report the failure and any relevant error information in the `attempt_completion` result.
|
||||
4. **Taskmaster Interaction:**
|
||||
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
||||
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
|
||||
**Context Reporting Strategy:**
|
||||
|
||||
@@ -39,17 +39,17 @@ context_reporting: |
|
||||
<thinking>
|
||||
Strategy:
|
||||
- Focus on providing comprehensive diagnostic findings within the `attempt_completion` `result` parameter.
|
||||
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask` and decide the next step (e.g., delegate fix to Code mode).
|
||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask` and decide the next step (e.g., delegate fix to Code mode).
|
||||
- My role is to *report* diagnostic findings accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||
</thinking>
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary diagnostic information for Boomerang to understand the issue, update Taskmaster, and plan the next action.
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary diagnostic information for Orchestrator to understand the issue, update Taskmaster, and plan the next action.
|
||||
- **Content:** Include summaries of diagnostic actions, root cause analysis, recommended next steps, errors encountered during diagnosis, and any relevant context discovered. Structure the `result` clearly.
|
||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates and subsequent delegation.
|
||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates and subsequent delegation.
|
||||
|
||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||
|
||||
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
||||
taskmaster_strategy:
|
||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||
initialization: |
|
||||
@@ -61,7 +61,7 @@ taskmaster_strategy:
|
||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||
if_uninitialized: |
|
||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
||||
if_ready: |
|
||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||
|
||||
@@ -70,52 +70,52 @@ taskmaster_strategy:
|
||||
**Mode Collaboration & Triggers:**
|
||||
|
||||
mode_collaboration: |
|
||||
# Collaboration definitions for how Boomerang orchestrates and interacts.
|
||||
# Boomerang delegates via `new_task` using taskmaster-ai for task context,
|
||||
# Collaboration definitions for how Orchestrator orchestrates and interacts.
|
||||
# Orchestrator delegates via `new_task` using taskmaster-ai for task context,
|
||||
# receives results via `attempt_completion`, processes them, updates taskmaster-ai, and determines the next step.
|
||||
|
||||
1. Architect Mode Collaboration: # Interaction initiated BY Boomerang
|
||||
1. Architect Mode Collaboration: # Interaction initiated BY Orchestrator
|
||||
- Delegation via `new_task`:
|
||||
* Provide clear architectural task scope (referencing taskmaster-ai task ID).
|
||||
* Request design, structure, planning based on taskmaster context.
|
||||
- Completion Reporting TO Boomerang: # Receiving results FROM Architect via attempt_completion
|
||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Architect via attempt_completion
|
||||
* Expect design decisions, artifacts created, completion status (taskmaster-ai task ID).
|
||||
* Expect context needed for subsequent implementation delegation.
|
||||
|
||||
2. Test Mode Collaboration: # Interaction initiated BY Boomerang
|
||||
2. Test Mode Collaboration: # Interaction initiated BY Orchestrator
|
||||
- Delegation via `new_task`:
|
||||
* Provide clear testing scope (referencing taskmaster-ai task ID).
|
||||
* Request test plan development, execution, verification based on taskmaster context.
|
||||
- Completion Reporting TO Boomerang: # Receiving results FROM Test via attempt_completion
|
||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Test via attempt_completion
|
||||
* Expect summary of test results (pass/fail, coverage), completion status (taskmaster-ai task ID).
|
||||
* Expect details on bugs or validation issues.
|
||||
|
||||
3. Debug Mode Collaboration: # Interaction initiated BY Boomerang
|
||||
3. Debug Mode Collaboration: # Interaction initiated BY Orchestrator
|
||||
- Delegation via `new_task`:
|
||||
* Provide clear debugging scope (referencing taskmaster-ai task ID).
|
||||
* Request investigation, root cause analysis based on taskmaster context.
|
||||
- Completion Reporting TO Boomerang: # Receiving results FROM Debug via attempt_completion
|
||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Debug via attempt_completion
|
||||
* Expect summary of findings (root cause, affected areas), completion status (taskmaster-ai task ID).
|
||||
* Expect recommended fixes or next diagnostic steps.
|
||||
|
||||
4. Ask Mode Collaboration: # Interaction initiated BY Boomerang
|
||||
4. Ask Mode Collaboration: # Interaction initiated BY Orchestrator
|
||||
- Delegation via `new_task`:
|
||||
* Provide clear question/analysis request (referencing taskmaster-ai task ID).
|
||||
* Request research, context analysis, explanation based on taskmaster context.
|
||||
- Completion Reporting TO Boomerang: # Receiving results FROM Ask via attempt_completion
|
||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Ask via attempt_completion
|
||||
* Expect answers, explanations, analysis results, completion status (taskmaster-ai task ID).
|
||||
* Expect cited sources or relevant context found.
|
||||
|
||||
5. Code Mode Collaboration: # Interaction initiated BY Boomerang
|
||||
5. Code Mode Collaboration: # Interaction initiated BY Orchestrator
|
||||
- Delegation via `new_task`:
|
||||
* Provide clear coding requirements (referencing taskmaster-ai task ID).
|
||||
* Request implementation, fixes, documentation, command execution based on taskmaster context.
|
||||
- Completion Reporting TO Boomerang: # Receiving results FROM Code via attempt_completion
|
||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Code via attempt_completion
|
||||
* Expect outcome of commands/tool usage, summary of code changes/operations, completion status (taskmaster-ai task ID).
|
||||
* Expect links to commits or relevant code sections if relevant.
|
||||
|
||||
7. Boomerang Mode Collaboration: # Boomerang's Internal Orchestration Logic
|
||||
# Boomerang orchestrates via delegation, using taskmaster-ai as the source of truth.
|
||||
7. Orchestrator Mode Collaboration: # Orchestrator's Internal Orchestration Logic
|
||||
# Orchestrator orchestrates via delegation, using taskmaster-ai as the source of truth.
|
||||
- Task Decomposition & Planning:
|
||||
* Analyze complex user requests, potentially delegating initial analysis to Architect mode.
|
||||
* Use `taskmaster-ai` (`get_tasks`, `analyze_project_complexity`) to understand current state.
|
||||
@@ -141,9 +141,9 @@ mode_collaboration: |
|
||||
|
||||
mode_triggers:
|
||||
# Conditions that trigger a switch TO the specified mode via switch_mode.
|
||||
# Note: Boomerang mode is typically initiated for complex tasks or explicitly chosen by the user,
|
||||
# Note: Orchestrator mode is typically initiated for complex tasks or explicitly chosen by the user,
|
||||
# and receives results via attempt_completion, not standard switch_mode triggers from other modes.
|
||||
# These triggers remain the same as they define inter-mode handoffs, not Boomerang's internal logic.
|
||||
# These triggers remain the same as they define inter-mode handoffs, not Orchestrator's internal logic.
|
||||
|
||||
architect:
|
||||
- condition: needs_architectural_changes
|
||||
@@ -9,22 +9,22 @@
|
||||
|
||||
**Execution Role (Delegated Tasks):**
|
||||
|
||||
Your primary role is to **execute** testing tasks delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID and its associated context (e.g., `testStrategy`).
|
||||
Your primary role is to **execute** testing tasks delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID and its associated context (e.g., `testStrategy`).
|
||||
|
||||
1. **Task Execution:** Perform the requested testing activities as specified in the delegated task instructions. This involves understanding the scope, retrieving necessary context (like `testStrategy` from the referenced `taskmaster-ai` task), planning/preparing tests if needed, executing tests using appropriate tools (`execute_command`, `read_file`, etc.), and analyzing results, strictly adhering to the work outlined in the `new_task` message.
|
||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
||||
* Summary of testing activities performed (e.g., tests planned, executed).
|
||||
* Concise results/outcome (e.g., pass/fail counts, overall status, coverage information if applicable).
|
||||
* Completion status (success, failure, needs review - e.g., if tests reveal significant issues needing broader attention).
|
||||
* Any significant findings (e.g., details of bugs, errors, or validation issues found).
|
||||
* Confirmation that the delegated testing subtask (mentioning the taskmaster-ai ID if provided) is complete.
|
||||
3. **Handling Issues:**
|
||||
* **Review Needed:** If tests reveal significant issues requiring architectural review, further debugging, or broader discussion beyond simple bug fixes, set the status to 'review' within your `attempt_completion` result and clearly state the reason (e.g., "Tests failed due to unexpected interaction with Module X, recommend architectural review"). **Do not delegate directly.** Report back to Boomerang.
|
||||
* **Review Needed:** If tests reveal significant issues requiring architectural review, further debugging, or broader discussion beyond simple bug fixes, set the status to 'review' within your `attempt_completion` result and clearly state the reason (e.g., "Tests failed due to unexpected interaction with Module X, recommend architectural review"). **Do not delegate directly.** Report back to Orchestrator.
|
||||
* **Failure:** If the testing task itself cannot be completed (e.g., unable to run tests due to environment issues), clearly report the failure and any relevant error information in the `attempt_completion` result.
|
||||
4. **Taskmaster Interaction:**
|
||||
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
||||
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||
|
||||
**Context Reporting Strategy:**
|
||||
|
||||
@@ -32,17 +32,17 @@ context_reporting: |
|
||||
<thinking>
|
||||
Strategy:
|
||||
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
||||
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||
</thinking>
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
|
||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
|
||||
- **Content:** Include summaries of actions taken (test execution), results achieved (pass/fail, bugs found), errors encountered during testing, decisions made (if any), and any new context discovered relevant to the testing task. Structure the `result` clearly.
|
||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
|
||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
|
||||
|
||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||
|
||||
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
||||
taskmaster_strategy:
|
||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||
initialization: |
|
||||
@@ -54,7 +54,7 @@ taskmaster_strategy:
|
||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||
if_uninitialized: |
|
||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
||||
if_ready: |
|
||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||
|
||||
@@ -6,7 +6,8 @@
|
||||
".changeset",
|
||||
"tasks",
|
||||
"package-lock.json",
|
||||
"tests/fixture/*.json"
|
||||
"tests/fixture/*.json",
|
||||
"dist"
|
||||
]
|
||||
},
|
||||
"formatter": {
|
||||
|
||||
@@ -41,13 +41,14 @@ Taskmaster uses two primary methods for configuration:
|
||||
"defaultTag": "master",
|
||||
"projectName": "Your Project Name",
|
||||
"ollamaBaseURL": "http://localhost:11434/api",
|
||||
"azureBaseURL": "https://your-endpoint.azure.com/",
|
||||
"azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
|
||||
"vertexProjectId": "your-gcp-project-id",
|
||||
"vertexLocation": "us-central1"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
|
||||
|
||||
- For projects that haven't migrated to the new structure yet.
|
||||
@@ -72,6 +73,7 @@ Taskmaster uses two primary methods for configuration:
|
||||
- `XAI_API_KEY`: Your X-AI API key.
|
||||
- **Optional Endpoint Overrides:**
|
||||
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
|
||||
- **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
|
||||
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
|
||||
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
|
||||
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
|
||||
@@ -128,16 +130,19 @@ ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
|
||||
PERPLEXITY_API_KEY=pplx-your-key-here
|
||||
# OPENAI_API_KEY=sk-your-key-here
|
||||
# GOOGLE_API_KEY=AIzaSy...
|
||||
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
|
||||
# etc.
|
||||
|
||||
# Optional Endpoint Overrides
|
||||
# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
|
||||
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
|
||||
# OPENAI_BASE_URL=https://api.third-party.com/v1
|
||||
#
|
||||
# Azure OpenAI Configuration
|
||||
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ or https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
|
||||
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
|
||||
|
||||
# Google Vertex AI Configuration (Required if using 'vertex' provider)
|
||||
# VERTEX_PROJECT_ID=your-gcp-project-id
|
||||
# VERTEX_LOCATION=us-central1
|
||||
# GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
@@ -205,3 +210,104 @@ Google Vertex AI is Google Cloud's enterprise AI platform and requires specific
|
||||
"vertexLocation": "us-central1"
|
||||
}
|
||||
```
|
||||
|
||||
### Azure OpenAI Configuration
|
||||
|
||||
Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure cloud platform and requires specific configuration:
|
||||
|
||||
1. **Prerequisites**:
|
||||
- An Azure account with an active subscription
|
||||
- Azure OpenAI service resource created in the Azure portal
|
||||
- Azure OpenAI API key and endpoint URL
|
||||
- Deployed models (e.g., gpt-4o, gpt-4o-mini, gpt-4.1, etc) in your Azure OpenAI resource
|
||||
|
||||
2. **Authentication**:
|
||||
- Set the `AZURE_OPENAI_API_KEY` environment variable with your Azure OpenAI API key
|
||||
- Configure the endpoint URL using one of the methods below
|
||||
|
||||
3. **Configuration Options**:
|
||||
|
||||
**Option 1: Using Global Azure Base URL (affects all Azure models)**
|
||||
```json
|
||||
// In .taskmaster/config.json
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o",
|
||||
"maxTokens": 16000,
|
||||
"temperature": 0.7
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o-mini",
|
||||
"maxTokens": 10000,
|
||||
"temperature": 0.7
|
||||
}
|
||||
},
|
||||
"global": {
|
||||
"azureBaseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Option 2: Using Per-Model Base URLs (recommended for flexibility)**
|
||||
```json
|
||||
// In .taskmaster/config.json
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o",
|
||||
"maxTokens": 16000,
|
||||
"temperature": 0.7,
|
||||
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
},
|
||||
"research": {
|
||||
"provider": "perplexity",
|
||||
"modelId": "sonar-pro",
|
||||
"maxTokens": 8700,
|
||||
"temperature": 0.1
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o-mini",
|
||||
"maxTokens": 10000,
|
||||
"temperature": 0.7,
|
||||
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
4. **Environment Variables**:
|
||||
```bash
|
||||
# In .env file
|
||||
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
|
||||
|
||||
# Optional: Override endpoint for all Azure models
|
||||
AZURE_OPENAI_ENDPOINT=https://your-resource-name.azure.com/openai/deployments
|
||||
```
|
||||
|
||||
5. **Important Notes**:
|
||||
- **Model Deployment Names**: The `modelId` in your configuration should match the **deployment name** you created in Azure OpenAI Studio, not the underlying model name
|
||||
- **Base URL Priority**: Per-model `baseURL` settings override the global `azureBaseURL` setting
|
||||
- **Endpoint Format**: When using per-model `baseURL`, use the full path including `/openai/deployments`
|
||||
|
||||
6. **Troubleshooting**:
|
||||
|
||||
**"Resource not found" errors:**
|
||||
- Ensure your `baseURL` includes the full path: `https://your-resource-name.openai.azure.com/openai/deployments`
|
||||
- Verify that your deployment name in `modelId` exactly matches what's configured in Azure OpenAI Studio
|
||||
- Check that your Azure OpenAI resource is in the correct region and properly deployed
|
||||
|
||||
**Authentication errors:**
|
||||
- Verify your `AZURE_OPENAI_API_KEY` is correct and has not expired
|
||||
- Ensure your Azure OpenAI resource has the necessary permissions
|
||||
- Check that your subscription has not been suspended or reached quota limits
|
||||
|
||||
**Model availability errors:**
|
||||
- Confirm the model is deployed in your Azure OpenAI resource
|
||||
- Verify the deployment name matches your configuration exactly (case-sensitive)
|
||||
- Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
|
||||
- Ensure youre not getting rate limited by `maxTokens` maintain appropriate Tokens per Minute Rate Limit (TPM) in your deployment.
|
||||
|
||||
256
docs/models.md
256
docs/models.md
@@ -1,131 +1,143 @@
|
||||
# Available Models as of June 20, 2025
|
||||
# Available Models as of June 21, 2025
|
||||
|
||||
## Main Models
|
||||
|
||||
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
|
||||
| ---------- | ---------------------------------------------- | --------- | ---------- | ----------- |
|
||||
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
|
||||
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
|
||||
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
|
||||
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
|
||||
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
|
||||
| Provider    | Model Name                                     | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock     | us.anthropic.claude-3-7-sonnet-20250219-v1:0   | 0.623     | 3          | 15          |
| anthropic   | claude-sonnet-4-20250514                       | 0.727     | 3          | 15          |
| anthropic   | claude-opus-4-20250514                         | 0.725     | 15         | 75          |
| anthropic   | claude-3-7-sonnet-20250219                     | 0.623     | 3          | 15          |
| anthropic   | claude-3-5-sonnet-20241022                     | 0.49      | 3          | 15          |
| azure       | gpt-4o                                         | 0.332     | 2.5        | 10          |
| azure       | gpt-4o-mini                                    | 0.3       | 0.15       | 0.6         |
| azure       | gpt-4-1                                        | —         | 2          | 10          |
| openai      | gpt-4o                                         | 0.332     | 2.5        | 10          |
| openai      | o1                                             | 0.489     | 15         | 60          |
| openai      | o3                                             | 0.5       | 2          | 8           |
| openai      | o3-mini                                        | 0.493     | 1.1        | 4.4         |
| openai      | o4-mini                                        | 0.45      | 1.1        | 4.4         |
| openai      | o1-mini                                        | 0.4       | 1.1        | 4.4         |
| openai      | o1-pro                                         | —         | 150        | 600         |
| openai      | gpt-4-5-preview                                | 0.38      | 75         | 150         |
| openai      | gpt-4-1-mini                                   | —         | 0.4        | 1.6         |
| openai      | gpt-4-1-nano                                   | —         | 0.1        | 0.4         |
| openai      | gpt-4o-mini                                    | 0.3       | 0.15       | 0.6         |
| google      | gemini-2.5-pro-preview-05-06                   | 0.638     | —          | —           |
| google      | gemini-2.5-pro-preview-03-25                   | 0.638     | —          | —           |
| google      | gemini-2.5-flash-preview-04-17                 | 0.604     | —          | —           |
| google      | gemini-2.0-flash                               | 0.518     | 0.15       | 0.6         |
| google      | gemini-2.0-flash-lite                          | —         | —          | —           |
| perplexity  | sonar-pro                                      | —         | 3          | 15          |
| perplexity  | sonar-reasoning-pro                            | 0.211     | 2          | 8           |
| perplexity  | sonar-reasoning                                | 0.211     | 1          | 5           |
| xai         | grok-3                                         | —         | 3          | 15          |
| xai         | grok-3-fast                                    | —         | 5          | 25          |
| ollama      | devstral:latest                                | —         | 0          | 0           |
| ollama      | qwen3:latest                                   | —         | 0          | 0           |
| ollama      | qwen3:14b                                      | —         | 0          | 0           |
| ollama      | qwen3:32b                                      | —         | 0          | 0           |
| ollama      | mistral-small3.1:latest                        | —         | 0          | 0           |
| ollama      | llama3.3:latest                                | —         | 0          | 0           |
| ollama      | phi4:latest                                    | —         | 0          | 0           |
| openrouter  | google/gemini-2.5-flash-preview-05-20          | —         | 0.15       | 0.6         |
| openrouter  | google/gemini-2.5-flash-preview-05-20:thinking | —         | 0.15       | 3.5         |
| openrouter  | google/gemini-2.5-pro-exp-03-25                | —         | 0          | 0           |
| openrouter  | deepseek/deepseek-chat-v3-0324:free            | —         | 0          | 0           |
| openrouter  | deepseek/deepseek-chat-v3-0324                 | —         | 0.27       | 1.1         |
| openrouter  | openai/gpt-4.1                                 | —         | 2          | 8           |
| openrouter  | openai/gpt-4.1-mini                            | —         | 0.4        | 1.6         |
| openrouter  | openai/gpt-4.1-nano                            | —         | 0.1        | 0.4         |
| openrouter  | openai/o3                                      | —         | 10         | 40          |
| openrouter  | openai/codex-mini                              | —         | 1.5        | 6           |
| openrouter  | openai/gpt-4o-mini                             | —         | 0.15       | 0.6         |
| openrouter  | openai/o4-mini                                 | 0.45      | 1.1        | 4.4         |
| openrouter  | openai/o4-mini-high                            | —         | 1.1        | 4.4         |
| openrouter  | openai/o1-pro                                  | —         | 150        | 600         |
| openrouter  | meta-llama/llama-3.3-70b-instruct              | —         | 120        | 600         |
| openrouter  | meta-llama/llama-4-maverick                    | —         | 0.18       | 0.6         |
| openrouter  | meta-llama/llama-4-scout                       | —         | 0.08       | 0.3         |
| openrouter  | qwen/qwen-max                                  | —         | 1.6        | 6.4         |
| openrouter  | qwen/qwen-turbo                                | —         | 0.05       | 0.2         |
| openrouter  | qwen/qwen3-235b-a22b                           | —         | 0.14       | 2           |
| openrouter  | mistralai/mistral-small-3.1-24b-instruct:free  | —         | 0          | 0           |
| openrouter  | mistralai/mistral-small-3.1-24b-instruct       | —         | 0.1        | 0.3         |
| openrouter  | mistralai/devstral-small                       | —         | 0.1        | 0.3         |
| openrouter  | mistralai/mistral-nemo                         | —         | 0.03       | 0.07        |
| openrouter  | thudm/glm-4-32b:free                           | —         | 0          | 0           |
| claude-code | opus                                           | 0.725     | 0          | 0           |
| claude-code | sonnet                                         | 0.727     | 0          | 0           |

## Research Models

| Provider    | Model Name                 | SWE Score | Input Cost | Output Cost |
| ----------- | -------------------------- | --------- | ---------- | ----------- |
| bedrock     | us.deepseek.r1-v1:0        | —         | 1.35       | 5.4         |
| openai      | gpt-4o-search-preview      | 0.33      | 2.5        | 10          |
| openai      | gpt-4o-mini-search-preview | 0.3       | 0.15       | 0.6         |
| perplexity  | sonar-pro                  | —         | 3          | 15          |
| perplexity  | sonar                      | —         | 1          | 1           |
| perplexity  | deep-research              | 0.211     | 2          | 8           |
| perplexity  | sonar-reasoning-pro        | 0.211     | 2          | 8           |
| perplexity  | sonar-reasoning            | 0.211     | 1          | 5           |
| xai         | grok-3                     | —         | 3          | 15          |
| xai         | grok-3-fast                | —         | 5          | 25          |
| claude-code | opus                       | 0.725     | 0          | 0           |
| claude-code | sonnet                     | 0.727     | 0          | 0           |

## Fallback Models

| Provider    | Model Name                                     | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock     | us.anthropic.claude-3-7-sonnet-20250219-v1:0   | 0.623     | 3          | 15          |
| anthropic   | claude-sonnet-4-20250514                       | 0.727     | 3          | 15          |
| anthropic   | claude-opus-4-20250514                         | 0.725     | 15         | 75          |
| anthropic   | claude-3-7-sonnet-20250219                     | 0.623     | 3          | 15          |
| anthropic   | claude-3-5-sonnet-20241022                     | 0.49      | 3          | 15          |
| azure       | gpt-4o                                         | 0.332     | 2.5        | 10          |
| azure       | gpt-4o-mini                                    | 0.3       | 0.15       | 0.6         |
| azure       | gpt-4-1                                        | —         | 2          | 10          |
| openai      | gpt-4o                                         | 0.332     | 2.5        | 10          |
| openai      | o3                                             | 0.5       | 2          | 8           |
| openai      | o4-mini                                        | 0.45      | 1.1        | 4.4         |
| google      | gemini-2.5-pro-preview-05-06                   | 0.638     | —          | —           |
| google      | gemini-2.5-pro-preview-03-25                   | 0.638     | —          | —           |
| google      | gemini-2.5-flash-preview-04-17                 | 0.604     | —          | —           |
| google      | gemini-2.0-flash                               | 0.518     | 0.15       | 0.6         |
| google      | gemini-2.0-flash-lite                          | —         | —          | —           |
| perplexity  | sonar-reasoning-pro                            | 0.211     | 2          | 8           |
| perplexity  | sonar-reasoning                                | 0.211     | 1          | 5           |
| xai         | grok-3                                         | —         | 3          | 15          |
| xai         | grok-3-fast                                    | —         | 5          | 25          |
| ollama      | devstral:latest                                | —         | 0          | 0           |
| ollama      | qwen3:latest                                   | —         | 0          | 0           |
| ollama      | qwen3:14b                                      | —         | 0          | 0           |
| ollama      | qwen3:32b                                      | —         | 0          | 0           |
| ollama      | mistral-small3.1:latest                        | —         | 0          | 0           |
| ollama      | llama3.3:latest                                | —         | 0          | 0           |
| ollama      | phi4:latest                                    | —         | 0          | 0           |
| openrouter  | google/gemini-2.5-flash-preview-05-20          | —         | 0.15       | 0.6         |
| openrouter  | google/gemini-2.5-flash-preview-05-20:thinking | —         | 0.15       | 3.5         |
| openrouter  | google/gemini-2.5-pro-exp-03-25                | —         | 0          | 0           |
| openrouter  | deepseek/deepseek-chat-v3-0324:free            | —         | 0          | 0           |
| openrouter  | openai/gpt-4.1                                 | —         | 2          | 8           |
| openrouter  | openai/gpt-4.1-mini                            | —         | 0.4        | 1.6         |
| openrouter  | openai/gpt-4.1-nano                            | —         | 0.1        | 0.4         |
| openrouter  | openai/o3                                      | —         | 10         | 40          |
| openrouter  | openai/codex-mini                              | —         | 1.5        | 6           |
| openrouter  | openai/gpt-4o-mini                             | —         | 0.15       | 0.6         |
| openrouter  | openai/o4-mini                                 | 0.45      | 1.1        | 4.4         |
| openrouter  | openai/o4-mini-high                            | —         | 1.1        | 4.4         |
| openrouter  | openai/o1-pro                                  | —         | 150        | 600         |
| openrouter  | meta-llama/llama-3.3-70b-instruct              | —         | 120        | 600         |
| openrouter  | meta-llama/llama-4-maverick                    | —         | 0.18       | 0.6         |
| openrouter  | meta-llama/llama-4-scout                       | —         | 0.08       | 0.3         |
| openrouter  | qwen/qwen-max                                  | —         | 1.6        | 6.4         |
| openrouter  | qwen/qwen-turbo                                | —         | 0.05       | 0.2         |
| openrouter  | qwen/qwen3-235b-a22b                           | —         | 0.14       | 2           |
| openrouter  | mistralai/mistral-small-3.1-24b-instruct:free  | —         | 0          | 0           |
| openrouter  | mistralai/mistral-small-3.1-24b-instruct       | —         | 0.1        | 0.3         |
| openrouter  | mistralai/mistral-nemo                         | —         | 0.03       | 0.07        |
| openrouter  | thudm/glm-4-32b:free                           | —         | 0          | 0           |
| claude-code | opus                                           | 0.725     | 0          | 0           |
| claude-code | sonnet                                         | 0.727     | 0          | 0           |

### index.js
```diff
@@ -83,6 +83,11 @@ if (import.meta.url === `file://${process.argv[1]}`) {
 		.option('--skip-install', 'Skip installing dependencies')
 		.option('--dry-run', 'Show what would be done without making changes')
 		.option('--aliases', 'Add shell aliases (tm, taskmaster)')
+		.option('--no-aliases', 'Skip shell aliases (tm, taskmaster)')
+		.option('--git', 'Initialize Git repository')
+		.option('--no-git', 'Skip Git repository initialization')
+		.option('--git-tasks', 'Store tasks in Git')
+		.option('--no-git-tasks', 'No Git storage of tasks')
 		.action(async (cmdOptions) => {
 			try {
 				await runInitCLI(cmdOptions);
```

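The new paired flags (`--git` / `--no-git`, `--git-tasks` / `--no-git-tasks`) are tri-state: each resolves to true, false, or undefined when neither variant is passed, and the undefined case is what lets `init` fall back to an interactive prompt. A minimal sketch of those semantics (a hand-rolled parser for illustration, not Commander itself, which applies its own last-flag-wins precedence):

```javascript
// Resolve a paired boolean flag (--name / --no-name) from argv.
// Returns true, false, or undefined (neither flag given → prompt later).
function resolvePairedFlag(argv, name) {
	if (argv.includes(`--no-${name}`)) return false;
	if (argv.includes(`--${name}`)) return true;
	return undefined;
}

const argv = ['init', '--no-git', '--aliases'];
const git = resolvePairedFlag(argv, 'git'); // false → skip git init
const aliases = resolvePairedFlag(argv, 'aliases'); // true → add aliases
const gitTasks = resolvePairedFlag(argv, 'git-tasks'); // undefined → ask the user
console.log(git, aliases, gitTasks);
```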
```diff
@@ -26,6 +26,7 @@ import { createLogWrapper } from '../../tools/utils.js';
  * @param {string} [args.prompt] - Additional context to guide subtask generation.
  * @param {boolean} [args.force] - Force expansion even if subtasks exist.
  * @param {string} [args.projectRoot] - Project root directory.
+ * @param {string} [args.tag] - Tag for the task
  * @param {Object} log - Logger object
  * @param {Object} context - Context object containing session
  * @param {Object} [context.session] - MCP Session object
@@ -34,7 +35,8 @@ import { createLogWrapper } from '../../tools/utils.js';
 export async function expandTaskDirect(args, log, context = {}) {
 	const { session } = context; // Extract session
 	// Destructure expected args, including projectRoot
-	const { tasksJsonPath, id, num, research, prompt, force, projectRoot } = args;
+	const { tasksJsonPath, id, num, research, prompt, force, projectRoot, tag } =
+		args;
 
 	// Log session root data for debugging
 	log.info(
@@ -194,7 +196,8 @@ export async function expandTaskDirect(args, log, context = {}) {
 			session,
 			projectRoot,
 			commandName: 'expand-task',
-			outputType: 'mcp'
+			outputType: 'mcp',
+			tag
 		},
 		forceFlag
 	);
```

```diff
@@ -11,7 +11,7 @@ import { convertAllRulesToProfileRules } from '../../../../src/utils/rule-transf
 /**
  * Direct function wrapper for initializing a project.
  * Derives target directory from session, sets CWD, and calls core init logic.
- * @param {object} args - Arguments containing initialization options (addAliases, skipInstall, yes, projectRoot, rules)
+ * @param {object} args - Arguments containing initialization options (addAliases, initGit, storeTasksInGit, skipInstall, yes, projectRoot, rules)
  * @param {object} log - The FastMCP logger instance.
  * @param {object} context - The context object, must contain { session }.
  * @returns {Promise<{success: boolean, data?: any, error?: {code: string, message: string}}>} - Standard result object.
@@ -65,7 +65,9 @@ export async function initializeProjectDirect(args, log, context = {}) {
 	// Construct options ONLY from the relevant flags in args
 	// The core initializeProject operates in the current CWD, which we just set
 	const options = {
-		aliases: args.addAliases,
+		addAliases: args.addAliases,
+		initGit: args.initGit,
+		storeTasksInGit: args.storeTasksInGit,
 		skipInstall: args.skipInstall,
 		yes: true // Force yes mode
 	};
```

```diff
@@ -45,7 +45,8 @@ export function registerExpandTaskTool(server) {
 				.boolean()
 				.optional()
 				.default(false)
-				.describe('Force expansion even if subtasks exist')
+				.describe('Force expansion even if subtasks exist'),
+			tag: z.string().optional().describe('Tag context to operate on')
 		}),
 		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
@@ -73,7 +74,8 @@ export function registerExpandTaskTool(server) {
 					research: args.research,
 					prompt: args.prompt,
 					force: args.force,
-					projectRoot: args.projectRoot
+					projectRoot: args.projectRoot,
+					tag: args.tag || 'master'
 				},
 				log,
 				{ session }
```

```diff
@@ -23,8 +23,18 @@ export function registerInitializeProjectTool(server) {
 			addAliases: z
 				.boolean()
 				.optional()
-				.default(false)
+				.default(true)
 				.describe('Add shell aliases (tm, taskmaster) to shell config file.'),
+			initGit: z
+				.boolean()
+				.optional()
+				.default(true)
+				.describe('Initialize Git repository in project root.'),
+			storeTasksInGit: z
+				.boolean()
+				.optional()
+				.default(true)
+				.describe('Store tasks in Git (tasks.json and tasks/ directory).'),
 			yes: z
 				.boolean()
 				.optional()
```

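Net effect of this schema change: `addAliases` flips from default-false to default-true, and the two new booleans also default to true when the MCP client omits them. The resolution semantics boil down to the following (a sketch of the behavior, not the actual zod code; `resolveInitDefaults` is a hypothetical name):

```javascript
// Mirror of the zod defaults: omitted flags resolve to true.
function resolveInitDefaults(args = {}) {
	return {
		addAliases: args.addAliases ?? true, // default was false before v0.18.0
		initGit: args.initGit ?? true,
		storeTasksInGit: args.storeTasksInGit ?? true
	};
}

// Only explicitly-false values survive; everything omitted becomes true.
console.log(resolveInitDefaults({ initGit: false }));
```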
### package-lock.json
```diff
@@ -1,6 +1,6 @@
 {
 	"name": "task-master-ai",
-	"version": "0.17.1",
+	"version": "0.18.0",
 	"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
 	"main": "index.js",
 	"type": "module",
@@ -12317,4 +12317,4 @@
 			}
 		}
 	}
-}
+}
```

### scripts/init.js
```diff
@@ -23,6 +23,8 @@ import figlet from 'figlet';
 import boxen from 'boxen';
 import gradient from 'gradient-string';
 import { isSilentMode } from './modules/utils.js';
+import { insideGitWorkTree } from './modules/utils/git-utils.js';
+import { manageGitignoreFile } from '../src/utils/manage-gitignore.js';
 import { RULE_PROFILES } from '../src/constants/profiles.js';
 import {
 	convertAllRulesToProfileRules,
```

```diff
@@ -320,16 +322,60 @@ async function initializeProject(options = {}) {
 	// console.log('==================================================');
 	// }
 
+	// Handle boolean aliases flags
+	if (options.aliases === true) {
+		options.addAliases = true; // --aliases flag provided
+	} else if (options.aliases === false) {
+		options.addAliases = false; // --no-aliases flag provided
+	}
+	// If options.aliases and options.noAliases are undefined, we'll prompt for it
+
+	// Handle boolean git flags
+	if (options.git === true) {
+		options.initGit = true; // --git flag provided
+	} else if (options.git === false) {
+		options.initGit = false; // --no-git flag provided
+	}
+	// If options.git and options.noGit are undefined, we'll prompt for it
+
+	// Handle boolean gitTasks flags
+	if (options.gitTasks === true) {
+		options.storeTasksInGit = true; // --git-tasks flag provided
+	} else if (options.gitTasks === false) {
+		options.storeTasksInGit = false; // --no-git-tasks flag provided
+	}
+	// If options.gitTasks and options.noGitTasks are undefined, we'll prompt for it
+
 	const skipPrompts = options.yes || (options.name && options.description);
 
 	// if (!isSilentMode()) {
 	//   console.log('Skip prompts determined:', skipPrompts);
 	// }
 
-	const selectedRuleProfiles =
-		options.rules && Array.isArray(options.rules) && options.rules.length > 0
-			? options.rules
-			: RULE_PROFILES; // Default to all profiles
+	let selectedRuleProfiles;
+	if (options.rulesExplicitlyProvided) {
+		// If --rules flag was used, always respect it.
+		log(
+			'info',
+			`Using rule profiles provided via command line: ${options.rules.join(', ')}`
+		);
+		selectedRuleProfiles = options.rules;
+	} else if (skipPrompts) {
+		// If non-interactive (e.g., --yes) and no rules specified, default to ALL.
+		log(
+			'info',
+			`No rules specified in non-interactive mode, defaulting to all profiles.`
+		);
+		selectedRuleProfiles = RULE_PROFILES;
+	} else {
+		// If interactive and no rules specified, default to NONE.
+		// The 'rules --setup' wizard will handle selection.
+		log(
+			'info',
+			'No rules specified; interactive setup will be launched to select profiles.'
+		);
+		selectedRuleProfiles = [];
+	}
 
 	if (skipPrompts) {
 		if (!isSilentMode()) {
```

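The new three-way branch reduces to a small decision table: an explicit `--rules` flag always wins, non-interactive runs default to every profile, and interactive runs default to none so the `rules --setup` wizard can take over. Distilled (a sketch with a stand-in profile list; the real `RULE_PROFILES` ships with task-master, and the real `skipPrompts` check also honors `--name`/`--description`):

```javascript
// Stand-in for RULE_PROFILES (assumed names, for illustration only).
const ALL_PROFILES = ['claude', 'cline', 'codex', 'cursor', 'roo', 'trae', 'vscode', 'windsurf'];

function selectRuleProfiles({ rules, rulesExplicitlyProvided, yes }) {
	if (rulesExplicitlyProvided) return rules; // --rules flag always wins
	if (yes) return ALL_PROFILES; // non-interactive → install everything
	return []; // interactive → let the setup wizard pick
}

console.log(selectRuleProfiles({ rules: ['cursor', 'roo'], rulesExplicitlyProvided: true }));
```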
```diff
@@ -343,21 +389,44 @@ async function initializeProject(options = {}) {
 		const projectVersion = options.version || '0.1.0';
 		const authorName = options.author || 'Vibe coder';
 		const dryRun = options.dryRun || false;
-		const addAliases = options.aliases || false;
+		const addAliases =
+			options.addAliases !== undefined ? options.addAliases : true; // Default to true if not specified
+		const initGit = options.initGit !== undefined ? options.initGit : true; // Default to true if not specified
+		const storeTasksInGit =
+			options.storeTasksInGit !== undefined ? options.storeTasksInGit : true; // Default to true if not specified
 
 		if (dryRun) {
 			log('info', 'DRY RUN MODE: No files will be modified');
 			log('info', 'Would initialize Task Master project');
 			log('info', 'Would create/update necessary project files');
-			if (addAliases) {
-				log('info', 'Would add shell aliases for task-master');
-			}
+
+			// Show flag-specific behavior
+			log(
+				'info',
+				`${addAliases ? 'Would add shell aliases (tm, taskmaster)' : 'Would skip shell aliases'}`
+			);
+			log(
+				'info',
+				`${initGit ? 'Would initialize Git repository' : 'Would skip Git initialization'}`
+			);
+			log(
+				'info',
+				`${storeTasksInGit ? 'Would store tasks in Git' : 'Would exclude tasks from Git'}`
+			);
 
 			return {
 				dryRun: true
 			};
 		}
 
-		createProjectStructure(addAliases, dryRun, options, selectedRuleProfiles);
+		createProjectStructure(
+			addAliases,
+			initGit,
+			storeTasksInGit,
+			dryRun,
+			options,
+			selectedRuleProfiles
+		);
 	} else {
 		// Interactive logic
 		log('info', 'Required options not provided, proceeding with prompts.');
```

```diff
@@ -367,14 +436,45 @@ async function initializeProject(options = {}) {
 			input: process.stdin,
 			output: process.stdout
 		});
-		// Only prompt for shell aliases
-		const addAliasesInput = await promptQuestion(
-			rl,
-			chalk.cyan(
-				'Add shell aliases for task-master? This lets you type "tm" instead of "task-master" (Y/n): '
-			)
-		);
-		const addAliasesPrompted = addAliasesInput.trim().toLowerCase() !== 'n';
+		// Prompt for shell aliases (skip if --aliases or --no-aliases flag was provided)
+		let addAliasesPrompted = true; // Default to true
+		if (options.addAliases !== undefined) {
+			addAliasesPrompted = options.addAliases; // Use flag value if provided
+		} else {
+			const addAliasesInput = await promptQuestion(
+				rl,
+				chalk.cyan(
+					'Add shell aliases for task-master? This lets you type "tm" instead of "task-master" (Y/n): '
+				)
+			);
+			addAliasesPrompted = addAliasesInput.trim().toLowerCase() !== 'n';
+		}
+
+		// Prompt for Git initialization (skip if --git or --no-git flag was provided)
+		let initGitPrompted = true; // Default to true
+		if (options.initGit !== undefined) {
+			initGitPrompted = options.initGit; // Use flag value if provided
+		} else {
+			const gitInitInput = await promptQuestion(
+				rl,
+				chalk.cyan('Initialize a Git repository in project root? (Y/n): ')
+			);
+			initGitPrompted = gitInitInput.trim().toLowerCase() !== 'n';
+		}
+
+		// Prompt for Git tasks storage (skip if --git-tasks or --no-git-tasks flag was provided)
+		let storeGitPrompted = true; // Default to true
+		if (options.storeTasksInGit !== undefined) {
+			storeGitPrompted = options.storeTasksInGit; // Use flag value if provided
+		} else {
+			const gitTasksInput = await promptQuestion(
+				rl,
+				chalk.cyan(
+					'Store tasks in Git (tasks.json and tasks/ directory)? (Y/n): '
+				)
+			);
+			storeGitPrompted = gitTasksInput.trim().toLowerCase() !== 'n';
+		}
 
 		// Confirm settings...
 		console.log('\nTask Master Project settings:');
```

```diff
@@ -384,6 +484,14 @@ async function initializeProject(options = {}) {
 			),
 			chalk.white(addAliasesPrompted ? 'Yes' : 'No')
 		);
+		console.log(
+			chalk.blue('Initialize Git repository in project root:'),
+			chalk.white(initGitPrompted ? 'Yes' : 'No')
+		);
+		console.log(
+			chalk.blue('Store tasks in Git (tasks.json and tasks/ directory):'),
+			chalk.white(storeGitPrompted ? 'Yes' : 'No')
+		);
 
 		const confirmInput = await promptQuestion(
 			rl,
```

```diff
@@ -404,16 +512,6 @@ async function initializeProject(options = {}) {
 				'info',
 				`Using rule profiles provided via command line: ${selectedRuleProfiles.join(', ')}`
 			);
-		} else {
-			try {
-				const targetDir = process.cwd();
-				execSync('npx task-master rules setup', {
-					stdio: 'inherit',
-					cwd: targetDir
-				});
-			} catch (error) {
-				log('error', 'Failed to run interactive rules setup:', error.message);
-			}
 		}
 
 		const dryRun = options.dryRun || false;
```

```diff
@@ -422,9 +520,21 @@ async function initializeProject(options = {}) {
 			log('info', 'DRY RUN MODE: No files will be modified');
 			log('info', 'Would initialize Task Master project');
 			log('info', 'Would create/update necessary project files');
-			if (addAliasesPrompted) {
-				log('info', 'Would add shell aliases for task-master');
-			}
+
+			// Show flag-specific behavior
+			log(
+				'info',
+				`${addAliasesPrompted ? 'Would add shell aliases (tm, taskmaster)' : 'Would skip shell aliases'}`
+			);
+			log(
+				'info',
+				`${initGitPrompted ? 'Would initialize Git repository' : 'Would skip Git initialization'}`
+			);
+			log(
+				'info',
+				`${storeGitPrompted ? 'Would store tasks in Git' : 'Would exclude tasks from Git'}`
+			);
 
 			return {
 				dryRun: true
 			};
```

```diff
@@ -433,13 +543,17 @@ async function initializeProject(options = {}) {
 			// Create structure using only necessary values
 			createProjectStructure(
 				addAliasesPrompted,
+				initGitPrompted,
+				storeGitPrompted,
 				dryRun,
 				options,
 				selectedRuleProfiles
 			);
 			rl.close();
 		} catch (error) {
-			rl.close();
+			if (rl) {
+				rl.close();
+			}
 			log('error', `Error during initialization process: ${error.message}`);
 			process.exit(1);
 		}
```

```diff
@@ -458,9 +572,11 @@ function promptQuestion(rl, question) {
 // Function to create the project structure
 function createProjectStructure(
 	addAliases,
+	initGit,
+	storeTasksInGit,
 	dryRun,
 	options,
-	selectedRuleProfiles = RULE_PROFILES // Default to all rule profiles
+	selectedRuleProfiles = RULE_PROFILES
 ) {
 	const targetDir = process.cwd();
 	log('info', `Initializing project in ${targetDir}`);
```

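The `storeTasksInGit` value threaded through `createProjectStructure` feeds `manageGitignoreFile`, which adjusts the `.gitignore` template. One way such a helper could behave — toggling the task-storage entries between commented (tracked) and active (ignored) — is sketched below; this is assumed behavior for illustration, not the actual implementation in `src/utils/manage-gitignore.js`:

```javascript
// TASK_LINES is an assumption about which entries the gitignore template
// carries for task storage.
const TASK_LINES = ['tasks.json', 'tasks/'];

// Comment or uncomment the task-storage lines depending on whether
// tasks should be committed to Git.
function applyGitTasksPreference(templateContent, storeTasksInGit) {
	return templateContent
		.split('\n')
		.map((line) => {
			const bare = line.replace(/^#\s*/, '').trim();
			if (!TASK_LINES.includes(bare)) return line; // untouched entry
			// storeTasksInGit → keep the entry commented out so Git tracks the files;
			// otherwise leave the entry active so Git ignores them.
			return storeTasksInGit ? `# ${bare}` : bare;
		})
		.join('\n');
}

const template = 'node_modules/\n# tasks.json\n# tasks/';
console.log(applyGitTasksPreference(template, false));
```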
```diff
@@ -507,27 +623,67 @@ function createProjectStructure(
 		}
 	);
 
-	// Copy .gitignore
-	copyTemplateFile('gitignore', path.join(targetDir, GITIGNORE_FILE));
+	// Copy .gitignore with GitTasks preference
+	try {
+		const gitignoreTemplatePath = path.join(
+			__dirname,
+			'..',
+			'assets',
+			'gitignore'
+		);
+		const templateContent = fs.readFileSync(gitignoreTemplatePath, 'utf8');
+		manageGitignoreFile(
+			path.join(targetDir, GITIGNORE_FILE),
+			templateContent,
+			storeTasksInGit,
+			log
+		);
+	} catch (error) {
+		log('error', `Failed to create .gitignore: ${error.message}`);
+	}
 
 	// Copy example_prd.txt to NEW location
 	copyTemplateFile('example_prd.txt', path.join(targetDir, EXAMPLE_PRD_FILE));
 
 	// Initialize git repository if git is available
 	try {
-		if (!fs.existsSync(path.join(targetDir, '.git'))) {
-			log('info', 'Initializing git repository...');
-			execSync('git init', { stdio: 'ignore' });
-			log('success', 'Git repository initialized');
+		if (initGit === false) {
+			log('info', 'Git initialization skipped due to --no-git flag.');
+		} else if (initGit === true) {
+			if (insideGitWorkTree()) {
+				log(
+					'info',
+					'Existing Git repository detected – skipping git init despite --git flag.'
+				);
+			} else {
+				log('info', 'Initializing Git repository due to --git flag...');
+				execSync('git init', { cwd: targetDir, stdio: 'ignore' });
+				log('success', 'Git repository initialized');
+			}
+		} else {
+			// Default behavior when no flag is provided (from interactive prompt)
+			if (insideGitWorkTree()) {
+				log('info', 'Existing Git repository detected – skipping git init.');
+			} else {
+				log(
+					'info',
+					'No Git repository detected. Initializing one in project root...'
+				);
+				execSync('git init', { cwd: targetDir, stdio: 'ignore' });
+				log('success', 'Git repository initialized');
+			}
 		}
 	} catch (error) {
 		log('warn', 'Git not available, skipping repository initialization');
 	}
 
 	// Generate profile rules from assets/rules
-	log('info', 'Generating profile rules from assets/rules...');
-	for (const profileName of selectedRuleProfiles) {
-		_processSingleProfile(profileName);
+	// Only run the manual transformer if rules were provided via flags.
+	// The interactive `rules --setup` wizard handles its own installation.
+	if (options.rulesExplicitlyProvided || options.yes) {
+		log('info', 'Generating profile rules from command-line flags...');
+		for (const profileName of selectedRuleProfiles) {
+			_processSingleProfile(profileName);
+		}
 	}
 
 	// Add shell aliases if requested
```

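The Git branches above hinge on `insideGitWorkTree()`. The conventional way to implement such a check is `git rev-parse --is-inside-work-tree`, which exits non-zero outside a repository (or when git is absent). A sketch of that idiom — the actual helper lives in `scripts/modules/utils/git-utils.js` and may differ:

```javascript
import { execSync } from 'node:child_process';

// Returns true when cwd is inside an existing Git work tree.
function insideGitWorkTree(cwd = process.cwd()) {
	try {
		execSync('git rev-parse --is-inside-work-tree', {
			cwd,
			stdio: 'ignore' // suppress output; only the exit code matters
		});
		return true;
	} catch {
		return false; // git missing, or cwd is not inside a repository
	}
}

console.log(insideGitWorkTree());
```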
```diff
@@ -558,6 +714,49 @@ function createProjectStructure(
 		);
 	}
 
+	// === Add Rule Profiles Setup Step ===
+	if (
+		!isSilentMode() &&
+		!dryRun &&
+		!options?.yes &&
+		!options.rulesExplicitlyProvided
+	) {
+		console.log(
+			boxen(chalk.cyan('Configuring Rule Profiles...'), {
+				padding: 0.5,
+				margin: { top: 1, bottom: 0.5 },
+				borderStyle: 'round',
+				borderColor: 'blue'
+			})
+		);
+		log(
+			'info',
+			'Running interactive rules setup. Please select which rule profiles to include.'
+		);
+		try {
+			execSync('npx task-master rules --setup', {
+				stdio: 'inherit',
+				cwd: targetDir
+			});
+			log('success', 'Rule profiles configured.');
+		} catch (error) {
+			log('error', 'Failed to configure rule profiles:', error.message);
+			log('warn', 'You may need to run "task-master rules --setup" manually.');
+		}
+	} else if (isSilentMode() || dryRun || options?.yes) {
+		// This branch can log why setup was skipped, similar to the model setup logic.
+		if (options.rulesExplicitlyProvided) {
+			log(
+				'info',
+				'Skipping interactive rules setup because --rules flag was used.'
+			);
+		} else {
+			log('info', 'Skipping interactive rules setup in non-interactive mode.');
+		}
+	}
+	// =====================================
+
 	// === Add Model Configuration Step ===
 	if (!isSilentMode() && !dryRun && !options?.yes) {
 		console.log(
```

@@ -599,6 +798,17 @@ function createProjectStructure(
}
// ====================================

// Add shell aliases if requested
if (addAliases && !dryRun) {
  log('info', 'Adding shell aliases...');
  const aliasResult = addShellAliases();
  if (aliasResult) {
    log('success', 'Shell aliases added successfully');
  }
} else if (addAliases && dryRun) {
  log('info', 'DRY RUN: Would add shell aliases (tm, taskmaster)');
}

// Display success message
if (!isSilentMode()) {
  console.log(
@@ -3342,6 +3342,11 @@ ${result.result}
  .option('--skip-install', 'Skip installing dependencies')
  .option('--dry-run', 'Show what would be done without making changes')
  .option('--aliases', 'Add shell aliases (tm, taskmaster)')
+  .option('--no-aliases', 'Skip shell aliases (tm, taskmaster)')
+  .option('--git', 'Initialize Git repository')
+  .option('--no-git', 'Skip Git repository initialization')
+  .option('--git-tasks', 'Store tasks in Git')
+  .option('--no-git-tasks', 'No Git storage of tasks')
  .action(async (cmdOptions) => {
    // cmdOptions contains parsed arguments
    // Parse rules: accept space or comma separated, default to all available rules
@@ -3823,7 +3828,26 @@ Examples:
if (options[RULES_SETUP_ACTION]) {
  // Run interactive rules setup ONLY (no project init)
  const selectedRuleProfiles = await runInteractiveProfilesSetup();
-  for (const profile of selectedRuleProfiles) {

+  if (!selectedRuleProfiles || selectedRuleProfiles.length === 0) {
+    console.log(chalk.yellow('No profiles selected. Exiting.'));
+    return;
+  }
+
+  console.log(
+    chalk.blue(
+      `Installing ${selectedRuleProfiles.length} selected profile(s)...`
+    )
+  );
+
+  for (let i = 0; i < selectedRuleProfiles.length; i++) {
+    const profile = selectedRuleProfiles[i];
+    console.log(
+      chalk.blue(
+        `Processing profile ${i + 1}/${selectedRuleProfiles.length}: ${profile}...`
+      )
+    );
+
    if (!isValidProfile(profile)) {
      console.warn(
        `Rule profile for "${profile}" not found. Valid profiles: ${RULE_PROFILES.join(', ')}. Skipping.`
@@ -3831,16 +3855,20 @@ Examples:
      continue;
    }
    const profileConfig = getRulesProfile(profile);

+    const addResult = convertAllRulesToProfileRules(
+      projectDir,
+      profileConfig
+    );
    if (typeof profileConfig.onAddRulesProfile === 'function') {
      profileConfig.onAddRulesProfile(projectDir);
    }

    console.log(chalk.green(generateProfileSummary(profile, addResult)));
  }

  console.log(
    chalk.green(
      `\nCompleted installation of all ${selectedRuleProfiles.length} profile(s).`
    )
  );
  return;
}
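The setup branch above skips unknown profiles instead of aborting the whole run. A minimal standalone sketch of that validation step (the profile names here are assumptions for illustration; the real list is the repo's exported `RULE_PROFILES`):

```javascript
// Hypothetical profile list; the real one is exported as RULE_PROFILES.
const RULE_PROFILES = ['claude', 'cline', 'codex', 'cursor', 'roo', 'trae', 'vscode', 'windsurf'];

const isValidProfile = (profile) => RULE_PROFILES.includes(profile);

// Partition a user selection into installable and skipped profiles,
// mirroring the warn-and-continue behavior of the CLI loop.
function partitionProfiles(selected) {
  const valid = [];
  const skipped = [];
  for (const profile of selected) {
    (isValidProfile(profile) ? valid : skipped).push(profile);
  }
  return { valid, skipped };
}

const { valid, skipped } = partitionProfiles(['cursor', 'emacs', 'roo']);
console.log(valid); // → ['cursor', 'roo']
console.log(skipped); // → ['emacs']
```

This keeps a single typo in `--rules cursor,emacs,roo` from failing the other installations.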
@@ -54,7 +54,7 @@ const DEFAULTS = {
    // No default fallback provider/model initially
    provider: 'anthropic',
    modelId: 'claude-3-5-sonnet',
-    maxTokens: 64000, // Default parameters if fallback IS configured
+    maxTokens: 8192, // Default parameters if fallback IS configured
    temperature: 0.2
  }
},
@@ -571,10 +571,11 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
  const mcpConfigRaw = fs.readFileSync(mcpConfigPath, 'utf-8');
  const mcpConfig = JSON.parse(mcpConfigRaw);

-  const mcpEnv = mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
+  const mcpEnv =
+    mcpConfig?.mcpServers?.['task-master-ai']?.env ||
+    mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
  if (!mcpEnv) {
-    // console.warn(chalk.yellow('Warning: Could not find taskmaster-ai env in mcp.json.'));
-    return false; // Structure missing
+    return false;
  }

  let apiKeyToCheck = null;
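The hunk above makes the env lookup tolerate both the new `task-master-ai` server name and the legacy `taskmaster-ai` spelling. A self-contained sketch of that fallback (the sample config data is illustrative):

```javascript
// Look up the MCP env block under either server name, preferring the new key.
// The shape mirrors an mcp.json with an mcpServers map.
function getMcpEnv(mcpConfig) {
  return (
    mcpConfig?.mcpServers?.['task-master-ai']?.env ||
    mcpConfig?.mcpServers?.['taskmaster-ai']?.env ||
    null
  );
}

const legacyConfig = {
  mcpServers: { 'taskmaster-ai': { env: { ANTHROPIC_API_KEY: 'sk-...' } } }
};
console.log(getMcpEnv(legacyConfig)); // → { ANTHROPIC_API_KEY: 'sk-...' }
console.log(getMcpEnv({})); // → null
```

Optional chaining keeps the whole lookup null-safe, so a missing `mcpServers` map falls through to `null` instead of throwing.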
@@ -782,9 +783,15 @@ function getAllProviders() {

function getBaseUrlForRole(role, explicitRoot = null) {
  const roleConfig = getModelConfigForRole(role, explicitRoot);
-  return roleConfig && typeof roleConfig.baseURL === 'string'
-    ? roleConfig.baseURL
-    : undefined;
+  if (roleConfig && typeof roleConfig.baseURL === 'string') {
+    return roleConfig.baseURL;
+  }
+  const provider = roleConfig?.provider;
+  if (provider) {
+    const envVarName = `${provider.toUpperCase()}_BASE_URL`;
+    return resolveEnvVariable(envVarName, null, explicitRoot);
+  }
+  return undefined;
}

export {
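The rewritten `getBaseUrlForRole` adds an environment-variable fallback. A simplified sketch of the resolution order, with `process.env` standing in for the repo's `resolveEnvVariable` helper (an assumption for this example):

```javascript
// Resolution order from the diff: an explicit baseURL in the role config wins,
// then <PROVIDER>_BASE_URL from the environment, then undefined.
function getBaseUrl(roleConfig) {
  if (roleConfig && typeof roleConfig.baseURL === 'string') {
    return roleConfig.baseURL;
  }
  const provider = roleConfig?.provider;
  if (provider) {
    return process.env[`${provider.toUpperCase()}_BASE_URL`];
  }
  return undefined;
}

process.env.OLLAMA_BASE_URL = 'http://localhost:11434';
console.log(getBaseUrl({ provider: 'ollama' })); // → 'http://localhost:11434'
console.log(getBaseUrl({ baseURL: 'https://api.example.com' })); // → 'https://api.example.com'
```

This lets users point a provider at a self-hosted endpoint via an env var without editing the model config.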
@@ -54,7 +54,39 @@
    "output": 15.0
  },
  "allowed_roles": ["main", "fallback"],
-  "max_tokens": 64000
+  "max_tokens": 8192
  }
],
+"azure": [
+  {
+    "id": "gpt-4o",
+    "swe_score": 0.332,
+    "cost_per_1m_tokens": {
+      "input": 2.5,
+      "output": 10.0
+    },
+    "allowed_roles": ["main", "fallback"],
+    "max_tokens": 16384
+  },
+  {
+    "id": "gpt-4o-mini",
+    "swe_score": 0.3,
+    "cost_per_1m_tokens": {
+      "input": 0.15,
+      "output": 0.6
+    },
+    "allowed_roles": ["main", "fallback"],
+    "max_tokens": 16384
+  },
+  {
+    "id": "gpt-4-1",
+    "swe_score": 0,
+    "cost_per_1m_tokens": {
+      "input": 2.0,
+      "output": 10.0
+    },
+    "allowed_roles": ["main", "fallback"],
+    "max_tokens": 16384
+  }
+],
"openai": [
@@ -84,7 +116,8 @@
    "input": 2.0,
    "output": 8.0
  },
-  "allowed_roles": ["main", "fallback"]
+  "allowed_roles": ["main", "fallback"],
+  "max_tokens": 100000
},
{
  "id": "o3-mini",
@@ -27,7 +27,6 @@ import {
} from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
-import generateTaskFiles from './generate-task-files.js';
import ContextGatherer from '../utils/contextGatherer.js';

// Define Zod schema for the expected AI output object
@@ -44,7 +43,7 @@ const AiTaskDataSchema = z.object({
    .describe('Detailed approach for verifying task completion'),
  dependencies: z
    .array(z.number())
-    .optional()
+    .nullable()
    .describe(
      'Array of task IDs that this task depends on (must be completed before this task can start)'
    )
@@ -32,7 +32,12 @@ async function expandAllTasks(
  context = {},
  outputFormat = 'text' // Assume text default for CLI
) {
-  const { session, mcpLog, projectRoot: providedProjectRoot } = context;
+  const {
+    session,
+    mcpLog,
+    projectRoot: providedProjectRoot,
+    tag: contextTag
+  } = context;
  const isMCPCall = !!mcpLog; // Determine if called from MCP

  const projectRoot = providedProjectRoot || findProjectRoot();
@@ -74,7 +79,7 @@ async function expandAllTasks(

try {
  logger.info(`Reading tasks from ${tasksPath}`);
-  const data = readJSON(tasksPath, projectRoot);
+  const data = readJSON(tasksPath, projectRoot, contextTag);
  if (!data || !data.tasks) {
    throw new Error(`Invalid tasks data in ${tasksPath}`);
  }
@@ -124,7 +129,7 @@ async function expandAllTasks(
  numSubtasks,
  useResearch,
  additionalContext,
-  { ...context, projectRoot }, // Pass the whole context object with projectRoot
+  { ...context, projectRoot, tag: data.tag || contextTag }, // Pass the whole context object with projectRoot and resolved tag
  force
);
expandedCount++;
@@ -43,8 +43,9 @@ const subtaskSchema = z
  ),
  testStrategy: z
    .string()
    .optional()
+    .nullable()
    .describe('Approach for testing this subtask')
    .default('')
})
.strict();
const subtaskArraySchema = z.array(subtaskSchema);
@@ -417,7 +418,7 @@ async function expandTask(
  context = {},
  force = false
) {
-  const { session, mcpLog, projectRoot: contextProjectRoot } = context;
+  const { session, mcpLog, projectRoot: contextProjectRoot, tag } = context;
  const outputFormat = mcpLog ? 'json' : 'text';

  // Determine projectRoot: Use from context if available, otherwise derive from tasksPath
@@ -439,7 +440,7 @@ async function expandTask(
try {
  // --- Task Loading/Filtering (Unchanged) ---
  logger.info(`Reading tasks from ${tasksPath}`);
-  const data = readJSON(tasksPath, projectRoot);
+  const data = readJSON(tasksPath, projectRoot, tag);
  if (!data || !data.tasks)
    throw new Error(`Invalid tasks data in ${tasksPath}`);
  const taskIndex = data.tasks.findIndex(
@@ -668,7 +669,7 @@ async function expandTask(
  // --- End Change: Append instead of replace ---

  data.tasks[taskIndex] = task; // Assign the modified task back
-  writeJSON(tasksPath, data);
+  writeJSON(tasksPath, data, projectRoot, tag);
  // await generateTaskFiles(tasksPath, path.dirname(tasksPath));

  // Display AI Usage Summary for CLI
@@ -26,11 +26,11 @@ const prdSingleTaskSchema = z.object({
  id: z.number().int().positive(),
  title: z.string().min(1),
  description: z.string().min(1),
-  details: z.string().optional().default(''),
-  testStrategy: z.string().optional().default(''),
-  priority: z.enum(['high', 'medium', 'low']).default('medium'),
-  dependencies: z.array(z.number().int().positive()).optional().default([]),
-  status: z.string().optional().default('pending')
+  details: z.string().nullable(),
+  testStrategy: z.string().nullable(),
+  priority: z.enum(['high', 'medium', 'low']).nullable(),
+  dependencies: z.array(z.number().int().positive()).nullable(),
+  status: z.string().nullable()
});

// Define the Zod schema for the ENTIRE expected AI response object
@@ -36,10 +36,27 @@ const updatedTaskSchema = z
  description: z.string(),
  status: z.string(),
  dependencies: z.array(z.union([z.number().int(), z.string()])),
-  priority: z.string().optional(),
-  details: z.string().optional(),
-  testStrategy: z.string().optional(),
-  subtasks: z.array(z.any()).optional()
+  priority: z.string().nullable().default('medium'),
+  details: z.string().nullable().default(''),
+  testStrategy: z.string().nullable().default(''),
+  subtasks: z
+    .array(
+      z.object({
+        id: z
+          .number()
+          .int()
+          .positive()
+          .describe('Sequential subtask ID starting from 1'),
+        title: z.string(),
+        description: z.string(),
+        status: z.string(),
+        dependencies: z.array(z.number().int()).nullable().default([]),
+        details: z.string().nullable().default(''),
+        testStrategy: z.string().nullable().default('')
+      })
+    )
+    .nullable()
+    .default([])
})
.strip(); // Allows parsing even if AI adds extra fields, but validation focuses on schema
@@ -441,6 +458,8 @@ Guidelines:
9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced
10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted
11. Ensure any new subtasks have unique IDs that don't conflict with existing ones
+12. CRITICAL: For subtask IDs, use ONLY numeric values (1, 2, 3, etc.) NOT strings ("1", "2", "3")
+13. CRITICAL: Subtask IDs should start from 1 and increment sequentially (1, 2, 3...) - do NOT use parent task ID as prefix

The changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.`;
@@ -573,6 +592,37 @@ The changes described in the prompt should be thoughtfully applied to make the t
    );
    updatedTask.status = taskToUpdate.status;
  }
+  // Fix subtask IDs if they exist (ensure they are numeric and sequential)
+  if (updatedTask.subtasks && Array.isArray(updatedTask.subtasks)) {
+    let currentSubtaskId = 1;
+    updatedTask.subtasks = updatedTask.subtasks.map((subtask) => {
+      // Fix AI-generated subtask IDs that might be strings or use parent ID as prefix
+      const correctedSubtask = {
+        ...subtask,
+        id: currentSubtaskId, // Override AI-generated ID with correct sequential ID
+        dependencies: Array.isArray(subtask.dependencies)
+          ? subtask.dependencies
+              .map((dep) =>
+                typeof dep === 'string' ? parseInt(dep, 10) : dep
+              )
+              .filter(
+                (depId) =>
+                  !Number.isNaN(depId) &&
+                  depId >= 1 &&
+                  depId < currentSubtaskId
+              )
+          : [],
+        status: subtask.status || 'pending'
+      };
+      currentSubtaskId++;
+      return correctedSubtask;
+    });
+    report(
+      'info',
+      `Fixed ${updatedTask.subtasks.length} subtask IDs to be sequential numeric IDs.`
+    );
+  }

  // Preserve completed subtasks (Keep existing logic)
  if (taskToUpdate.subtasks?.length > 0) {
    if (!updatedTask.subtasks) {
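The ID-repair step added above can be lifted into a standalone function to show its behavior on typical malformed AI output (string IDs, parent-prefixed dependencies):

```javascript
// Standalone version of the repair pass: reassign sequential numeric IDs and
// keep only dependencies that resolve to an earlier sibling subtask.
function fixSubtaskIds(subtasks) {
  let nextId = 1;
  return subtasks.map((subtask) => {
    const fixed = {
      ...subtask,
      id: nextId, // override whatever the AI produced
      dependencies: Array.isArray(subtask.dependencies)
        ? subtask.dependencies
            .map((dep) => (typeof dep === 'string' ? parseInt(dep, 10) : dep))
            .filter((depId) => !Number.isNaN(depId) && depId >= 1 && depId < nextId)
        : [],
      status: subtask.status || 'pending'
    };
    nextId++;
    return fixed;
  });
}

// AI output with string IDs and a string dependency gets normalized.
const fixed = fixSubtaskIds([
  { id: '5.1', title: 'a', dependencies: [] },
  { id: '5.2', title: 'b', dependencies: ['1'] }
]);
console.log(fixed.map((s) => s.id)); // → [1, 2]
console.log(fixed[1].dependencies); // → [1]
```

Note the `depId < nextId` filter: a subtask may only depend on subtasks that come before it, which silently drops forward or self references.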
@@ -35,10 +35,10 @@ const updatedTaskSchema = z
  description: z.string(),
  status: z.string(),
  dependencies: z.array(z.union([z.number().int(), z.string()])),
-  priority: z.string().optional(),
-  details: z.string().optional(),
-  testStrategy: z.string().optional(),
-  subtasks: z.array(z.any()).optional() // Keep subtasks flexible for now
+  priority: z.string().nullable(),
+  details: z.string().nullable(),
+  testStrategy: z.string().nullable(),
+  subtasks: z.array(z.any()).nullable() // Keep subtasks flexible for now
})
.strip(); // Allow potential extra fields during parsing if needed, then validate structure
const updatedTaskArraySchema = z.array(updatedTaskSchema);
@@ -73,7 +73,7 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {
 */
function findProjectRoot(
  startDir = process.cwd(),
-  markers = ['package.json', '.git', LEGACY_CONFIG_FILE]
+  markers = ['package.json', 'pyproject.toml', '.git', LEGACY_CONFIG_FILE]
) {
  let currentPath = path.resolve(startDir);
  const rootPath = path.parse(currentPath).root;
@@ -349,6 +349,25 @@ function getCurrentBranchSync(projectRoot) {
  }
}

+/**
+ * Check if the current working directory is inside a Git work-tree.
+ * Uses `git rev-parse --is-inside-work-tree` which is more specific than --git-dir
+ * for detecting work-trees (excludes bare repos and .git directories).
+ * This is ideal for preventing accidental git init in existing work-trees.
+ * @returns {boolean} True if inside a Git work-tree, false otherwise.
+ */
+function insideGitWorkTree() {
+  try {
+    execSync('git rev-parse --is-inside-work-tree', {
+      stdio: 'ignore',
+      cwd: process.cwd()
+    });
+    return true;
+  } catch {
+    return false;
+  }
+}

// Export all functions
export {
  isGitRepository,
@@ -366,5 +385,6 @@ export {
  checkAndAutoSwitchGitTag,
  checkAndAutoSwitchGitTagSync,
  isGitRepositorySync,
-  getCurrentBranchSync
+  getCurrentBranchSync,
+  insideGitWorkTree
};
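A minimal re-creation of the new `insideGitWorkTree` probe: `git rev-parse --is-inside-work-tree` exits non-zero outside a work-tree (and when git is absent), so the try/catch collapses every outcome to a boolean:

```javascript
import { execSync } from 'node:child_process';

// Returns true only when cwd is inside a Git work-tree; bare repos, plain
// directories, and missing-git environments all land in the catch branch.
function insideGitWorkTree(cwd = process.cwd()) {
  try {
    execSync('git rev-parse --is-inside-work-tree', { stdio: 'ignore', cwd });
    return true;
  } catch {
    return false;
  }
}

console.log(typeof insideGitWorkTree()); // 'boolean'
```

The `stdio: 'ignore'` option suppresses git's stdout/stderr so the probe is silent; only the exit status matters.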
src/utils/manage-gitignore.js (new file, 293 lines)
@@ -0,0 +1,293 @@
+// Utility to manage .gitignore files with task file preferences and template merging
+import fs from 'fs';
+import path from 'path';
+
+// Constants
+const TASK_FILES_COMMENT = '# Task files';
+const TASK_JSON_PATTERN = 'tasks.json';
+const TASK_DIR_PATTERN = 'tasks/';
+
+/**
+ * Normalizes a line by removing comments and trimming whitespace
+ * @param {string} line - Line to normalize
+ * @returns {string} Normalized line
+ */
+function normalizeLine(line) {
+  return line.trim().replace(/^#/, '').trim();
+}
+
+/**
+ * Checks if a line is task-related (tasks.json or tasks/)
+ * @param {string} line - Line to check
+ * @returns {boolean} True if line is task-related
+ */
+function isTaskLine(line) {
+  const normalized = normalizeLine(line);
+  return normalized === TASK_JSON_PATTERN || normalized === TASK_DIR_PATTERN;
+}
+
+/**
+ * Adjusts task-related lines in template based on storage preference
+ * @param {string[]} templateLines - Array of template lines
+ * @param {boolean} storeTasksInGit - Whether to comment out task lines
+ * @returns {string[]} Adjusted template lines
+ */
+function adjustTaskLinesInTemplate(templateLines, storeTasksInGit) {
+  return templateLines.map((line) => {
+    if (isTaskLine(line)) {
+      const normalized = normalizeLine(line);
+      // Preserve original trailing whitespace from the line
+      const originalTrailingSpace = line.match(/\s*$/)[0];
+      return storeTasksInGit
+        ? `# ${normalized}${originalTrailingSpace}`
+        : `${normalized}${originalTrailingSpace}`;
+    }
+    return line;
+  });
+}
+/**
+ * Removes existing task files section from content
+ * @param {string[]} existingLines - Existing file lines
+ * @returns {string[]} Lines with task section removed
+ */
+function removeExistingTaskSection(existingLines) {
+  const cleanedLines = [];
+  let inTaskSection = false;
+
+  for (const line of existingLines) {
+    // Start of task files section
+    if (line.trim() === TASK_FILES_COMMENT) {
+      inTaskSection = true;
+      continue;
+    }
+
+    // Task lines (commented or not)
+    if (isTaskLine(line)) {
+      continue;
+    }
+
+    // Empty lines within task section
+    if (inTaskSection && !line.trim()) {
+      continue;
+    }
+
+    // End of task section (any non-empty, non-task line)
+    if (inTaskSection && line.trim() && !isTaskLine(line)) {
+      inTaskSection = false;
+    }
+
+    // Keep all other lines
+    if (!inTaskSection) {
+      cleanedLines.push(line);
+    }
+  }
+
+  return cleanedLines;
+}
+/**
+ * Filters template lines to only include new content not already present
+ * @param {string[]} templateLines - Template lines
+ * @param {Set<string>} existingLinesSet - Set of existing trimmed lines
+ * @returns {string[]} New lines to add
+ */
+function filterNewTemplateLines(templateLines, existingLinesSet) {
+  return templateLines.filter((line) => {
+    const trimmed = line.trim();
+    if (!trimmed) return false;
+
+    // Skip task-related lines (handled separately)
+    if (isTaskLine(line) || trimmed === TASK_FILES_COMMENT) {
+      return false;
+    }
+
+    // Include only if not already present
+    return !existingLinesSet.has(trimmed);
+  });
+}
+
+/**
+ * Builds the task files section based on storage preference
+ * @param {boolean} storeTasksInGit - Whether to comment out task lines
+ * @returns {string[]} Task files section lines
+ */
+function buildTaskFilesSection(storeTasksInGit) {
+  const section = [TASK_FILES_COMMENT];
+
+  if (storeTasksInGit) {
+    section.push(`# ${TASK_JSON_PATTERN}`, `# ${TASK_DIR_PATTERN} `);
+  } else {
+    section.push(TASK_JSON_PATTERN, `${TASK_DIR_PATTERN} `);
+  }
+
+  return section;
+}
+/**
+ * Adds a separator line if needed (avoids double spacing)
+ * @param {string[]} lines - Current lines array
+ */
+function addSeparatorIfNeeded(lines) {
+  if (lines.some((line) => line.trim())) {
+    const lastLine = lines[lines.length - 1];
+    if (lastLine && lastLine.trim()) {
+      lines.push('');
+    }
+  }
+}
+
+/**
+ * Validates input parameters
+ * @param {string} targetPath - Path to .gitignore file
+ * @param {string} content - Template content
+ * @param {boolean} storeTasksInGit - Storage preference
+ * @throws {Error} If validation fails
+ */
+function validateInputs(targetPath, content, storeTasksInGit) {
+  if (!targetPath || typeof targetPath !== 'string') {
+    throw new Error('targetPath must be a non-empty string');
+  }
+
+  if (!targetPath.endsWith('.gitignore')) {
+    throw new Error('targetPath must end with .gitignore');
+  }
+
+  if (!content || typeof content !== 'string') {
+    throw new Error('content must be a non-empty string');
+  }
+
+  if (typeof storeTasksInGit !== 'boolean') {
+    throw new Error('storeTasksInGit must be a boolean');
+  }
+}
+
+/**
+ * Creates a new .gitignore file from template
+ * @param {string} targetPath - Path to create file at
+ * @param {string[]} templateLines - Adjusted template lines
+ * @param {function} log - Logging function
+ */
+function createNewGitignoreFile(targetPath, templateLines, log) {
+  try {
+    fs.writeFileSync(targetPath, templateLines.join('\n'));
+    if (typeof log === 'function') {
+      log('success', `Created ${targetPath} with full template`);
+    }
+  } catch (error) {
+    if (typeof log === 'function') {
+      log('error', `Failed to create ${targetPath}: ${error.message}`);
+    }
+    throw error;
+  }
+}
+/**
+ * Merges template content with existing .gitignore file
+ * @param {string} targetPath - Path to existing file
+ * @param {string[]} templateLines - Adjusted template lines
+ * @param {boolean} storeTasksInGit - Storage preference
+ * @param {function} log - Logging function
+ */
+function mergeWithExistingFile(
+  targetPath,
+  templateLines,
+  storeTasksInGit,
+  log
+) {
+  try {
+    // Read and process existing file
+    const existingContent = fs.readFileSync(targetPath, 'utf8');
+    const existingLines = existingContent.split('\n');
+
+    // Remove existing task section
+    const cleanedExistingLines = removeExistingTaskSection(existingLines);
+
+    // Find new template lines to add
+    const existingLinesSet = new Set(
+      cleanedExistingLines.map((line) => line.trim()).filter((line) => line)
+    );
+    const newLines = filterNewTemplateLines(templateLines, existingLinesSet);
+
+    // Build final content
+    const finalLines = [...cleanedExistingLines];
+
+    // Add new template content
+    if (newLines.length > 0) {
+      addSeparatorIfNeeded(finalLines);
+      finalLines.push(...newLines);
+    }
+
+    // Add task files section
+    addSeparatorIfNeeded(finalLines);
+    finalLines.push(...buildTaskFilesSection(storeTasksInGit));
+
+    // Write result
+    fs.writeFileSync(targetPath, finalLines.join('\n'));
+
+    if (typeof log === 'function') {
+      const hasNewContent =
+        newLines.length > 0 ? ' and merged new content' : '';
+      log(
+        'success',
+        `Updated ${targetPath} according to user preference${hasNewContent}`
+      );
+    }
+  } catch (error) {
+    if (typeof log === 'function') {
+      log(
+        'error',
+        `Failed to merge content with ${targetPath}: ${error.message}`
+      );
+    }
+    throw error;
+  }
+}
+/**
+ * Manages .gitignore file creation and updates with task file preferences
+ * @param {string} targetPath - Path to the .gitignore file
+ * @param {string} content - Template content for .gitignore
+ * @param {boolean} storeTasksInGit - Whether to store tasks in git or not
+ * @param {function} log - Logging function (level, message)
+ * @throws {Error} If validation or file operations fail
+ */
+function manageGitignoreFile(
+  targetPath,
+  content,
+  storeTasksInGit = true,
+  log = null
+) {
+  // Validate inputs
+  validateInputs(targetPath, content, storeTasksInGit);
+
+  // Process template with task preference
+  const templateLines = content.split('\n');
+  const adjustedTemplateLines = adjustTaskLinesInTemplate(
+    templateLines,
+    storeTasksInGit
+  );
+
+  // Handle file creation or merging
+  if (!fs.existsSync(targetPath)) {
+    createNewGitignoreFile(targetPath, adjustedTemplateLines, log);
+  } else {
+    mergeWithExistingFile(
+      targetPath,
+      adjustedTemplateLines,
+      storeTasksInGit,
+      log
+    );
+  }
+}
+
+export default manageGitignoreFile;
+export {
+  manageGitignoreFile,
+  normalizeLine,
+  isTaskLine,
+  buildTaskFilesSection,
+  TASK_FILES_COMMENT,
+  TASK_JSON_PATTERN,
+  TASK_DIR_PATTERN
+};
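To see how the new module's small helpers interact, here is a self-contained copy of `normalizeLine`, `isTaskLine`, and `buildTaskFilesSection` from the file above, showing how the task-file section toggles between ignored and committed forms:

```javascript
const TASK_FILES_COMMENT = '# Task files';
const TASK_JSON_PATTERN = 'tasks.json';
const TASK_DIR_PATTERN = 'tasks/';

// Strip a leading comment marker so '# tasks.json' and 'tasks.json' compare equal.
const normalizeLine = (line) => line.trim().replace(/^#/, '').trim();

const isTaskLine = (line) => {
  const normalized = normalizeLine(line);
  return normalized === TASK_JSON_PATTERN || normalized === TASK_DIR_PATTERN;
};

// storeTasksInGit=true emits commented (non-ignored) patterns so tasks stay in git.
function buildTaskFilesSection(storeTasksInGit) {
  const section = [TASK_FILES_COMMENT];
  if (storeTasksInGit) {
    section.push(`# ${TASK_JSON_PATTERN}`, `# ${TASK_DIR_PATTERN} `);
  } else {
    section.push(TASK_JSON_PATTERN, `${TASK_DIR_PATTERN} `);
  }
  return section;
}

console.log(isTaskLine('# tasks.json')); // true — commented task lines still match
console.log(buildTaskFilesSection(false)); // → ['# Task files', 'tasks.json', 'tasks/ ']
```

Because `isTaskLine` matches both commented and uncommented forms, re-running the manager against an existing `.gitignore` replaces the old section cleanly instead of accumulating duplicates.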
@@ -206,6 +206,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
  const __filename = fileURLToPath(import.meta.url);
  const __dirname = path.dirname(__filename);
  const assetsDir = path.join(__dirname, '..', '..', 'assets');

  if (typeof profile.onPostConvertRulesProfile === 'function') {
    profile.onPostConvertRulesProfile(projectDir, assetsDir);
  }
@@ -333,8 +333,8 @@ log_step() {

log_step "Initializing Task Master project (non-interactive)"
task-master init -y --name="E2E Test $TIMESTAMP" --description="Automated E2E test run"
-if [ ! -f ".taskmasterconfig" ]; then
-  log_error "Initialization failed: .taskmasterconfig not found."
+if [ ! -f ".taskmaster/config.json" ]; then
+  log_error "Initialization failed: .taskmaster/config.json not found."
  exit 1
fi
log_success "Project initialized."
@@ -344,8 +344,8 @@ log_step() {
exit_status_prd=$?
echo "$cmd_output_prd"
extract_and_sum_cost "$cmd_output_prd"
-if [ $exit_status_prd -ne 0 ] || [ ! -s "tasks/tasks.json" ]; then
-  log_error "Parsing PRD failed: tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
+if [ $exit_status_prd -ne 0 ] || [ ! -s ".taskmaster/tasks/tasks.json" ]; then
+  log_error "Parsing PRD failed: .taskmaster/tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
  exit 1
else
  log_success "PRD parsed successfully."
@@ -386,6 +386,95 @@ log_step() {
task-master list --with-subtasks > task_list_after_changes.log
log_success "Task list after changes saved to task_list_after_changes.log"

+# === Start New Test Section: Tag-Aware Expand Testing ===
+log_step "Creating additional tag for expand testing"
+task-master add-tag feature-expand --description="Tag for testing expand command with tag preservation"
+log_success "Created feature-expand tag."
+
+log_step "Adding task to feature-expand tag"
+task-master add-task --tag=feature-expand --prompt="Test task for tag-aware expansion" --priority=medium
+# Get the new task ID dynamically
+new_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
+log_success "Added task $new_expand_task_id to feature-expand tag."
+
+log_step "Verifying tags exist before expand test"
+task-master tags > tags_before_expand.log
+tag_count_before=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
+log_success "Tag count before expand: $tag_count_before"
+
+log_step "Expanding task in feature-expand tag (testing tag corruption fix)"
+cmd_output_expand_tagged=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" 2>&1)
+exit_status_expand_tagged=$?
+echo "$cmd_output_expand_tagged"
+extract_and_sum_cost "$cmd_output_expand_tagged"
+if [ $exit_status_expand_tagged -ne 0 ]; then
+  log_error "Tagged expand failed. Exit status: $exit_status_expand_tagged"
+else
+  log_success "Tagged expand completed."
+fi
+
+log_step "Verifying tag preservation after expand"
+task-master tags > tags_after_expand.log
+tag_count_after=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
+
+if [ "$tag_count_before" -eq "$tag_count_after" ]; then
+  log_success "Tag count preserved: $tag_count_after (no corruption detected)"
+else
+  log_error "Tag corruption detected! Before: $tag_count_before, After: $tag_count_after"
+fi
+
+log_step "Verifying master tag still exists and has tasks"
+master_task_count=$(jq -r '.master.tasks | length' .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
+if [ "$master_task_count" -gt "0" ]; then
+  log_success "Master tag preserved with $master_task_count tasks"
+else
+  log_error "Master tag corrupted or empty after tagged expand"
+fi
+
+log_step "Verifying feature-expand tag has expanded subtasks"
+expanded_subtask_count=$(jq -r ".\"feature-expand\".tasks[] | select(.id == $new_expand_task_id) | .subtasks | length" .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
+if [ "$expanded_subtask_count" -gt "0" ]; then
+  log_success "Expand successful: $expanded_subtask_count subtasks created in feature-expand tag"
+else
+  log_error "Expand failed: No subtasks found in feature-expand tag"
+fi
+
+log_step "Testing force expand with tag preservation"
+cmd_output_force_expand=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" --force 2>&1)
+exit_status_force_expand=$?
+echo "$cmd_output_force_expand"
+extract_and_sum_cost "$cmd_output_force_expand"
+
+# Verify tags still preserved after force expand
+tag_count_after_force=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
+if [ "$tag_count_before" -eq "$tag_count_after_force" ]; then
+  log_success "Force expand preserved all tags"
+else
+  log_error "Force expand caused tag corruption"
+fi
+
+log_step "Testing expand --all with tag preservation"
+# Add another task to feature-expand for expand-all testing
+task-master add-task --tag=feature-expand --prompt="Second task for expand-all testing" --priority=low
+second_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
+
+cmd_output_expand_all=$(task-master expand --tag=feature-expand --all 2>&1)
+exit_status_expand_all=$?
+echo "$cmd_output_expand_all"
+extract_and_sum_cost "$cmd_output_expand_all"
+
+# Verify tags preserved after expand-all
+tag_count_after_all=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
+if [ "$tag_count_before" -eq "$tag_count_after_all" ]; then
+  log_success "Expand --all preserved all tags"
+else
+  log_error "Expand --all caused tag corruption"
+fi
+
+log_success "Completed expand --all tag preservation test."
+
+# === End New Test Section: Tag-Aware Expand Testing ===

# === Test Model Commands ===
log_step "Checking initial model configuration"
task-master models > models_initial_config.log
@@ -626,7 +715,7 @@ log_step() {

 # Find the next available task ID dynamically instead of hardcoding 11, 12
 # Assuming tasks are added sequentially and we didn't remove any core tasks yet
-last_task_id=$(jq '[.tasks[].id] | max' tasks/tasks.json)
+last_task_id=$(jq '[.master.tasks[].id] | max' .taskmaster/tasks/tasks.json)
 manual_task_id=$((last_task_id + 1))
 ai_task_id=$((manual_task_id + 1))

@@ -747,30 +836,30 @@ log_step() {
 task-master list --with-subtasks > task_list_after_clear_all.log
 log_success "Task list after clear-all saved. (Manual/LLM check recommended to verify subtasks removed)"

-log_step "Expanding Task 1 again (to have subtasks for next test)"
-task-master expand --id=1
-log_success "Attempted to expand Task 1 again."
-# Verify 1.1 exists again
-if ! jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' tasks/tasks.json > /dev/null; then
-	log_error "Subtask 1.1 not found in tasks.json after re-expanding Task 1."
+log_step "Expanding Task 3 again (to have subtasks for next test)"
+task-master expand --id=3
+log_success "Attempted to expand Task 3."
+# Verify 3.1 exists
+if ! jq -e '.master.tasks[] | select(.id == 3) | .subtasks[] | select(.id == 1)' .taskmaster/tasks/tasks.json > /dev/null; then
+	log_error "Subtask 3.1 not found in tasks.json after expanding Task 3."
 	exit 1
 fi

-log_step "Adding dependency: Task 3 depends on Subtask 1.1"
-task-master add-dependency --id=3 --depends-on=1.1
-log_success "Added dependency 3 -> 1.1."
+log_step "Adding dependency: Task 4 depends on Subtask 3.1"
+task-master add-dependency --id=4 --depends-on=3.1
+log_success "Added dependency 4 -> 3.1."

-log_step "Showing Task 3 details (after adding subtask dependency)"
-task-master show 3 > task_3_details_after_dep_add.log
-log_success "Task 3 details saved. (Manual/LLM check recommended for dependency [1.1])"
+log_step "Showing Task 4 details (after adding subtask dependency)"
+task-master show 4 > task_4_details_after_dep_add.log
+log_success "Task 4 details saved. (Manual/LLM check recommended for dependency [3.1])"

-log_step "Removing dependency: Task 3 depends on Subtask 1.1"
-task-master remove-dependency --id=3 --depends-on=1.1
-log_success "Removed dependency 3 -> 1.1."
+log_step "Removing dependency: Task 4 depends on Subtask 3.1"
+task-master remove-dependency --id=4 --depends-on=3.1
+log_success "Removed dependency 4 -> 3.1."

-log_step "Showing Task 3 details (after removing subtask dependency)"
-task-master show 3 > task_3_details_after_dep_remove.log
-log_success "Task 3 details saved. (Manual/LLM check recommended to verify dependency removed)"
+log_step "Showing Task 4 details (after removing subtask dependency)"
+task-master show 4 > task_4_details_after_dep_remove.log
+log_success "Task 4 details saved. (Manual/LLM check recommended to verify dependency removed)"

 # === End New Test Section ===
581	tests/integration/manage-gitignore.test.js	Normal file
@@ -0,0 +1,581 @@
/**
 * Integration tests for manage-gitignore.js module
 * Tests actual file system operations in a temporary directory
 */

import fs from 'fs';
import path from 'path';
import os from 'os';
import manageGitignoreFile from '../../src/utils/manage-gitignore.js';

describe('manage-gitignore.js Integration Tests', () => {
	let tempDir;
	let testGitignorePath;

	beforeEach(() => {
		// Create a temporary directory for each test
		tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gitignore-test-'));
		testGitignorePath = path.join(tempDir, '.gitignore');
	});

	afterEach(() => {
		// Clean up temporary directory after each test
		if (fs.existsSync(tempDir)) {
			fs.rmSync(tempDir, { recursive: true, force: true });
		}
	});

	describe('New File Creation', () => {
		const templateContent = `# Logs
logs
*.log
npm-debug.log*

# Dependencies
node_modules/
jspm_packages/

# Environment variables
.env
.env.local

# Task files
tasks.json
tasks/ `;

		test('should create new .gitignore file with commented task lines (storeTasksInGit = true)', () => {
			const logs = [];
			const mockLog = (level, message) => logs.push({ level, message });

			manageGitignoreFile(testGitignorePath, templateContent, true, mockLog);

			// Verify file was created
			expect(fs.existsSync(testGitignorePath)).toBe(true);

			// Verify content
			const content = fs.readFileSync(testGitignorePath, 'utf8');
			expect(content).toContain('# Logs');
			expect(content).toContain('logs');
			expect(content).toContain('# Dependencies');
			expect(content).toContain('node_modules/');
			expect(content).toContain('# Task files');
			expect(content).toContain('tasks.json');
			expect(content).toContain('tasks/');

			// Verify task lines are commented (storeTasksInGit = true)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
			);

			// Verify log message
			expect(logs).toContainEqual({
				level: 'success',
				message: expect.stringContaining('Created')
			});
		});

		test('should create new .gitignore file with uncommented task lines (storeTasksInGit = false)', () => {
			const logs = [];
			const mockLog = (level, message) => logs.push({ level, message });

			manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);

			// Verify file was created
			expect(fs.existsSync(testGitignorePath)).toBe(true);

			// Verify content
			const content = fs.readFileSync(testGitignorePath, 'utf8');
			expect(content).toContain('# Task files');

			// Verify task lines are uncommented (storeTasksInGit = false)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
			);

			// Verify log message
			expect(logs).toContainEqual({
				level: 'success',
				message: expect.stringContaining('Created')
			});
		});

		test('should work without log function', () => {
			expect(() => {
				manageGitignoreFile(testGitignorePath, templateContent, false);
			}).not.toThrow();

			expect(fs.existsSync(testGitignorePath)).toBe(true);
		});
	});

	describe('File Merging', () => {
		const templateContent = `# Logs
logs
*.log

# Dependencies
node_modules/

# Environment variables
.env

# Task files
tasks.json
tasks/ `;

		test('should merge template with existing file content', () => {
			// Create existing .gitignore file
			const existingContent = `# Existing content
old-files.txt
*.backup

# Old task files (to be replaced)
# Task files
# tasks.json
# tasks/

# More existing content
cache/`;

			fs.writeFileSync(testGitignorePath, existingContent);

			const logs = [];
			const mockLog = (level, message) => logs.push({ level, message });

			manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);

			// Verify file still exists
			expect(fs.existsSync(testGitignorePath)).toBe(true);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should retain existing non-task content
			expect(content).toContain('# Existing content');
			expect(content).toContain('old-files.txt');
			expect(content).toContain('*.backup');
			expect(content).toContain('# More existing content');
			expect(content).toContain('cache/');

			// Should add new template content
			expect(content).toContain('# Logs');
			expect(content).toContain('logs');
			expect(content).toContain('# Dependencies');
			expect(content).toContain('node_modules/');
			expect(content).toContain('# Environment variables');
			expect(content).toContain('.env');

			// Should replace task section with new preference (storeTasksInGit = false means uncommented)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
			);

			// Verify log message
			expect(logs).toContainEqual({
				level: 'success',
				message: expect.stringContaining('Updated')
			});
		});

		test('should handle switching task preferences from commented to uncommented', () => {
			// Create existing file with commented task lines
			const existingContent = `# Existing
existing.txt

# Task files
# tasks.json
# tasks/ `;

			fs.writeFileSync(testGitignorePath, existingContent);

			// Update with storeTasksInGit = true (commented)
			manageGitignoreFile(testGitignorePath, templateContent, true);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should retain existing content
			expect(content).toContain('# Existing');
			expect(content).toContain('existing.txt');

			// Should have commented task lines (storeTasksInGit = true)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
			);
		});

		test('should handle switching task preferences from uncommented to commented', () => {
			// Create existing file with uncommented task lines
			const existingContent = `# Existing
existing.txt

# Task files
tasks.json
tasks/ `;

			fs.writeFileSync(testGitignorePath, existingContent);

			// Update with storeTasksInGit = false (uncommented)
			manageGitignoreFile(testGitignorePath, templateContent, false);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should retain existing content
			expect(content).toContain('# Existing');
			expect(content).toContain('existing.txt');

			// Should have uncommented task lines (storeTasksInGit = false)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
			);
		});

		test('should not duplicate existing template content', () => {
			// Create existing file that already has some template content
			const existingContent = `# Logs
logs
*.log

# Dependencies
node_modules/

# Custom content
custom.txt

# Task files
# tasks.json
# tasks/ `;

			fs.writeFileSync(testGitignorePath, existingContent);

			manageGitignoreFile(testGitignorePath, templateContent, false);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should not duplicate logs section
			const logsMatches = content.match(/# Logs/g);
			expect(logsMatches).toHaveLength(1);

			// Should not duplicate dependencies section
			const depsMatches = content.match(/# Dependencies/g);
			expect(depsMatches).toHaveLength(1);

			// Should retain custom content
			expect(content).toContain('# Custom content');
			expect(content).toContain('custom.txt');

			// Should add new template content that wasn't present
			expect(content).toContain('# Environment variables');
			expect(content).toContain('.env');
		});

		test('should handle empty existing file', () => {
			// Create empty file
			fs.writeFileSync(testGitignorePath, '');

			manageGitignoreFile(testGitignorePath, templateContent, false);

			expect(fs.existsSync(testGitignorePath)).toBe(true);

			const content = fs.readFileSync(testGitignorePath, 'utf8');
			expect(content).toContain('# Logs');
			expect(content).toContain('# Task files');
			expect(content).toMatch(
				/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
			);
		});

		test('should handle file with only whitespace', () => {
			// Create file with only whitespace
			fs.writeFileSync(testGitignorePath, ' \n\n \n');

			manageGitignoreFile(testGitignorePath, templateContent, true);

			const content = fs.readFileSync(testGitignorePath, 'utf8');
			expect(content).toContain('# Logs');
			expect(content).toContain('# Task files');
			expect(content).toMatch(
				/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
			);
		});
	});

	describe('Complex Task Section Handling', () => {
		test('should remove task section with mixed comments and spacing', () => {
			const existingContent = `# Dependencies
node_modules/

# Task files

# tasks.json
tasks/


# More content
more.txt`;

			const templateContent = `# New content
new.txt

# Task files
tasks.json
tasks/ `;

			fs.writeFileSync(testGitignorePath, existingContent);

			manageGitignoreFile(testGitignorePath, templateContent, false);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should retain non-task content
			expect(content).toContain('# Dependencies');
			expect(content).toContain('node_modules/');
			expect(content).toContain('# More content');
			expect(content).toContain('more.txt');

			// Should add new content
			expect(content).toContain('# New content');
			expect(content).toContain('new.txt');

			// Should have clean task section (storeTasksInGit = false means uncommented)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
			);
		});

		test('should handle multiple task file variations', () => {
			const existingContent = `# Existing
existing.txt

# Task files
tasks.json
# tasks.json
# tasks/
tasks/
#tasks.json

# More content
more.txt`;

			const templateContent = `# Task files
tasks.json
tasks/ `;

			fs.writeFileSync(testGitignorePath, existingContent);

			manageGitignoreFile(testGitignorePath, templateContent, true);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should retain non-task content
			expect(content).toContain('# Existing');
			expect(content).toContain('existing.txt');
			expect(content).toContain('# More content');
			expect(content).toContain('more.txt');

			// Should have clean task section with preference applied (storeTasksInGit = true means commented)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
			);

			// Should not have multiple task sections
			const taskFileMatches = content.match(/# Task files/g);
			expect(taskFileMatches).toHaveLength(1);
		});
	});

	describe('Error Handling', () => {
		test('should handle permission errors gracefully', () => {
			// Create a directory where we would create the file, then remove write permissions
			const readOnlyDir = path.join(tempDir, 'readonly');
			fs.mkdirSync(readOnlyDir);
			fs.chmodSync(readOnlyDir, 0o444); // Read-only

			const readOnlyGitignorePath = path.join(readOnlyDir, '.gitignore');
			const templateContent = `# Test
test.txt

# Task files
tasks.json
tasks/ `;

			const logs = [];
			const mockLog = (level, message) => logs.push({ level, message });

			expect(() => {
				manageGitignoreFile(
					readOnlyGitignorePath,
					templateContent,
					false,
					mockLog
				);
			}).toThrow();

			// Verify error was logged
			expect(logs).toContainEqual({
				level: 'error',
				message: expect.stringContaining('Failed to create')
			});

			// Restore permissions for cleanup
			fs.chmodSync(readOnlyDir, 0o755);
		});

		test('should handle read errors on existing files', () => {
			// Create a file then remove read permissions
			fs.writeFileSync(testGitignorePath, 'existing content');
			fs.chmodSync(testGitignorePath, 0o000); // No permissions

			const templateContent = `# Test
test.txt

# Task files
tasks.json
tasks/ `;

			const logs = [];
			const mockLog = (level, message) => logs.push({ level, message });

			expect(() => {
				manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);
			}).toThrow();

			// Verify error was logged
			expect(logs).toContainEqual({
				level: 'error',
				message: expect.stringContaining('Failed to merge content')
			});

			// Restore permissions for cleanup
			fs.chmodSync(testGitignorePath, 0o644);
		});
	});

	describe('Real-world Scenarios', () => {
		test('should handle typical Node.js project .gitignore', () => {
			const existingNodeGitignore = `# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Dependency directories
node_modules/
jspm_packages/

# Optional npm cache directory
.npm

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# dotenv environment variables file
.env

# next.js build output
.next`;

			const taskMasterTemplate = `# Logs
logs
*.log

# Dependencies
node_modules/

# Environment variables
.env

# Build output
dist/
build/

# Task files
tasks.json
tasks/ `;

			fs.writeFileSync(testGitignorePath, existingNodeGitignore);

			manageGitignoreFile(testGitignorePath, taskMasterTemplate, false);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should retain existing Node.js specific entries
			expect(content).toContain('npm-debug.log*');
			expect(content).toContain('yarn-debug.log*');
			expect(content).toContain('*.pid');
			expect(content).toContain('jspm_packages/');
			expect(content).toContain('.npm');
			expect(content).toContain('*.tgz');
			expect(content).toContain('.yarn-integrity');
			expect(content).toContain('.next');

			// Should add new content from template that wasn't present
			expect(content).toContain('dist/');
			expect(content).toContain('build/');

			// Should add task files section with correct preference (storeTasksInGit = false means uncommented)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
			);

			// Should not duplicate common entries
			const nodeModulesMatches = content.match(/node_modules\//g);
			expect(nodeModulesMatches).toHaveLength(1);

			const logsMatches = content.match(/# Logs/g);
			expect(logsMatches).toHaveLength(1);
		});

		test('should handle project with existing task files in git', () => {
			const existingContent = `# Dependencies
node_modules/

# Logs
*.log

# Current task setup - keeping in git
# Task files
tasks.json
tasks/

# Build output
dist/`;

			const templateContent = `# New template
# Dependencies
node_modules/

# Task files
tasks.json
tasks/ `;

			fs.writeFileSync(testGitignorePath, existingContent);

			// Change preference to exclude tasks from git (storeTasksInGit = false means uncommented/ignored)
			manageGitignoreFile(testGitignorePath, templateContent, false);

			const content = fs.readFileSync(testGitignorePath, 'utf8');

			// Should retain existing content
			expect(content).toContain('# Dependencies');
			expect(content).toContain('node_modules/');
			expect(content).toContain('# Logs');
			expect(content).toContain('*.log');
			expect(content).toContain('# Build output');
			expect(content).toContain('dist/');

			// Should update task preference to uncommented (storeTasksInGit = false)
			expect(content).toMatch(
				/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
			);
		});
	});
});
@@ -133,7 +133,7 @@ jest.mock('../../../scripts/modules/utils.js', () => ({
 	readComplexityReport: mockReadComplexityReport,
 	CONFIG: {
 		model: 'claude-3-7-sonnet-20250219',
-		maxTokens: 64000,
+		maxTokens: 8192,
 		temperature: 0.2,
 		defaultSubtasks: 5
 	}
@@ -625,19 +625,38 @@ describe('MCP Server Direct Functions', () => {
 			// For successful cases, record that functions were called but don't make real calls
 			mockEnableSilentMode();

-			// Mock expandAllTasks
+			// Mock expandAllTasks - now returns a structured object instead of undefined
 			const mockExpandAll = jest.fn().mockImplementation(async () => {
-				// Just simulate success without any real operations
-				return undefined; // expandAllTasks doesn't return anything
+				// Return the new structured response that matches the actual implementation
+				return {
+					success: true,
+					expandedCount: 2,
+					failedCount: 0,
+					skippedCount: 1,
+					tasksToExpand: 3,
+					telemetryData: {
+						timestamp: new Date().toISOString(),
+						commandName: 'expand-all-tasks',
+						totalCost: 0.05,
+						totalTokens: 1000,
+						inputTokens: 600,
+						outputTokens: 400
+					}
+				};
 			});

-			// Call mock expandAllTasks
-			await mockExpandAll(
-				args.num,
-				args.research || false,
-				args.prompt || '',
-				args.force || false,
-				{ mcpLog: mockLogger, session: options.session }
+			// Call mock expandAllTasks with the correct signature
+			const result = await mockExpandAll(
+				args.file, // tasksPath
+				args.num, // numSubtasks
+				args.research || false, // useResearch
+				args.prompt || '', // additionalContext
+				args.force || false, // force
+				{
+					mcpLog: mockLogger,
+					session: options.session,
+					projectRoot: args.projectRoot
+				}
 			);

 			mockDisableSilentMode();
@@ -645,13 +664,14 @@ describe('MCP Server Direct Functions', () => {
 			return {
 				success: true,
 				data: {
-					message: 'Successfully expanded all pending tasks with subtasks',
+					message: `Expand all operation completed. Expanded: ${result.expandedCount}, Failed: ${result.failedCount}, Skipped: ${result.skippedCount}`,
 					details: {
-						numSubtasks: args.num,
-						research: args.research || false,
-						prompt: args.prompt || '',
-						force: args.force || false
-					}
+						expandedCount: result.expandedCount,
+						failedCount: result.failedCount,
+						skippedCount: result.skippedCount,
+						tasksToExpand: result.tasksToExpand
+					},
+					telemetryData: result.telemetryData
 				}
 			};
 		}
@@ -671,10 +691,13 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
-			expect(result.data.message).toBe(
-				'Successfully expanded all pending tasks with subtasks'
-			);
-			expect(result.data.details.numSubtasks).toBe(3);
+			expect(result.data.message).toMatch(/Expand all operation completed/);
+			expect(result.data.details.expandedCount).toBe(2);
+			expect(result.data.details.failedCount).toBe(0);
+			expect(result.data.details.skippedCount).toBe(1);
+			expect(result.data.details.tasksToExpand).toBe(3);
+			expect(result.data.telemetryData).toBeDefined();
+			expect(result.data.telemetryData.commandName).toBe('expand-all-tasks');
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
@@ -695,7 +718,8 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
 			expect(result.data.details.research).toBe(true);
+			expect(result.data.details.expandedCount).toBe(2);
+			expect(result.data.telemetryData).toBeDefined();
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
@@ -715,7 +739,8 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
 			expect(result.data.details.force).toBe(true);
+			expect(result.data.details.expandedCount).toBe(2);
+			expect(result.data.telemetryData).toBeDefined();
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
@@ -735,11 +760,77 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
 			expect(result.data.details.prompt).toBe(
 				'Additional context for subtasks'
 			);
+			expect(result.data.details.expandedCount).toBe(2);
+			expect(result.data.telemetryData).toBeDefined();
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
+
+		test('should handle case with no eligible tasks', async () => {
+			// Arrange
+			const args = {
+				projectRoot: testProjectRoot,
+				file: testTasksPath,
+				num: 3
+			};
+
+			// Act - Mock the scenario where no tasks are eligible for expansion
+			async function testNoEligibleTasks(args, mockLogger, options = {}) {
+				mockEnableSilentMode();
+
+				const mockExpandAll = jest.fn().mockImplementation(async () => {
+					return {
+						success: true,
+						expandedCount: 0,
+						failedCount: 0,
+						skippedCount: 0,
+						tasksToExpand: 0,
+						telemetryData: null,
+						message: 'No tasks eligible for expansion.'
+					};
+				});
+
+				const result = await mockExpandAll(
+					args.file,
+					args.num,
+					false,
+					'',
+					false,
+					{
+						mcpLog: mockLogger,
+						session: options.session,
+						projectRoot: args.projectRoot
+					},
+					'json'
+				);
+
+				mockDisableSilentMode();
+
+				return {
+					success: true,
+					data: {
+						message: result.message,
+						details: {
+							expandedCount: result.expandedCount,
+							failedCount: result.failedCount,
+							skippedCount: result.skippedCount,
+							tasksToExpand: result.tasksToExpand
+						},
+						telemetryData: result.telemetryData
+					}
+				};
+			}
+
+			const result = await testNoEligibleTasks(args, mockLogger, {
+				session: mockSession
+			});
+
+			// Assert
+			expect(result.success).toBe(true);
+			expect(result.data.message).toBe('No tasks eligible for expansion.');
+			expect(result.data.details.expandedCount).toBe(0);
+			expect(result.data.details.tasksToExpand).toBe(0);
+			expect(result.data.telemetryData).toBeNull();
+		});
 	});
 });

@@ -129,7 +129,7 @@ const DEFAULT_CONFIG = {
 	fallback: {
 		provider: 'anthropic',
 		modelId: 'claude-3-5-sonnet',
-		maxTokens: 64000,
+		maxTokens: 8192,
 		temperature: 0.2
 	}
 },

@@ -75,7 +75,7 @@ const DEFAULT_CONFIG = {
 	fallback: {
 		provider: 'anthropic',
 		modelId: 'claude-3-5-sonnet',
-		maxTokens: 64000,
+		maxTokens: 8192,
 		temperature: 0.2
 	}
 },
538	tests/unit/initialize-project.test.js	Normal file
@@ -0,0 +1,538 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import os from 'os';

// Reduce noise in test output
process.env.TASKMASTER_LOG_LEVEL = 'error';

// === Mock everything early ===
jest.mock('child_process', () => ({ execSync: jest.fn() }));
jest.mock('fs', () => ({
	...jest.requireActual('fs'),
	mkdirSync: jest.fn(),
	writeFileSync: jest.fn(),
	readFileSync: jest.fn(),
	appendFileSync: jest.fn(),
	existsSync: jest.fn(),
	mkdtempSync: jest.requireActual('fs').mkdtempSync,
	rmSync: jest.requireActual('fs').rmSync
}));

// Mock console methods to suppress output
const consoleMethods = ['log', 'info', 'warn', 'error', 'clear'];
consoleMethods.forEach((method) => {
	global.console[method] = jest.fn();
});

// Mock ES modules using unstable_mockModule
jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
	isSilentMode: jest.fn(() => true),
	enableSilentMode: jest.fn(),
	log: jest.fn(),
	findProjectRoot: jest.fn(() => process.cwd())
}));

// Mock git-utils module
jest.unstable_mockModule('../../scripts/modules/utils/git-utils.js', () => ({
	insideGitWorkTree: jest.fn(() => false)
}));

// Mock rule transformer
jest.unstable_mockModule('../../src/utils/rule-transformer.js', () => ({
	convertAllRulesToProfileRules: jest.fn(),
	getRulesProfile: jest.fn(() => ({
		conversionConfig: {},
		globalReplacements: []
	}))
}));

// Mock any other modules that might output or do real operations
jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
	createDefaultConfig: jest.fn(() => ({ models: {}, project: {} })),
	saveConfig: jest.fn()
}));

// Mock display libraries
jest.mock('figlet', () => ({ textSync: jest.fn(() => 'MOCKED BANNER') }));
jest.mock('boxen', () => jest.fn(() => 'MOCKED BOX'));
jest.mock('gradient-string', () => jest.fn(() => jest.fn((text) => text)));
jest.mock('chalk', () => ({
	blue: jest.fn((text) => text),
	green: jest.fn((text) => text),
	red: jest.fn((text) => text),
	yellow: jest.fn((text) => text),
	cyan: jest.fn((text) => text),
	white: jest.fn((text) => text),
	dim: jest.fn((text) => text),
	bold: jest.fn((text) => text),
	underline: jest.fn((text) => text)
}));

const { execSync } = jest.requireMock('child_process');
const mockFs = jest.requireMock('fs');

// Import the mocked modules
const mockUtils = await import('../../scripts/modules/utils.js');
const mockGitUtils = await import('../../scripts/modules/utils/git-utils.js');
const mockRuleTransformer = await import('../../src/utils/rule-transformer.js');

// Import after mocks
const { initializeProject } = await import('../../scripts/init.js');

describe('initializeProject – Git / Alias flag logic', () => {
	let tmpDir;
	const origCwd = process.cwd();

	// Standard non-interactive options for all tests
	const baseOptions = {
		yes: true,
		skipInstall: true,
		name: 'test-project',
		description: 'Test project description',
		version: '1.0.0',
		author: 'Test Author'
	};

	beforeEach(() => {
		jest.clearAllMocks();

		// Set up basic fs mocks
		mockFs.mkdirSync.mockImplementation(() => {});
		mockFs.writeFileSync.mockImplementation(() => {});
		mockFs.readFileSync.mockImplementation((filePath) => {
			if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
				return 'mock template content';
			}
			if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
				return '# existing config';
			}
			return '';
		});
		mockFs.appendFileSync.mockImplementation(() => {});
		mockFs.existsSync.mockImplementation((filePath) => {
			// Template source files exist
			if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
				return true;
			}
			// Shell config files exist by default
			if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
				return true;
			}
			return false;
		});

		// Reset utils mocks
		mockUtils.isSilentMode.mockReturnValue(true);
		mockGitUtils.insideGitWorkTree.mockReturnValue(false);

		// Default execSync mock
		execSync.mockImplementation(() => '');

		tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-init-'));
		process.chdir(tmpDir);
	});

	afterEach(() => {
		process.chdir(origCwd);
		fs.rmSync(tmpDir, { recursive: true, force: true });
||||
});
|
||||
|
||||
describe('Git Flag Behavior', () => {
|
||||
it('completes successfully with git:false in dry run', async () => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result.dryRun).toBe(true);
|
||||
});
|
||||
|
||||
it('completes successfully with git:true when not inside repo', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('completes successfully when already inside repo', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(true);
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('uses default git behavior without errors', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles git command failures gracefully', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
execSync.mockImplementation((cmd) => {
|
||||
if (cmd.includes('git init')) {
|
||||
throw new Error('git not found');
|
||||
}
|
||||
return '';
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Alias Flag Behavior', () => {
|
||||
it('completes successfully when aliases:true and environment is set up', async () => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
process.env.SHELL = '/bin/zsh';
|
||||
process.env.HOME = '/mock/home';
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: true,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
|
||||
it('completes successfully when aliases:false', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles missing shell gracefully', async () => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
delete process.env.SHELL; // Remove shell env var
|
||||
process.env.HOME = '/mock/home';
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: true,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
|
||||
it('handles missing shell config file gracefully', async () => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
process.env.SHELL = '/bin/zsh';
|
||||
process.env.HOME = '/mock/home';
|
||||
|
||||
// Shell config doesn't exist
|
||||
mockFs.existsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
|
||||
return false;
|
||||
}
|
||||
if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: true,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
});
|
||||
|
||||
describe('Flag Combinations', () => {
|
||||
it.each`
|
||||
git | aliases | description
|
||||
${true} | ${true} | ${'git & aliases enabled'}
|
||||
${true} | ${false} | ${'git enabled, aliases disabled'}
|
||||
${false} | ${true} | ${'git disabled, aliases enabled'}
|
||||
${false} | ${false} | ${'git & aliases disabled'}
|
||||
`('handles $description without errors', async ({ git, aliases }) => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
if (aliases) {
|
||||
process.env.SHELL = '/bin/zsh';
|
||||
process.env.HOME = '/mock/home';
|
||||
}
|
||||
|
||||
if (git) {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
}
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git,
|
||||
aliases,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
});
|
||||
|
||||
describe('Dry Run Mode', () => {
|
||||
it('returns dry run result and performs no operations', async () => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: true,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result.dryRun).toBe(true);
|
||||
});
|
||||
|
||||
it.each`
|
||||
git | aliases | description
|
||||
${true} | ${false} | ${'git-specific behavior'}
|
||||
${false} | ${false} | ${'no-git behavior'}
|
||||
${false} | ${true} | ${'alias behavior'}
|
||||
`('shows $description in dry run', async ({ git, aliases }) => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git,
|
||||
aliases,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result.dryRun).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('Error Handling', () => {
|
||||
it('handles npm install failures gracefully', async () => {
|
||||
execSync.mockImplementation((cmd) => {
|
||||
if (cmd.includes('npm install')) {
|
||||
throw new Error('npm failed');
|
||||
}
|
||||
return '';
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
skipInstall: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles git failures gracefully', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
execSync.mockImplementation((cmd) => {
|
||||
if (cmd.includes('git init')) {
|
||||
throw new Error('git failed');
|
||||
}
|
||||
return '';
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles file system errors gracefully', async () => {
|
||||
mockFs.mkdirSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Should handle file system errors gracefully
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Non-Interactive Mode', () => {
|
||||
it('bypasses prompts with yes:true', async () => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: true,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result).toEqual({ dryRun: true });
|
||||
});
|
||||
|
||||
it('completes without hanging', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles all flag combinations without hanging', async () => {
|
||||
const flagCombinations = [
|
||||
{ git: true, aliases: true },
|
||||
{ git: true, aliases: false },
|
||||
{ git: false, aliases: true },
|
||||
{ git: false, aliases: false },
|
||||
{} // No flags (uses defaults)
|
||||
];
|
||||
|
||||
for (const flags of flagCombinations) {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
...flags,
|
||||
dryRun: true // Use dry run for speed
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
}
|
||||
});
|
||||
|
||||
it('accepts complete project details', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
name: 'test-project',
|
||||
description: 'test description',
|
||||
version: '2.0.0',
|
||||
author: 'Test User',
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: true
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('works with skipInstall option', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
skipInstall: true,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Function Integration', () => {
|
||||
it('calls utility functions without errors', async () => {
|
||||
await initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
});
|
||||
|
||||
// Verify that utility functions were called
|
||||
expect(mockUtils.isSilentMode).toHaveBeenCalled();
|
||||
expect(
|
||||
mockRuleTransformer.convertAllRulesToProfileRules
|
||||
).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('handles template operations gracefully', async () => {
|
||||
// Make file operations throw errors
|
||||
mockFs.writeFileSync.mockImplementation(() => {
|
||||
throw new Error('Write failed');
|
||||
});
|
||||
|
||||
// Should complete despite file operation failures
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('validates boolean flag conversion', async () => {
|
||||
// Test the boolean flag handling specifically
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true, // Should convert to initGit: true
|
||||
aliases: false, // Should convert to addAliases: false
|
||||
dryRun: true
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false, // Should convert to initGit: false
|
||||
aliases: true, // Should convert to addAliases: true
|
||||
dryRun: true
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
});
|
||||
439 tests/unit/manage-gitignore.test.js (new file)
@@ -0,0 +1,439 @@
/**
 * Unit tests for manage-gitignore.js module
 * Tests the logic with Jest spies instead of mocked modules
 */

import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import os from 'os';

// Import the module under test and its exports
import manageGitignoreFile, {
	normalizeLine,
	isTaskLine,
	buildTaskFilesSection,
	TASK_FILES_COMMENT,
	TASK_JSON_PATTERN,
	TASK_DIR_PATTERN
} from '../../src/utils/manage-gitignore.js';

describe('manage-gitignore.js Unit Tests', () => {
	let tempDir;

	beforeEach(() => {
		jest.clearAllMocks();

		// Create a temporary directory for testing
		tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'manage-gitignore-test-'));
	});

	afterEach(() => {
		// Clean up the temporary directory
		try {
			fs.rmSync(tempDir, { recursive: true, force: true });
		} catch (err) {
			// Ignore cleanup errors
		}
	});

	describe('Constants', () => {
		test('should have correct constant values', () => {
			expect(TASK_FILES_COMMENT).toBe('# Task files');
			expect(TASK_JSON_PATTERN).toBe('tasks.json');
			expect(TASK_DIR_PATTERN).toBe('tasks/');
		});
	});

	describe('normalizeLine function', () => {
		test('should remove leading/trailing whitespace', () => {
			expect(normalizeLine(' test ')).toBe('test');
		});

		test('should remove comment hash and trim', () => {
			expect(normalizeLine('# tasks.json')).toBe('tasks.json');
			expect(normalizeLine('#tasks/')).toBe('tasks/');
		});

		test('should handle empty strings', () => {
			expect(normalizeLine('')).toBe('');
			expect(normalizeLine(' ')).toBe('');
		});

		test('should handle lines without comments', () => {
			expect(normalizeLine('tasks.json')).toBe('tasks.json');
		});
	});

	describe('isTaskLine function', () => {
		test('should identify tasks.json patterns', () => {
			expect(isTaskLine('tasks.json')).toBe(true);
			expect(isTaskLine('# tasks.json')).toBe(true);
			expect(isTaskLine(' # tasks.json ')).toBe(true);
		});

		test('should identify tasks/ patterns', () => {
			expect(isTaskLine('tasks/')).toBe(true);
			expect(isTaskLine('# tasks/')).toBe(true);
			expect(isTaskLine(' # tasks/ ')).toBe(true);
		});

		test('should reject non-task patterns', () => {
			expect(isTaskLine('node_modules/')).toBe(false);
			expect(isTaskLine('# Some comment')).toBe(false);
			expect(isTaskLine('')).toBe(false);
			expect(isTaskLine('tasks.txt')).toBe(false);
		});
	});

	describe('buildTaskFilesSection function', () => {
		test('should build commented section when storeTasksInGit is true (tasks stored in git)', () => {
			const result = buildTaskFilesSection(true);
			expect(result).toEqual(['# Task files', '# tasks.json', '# tasks/ ']);
		});

		test('should build uncommented section when storeTasksInGit is false (tasks ignored)', () => {
			const result = buildTaskFilesSection(false);
			expect(result).toEqual(['# Task files', 'tasks.json', 'tasks/ ']);
		});
	});

	describe('manageGitignoreFile function - Input Validation', () => {
		test('should throw error for invalid targetPath', () => {
			expect(() => {
				manageGitignoreFile('', 'content', false);
			}).toThrow('targetPath must be a non-empty string');

			expect(() => {
				manageGitignoreFile(null, 'content', false);
			}).toThrow('targetPath must be a non-empty string');

			expect(() => {
				manageGitignoreFile('invalid.txt', 'content', false);
			}).toThrow('targetPath must end with .gitignore');
		});

		test('should throw error for invalid content', () => {
			expect(() => {
				manageGitignoreFile('.gitignore', '', false);
			}).toThrow('content must be a non-empty string');

			expect(() => {
				manageGitignoreFile('.gitignore', null, false);
			}).toThrow('content must be a non-empty string');
		});

		test('should throw error for invalid storeTasksInGit', () => {
			expect(() => {
				manageGitignoreFile('.gitignore', 'content', 'not-boolean');
			}).toThrow('storeTasksInGit must be a boolean');
		});
	});

	describe('manageGitignoreFile function - File Operations with Spies', () => {
		let writeFileSyncSpy;
		let readFileSyncSpy;
		let existsSyncSpy;
		let mockLog;

		beforeEach(() => {
			// Set up spies
			writeFileSyncSpy = jest
				.spyOn(fs, 'writeFileSync')
				.mockImplementation(() => {});
			readFileSyncSpy = jest
				.spyOn(fs, 'readFileSync')
				.mockImplementation(() => '');
			existsSyncSpy = jest
				.spyOn(fs, 'existsSync')
				.mockImplementation(() => false);
			mockLog = jest.fn();
		});

		afterEach(() => {
			// Restore original implementations
			writeFileSyncSpy.mockRestore();
			readFileSyncSpy.mockRestore();
			existsSyncSpy.mockRestore();
		});

		describe('New File Creation', () => {
			const templateContent = `# Logs
logs
*.log

# Task files
tasks.json
tasks/ `;

			test('should create new file with commented task lines when storeTasksInGit is true', () => {
				existsSyncSpy.mockReturnValue(false); // File doesn't exist

				manageGitignoreFile('.gitignore', templateContent, true, mockLog);

				expect(writeFileSyncSpy).toHaveBeenCalledWith(
					'.gitignore',
					`# Logs
logs
*.log

# Task files
# tasks.json
# tasks/ `
				);
				expect(mockLog).toHaveBeenCalledWith(
					'success',
					'Created .gitignore with full template'
				);
			});

			test('should create new file with uncommented task lines when storeTasksInGit is false', () => {
				existsSyncSpy.mockReturnValue(false); // File doesn't exist

				manageGitignoreFile('.gitignore', templateContent, false, mockLog);

				expect(writeFileSyncSpy).toHaveBeenCalledWith(
					'.gitignore',
					`# Logs
logs
*.log

# Task files
tasks.json
tasks/ `
				);
				expect(mockLog).toHaveBeenCalledWith(
					'success',
					'Created .gitignore with full template'
				);
			});

			test('should handle write errors gracefully', () => {
				existsSyncSpy.mockReturnValue(false);
				const writeError = new Error('Permission denied');
				writeFileSyncSpy.mockImplementation(() => {
					throw writeError;
				});

				expect(() => {
					manageGitignoreFile('.gitignore', templateContent, false, mockLog);
				}).toThrow('Permission denied');

				expect(mockLog).toHaveBeenCalledWith(
					'error',
					'Failed to create .gitignore: Permission denied'
				);
			});
		});

		describe('File Merging', () => {
			const templateContent = `# Logs
logs
*.log

# Dependencies
node_modules/

# Task files
tasks.json
tasks/ `;

			test('should merge with existing file and add new content', () => {
				const existingContent = `# Old content
old-file.txt

# Task files
# tasks.json
# tasks/`;

				existsSyncSpy.mockReturnValue(true); // File exists
				readFileSyncSpy.mockReturnValue(existingContent);

				manageGitignoreFile('.gitignore', templateContent, false, mockLog);

				expect(writeFileSyncSpy).toHaveBeenCalledWith(
					'.gitignore',
					expect.stringContaining('# Old content')
				);
				expect(writeFileSyncSpy).toHaveBeenCalledWith(
					'.gitignore',
					expect.stringContaining('# Logs')
				);
				expect(writeFileSyncSpy).toHaveBeenCalledWith(
					'.gitignore',
					expect.stringContaining('# Dependencies')
				);
				expect(writeFileSyncSpy).toHaveBeenCalledWith(
					'.gitignore',
					expect.stringContaining('# Task files')
				);
			});

			test('should remove existing task section and replace with new preferences', () => {
				const existingContent = `# Existing
existing.txt

# Task files
tasks.json
tasks/

# More content
more.txt`;

				existsSyncSpy.mockReturnValue(true);
				readFileSyncSpy.mockReturnValue(existingContent);

				manageGitignoreFile('.gitignore', templateContent, false, mockLog);

				const writtenContent = writeFileSyncSpy.mock.calls[0][1];

				// Should contain existing non-task content
				expect(writtenContent).toContain('# Existing');
				expect(writtenContent).toContain('existing.txt');
				expect(writtenContent).toContain('# More content');
				expect(writtenContent).toContain('more.txt');

				// Should contain new template content
				expect(writtenContent).toContain('# Logs');
				expect(writtenContent).toContain('# Dependencies');

				// Should have uncommented task lines (storeTasksInGit = false means ignore tasks)
				expect(writtenContent).toMatch(
					/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
				);
			});

			test('should handle different task preferences correctly', () => {
				const existingContent = `# Existing
existing.txt

# Task files
# tasks.json
# tasks/`;

				existsSyncSpy.mockReturnValue(true);
				readFileSyncSpy.mockReturnValue(existingContent);

				// Test with storeTasksInGit = true (commented)
				manageGitignoreFile('.gitignore', templateContent, true, mockLog);

				const writtenContent = writeFileSyncSpy.mock.calls[0][1];
				expect(writtenContent).toMatch(
					/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
				);
			});

			test('should not duplicate existing template content', () => {
				const existingContent = `# Logs
logs
*.log

# Dependencies
node_modules/

# Task files
# tasks.json
# tasks/`;

				existsSyncSpy.mockReturnValue(true);
				readFileSyncSpy.mockReturnValue(existingContent);

				manageGitignoreFile('.gitignore', templateContent, false, mockLog);

				const writtenContent = writeFileSyncSpy.mock.calls[0][1];

				// Should not duplicate the logs section
				const logsCount = (writtenContent.match(/# Logs/g) || []).length;
				expect(logsCount).toBe(1);

				// Should not duplicate dependencies
				const depsCount = (writtenContent.match(/# Dependencies/g) || []).length;
				expect(depsCount).toBe(1);
			});

			test('should handle read errors gracefully', () => {
				existsSyncSpy.mockReturnValue(true);
				const readError = new Error('File not readable');
				readFileSyncSpy.mockImplementation(() => {
					throw readError;
				});

				expect(() => {
					manageGitignoreFile('.gitignore', templateContent, false, mockLog);
				}).toThrow('File not readable');

				expect(mockLog).toHaveBeenCalledWith(
					'error',
					'Failed to merge content with .gitignore: File not readable'
				);
			});

			test('should handle write errors during merge gracefully', () => {
				existsSyncSpy.mockReturnValue(true);
				readFileSyncSpy.mockReturnValue('existing content');

				const writeError = new Error('Disk full');
				writeFileSyncSpy.mockImplementation(() => {
					throw writeError;
				});

				expect(() => {
					manageGitignoreFile('.gitignore', templateContent, false, mockLog);
				}).toThrow('Disk full');

				expect(mockLog).toHaveBeenCalledWith(
					'error',
					'Failed to merge content with .gitignore: Disk full'
				);
			});
		});

		describe('Edge Cases', () => {
			test('should work without log function', () => {
				existsSyncSpy.mockReturnValue(false);
				const templateContent = `# Test
test.txt

# Task files
tasks.json
tasks/`;

				expect(() => {
					manageGitignoreFile('.gitignore', templateContent, false);
				}).not.toThrow();

				expect(writeFileSyncSpy).toHaveBeenCalled();
			});

			test('should handle empty existing file', () => {
				existsSyncSpy.mockReturnValue(true);
				readFileSyncSpy.mockReturnValue('');

				const templateContent = `# Task files
tasks.json
tasks/`;

				manageGitignoreFile('.gitignore', templateContent, false, mockLog);

				expect(writeFileSyncSpy).toHaveBeenCalled();
				const writtenContent = writeFileSyncSpy.mock.calls[0][1];
				expect(writtenContent).toContain('# Task files');
			});

			test('should handle template with only task files', () => {
				existsSyncSpy.mockReturnValue(false);
				const templateContent = `# Task files
tasks.json
tasks/ `;

				manageGitignoreFile('.gitignore', templateContent, true, mockLog);

				const writtenContent = writeFileSyncSpy.mock.calls[0][1];
				expect(writtenContent).toBe(`# Task files
# tasks.json
# tasks/ `);
			});
		});
	});
});
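The module these tests exercise, src/utils/manage-gitignore.js, is not part of this diff. A minimal sketch of the three helpers, inferred purely from the assertions above (the real implementation may differ), could look like:

```javascript
// Hypothetical reconstruction of the helpers under test; only their
// observable behavior in the assertions above is guaranteed.
const TASK_FILES_COMMENT = '# Task files';
const TASK_JSON_PATTERN = 'tasks.json';
const TASK_DIR_PATTERN = 'tasks/';

function normalizeLine(line) {
	// Trim whitespace and strip a leading comment hash:
	// '  # tasks.json ' -> 'tasks.json'
	return line.trim().replace(/^#\s*/, '').trim();
}

function isTaskLine(line) {
	// A line is a "task line" if, once normalized, it matches
	// one of the two Task Master ignore patterns exactly.
	const normalized = normalizeLine(line);
	return normalized === TASK_JSON_PATTERN || normalized === TASK_DIR_PATTERN;
}

function buildTaskFilesSection(storeTasksInGit) {
	// When tasks are stored in git, the ignore patterns are commented out
	// so git does NOT ignore them; otherwise they are active patterns.
	const prefix = storeTasksInGit ? '# ' : '';
	return [TASK_FILES_COMMENT, `${prefix}tasks.json`, `${prefix}tasks/ `];
}
```

Note the trailing space in `tasks/ `, which the expected arrays in the tests preserve verbatim.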
324 tests/unit/mcp/tools/expand-all.test.js (new file)
@@ -0,0 +1,324 @@
/**
 * Tests for the expand-all MCP tool
 *
 * Note: This test does NOT test the actual implementation. It tests that:
 * 1. The tool is registered correctly with the correct parameters
 * 2. Arguments are passed correctly to expandAllTasksDirect
 * 3. Error handling works as expected
 *
 * We do NOT import the real implementation - everything is mocked
 */

import { jest } from '@jest/globals';

// Mock EVERYTHING
const mockExpandAllTasksDirect = jest.fn();
jest.mock('../../../../mcp-server/src/core/task-master-core.js', () => ({
	expandAllTasksDirect: mockExpandAllTasksDirect
}));

const mockHandleApiResult = jest.fn((result) => result);
const mockGetProjectRootFromSession = jest.fn(() => '/mock/project/root');
const mockCreateErrorResponse = jest.fn((msg) => ({
	success: false,
	error: { code: 'ERROR', message: msg }
}));
const mockWithNormalizedProjectRoot = jest.fn((fn) => fn);

jest.mock('../../../../mcp-server/src/tools/utils.js', () => ({
	getProjectRootFromSession: mockGetProjectRootFromSession,
	handleApiResult: mockHandleApiResult,
	createErrorResponse: mockCreateErrorResponse,
	withNormalizedProjectRoot: mockWithNormalizedProjectRoot
}));

// Mock the z object from zod
const mockZod = {
	object: jest.fn(() => mockZod),
	string: jest.fn(() => mockZod),
	number: jest.fn(() => mockZod),
	boolean: jest.fn(() => mockZod),
	optional: jest.fn(() => mockZod),
	describe: jest.fn(() => mockZod),
	_def: {
		shape: () => ({
			num: {},
			research: {},
			prompt: {},
			force: {},
			tag: {},
			projectRoot: {}
		})
	}
};

jest.mock('zod', () => ({
	z: mockZod
}));

// DO NOT import the real module - create a fake implementation
// This is the fake implementation of registerExpandAllTool
const registerExpandAllTool = (server) => {
	// Create simplified version of the tool config
	const toolConfig = {
		name: 'expand_all',
		description: 'Use Taskmaster to expand all eligible pending tasks',
		parameters: mockZod,

		// Create a simplified mock of the execute function
		execute: mockWithNormalizedProjectRoot(async (args, context) => {
			const { log, session } = context;

			try {
				log.info &&
					log.info(`Starting expand-all with args: ${JSON.stringify(args)}`);

				// Call expandAllTasksDirect
				const result = await mockExpandAllTasksDirect(args, log, { session });

				// Handle result
				return mockHandleApiResult(result, log);
			} catch (error) {
				log.error && log.error(`Error in expand-all tool: ${error.message}`);
				return mockCreateErrorResponse(error.message);
			}
		})
	};

	// Register the tool with the server
	server.addTool(toolConfig);
};

describe('MCP Tool: expand-all', () => {
	// Create mock server
	let mockServer;
	let executeFunction;

	// Create mock logger
	const mockLogger = {
		debug: jest.fn(),
		info: jest.fn(),
		warn: jest.fn(),
		error: jest.fn()
	};

	// Test data
	const validArgs = {
		num: 3,
		research: true,
		prompt: 'additional context',
		force: false,
		tag: 'master',
		projectRoot: '/test/project'
	};

	// Standard responses
	const successResponse = {
		success: true,
		data: {
			message:
				'Expand all operation completed. Expanded: 2, Failed: 0, Skipped: 1',
			details: {
				expandedCount: 2,
				failedCount: 0,
				skippedCount: 1,
				tasksToExpand: 3,
				telemetryData: {
					commandName: 'expand-all-tasks',
					totalCost: 0.15,
					totalTokens: 2500
				}
			}
		}
	};

	const errorResponse = {
		success: false,
		error: {
			code: 'EXPAND_ALL_ERROR',
			message: 'Failed to expand tasks'
		}
	};

	beforeEach(() => {
		// Reset all mocks
		jest.clearAllMocks();

		// Create mock server
		mockServer = {
			addTool: jest.fn((config) => {
				executeFunction = config.execute;
			})
		};

		// Setup default successful response
		mockExpandAllTasksDirect.mockResolvedValue(successResponse);

		// Register the tool
		registerExpandAllTool(mockServer);
	});

	test('should register the tool correctly', () => {
		// Verify tool was registered
		expect(mockServer.addTool).toHaveBeenCalledWith(
			expect.objectContaining({
				name: 'expand_all',
				description: expect.stringContaining('expand all eligible pending'),
				parameters: expect.any(Object),
				execute: expect.any(Function)
			})
		);

		// Verify the tool config was passed
		const toolConfig = mockServer.addTool.mock.calls[0][0];
		expect(toolConfig).toHaveProperty('parameters');
		expect(toolConfig).toHaveProperty('execute');
	});

	test('should execute the tool with valid parameters', async () => {
		// Setup context
		const mockContext = {
			log: mockLogger,
			session: { workingDirectory: '/mock/dir' }
		};

		// Execute the function
		const result = await executeFunction(validArgs, mockContext);

		// Verify expandAllTasksDirect was called with correct arguments
		expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
			validArgs,
			mockLogger,
			{ session: mockContext.session }
		);

		// Verify handleApiResult was called
		expect(mockHandleApiResult).toHaveBeenCalledWith(
			successResponse,
			mockLogger
		);
		expect(result).toEqual(successResponse);
	});

	test('should handle expand all with no eligible tasks', async () => {
		// Arrange
		const mockDirectResult = {
			success: true,
			data: {
				message:
					'Expand all operation completed. Expanded: 0, Failed: 0, Skipped: 0',
				details: {
					expandedCount: 0,
					failedCount: 0,
					skippedCount: 0,
					tasksToExpand: 0,
					telemetryData: null
				}
			}
		};

		mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
		mockHandleApiResult.mockReturnValue({
			success: true,
			data: mockDirectResult.data
		});

		// Act
		const result = await executeFunction(validArgs, {
			log: mockLogger,
			session: { workingDirectory: '/test' }
		});

		// Assert
		expect(result.success).toBe(true);
		expect(result.data.details.expandedCount).toBe(0);
		expect(result.data.details.tasksToExpand).toBe(0);
});
|
||||
|
||||
test('should handle expand all with mixed success/failure', async () => {
|
||||
// Arrange
|
||||
const mockDirectResult = {
|
||||
success: true,
|
||||
data: {
|
||||
message:
|
||||
'Expand all operation completed. Expanded: 2, Failed: 1, Skipped: 0',
|
||||
details: {
|
||||
expandedCount: 2,
|
||||
failedCount: 1,
|
||||
skippedCount: 0,
|
||||
tasksToExpand: 3,
|
||||
telemetryData: {
|
||||
commandName: 'expand-all-tasks',
|
||||
totalCost: 0.1,
|
||||
totalTokens: 1500
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
|
||||
mockHandleApiResult.mockReturnValue({
|
||||
success: true,
|
||||
data: mockDirectResult.data
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await executeFunction(validArgs, {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/test' }
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.details.expandedCount).toBe(2);
|
||||
expect(result.data.details.failedCount).toBe(1);
|
||||
});
|
||||
|
||||
test('should handle errors from expandAllTasksDirect', async () => {
|
||||
// Arrange
|
||||
mockExpandAllTasksDirect.mockRejectedValue(
|
||||
new Error('Direct function error')
|
||||
);
|
||||
|
||||
// Act
|
||||
const result = await executeFunction(validArgs, {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/test' }
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(mockLogger.error).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Error in expand-all tool')
|
||||
);
|
||||
expect(mockCreateErrorResponse).toHaveBeenCalledWith(
|
||||
'Direct function error'
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle different argument combinations', async () => {
|
||||
// Test with minimal args
|
||||
const minimalArgs = {
|
||||
projectRoot: '/test/project'
|
||||
};
|
||||
|
||||
// Act
|
||||
await executeFunction(minimalArgs, {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/test' }
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
|
||||
minimalArgs,
|
||||
mockLogger,
|
||||
expect.any(Object)
|
||||
);
|
||||
});
|
||||
|
||||
test('should use withNormalizedProjectRoot wrapper correctly', () => {
|
||||
// Verify that the execute function is wrapped with withNormalizedProjectRoot
|
||||
expect(mockWithNormalizedProjectRoot).toHaveBeenCalledWith(
|
||||
expect.any(Function)
|
||||
);
|
||||
});
|
||||
});
|
||||
tests/unit/scripts/modules/task-manager/expand-all-tasks.test.js (new file, 502 lines)
@@ -0,0 +1,502 @@
/**
 * Tests for the expand-all-tasks.js module
 */
import { jest } from '@jest/globals';

// Mock the dependencies before importing the module under test
jest.unstable_mockModule(
	'../../../../../scripts/modules/task-manager/expand-task.js',
	() => ({
		default: jest.fn()
	})
);

jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	readJSON: jest.fn(),
	log: jest.fn(),
	isSilentMode: jest.fn(() => false),
	findProjectRoot: jest.fn(() => '/test/project'),
	aggregateTelemetry: jest.fn()
}));

jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => ({
		getDebugFlag: jest.fn(() => false)
	})
);

jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
	startLoadingIndicator: jest.fn(),
	stopLoadingIndicator: jest.fn(),
	displayAiUsageSummary: jest.fn()
}));

jest.unstable_mockModule('chalk', () => ({
	default: {
		white: { bold: jest.fn((text) => text) },
		cyan: jest.fn((text) => text),
		green: jest.fn((text) => text),
		gray: jest.fn((text) => text),
		red: jest.fn((text) => text),
		bold: jest.fn((text) => text)
	}
}));

jest.unstable_mockModule('boxen', () => ({
	default: jest.fn((text) => text)
}));

// Import the mocked modules
const { default: expandTask } = await import(
	'../../../../../scripts/modules/task-manager/expand-task.js'
);
const { readJSON, aggregateTelemetry, findProjectRoot } = await import(
	'../../../../../scripts/modules/utils.js'
);

// Import the module under test
const { default: expandAllTasks } = await import(
	'../../../../../scripts/modules/task-manager/expand-all-tasks.js'
);

const mockExpandTask = expandTask;
const mockReadJSON = readJSON;
const mockAggregateTelemetry = aggregateTelemetry;
const mockFindProjectRoot = findProjectRoot;

describe('expandAllTasks', () => {
	const mockTasksPath = '/test/tasks.json';
	const mockProjectRoot = '/test/project';
	const mockSession = { userId: 'test-user' };
	const mockMcpLog = {
		info: jest.fn(),
		warn: jest.fn(),
		error: jest.fn(),
		debug: jest.fn()
	};

	const sampleTasksData = {
		tag: 'master',
		tasks: [
			{
				id: 1,
				title: 'Pending Task 1',
				status: 'pending',
				subtasks: []
			},
			{
				id: 2,
				title: 'In Progress Task',
				status: 'in-progress',
				subtasks: []
			},
			{
				id: 3,
				title: 'Done Task',
				status: 'done',
				subtasks: []
			},
			{
				id: 4,
				title: 'Task with Subtasks',
				status: 'pending',
				subtasks: [{ id: '4.1', title: 'Existing subtask' }]
			}
		]
	};

	beforeEach(() => {
		jest.clearAllMocks();
		mockReadJSON.mockReturnValue(sampleTasksData);
		mockAggregateTelemetry.mockReturnValue({
			timestamp: '2024-01-01T00:00:00.000Z',
			commandName: 'expand-all-tasks',
			totalCost: 0.1,
			totalTokens: 2000,
			inputTokens: 1200,
			outputTokens: 800
		});
	});

	describe('successful expansion', () => {
		test('should expand all eligible pending tasks', async () => {
			// Arrange
			const mockTelemetryData = {
				timestamp: '2024-01-01T00:00:00.000Z',
				commandName: 'expand-task',
				totalCost: 0.05,
				totalTokens: 1000
			};

			mockExpandTask.mockResolvedValue({
				telemetryData: mockTelemetryData
			});

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				3, // numSubtasks
				false, // useResearch
				'test context', // additionalContext
				false, // force
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot,
					tag: 'master'
				},
				'json' // outputFormat
			);

			// Assert
			expect(result.success).toBe(true);
			expect(result.expandedCount).toBe(2); // Tasks 1 and 2 (pending and in-progress)
			expect(result.failedCount).toBe(0);
			expect(result.skippedCount).toBe(0);
			expect(result.tasksToExpand).toBe(2);
			expect(result.telemetryData).toBeDefined();

			// Verify readJSON was called correctly
			expect(mockReadJSON).toHaveBeenCalledWith(
				mockTasksPath,
				mockProjectRoot,
				'master'
			);

			// Verify expandTask was called for eligible tasks
			expect(mockExpandTask).toHaveBeenCalledTimes(2);
			expect(mockExpandTask).toHaveBeenCalledWith(
				mockTasksPath,
				1,
				3,
				false,
				'test context',
				expect.objectContaining({
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot,
					tag: 'master'
				}),
				false
			);
		});

		test('should handle force flag to expand tasks with existing subtasks', async () => {
			// Arrange
			mockExpandTask.mockResolvedValue({
				telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
			});

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				2,
				false,
				'',
				true, // force = true
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot
				},
				'json'
			);

			// Assert
			expect(result.expandedCount).toBe(3); // Tasks 1, 2, and 4 (including task with existing subtasks)
			expect(mockExpandTask).toHaveBeenCalledTimes(3);
		});

		test('should handle research flag', async () => {
			// Arrange
			mockExpandTask.mockResolvedValue({
				telemetryData: { commandName: 'expand-task', totalCost: 0.08 }
			});

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				undefined, // numSubtasks not specified
				true, // useResearch = true
				'research context',
				false,
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot
				},
				'json'
			);

			// Assert
			expect(result.success).toBe(true);
			expect(mockExpandTask).toHaveBeenCalledWith(
				mockTasksPath,
				expect.any(Number),
				undefined,
				true, // research flag passed correctly
				'research context',
				expect.any(Object),
				false
			);
		});

		test('should return success with message when no tasks are eligible', async () => {
			// Arrange - Mock tasks data with no eligible tasks
			const noEligibleTasksData = {
				tag: 'master',
				tasks: [
					{ id: 1, status: 'done', subtasks: [] },
					{
						id: 2,
						status: 'pending',
						subtasks: [{ id: '2.1', title: 'existing' }]
					}
				]
			};
			mockReadJSON.mockReturnValue(noEligibleTasksData);

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				3,
				false,
				'',
				false, // force = false, so task with subtasks won't be expanded
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot
				},
				'json'
			);

			// Assert
			expect(result.success).toBe(true);
			expect(result.expandedCount).toBe(0);
			expect(result.failedCount).toBe(0);
			expect(result.skippedCount).toBe(0);
			expect(result.tasksToExpand).toBe(0);
			expect(result.message).toBe('No tasks eligible for expansion.');
			expect(mockExpandTask).not.toHaveBeenCalled();
		});
	});

	describe('error handling', () => {
		test('should handle expandTask failures gracefully', async () => {
			// Arrange
			mockExpandTask
				.mockResolvedValueOnce({ telemetryData: { totalCost: 0.05 } }) // First task succeeds
				.mockRejectedValueOnce(new Error('AI service error')); // Second task fails

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				3,
				false,
				'',
				false,
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot
				},
				'json'
			);

			// Assert
			expect(result.success).toBe(true);
			expect(result.expandedCount).toBe(1);
			expect(result.failedCount).toBe(1);
		});

		test('should throw error when tasks.json is invalid', async () => {
			// Arrange
			mockReadJSON.mockReturnValue(null);

			// Act & Assert
			await expect(
				expandAllTasks(
					mockTasksPath,
					3,
					false,
					'',
					false,
					{
						session: mockSession,
						mcpLog: mockMcpLog,
						projectRoot: mockProjectRoot
					},
					'json'
				)
			).rejects.toThrow('Invalid tasks data');
		});

		test('should throw error when project root cannot be determined', async () => {
			// Arrange - Mock findProjectRoot to return null for this test
			mockFindProjectRoot.mockReturnValueOnce(null);

			// Act & Assert
			await expect(
				expandAllTasks(
					mockTasksPath,
					3,
					false,
					'',
					false,
					{
						session: mockSession,
						mcpLog: mockMcpLog
						// No projectRoot provided, and findProjectRoot will return null
					},
					'json'
				)
			).rejects.toThrow('Could not determine project root directory');
		});
	});

	describe('telemetry aggregation', () => {
		test('should aggregate telemetry data from multiple expand operations', async () => {
			// Arrange
			const telemetryData1 = {
				commandName: 'expand-task',
				totalCost: 0.03,
				totalTokens: 600
			};
			const telemetryData2 = {
				commandName: 'expand-task',
				totalCost: 0.04,
				totalTokens: 800
			};

			mockExpandTask
				.mockResolvedValueOnce({ telemetryData: telemetryData1 })
				.mockResolvedValueOnce({ telemetryData: telemetryData2 });

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				3,
				false,
				'',
				false,
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot
				},
				'json'
			);

			// Assert
			expect(mockAggregateTelemetry).toHaveBeenCalledWith(
				[telemetryData1, telemetryData2],
				'expand-all-tasks'
			);
			expect(result.telemetryData).toBeDefined();
			expect(result.telemetryData.commandName).toBe('expand-all-tasks');
		});

		test('should handle missing telemetry data gracefully', async () => {
			// Arrange
			mockExpandTask.mockResolvedValue({}); // No telemetryData

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				3,
				false,
				'',
				false,
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot
				},
				'json'
			);

			// Assert
			expect(result.success).toBe(true);
			expect(mockAggregateTelemetry).toHaveBeenCalledWith(
				[],
				'expand-all-tasks'
			);
		});
	});

	describe('output format handling', () => {
		test('should use text output format for CLI calls', async () => {
			// Arrange
			mockExpandTask.mockResolvedValue({
				telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
			});

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				3,
				false,
				'',
				false,
				{
					projectRoot: mockProjectRoot
					// No mcpLog provided, should use CLI logger
				},
				'text' // CLI output format
			);

			// Assert
			expect(result.success).toBe(true);
			// In text mode, loading indicators and console output would be used
			// This is harder to test directly but we can verify the result structure
		});

		test('should handle context tag properly', async () => {
			// Arrange
			const taggedTasksData = {
				...sampleTasksData,
				tag: 'feature-branch'
			};
			mockReadJSON.mockReturnValue(taggedTasksData);
			mockExpandTask.mockResolvedValue({
				telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
			});

			// Act
			const result = await expandAllTasks(
				mockTasksPath,
				3,
				false,
				'',
				false,
				{
					session: mockSession,
					mcpLog: mockMcpLog,
					projectRoot: mockProjectRoot,
					tag: 'feature-branch'
				},
				'json'
			);

			// Assert
			expect(mockReadJSON).toHaveBeenCalledWith(
				mockTasksPath,
				mockProjectRoot,
				'feature-branch'
			);
			expect(mockExpandTask).toHaveBeenCalledWith(
				mockTasksPath,
				expect.any(Number),
				3,
				false,
				'',
				expect.objectContaining({
					tag: 'feature-branch'
				}),
				false
			);
		});
	});
});
tests/unit/scripts/modules/task-manager/expand-task.test.js (new file, 888 lines)
@@ -0,0 +1,888 @@
|
||||
/**
|
||||
* Tests for the expand-task.js module
|
||||
*/
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
|
||||
// Mock the dependencies before importing the module under test
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
|
||||
readJSON: jest.fn(),
|
||||
writeJSON: jest.fn(),
|
||||
log: jest.fn(),
|
||||
CONFIG: {
|
||||
model: 'mock-claude-model',
|
||||
maxTokens: 4000,
|
||||
temperature: 0.7,
|
||||
debug: false
|
||||
},
|
||||
sanitizePrompt: jest.fn((prompt) => prompt),
|
||||
truncate: jest.fn((text) => text),
|
||||
isSilentMode: jest.fn(() => false),
|
||||
findTaskById: jest.fn(),
|
||||
findProjectRoot: jest.fn((tasksPath) => '/mock/project/root'),
|
||||
getCurrentTag: jest.fn(() => 'master'),
|
||||
ensureTagMetadata: jest.fn((tagObj) => tagObj),
|
||||
flattenTasksWithSubtasks: jest.fn((tasks) => {
|
||||
const allTasks = [];
|
||||
const queue = [...(tasks || [])];
|
||||
while (queue.length > 0) {
|
||||
const task = queue.shift();
|
||||
allTasks.push(task);
|
||||
if (task.subtasks) {
|
||||
for (const subtask of task.subtasks) {
|
||||
queue.push({ ...subtask, id: `${task.id}.${subtask.id}` });
|
||||
}
|
||||
}
|
||||
}
|
||||
return allTasks;
|
||||
}),
|
||||
readComplexityReport: jest.fn(),
|
||||
markMigrationForNotice: jest.fn(),
|
||||
performCompleteTagMigration: jest.fn(),
|
||||
setTasksForTag: jest.fn(),
|
||||
getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || [])
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
|
||||
displayBanner: jest.fn(),
|
||||
getStatusWithColor: jest.fn((status) => status),
|
||||
startLoadingIndicator: jest.fn(),
|
||||
stopLoadingIndicator: jest.fn(),
|
||||
succeedLoadingIndicator: jest.fn(),
|
||||
failLoadingIndicator: jest.fn(),
|
||||
warnLoadingIndicator: jest.fn(),
|
||||
infoLoadingIndicator: jest.fn(),
|
||||
displayAiUsageSummary: jest.fn(),
|
||||
displayContextAnalysis: jest.fn()
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/ai-services-unified.js',
|
||||
() => ({
|
||||
generateTextService: jest.fn().mockResolvedValue({
|
||||
mainResult: JSON.stringify({
|
||||
subtasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Set up project structure',
|
||||
description:
|
||||
'Create the basic project directory structure and configuration files',
|
||||
dependencies: [],
|
||||
details:
|
||||
'Initialize package.json, create src/ and test/ directories, set up linting configuration',
|
||||
status: 'pending',
|
||||
testStrategy:
|
||||
'Verify all expected files and directories are created'
|
||||
},
|
||||
{
|
||||
id: 2,
|
||||
title: 'Implement core functionality',
|
||||
description: 'Develop the main application logic and core features',
|
||||
dependencies: [1],
|
||||
details:
|
||||
'Create main classes, implement business logic, set up data models',
|
||||
status: 'pending',
|
||||
testStrategy: 'Unit tests for all core functions and classes'
|
||||
},
|
||||
{
|
||||
id: 3,
|
||||
title: 'Add user interface',
|
||||
description: 'Create the user interface components and layouts',
|
||||
dependencies: [2],
|
||||
details:
|
||||
'Design UI components, implement responsive layouts, add user interactions',
|
||||
status: 'pending',
|
||||
testStrategy: 'UI tests and visual regression testing'
|
||||
}
|
||||
]
|
||||
}),
|
||||
telemetryData: {
|
||||
timestamp: new Date().toISOString(),
|
||||
userId: '1234567890',
|
||||
commandName: 'expand-task',
|
||||
modelUsed: 'claude-3-5-sonnet',
|
||||
providerName: 'anthropic',
|
||||
inputTokens: 1000,
|
||||
outputTokens: 500,
|
||||
totalTokens: 1500,
|
||||
totalCost: 0.012414,
|
||||
currency: 'USD'
|
||||
}
|
||||
})
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/config-manager.js',
|
||||
() => ({
|
||||
getDefaultSubtasks: jest.fn(() => 3),
|
||||
getDebugFlag: jest.fn(() => false)
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/utils/contextGatherer.js',
|
||||
() => ({
|
||||
ContextGatherer: jest.fn().mockImplementation(() => ({
|
||||
gather: jest.fn().mockResolvedValue({
|
||||
contextSummary: 'Mock context summary',
|
||||
allRelatedTaskIds: [],
|
||||
graphVisualization: 'Mock graph'
|
||||
})
|
||||
}))
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/task-manager/generate-task-files.js',
|
||||
() => ({
|
||||
default: jest.fn().mockResolvedValue()
|
||||
})
|
||||
);
|
||||
|
||||
// Mock external UI libraries
|
||||
jest.unstable_mockModule('chalk', () => ({
|
||||
default: {
|
||||
white: { bold: jest.fn((text) => text) },
|
||||
cyan: Object.assign(
|
||||
jest.fn((text) => text),
|
||||
{
|
||||
bold: jest.fn((text) => text)
|
||||
}
|
||||
),
|
||||
green: jest.fn((text) => text),
|
||||
yellow: jest.fn((text) => text),
|
||||
bold: jest.fn((text) => text)
|
||||
}
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('boxen', () => ({
|
||||
default: jest.fn((text) => text)
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('cli-table3', () => ({
|
||||
default: jest.fn().mockImplementation(() => ({
|
||||
push: jest.fn(),
|
||||
toString: jest.fn(() => 'mocked table')
|
||||
}))
|
||||
}));
|
||||
|
||||
// Mock process.exit to prevent Jest worker crashes
|
||||
const mockExit = jest.spyOn(process, 'exit').mockImplementation((code) => {
|
||||
throw new Error(`process.exit called with "${code}"`);
|
||||
});
|
||||
|
||||
// Import the mocked modules
|
||||
const {
|
||||
readJSON,
|
||||
writeJSON,
|
||||
log,
|
||||
findTaskById,
|
||||
ensureTagMetadata,
|
||||
readComplexityReport,
|
||||
findProjectRoot
|
||||
} = await import('../../../../../scripts/modules/utils.js');
|
||||
|
||||
const { generateTextService } = await import(
|
||||
'../../../../../scripts/modules/ai-services-unified.js'
|
||||
);
|
||||
|
||||
const generateTaskFiles = (
|
||||
await import(
|
||||
'../../../../../scripts/modules/task-manager/generate-task-files.js'
|
||||
)
|
||||
).default;
|
||||
|
||||
// Import the module under test
|
||||
const { default: expandTask } = await import(
|
||||
'../../../../../scripts/modules/task-manager/expand-task.js'
|
||||
);
|
||||
|
||||
describe('expandTask', () => {
|
||||
const sampleTasks = {
|
||||
master: {
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Task 1',
|
||||
description: 'First task',
|
||||
status: 'done',
|
||||
dependencies: [],
|
||||
details: 'Already completed task',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 2,
|
||||
title: 'Task 2',
|
||||
description: 'Second task',
|
||||
status: 'pending',
|
||||
dependencies: [],
|
||||
details: 'Task ready for expansion',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 3,
|
||||
title: 'Complex Task',
|
||||
description: 'A complex task that needs breakdown',
|
||||
status: 'pending',
|
||||
dependencies: [1],
|
||||
details: 'This task involves multiple steps',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 4,
|
||||
title: 'Task with existing subtasks',
|
||||
description: 'Task that already has subtasks',
|
||||
status: 'pending',
|
||||
dependencies: [],
|
||||
details: 'Has existing subtasks',
|
||||
subtasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Existing subtask',
|
||||
description: 'Already exists',
|
||||
status: 'pending',
|
||||
dependencies: []
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
'feature-branch': {
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Feature Task 1',
|
||||
description: 'Task in feature branch',
|
||||
status: 'pending',
|
||||
dependencies: [],
|
||||
details: 'Feature-specific task',
|
||||
subtasks: []
|
||||
}
|
||||
]
|
||||
}
|
||||
};
|
||||
|
||||
// Create a helper function for consistent mcpLog mock
|
||||
const createMcpLogMock = () => ({
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
mockExit.mockClear();
|
||||
|
||||
// Default readJSON implementation - returns tagged structure
|
||||
readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
|
||||
const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
|
||||
const selectedTag = tag || 'master';
|
||||
return {
|
||||
...sampleTasksCopy[selectedTag],
|
||||
tag: selectedTag,
|
||||
_rawTaggedData: sampleTasksCopy
|
||||
};
|
||||
});
|
||||
|
||||
// Default findTaskById implementation
|
||||
findTaskById.mockImplementation((tasks, taskId) => {
|
||||
const id = parseInt(taskId, 10);
|
||||
return tasks.find((t) => t.id === id);
|
||||
});
|
||||
|
||||
// Default complexity report (no report available)
|
||||
readComplexityReport.mockReturnValue(null);
|
||||
|
||||
// Mock findProjectRoot to return consistent path for complexity report
|
||||
findProjectRoot.mockReturnValue('/mock/project/root');
|
||||
|
||||
writeJSON.mockResolvedValue();
|
||||
generateTaskFiles.mockResolvedValue();
|
||||
log.mockImplementation(() => {});
|
||||
|
||||
// Mock console.log to avoid output during tests
|
||||
jest.spyOn(console, 'log').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
console.log.mockRestore();
|
||||
});
|
||||
|
||||
describe('Basic Functionality', () => {
|
||||
test('should expand a task with AI-generated subtasks', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const numSubtasks = 3;
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
const result = await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
numSubtasks,
|
||||
false,
|
||||
'',
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(readJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
expect(generateTextService).toHaveBeenCalledWith(expect.any(Object));
|
||||
expect(writeJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
expect.objectContaining({
|
||||
tasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 2,
|
||||
subtasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 1,
|
||||
title: 'Set up project structure',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 2,
|
||||
title: 'Implement core functionality',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 3,
|
||||
title: 'Add user interface',
|
||||
status: 'pending'
|
||||
})
|
||||
])
|
||||
})
|
||||
]),
|
||||
tag: 'master',
|
||||
_rawTaggedData: expect.objectContaining({
|
||||
master: expect.objectContaining({
|
||||
tasks: expect.any(Array)
|
||||
})
|
||||
})
|
||||
}),
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
task: expect.objectContaining({
|
||||
id: 2,
|
||||
subtasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 1,
|
||||
title: 'Set up project structure',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 2,
|
||||
title: 'Implement core functionality',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 3,
|
||||
title: 'Add user interface',
|
||||
status: 'pending'
|
||||
})
|
||||
])
|
||||
}),
|
||||
telemetryData: expect.any(Object)
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle research flag correctly', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const numSubtasks = 3;
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
numSubtasks,
|
||||
true, // useResearch = true
|
||||
'Additional context for research',
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(generateTextService).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
role: 'research',
|
||||
commandName: expect.any(String)
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle complexity report integration without errors', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act & Assert - Should complete without errors
|
||||
const result = await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
undefined, // numSubtasks not specified
|
||||
false,
|
||||
'',
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert - Should successfully expand and return expected structure
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
task: expect.objectContaining({
|
||||
id: 2,
|
||||
subtasks: expect.any(Array)
|
||||
}),
|
||||
telemetryData: expect.any(Object)
|
||||
})
|
||||
);
|
||||
expect(generateTextService).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
	describe('Tag Handling (The Critical Bug Fix)', () => {
		test('should preserve tagged structure when expanding with default tag', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root',
				tag: 'master' // Explicit tag context
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - CRITICAL: Check tag is passed to readJSON and writeJSON
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				'master'
			);
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tag: 'master',
					_rawTaggedData: expect.objectContaining({
						master: expect.any(Object),
						'feature-branch': expect.any(Object)
					})
				}),
				'/mock/project/root',
				'master' // CRITICAL: Tag must be passed to writeJSON
			);
		});

		test('should preserve tagged structure when expanding with non-default tag', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '1'; // Task in feature-branch
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root',
				tag: 'feature-branch' // Different tag context
			};

			// Configure readJSON to return feature-branch data
			readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
				const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
				return {
					...sampleTasksCopy['feature-branch'],
					tag: 'feature-branch',
					_rawTaggedData: sampleTasksCopy
				};
			});

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - CRITICAL: Check tag preservation for non-default tag
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				'feature-branch'
			);
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tag: 'feature-branch',
					_rawTaggedData: expect.objectContaining({
						master: expect.any(Object),
						'feature-branch': expect.any(Object)
					})
				}),
				'/mock/project/root',
				'feature-branch' // CRITICAL: Correct tag passed to writeJSON
			);
		});

		test('should NOT corrupt tagged structure when tag is undefined', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
				// No tag specified - should default gracefully
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should still preserve structure with undefined tag
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				undefined
			);
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					_rawTaggedData: expect.objectContaining({
						master: expect.any(Object)
					})
				}),
				'/mock/project/root',
				undefined
			);

			// CRITICAL: Verify structure is NOT flattened to old format
			const writeCallArgs = writeJSON.mock.calls[0][1];
			expect(writeCallArgs).toHaveProperty('tasks'); // Should have tasks property from readJSON mock
			expect(writeCallArgs).toHaveProperty('_rawTaggedData'); // Should preserve tagged structure
		});
	});

	describe('Force Flag Handling', () => {
		test('should replace existing subtasks when force=true', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '4'; // Task with existing subtasks
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, true);

			// Assert - Should replace existing subtasks
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tasks: expect.arrayContaining([
						expect.objectContaining({
							id: 4,
							subtasks: expect.arrayContaining([
								expect.objectContaining({
									id: 1,
									title: 'Set up project structure'
								})
							])
						})
					])
				}),
				'/mock/project/root',
				undefined
			);
		});

		test('should append to existing subtasks when force=false', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '4'; // Task with existing subtasks
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should append to existing subtasks with proper ID increments
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tasks: expect.arrayContaining([
						expect.objectContaining({
							id: 4,
							subtasks: expect.arrayContaining([
								// Should contain both existing and new subtasks
								expect.any(Object),
								expect.any(Object),
								expect.any(Object),
								expect.any(Object) // 1 existing + 3 new = 4 total
							])
						})
					])
				}),
				'/mock/project/root',
				undefined
			);
		});
	});

	describe('Error Handling', () => {
		test('should handle non-existent task ID', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '999'; // Non-existent task
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			findTaskById.mockReturnValue(null);

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow('Task 999 not found');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		test('should expand tasks regardless of status (including done tasks)', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '1'; // Task with 'done' status
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			const result = await expandTask(
				tasksPath,
				taskId,
				3,
				false,
				'',
				context,
				false
			);

			// Assert - Should successfully expand even 'done' tasks
			expect(writeJSON).toHaveBeenCalled();
			expect(result).toEqual(
				expect.objectContaining({
					task: expect.objectContaining({
						id: 1,
						status: 'done', // Status unchanged
						subtasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								title: 'Set up project structure',
								status: 'pending'
							})
						])
					}),
					telemetryData: expect.any(Object)
				})
			);
		});

		test('should handle AI service failures', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			generateTextService.mockRejectedValueOnce(new Error('AI service error'));

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow('AI service error');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		test('should handle file read errors', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			readJSON.mockImplementation(() => {
				throw new Error('File read failed');
			});

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow('File read failed');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		test('should handle invalid tasks data', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			readJSON.mockReturnValue(null);

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow();
		});
	});

	describe('Output Format Handling', () => {
		test('should display telemetry for CLI output format', async () => {
			// Arrange
			const { displayAiUsageSummary } = await import(
				'../../../../../scripts/modules/ui.js'
			);
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				projectRoot: '/mock/project/root'
				// No mcpLog - should trigger CLI mode
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should display telemetry for CLI users
			expect(displayAiUsageSummary).toHaveBeenCalledWith(
				expect.objectContaining({
					commandName: 'expand-task',
					modelUsed: 'claude-3-5-sonnet',
					totalCost: 0.012414
				}),
				'cli'
			);
		});

		test('should not display telemetry for MCP output format', async () => {
			// Arrange
			const { displayAiUsageSummary } = await import(
				'../../../../../scripts/modules/ui.js'
			);
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should NOT display telemetry for MCP (handled at higher level)
			expect(displayAiUsageSummary).not.toHaveBeenCalled();
		});
	});

	describe('Edge Cases', () => {
		test('should handle empty additional context', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should work with empty context (but may include project context)
			expect(generateTextService).toHaveBeenCalledWith(
				expect.objectContaining({
					prompt: expect.stringMatching(/.*/) // Just ensure prompt exists
				})
			);
		});

		test('should handle additional context correctly', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const additionalContext = 'Use React hooks and TypeScript';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(
				tasksPath,
				taskId,
				3,
				false,
				additionalContext,
				context,
				false
			);

			// Assert - Should include additional context in prompt
			expect(generateTextService).toHaveBeenCalledWith(
				expect.objectContaining({
					prompt: expect.stringContaining('Use React hooks and TypeScript')
				})
			);
		});

		test('should handle missing project root in context', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock()
				// No projectRoot in context
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should derive project root from tasksPath
			expect(findProjectRoot).toHaveBeenCalledWith(tasksPath);
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				undefined
			);
		});
	});
});

@@ -123,7 +123,9 @@ describe('updateTasks', () => {
 			details: 'New details 2 based on direction',
 			description: 'Updated description',
 			dependencies: [],
-			priority: 'medium'
+			priority: 'medium',
+			testStrategy: 'Unit test the updated functionality',
+			subtasks: []
 		},
 		{
 			id: 3,
@@ -132,7 +134,9 @@ describe('updateTasks', () => {
 			details: 'New details 3 based on direction',
 			description: 'Updated description',
 			dependencies: [],
-			priority: 'medium'
+			priority: 'medium',
+			testStrategy: 'Integration test the updated features',
+			subtasks: []
 		}
 	];