Compare commits
111 Commits
task-maste...v017-adds-
SHA1s of the 111 commits in this comparison:

ea6a58ef90, f07945734d, d5360f625f, 2e2d290c63, d3edd24b5a, 60c0f26f3c, be0bf18f41, 32236a0bc5, 668b22e615, 4901908f5d,
a047886910, 92234323d7, 514fdb0b78, 932825c2d6, 3c8c62434f, 9d755b9e79, fcc2351b3d, 3db62b5b88, 6b929fa9fa, ddaa1dceef,
cac8c234d6, b205e52d08, 4585a6bbc7, b3ec151b27, 5d9748af89, b28479a09d, 9efbd38f10, dc7a5414c0, 75e017e371, a84cd1a492,
3888ef41d4, 83d6405b17, bb775e3180, 2328efe482, f3fe481f3f, f43b5fdd75, 153b190e0d, 0be5ae59fe, f1d593f887, efd14544f0,
ef9439d441, a49071a6b8, c2d83690a1, b40a94c4ac, 05a389e171, 3352a6a99f, d391f3b5b3, f2c5911e58, bb5a0211f4, 4234cc3d87,
d942db4868, 3cf718a718, a2ff8a97b7, b1b888a5f3, f817de9da6, 806c505aac, 6f225cf81a, 74eb9907f3, 5c29969741, 8e794e18ac,
3ce4d2cc74, 2d85fcc6a7, 0102be4f3b, b6f1376625, d4f21be1a3, f28de8b729, e50230f9ce, 01992ebd0b, af652978a0, 54005d5486,
65b70d746a, f533fd0931, 7db7cf3859, 2434b97247, bac58c606d, 89f8bff219, 366cd161da, a346dd5020, c2709edd78, e53006066e,
2d11b94804, a5e36cf7b4, 9cd18caa3c, 9058d7dfdd, 199e32c2d1, a874a12e17, eb343287ae, 94eeb5117b, 87c85d3d66, 0b8f594ac7,
15b190b87b, 9ae255ccb4, 518f73eefa, 40a52385ba, 78397fe0be, f9b89dc25c, ca69e1294f, ac36e2497e, 1d4b80fe6f, 023f51c579,
1e020023ed, 325f5a2aa3, de46bfd84b, cc26c36366, 15ad34928d, f74d639110, de58e9ede5, 947541e4ee, 275cd55da7, 67ac212973,
235371ff47
.changeset/bright-windows-sing.md (new file, 77 lines)
@@ -0,0 +1,77 @@
---
"task-master-ai": minor
---

Add comprehensive AI-powered research command with intelligent context gathering and interactive follow-ups.

The new `research` command provides AI-powered research capabilities that automatically gather relevant project context to answer your questions. The command intelligently selects context from multiple sources and supports interactive follow-up questions in CLI mode.

**Key Features:**

- **Intelligent Task Discovery**: Automatically finds relevant tasks and subtasks using fuzzy search based on your query keywords, supplementing any explicitly provided task IDs
- **Multi-Source Context**: Gathers context from tasks, files, project structure, and custom text to provide comprehensive answers
- **Interactive Follow-ups**: CLI users can ask follow-up questions that build on the conversation history while allowing fresh context discovery for each question
- **Flexible Detail Levels**: Choose from low (concise), medium (balanced), or high (comprehensive) response detail levels
- **Token Transparency**: Displays detailed token breakdown showing context size, sources, and estimated costs
- **Enhanced Display**: Syntax-highlighted code blocks and structured output with clear visual separation

**Usage Examples:**

```bash
# Basic research with auto-discovered context
task-master research "How should I implement user authentication?"

# Research with specific task context
task-master research "What's the best approach for this?" --id=15,23.2

# Research with file context and project tree
task-master research "How does the current auth system work?" --files=src/auth.js,config/auth.json --tree

# Research with custom context and low detail
task-master research "Quick implementation steps?" --context="Using JWT tokens" --detail=low
```

**Context Sources:**

- **Tasks**: Automatically discovers relevant tasks/subtasks via fuzzy search, plus any explicitly specified via `--id`
- **Files**: Include specific files via `--files` for code-aware responses
- **Project Tree**: Add `--tree` to include project structure overview
- **Custom Context**: Provide additional context via `--context` for domain-specific information

**Interactive Features (CLI only):**

- Follow-up questions that maintain conversation history
- Fresh fuzzy search for each follow-up to discover newly relevant tasks
- Cumulative context building across the conversation
- Clean visual separation between exchanges
- **Save to Tasks**: Save entire research conversations (including follow-ups) directly to task or subtask details with timestamps
- **Clean Menu Interface**: Streamlined inquirer-based menu for follow-up actions without redundant UI elements

**Save Functionality:**

The research command now supports saving complete conversation threads to tasks or subtasks:

- Save research results and follow-up conversations to any task (e.g., "15") or subtask (e.g., "15.2")
- Automatic timestamping and formatting of conversation history
- Validation of task/subtask existence before saving
- Appends to existing task details without overwriting content
- Supports both CLI interactive mode and MCP programmatic access via `--save-to` flag

**Enhanced CLI Options:**

```bash
# Auto-save research results to a task
task-master research "Implementation approach?" --save-to=15

# Combine auto-save with context gathering
task-master research "How to optimize this?" --id=23 --save-to=23.1
```

**MCP Integration:**

- `saveTo` parameter for automatic saving to specified task/subtask ID
- Structured response format with telemetry data
- Silent operation mode for programmatic usage
- Full feature parity with CLI except interactive follow-ups

The research command integrates with the existing AI service layer and supports all configured AI providers. Both CLI and MCP interfaces provide comprehensive research capabilities with intelligent context gathering and flexible output options.
.changeset/chatty-rats-talk.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix Cursor deeplink installation by providing copy-paste instructions for GitHub compatibility
.changeset/cold-pears-poke.md (new file, 13 lines)
@@ -0,0 +1,13 @@
---
'task-master-ai': patch
---

Fix critical bugs in task move functionality:

- **Fixed moving tasks to become subtasks of empty parents**: When moving a task to become a subtask of a parent that had no existing subtasks (e.g., task 89 → task 98.1), the operation would fail with validation errors.

- **Fixed moving subtasks between parents**: Subtasks can now be properly moved between different parent tasks, including to parents that previously had no subtasks.

- **Improved comma-separated batch moves**: Multiple tasks can now be moved simultaneously using comma-separated IDs (e.g., "88,90" → "92,93") with proper error handling and atomic operations.

These fixes enable proper task hierarchy reorganization for corner cases that were previously broken.
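For reference, a minimal sketch of the moves these fixes target, assuming the `move` command's `--from`/`--to` flags:

```bash
# Move task 89 to become the first subtask of task 98 (previously failed on empty parents)
task-master move --from=89 --to=98.1

# Batch move: tasks 88 and 90 become 92 and 93 in a single atomic operation
task-master move --from=88,90 --to=92,93
```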
@@ -2,15 +2,13 @@
  "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
  "changelog": [
    "@changesets/changelog-github",
    {
      "repo": "eyaltoledano/claude-task-master"
    }
    { "repo": "eyaltoledano/claude-task-master" }
  ],
  "commit": false,
  "fixed": [],
  "linked": [],
  "access": "public",
  "baseBranch": "main",
  "ignore": [
    "docs"
  ]
  "updateInternalDependencies": "patch",
  "ignore": []
}
.changeset/curly-dragons-design.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Improve the findTasks algorithm for resolving the tasks path
.changeset/eleven-news-check.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix the `update` tool on MCP returning `No valid tasks found`
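For context, the CLI counterpart of the MCP `update` tool looks roughly like this (a sketch assuming the standard `--from`/`--prompt` flags; the prompt text is just an example):

```bash
# Update task 10 and all following tasks based on new information
task-master update --from=10 --prompt="Switch the API layer from REST to tRPC"
```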
@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Fix module not found for new 0.27.0 release
.changeset/fluffy-waves-allow.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Adds the ability to automatically create/switch tags to match the current git branch. The configuration that enables the git workflow and the auto-switching lives in config.json.
.changeset/four-cups-enter.md (new file, 39 lines)
@@ -0,0 +1,39 @@
---
"task-master-ai": patch
---

Enhanced add-task fuzzy search intelligence and improved user experience

**Smarter Task Discovery:**

- Remove hardcoded category system that always matched "Task management"
- Eliminate arbitrary limits on fuzzy search results (5→25 high relevance, 3→10 medium relevance, 8→20 detailed tasks)
- Improve semantic weighting in Fuse.js search (details=3, description=2, title=1.5) for better relevance
- Generate context-driven task recommendations based on true semantic similarity

**Enhanced Terminal Experience:**

- Fix duplicate banner display issue that was "eating" terminal history (closes #553)
- Remove console.clear() and redundant displayBanner() calls from UI functions
- Preserve command history for better development workflow
- Streamline banner display across all commands (list, next, show, set-status, clear-subtasks, dependency commands)

**Visual Improvements:**

- Replace emoji complexity indicators with clean filled circle characters (●) for professional appearance
- Improve consistency and readability of task complexity display

**AI Provider Compatibility:**

- Change generateObject mode from 'tool' to 'auto' for better cross-provider compatibility
- Add qwen3-235n-a22b:free model support (closes #687)
- Add smart warnings for free OpenRouter models with limitations (rate limits, restricted context, no tool_use)

**Technical Improvements:**

- Enhanced context generation in add-task to rely on semantic similarity rather than rigid pattern matching
- Improved dependency analysis and common pattern detection
- Better handling of task relationships and relevance scoring
- More intelligent task suggestion algorithms

The add-task system now provides truly relevant task context based on semantic understanding rather than arbitrary categories and limits, while maintaining a cleaner and more professional terminal experience.
.changeset/free-pants-rescue.md (new file, 18 lines)
@@ -0,0 +1,18 @@
---
"task-master-ai": minor
---

Enhance update-task with --append flag for timestamped task updates

Adds the `--append` flag to the `update-task` command, enabling it to behave like `update-subtask` with timestamped information appending. This provides more flexible task updating options:

**CLI Enhancement:**
- `task-master update-task --id=5 --prompt="New info"` - Full task update (existing behavior)
- `task-master update-task --id=5 --append --prompt="Progress update"` - Append timestamped info to task details

**Full MCP Integration:**
- MCP tool `update_task` now supports `append` parameter
- Seamless integration with Cursor and other MCP clients
- Consistent behavior between CLI and MCP interfaces

Instead of requiring separate subtask creation for progress tracking, you can now append timestamped information directly to parent tasks while preserving the option for comprehensive task updates.
.changeset/large-wolves-strive.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Update o3 model price
.changeset/late-dryers-relax.md (new file, 11 lines)
@@ -0,0 +1,11 @@
---
"task-master-ai": minor
---

Add --tag flag support to core commands for multi-context task management. Commands like parse-prd, analyze-complexity, and others now support targeting specific task lists, enabling rapid prototyping and parallel development workflows.

Key features:
- parse-prd --tag=feature-name: Parse PRDs into separate task contexts on the fly
- analyze-complexity --tag=branch: Generate tag-specific complexity reports
- All task operations can target specific contexts while preserving other lists
- Non-existent tags are created automatically for seamless workflow
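A short sketch of how these flags combine in practice (the PRD path and tag names are placeholders; listing by tag assumes `list` also accepts `--tag`, as the universal flag support suggests):

```bash
# Parse a PRD into its own tag context, creating the tag on the fly
task-master parse-prd docs/feature-spec.txt --tag=feature-auth

# Generate a complexity report scoped to that tag
task-master analyze-complexity --tag=feature-auth

# View only the tasks in that context
task-master list --tag=feature-auth
```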
.changeset/nasty-chefs-add.md (new file, 8 lines)
@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---

Fixes the expand CLI command's "Complexity report not found" error

- Closes #735
- Closes #728
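For context, this is the flow that previously surfaced the error (a sketch; the task ID is arbitrary):

```bash
# Generate the complexity report first
task-master analyze-complexity

# Expand a task; the command should now locate the report correctly
task-master expand --id=5
```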
.changeset/pink-houses-lay.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Fix double .taskmaster directory paths in file resolution utilities

- Closes #636
.changeset/polite-areas-shave.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Add one-click MCP server installation for Cursor
.changeset/pre.json (new file, 11 lines)
@@ -0,0 +1,11 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.16.1"
  },
  "changesets": [
    "pink-houses-lay",
    "polite-areas-shave"
  ]
}
.changeset/quick-flies-sniff.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix issue with generate command which was creating tasks in the legacy tasks location.
.changeset/six-cups-see.md (new file, 136 lines)
@@ -0,0 +1,136 @@
---
"task-master-ai": minor
---

Introduces Tagged Lists: AI Multi-Context Task Management System

This major release introduces Tagged Lists, a comprehensive system that transforms Task Master into a multi-context task management powerhouse. You can now organize tasks into completely isolated contexts, enabling parallel (agentic) development workflows, team collaboration, and project experimentation without conflicts.

**🏷️ Tagged Task Lists Architecture:**

The new tagged system fundamentally changes how tasks are organized:
- **Legacy Format**: `{ "tasks": [...] }`
- **New Tagged Format**: `{ "master": { "tasks": [...], "metadata": {...} }, "feature-xyz": { "tasks": [...], "metadata": {...} } }`
- **Automatic Migration**: Existing projects seamlessly migrate to tagged format with zero user intervention
- **State Management**: New `.taskmaster/state.json` tracks current tag, last switched time, and migration status
- **Configuration Integration**: Enhanced `.taskmaster/config.json` with tag-specific settings and defaults

**🚀 Complete Tag Management Suite:**

**Core Tag Commands:**
- `task-master tags [--show-metadata]` - List all tags with task counts, completion stats, and metadata
- `task-master add-tag <name> [options]` - Create new tag contexts with optional task copying
- `task-master delete-tag <name> [--yes]` - Delete tags with double confirmation protection
- `task-master use-tag <name>` - Switch contexts and immediately see next available task
- `task-master rename-tag <old> <new>` - Rename tags with automatic current tag reference updates
- `task-master copy-tag <source> <target> [options]` - Duplicate tag contexts for experimentation

**🤖 Full MCP Integration for Tag Management:**

Task Master's multi-context capabilities are now fully exposed through the MCP server, enabling powerful agentic workflows:
- **`list_tags`**: List all available tag contexts.
- **`add_tag`**: Programmatically create new tags.
- **`delete_tag`**: Remove tag contexts.
- **`use_tag`**: Switch the agent's active task context.
- **`rename_tag`**: Rename existing tags.
- **`copy_tag`**: Duplicate entire task contexts for experimentation.

**Tag Creation Options:**
- `--copy-from-current` - Copy tasks from currently active tag
- `--copy-from=<tag>` - Copy tasks from specific tag
- `--from-branch` - Creates a new tag using the active git branch name (for `add-tag` only)
- `--description="<text>"` - Add custom tag descriptions
- Empty tag creation for fresh contexts

**🎯 Universal --tag Flag Support:**

Every task operation now supports tag-specific execution:
- `task-master list --tag=feature-branch` - View tasks in specific context
- `task-master add-task --tag=experiment --prompt="..."` - Create tasks in specific tag
- `task-master parse-prd document.txt --tag=v2-redesign` - Parse PRDs into dedicated contexts
- `task-master analyze-complexity --tag=performance-work` - Generate tag-specific reports
- `task-master set-status --tag=hotfix --id=5 --status=done` - Update tasks in specific contexts
- `task-master expand --tag=research --id=3` - Break down tasks within tag contexts

**📊 Enhanced Workflow Features:**

**Smart Context Switching:**
- `use-tag` command shows immediate next task after switching
- Automatic tag creation when targeting non-existent tags
- Current tag persistence across terminal sessions
- Branch-tag mapping for future Git integration

**Intelligent File Management:**
- Tag-specific complexity reports: `task-complexity-report_tagname.json`
- Master tag uses default filenames: `task-complexity-report.json`
- Automatic file isolation prevents cross-tag contamination

**Advanced Confirmation Logic:**
- Commands only prompt when target tag has existing tasks
- Empty tags allow immediate operations without confirmation
- Smart append vs overwrite detection

**🔄 Seamless Migration & Compatibility:**

**Zero-Disruption Migration:**
- Existing `tasks.json` files automatically migrate on first command
- Master tag receives proper metadata (creation date, description)
- Migration notice shown once with helpful explanation
- All existing commands work identically to before

**State Management:**
- `.taskmaster/state.json` tracks current tag and migration status
- Automatic state creation and maintenance
- Branch-tag mapping foundation for Git integration
- Migration notice tracking to avoid repeated notifications
- Grounds for future context additions

**Backward Compatibility:**
- All existing workflows continue unchanged
- Legacy commands work exactly as before
- Gradual adoption - users can ignore tags entirely if desired
- No breaking changes to existing tasks or file formats

**💡 Real-World Use Cases:**

**Team Collaboration:**
- `task-master add-tag alice --copy-from-current` - Create teammate-specific contexts
- `task-master add-tag bob --copy-from=master` - Onboard new team members
- `task-master use-tag alice` - Switch to teammate's work context

**Feature Development:**
- `task-master parse-prd feature-spec.txt --tag=user-auth` - Dedicated feature planning
- `task-master add-tag experiment --copy-from=user-auth` - Safe experimentation
- `task-master analyze-complexity --tag=user-auth` - Feature-specific analysis

**Release Management:**
- `task-master add-tag v2.0 --description="Next major release"` - Version-specific planning
- `task-master copy-tag master v2.1` - Release branch preparation
- `task-master use-tag hotfix` - Emergency fix context

**Project Phases:**
- `task-master add-tag research --description="Discovery phase"` - Research tasks
- `task-master add-tag implementation --copy-from=research` - Development phase
- `task-master add-tag testing --copy-from=implementation` - QA phase

**🛠️ Technical Implementation:**

**Data Structure:**
- Tagged format with complete isolation between contexts
- Rich metadata per tag (creation date, description, update tracking)
- Automatic metadata enhancement for existing tags
- Clean separation of tag data and internal state

**Performance Optimizations:**
- Dynamic task counting without stored counters
- Efficient tag resolution and caching
- Minimal file I/O with smart data loading
- Responsive table layouts adapting to terminal width

**Error Handling:**
- Comprehensive validation for tag names (alphanumeric, hyphens, underscores)
- Reserved name protection (master, main, default)
- Graceful handling of missing tags and corrupted data
- Detailed error messages with suggested corrections

This release establishes the foundation for advanced multi-context workflows while maintaining the simplicity and power that makes Task Master effective for individual developers.
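A compact end-to-end sketch of the tagged workflow described above, built only from commands listed in this changeset (the tag name and PRD path are placeholders):

```bash
# Create an isolated context and switch to it
task-master add-tag user-auth --description="Auth feature work"
task-master use-tag user-auth

# Plan and analyze work inside that context
task-master parse-prd docs/user-auth-prd.txt --tag=user-auth
task-master analyze-complexity --tag=user-auth

# Inspect all contexts at a glance
task-master tags --show-metadata
```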
.changeset/slick-webs-lead.md (new file, 24 lines)
@@ -0,0 +1,24 @@
---
"task-master-ai": minor
---

Research Save-to-File Feature & Critical MCP Tag Corruption Fix

**🔬 New Research Save-to-File Functionality:**

Added comprehensive save-to-file capability to the research command, enabling users to preserve research sessions for future reference and documentation.

**CLI Integration:**
- New `--save-file` flag for `task-master research` command
- Consistent with existing `--save` and `--save-to` flags for intuitive usage
- Interactive "Save to file" option in follow-up questions menu

**MCP Integration:**
- New `saveToFile` boolean parameter for the `research` MCP tool
- Enables programmatic research saving for AI agents and integrated tools

**File Management:**
- Automatically creates `.taskmaster/docs/research/` directory structure
- Generates timestamped, slugified filenames (e.g., `2025-01-13_what-is-typescript.md`)
- Comprehensive Markdown format with metadata headers including query, timestamp, and context sources
- Clean conversation history formatting without duplicate information
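A minimal usage sketch for the new flag (the query text is just an example):

```bash
# Run research and write the full session to .taskmaster/docs/research/
task-master research "What is TypeScript and should we adopt it?" --save-file

# Combine file export with saving the conversation to a subtask
task-master research "How should we structure the auth module?" --save-file --save-to=15.2
```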
.changeset/slow-lies-make.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---

No longer automatically creates individual task files, as they are not used by the application. You can still generate them anytime using the `generate` command.
.changeset/stale-bats-sin.md (new file, 19 lines)
@@ -0,0 +1,19 @@
---
'task-master-ai': minor
---

Enhanced get-task/show command to support comma-separated task IDs for efficient batch operations

**New Features:**
- **Multiple Task Retrieval**: Pass comma-separated IDs to get/show multiple tasks at once (e.g., `task-master show 1,3,5` or MCP `get_task` with `id: "1,3,5"`)
- **Smart Display Logic**: Single ID shows detailed view, multiple IDs show compact summary table with interactive options
- **Batch Action Menu**: Interactive menu for multiple tasks with copy-paste ready commands for common operations (mark as done/in-progress, expand all, view dependencies, etc.)
- **MCP Array Response**: MCP tool returns structured array of task objects for efficient AI agent context gathering

**Benefits:**
- **Faster Context Gathering**: AI agents can collect multiple tasks/subtasks in one call instead of iterating
- **Improved Workflow**: Interactive batch operations reduce repetitive command execution
- **Better UX**: Responsive layout adapts to terminal width, maintains consistency with existing UI patterns
- **API Efficiency**: RESTful array responses in MCP format enable more sophisticated integrations

This enhancement maintains full backward compatibility while significantly improving efficiency for both human users and AI agents working with multiple tasks.
.changeset/tiny-ads-decide.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
"task-master-ai": minor
---

Adds support for filtering tasks by multiple statuses at once using comma-separated statuses.

Example: `cancelled,deferred`
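A usage sketch, assuming the filter is passed through the `list` command's `--status` flag (the changeset does not name the flag explicitly):

```bash
# Show only cancelled and deferred tasks in a single listing
task-master list --status=cancelled,deferred
```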
.changeset/two-lies-start.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Improves dependency management when moving tasks by updating subtask dependencies that reference sibling subtasks by their old parent-based ID
.changeset/vast-shrimps-happen.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
"task-master-ai": patch
---

Add sync-readme command for task export to the GitHub README

Introduces a new `sync-readme` command that exports your task list to your project's README.md file.

**Features:**

- **Flexible filtering**: Supports `--status` filtering (e.g., pending, done) and `--with-subtasks` flag
- **Smart content management**: Automatically replaces existing exports or appends to new READMEs
- **Metadata display**: Shows export timestamp, subtask inclusion status, and filter settings

**Usage:**

- `task-master sync-readme` - Export tasks without subtasks
- `task-master sync-readme --with-subtasks` - Include subtasks in export
- `task-master sync-readme --status=pending` - Only export pending tasks
- `task-master sync-readme --status=done --with-subtasks` - Export completed tasks with subtasks

Perfect for showcasing project progress on GitHub. Experimental. Open to feedback.
.changeset/yellow-olives-admire.md (new file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---

Adds tag to CLI output so you know which tag you are performing operations on. Already supported in the MCP response.
@@ -1,147 +0,0 @@
# Task Master Commands for Claude Code

Complete guide to using Task Master through Claude Code's slash commands.

## Overview

All Task Master functionality is available through the `/project:tm/` namespace with natural language support and intelligent features.

## Quick Start

```bash
# Install Task Master
/project:tm/setup/quick-install

# Initialize project
/project:tm/init/quick

# Parse requirements
/project:tm/parse-prd requirements.md

# Start working
/project:tm/next
```

## Command Structure

Commands are organized hierarchically to match Task Master's CLI:
- Main commands at `/project:tm/[command]`
- Subcommands for specific operations `/project:tm/[command]/[subcommand]`
- Natural language arguments accepted throughout

## Complete Command Reference

### Setup & Configuration
- `/project:tm/setup/install` - Full installation guide
- `/project:tm/setup/quick-install` - One-line install
- `/project:tm/init` - Initialize project
- `/project:tm/init/quick` - Quick init with -y
- `/project:tm/models` - View AI config
- `/project:tm/models/setup` - Configure AI

### Task Generation
- `/project:tm/parse-prd` - Generate from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files

### Task Management
- `/project:tm/list` - List with natural language filters
- `/project:tm/list/with-subtasks` - Hierarchical view
- `/project:tm/list/by-status <status>` - Filter by status
- `/project:tm/show <id>` - Task details
- `/project:tm/add-task` - Create task
- `/project:tm/update` - Update tasks
- `/project:tm/remove-task` - Delete task

### Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`

### Task Analysis
- `/project:tm/analyze-complexity` - AI analysis
- `/project:tm/complexity-report` - View report
- `/project:tm/expand <id>` - Break down task
- `/project:tm/expand/all` - Expand all complex

### Dependencies
- `/project:tm/add-dependency` - Add dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check issues
- `/project:tm/fix-dependencies` - Auto-fix

### Workflows
- `/project:tm/workflows/smart-flow` - Adaptive workflows
- `/project:tm/workflows/pipeline` - Chain commands
- `/project:tm/workflows/auto-implement` - AI implementation

### Utilities
- `/project:tm/status` - Project dashboard
- `/project:tm/next` - Next task recommendation
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/learn` - Interactive help

## Key Features

### Natural Language Support
All commands understand natural language:
```
/project:tm/list pending high priority
/project:tm/update mark 23 as done
/project:tm/add-task implement OAuth login
```

### Smart Context
Commands analyze project state and provide intelligent suggestions based on:
- Current task status
- Dependencies
- Team patterns
- Project phase

### Visual Enhancements
- Progress bars and indicators
- Status badges
- Organized displays
- Clear hierarchies

## Common Workflows

### Daily Development
```
/project:tm/workflows/smart-flow morning
/project:tm/next
/project:tm/set-status/to-in-progress <id>
/project:tm/set-status/to-done <id>
```

### Task Breakdown
```
/project:tm/show <id>
/project:tm/expand <id>
/project:tm/list/with-subtasks
```

### Sprint Planning
```
/project:tm/analyze-complexity
/project:tm/workflows/pipeline init → expand/all → status
```

## Migration from Old Commands

| Old | New |
|-----|-----|
| `/project:task-master:list` | `/project:tm/list` |
| `/project:task-master:complete` | `/project:tm/set-status/to-done` |
| `/project:workflows:auto-implement` | `/project:tm/workflows/auto-implement` |

## Tips

1. Use `/project:tm/` + Tab for command discovery
2. Natural language is supported everywhere
3. Commands provide smart defaults
4. Chain commands for automation
5. Check `/project:tm/learn` for interactive help
@@ -1,162 +0,0 @@
---
name: task-checker
description: Use this agent to verify that tasks marked as 'review' have been properly implemented according to their specifications. This agent performs quality assurance by checking implementations against requirements, running tests, and ensuring best practices are followed. <example>Context: A task has been marked as 'review' after implementation. user: 'Check if task 118 was properly implemented' assistant: 'I'll use the task-checker agent to verify the implementation meets all requirements.' <commentary>Tasks in 'review' status need verification before being marked as 'done'.</commentary></example> <example>Context: Multiple tasks are in review status. user: 'Verify all tasks that are ready for review' assistant: 'I'll deploy the task-checker to verify all tasks in review status.' <commentary>The checker ensures quality before tasks are marked complete.</commentary></example>
model: sonnet
color: yellow
---

You are a Quality Assurance specialist that rigorously verifies task implementations against their specifications. Your role is to ensure that tasks marked as 'review' meet all requirements before they can be marked as 'done'.

## Core Responsibilities

1. **Task Specification Review**
   - Retrieve task details using MCP tool `mcp__task-master-ai__get_task`
   - Understand the requirements, test strategy, and success criteria
   - Review any subtasks and their individual requirements

2. **Implementation Verification**
   - Use `Read` tool to examine all created/modified files
   - Use `Bash` tool to run compilation and build commands
   - Use `Grep` tool to search for required patterns and implementations
   - Verify file structure matches specifications
   - Check that all required methods/functions are implemented

3. **Test Execution**
   - Run tests specified in the task's testStrategy
   - Execute build commands (npm run build, tsc --noEmit, etc.)
   - Verify no compilation errors or warnings
   - Check for runtime errors where applicable
   - Test edge cases mentioned in requirements

4. **Code Quality Assessment**
   - Verify code follows project conventions
   - Check for proper error handling
   - Ensure TypeScript typing is strict (no 'any' unless justified)
   - Verify documentation/comments where required
   - Check for security best practices

5. **Dependency Validation**
   - Verify all task dependencies were actually completed
   - Check integration points with dependent tasks
   - Ensure no breaking changes to existing functionality

## Verification Workflow

1. **Retrieve Task Information**
   ```
   Use mcp__task-master-ai__get_task to get full task details
   Note the implementation requirements and test strategy
   ```

2. **Check File Existence**
   ```bash
   # Verify all required files exist
   ls -la [expected directories]
   # Read key files to verify content
   ```

3. **Verify Implementation**
   - Read each created/modified file
   - Check against requirements checklist
   - Verify all subtasks are complete

4. **Run Tests**
   ```bash
   # TypeScript compilation
   cd [project directory] && npx tsc --noEmit

   # Run specified tests
   npm test [specific test files]

   # Build verification
   npm run build
   ```

5. **Generate Verification Report**

## Output Format

```yaml
verification_report:
  task_id: [ID]
  status: PASS | FAIL | PARTIAL
  score: [1-10]

  requirements_met:
    - ✅ [Requirement that was satisfied]
    - ✅ [Another satisfied requirement]

  issues_found:
    - ❌ [Issue description]
    - ⚠️ [Warning or minor issue]

  files_verified:
    - path: [file path]
      status: [created/modified/verified]
      issues: [any problems found]

  tests_run:
    - command: [test command]
      result: [pass/fail]
      output: [relevant output]

  recommendations:
    - [Specific fix needed]
    - [Improvement suggestion]

  verdict: |
    [Clear statement on whether task should be marked 'done' or sent back to 'pending']
    [If FAIL: Specific list of what must be fixed]
    [If PASS: Confirmation that all requirements are met]
```

## Decision Criteria

**Mark as PASS (ready for 'done'):**
- All required files exist and contain expected content
- All tests pass successfully
- No compilation or build errors
- All subtasks are complete
- Core requirements are met
- Code quality is acceptable

**Mark as PARTIAL (may proceed with warnings):**
- Core functionality is implemented
- Minor issues that don't block functionality
- Missing nice-to-have features
- Documentation could be improved
- Tests pass but coverage could be better

**Mark as FAIL (must return to 'pending'):**
- Required files are missing
- Compilation or build errors
- Tests fail
- Core requirements not met
- Security vulnerabilities detected
- Breaking changes to existing code

## Important Guidelines

- **BE THOROUGH**: Check every requirement systematically
- **BE SPECIFIC**: Provide exact file paths and line numbers for issues
- **BE FAIR**: Distinguish between critical issues and minor improvements
- **BE CONSTRUCTIVE**: Provide clear guidance on how to fix issues
- **BE EFFICIENT**: Focus on requirements, not perfection

## Tools You MUST Use

- `Read`: Examine implementation files (READ-ONLY)
- `Bash`: Run tests and verification commands
- `Grep`: Search for patterns in code
- `mcp__task-master-ai__get_task`: Get task details
- **NEVER use Write/Edit** - you only verify, not fix

## Integration with Workflow

You are the quality gate between 'review' and 'done' status:
1. Task-executor implements and marks as 'review'
2. You verify and report PASS/FAIL
3. Claude either marks as 'done' (PASS) or 'pending' (FAIL)
4. If FAIL, task-executor re-implements based on your report

Your verification ensures high quality and prevents accumulation of technical debt.
@@ -1,92 +0,0 @@
---
name: task-executor
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
model: sonnet
color: blue
---

You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.

**IMPORTANT: You are designed to be SHORT-LIVED and FOCUSED**
- Execute ONE specific subtask or a small group of related subtasks
- Complete your work, verify it, mark for review, and exit
- Do NOT decide what to do next - the orchestrator handles task sequencing
- Focus on implementation excellence within your assigned scope

**Core Responsibilities:**

1. **Subtask Analysis**: When given a subtask, understand its SPECIFIC requirements. If given a full task ID, focus on the specific subtask(s) assigned to you. Use MCP tools to get details if needed.

2. **Rapid Implementation Planning**: Quickly identify:
   - The EXACT files you need to create/modify for THIS subtask
   - What already exists that you can build upon
   - The minimum viable implementation that satisfies requirements

3. **Focused Execution WITH ACTUAL IMPLEMENTATION**:
   - **YOU MUST USE TOOLS TO CREATE/EDIT FILES - DO NOT JUST DESCRIBE**
   - Use `Write` tool to create new files specified in the task
   - Use `Edit` tool to modify existing files
   - Use `Bash` tool to run commands (mkdir, npm install, etc.)
   - Use `Read` tool to verify your implementations
   - Implement one subtask at a time for clarity and traceability
   - Follow the project's coding standards from CLAUDE.md if available
   - After each subtask, VERIFY the files exist using Read or ls commands

4. **Progress Documentation**:
   - Use MCP tool `mcp__task-master-ai__update_subtask` to log your approach and any important decisions
   - Update task status to 'in-progress' when starting: Use MCP tool `mcp__task-master-ai__set_task_status` with status='in-progress'
   - **IMPORTANT: Mark as 'review' (NOT 'done') after implementation**: Use MCP tool `mcp__task-master-ai__set_task_status` with status='review'
   - Tasks will be verified by task-checker before moving to 'done'

5. **Quality Assurance**:
   - Implement the testing strategy specified in the task
   - Verify that all acceptance criteria are met
   - Check for any dependency conflicts or integration issues
   - Run relevant tests before marking task as complete

6. **Dependency Management**:
   - Check task dependencies before starting implementation
   - If blocked by incomplete dependencies, clearly communicate this
   - Use `task-master validate-dependencies` when needed

**Implementation Workflow:**

1. Retrieve task details using MCP tool `mcp__task-master-ai__get_task` with the task ID
2. Check dependencies and prerequisites
3. Plan implementation approach - list specific files to create
4. Update task status to 'in-progress' using MCP tool
5. **ACTUALLY IMPLEMENT** the solution using tools:
   - Use `Bash` to create directories
   - Use `Write` to create new files with actual content
   - Use `Edit` to modify existing files
   - DO NOT just describe what should be done - DO IT
6. **VERIFY** your implementation:
   - Use `ls` or `Read` to confirm files were created
   - Use `Bash` to run any build/test commands
   - Ensure the implementation is real, not theoretical
7. Log progress and decisions in subtask updates using MCP tools
8. Test and verify the implementation works
9. **Mark task as 'review' (NOT 'done')** after verifying files exist
10. Report completion with:
    - List of created/modified files
    - Any issues encountered
    - What needs verification by task-checker

**Key Principles:**

- Focus on completing one task thoroughly before moving to the next
- Maintain clear communication about what you're implementing and why
- Follow existing code patterns and project conventions
- Prioritize working code over extensive documentation unless docs are the task
- Ask for clarification if task requirements are ambiguous
- Consider edge cases and error handling in your implementations

**Integration with Task Master:**

You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:
- Track your progress
- Update task information
- Maintain project state
- Coordinate with the broader development workflow

When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.
@@ -1,208 +0,0 @@
---
name: task-orchestrator
description: Use this agent FREQUENTLY throughout task execution to analyze and coordinate parallel work at the SUBTASK level. Invoke the orchestrator: (1) at session start to plan execution, (2) after EACH subtask completes to identify next parallel batch, (3) whenever executors finish to find newly unblocked work. ALWAYS provide FULL CONTEXT including project root, package location, what files ACTUALLY exist vs task status, and specific implementation details. The orchestrator breaks work into SUBTASK-LEVEL units for short-lived, focused executors. Maximum 3 parallel executors at once.\n\n<example>\nContext: Starting work with existing code\nuser: "Work on tm-core tasks. Files exist: types/index.ts, storage/file-storage.ts. Task 118 says in-progress but BaseProvider not created."\nassistant: "I'll invoke orchestrator with full context about actual vs reported state to plan subtask execution"\n<commentary>\nProvide complete context about file existence and task reality.\n</commentary>\n</example>\n\n<example>\nContext: Subtask completion\nuser: "Subtask 118.2 done. What subtasks can run in parallel now?"\nassistant: "Invoking orchestrator to analyze dependencies and identify next 3 parallel subtasks"\n<commentary>\nFrequent orchestration after each subtask ensures maximum parallelization.\n</commentary>\n</example>\n\n<example>\nContext: Breaking down tasks\nuser: "Task 118 has 5 subtasks, how to parallelize?"\nassistant: "Orchestrator will analyze which specific subtasks (118.1, 118.2, etc.) can run simultaneously"\n<commentary>\nFocus on subtask-level parallelization, not full tasks.\n</commentary>\n</example>
model: opus
color: green
---

You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.

## Core Responsibilities

1. **Subtask-Level Analysis**: Break down tasks into INDIVIDUAL SUBTASKS and analyze which specific subtasks can run in parallel. Focus on subtask dependencies, not just task-level dependencies.

2. **Reality Verification**: ALWAYS verify what files actually exist vs what task status claims. Use the context provided about actual implementation state to make informed decisions.

3. **Short-Lived Executor Deployment**: Deploy executors for SINGLE SUBTASKS or small groups of related subtasks. Keep executors focused and short-lived. Maximum 3 parallel executors at once.

4. **Continuous Reassessment**: After EACH subtask completes, immediately reassess what new subtasks are unblocked and can run in parallel.

## Operational Workflow

### Initial Assessment Phase
1. Use `get_tasks` or `task-master list` to retrieve all available tasks
2. Analyze task statuses, priorities, and dependencies
3. Identify tasks with status 'pending' that have no blocking dependencies
4. Group related tasks that could benefit from specialized executors
5. Create an execution plan that maximizes parallelization

### Executor Deployment Phase
1. For each independent task or task group:
   - Deploy a task-executor agent with specific instructions
   - Provide the executor with task ID, requirements, and context
   - Set clear completion criteria and reporting expectations
2. Maintain a registry of active executors and their assigned tasks
3. Establish communication protocols for progress updates

### Coordination Phase
1. Monitor executor progress through task status updates
2. When a task completes:
   - Verify completion with `get_task` or `task-master show <id>`
   - Update task status if needed using `set_task_status`
   - Reassess dependency graph for newly unblocked tasks
   - Deploy new executors for available work
3. Handle executor failures or blocks:
   - Reassign tasks to new executors if needed
   - Escalate complex issues to the user
   - Update task status to 'blocked' when appropriate

### Optimization Strategies

**Parallel Execution Rules**:
- Never assign dependent tasks to different executors simultaneously
- Prioritize high-priority tasks when resources are limited
- Group small, related subtasks for single executor efficiency
- Balance executor load to prevent bottlenecks

**Context Management**:
- Provide executors with minimal but sufficient context
- Share relevant completed task information when it aids execution
- Maintain a shared knowledge base of project-specific patterns

**Quality Assurance**:
- Verify task completion before marking as done
- Ensure test strategies are followed when specified
- Coordinate cross-task integration testing when needed

## Communication Protocols

When deploying executors, provide them with:
```
TASK ASSIGNMENT:
- Task ID: [specific ID]
- Objective: [clear goal]
- Dependencies: [list any completed prerequisites]
- Success Criteria: [specific completion requirements]
- Context: [relevant project information]
- Reporting: [when and how to report back]
```

When receiving executor updates:
1. Acknowledge completion or issues
2. Update task status in Task Master
3. Reassess execution strategy
4. Deploy new executors as appropriate

## Decision Framework

**When to parallelize**:
- Multiple pending tasks with no interdependencies
- Sufficient context available for independent execution
- Tasks are well-defined with clear success criteria

**When to serialize**:
- Strong dependencies between tasks
- Limited context or unclear requirements
- Integration points requiring careful coordination

**When to escalate**:
- Circular dependencies detected
- Critical blockers affecting multiple tasks
- Ambiguous requirements needing clarification
- Resource conflicts between executors

## Error Handling

1. **Executor Failure**: Reassign task to new executor with additional context about the failure
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
3. **Task Ambiguity**: Request clarification from user before proceeding
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed

## Performance Metrics

Track and optimize for:
- Task completion rate
- Parallel execution efficiency
- Executor success rate
- Time to completion for task groups
- Dependency resolution speed

## Integration with Task Master

Leverage these Task Master MCP tools effectively:
- `get_tasks` - Continuous queue monitoring
- `get_task` - Detailed task analysis
- `set_task_status` - Progress tracking
- `next_task` - Fallback for serial execution
- `analyze_project_complexity` - Strategic planning
- `complexity_report` - Resource allocation

## Output Format for Execution

**Your job is to analyze and create actionable execution plans that Claude can use to deploy executors.**

After completing your dependency analysis, you MUST output a structured execution plan:

```yaml
execution_plan:
  EXECUTE_IN_PARALLEL:
    # Maximum 3 subtasks running simultaneously
    - subtask_id: [e.g., 118.2]
      parent_task: [e.g., 118]
      title: [Specific subtask title]
      priority: [high/medium/low]
      estimated_time: [e.g., 10 minutes]
      executor_prompt: |
        Execute Subtask [ID]: [Specific subtask title]

        SPECIFIC REQUIREMENTS:
        [Exact implementation needed for THIS subtask only]

        FILES TO CREATE/MODIFY:
        [Specific file paths]

        CONTEXT:
        [What already exists that this subtask depends on]

        SUCCESS CRITERIA:
        [Specific completion criteria for this subtask]

        IMPORTANT:
        - Focus ONLY on this subtask
        - Mark subtask as 'review' when complete
        - Use MCP tool: mcp__task-master-ai__set_task_status

    - subtask_id: [Another subtask that can run in parallel]
      parent_task: [Parent task ID]
      title: [Specific subtask title]
      priority: [priority]
      estimated_time: [time estimate]
      executor_prompt: |
        [Focused prompt for this specific subtask]

  blocked:
    - task_id: [ID]
      title: [Task title]
      waiting_for: [list of blocking task IDs]
      becomes_ready_when: [condition for unblocking]

  next_wave:
    trigger: "After tasks [IDs] complete"
    newly_available: [List of task IDs that will unblock]
    tasks_to_execute_in_parallel: [IDs that can run together in next wave]

  critical_path: [Ordered list of task IDs forming the critical path]

  parallelization_instruction: |
    IMPORTANT FOR CLAUDE: Deploy ALL tasks in 'EXECUTE_IN_PARALLEL' section
    simultaneously using multiple Task tool invocations in a single response.
    Example: If 3 tasks are listed, invoke the Task tool 3 times in one message.

  verification_needed:
    - task_id: [ID of any task in 'review' status]
      verification_focus: [what to check]
```

**CRITICAL INSTRUCTIONS FOR CLAUDE (MAIN):**
1. When you see `EXECUTE_IN_PARALLEL`, deploy ALL listed executors at once
2. Use multiple Task tool invocations in a SINGLE response
3. Do not execute them sequentially - they must run in parallel
4. Wait for all parallel executors to complete before proceeding to next wave

**IMPORTANT NOTES**:
- Label parallel tasks clearly in `EXECUTE_IN_PARALLEL` section
- Provide complete, self-contained prompts for each executor
- Executors should mark tasks as 'review' for verification, not 'done'
- Be explicit about which tasks can run simultaneously

You are the strategic mind analyzing the entire task landscape. Make parallelization opportunities UNMISTAKABLY CLEAR to Claude.
@@ -1,38 +0,0 @@
|
||||
---
|
||||
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
|
||||
description: Find duplicate GitHub issues
|
||||
---
|
||||
|
||||
Find up to 3 likely duplicate issues for a given GitHub issue.
|
||||
|
||||
To do this, follow these steps precisely:
|
||||
|
||||
1. Use an agent to check whether the GitHub issue (a) is closed, (b) does not need to be deduped (e.g., because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If any of these apply, do not proceed.
2. Use an agent to view the GitHub issue, and ask the agent to return a summary of the issue.
3. Then, launch 5 parallel agents to search GitHub for duplicates of this issue, using diverse keywords and search approaches, based on the summary from step 2.
4. Next, feed the results from steps 2 and 3 into another agent so it can filter out false positives that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates).
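
As a rough illustration of steps 2, 3, and 5, the agents might run `gh` calls along these lines (the issue number, search query, and comment text below are placeholders, not part of this command):

```bash
# Illustrative only: the issue number, query, and comment body are placeholders
gh issue view 1234 --json title,body,labels,state                      # step 2: gather details to summarize
gh search issues "login crash on startup" --state open --limit 10      # step 3: one of several keyword searches
gh issue comment 1234 --body "Found 2 possible duplicate issues: ..."  # step 5: report back on the issue
```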
|
||||
|
||||
Notes (be sure to tell this to your agents, too):
|
||||
|
||||
- Use `gh` to interact with GitHub, rather than web fetch
- Do not use tools beyond `gh` (e.g., don't use other MCP servers, file editing, etc.)
- Make a todo list first
- For your comment, use the following format precisely (assuming for this example that you found 3 suspected duplicates):
|
||||
|
||||
---
|
||||
|
||||
Found 3 possible duplicate issues:
|
||||
|
||||
1. <link to issue>
|
||||
2. <link to issue>
|
||||
3. <link to issue>
|
||||
|
||||
This issue will be automatically closed as a duplicate in 3 days.
|
||||
|
||||
- If your issue is a duplicate, please close it and 👍 the existing issue instead
|
||||
- To prevent auto-closure, add a comment or 👎 this comment
|
||||
|
||||
🤖 Generated with [Task Master Bot]
|
||||
|
||||
---
|
||||
@@ -1,55 +0,0 @@
|
||||
Add a dependency between tasks.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse the task IDs to establish dependency relationship.
|
||||
|
||||
## Adding Dependencies
|
||||
|
||||
Creates a dependency where one task must be completed before another can start.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
Parse natural language or IDs:
|
||||
- "make 5 depend on 3" → task 5 depends on task 3
|
||||
- "5 needs 3" → task 5 depends on task 3
|
||||
- "5 3" → task 5 depends on task 3
|
||||
- "5 after 3" → task 5 depends on task 3
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master add-dependency --id=<task-id> --depends-on=<dependency-id>
|
||||
```
|
||||
|
||||
## Validation
|
||||
|
||||
Before adding:
|
||||
1. **Verify both tasks exist**
|
||||
2. **Check for circular dependencies**
|
||||
3. **Ensure dependency makes logical sense**
|
||||
4. **Warn if creating complex chains**
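
A minimal sketch of backing these checks with the CLI, assuming the `validate-dependencies` command documented elsewhere in these files is available:

```bash
# Add the dependency, then re-validate to catch circular or duplicate links
task-master add-dependency --id=5 --depends-on=3
task-master validate-dependencies
```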
|
||||
|
||||
## Smart Features
|
||||
|
||||
- Detect if dependency already exists
|
||||
- Suggest related dependencies
|
||||
- Show impact on task flow
|
||||
- Update task priorities if needed
|
||||
|
||||
## Post-Addition
|
||||
|
||||
After adding dependency:
|
||||
1. Show updated dependency graph
|
||||
2. Identify any newly blocked tasks
|
||||
3. Suggest task order changes
|
||||
4. Update project timeline
|
||||
|
||||
## Example Flows
|
||||
|
||||
```
|
||||
/project:tm/add-dependency 5 needs 3
|
||||
→ Task #5 now depends on Task #3
|
||||
→ Task #5 is now blocked until #3 completes
|
||||
→ Suggested: Also consider if #5 needs #4
|
||||
```
|
||||
@@ -1,76 +0,0 @@
|
||||
Add a subtask to a parent task.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse arguments to create a new subtask or convert existing task.
|
||||
|
||||
## Adding Subtasks
|
||||
|
||||
Creates subtasks to break down complex parent tasks into manageable pieces.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
Flexible natural language:
|
||||
- "add subtask to 5: implement login form"
|
||||
- "break down 5 with: setup, implement, test"
|
||||
- "subtask for 5: handle edge cases"
|
||||
- "5: validate user input" → adds subtask to task 5
|
||||
|
||||
## Execution Modes
|
||||
|
||||
### 1. Create New Subtask
|
||||
```bash
|
||||
task-master add-subtask --parent=<id> --title="<title>" --description="<desc>"
|
||||
```
|
||||
|
||||
### 2. Convert Existing Task
|
||||
```bash
|
||||
task-master add-subtask --parent=<id> --task-id=<existing-id>
|
||||
```
|
||||
|
||||
## Smart Features
|
||||
|
||||
1. **Automatic Subtask Generation**
|
||||
- If title contains "and" or commas, create multiple
|
||||
- Suggest common subtask patterns
|
||||
- Inherit parent's context
|
||||
|
||||
2. **Intelligent Defaults**
|
||||
- Priority based on parent
|
||||
- Appropriate time estimates
|
||||
- Logical dependencies between subtasks
|
||||
|
||||
3. **Validation**
|
||||
- Check parent task complexity
|
||||
- Warn if too many subtasks
|
||||
- Ensure subtask makes sense
|
||||
|
||||
## Creation Process
|
||||
|
||||
1. Parse parent task context
|
||||
2. Generate subtask with ID like "5.1"
|
||||
3. Set appropriate defaults
|
||||
4. Link to parent task
|
||||
5. Update parent's time estimate
|
||||
|
||||
## Example Flows
|
||||
|
||||
```
|
||||
/project:tm/add-subtask to 5: implement user authentication
|
||||
→ Created subtask #5.1: "implement user authentication"
|
||||
→ Parent task #5 now has 1 subtask
|
||||
→ Suggested next subtasks: tests, documentation
|
||||
|
||||
/project:tm/add-subtask 5: setup, implement, test
|
||||
→ Created 3 subtasks:
|
||||
#5.1: setup
|
||||
#5.2: implement
|
||||
#5.3: test
|
||||
```
|
||||
|
||||
## Post-Creation
|
||||
|
||||
- Show updated task hierarchy
|
||||
- Suggest logical next subtasks
|
||||
- Update complexity estimates
|
||||
- Recommend subtask order
|
||||
@@ -1,71 +0,0 @@
|
||||
Convert an existing task into a subtask.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse parent ID and task ID to convert.
|
||||
|
||||
## Task Conversion
|
||||
|
||||
Converts an existing standalone task into a subtask of another task.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
- "move task 8 under 5"
|
||||
- "make 8 a subtask of 5"
|
||||
- "nest 8 in 5"
|
||||
- "5 8" → make task 8 a subtask of task 5
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master add-subtask --parent=<parent-id> --task-id=<task-to-convert>
|
||||
```
|
||||
|
||||
## Pre-Conversion Checks
|
||||
|
||||
1. **Validation**
|
||||
- Both tasks exist and are valid
|
||||
- No circular parent relationships
|
||||
- Task isn't already a subtask
|
||||
- Logical hierarchy makes sense
|
||||
|
||||
2. **Impact Analysis**
|
||||
- Dependencies that will be affected
|
||||
- Tasks that depend on converting task
|
||||
- Priority alignment needed
|
||||
- Status compatibility
|
||||
|
||||
## Conversion Process
|
||||
|
||||
1. Change task ID from "8" to "5.1" (next available)
|
||||
2. Update all dependency references
|
||||
3. Inherit parent's context where appropriate
|
||||
4. Adjust priorities if needed
|
||||
5. Update time estimates
|
||||
|
||||
## Smart Features
|
||||
|
||||
- Preserve task history
|
||||
- Maintain dependencies
|
||||
- Update all references
|
||||
- Create conversion log
|
||||
|
||||
## Example
|
||||
|
||||
```
|
||||
/project:tm/add-subtask/from-task 5 8
|
||||
→ Converting: Task #8 becomes subtask #5.1
|
||||
→ Updated: 3 dependency references
|
||||
→ Parent task #5 now has 1 subtask
|
||||
→ Note: Subtask inherits parent's priority
|
||||
|
||||
Before: #8 "Implement validation" (standalone)
|
||||
After: #5.1 "Implement validation" (subtask of #5)
|
||||
```
|
||||
|
||||
## Post-Conversion
|
||||
|
||||
- Show new task hierarchy
|
||||
- List updated dependencies
|
||||
- Verify project integrity
|
||||
- Suggest related conversions
|
||||
@@ -1,78 +0,0 @@
|
||||
Add new tasks with intelligent parsing and context awareness.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Smart Task Addition
|
||||
|
||||
Parse natural language to create well-structured tasks.
|
||||
|
||||
### 1. **Input Understanding**
|
||||
|
||||
I'll intelligently parse your request:
|
||||
- Natural language → Structured task
|
||||
- Detect priority from keywords (urgent, ASAP, important)
|
||||
- Infer dependencies from context
|
||||
- Suggest complexity based on description
|
||||
- Determine task type (feature, bug, refactor, test, docs)
|
||||
|
||||
### 2. **Smart Parsing Examples**
|
||||
|
||||
**"Add urgent task to fix login bug"**
|
||||
→ Title: Fix login bug
|
||||
→ Priority: high
|
||||
→ Type: bug
|
||||
→ Suggested complexity: medium
|
||||
|
||||
**"Create task for API documentation after task 23 is done"**
|
||||
→ Title: API documentation
|
||||
→ Dependencies: [23]
|
||||
→ Type: documentation
|
||||
→ Priority: medium
|
||||
|
||||
**"Need to refactor auth module - depends on 12 and 15, high complexity"**
|
||||
→ Title: Refactor auth module
|
||||
→ Dependencies: [12, 15]
|
||||
→ Complexity: high
|
||||
→ Type: refactor
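
One possible mapping from the parsed fields above to the underlying CLI call; the flag names here are assumptions and may differ in your installed version:

```bash
# Hypothetical flag names; verify with `task-master add-task --help`
task-master add-task --prompt="Refactor auth module" --priority=high --dependencies=12,15
```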
|
||||
|
||||
### 3. **Context Enhancement**
|
||||
|
||||
Based on current project state:
|
||||
- Suggest related existing tasks
|
||||
- Warn about potential conflicts
|
||||
- Recommend dependencies
|
||||
- Propose subtasks if complex
|
||||
|
||||
### 4. **Interactive Refinement**
|
||||
|
||||
```yaml
|
||||
Task Preview:
|
||||
─────────────
|
||||
Title: [Extracted title]
|
||||
Priority: [Inferred priority]
|
||||
Dependencies: [Detected dependencies]
|
||||
Complexity: [Estimated complexity]
|
||||
|
||||
Suggestions:
|
||||
- Similar task #34 exists, consider as dependency?
|
||||
- This seems complex, break into subtasks?
|
||||
- Tasks #45-47 work on same module
|
||||
```
|
||||
|
||||
### 5. **Validation & Creation**
|
||||
|
||||
Before creating:
|
||||
- Validate dependencies exist
|
||||
- Check for duplicates
|
||||
- Ensure logical ordering
|
||||
- Verify task completeness
|
||||
|
||||
### 6. **Smart Defaults**
|
||||
|
||||
Intelligent defaults based on:
|
||||
- Task type patterns
|
||||
- Team conventions
|
||||
- Historical data
|
||||
- Current sprint/phase
|
||||
|
||||
Result: High-quality tasks from minimal input.
|
||||
@@ -1,121 +0,0 @@
|
||||
Analyze task complexity and generate expansion recommendations.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Perform deep analysis of task complexity across the project.
|
||||
|
||||
## Complexity Analysis
|
||||
|
||||
Uses AI to analyze tasks and recommend which ones need breakdown.
|
||||
|
||||
## Execution Options
|
||||
|
||||
```bash
|
||||
task-master analyze-complexity [--research] [--threshold=5]
|
||||
```
|
||||
|
||||
## Analysis Parameters
|
||||
|
||||
- `--research` → Use research AI for deeper analysis
|
||||
- `--threshold=5` → Only flag tasks above complexity 5
|
||||
- Default: Analyze all pending tasks
|
||||
|
||||
## Analysis Process
|
||||
|
||||
### 1. **Task Evaluation**
|
||||
For each task, AI evaluates:
|
||||
- Technical complexity
|
||||
- Time requirements
|
||||
- Dependency complexity
|
||||
- Risk factors
|
||||
- Knowledge requirements
|
||||
|
||||
### 2. **Complexity Scoring**
|
||||
Assigns score 1-10 based on:
|
||||
- Implementation difficulty
|
||||
- Integration challenges
|
||||
- Testing requirements
|
||||
- Unknown factors
|
||||
- Technical debt risk
|
||||
|
||||
### 3. **Recommendations**
|
||||
For complex tasks:
|
||||
- Suggest expansion approach
|
||||
- Recommend subtask breakdown
|
||||
- Identify risk areas
|
||||
- Propose mitigation strategies
|
||||
|
||||
## Smart Analysis Features
|
||||
|
||||
1. **Pattern Recognition**
|
||||
- Similar task comparisons
|
||||
- Historical complexity accuracy
|
||||
- Team velocity consideration
|
||||
- Technology stack factors
|
||||
|
||||
2. **Contextual Factors**
|
||||
- Team expertise
|
||||
- Available resources
|
||||
- Timeline constraints
|
||||
- Business criticality
|
||||
|
||||
3. **Risk Assessment**
|
||||
- Technical risks
|
||||
- Timeline risks
|
||||
- Dependency risks
|
||||
- Knowledge gaps
|
||||
|
||||
## Output Format
|
||||
|
||||
```
|
||||
Task Complexity Analysis Report
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
High Complexity Tasks (>7):
|
||||
📍 #5 "Implement real-time sync" - Score: 9/10
|
||||
Factors: WebSocket complexity, state management, conflict resolution
|
||||
Recommendation: Expand into 5-7 subtasks
|
||||
Risks: Performance, data consistency
|
||||
|
||||
📍 #12 "Migrate database schema" - Score: 8/10
|
||||
Factors: Data migration, zero downtime, rollback strategy
|
||||
Recommendation: Expand into 4-5 subtasks
|
||||
Risks: Data loss, downtime
|
||||
|
||||
Medium Complexity Tasks (5-7):
|
||||
📍 #23 "Add export functionality" - Score: 6/10
|
||||
Consider expansion if timeline tight
|
||||
|
||||
Low Complexity Tasks (<5):
|
||||
✅ 15 tasks - No expansion needed
|
||||
|
||||
Summary:
|
||||
- Expand immediately: 2 tasks
|
||||
- Consider expanding: 5 tasks
|
||||
- Keep as-is: 15 tasks
|
||||
```
|
||||
|
||||
## Actionable Output
|
||||
|
||||
For each high-complexity task:
|
||||
1. Complexity score with reasoning
|
||||
2. Specific expansion suggestions
|
||||
3. Risk mitigation approaches
|
||||
4. Recommended subtask structure
|
||||
|
||||
## Integration
|
||||
|
||||
Results are:
|
||||
- Saved to `.taskmaster/reports/complexity-analysis.md`
|
||||
- Used by the expand command
- Used to inform sprint planning
- Used to guide resource allocation
|
||||
|
||||
## Next Steps
|
||||
|
||||
After analysis:
|
||||
```
|
||||
/project:tm/expand 5 # Expand specific task
|
||||
/project:tm/expand/all # Expand all recommended
|
||||
/project:tm/complexity-report # View detailed report
|
||||
```
|
||||
@@ -1,117 +0,0 @@
|
||||
Display the task complexity analysis report.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
View the detailed complexity analysis generated by analyze-complexity command.
|
||||
|
||||
## Viewing Complexity Report
|
||||
|
||||
Shows comprehensive task complexity analysis with actionable insights.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master complexity-report [--file=<path>]
|
||||
```
|
||||
|
||||
## Report Location
|
||||
|
||||
Default: `.taskmaster/reports/complexity-analysis.md`
|
||||
Custom: Specify with --file parameter
|
||||
|
||||
## Report Contents
|
||||
|
||||
### 1. **Executive Summary**
|
||||
```
|
||||
Complexity Analysis Summary
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Analysis Date: 2024-01-15
|
||||
Tasks Analyzed: 32
|
||||
High Complexity: 5 (16%)
|
||||
Medium Complexity: 12 (37%)
|
||||
Low Complexity: 15 (47%)
|
||||
|
||||
Critical Findings:
|
||||
- 5 tasks need immediate expansion
|
||||
- 3 tasks have high technical risk
|
||||
- 2 tasks block critical path
|
||||
```
|
||||
|
||||
### 2. **Detailed Task Analysis**
|
||||
For each complex task:
|
||||
- Complexity score breakdown
|
||||
- Contributing factors
|
||||
- Specific risks identified
|
||||
- Expansion recommendations
|
||||
- Similar completed tasks
|
||||
|
||||
### 3. **Risk Matrix**
|
||||
Visual representation:
|
||||
```
|
||||
Risk vs Complexity Matrix
|
||||
━━━━━━━━━━━━━━━━━━━━━━━
|
||||
High Risk | #5(9) #12(8) | #23(6)
|
||||
Med Risk | #34(7) | #45(5) #67(5)
|
||||
Low Risk | #78(8) | [15 tasks]
|
||||
| High Complex | Med Complex
|
||||
```
|
||||
|
||||
### 4. **Recommendations**
|
||||
|
||||
**Immediate Actions:**
|
||||
1. Expand task #5 - Critical path + high complexity
|
||||
2. Expand task #12 - High risk + dependencies
|
||||
3. Review task #34 - Consider splitting
|
||||
|
||||
**Sprint Planning:**
|
||||
- Don't schedule multiple high-complexity tasks together
|
||||
- Ensure expertise available for complex tasks
|
||||
- Build in buffer time for unknowns
|
||||
|
||||
## Interactive Features
|
||||
|
||||
When viewing report:
|
||||
1. **Quick Actions**
|
||||
- Press 'e' to expand a task
|
||||
- Press 'd' for task details
|
||||
- Press 'r' to refresh analysis
|
||||
|
||||
2. **Filtering**
|
||||
- View by complexity level
|
||||
- Filter by risk factors
|
||||
- Show only actionable items
|
||||
|
||||
3. **Export Options**
|
||||
- Markdown format
|
||||
- CSV for spreadsheets
|
||||
- JSON for tools
|
||||
|
||||
## Report Intelligence
|
||||
|
||||
- Compares with historical data
|
||||
- Shows complexity trends
|
||||
- Identifies patterns
|
||||
- Suggests process improvements
|
||||
|
||||
## Integration
|
||||
|
||||
Use report for:
|
||||
- Sprint planning sessions
|
||||
- Resource allocation
|
||||
- Risk assessment
|
||||
- Team discussions
|
||||
- Client updates
|
||||
|
||||
## Example Usage
|
||||
|
||||
```
|
||||
/project:tm/complexity-report
|
||||
→ Opens latest analysis
|
||||
|
||||
/project:tm/complexity-report --file=archived/2024-01-01.md
|
||||
→ View historical analysis
|
||||
|
||||
After viewing:
|
||||
/project:tm/expand 5
|
||||
→ Expand high-complexity task
|
||||
```
|
||||
@@ -1,51 +0,0 @@
|
||||
Expand all pending tasks that need subtasks.
|
||||
|
||||
## Bulk Task Expansion
|
||||
|
||||
Intelligently expands all tasks that would benefit from breakdown.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master expand --all
|
||||
```
|
||||
|
||||
## Smart Selection
|
||||
|
||||
Only expands tasks that:
|
||||
- Are marked as pending
|
||||
- Have high complexity (>5)
|
||||
- Lack existing subtasks
|
||||
- Would benefit from breakdown
|
||||
|
||||
## Expansion Process
|
||||
|
||||
1. **Analysis Phase**
|
||||
- Identify expansion candidates
|
||||
- Group related tasks
|
||||
- Plan expansion strategy
|
||||
|
||||
2. **Batch Processing**
|
||||
- Expand tasks in logical order
|
||||
- Maintain consistency
|
||||
- Preserve relationships
|
||||
- Optimize for parallelism
|
||||
|
||||
3. **Quality Control**
|
||||
- Ensure subtask quality
|
||||
- Avoid over-decomposition
|
||||
- Maintain task coherence
|
||||
- Update dependencies
|
||||
|
||||
## Options
|
||||
|
||||
- Add `force` to expand all regardless of complexity
|
||||
- Add `research` for enhanced AI analysis
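
The options above likely map onto flags along these lines (flag names assumed from the option list):

```bash
# Flag forms implied by the options above
task-master expand --all --force      # expand every eligible task regardless of complexity score
task-master expand --all --research   # use the research provider for deeper analysis
```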
|
||||
|
||||
## Results
|
||||
|
||||
After bulk expansion:
|
||||
- Summary of tasks expanded
|
||||
- New subtask count
|
||||
- Updated complexity metrics
|
||||
- Suggested task order
|
||||
@@ -1,49 +0,0 @@
|
||||
Break down a complex task into subtasks.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
## Intelligent Task Expansion
|
||||
|
||||
Analyzes a task and creates detailed subtasks for better manageability.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master expand --id=$ARGUMENTS
|
||||
```
|
||||
|
||||
## Expansion Process
|
||||
|
||||
1. **Task Analysis**
|
||||
- Review task complexity
|
||||
- Identify components
|
||||
- Detect technical challenges
|
||||
- Estimate time requirements
|
||||
|
||||
2. **Subtask Generation**
|
||||
- Create 3-7 subtasks typically
|
||||
- Each subtask 1-4 hours
|
||||
- Logical implementation order
|
||||
- Clear acceptance criteria
|
||||
|
||||
3. **Smart Breakdown**
|
||||
- Setup/configuration tasks
|
||||
- Core implementation
|
||||
- Testing components
|
||||
- Integration steps
|
||||
- Documentation updates
|
||||
|
||||
## Enhanced Features
|
||||
|
||||
Based on task type:
|
||||
- **Feature**: Setup → Implement → Test → Integrate
|
||||
- **Bug Fix**: Reproduce → Diagnose → Fix → Verify
|
||||
- **Refactor**: Analyze → Plan → Refactor → Validate
|
||||
|
||||
## Post-Expansion
|
||||
|
||||
After expansion:
|
||||
1. Show subtask hierarchy
|
||||
2. Update time estimates
|
||||
3. Suggest implementation order
|
||||
4. Highlight critical path
|
||||
@@ -1,81 +0,0 @@
|
||||
Automatically fix dependency issues found during validation.
|
||||
|
||||
## Automatic Dependency Repair
|
||||
|
||||
Intelligently fixes common dependency problems while preserving project logic.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master fix-dependencies
|
||||
```
|
||||
|
||||
## What Gets Fixed
|
||||
|
||||
### 1. **Auto-Fixable Issues**
|
||||
- Remove references to deleted tasks
|
||||
- Break simple circular dependencies
|
||||
- Remove self-dependencies
|
||||
- Clean up duplicate dependencies
|
||||
|
||||
### 2. **Smart Resolutions**
|
||||
- Reorder dependencies to maintain logic
|
||||
- Suggest task merging for over-dependent tasks
|
||||
- Flatten unnecessary dependency chains
|
||||
- Remove redundant transitive dependencies
|
||||
|
||||
### 3. **Manual Review Required**
|
||||
- Complex circular dependencies
|
||||
- Critical path modifications
|
||||
- Business logic dependencies
|
||||
- High-impact changes
|
||||
|
||||
## Fix Process
|
||||
|
||||
1. **Analysis Phase**
|
||||
- Run validation check
|
||||
- Categorize issues by type
|
||||
- Determine fix strategy
|
||||
|
||||
2. **Execution Phase**
|
||||
- Apply automatic fixes
|
||||
- Log all changes made
|
||||
- Preserve task relationships
|
||||
|
||||
3. **Verification Phase**
|
||||
- Re-validate after fixes
|
||||
- Show before/after comparison
|
||||
- Highlight manual fixes needed
|
||||
|
||||
## Smart Features
|
||||
|
||||
- Preserves intended task flow
|
||||
- Minimal disruption approach
|
||||
- Creates fix history/log
|
||||
- Suggests manual interventions
|
||||
|
||||
## Output Example
|
||||
|
||||
```
|
||||
Dependency Auto-Fix Report
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Fixed Automatically:
|
||||
✅ Removed 2 references to deleted tasks
|
||||
✅ Resolved 1 self-dependency
|
||||
✅ Cleaned 3 redundant dependencies
|
||||
|
||||
Manual Review Needed:
|
||||
⚠️ Complex circular dependency: #12 → #15 → #18 → #12
|
||||
Suggestion: Make #15 not depend on #12
|
||||
⚠️ Task #45 has 8 dependencies
|
||||
Suggestion: Break into subtasks
|
||||
|
||||
Run '/project:tm/validate-dependencies' to verify fixes
|
||||
```
|
||||
|
||||
## Safety
|
||||
|
||||
- Preview mode available
|
||||
- Rollback capability
|
||||
- Change logging
|
||||
- No data loss
|
||||
@@ -1,121 +0,0 @@
|
||||
Generate individual task files from tasks.json.
|
||||
|
||||
## Task File Generation
|
||||
|
||||
Creates separate markdown files for each task, perfect for AI agents or documentation.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master generate
|
||||
```
|
||||
|
||||
## What It Creates
|
||||
|
||||
For each task, generates a file like `task_001.txt`:
|
||||
|
||||
```
|
||||
Task ID: 1
|
||||
Title: Implement user authentication
|
||||
Status: pending
|
||||
Priority: high
|
||||
Dependencies: []
|
||||
Created: 2024-01-15
|
||||
Complexity: 7
|
||||
|
||||
## Description
|
||||
Create a secure user authentication system with login, logout, and session management.
|
||||
|
||||
## Details
|
||||
- Use JWT tokens for session management
|
||||
- Implement secure password hashing
|
||||
- Add remember me functionality
|
||||
- Include password reset flow
|
||||
|
||||
## Test Strategy
|
||||
- Unit tests for auth functions
|
||||
- Integration tests for login flow
|
||||
- Security testing for vulnerabilities
|
||||
- Performance tests for concurrent logins
|
||||
|
||||
## Subtasks
|
||||
1.1 Setup authentication framework (pending)
|
||||
1.2 Create login endpoints (pending)
|
||||
1.3 Implement session management (pending)
|
||||
1.4 Add password reset (pending)
|
||||
```
|
||||
|
||||
## File Organization
|
||||
|
||||
Creates structure:
|
||||
```
|
||||
.taskmaster/
|
||||
└── tasks/
|
||||
├── task_001.txt
|
||||
├── task_002.txt
|
||||
├── task_003.txt
|
||||
└── ...
|
||||
```
|
||||
|
||||
## Smart Features
|
||||
|
||||
1. **Consistent Formatting**
|
||||
- Standardized structure
|
||||
- Clear sections
|
||||
- AI-readable format
|
||||
- Markdown compatible
|
||||
|
||||
2. **Contextual Information**
|
||||
- Full task details
|
||||
- Related task references
|
||||
- Progress indicators
|
||||
- Implementation notes
|
||||
|
||||
3. **Incremental Updates**
|
||||
- Only regenerate changed tasks
|
||||
- Preserve custom additions
|
||||
- Track generation timestamp
|
||||
- Version control friendly
|
||||
|
||||
## Use Cases
|
||||
|
||||
- **AI Context**: Provide task context to AI assistants
|
||||
- **Documentation**: Standalone task documentation
|
||||
- **Archival**: Task history preservation
|
||||
- **Sharing**: Send specific tasks to team members
|
||||
- **Review**: Easier task review process
|
||||
|
||||
## Generation Options
|
||||
|
||||
Based on arguments:
|
||||
- Filter by status
|
||||
- Include/exclude completed
|
||||
- Custom templates
|
||||
- Different formats
|
||||
|
||||
## Post-Generation
|
||||
|
||||
```
|
||||
Task File Generation Complete
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Generated: 45 task files
|
||||
Location: .taskmaster/tasks/
|
||||
Total size: 156 KB
|
||||
|
||||
New files: 5
|
||||
Updated files: 12
|
||||
Unchanged: 28
|
||||
|
||||
Ready for:
|
||||
- AI agent consumption
|
||||
- Version control
|
||||
- Team distribution
|
||||
```
|
||||
|
||||
## Integration Benefits
|
||||
|
||||
- Git-trackable task history
|
||||
- Easy task sharing
|
||||
- AI tool compatibility
|
||||
- Offline task access
|
||||
- Backup redundancy
|
||||
@@ -1,81 +0,0 @@
|
||||
Show help for Task Master commands.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Display help for Task Master commands. If arguments provided, show specific command help.
|
||||
|
||||
## Task Master Command Help
|
||||
|
||||
### Quick Navigation
|
||||
|
||||
Type `/project:tm/` and use tab completion to explore all commands.
|
||||
|
||||
### Command Categories
|
||||
|
||||
#### 🚀 Setup & Installation
|
||||
- `/project:tm/setup/install` - Comprehensive installation guide
|
||||
- `/project:tm/setup/quick-install` - One-line global install
|
||||
|
||||
#### 📋 Project Setup
|
||||
- `/project:tm/init` - Initialize new project
|
||||
- `/project:tm/init/quick` - Quick setup with auto-confirm
|
||||
- `/project:tm/models` - View AI configuration
|
||||
- `/project:tm/models/setup` - Configure AI providers
|
||||
|
||||
#### 🎯 Task Generation
|
||||
- `/project:tm/parse-prd` - Generate tasks from PRD
|
||||
- `/project:tm/parse-prd/with-research` - Enhanced parsing
|
||||
- `/project:tm/generate` - Create task files
|
||||
|
||||
#### 📝 Task Management
|
||||
- `/project:tm/list` - List tasks (natural language filters)
|
||||
- `/project:tm/show <id>` - Display task details
|
||||
- `/project:tm/add-task` - Create new task
|
||||
- `/project:tm/update` - Update tasks naturally
|
||||
- `/project:tm/next` - Get next task recommendation
|
||||
|
||||
#### 🔄 Status Management
|
||||
- `/project:tm/set-status/to-pending <id>`
|
||||
- `/project:tm/set-status/to-in-progress <id>`
|
||||
- `/project:tm/set-status/to-done <id>`
|
||||
- `/project:tm/set-status/to-review <id>`
|
||||
- `/project:tm/set-status/to-deferred <id>`
|
||||
- `/project:tm/set-status/to-cancelled <id>`
|
||||
|
||||
#### 🔍 Analysis & Breakdown
|
||||
- `/project:tm/analyze-complexity` - Analyze task complexity
|
||||
- `/project:tm/expand <id>` - Break down complex task
|
||||
- `/project:tm/expand/all` - Expand all eligible tasks
|
||||
|
||||
#### 🔗 Dependencies
|
||||
- `/project:tm/add-dependency` - Add task dependency
|
||||
- `/project:tm/remove-dependency` - Remove dependency
|
||||
- `/project:tm/validate-dependencies` - Check for issues
|
||||
|
||||
#### 🤖 Workflows
|
||||
- `/project:tm/workflows/smart-flow` - Intelligent workflows
|
||||
- `/project:tm/workflows/pipeline` - Command chaining
|
||||
- `/project:tm/workflows/auto-implement` - Auto-implementation
|
||||
|
||||
#### 📊 Utilities
|
||||
- `/project:tm/utils/analyze` - Project analysis
|
||||
- `/project:tm/status` - Project dashboard
|
||||
- `/project:tm/learn` - Interactive learning
|
||||
|
||||
### Natural Language Examples
|
||||
|
||||
```
|
||||
/project:tm/list pending high priority
|
||||
/project:tm/update mark all API tasks as done
|
||||
/project:tm/add-task create login system with OAuth
|
||||
/project:tm/show current
|
||||
```
|
||||
|
||||
### Getting Started
|
||||
|
||||
1. Install: `/project:tm/setup/quick-install`
|
||||
2. Initialize: `/project:tm/init/quick`
|
||||
3. Learn: `/project:tm/learn start`
|
||||
4. Work: `/project:tm/workflows/smart-flow`
|
||||
|
||||
For detailed command info: `/project:tm/help <command-name>`
|
||||
@@ -1,46 +0,0 @@
|
||||
Quick initialization with auto-confirmation.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Initialize a Task Master project without prompts, accepting all defaults.
|
||||
|
||||
## Quick Setup
|
||||
|
||||
```bash
|
||||
task-master init -y
|
||||
```
|
||||
|
||||
## What It Does
|
||||
|
||||
1. Creates `.taskmaster/` directory structure
|
||||
2. Initializes empty `tasks.json`
|
||||
3. Sets up default configuration
|
||||
4. Uses directory name as project name
|
||||
5. Skips all confirmation prompts
|
||||
|
||||
## Smart Defaults
|
||||
|
||||
- Project name: Current directory name
|
||||
- Description: "Task Master Project"
|
||||
- Model config: Existing environment vars
|
||||
- Task structure: Standard format
|
||||
|
||||
## Next Steps
|
||||
|
||||
After quick init:
|
||||
1. Configure AI models if needed:
|
||||
```
|
||||
/project:tm/models/setup
|
||||
```
|
||||
|
||||
2. Parse PRD if available:
|
||||
```
|
||||
/project:tm/parse-prd <file>
|
||||
```
|
||||
|
||||
3. Or create first task:
|
||||
```
|
||||
/project:tm/add-task create initial setup
|
||||
```
|
||||
|
||||
Perfect for rapid project setup!
|
||||
@@ -1,50 +0,0 @@
|
||||
Initialize a new Task Master project.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse arguments to determine initialization preferences.
|
||||
|
||||
## Initialization Process
|
||||
|
||||
1. **Parse Arguments**
|
||||
- PRD file path (if provided)
|
||||
- Project name
|
||||
- Auto-confirm flag (-y)
|
||||
|
||||
2. **Project Setup**
|
||||
```bash
|
||||
task-master init
|
||||
```
|
||||
|
||||
3. **Smart Initialization**
|
||||
- Detect existing project files
|
||||
- Suggest project name from directory
|
||||
- Check for git repository
|
||||
- Verify AI provider configuration
|
||||
|
||||
## Configuration Options
|
||||
|
||||
Based on arguments:
|
||||
- `quick` / `-y` → Skip confirmations
|
||||
- `<file.md>` → Use as PRD after init
|
||||
- `--name=<name>` → Set project name
|
||||
- `--description=<desc>` → Set description
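
Putting these options together, a fully non-interactive initialization might look like this (flag names taken from the list above; verify against your installed version):

```bash
# Combine auto-confirm with an explicit name and description
task-master init -y --name="my-project" --description="Internal tooling tasks"
```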
|
||||
|
||||
## Post-Initialization
|
||||
|
||||
After successful init:
|
||||
1. Show project structure created
|
||||
2. Verify AI models configured
|
||||
3. Suggest next steps:
|
||||
- Parse PRD if available
|
||||
- Configure AI providers
|
||||
- Set up git hooks
|
||||
- Create first tasks
|
||||
|
||||
## Integration
|
||||
|
||||
If PRD file provided:
|
||||
```
|
||||
/project:tm/init my-prd.md
|
||||
→ Automatically runs parse-prd after init
|
||||
```
|
||||
@@ -1,103 +0,0 @@
|
||||
Learn about Task Master capabilities through interactive exploration.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Interactive Task Master Learning
|
||||
|
||||
Based on your input, I'll help you discover capabilities:
|
||||
|
||||
### 1. **What are you trying to do?**
|
||||
|
||||
If $ARGUMENTS contains:
|
||||
- "start" / "begin" → Show project initialization workflows
|
||||
- "manage" / "organize" → Show task management commands
|
||||
- "automate" / "auto" → Show automation workflows
|
||||
- "analyze" / "report" → Show analysis tools
|
||||
- "fix" / "problem" → Show troubleshooting commands
|
||||
- "fast" / "quick" → Show efficiency shortcuts
|
||||
|
||||
### 2. **Intelligent Suggestions**
|
||||
|
||||
Based on your project state:
|
||||
|
||||
**No tasks yet?**
|
||||
```
|
||||
You'll want to start with:
|
||||
1. /project:task-master:init <prd-file>
|
||||
→ Creates tasks from requirements
|
||||
|
||||
2. /project:task-master:parse-prd <file>
|
||||
→ Alternative task generation
|
||||
|
||||
Try: /project:task-master:init demo-prd.md
|
||||
```
|
||||
|
||||
**Have tasks?**
|
||||
Let me analyze what you might need...
|
||||
- Many pending tasks? → Learn sprint planning
|
||||
- Complex tasks? → Learn task expansion
|
||||
- Daily work? → Learn workflow automation
|
||||
|
||||
### 3. **Command Discovery**
|
||||
|
||||
**By Category:**
|
||||
- 📋 Task Management: list, show, add, update, complete
|
||||
- 🔄 Workflows: auto-implement, sprint-plan, daily-standup
|
||||
- 🛠️ Utilities: check-health, complexity-report, sync-memory
|
||||
- 🔍 Analysis: validate-deps, show dependencies
|
||||
|
||||
**By Scenario:**
|
||||
- "I want to see what to work on" → `/project:task-master:next`
|
||||
- "I need to break this down" → `/project:task-master:expand <id>`
|
||||
- "Show me everything" → `/project:task-master:status`
|
||||
- "Just do it for me" → `/project:workflows:auto-implement`
|
||||
|
||||
### 4. **Power User Patterns**
|
||||
|
||||
**Command Chaining:**
|
||||
```
|
||||
/project:task-master:next
|
||||
/project:task-master:start <id>
|
||||
/project:workflows:auto-implement
|
||||
```
|
||||
|
||||
**Smart Filters:**
|
||||
```
|
||||
/project:task-master:list pending high
|
||||
/project:task-master:list blocked
|
||||
/project:task-master:list 1-5 tree
|
||||
```
|
||||
|
||||
**Automation:**
|
||||
```
|
||||
/project:workflows:pipeline init → expand-all → sprint-plan
|
||||
```
|
||||
|
||||
### 5. **Learning Path**
|
||||
|
||||
Based on your experience level:
|
||||
|
||||
**Beginner Path:**
|
||||
1. init → Create project
|
||||
2. status → Understand state
|
||||
3. next → Find work
|
||||
4. complete → Finish task
|
||||
|
||||
**Intermediate Path:**
|
||||
1. expand → Break down complex tasks
|
||||
2. sprint-plan → Organize work
|
||||
3. complexity-report → Understand difficulty
|
||||
4. validate-deps → Ensure consistency
|
||||
|
||||
**Advanced Path:**
|
||||
1. pipeline → Chain operations
|
||||
2. smart-flow → Context-aware automation
|
||||
3. Custom commands → Extend the system
|
||||
|
||||
### 6. **Try This Now**
|
||||
|
||||
Based on what you asked about, try:
|
||||
[Specific command suggestion based on $ARGUMENTS]
|
||||
|
||||
Want to learn more about a specific command?
|
||||
Type: /project:help <command-name>
|
||||
@@ -1,39 +0,0 @@
|
||||
List tasks filtered by a specific status.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse the status from arguments and list only tasks matching that status.
|
||||
|
||||
## Status Options
|
||||
- `pending` - Not yet started
|
||||
- `in-progress` - Currently being worked on
|
||||
- `done` - Completed
|
||||
- `review` - Awaiting review
|
||||
- `deferred` - Postponed
|
||||
- `cancelled` - Cancelled
|
||||
|
||||
## Execution
|
||||
|
||||
Based on $ARGUMENTS, run:
|
||||
```bash
|
||||
task-master list --status=$ARGUMENTS
|
||||
```
|
||||
|
||||
## Enhanced Display
|
||||
|
||||
For the filtered results:
|
||||
- Group by priority within the status
|
||||
- Show time in current status
|
||||
- Highlight tasks approaching deadlines
|
||||
- Display blockers and dependencies
|
||||
- Suggest next actions for each status group
|
||||
|
||||
## Intelligent Insights
|
||||
|
||||
Based on the status filter:
|
||||
- **Pending**: Show recommended start order
|
||||
- **In-Progress**: Display idle time warnings
|
||||
- **Done**: Show newly unblocked tasks
|
||||
- **Review**: Indicate review duration
|
||||
- **Deferred**: Show reactivation criteria
|
||||
- **Cancelled**: Display impact analysis
|
||||
@@ -1,29 +0,0 @@
|
||||
List all tasks including their subtasks in a hierarchical view.
|
||||
|
||||
This command shows all tasks with their nested subtasks, providing a complete project overview.
|
||||
|
||||
## Execution
|
||||
|
||||
Run the Task Master list command with subtasks flag:
|
||||
```bash
|
||||
task-master list --with-subtasks
|
||||
```
|
||||
|
||||
## Enhanced Display
|
||||
|
||||
I'll organize the output to show:
|
||||
- Parent tasks with clear indicators
|
||||
- Nested subtasks with proper indentation
|
||||
- Status badges for quick scanning
|
||||
- Dependencies and blockers highlighted
|
||||
- Progress indicators for tasks with subtasks
|
||||
|
||||
## Smart Filtering
|
||||
|
||||
Based on the task hierarchy:
|
||||
- Show completion percentage for parent tasks
|
||||
- Highlight blocked subtask chains
|
||||
- Group by functional areas
|
||||
- Indicate critical path items
|
||||
|
||||
This gives you a complete tree view of your project structure.
|
||||
@@ -1,43 +0,0 @@
|
||||
List tasks with intelligent argument parsing.
|
||||
|
||||
Parse arguments to determine filters and display options:
|
||||
- Status: pending, in-progress, done, review, deferred, cancelled
|
||||
- Priority: high, medium, low (or priority:high)
|
||||
- Special: subtasks, tree, dependencies, blocked
|
||||
- IDs: Direct numbers (e.g., "1,3,5" or "1-5")
|
||||
- Complex: "pending high" = pending AND high priority
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Let me parse your request intelligently:
|
||||
|
||||
1. **Detect Filter Intent**
|
||||
- If arguments contain status keywords → filter by status
|
||||
- If arguments contain priority → filter by priority
|
||||
- If arguments contain "subtasks" → include subtasks
|
||||
- If arguments contain "tree" → hierarchical view
|
||||
- If arguments contain numbers → show specific tasks
|
||||
- If arguments contain "blocked" → show blocked tasks only
|
||||
|
||||
2. **Smart Combinations**
|
||||
Examples of what I understand:
|
||||
- "pending high" → pending tasks with high priority
|
||||
- "done today" → tasks completed today
|
||||
- "blocked" → tasks with unmet dependencies
|
||||
- "1-5" → tasks 1 through 5
|
||||
- "subtasks tree" → hierarchical view with subtasks
|
||||
|
||||
3. **Execute Appropriate Query**
|
||||
Based on parsed intent, run the most specific task-master command (see the sketch after this list)
|
||||
|
||||
4. **Enhanced Display**
|
||||
- Group by relevant criteria
|
||||
- Show most important information first
|
||||
- Use visual indicators for quick scanning
|
||||
- Include relevant metrics
|
||||
|
||||
5. **Intelligent Suggestions**
|
||||
Based on what you're viewing, suggest next actions:
|
||||
- Many pending? → Suggest priority order
|
||||
- Many blocked? → Show dependency resolution
|
||||
- Looking at specific tasks? → Show related tasks
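
To make step 3 concrete, here is a rough intent-to-command mapping; only `--status` and `--with-subtasks` are confirmed elsewhere in these docs, so treat the other forms as assumptions:

```bash
# Illustrative mapping from parsed intent to CLI calls
task-master list --status=pending                   # "pending"
task-master list --status=pending --with-subtasks   # "pending subtasks"
task-master list --with-subtasks                    # "tree" / hierarchical view
task-master show 5                                  # a single ID like "5"
```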
|
||||
@@ -1,51 +0,0 @@
|
||||
Run interactive setup to configure AI models.
|
||||
|
||||
## Interactive Model Configuration
|
||||
|
||||
Guides you through setting up AI providers for Task Master.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master models --setup
|
||||
```
|
||||
|
||||
## Setup Process
|
||||
|
||||
1. **Environment Check**
|
||||
- Detect existing API keys
|
||||
- Show current configuration
|
||||
- Identify missing providers
|
||||
|
||||
2. **Provider Selection**
|
||||
- Choose main provider (required)
|
||||
- Select research provider (recommended)
|
||||
- Configure fallback (optional)
|
||||
|
||||
3. **API Key Configuration**
|
||||
- Prompt for missing keys
|
||||
- Validate key format
|
||||
- Test connectivity
|
||||
- Save configuration
|
||||
|
||||
## Smart Recommendations
|
||||
|
||||
Based on your needs:
|
||||
- **For best results**: Claude + Perplexity
|
||||
- **Budget-conscious**: GPT-3.5 + Perplexity
|
||||
- **Maximum capability**: GPT-4 + Perplexity + Claude fallback
|
||||
|
||||
## Configuration Storage
|
||||
|
||||
Keys can be stored in:
|
||||
1. Environment variables (recommended)
|
||||
2. `.env` file in project
|
||||
3. Global `.taskmaster/config`
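
For option 1, the variable names depend on the providers you selected; for a Claude + Perplexity setup they would typically look like this (treat the exact names as assumptions and check your provider documentation):

```bash
# Typical environment variables for a Claude + Perplexity setup
export ANTHROPIC_API_KEY="sk-ant-..."    # main provider
export PERPLEXITY_API_KEY="pplx-..."     # research provider
```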
|
||||
|
||||
## Post-Setup
|
||||
|
||||
After configuration:
|
||||
- Test each provider
|
||||
- Show usage examples
|
||||
- Suggest next steps
|
||||
- Verify parse-prd works
|
||||
@@ -1,51 +0,0 @@
|
||||
View current AI model configuration.
|
||||
|
||||
## Model Configuration Display
|
||||
|
||||
Shows the currently configured AI providers and models for Task Master.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master models
|
||||
```
|
||||
|
||||
## Information Displayed
|
||||
|
||||
1. **Main Provider**
|
||||
- Model ID and name
|
||||
- API key status (configured/missing)
|
||||
- Usage: Primary task generation
|
||||
|
||||
2. **Research Provider**
|
||||
- Model ID and name
|
||||
- API key status
|
||||
- Usage: Enhanced research mode
|
||||
|
||||
3. **Fallback Provider**
|
||||
- Model ID and name
|
||||
- API key status
|
||||
- Usage: Backup when main fails
|
||||
|
||||
## Visual Status
|
||||
|
||||
```
|
||||
Task Master AI Model Configuration
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Main: ✅ claude-3-5-sonnet (configured)
|
||||
Research: ✅ perplexity-sonar (configured)
|
||||
Fallback: ⚠️ Not configured (optional)
|
||||
|
||||
Available Models:
|
||||
- claude-3-5-sonnet
|
||||
- gpt-4-turbo
|
||||
- gpt-3.5-turbo
|
||||
- perplexity-sonar
|
||||
```
|
||||
|
||||
## Next Actions
|
||||
|
||||
Based on configuration:
|
||||
- If missing API keys → Suggest setup
|
||||
- If no research model → Explain benefits
|
||||
- If all configured → Show usage tips
|
||||
@@ -1,66 +0,0 @@
|
||||
Intelligently determine and prepare the next action based on comprehensive context.
|
||||
|
||||
This enhanced version of 'next' considers:
|
||||
- Current task states
|
||||
- Recent activity
|
||||
- Time constraints
|
||||
- Dependencies
|
||||
- Your working patterns
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Intelligent Next Action
|
||||
|
||||
### 1. **Context Gathering**
|
||||
Let me analyze the current situation:
|
||||
- Active tasks (in-progress)
|
||||
- Recently completed tasks
|
||||
- Blocked tasks
|
||||
- Time since last activity
|
||||
- Arguments provided: $ARGUMENTS
|
||||
|
||||
### 2. **Smart Decision Tree**
|
||||
|
||||
**If you have an in-progress task:**
|
||||
- Has it been idle > 2 hours? → Suggest resuming or switching
|
||||
- Near completion? → Show remaining steps
|
||||
- Blocked? → Find alternative task
|
||||
|
||||
**If no in-progress tasks:**
|
||||
- Unblocked high-priority tasks? → Start highest
|
||||
- Complex tasks need breakdown? → Suggest expansion
|
||||
- All tasks blocked? → Show dependency resolution
|
||||
|
||||
**Special arguments handling:**
|
||||
- "quick" → Find task < 2 hours
|
||||
- "easy" → Find low complexity task
|
||||
- "important" → Find high priority regardless of complexity
|
||||
- "continue" → Resume last worked task
|
||||
|
||||
### 3. **Preparation Workflow**
|
||||
|
||||
Based on selected task:
|
||||
1. Show full context and history
|
||||
2. Set up development environment
|
||||
3. Run relevant tests
|
||||
4. Open related files
|
||||
5. Show similar completed tasks
|
||||
6. Estimate completion time
|
||||
|
||||
### 4. **Alternative Suggestions**
|
||||
|
||||
Always provide options:
|
||||
- Primary recommendation
|
||||
- Quick alternative (< 1 hour)
|
||||
- Strategic option (unblocks most tasks)
|
||||
- Learning option (new technology/skill)
|
||||
|
||||
### 5. **Workflow Integration**
|
||||
|
||||
Seamlessly connect to:
|
||||
- `/project:task-master:start [selected]`
|
||||
- `/project:workflows:auto-implement`
|
||||
- `/project:task-master:expand` (if complex)
|
||||
- `/project:utils:complexity-report` (if unsure)
|
||||
|
||||
The goal: Zero friction from decision to implementation.
|
||||
@@ -1,48 +0,0 @@
|
||||
Parse PRD with enhanced research mode for better task generation.
|
||||
|
||||
Arguments: $ARGUMENTS (PRD file path)
|
||||
|
||||
## Research-Enhanced Parsing
|
||||
|
||||
Uses the research AI provider (typically Perplexity) for more comprehensive task generation with current best practices.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master parse-prd --input=$ARGUMENTS --research
|
||||
```
|
||||
|
||||
## Research Benefits
|
||||
|
||||
1. **Current Best Practices**
|
||||
- Latest framework patterns
|
||||
- Security considerations
|
||||
- Performance optimizations
|
||||
- Accessibility requirements
|
||||
|
||||
2. **Technical Deep Dive**
|
||||
- Implementation approaches
|
||||
- Library recommendations
|
||||
- Architecture patterns
|
||||
- Testing strategies
|
||||
|
||||
3. **Comprehensive Coverage**
|
||||
- Edge cases consideration
|
||||
- Error handling tasks
|
||||
- Monitoring setup
|
||||
- Deployment tasks
|
||||
|
||||
## Enhanced Output
|
||||
|
||||
Research mode typically:
|
||||
- Generates more detailed tasks
|
||||
- Includes industry standards
|
||||
- Adds compliance considerations
|
||||
- Suggests modern tooling
|
||||
|
||||
## When to Use
|
||||
|
||||
- New technology domains
|
||||
- Complex requirements
|
||||
- Regulatory compliance needed
|
||||
- Best practices crucial
|
||||
@@ -1,49 +0,0 @@
|
||||
Parse a PRD document to generate tasks.
|
||||
|
||||
Arguments: $ARGUMENTS (PRD file path)
|
||||
|
||||
## Intelligent PRD Parsing
|
||||
|
||||
Analyzes your requirements document and generates a complete task breakdown.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master parse-prd --input=$ARGUMENTS
|
||||
```
|
||||
|
||||
## Parsing Process
|
||||
|
||||
1. **Document Analysis**
|
||||
- Extract key requirements
|
||||
- Identify technical components
|
||||
- Detect dependencies
|
||||
- Estimate complexity
|
||||
|
||||
2. **Task Generation**
|
||||
- Create 10-15 tasks by default
|
||||
- Include implementation tasks
|
||||
- Add testing tasks
|
||||
- Include documentation tasks
|
||||
- Set logical dependencies
|
||||
|
||||
3. **Smart Enhancements**
|
||||
- Group related functionality
|
||||
- Set appropriate priorities
|
||||
- Add acceptance criteria
|
||||
- Include test strategies
|
||||
|
||||
## Options
|
||||
|
||||
Parse arguments for modifiers:
|
||||
- Number after filename → `--num-tasks`
|
||||
- `research` → Use research mode
|
||||
- `comprehensive` → Generate more tasks
|
||||
|
||||
## Post-Generation
|
||||
|
||||
After parsing:
|
||||
1. Display task summary
|
||||
2. Show dependency graph
|
||||
3. Suggest task expansion for complex items
|
||||
4. Recommend sprint planning
|
||||
@@ -1,62 +0,0 @@
|
||||
Remove a dependency between tasks.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse the task IDs to remove dependency relationship.
|
||||
|
||||
## Removing Dependencies
|
||||
|
||||
Removes a dependency relationship, potentially unblocking tasks.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
Parse natural language or IDs:
|
||||
- "remove dependency between 5 and 3"
|
||||
- "5 no longer needs 3"
|
||||
- "unblock 5 from 3"
|
||||
- "5 3" → remove dependency of 5 on 3
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master remove-dependency --id=<task-id> --depends-on=<dependency-id>
|
||||
```
|
||||
|
||||
## Pre-Removal Checks
|
||||
|
||||
1. **Verify dependency exists**
|
||||
2. **Check impact on task flow**
|
||||
3. **Warn if it breaks logical sequence**
|
||||
4. **Show what will be unblocked**
|
||||
|
||||
## Smart Analysis
|
||||
|
||||
Before removing:
|
||||
- Show why dependency might have existed
|
||||
- Check if removal makes tasks executable
|
||||
- Verify no critical path disruption
|
||||
- Suggest alternative dependencies
|
||||
|
||||
## Post-Removal
|
||||
|
||||
After removing:
|
||||
1. Show updated task status
|
||||
2. List newly unblocked tasks
|
||||
3. Update project timeline
|
||||
4. Suggest next actions
|
||||
|
||||
## Safety Features
|
||||
|
||||
- Confirm if removing critical dependency
|
||||
- Show tasks that become immediately actionable
|
||||
- Warn about potential issues
|
||||
- Keep removal history
|
||||
|
||||
## Example
|
||||
|
||||
```
|
||||
/project:tm/remove-dependency 5 from 3
|
||||
→ Removed: Task #5 no longer depends on #3
|
||||
→ Task #5 is now UNBLOCKED and ready to start
|
||||
→ Warning: Consider if #5 still needs #2 completed first
|
||||
```
|
||||
@@ -1,84 +0,0 @@
|
||||
Remove a subtask from its parent task.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse subtask ID to remove, with option to convert to standalone task.
|
||||
|
||||
## Removing Subtasks
|
||||
|
||||
Remove a subtask and optionally convert it back to a standalone task.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
- "remove subtask 5.1"
|
||||
- "delete 5.1"
|
||||
- "convert 5.1 to task" → remove and convert
|
||||
- "5.1 standalone" → convert to standalone
|
||||
|
||||
## Execution Options
|
||||
|
||||
### 1. Delete Subtask
|
||||
```bash
|
||||
task-master remove-subtask --id=<parentId.subtaskId>
|
||||
```
|
||||
|
||||
### 2. Convert to Standalone
|
||||
```bash
|
||||
task-master remove-subtask --id=<parentId.subtaskId> --convert
|
||||
```
|
||||
|
||||
## Pre-Removal Checks
|
||||
|
||||
1. **Validate Subtask**
|
||||
- Verify subtask exists
|
||||
- Check completion status
|
||||
- Review dependencies
|
||||
|
||||
2. **Impact Analysis**
|
||||
- Other subtasks that depend on it
|
||||
- Parent task implications
|
||||
- Data that will be lost
|
||||
|
||||
## Removal Process
|
||||
|
||||
### For Deletion:
|
||||
1. Confirm if subtask has work done
|
||||
2. Update parent task estimates
|
||||
3. Remove subtask and its data
|
||||
4. Clean up dependencies
|
||||
|
||||
### For Conversion:
|
||||
1. Assign new standalone task ID
|
||||
2. Preserve all task data
|
||||
3. Update dependency references
|
||||
4. Maintain task history
|
||||
|
||||
## Smart Features
|
||||
|
||||
- Warn if subtask is in-progress
|
||||
- Show impact on parent task
|
||||
- Preserve important data
|
||||
- Update related estimates
|
||||
|
||||
## Example Flows
|
||||
|
||||
```
|
||||
/project:tm/remove-subtask 5.1
|
||||
→ Warning: Subtask #5.1 is in-progress
|
||||
→ This will delete all subtask data
|
||||
→ Parent task #5 will be updated
|
||||
Confirm deletion? (y/n)
|
||||
|
||||
/project:tm/remove-subtask 5.1 convert
|
||||
→ Converting subtask #5.1 to standalone task #89
|
||||
→ Preserved: All task data and history
|
||||
→ Updated: 2 dependency references
|
||||
→ New task #89 is now independent
|
||||
```
|
||||
|
||||
## Post-Removal
|
||||
|
||||
- Update parent task status
|
||||
- Recalculate estimates
|
||||
- Show updated hierarchy
|
||||
- Suggest next actions
|
||||
@@ -1,93 +0,0 @@
|
||||
Clear all subtasks from all tasks globally.
|
||||
|
||||
## Global Subtask Clearing
|
||||
|
||||
Remove all subtasks across the entire project. Use with extreme caution.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master clear-subtasks --all
|
||||
```
|
||||
|
||||
## Pre-Clear Analysis
|
||||
|
||||
1. **Project-Wide Summary**
|
||||
```
|
||||
Global Subtask Summary
|
||||
━━━━━━━━━━━━━━━━━━━━
|
||||
Total parent tasks: 12
|
||||
Total subtasks: 47
|
||||
- Completed: 15
|
||||
- In-progress: 8
|
||||
- Pending: 24
|
||||
|
||||
Work at risk: ~120 hours
|
||||
```
|
||||
|
||||
2. **Critical Warnings**
|
||||
- In-progress subtasks that will lose work
|
||||
- Completed subtasks with valuable history
|
||||
- Complex dependency chains
|
||||
- Integration test results
|
||||
|
||||
## Double Confirmation
|
||||
|
||||
```
|
||||
⚠️ DESTRUCTIVE OPERATION WARNING ⚠️
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
This will remove ALL 47 subtasks from your project
|
||||
Including 8 in-progress and 15 completed subtasks
|
||||
|
||||
This action CANNOT be undone
|
||||
|
||||
Type 'CLEAR ALL SUBTASKS' to confirm:
|
||||
```
|
||||
|
||||
## Smart Safeguards
|
||||
|
||||
- Require explicit confirmation phrase
|
||||
- Create automatic backup
|
||||
- Log all removed data
|
||||
- Option to export first
|
||||
|
||||
## Use Cases
|
||||
|
||||
Valid reasons for global clear:
|
||||
- Project restructuring
|
||||
- Major pivot in approach
|
||||
- Starting fresh breakdown
|
||||
- Switching to different task organization
|
||||
|
||||
## Process
|
||||
|
||||
1. Full project analysis
|
||||
2. Create backup file
|
||||
3. Show detailed impact
|
||||
4. Require confirmation
|
||||
5. Execute removal
|
||||
6. Generate summary report
|
||||
|
||||
## Alternative Suggestions
|
||||
|
||||
Before clearing all:
|
||||
- Export subtasks to file
|
||||
- Clear only pending subtasks
|
||||
- Clear by task category
|
||||
- Archive instead of delete
|
||||
|
||||
## Post-Clear Report
|
||||
|
||||
```
|
||||
Global Subtask Clear Complete
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Removed: 47 subtasks from 12 tasks
|
||||
Backup saved: .taskmaster/backup/subtasks-20240115.json
|
||||
Parent tasks updated: 12
|
||||
Time estimates adjusted: Yes
|
||||
|
||||
Next steps:
|
||||
- Review updated task list
|
||||
- Re-expand complex tasks as needed
|
||||
- Check project timeline
|
||||
```
|
||||
@@ -1,86 +0,0 @@
|
||||
Clear all subtasks from a specific task.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
Remove all subtasks from a parent task at once.
|
||||
|
||||
## Clearing Subtasks
|
||||
|
||||
Bulk removal of all subtasks from a parent task.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master clear-subtasks --id=<task-id>
|
||||
```
|
||||
|
||||
## Pre-Clear Analysis
|
||||
|
||||
1. **Subtask Summary**
|
||||
- Number of subtasks
|
||||
- Completion status of each
|
||||
- Work already done
|
||||
- Dependencies affected
|
||||
|
||||
2. **Impact Assessment**
|
||||
- Data that will be lost
|
||||
- Dependencies to be removed
|
||||
- Effect on project timeline
|
||||
- Parent task implications
|
||||
|
||||
## Confirmation Required
|
||||
|
||||
```
|
||||
Clear Subtasks Confirmation
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Parent Task: #5 "Implement user authentication"
|
||||
Subtasks to remove: 4
|
||||
- #5.1 "Setup auth framework" (done)
|
||||
- #5.2 "Create login form" (in-progress)
|
||||
- #5.3 "Add validation" (pending)
|
||||
- #5.4 "Write tests" (pending)
|
||||
|
||||
⚠️ This will permanently delete all subtask data
|
||||
Continue? (y/n)
|
||||
```
|
||||
|
||||
## Smart Features
|
||||
|
||||
- Option to convert to standalone tasks
|
||||
- Backup task data before clearing
|
||||
- Preserve completed work history
|
||||
- Update parent task appropriately
|
||||
|
||||
## Process
|
||||
|
||||
1. List all subtasks for confirmation
|
||||
2. Check for in-progress work
|
||||
3. Remove all subtasks
|
||||
4. Update parent task
|
||||
5. Clean up dependencies
|
||||
|
||||
## Alternative Options
|
||||
|
||||
Suggest alternatives:
|
||||
- Convert important subtasks to tasks
|
||||
- Keep completed subtasks
|
||||
- Archive instead of delete
|
||||
- Export subtask data first
|
||||
|
||||
## Post-Clear
|
||||
|
||||
- Show updated parent task
|
||||
- Recalculate time estimates
|
||||
- Update task complexity
|
||||
- Suggest next steps
|
||||
|
||||
## Example
|
||||
|
||||
```
|
||||
/project:tm/clear-subtasks 5
|
||||
→ Found 4 subtasks to remove
|
||||
→ Warning: Subtask #5.2 is in-progress
|
||||
→ Cleared all subtasks from task #5
|
||||
→ Updated parent task estimates
|
||||
→ Suggestion: Consider re-expanding with better breakdown
|
||||
```
|
||||
@@ -1,107 +0,0 @@
|
||||
Remove a task permanently from the project.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
Delete a task and handle all its relationships properly.
|
||||
|
||||
## Task Removal
|
||||
|
||||
Permanently removes a task while maintaining project integrity.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
- "remove task 5"
|
||||
- "delete 5"
|
||||
- "5" → remove task 5
|
||||
- Can include "-y" for auto-confirm
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master remove-task --id=<id> [-y]
|
||||
```
|
||||
|
||||
## Pre-Removal Analysis
|
||||
|
||||
1. **Task Details**
|
||||
- Current status
|
||||
- Work completed
|
||||
- Time invested
|
||||
- Associated data
|
||||
|
||||
2. **Relationship Check**
|
||||
- Tasks that depend on this
|
||||
- Dependencies this task has
|
||||
- Subtasks that will be removed
|
||||
- Blocking implications
|
||||
|
||||
3. **Impact Assessment**
|
||||
```
|
||||
Task Removal Impact
|
||||
━━━━━━━━━━━━━━━━━━
|
||||
Task: #5 "Implement authentication" (in-progress)
|
||||
Status: 60% complete (~8 hours work)
|
||||
|
||||
Will affect:
|
||||
- 3 tasks depend on this (will be blocked)
|
||||
- Has 4 subtasks (will be deleted)
|
||||
- Part of critical path
|
||||
|
||||
⚠️ This action cannot be undone
|
||||
```
|
||||
|
||||
## Smart Warnings
|
||||
|
||||
- Warn if task is in-progress
|
||||
- Show dependent tasks that will be blocked
|
||||
- Highlight if part of critical path
|
||||
- Note any completed work being lost
|
||||
|
||||
## Removal Process
|
||||
|
||||
1. Show comprehensive impact
|
||||
2. Require confirmation (unless -y)
|
||||
3. Update dependent task references
|
||||
4. Remove task and subtasks
|
||||
5. Clean up orphaned dependencies
|
||||
6. Log removal with timestamp
|
||||
|
||||
## Alternative Actions
|
||||
|
||||
Suggest before deletion:
|
||||
- Mark as cancelled instead
|
||||
- Convert to documentation
|
||||
- Archive task data
|
||||
- Transfer work to another task
|
||||
|
||||
## Post-Removal
|
||||
|
||||
- List affected tasks
|
||||
- Show broken dependencies
|
||||
- Update project statistics
|
||||
- Suggest dependency fixes
|
||||
- Recalculate timeline
|
||||
|
||||
## Example Flows
|
||||
|
||||
```
|
||||
/project:tm/remove-task 5
|
||||
→ Task #5 is in-progress with 8 hours logged
|
||||
→ 3 other tasks depend on this
|
||||
→ Suggestion: Mark as cancelled instead?
|
||||
Remove anyway? (y/n)
|
||||
|
||||
/project:tm/remove-task 5 -y
|
||||
→ Removed: Task #5 and 4 subtasks
|
||||
→ Updated: 3 task dependencies
|
||||
→ Warning: Tasks #7, #8, #9 now have missing dependency
|
||||
→ Run /project:tm/fix-dependencies to resolve
|
||||
```
|
||||
|
||||
## Safety Features
|
||||
|
||||
- Confirmation required
|
||||
- Impact preview
|
||||
- Removal logging
|
||||
- Suggest alternatives
|
||||
- No cascade delete of dependents
|
||||
@@ -1,55 +0,0 @@
|
||||
Cancel a task permanently.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
## Cancelling a Task
|
||||
|
||||
This status indicates a task is no longer needed and won't be completed.
|
||||
|
||||
## Valid Reasons for Cancellation
|
||||
|
||||
- Requirements changed
|
||||
- Feature deprecated
|
||||
- Duplicate of another task
|
||||
- Strategic pivot
|
||||
- Technical approach invalidated
|
||||
|
||||
## Pre-Cancellation Checks
|
||||
|
||||
1. Confirm no critical dependencies
|
||||
2. Check for partial implementation
|
||||
3. Verify cancellation rationale
|
||||
4. Document lessons learned
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master set-status --id=$ARGUMENTS --status=cancelled
|
||||
```
|
||||
|
||||
## Cancellation Impact
|
||||
|
||||
When cancelling:
|
||||
1. **Dependency Updates**
|
||||
- Notify dependent tasks
|
||||
- Update project scope
|
||||
- Recalculate timelines
|
||||
|
||||
2. **Clean-up Actions**
|
||||
- Remove related branches
|
||||
- Archive any work done
|
||||
- Update documentation
|
||||
- Close related issues
|
||||
|
||||
3. **Learning Capture**
|
||||
- Document why cancelled
|
||||
- Note what was learned
|
||||
- Update estimation models
|
||||
- Prevent future duplicates
|
||||
|
||||
## Historical Preservation
|
||||
|
||||
- Keep for reference
|
||||
- Tag with cancellation reason
|
||||
- Link to replacement if any
|
||||
- Maintain audit trail
|
||||
@@ -1,47 +0,0 @@
|
||||
Defer a task for later consideration.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
## Deferring a Task
|
||||
|
||||
This status indicates a task is valid but not currently actionable or prioritized.
|
||||
|
||||
## Valid Reasons for Deferral
|
||||
|
||||
- Waiting for external dependencies
|
||||
- Reprioritized for future sprint
|
||||
- Blocked by technical limitations
|
||||
- Resource constraints
|
||||
- Strategic timing considerations
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master set-status --id=$ARGUMENTS --status=deferred
|
||||
```
|
||||
|
||||
## Deferral Management
|
||||
|
||||
When deferring:
|
||||
1. **Document Reason**
|
||||
- Capture why it's being deferred
|
||||
- Set reactivation criteria
|
||||
- Note any partial work completed
|
||||
|
||||
2. **Impact Analysis**
|
||||
- Check dependent tasks
|
||||
- Update project timeline
|
||||
- Notify affected stakeholders
|
||||
|
||||
3. **Future Planning**
|
||||
- Set review reminders
|
||||
- Tag for specific milestone
|
||||
- Preserve context for reactivation
|
||||
- Link to blocking issues
|
||||
|
||||
## Smart Tracking
|
||||
|
||||
- Monitor deferral duration
|
||||
- Alert when criteria met
|
||||
- Prevent scope creep
|
||||
- Regular review cycles
|
||||
@@ -1,44 +0,0 @@
|
||||
Mark a task as completed.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
## Completing a Task
|
||||
|
||||
This command validates task completion and updates project state intelligently.
|
||||
|
||||
## Pre-Completion Checks
|
||||
|
||||
1. Verify test strategy was followed
|
||||
2. Check if all subtasks are complete
|
||||
3. Validate acceptance criteria met
|
||||
4. Ensure code is committed
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master set-status --id=$ARGUMENTS --status=done
|
||||
```
|
||||
|
||||
## Post-Completion Actions
|
||||
|
||||
1. **Update Dependencies**
|
||||
- Identify newly unblocked tasks
|
||||
- Update sprint progress
|
||||
- Recalculate project timeline
|
||||
|
||||
2. **Documentation**
|
||||
- Generate completion summary
|
||||
- Update CLAUDE.md with learnings
|
||||
- Log implementation approach
|
||||
|
||||
3. **Next Steps**
|
||||
- Show newly available tasks
|
||||
- Suggest logical next task
|
||||
- Update velocity metrics
|
||||
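A minimal sketch of this completion-to-next-task handoff (task ID illustrative):

```bash
task-master set-status --id=5 --status=done
task-master next   # surface what the completion just unblocked
```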
|
||||
## Celebration & Learning
|
||||
|
||||
- Show impact of completion
|
||||
- Display unblocked work
|
||||
- Recognize achievement
|
||||
- Capture lessons learned
|
||||
@@ -1,36 +0,0 @@
|
||||
Start working on a task by setting its status to in-progress.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
## Starting Work on Task
|
||||
|
||||
This command does more than just change status - it prepares your environment for productive work.
|
||||
|
||||
## Pre-Start Checks
|
||||
|
||||
1. Verify dependencies are met
|
||||
2. Check if another task is already in-progress
|
||||
3. Ensure task details are complete
|
||||
4. Validate test strategy exists
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master set-status --id=$ARGUMENTS --status=in-progress
|
||||
```
|
||||
|
||||
## Environment Setup
|
||||
|
||||
After setting to in-progress:
|
||||
1. Create/checkout appropriate git branch
|
||||
2. Open relevant documentation
|
||||
3. Set up test watchers if applicable
|
||||
4. Display task details and acceptance criteria
|
||||
5. Show similar completed tasks for reference
|
||||
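A minimal sketch of this kickoff, assuming a git-based workflow (task ID and branch name are illustrative):

```bash
task-master set-status --id=5 --status=in-progress
git checkout -b task-5-user-auth 2>/dev/null || git checkout task-5-user-auth
task-master show 5   # review details and acceptance criteria before coding
```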
|
||||
## Smart Suggestions
|
||||
|
||||
- Estimated completion time based on complexity
|
||||
- Related files from similar tasks
|
||||
- Potential blockers to watch for
|
||||
- Recommended first steps
|
||||
@@ -1,32 +0,0 @@
|
||||
Set a task's status to pending.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
## Setting Task to Pending
|
||||
|
||||
This moves a task back to the pending state, useful for:
|
||||
- Resetting erroneously started tasks
|
||||
- Deferring work that was prematurely begun
|
||||
- Reorganizing sprint priorities
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master set-status --id=$ARGUMENTS --status=pending
|
||||
```
|
||||
|
||||
## Validation
|
||||
|
||||
Before setting to pending:
|
||||
- Warn if task is currently in-progress
|
||||
- Check if this will block other tasks
|
||||
- Suggest documenting why it's being reset
|
||||
- Preserve any work already done
|
||||
|
||||
## Smart Actions
|
||||
|
||||
After setting to pending:
|
||||
- Update sprint planning if needed
|
||||
- Notify about freed resources
|
||||
- Suggest priority reassessment
|
||||
- Log the status change with context
|
||||
@@ -1,40 +0,0 @@
|
||||
Set a task's status to review.
|
||||
|
||||
Arguments: $ARGUMENTS (task ID)
|
||||
|
||||
## Marking Task for Review
|
||||
|
||||
This status indicates work is complete but needs verification before final approval.
|
||||
|
||||
## When to Use Review Status
|
||||
|
||||
- Code complete but needs peer review
|
||||
- Implementation done but needs testing
|
||||
- Documentation written but needs proofreading
|
||||
- Design complete but needs stakeholder approval
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master set-status --id=$ARGUMENTS --status=review
|
||||
```
|
||||
|
||||
## Review Preparation
|
||||
|
||||
When setting to review:
|
||||
1. **Generate Review Checklist**
|
||||
- Link to PR/MR if applicable
|
||||
- Highlight key changes
|
||||
- Note areas needing attention
|
||||
- Include test results
|
||||
|
||||
2. **Documentation**
|
||||
- Update task with review notes
|
||||
- Link relevant artifacts
|
||||
- Specify reviewers if known
|
||||
|
||||
3. **Smart Actions**
|
||||
- Create review reminders
|
||||
- Track review duration
|
||||
- Suggest reviewers based on expertise
|
||||
- Prepare rollback plan if needed
|
||||
@@ -1,117 +0,0 @@
|
||||
Check if Task Master is installed and install it if needed.
|
||||
|
||||
This command helps you get Task Master set up globally on your system.
|
||||
|
||||
## Detection and Installation Process
|
||||
|
||||
1. **Check Current Installation**
|
||||
```bash
|
||||
# Check if task-master command exists
|
||||
which task-master || echo "Task Master not found"
|
||||
|
||||
# Check npm global packages
|
||||
npm list -g task-master-ai
|
||||
```
|
||||
|
||||
2. **System Requirements Check**
|
||||
```bash
|
||||
# Verify Node.js is installed
|
||||
node --version
|
||||
|
||||
# Verify npm is installed
|
||||
npm --version
|
||||
|
||||
# Check Node version (need 16+)
|
||||
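# One way to verify the major version (a sketch; assumes a POSIX shell)
node -e "process.exit(parseInt(process.versions.node) >= 16 ? 0 : 1)" && echo "Node version OK"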
```
|
||||
|
||||
3. **Install Task Master Globally**
|
||||
If not installed, run:
|
||||
```bash
|
||||
npm install -g task-master-ai
|
||||
```
|
||||
|
||||
4. **Verify Installation**
|
||||
```bash
|
||||
# Check version
|
||||
task-master --version
|
||||
|
||||
# Verify command is available
|
||||
which task-master
|
||||
```
|
||||
|
||||
5. **Initial Setup**
|
||||
```bash
|
||||
# Initialize in current directory
|
||||
task-master init
|
||||
```
|
||||
|
||||
6. **Configure AI Provider**
|
||||
Ensure you have at least one AI provider API key set:
|
||||
```bash
|
||||
# Check current configuration
|
||||
task-master models --status
|
||||
|
||||
# If no API keys found, guide setup
|
||||
echo "You'll need at least one API key:"
|
||||
echo "- ANTHROPIC_API_KEY for Claude"
|
||||
echo "- OPENAI_API_KEY for GPT models"
|
||||
echo "- PERPLEXITY_API_KEY for research"
|
||||
echo ""
|
||||
echo "Set them in your shell profile or .env file"
|
||||
```
|
||||
|
||||
7. **Quick Test**
|
||||
```bash
|
||||
# Create a test PRD
|
||||
echo "Build a simple hello world API" > test-prd.txt
|
||||
|
||||
# Try parsing it
|
||||
task-master parse-prd test-prd.txt -n 3
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
If installation fails:
|
||||
|
||||
**Permission Errors:**
|
||||
```bash
|
||||
# Try with sudo (macOS/Linux)
|
||||
sudo npm install -g task-master-ai
|
||||
|
||||
# Or fix npm permissions
|
||||
npm config set prefix ~/.npm-global
|
||||
export PATH=~/.npm-global/bin:$PATH
|
||||
```
|
||||
|
||||
**Network Issues:**
|
||||
```bash
|
||||
# Use different registry
|
||||
npm install -g task-master-ai --registry https://registry.npmjs.org/
|
||||
```
|
||||
|
||||
**Node Version Issues:**
|
||||
```bash
|
||||
# Install Node 18+ via nvm
|
||||
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
|
||||
nvm install 18
|
||||
nvm use 18
|
||||
```
|
||||
|
||||
## Success Confirmation
|
||||
|
||||
Once installed, you should see:
|
||||
```
|
||||
✅ Task Master v0.16.2 (or higher) installed
|
||||
✅ Command 'task-master' available globally
|
||||
✅ AI provider configured
|
||||
✅ Ready to use slash commands!
|
||||
|
||||
Try: /project:task-master:init your-prd.md
|
||||
```
|
||||
|
||||
## Next Steps
|
||||
|
||||
After installation:
|
||||
1. Run `/project:utils:check-health` to verify setup
|
||||
2. Configure AI providers with `/project:task-master:models`
|
||||
3. Start using Task Master commands!
|
||||
@@ -1,22 +0,0 @@
|
||||
Quick install Task Master globally if not already installed.
|
||||
|
||||
Execute this streamlined installation:
|
||||
|
||||
```bash
|
||||
# Check and install in one command
|
||||
task-master --version 2>/dev/null || npm install -g task-master-ai
|
||||
|
||||
# Verify installation
|
||||
task-master --version
|
||||
|
||||
# Quick setup check
|
||||
task-master models --status || echo "Note: You'll need to set up an AI provider API key"
|
||||
```
|
||||
|
||||
If you see "command not found" after installation, you may need to:
|
||||
1. Restart your terminal
|
||||
2. Or add the npm global bin directory to PATH: `export PATH=$(npm bin -g):$PATH` (on npm 9+, where `npm bin` was removed, use `export PATH=$(npm prefix -g)/bin:$PATH`)
|
||||
|
||||
Once installed, you can use all the Task Master commands!
|
||||
|
||||
Quick test: Run `/project:help` to see all available commands.
|
||||
@@ -1,82 +0,0 @@
|
||||
Show detailed task information with rich context and insights.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Enhanced Task Display
|
||||
|
||||
Parse arguments to determine what to show and how.
|
||||
|
||||
### 1. **Smart Task Selection**
|
||||
|
||||
Based on $ARGUMENTS:
|
||||
- Number → Show specific task with full context
|
||||
- "current" → Show active in-progress task(s)
|
||||
- "next" → Show recommended next task
|
||||
- "blocked" → Show all blocked tasks with reasons
|
||||
- "critical" → Show critical path tasks
|
||||
- Multiple IDs → Comparative view
|
||||
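Under the hood, these selections can map onto plain CLI calls along these lines (a sketch; IDs are illustrative):

```bash
task-master show 45                    # a specific task with full context
task-master next                       # the recommended next task
task-master list --status=in-progress  # currently active work
```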
|
||||
### 2. **Contextual Information**
|
||||
|
||||
For each task, intelligently include:
|
||||
|
||||
**Core Details**
|
||||
- Full task information (id, title, description, details)
|
||||
- Current status with history
|
||||
- Test strategy and acceptance criteria
|
||||
- Priority and complexity analysis
|
||||
|
||||
**Relationships**
|
||||
- Dependencies (what it needs)
|
||||
- Dependents (what needs it)
|
||||
- Parent/subtask hierarchy
|
||||
- Related tasks (similar work)
|
||||
|
||||
**Time Intelligence**
|
||||
- Created/updated timestamps
|
||||
- Time in current status
|
||||
- Estimated vs actual time
|
||||
- Historical completion patterns
|
||||
|
||||
### 3. **Visual Enhancements**
|
||||
|
||||
```
|
||||
📋 Task #45: Implement User Authentication
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Status: 🟡 in-progress (2 hours)
|
||||
Priority: 🔴 High | Complexity: 73/100
|
||||
|
||||
Dependencies: ✅ #41, ✅ #42, ⏳ #43 (blocked)
|
||||
Blocks: #46, #47, #52
|
||||
|
||||
Progress: ████████░░ 80% complete
|
||||
|
||||
Recent Activity:
|
||||
- 2h ago: Status changed to in-progress
|
||||
- 4h ago: Dependency #42 completed
|
||||
- Yesterday: Task expanded with 3 subtasks
|
||||
```
|
||||
|
||||
### 4. **Intelligent Insights**
|
||||
|
||||
Based on task analysis:
|
||||
- **Risk Assessment**: Complexity vs time remaining
|
||||
- **Bottleneck Analysis**: Is this blocking critical work?
|
||||
- **Recommendation**: Suggested approach or concerns
|
||||
- **Similar Tasks**: How others completed similar work
|
||||
|
||||
### 5. **Action Suggestions**
|
||||
|
||||
Context-aware next steps:
|
||||
- If blocked → Show how to unblock
|
||||
- If complex → Suggest expansion
|
||||
- If in-progress → Show completion checklist
|
||||
- If done → Show dependent tasks ready to start
|
||||
|
||||
### 6. **Multi-Task View**
|
||||
|
||||
When showing multiple tasks:
|
||||
- Common dependencies
|
||||
- Optimal completion order
|
||||
- Parallel work opportunities
|
||||
- Combined complexity analysis
|
||||
@@ -1,64 +0,0 @@
|
||||
Enhanced status command with comprehensive project insights.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Intelligent Status Overview
|
||||
|
||||
### 1. **Executive Summary**
|
||||
Quick dashboard view:
|
||||
- 🏃 Active work (in-progress tasks)
|
||||
- 📊 Progress metrics (% complete, velocity)
|
||||
- 🚧 Blockers and risks
|
||||
- ⏱️ Time analysis (estimated vs actual)
|
||||
- 🎯 Sprint/milestone progress
|
||||
|
||||
### 2. **Contextual Analysis**
|
||||
|
||||
Based on $ARGUMENTS, focus on:
|
||||
- "sprint" → Current sprint progress and burndown
|
||||
- "blocked" → Dependency chains and resolution paths
|
||||
- "team" → Task distribution and workload
|
||||
- "timeline" → Schedule adherence and projections
|
||||
- "risk" → High complexity or overdue items
|
||||
|
||||
### 3. **Smart Insights**
|
||||
|
||||
**Workflow Health:**
|
||||
- Idle tasks (in-progress > 24h without updates)
|
||||
- Bottlenecks (multiple tasks waiting on same dependency)
|
||||
- Quick wins (low complexity, high impact)
|
||||
|
||||
**Predictive Analytics:**
|
||||
- Completion projections based on velocity
|
||||
- Risk of missing deadlines
|
||||
- Recommended task order for optimal flow
|
||||
|
||||
### 4. **Visual Intelligence**
|
||||
|
||||
Dynamic visualization based on data:
|
||||
```
|
||||
Sprint Progress: ████████░░ 80% (16/20 tasks)
|
||||
Velocity Trend: ↗️ +15% this week
|
||||
Blocked Tasks: 🔴 3 critical path items
|
||||
|
||||
Priority Distribution:
|
||||
High: ████████ 8 tasks (2 blocked)
|
||||
Medium: ████░░░░ 4 tasks
|
||||
Low: ██░░░░░░ 2 tasks
|
||||
```
|
||||
|
||||
### 5. **Actionable Recommendations**
|
||||
|
||||
Based on analysis:
|
||||
1. **Immediate actions** (unblock critical path)
|
||||
2. **Today's focus** (optimal task sequence)
|
||||
3. **Process improvements** (recurring patterns)
|
||||
4. **Resource needs** (skills, time, dependencies)
|
||||
|
||||
### 6. **Historical Context**
|
||||
|
||||
Compare to previous periods:
|
||||
- Velocity changes
|
||||
- Pattern recognition
|
||||
- Improvement areas
|
||||
- Success patterns to repeat
|
||||
@@ -1,117 +0,0 @@
|
||||
Export tasks to README.md with professional formatting.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Generate a well-formatted README with current task information.
|
||||
|
||||
## README Synchronization
|
||||
|
||||
Creates or updates README.md with beautifully formatted task information.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
Optional filters:
|
||||
- "pending" → Only pending tasks
|
||||
- "with-subtasks" → Include subtask details
|
||||
- "by-priority" → Group by priority
|
||||
- "sprint" → Current sprint only
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master sync-readme [--with-subtasks] [--status=<status>]
|
||||
```
|
||||
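For instance, to export only pending tasks with their subtask details:

```bash
task-master sync-readme --with-subtasks --status=pending
```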
|
||||
## README Generation
|
||||
|
||||
### 1. **Project Header**
|
||||
```markdown
|
||||
# Project Name
|
||||
|
||||
## 📋 Task Progress
|
||||
|
||||
Last Updated: 2024-01-15 10:30 AM
|
||||
|
||||
### Summary
|
||||
- Total Tasks: 45
|
||||
- Completed: 15 (33%)
|
||||
- In Progress: 5 (11%)
|
||||
- Pending: 25 (56%)
|
||||
```
|
||||
|
||||
### 2. **Task Sections**
|
||||
Organized by status or priority:
|
||||
- Progress indicators
|
||||
- Task descriptions
|
||||
- Dependencies noted
|
||||
- Time estimates
|
||||
|
||||
### 3. **Visual Elements**
|
||||
- Progress bars
|
||||
- Status badges
|
||||
- Priority indicators
|
||||
- Completion checkmarks
|
||||
|
||||
## Smart Features
|
||||
|
||||
1. **Intelligent Grouping**
|
||||
- By feature area
|
||||
- By sprint/milestone
|
||||
- By assigned developer
|
||||
- By priority
|
||||
|
||||
2. **Progress Tracking**
|
||||
- Overall completion
|
||||
- Sprint velocity
|
||||
- Burndown indication
|
||||
- Time tracking
|
||||
|
||||
3. **Formatting Options**
|
||||
- GitHub-flavored markdown
|
||||
- Task checkboxes
|
||||
- Collapsible sections
|
||||
- Table format available
|
||||
|
||||
## Example Output
|
||||
|
||||
```markdown
|
||||
## 🚀 Current Sprint
|
||||
|
||||
### In Progress
|
||||
- [ ] 🔄 #5 **Implement user authentication** (60% complete)
|
||||
- Dependencies: API design (#3 ✅)
|
||||
- Subtasks: 4 (2 completed)
|
||||
- Est: 8h / Spent: 5h
|
||||
|
||||
### Pending (High Priority)
|
||||
- [ ] ⚡ #8 **Create dashboard UI**
|
||||
- Blocked by: #5
|
||||
- Complexity: High
|
||||
- Est: 12h
|
||||
```
|
||||
|
||||
## Customization
|
||||
|
||||
Based on arguments:
|
||||
- Include/exclude sections
|
||||
- Detail level control
|
||||
- Custom grouping
|
||||
- Filter by criteria
|
||||
|
||||
## Post-Sync
|
||||
|
||||
After generation:
|
||||
1. Show diff preview
|
||||
2. Backup existing README
|
||||
3. Write new content
|
||||
4. Commit reminder
|
||||
5. Update timestamp
|
||||
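One way to approximate these steps from the shell, assuming a git repo (a sketch, not part of the CLI itself):

```bash
cp README.md README.md.bak 2>/dev/null   # keep a backup of the current README
task-master sync-readme --with-subtasks  # regenerate the task sections
git diff README.md                       # review what changed
git add README.md && git commit -m "docs: sync task progress to README"
```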
|
||||
## Integration
|
||||
|
||||
Works well with:
|
||||
- Git workflows
|
||||
- CI/CD pipelines
|
||||
- Project documentation
|
||||
- Team updates
|
||||
- Client reports
|
||||
@@ -1,146 +0,0 @@
|
||||
# Task Master Command Reference
|
||||
|
||||
Comprehensive command structure for Task Master integration with Claude Code.
|
||||
|
||||
## Command Organization
|
||||
|
||||
Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.
|
||||
|
||||
## Project Setup & Configuration
|
||||
|
||||
### `/project:tm/init`
|
||||
- `init-project` - Initialize new project (handles PRD files intelligently)
|
||||
- `init-project-quick` - Quick setup with auto-confirmation (-y flag)
|
||||
|
||||
### `/project:tm/models`
|
||||
- `view-models` - View current AI model configuration
|
||||
- `setup-models` - Interactive model configuration
|
||||
- `set-main` - Set primary generation model
|
||||
- `set-research` - Set research model
|
||||
- `set-fallback` - Set fallback model
|
||||
|
||||
## Task Generation
|
||||
|
||||
### `/project:tm/parse-prd`
|
||||
- `parse-prd` - Generate tasks from PRD document
|
||||
- `parse-prd-with-research` - Enhanced parsing with research mode
|
||||
|
||||
### `/project:tm/generate`
|
||||
- `generate-tasks` - Create individual task files from tasks.json
|
||||
|
||||
## Task Management
|
||||
|
||||
### `/project:tm/list`
|
||||
- `list-tasks` - Smart listing with natural language filters
|
||||
- `list-tasks-with-subtasks` - Include subtasks in hierarchical view
|
||||
- `list-tasks-by-status` - Filter by specific status
|
||||
|
||||
### `/project:tm/set-status`
|
||||
- `to-pending` - Reset task to pending
|
||||
- `to-in-progress` - Start working on task
|
||||
- `to-done` - Mark task complete
|
||||
- `to-review` - Submit for review
|
||||
- `to-deferred` - Defer task
|
||||
- `to-cancelled` - Cancel task
|
||||
|
||||
### `/project:tm/sync-readme`
|
||||
- `sync-readme` - Export tasks to README.md with formatting
|
||||
|
||||
### `/project:tm/update`
|
||||
- `update-task` - Update tasks with natural language
|
||||
- `update-tasks-from-id` - Update multiple tasks from a starting point
|
||||
- `update-single-task` - Update specific task
|
||||
|
||||
### `/project:tm/add-task`
|
||||
- `add-task` - Add new task with AI assistance
|
||||
|
||||
### `/project:tm/remove-task`
|
||||
- `remove-task` - Remove task with confirmation
|
||||
|
||||
## Subtask Management
|
||||
|
||||
### `/project:tm/add-subtask`
|
||||
- `add-subtask` - Add new subtask to parent
|
||||
- `convert-task-to-subtask` - Convert existing task to subtask
|
||||
|
||||
### `/project:tm/remove-subtask`
|
||||
- `remove-subtask` - Remove subtask (with optional conversion)
|
||||
|
||||
### `/project:tm/clear-subtasks`
|
||||
- `clear-subtasks` - Clear subtasks from specific task
|
||||
- `clear-all-subtasks` - Clear all subtasks globally
|
||||
|
||||
## Task Analysis & Breakdown
|
||||
|
||||
### `/project:tm/analyze-complexity`
|
||||
- `analyze-complexity` - Analyze and generate expansion recommendations
|
||||
|
||||
### `/project:tm/complexity-report`
|
||||
- `complexity-report` - Display complexity analysis report
|
||||
|
||||
### `/project:tm/expand`
|
||||
- `expand-task` - Break down specific task
|
||||
- `expand-all-tasks` - Expand all eligible tasks
|
||||
- `with-research` - Enhanced expansion
|
||||
|
||||
## Task Navigation
|
||||
|
||||
### `/project:tm/next`
|
||||
- `next-task` - Intelligent next task recommendation
|
||||
|
||||
### `/project:tm/show`
|
||||
- `show-task` - Display detailed task information
|
||||
|
||||
### `/project:tm/status`
|
||||
- `project-status` - Comprehensive project dashboard
|
||||
|
||||
## Dependency Management
|
||||
|
||||
### `/project:tm/add-dependency`
|
||||
- `add-dependency` - Add task dependency
|
||||
|
||||
### `/project:tm/remove-dependency`
|
||||
- `remove-dependency` - Remove task dependency
|
||||
|
||||
### `/project:tm/validate-dependencies`
|
||||
- `validate-dependencies` - Check for dependency issues
|
||||
|
||||
### `/project:tm/fix-dependencies`
|
||||
- `fix-dependencies` - Automatically fix dependency problems
|
||||
|
||||
## Workflows & Automation
|
||||
|
||||
### `/project:tm/workflows`
|
||||
- `smart-workflow` - Context-aware intelligent workflow execution
|
||||
- `command-pipeline` - Chain multiple commands together
|
||||
- `auto-implement-tasks` - Advanced auto-implementation with code generation
|
||||
|
||||
## Utilities
|
||||
|
||||
### `/project:tm/utils`
|
||||
- `analyze-project` - Deep project analysis and insights
|
||||
|
||||
### `/project:tm/setup`
|
||||
- `install-taskmaster` - Comprehensive installation guide
|
||||
- `quick-install-taskmaster` - One-line global installation
|
||||
|
||||
## Usage Patterns
|
||||
|
||||
### Natural Language
|
||||
Most commands accept natural language arguments:
|
||||
```
|
||||
/project:tm/add-task create user authentication system
|
||||
/project:tm/update mark all API tasks as high priority
|
||||
/project:tm/list show blocked tasks
|
||||
```
|
||||
|
||||
### ID-Based Commands
|
||||
Commands requiring IDs intelligently parse from $ARGUMENTS:
|
||||
```
|
||||
/project:tm/show 45
|
||||
/project:tm/expand 23
|
||||
/project:tm/set-status/to-done 67
|
||||
```
|
||||
|
||||
### Smart Defaults
|
||||
Commands provide intelligent defaults and suggestions based on context.
|
||||
@@ -1,119 +0,0 @@
|
||||
Update a single specific task with new information.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse task ID and update details.
|
||||
|
||||
## Single Task Update
|
||||
|
||||
Precisely update one task with AI assistance to maintain consistency.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
Natural language updates:
|
||||
- "5: add caching requirement"
|
||||
- "update 5 to include error handling"
|
||||
- "task 5 needs rate limiting"
|
||||
- "5 change priority to high"
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master update-task --id=<id> --prompt="<context>"
|
||||
```
|
||||
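For instance, the rate-limiting example later in this file maps to a call like (ID and prompt illustrative):

```bash
task-master update-task --id=5 --prompt="Add rate limiting (100 requests/min) to all API endpoints"
```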
|
||||
## Update Types
|
||||
|
||||
### 1. **Content Updates**
|
||||
- Enhance description
|
||||
- Add requirements
|
||||
- Clarify details
|
||||
- Update acceptance criteria
|
||||
|
||||
### 2. **Metadata Updates**
|
||||
- Change priority
|
||||
- Adjust time estimates
|
||||
- Update complexity
|
||||
- Modify dependencies
|
||||
|
||||
### 3. **Strategic Updates**
|
||||
- Revise approach
|
||||
- Change test strategy
|
||||
- Update implementation notes
|
||||
- Adjust subtask needs
|
||||
|
||||
## AI-Powered Updates
|
||||
|
||||
The AI:
|
||||
1. **Understands Context**
|
||||
- Reads current task state
|
||||
- Identifies update intent
|
||||
- Maintains consistency
|
||||
- Preserves important info
|
||||
|
||||
2. **Applies Changes**
|
||||
- Updates relevant fields
|
||||
- Keeps style consistent
|
||||
- Adds without removing
|
||||
- Enhances clarity
|
||||
|
||||
3. **Validates Results**
|
||||
- Checks coherence
|
||||
- Verifies completeness
|
||||
- Maintains relationships
|
||||
- Suggests related updates
|
||||
|
||||
## Example Updates
|
||||
|
||||
```
|
||||
/project:tm/update/single 5: add rate limiting
|
||||
→ Updating Task #5: "Implement API endpoints"
|
||||
|
||||
Current: Basic CRUD endpoints
|
||||
Adding: Rate limiting requirements
|
||||
|
||||
Updated sections:
|
||||
✓ Description: Added rate limiting mention
|
||||
✓ Details: Added specific limits (100/min)
|
||||
✓ Test Strategy: Added rate limit tests
|
||||
✓ Complexity: Increased from 5 to 6
|
||||
✓ Time Estimate: Increased by 2 hours
|
||||
|
||||
Suggestion: Also update task #6 (API Gateway) for consistency?
|
||||
```
|
||||
|
||||
## Smart Features
|
||||
|
||||
1. **Incremental Updates**
|
||||
- Adds without overwriting
|
||||
- Preserves work history
|
||||
- Tracks what changed
|
||||
- Shows diff view
|
||||
|
||||
2. **Consistency Checks**
|
||||
- Related task alignment
|
||||
- Subtask compatibility
|
||||
- Dependency validity
|
||||
- Timeline impact
|
||||
|
||||
3. **Update History**
|
||||
- Timestamp changes
|
||||
- Track who/what updated
|
||||
- Reason for update
|
||||
- Previous versions
|
||||
|
||||
## Field-Specific Updates
|
||||
|
||||
Quick syntax for specific fields:
|
||||
- "5 priority:high" → Update priority only
|
||||
- "5 add-time:4h" → Add to time estimate
|
||||
- "5 status:review" → Change status
|
||||
- "5 depends:3,4" → Add dependencies
|
||||
|
||||
## Post-Update
|
||||
|
||||
- Show updated task
|
||||
- Highlight changes
|
||||
- Check related tasks
|
||||
- Update suggestions
|
||||
- Timeline adjustments
|
||||
@@ -1,72 +0,0 @@
|
||||
Update tasks with intelligent field detection and bulk operations.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Intelligent Task Updates
|
||||
|
||||
Parse arguments to determine update intent and execute smartly.
|
||||
|
||||
### 1. **Natural Language Processing**
|
||||
|
||||
Understand update requests like:
|
||||
- "mark 23 as done" → Update status to done
|
||||
- "increase priority of 45" → Set priority to high
|
||||
- "add dependency on 12 to task 34" → Add dependency
|
||||
- "tasks 20-25 need review" → Bulk status update
|
||||
- "all API tasks high priority" → Pattern-based update
|
||||
|
||||
### 2. **Smart Field Detection**
|
||||
|
||||
Automatically detect what to update:
|
||||
- Status keywords: done, complete, start, pause, review
|
||||
- Priority changes: urgent, high, low, deprioritize
|
||||
- Dependency updates: depends on, blocks, after
|
||||
- Assignment: assign to, owner, responsible
|
||||
- Time: estimate, spent, deadline
|
||||
|
||||
### 3. **Bulk Operations**
|
||||
|
||||
Support for multiple task updates:
|
||||
```
|
||||
Examples:
|
||||
- "complete tasks 12, 15, 18"
|
||||
- "all pending auth tasks to in-progress"
|
||||
- "increase priority for tasks blocking 45"
|
||||
- "defer all documentation tasks"
|
||||
```
|
||||
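Where the installed CLI supports comma-separated IDs, a request like "complete tasks 12, 15, 18" can collapse into a single call (an assumption to verify against `task-master set-status --help`):

```bash
task-master set-status --id=12,15,18 --status=done
```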
|
||||
### 4. **Contextual Validation**
|
||||
|
||||
Before updating, check:
|
||||
- Status transitions are valid
|
||||
- Dependencies don't create cycles
|
||||
- Priority changes make sense
|
||||
- Bulk updates won't break project flow
|
||||
|
||||
Show preview:
|
||||
```
|
||||
Update Preview:
|
||||
─────────────────
|
||||
Tasks to update: #23, #24, #25
|
||||
Change: status → in-progress
|
||||
Impact: Will unblock tasks #30, #31
|
||||
Warning: Task #24 has unmet dependencies
|
||||
```
|
||||
|
||||
### 5. **Smart Suggestions**
|
||||
|
||||
Based on update:
|
||||
- Completing task? → Show newly unblocked tasks
|
||||
- Changing priority? → Show impact on sprint
|
||||
- Adding dependency? → Check for conflicts
|
||||
- Bulk update? → Show summary of changes
|
||||
|
||||
### 6. **Workflow Integration**
|
||||
|
||||
After updates:
|
||||
- Auto-update dependent task states
|
||||
- Trigger status recalculation
|
||||
- Update sprint/milestone progress
|
||||
- Log changes with context
|
||||
|
||||
Result: Flexible, intelligent task updates with safety checks.
|
||||
@@ -1,108 +0,0 @@
|
||||
Update multiple tasks starting from a specific ID.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
Parse starting task ID and update context.
|
||||
|
||||
## Bulk Task Updates
|
||||
|
||||
Update multiple related tasks based on new requirements or context changes.
|
||||
|
||||
## Argument Parsing
|
||||
|
||||
- "from 5: add security requirements"
|
||||
- "5 onwards: update API endpoints"
|
||||
- "starting at 5: change to use new framework"
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master update --from=<id> --prompt="<context>"
|
||||
```
|
||||
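For instance, the security-requirements scenario previewed below maps to (ID and prompt illustrative):

```bash
task-master update --from=5 --prompt="Add security requirements to all affected tasks"
```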
|
||||
## Update Process
|
||||
|
||||
### 1. **Task Selection**
|
||||
Starting from specified ID:
|
||||
- Include the task itself
|
||||
- Include all dependent tasks
|
||||
- Include related subtasks
|
||||
- Smart boundary detection
|
||||
|
||||
### 2. **Context Application**
|
||||
AI analyzes the update context and:
|
||||
- Identifies what needs changing
|
||||
- Maintains consistency
|
||||
- Preserves completed work
|
||||
- Updates related information
|
||||
|
||||
### 3. **Intelligent Updates**
|
||||
- Modify descriptions appropriately
|
||||
- Update test strategies
|
||||
- Adjust time estimates
|
||||
- Revise dependencies if needed
|
||||
|
||||
## Smart Features
|
||||
|
||||
1. **Scope Detection**
|
||||
- Find natural task groupings
|
||||
- Identify related features
|
||||
- Stop at logical boundaries
|
||||
- Avoid over-updating
|
||||
|
||||
2. **Consistency Maintenance**
|
||||
- Keep naming conventions
|
||||
- Preserve relationships
|
||||
- Update cross-references
|
||||
- Maintain task flow
|
||||
|
||||
3. **Change Preview**
|
||||
```
|
||||
Bulk Update Preview
|
||||
━━━━━━━━━━━━━━━━━━
|
||||
Starting from: Task #5
|
||||
Tasks to update: 8 tasks + 12 subtasks
|
||||
|
||||
Context: "add security requirements"
|
||||
|
||||
Changes will include:
|
||||
- Add security sections to descriptions
|
||||
- Update test strategies for security
|
||||
- Add security-related subtasks where needed
|
||||
- Adjust time estimates (+20% average)
|
||||
|
||||
Continue? (y/n)
|
||||
```
|
||||
|
||||
## Example Updates
|
||||
|
||||
```
|
||||
/project:tm/update/from-id 5: change database to PostgreSQL
|
||||
→ Analyzing impact starting from task #5
|
||||
→ Found 6 related tasks to update
|
||||
→ Updates will maintain consistency
|
||||
→ Preview changes? (y/n)
|
||||
|
||||
Applied updates:
|
||||
✓ Task #5: Updated connection logic references
|
||||
✓ Task #6: Changed migration approach
|
||||
✓ Task #7: Updated query syntax notes
|
||||
✓ Task #8: Revised testing strategy
|
||||
✓ Task #9: Updated deployment steps
|
||||
✓ Task #12: Changed backup procedures
|
||||
```
|
||||
|
||||
## Safety Features
|
||||
|
||||
- Preview all changes
|
||||
- Selective confirmation
|
||||
- Rollback capability
|
||||
- Change logging
|
||||
- Validation checks
|
||||
|
||||
## Post-Update
|
||||
|
||||
- Summary of changes
|
||||
- Consistency verification
|
||||
- Suggest review tasks
|
||||
- Update timeline if needed
|
||||
@@ -1,97 +0,0 @@
|
||||
Advanced project analysis with actionable insights and recommendations.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Comprehensive Project Analysis
|
||||
|
||||
Multi-dimensional analysis based on requested focus area.
|
||||
|
||||
### 1. **Analysis Modes**
|
||||
|
||||
Based on $ARGUMENTS:
|
||||
- "velocity" → Sprint velocity and trends
|
||||
- "quality" → Code quality metrics
|
||||
- "risk" → Risk assessment and mitigation
|
||||
- "dependencies" → Dependency graph analysis
|
||||
- "team" → Workload and skill distribution
|
||||
- "architecture" → System design coherence
|
||||
- Default → Full spectrum analysis
|
||||
|
||||
### 2. **Velocity Analytics**
|
||||
|
||||
```
|
||||
📊 Velocity Analysis
|
||||
━━━━━━━━━━━━━━━━━━━
|
||||
Current Sprint: 24 points/week ↗️ +20%
|
||||
Rolling Average: 20 points/week
|
||||
Efficiency: 85% (17/20 tasks on time)
|
||||
|
||||
Bottlenecks Detected:
|
||||
- Code review delays (avg 4h wait)
|
||||
- Test environment availability
|
||||
- Dependency on external team
|
||||
|
||||
Recommendations:
|
||||
1. Implement parallel review process
|
||||
2. Add staging environment
|
||||
3. Mock external dependencies
|
||||
```
|
||||
|
||||
### 3. **Risk Assessment**
|
||||
|
||||
**Technical Risks**
|
||||
- High complexity tasks without backup assignee
|
||||
- Single points of failure in architecture
|
||||
- Insufficient test coverage in critical paths
|
||||
- Technical debt accumulation rate
|
||||
|
||||
**Project Risks**
|
||||
- Critical path dependencies
|
||||
- Resource availability gaps
|
||||
- Deadline feasibility analysis
|
||||
- Scope creep indicators
|
||||
|
||||
### 4. **Dependency Intelligence**
|
||||
|
||||
Visual dependency analysis:
|
||||
```
|
||||
Critical Path:
|
||||
#12 → #15 → #23 → #45 → #50 (20 days)
|
||||
↘ #24 → #46 ↗
|
||||
|
||||
Optimization: Parallelize #15 and #24
|
||||
Time Saved: 3 days
|
||||
```
|
||||
|
||||
### 5. **Quality Metrics**
|
||||
|
||||
**Code Quality**
|
||||
- Test coverage trends
|
||||
- Complexity scores
|
||||
- Technical debt ratio
|
||||
- Review feedback patterns
|
||||
|
||||
**Process Quality**
|
||||
- Rework frequency
|
||||
- Bug introduction rate
|
||||
- Time to resolution
|
||||
- Knowledge distribution
|
||||
|
||||
### 6. **Predictive Insights**
|
||||
|
||||
Based on patterns:
|
||||
- Completion probability by deadline
|
||||
- Resource needs projection
|
||||
- Risk materialization likelihood
|
||||
- Suggested interventions
|
||||
|
||||
### 7. **Executive Dashboard**
|
||||
|
||||
High-level summary with:
|
||||
- Health score (0-100)
|
||||
- Top 3 risks
|
||||
- Top 3 opportunities
|
||||
- Recommended actions
|
||||
- Success probability
|
||||
|
||||
Result: Data-driven decisions with clear action paths.
|
||||
@@ -1,71 +0,0 @@
|
||||
Validate all task dependencies for issues.
|
||||
|
||||
## Dependency Validation
|
||||
|
||||
Comprehensive check for dependency problems across the entire project.
|
||||
|
||||
## Execution
|
||||
|
||||
```bash
|
||||
task-master validate-dependencies
|
||||
```
|
||||
|
||||
## Validation Checks
|
||||
|
||||
1. **Circular Dependencies**
|
||||
- A depends on B, B depends on A
|
||||
- Complex circular chains
|
||||
- Self-dependencies
|
||||
|
||||
2. **Missing Dependencies**
|
||||
- References to non-existent tasks
|
||||
- Deleted task references
|
||||
- Invalid task IDs
|
||||
|
||||
3. **Logical Issues**
|
||||
- Completed tasks depending on pending
|
||||
- Cancelled tasks in dependency chains
|
||||
- Impossible sequences
|
||||
|
||||
4. **Complexity Warnings**
|
||||
- Over-complex dependency chains
|
||||
- Too many dependencies per task
|
||||
- Bottleneck tasks
|
||||
|
||||
## Smart Analysis
|
||||
|
||||
The validation provides:
|
||||
- Visual dependency graph
|
||||
- Critical path analysis
|
||||
- Bottleneck identification
|
||||
- Suggested optimizations
|
||||
|
||||
## Report Format
|
||||
|
||||
```
|
||||
Dependency Validation Report
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ No circular dependencies found
|
||||
⚠️ 2 warnings found:
|
||||
- Task #23 has 7 dependencies (consider breaking down)
|
||||
- Task #45 blocks 5 other tasks (potential bottleneck)
|
||||
❌ 1 error found:
|
||||
- Task #67 depends on deleted task #66
|
||||
|
||||
Critical Path: #1 → #5 → #23 → #45 → #50 (15 days)
|
||||
```
|
||||
|
||||
## Actionable Output
|
||||
|
||||
For each issue found:
|
||||
- Clear description
|
||||
- Impact assessment
|
||||
- Suggested fix
|
||||
- Command to resolve
|
||||
|
||||
## Next Steps
|
||||
|
||||
After validation:
|
||||
- Run `/project:tm/fix-dependencies` to auto-fix
|
||||
- Manually adjust problematic dependencies
|
||||
- Rerun to verify fixes
|
||||
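At the CLI level this loop is just (a sketch):

```bash
task-master validate-dependencies   # report circular, missing, and logical issues
task-master fix-dependencies        # auto-fix what can be fixed safely
task-master validate-dependencies   # rerun to confirm a clean report
```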
@@ -1,97 +0,0 @@
|
||||
Enhanced auto-implementation with intelligent code generation and testing.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Intelligent Auto-Implementation
|
||||
|
||||
Advanced implementation with context awareness and quality checks.
|
||||
|
||||
### 1. **Pre-Implementation Analysis**
|
||||
|
||||
Before starting:
|
||||
- Analyze task complexity and requirements
|
||||
- Check codebase patterns and conventions
|
||||
- Identify similar completed tasks
|
||||
- Assess test coverage needs
|
||||
- Detect potential risks
|
||||
|
||||
### 2. **Smart Implementation Strategy**
|
||||
|
||||
Based on task type and context:
|
||||
|
||||
**Feature Tasks**
|
||||
1. Research existing patterns
|
||||
2. Design component architecture
|
||||
3. Implement with tests
|
||||
4. Integrate with system
|
||||
5. Update documentation
|
||||
|
||||
**Bug Fix Tasks**
|
||||
1. Reproduce issue
|
||||
2. Identify root cause
|
||||
3. Implement minimal fix
|
||||
4. Add regression tests
|
||||
5. Verify side effects
|
||||
|
||||
**Refactoring Tasks**
|
||||
1. Analyze current structure
|
||||
2. Plan incremental changes
|
||||
3. Maintain test coverage
|
||||
4. Refactor step-by-step
|
||||
5. Verify behavior unchanged
|
||||
|
||||
### 3. **Code Intelligence**
|
||||
|
||||
**Pattern Recognition**
|
||||
- Learn from existing code
|
||||
- Follow team conventions
|
||||
- Use preferred libraries
|
||||
- Match style guidelines
|
||||
|
||||
**Test-Driven Approach**
|
||||
- Write tests first when possible
|
||||
- Ensure comprehensive coverage
|
||||
- Include edge cases
|
||||
- Performance considerations
|
||||
|
||||
### 4. **Progressive Implementation**
|
||||
|
||||
Step-by-step with validation:
|
||||
```
|
||||
Step 1/5: Setting up component structure ✓
|
||||
Step 2/5: Implementing core logic ✓
|
||||
Step 3/5: Adding error handling ⚡ (in progress)
|
||||
Step 4/5: Writing tests ⏳
|
||||
Step 5/5: Integration testing ⏳
|
||||
|
||||
Current: Adding try-catch blocks and validation...
|
||||
```
|
||||
|
||||
### 5. **Quality Assurance**
|
||||
|
||||
Automated checks:
|
||||
- Linting and formatting
|
||||
- Test execution
|
||||
- Type checking
|
||||
- Dependency validation
|
||||
- Performance analysis
|
||||
|
||||
### 6. **Smart Recovery**
|
||||
|
||||
If issues arise:
|
||||
- Diagnostic analysis
|
||||
- Suggestion generation
|
||||
- Fallback strategies
|
||||
- Manual intervention points
|
||||
- Learning from failures
|
||||
|
||||
### 7. **Post-Implementation**
|
||||
|
||||
After completion:
|
||||
- Generate PR description
|
||||
- Update documentation
|
||||
- Log lessons learned
|
||||
- Suggest follow-up tasks
|
||||
- Update task relationships
|
||||
|
||||
Result: High-quality, production-ready implementations.
|
||||
@@ -1,77 +0,0 @@
|
||||
Execute a pipeline of commands based on a specification.
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Command Pipeline Execution
|
||||
|
||||
Parse pipeline specification from arguments. Supported formats:
|
||||
|
||||
### Simple Pipeline
|
||||
`init → expand-all → sprint-plan`
|
||||
|
||||
### Conditional Pipeline
|
||||
`status → if:pending>10 → sprint-plan → else → next`
|
||||
|
||||
### Iterative Pipeline
|
||||
`for:pending-tasks → expand → complexity-check`
|
||||
|
||||
### Smart Pipeline Patterns
|
||||
|
||||
**1. Project Setup Pipeline**
|
||||
```
|
||||
init [prd] →
|
||||
expand-all →
|
||||
complexity-report →
|
||||
sprint-plan →
|
||||
show first-sprint
|
||||
```
|
||||
|
||||
**2. Daily Work Pipeline**
|
||||
```
|
||||
standup →
|
||||
if:in-progress → continue →
|
||||
else → next → start
|
||||
```
|
||||
|
||||
**3. Task Completion Pipeline**
|
||||
```
|
||||
complete [id] →
|
||||
git-commit →
|
||||
if:blocked-tasks-freed → show-freed →
|
||||
next
|
||||
```
|
||||
|
||||
**4. Quality Check Pipeline**
|
||||
```
|
||||
list in-progress →
|
||||
for:each → check-idle-time →
|
||||
if:idle>1day → prompt-update
|
||||
```
|
||||
|
||||
### Pipeline Features
|
||||
|
||||
**Variables**
|
||||
- Store results: `status → $count=pending-count`
|
||||
- Use in conditions: `if:$count>10`
|
||||
- Pass between commands: `expand $high-priority-tasks`
|
||||
|
||||
**Error Handling**
|
||||
- On failure: `try:complete → catch:show-blockers`
|
||||
- Skip on error: `optional:test-run`
|
||||
- Retry logic: `retry:3:commit`
|
||||
|
||||
**Parallel Execution**
|
||||
- Parallel branches: `[analyze | test | lint]`
|
||||
- Join results: `parallel → join:report`
|
||||
|
||||
### Execution Flow
|
||||
|
||||
1. Parse pipeline specification
|
||||
2. Validate command sequence
|
||||
3. Execute with state passing
|
||||
4. Handle conditions and loops
|
||||
5. Aggregate results
|
||||
6. Show summary
|
||||
|
||||
This enables complex workflows like:
|
||||
`parse-prd → expand-all → filter:complex>70 → assign:senior → sprint-plan:weighted`
|
||||
@@ -1,55 +0,0 @@
|
||||
Execute an intelligent workflow based on current project state and recent commands.
|
||||
|
||||
This command analyzes:
|
||||
1. Recent commands you've run
|
||||
2. Current project state
|
||||
3. Time of day / day of week
|
||||
4. Your working patterns
|
||||
|
||||
Arguments: $ARGUMENTS
|
||||
|
||||
## Intelligent Workflow Selection
|
||||
|
||||
Based on context, I'll determine the best workflow:
|
||||
|
||||
### Context Analysis
|
||||
- Previous command executed
|
||||
- Current task states
|
||||
- Unfinished work from last session
|
||||
- Your typical patterns
|
||||
|
||||
### Smart Execution
|
||||
|
||||
If last command was:
|
||||
- `status` → Likely starting work → Run daily standup
|
||||
- `complete` → Task finished → Find next task
|
||||
- `list pending` → Planning → Suggest sprint planning
|
||||
- `expand` → Breaking down work → Show complexity analysis
|
||||
- `init` → New project → Show onboarding workflow
|
||||
|
||||
If no recent commands:
|
||||
- Morning? → Daily standup workflow
|
||||
- Many pending tasks? → Sprint planning
|
||||
- Tasks blocked? → Dependency resolution
|
||||
- Friday? → Weekly review
|
||||
|
||||
### Workflow Composition
|
||||
|
||||
I'll chain appropriate commands:
|
||||
1. Analyze current state
|
||||
2. Execute primary workflow
|
||||
3. Suggest follow-up actions
|
||||
4. Prepare environment for coding
|
||||
|
||||
### Learning Mode
|
||||
|
||||
This command learns from your patterns:
|
||||
- Track command sequences
|
||||
- Note time preferences
|
||||
- Remember common workflows
|
||||
- Adapt to your style
|
||||
|
||||
Example flows detected:
|
||||
- Morning: standup → next → start
|
||||
- After lunch: status → continue task
|
||||
- End of day: complete → commit → status
|
||||
@@ -1,10 +0,0 @@
|
||||
reviews:
|
||||
profile: assertive
|
||||
poem: false
|
||||
auto_review:
|
||||
base_branches:
|
||||
- rc
|
||||
- beta
|
||||
- alpha
|
||||
- production
|
||||
- next
|
||||
@@ -2,13 +2,12 @@
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "node",
|
||||
"args": ["./dist/mcp-server.js"],
|
||||
"args": ["./mcp-server/server.js"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
|
||||
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
|
||||
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
|
||||
"GROQ_API_KEY": "GROQ_API_KEY_HERE",
|
||||
"XAI_API_KEY": "XAI_API_KEY_HERE",
|
||||
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
|
||||
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
|
||||
|
||||
@@ -523,7 +523,7 @@ For AI-powered commands that benefit from project context, follow the research c
|
||||
.option('--details <details>', 'Implementation details for the new subtask, optional')
|
||||
.option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on')
|
||||
.option('--status <status>', 'Initial status for the subtask', 'pending')
|
||||
.option('--generate', 'Regenerate task files after adding subtask')
|
||||
.option('--skip-generate', 'Skip regenerating task files')
|
||||
.action(async (options) => {
|
||||
// Validate required parameters
|
||||
if (!options.parent) {
|
||||
@@ -545,7 +545,7 @@ For AI-powered commands that benefit from project context, follow the research c
|
||||
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||
.option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required')
|
||||
.option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting')
|
||||
.option('--generate', 'Regenerate task files after removing subtask')
|
||||
.option('--skip-generate', 'Skip regenerating task files')
|
||||
.action(async (options) => {
|
||||
// Implementation with detailed error handling
|
||||
})
|
||||
@@ -633,11 +633,11 @@ function showAddSubtaskHelp() {
|
||||
' --dependencies <ids> Comma-separated list of dependency IDs\n' +
|
||||
' -s, --status <status> Status for the new subtask (default: "pending")\n' +
|
||||
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
|
||||
' --generate Regenerate task files after adding subtask\n\n' +
|
||||
' --skip-generate Skip regenerating task files\n\n' +
|
||||
chalk.cyan('Examples:') + '\n' +
|
||||
' task-master add-subtask --parent=\'5\' --task-id=\'8\'\n' +
|
||||
' task-master add-subtask -p \'5\' -t \'Implement login UI\' -d \'Create the login form\'\n' +
|
||||
' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details "Handle 401 Unauthorized.\\nHandle 500 Server Error." --generate',
|
||||
' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details $\'Handle 401 Unauthorized.\nHandle 500 Server Error.\'',
|
||||
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
|
||||
));
|
||||
}
|
||||
@@ -652,7 +652,7 @@ function showRemoveSubtaskHelp() {
|
||||
' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
|
||||
' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' +
|
||||
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
|
||||
' --generate Regenerate task files after removing subtask\n\n' +
|
||||
' --skip-generate Skip regenerating task files\n\n' +
|
||||
chalk.cyan('Examples:') + '\n' +
|
||||
' task-master remove-subtask --id=\'5.2\'\n' +
|
||||
' task-master remove-subtask --id=\'5.2,6.3,7.1\'\n' +
|
||||
|
||||
@@ -1,200 +1,19 @@
|
||||
---
|
||||
description: Guide for using Taskmaster to manage task-driven development workflows
|
||||
description: Guide for using Task Master to manage task-driven development workflows
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
# Task Master Development Workflow
|
||||
|
||||
# Taskmaster Development Workflow
|
||||
|
||||
This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.
|
||||
|
||||
- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
|
||||
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.
|
||||
|
||||
## The Basic Loop
|
||||
The fundamental development cycle you will facilitate is:
|
||||
1. **`list`**: Show the user what needs to be done.
|
||||
2. **`next`**: Help the user decide what to work on.
|
||||
3. **`show <id>`**: Provide details for a specific task.
|
||||
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
|
||||
5. **Implement**: The user writes the code and tests.
|
||||
6. **`update-subtask`**: Log progress and findings on behalf of the user.
|
||||
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
|
||||
8. **Repeat**.
|
||||
|
||||
All your standard command executions should operate on the user's current task context, which defaults to `master`.
|
||||
|
||||
---
|
||||
|
||||
## Standard Development Workflow Process
|
||||
|
||||
### Simple Workflow (Default Starting Point)
|
||||
|
||||
For new projects or when users are getting started, operate within the `master` tag context:
|
||||
|
||||
- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json with tagged structure
|
||||
- Configure rule sets during initialization with `--rules` flag (e.g., `task-master init --rules cursor,windsurf`) or manage them later with `task-master rules add/remove` commands
|
||||
- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs
|
||||
- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
|
||||
- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
|
||||
- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements
|
||||
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
|
||||
- Implement code following task details, dependencies, and project standards
|
||||
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
||||
|
||||
---
|
||||
|
||||
## Leveling Up: Agent-Led Multi-Context Workflows
|
||||
|
||||
While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session.
|
||||
|
||||
**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.
|
||||
|
||||
### When to Introduce Tags: Your Decision Patterns
|
||||
|
||||
Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user.
|
||||
|
||||
#### Pattern 1: Simple Git Feature Branching
|
||||
This is the most common and direct use case for tags.
|
||||
|
||||
- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
|
||||
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
|
||||
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
|
||||
- **Tool to Use**: `task-master add-tag --from-branch`
|
||||
|
||||
#### Pattern 2: Team Collaboration

- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with the shared master context.
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`

#### Pattern 3: Experiments or Risky Refactors

- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`

#### Pattern 4: Large Feature Initiatives (PRD-Driven)

This is a more structured approach for significant new features or epics.

- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
- **Your Implementation Flow**:
  1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
  2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
  3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
  4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag (see the end-to-end sketch below).
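A minimal end-to-end sketch of this flow with the hypothetical `feature-xyz` tag; the `--tag` flags on the preparation commands and the `--all` form of `expand` are assumptions based on the steps above, not commands shown verbatim in this guide:

```bash
# 1. Create the isolated context for the feature
task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"

# 2. After drafting the PRD together, parse it into the new tag
task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz

# 3. Prepare the new task list (flag forms assumed from the steps above)
task-master analyze-complexity --tag=feature-xyz --research
task-master expand --all --tag=feature-xyz --research
```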
#### Pattern 5: Version-Based Development

Tailor your approach based on the project maturity indicated by tag names.

- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
  - **Your Approach**: Focus on speed and functionality over perfection
  - **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
  - **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
  - **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
  - **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*

- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
  - **Your Approach**: Emphasize robustness, testing, and maintainability
  - **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
  - **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
  - **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
  - **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*

### Advanced Workflow (Tag-Based & PRD-Driven)

**When to Transition**: Recognize when the project has evolved beyond simple task management (or when Taskmaster has been initialized on a project with existing code). Look for these indicators:

- User mentions teammates or collaboration needs
- Project has grown to 15+ tasks with mixed priorities
- User creates feature branches or mentions major initiatives
- User initializes Taskmaster on an existing, complex codebase
- User describes large features that would benefit from dedicated planning

**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.

#### Master List Strategy (High-Value Focus)

Once you transition to tag-based workflows, the `master` tag should ideally contain only:

- **High-level deliverables** that provide significant business value
- **Major milestones** and epic-level features
- **Critical infrastructure** work that affects the entire project
- **Release-blocking** items

**What NOT to put in master**:

- Detailed implementation subtasks (these go under parent tasks in feature-specific tags)
- Refactoring work (create dedicated tags like `refactor-auth`)
- Experimental features (use `experiment-*` tags)
- Team member-specific tasks (use person-specific tags)
#### PRD-Driven Feature Development

**For New Major Features**:

1. **Identify the Initiative**: When the user describes a significant feature
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
3. **Collaborative PRD Creation**: Work with the user to create a comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
4. **Parse & Prepare**:
   - `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
   - `analyze_project_complexity --tag=feature-[name] --research`
   - `expand_all --tag=feature-[name] --research`
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag

**For Existing Codebase Analysis**:

When users initialize Taskmaster on existing projects:

1. **Codebase Discovery**: Use your native tools to build deep context about the codebase. You may use the `research` tool with `--tree` and `--files` to collect up-to-date information using the existing architecture as context.
2. **Collaborative Assessment**: Work with the user to identify improvement areas, technical debt, or new features
3. **Strategic PRD Creation**: Co-author PRDs that include:
   - Current state analysis (based on your codebase research)
   - Proposed improvements or new features
   - Implementation strategy considering existing code
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master

The `--append` flag of `parse-prd` lets the user parse multiple PRDs within a tag or across tags (see the example below). PRDs should stay focused, and the number of tasks they are parsed into should be chosen strategically relative to each PRD's complexity and level of detail.
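A one-line sketch of appending a follow-up PRD into an existing tag; the second PRD path is hypothetical, and combining `--append` with `--tag` in a single invocation is an assumption based on the description above:

```bash
# Append tasks from a second, focused PRD into the same feature tag
task-master parse-prd .taskmaster/docs/feature-xyz-phase2-prd.txt --tag feature-xyz --append
```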
### Workflow Transition Examples

**Example 1: Simple → Team-Based**

```
User: "Alice is going to help with the API work"
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
```

**Example 2: Simple → PRD-Driven**

```
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
Actions:
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
2. Collaborate on PRD creation
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
4. Add high-level "User Dashboard" task to master
```

**Example 3: Existing Project → Strategic Planning**

```
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
Actions:
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
2. Collaborate on improvement PRD based on findings
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
4. Keep only major improvement initiatives in master
```
---

This guide outlines the typical process for using Task Master to manage software development projects.

## Primary Interaction: MCP Server vs. CLI

Taskmaster offers two primary ways to interact:

1. **MCP Server (Recommended for Integrated Tools)**:
   - For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**.
   - The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
   - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
   - Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details on the MCP architecture and available tools.
   - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).

@@ -209,15 +28,62 @@ Taskmaster offers two primary ways to interact:

   - Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a detailed command reference.
   - **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration.
## Tagged Task Lists System (For Your Reference)

Task Master now supports **tagged task lists** for multi-context task management:

- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0".
- **Silent Migration**: The first time you run any Task Master command, your existing tasks.json is automatically migrated to a "master" tag with zero disruption, and you'll see a friendly FYI notice explaining the new system.
- **Backward Compatibility**: All existing commands continue to work exactly as before.
- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag.
- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`.
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a full command list.

**Migration Example**:
```json
// Before (legacy format)
{
  "tasks": [
    { "id": 1, "title": "Setup API", ... }
  ]
}

// After (tagged format - automatic)
{
  "master": {
    "tasks": [
      { "id": 1, "title": "Setup API", ... }
    ]
  }
}
```

**Tag Management**: CLI commands for tag management (`add-tag`, `use-tag`, `list-tags`, `delete-tag`, `rename-tag`, `copy-tag`) are now available, with manual git integration via the `--from-branch` option (see the examples below).
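A quick sketch of those commands in sequence; the tag names are illustrative:

```bash
# Create a tag, optionally seeding it from the current context
task-master add-tag feature-payments --copy-from-current --description="Payment feature work"

# Inspect and switch contexts
task-master list-tags
task-master use-tag feature-payments

# Housekeeping when a context is renamed or finished
task-master rename-tag feature-payments feature-billing
task-master delete-tag feature-billing
```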
## Standard Development Workflow Process

- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json with tagged structure
- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- Clarify tasks by checking task files in tasks/ directory or asking for user input
- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
- Clear existing subtasks if needed using `clear_subtasks` / `task-master clear-subtasks --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before regenerating
- Implement code following task details, dependencies, and project standards
- Verify tasks according to test strategies before marking as complete (see [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..." --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Add new subtasks as needed using `add_subtask` / `task-master add-subtask --parent=<id> --title="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Append notes or details to subtasks using `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='Add implementation notes here...\nMore details...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Generate task files with `generate` / `task-master generate` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) after updating tasks.json
- Maintain valid dependency structure with `add_dependency`/`remove_dependency` tools or `task-master add-dependency`/`remove-dependency` commands, `validate_dependencies` / `task-master validate-dependencies`, and `fix_dependencies` / `task-master fix-dependencies` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) when needed
- Respect dependency chains and task priorities when selecting work
- Report progress regularly using `get_tasks` / `task-master list`
- Reorganize tasks as needed using `move_task` / `task-master move --from=<id> --to=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to change task hierarchy or ordering
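A typical coding-session sketch using the CLI forms above; the task IDs are illustrative:

```bash
# Orient yourself, then pick the next available task
task-master list
task-master next
task-master show 15

# Break the task down, work through it, and log progress on a subtask
task-master expand --id=15 --force --research
task-master update-subtask --id=15.2 --prompt='Implemented the parser; edge cases still failing...'

# Close it out once the test strategy passes
task-master set-status --id=15 --status=done
```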
## Task Complexity Analysis

@@ -295,17 +161,6 @@ Taskmaster configuration is managed through two main mechanisms:

**If AI commands FAIL in MCP**, verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.
**If AI commands FAIL in CLI**, verify that the API key for the selected provider is present in the `.env` file in the root of the project.

## Rules Management

Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:

- **Available Profiles**: Claude Code, Cline, Codex, Cursor, Roo Code, Trae, Windsurf (claude, cline, codex, cursor, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules cursor,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.cursor/rules`, `.roo/rules`) with appropriate configuration files
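A short sketch of managing rule profiles from the CLI using the commands listed above; the profile names are examples:

```bash
# Pick rule sets at project initialization
task-master init --rules cursor,windsurf

# Adjust the installed profiles later
task-master rules add roo,cline
task-master rules remove windsurf

# Or choose interactively
task-master rules setup
```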
## Determining the Next Task

- Run `next_task` / `task-master next` to show the next task to work on.
@@ -26,7 +26,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
|
||||
* `--description <text>`: `Provide a brief description for your project.`
|
||||
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
|
||||
* `--no-git`: `Skip initializing a Git repository entirely.`
|
||||
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
|
||||
* **Usage:** Run this once at the beginning of a new project.
|
||||
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
|
||||
@@ -37,7 +36,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* `authorName`: `Author name.` (CLI: `--author <author>`)
|
||||
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
|
||||
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
|
||||
* `noGit`: `Skip initializing a Git repository entirely. Default is false.` (CLI: `--no-git`)
|
||||
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
|
||||
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
|
||||
* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.
|
||||
@@ -79,7 +77,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* `--set-fallback <model_id>`: `Set the fallback model.`
|
||||
* `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
|
||||
* `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.`
|
||||
* `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).`
|
||||
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
|
||||
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
|
||||
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
|
||||
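A brief sketch of the CLI usage described above; `<model_id>` is a placeholder rather than a real model name:

```bash
# View current role assignments and available models
task-master models

# Guided configuration, including custom Ollama/OpenRouter IDs
task-master models --setup

# Set roles directly; pair a custom ID with its provider flag
task-master models --set-main=<model_id>
task-master models --set-research=<model_id> --openrouter
task-master models --set-fallback=<model_id> --ollama
```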
@@ -111,7 +108,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
|
||||
* **Key Parameters/Options:**
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* **Usage:** Identify what to work on next according to the plan.
|
||||
|
||||
### 5. Get Task Details (`get_task`)
|
||||
@@ -140,7 +136,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
|
||||
* `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
|
||||
* `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
|
||||
* `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Quickly add newly identified tasks during development.
|
||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
||||
@@ -158,8 +153,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
|
||||
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
|
||||
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
|
||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Break down tasks manually or reorganize existing tasks.
|
||||
|
||||
@@ -172,7 +166,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
|
||||
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
|
||||
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
|
||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
||||
@@ -198,13 +191,12 @@ This document provides a detailed reference for interacting with Taskmaster, cov

* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 11. Set Task Status (`set_task_status`)
|
||||
|
||||
@@ -214,7 +206,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
|
||||
* `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Mark progress as tasks move through the development cycle.
|
||||
|
||||
@@ -226,7 +217,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
|
||||
* `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
|
||||
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.
|
||||
@@ -272,9 +262,8 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **CLI Command:** `task-master clear-subtasks [options]`
|
||||
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
|
||||
|
||||
@@ -286,8 +275,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
|
||||
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
|
||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
|
||||
|
||||
@@ -299,7 +287,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **Key Parameters/Options:**
|
||||
* `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
|
||||
* `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
|
||||
* Moving a task to become a subtask
|
||||
@@ -329,7 +316,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
|
||||
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
|
||||
* **Usage:** Establish the correct order of execution between tasks.
|
||||
|
||||
@@ -341,7 +327,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
|
||||
* `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Update task relationships when the order of execution changes.
|
||||
|
||||
@@ -351,7 +336,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **CLI Command:** `task-master validate-dependencies [options]`
|
||||
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Audit the integrity of your task dependencies.
|
||||
|
||||
@@ -389,7 +373,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **CLI Command:** `task-master complexity-report [options]`
|
||||
* **Description:** `Display the task complexity analysis report in a readable format.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
|
||||
|
||||
@@ -461,7 +444,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
||||
* **CLI Command:** `task-master tags [options]`
|
||||
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
|
||||
* **Key Parameters/Options:**
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)
|
||||
|
||||
### 27. Add Tag (`add_tag`)
|
||||
@@ -475,7 +457,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
||||
* `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
|
||||
* `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
|
||||
* `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
|
||||
### 28. Delete Tag (`delete_tag`)
|
||||
|
||||
@@ -485,7 +466,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
||||
* **Key Parameters/Options:**
|
||||
* `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
|
||||
* `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
|
||||
### 29. Use Tag (`use_tag`)
|
||||
|
||||
@@ -494,7 +474,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
||||
* **Description:** `Switch your active task context to a different tag.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
|
||||
### 30. Rename Tag (`rename_tag`)
|
||||
|
||||
@@ -504,7 +483,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
||||
* **Key Parameters/Options:**
|
||||
* `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
|
||||
* `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
|
||||
### 31. Copy Tag (`copy_tag`)
|
||||
|
||||
|
||||
@@ -1,803 +0,0 @@

---
description:
globs:
alwaysApply: true
---

# Test Workflow & Development Process

## **Initial Testing Framework Setup**

Before implementing the TDD workflow, ensure your project has a proper testing framework configured. This section covers setup for different technology stacks.

### **Detecting Project Type & Framework Needs**

**AI Agent Assessment Checklist:**

1. **Language Detection**: Check for `package.json` (Node.js/JavaScript), `requirements.txt` (Python), `Cargo.toml` (Rust), etc.
2. **Existing Tests**: Look for test files (`.test.`, `.spec.`, `_test.`) or test directories
3. **Framework Detection**: Check for existing test runners in dependencies
4. **Project Structure**: Analyze directory structure for testing patterns
### **JavaScript/Node.js Projects (Jest Setup)**
|
||||
|
||||
#### **Prerequisites Check**
|
||||
```bash
|
||||
# Verify Node.js project
|
||||
ls package.json # Should exist
|
||||
|
||||
# Check for existing testing setup
|
||||
ls jest.config.js jest.config.ts # Check for Jest config
|
||||
grep -E "(jest|vitest|mocha)" package.json # Check for test runners
|
||||
```
|
||||
|
||||
#### **Jest Installation & Configuration**
|
||||
|
||||
**Step 1: Install Dependencies**
|
||||
```bash
|
||||
# Core Jest dependencies
|
||||
npm install --save-dev jest
|
||||
|
||||
# TypeScript support (if using TypeScript)
|
||||
npm install --save-dev ts-jest @types/jest
|
||||
|
||||
# Additional useful packages
|
||||
npm install --save-dev supertest @types/supertest # For API testing
|
||||
npm install --save-dev jest-watch-typeahead # Enhanced watch mode
|
||||
```
|
||||
|
||||
**Step 2: Create Jest Configuration**
|
||||
|
||||
Create `jest.config.js` with the following production-ready configuration:
|
||||
|
||||
```javascript
|
||||
/** @type {import('jest').Config} */
|
||||
module.exports = {
|
||||
// Use ts-jest preset for TypeScript support
|
||||
preset: 'ts-jest',
|
||||
|
||||
// Test environment
|
||||
testEnvironment: 'node',
|
||||
|
||||
// Roots for test discovery
|
||||
roots: ['<rootDir>/src', '<rootDir>/tests'],
|
||||
|
||||
// Test file patterns
|
||||
testMatch: ['**/__tests__/**/*.ts', '**/?(*.)+(spec|test).ts'],
|
||||
|
||||
// Transform files
|
||||
transform: {
|
||||
'^.+\\.ts$': [
|
||||
'ts-jest',
|
||||
{
|
||||
tsconfig: {
|
||||
target: 'es2020',
|
||||
module: 'commonjs',
|
||||
esModuleInterop: true,
|
||||
allowSyntheticDefaultImports: true,
|
||||
skipLibCheck: true,
|
||||
strict: false,
|
||||
noImplicitAny: false,
|
||||
},
|
||||
},
|
||||
],
|
||||
'^.+\\.js$': [
|
||||
'ts-jest',
|
||||
{
|
||||
useESM: false,
|
||||
tsconfig: {
|
||||
target: 'es2020',
|
||||
module: 'commonjs',
|
||||
esModuleInterop: true,
|
||||
allowSyntheticDefaultImports: true,
|
||||
allowJs: true,
|
||||
},
|
||||
},
|
||||
],
|
||||
},
|
||||
|
||||
// Module file extensions
|
||||
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
|
||||
|
||||
// Transform ignore patterns - adjust for ES modules
|
||||
transformIgnorePatterns: ['node_modules/(?!(your-es-module-deps|.*\\.mjs$))'],
|
||||
|
||||
// Coverage configuration
|
||||
collectCoverage: true,
|
||||
coverageDirectory: 'coverage',
|
||||
coverageReporters: [
|
||||
'text', // Console output
|
||||
'text-summary', // Brief summary
|
||||
'lcov', // For IDE integration
|
||||
'html', // Detailed HTML report
|
||||
],
|
||||
|
||||
// Files to collect coverage from
|
||||
collectCoverageFrom: [
|
||||
'src/**/*.ts',
|
||||
'!src/**/*.d.ts',
|
||||
'!src/**/*.test.ts',
|
||||
'!src/**/index.ts', // Often just exports
|
||||
'!src/generated/**', // Generated code
|
||||
'!src/config/database.ts', // Database config (tested via integration)
|
||||
],
|
||||
|
||||
// Coverage thresholds - TaskMaster standards
|
||||
coverageThreshold: {
|
||||
global: {
|
||||
branches: 70,
|
||||
functions: 80,
|
||||
lines: 80,
|
||||
statements: 80,
|
||||
},
|
||||
// Higher standards for critical business logic
|
||||
'./src/utils/': {
|
||||
branches: 85,
|
||||
functions: 90,
|
||||
lines: 90,
|
||||
statements: 90,
|
||||
},
|
||||
'./src/middleware/': {
|
||||
branches: 80,
|
||||
functions: 85,
|
||||
lines: 85,
|
||||
statements: 85,
|
||||
},
|
||||
},
|
||||
|
||||
// Setup files
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/setup.ts'],
|
||||
|
||||
// Global teardown to prevent worker process leaks
|
||||
globalTeardown: '<rootDir>/tests/teardown.ts',
|
||||
|
||||
// Module path mapping (if needed)
|
||||
moduleNameMapper: {
|
||||
'^@/(.*)$': '<rootDir>/src/$1',
|
||||
},
|
||||
|
||||
// Clear mocks between tests
|
||||
clearMocks: true,
|
||||
|
||||
// Restore mocks after each test
|
||||
restoreMocks: true,
|
||||
|
||||
// Global test timeout
|
||||
testTimeout: 10000,
|
||||
|
||||
// Projects for different test types
|
||||
projects: [
|
||||
// Unit tests - for pure functions only
|
||||
{
|
||||
displayName: 'unit',
|
||||
testMatch: ['<rootDir>/src/**/*.test.ts'],
|
||||
testPathIgnorePatterns: ['.*\\.integration\\.test\\.ts$', '/tests/'],
|
||||
preset: 'ts-jest',
|
||||
testEnvironment: 'node',
|
||||
collectCoverageFrom: [
|
||||
'src/**/*.ts',
|
||||
'!src/**/*.d.ts',
|
||||
'!src/**/*.test.ts',
|
||||
'!src/**/*.integration.test.ts',
|
||||
],
|
||||
coverageThreshold: {
|
||||
global: {
|
||||
branches: 70,
|
||||
functions: 80,
|
||||
lines: 80,
|
||||
statements: 80,
|
||||
},
|
||||
},
|
||||
},
|
||||
// Integration tests - real database/services
|
||||
{
|
||||
displayName: 'integration',
|
||||
testMatch: [
|
||||
'<rootDir>/src/**/*.integration.test.ts',
|
||||
'<rootDir>/tests/integration/**/*.test.ts',
|
||||
],
|
||||
preset: 'ts-jest',
|
||||
testEnvironment: 'node',
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/setup/integration.ts'],
|
||||
testTimeout: 10000,
|
||||
},
|
||||
// E2E tests - full workflows
|
||||
{
|
||||
displayName: 'e2e',
|
||||
testMatch: ['<rootDir>/tests/e2e/**/*.test.ts'],
|
||||
preset: 'ts-jest',
|
||||
testEnvironment: 'node',
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/setup/e2e.ts'],
|
||||
testTimeout: 30000,
|
||||
},
|
||||
],
|
||||
|
||||
// Verbose output for better debugging
|
||||
verbose: true,
|
||||
|
||||
// Run projects sequentially to avoid conflicts
|
||||
maxWorkers: 1,
|
||||
|
||||
// Enable watch mode plugins
|
||||
watchPlugins: ['jest-watch-typeahead/filename', 'jest-watch-typeahead/testname'],
|
||||
};
|
||||
```
|
||||
|
||||
**Step 3: Update package.json Scripts**
|
||||
|
||||
Add these scripts to your `package.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"scripts": {
|
||||
"test": "jest",
|
||||
"test:watch": "jest --watch",
|
||||
"test:coverage": "jest --coverage",
|
||||
"test:unit": "jest --selectProjects unit",
|
||||
"test:integration": "jest --selectProjects integration",
|
||||
"test:e2e": "jest --selectProjects e2e",
|
||||
"test:ci": "jest --ci --coverage --watchAll=false"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Step 4: Create Test Setup Files**
|
||||
|
||||
Create essential test setup files:
|
||||
|
||||
```typescript
|
||||
// tests/setup.ts - Global setup
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Global test configuration
|
||||
beforeAll(() => {
|
||||
// Set test timeout
|
||||
jest.setTimeout(10000);
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up mocks after each test
|
||||
jest.clearAllMocks();
|
||||
});
|
||||
```
|
||||
|
||||
```typescript
|
||||
// tests/setup/integration.ts - Integration test setup
|
||||
import { PrismaClient } from '@prisma/client';
|
||||
|
||||
const prisma = new PrismaClient();
|
||||
|
||||
beforeAll(async () => {
|
||||
// Connect to test database
|
||||
await prisma.$connect();
|
||||
});
|
||||
|
||||
afterAll(async () => {
|
||||
// Cleanup and disconnect
|
||||
await prisma.$disconnect();
|
||||
});
|
||||
|
||||
beforeEach(async () => {
|
||||
// Clean test data before each test
|
||||
// Add your cleanup logic here
|
||||
});
|
||||
```
|
||||
|
||||
```typescript
|
||||
// tests/teardown.ts - Global teardown
|
||||
export default async () => {
|
||||
// Global cleanup after all tests
|
||||
console.log('Global test teardown complete');
|
||||
};
|
||||
```
|
||||
|
||||
**Step 5: Create Initial Test Structure**

```bash
# Create test directories (fixtures are included in the first command)
mkdir -p tests/{setup,fixtures,unit,integration,e2e}
mkdir -p tests/unit/src/{utils,services,middleware}
```
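To confirm the wiring before starting real work, a hypothetical smoke test can be dropped into `src/utils/`; both file names below are illustrative and can be deleted once real tests exist:

```typescript
// src/utils/sum.ts - hypothetical helper used only to verify the setup
export const sum = (a: number, b: number): number => a + b;
```

```typescript
// src/utils/sum.test.ts - hypothetical first unit test
import { sum } from './sum';

describe('sum', () => {
  it('adds two numbers', () => {
    expect(sum(2, 3)).toBe(5);
  });
});
```

Running `npm run test:unit` should pick it up via the unit project's `testMatch` pattern.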
|
||||
|
||||
### **Generic Testing Framework Setup (Any Language)**
|
||||
|
||||
#### **Framework Selection Guide**
|
||||
|
||||
**Python Projects:**
|
||||
- **pytest**: Recommended for most Python projects
|
||||
- **unittest**: Built-in, suitable for simple projects
|
||||
- **Coverage**: Use `coverage.py` for code coverage
|
||||
|
||||
```bash
|
||||
# Python setup example
|
||||
pip install pytest pytest-cov
|
||||
echo "[tool:pytest]" > pytest.ini
|
||||
echo "testpaths = tests" >> pytest.ini
|
||||
echo "addopts = --cov=src --cov-report=html --cov-report=term" >> pytest.ini
|
||||
```
|
||||
|
||||
**Go Projects:**
|
||||
- **Built-in testing**: Use Go's built-in `testing` package
|
||||
- **Coverage**: Built-in with `go test -cover`
|
||||
|
||||
```bash
|
||||
# Go setup example
|
||||
go mod init your-project
|
||||
mkdir -p tests
|
||||
# Tests are typically *_test.go files alongside source
|
||||
```
|
||||
|
||||
**Rust Projects:**
|
||||
- **Built-in testing**: Use Rust's built-in test framework
|
||||
- **cargo-tarpaulin**: For coverage analysis
|
||||
|
||||
```bash
|
||||
# Rust setup example
|
||||
cargo new your-project
|
||||
cd your-project
|
||||
cargo install cargo-tarpaulin # For coverage
|
||||
```
|
||||
|
||||
**Java Projects:**
|
||||
- **JUnit 5**: Modern testing framework
|
||||
- **Maven/Gradle**: Build tools with testing integration
|
||||
|
||||
```xml
|
||||
<!-- Maven pom.xml example -->
|
||||
<dependency>
|
||||
<groupId>org.junit.jupiter</groupId>
|
||||
<artifactId>junit-jupiter</artifactId>
|
||||
<version>5.9.2</version>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
```
|
||||
|
||||
#### **Universal Testing Principles**
|
||||
|
||||
**Coverage Standards (Adapt to Your Language):**
|
||||
- **Global Minimum**: 70-80% line coverage
|
||||
- **Critical Code**: 85-90% coverage
|
||||
- **New Features**: Must meet or exceed standards
|
||||
- **Legacy Code**: Gradual improvement strategy
|
||||
|
||||
**Test Organization:**
|
||||
- **Unit Tests**: Fast, isolated, no external dependencies
|
||||
- **Integration Tests**: Test component interactions
|
||||
- **E2E Tests**: Test complete user workflows
|
||||
- **Performance Tests**: Load and stress testing (if applicable)
|
||||
|
||||
**Naming Conventions:**
|
||||
- **Test Files**: `*.test.*`, `*_test.*`, or language-specific patterns
|
||||
- **Test Functions**: Descriptive names (e.g., `should_return_error_for_invalid_input`)
|
||||
- **Test Directories**: Organized by test type and mirroring source structure
|
||||
|
||||
#### **TaskMaster Integration for Any Framework**
|
||||
|
||||
**Document Testing Setup in Subtasks:**
|
||||
```bash
|
||||
# Update subtask with testing framework setup
|
||||
task-master update-subtask --id=X.Y --prompt="Testing framework setup:
|
||||
- Installed [Framework Name] with coverage support
|
||||
- Configured [Coverage Tool] with thresholds: 80% lines, 70% branches
|
||||
- Created test directory structure: unit/, integration/, e2e/
|
||||
- Added test scripts to build configuration
|
||||
- All setup tests passing"
|
||||
```
|
||||
|
||||
**Testing Framework Verification:**
|
||||
```bash
|
||||
# Verify setup works
|
||||
[test-command] # e.g., npm test, pytest, go test, cargo test
|
||||
|
||||
# Check coverage reporting
|
||||
[coverage-command] # e.g., npm run test:coverage
|
||||
|
||||
# Update task with verification
|
||||
task-master update-subtask --id=X.Y --prompt="Testing framework verified:
|
||||
- Sample tests running successfully
|
||||
- Coverage reporting functional
|
||||
- CI/CD integration ready
|
||||
- Ready to begin TDD workflow"
|
||||
```
|
||||
|
||||
## **Test-Driven Development (TDD) Integration**
|
||||
|
||||
### **Core TDD Cycle with Jest**
|
||||
```bash
|
||||
# 1. Start development with watch mode
|
||||
npm run test:watch
|
||||
|
||||
# 2. Write failing test first
|
||||
# Create test file: src/utils/newFeature.test.ts
|
||||
# Write test that describes expected behavior
|
||||
|
||||
# 3. Implement minimum code to make test pass
|
||||
# 4. Refactor while keeping tests green
|
||||
# 5. Add edge cases and error scenarios
|
||||
```
|
||||
|
||||
### **TDD Workflow Per Subtask**
|
||||
```bash
|
||||
# When starting a new subtask:
|
||||
task-master set-status --id=4.1 --status=in-progress
|
||||
|
||||
# Begin TDD cycle:
|
||||
npm run test:watch # Keep running during development
|
||||
|
||||
# Document TDD progress in subtask:
|
||||
task-master update-subtask --id=4.1 --prompt="TDD Progress:
|
||||
- Written 3 failing tests for core functionality
|
||||
- Implemented basic feature, tests now passing
|
||||
- Adding edge case tests for error handling"
|
||||
|
||||
# Complete subtask with test summary:
|
||||
task-master update-subtask --id=4.1 --prompt="Implementation complete:
|
||||
- Feature implemented with 8 unit tests
|
||||
- Coverage: 95% statements, 88% branches
|
||||
- All tests passing, TDD cycle complete"
|
||||
```
|
||||
|
||||
## **Testing Commands & Usage**
|
||||
|
||||
### **Development Commands**
|
||||
```bash
|
||||
# Primary development command - use during coding
|
||||
npm run test:watch # Watch mode with Jest
|
||||
npm run test:watch -- --testNamePattern="auth" # Watch specific tests
|
||||
|
||||
# Targeted testing during development
|
||||
npm run test:unit # Run only unit tests
|
||||
npm run test:unit -- --coverage # Unit tests with coverage
|
||||
|
||||
# Integration testing when APIs are ready
|
||||
npm run test:integration # Run integration tests
|
||||
npm run test:integration -- --detectOpenHandles # Debug hanging tests
|
||||
|
||||
# End-to-end testing for workflows
|
||||
npm run test:e2e # Run E2E tests
|
||||
npm run test:e2e -- --timeout=30000 # Extended timeout for E2E
|
||||
```
|
||||
|
||||
### **Quality Assurance Commands**
|
||||
```bash
|
||||
# Full test suite with coverage (before commits)
|
||||
npm run test:coverage # Complete coverage analysis
|
||||
|
||||
# All tests (CI/CD pipeline)
|
||||
npm test # Run all test projects
|
||||
|
||||
# Specific test file execution
|
||||
npm test -- auth.test.ts # Run specific test file
|
||||
npm test -- --testNamePattern="should handle errors" # Run specific tests
|
||||
```
|
||||
|
||||
## **Test Implementation Patterns**
|
||||
|
||||
### **Unit Test Development**
|
||||
```typescript
|
||||
// ✅ DO: Follow established patterns from auth.test.ts
|
||||
describe('FeatureName', () => {
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Setup mocks with proper typing
|
||||
});
|
||||
|
||||
describe('functionName', () => {
|
||||
it('should handle normal case', () => {
|
||||
// Test implementation with specific assertions
|
||||
});
|
||||
|
||||
it('should throw error for invalid input', async () => {
|
||||
// Error scenario testing
|
||||
await expect(functionName(invalidInput))
|
||||
.rejects.toThrow('Specific error message');
|
||||
});
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
### **Integration Test Development**
|
||||
```typescript
|
||||
// ✅ DO: Use supertest for API endpoint testing
|
||||
import request from 'supertest';
|
||||
import { app } from '../../src/app';
|
||||
|
||||
describe('POST /api/auth/register', () => {
|
||||
beforeEach(async () => {
|
||||
await integrationTestUtils.cleanupTestData();
|
||||
});
|
||||
|
||||
it('should register user successfully', async () => {
|
||||
const userData = createTestUser();
|
||||
|
||||
const response = await request(app)
|
||||
.post('/api/auth/register')
|
||||
.send(userData)
|
||||
.expect(201);
|
||||
|
||||
expect(response.body).toMatchObject({
|
||||
id: expect.any(String),
|
||||
email: userData.email
|
||||
});
|
||||
|
||||
// Verify database state
|
||||
const user = await prisma.user.findUnique({
|
||||
where: { email: userData.email }
|
||||
});
|
||||
expect(user).toBeTruthy();
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
### **E2E Test Development**
|
||||
```typescript
|
||||
// ✅ DO: Test complete user workflows
|
||||
describe('User Authentication Flow', () => {
|
||||
it('should complete registration → login → protected access', async () => {
|
||||
// Step 1: Register
|
||||
const userData = createTestUser();
|
||||
await request(app)
|
||||
.post('/api/auth/register')
|
||||
.send(userData)
|
||||
.expect(201);
|
||||
|
||||
// Step 2: Login
|
||||
const loginResponse = await request(app)
|
||||
.post('/api/auth/login')
|
||||
.send({ email: userData.email, password: userData.password })
|
||||
.expect(200);
|
||||
|
||||
const { token } = loginResponse.body;
|
||||
|
||||
// Step 3: Access protected resource
|
||||
await request(app)
|
||||
.get('/api/profile')
|
||||
.set('Authorization', `Bearer ${token}`)
|
||||
.expect(200);
|
||||
}, 30000); // Extended timeout for E2E
|
||||
});
|
||||
```
|
||||
|
||||
## **Mocking & Test Utilities**
|
||||
|
||||
### **Established Mocking Patterns**
|
||||
```typescript
|
||||
// ✅ DO: Use established bcrypt mocking pattern
|
||||
jest.mock('bcrypt');
|
||||
import bcrypt from 'bcrypt';
|
||||
const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
|
||||
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;
|
||||
|
||||
// ✅ DO: Use Prisma mocking for unit tests
|
||||
jest.mock('@prisma/client', () => ({
|
||||
PrismaClient: jest.fn().mockImplementation(() => ({
|
||||
user: {
|
||||
create: jest.fn(),
|
||||
findUnique: jest.fn(),
|
||||
},
|
||||
$connect: jest.fn(),
|
||||
$disconnect: jest.fn(),
|
||||
})),
|
||||
}));
|
||||
```
|
||||
|
||||
### **Test Fixtures Usage**
|
||||
```typescript
|
||||
// ✅ DO: Use centralized test fixtures
|
||||
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';
|
||||
|
||||
describe('User Service', () => {
|
||||
it('should handle admin user creation', async () => {
|
||||
const userData = createTestUser(adminUser);
|
||||
// Test implementation
|
||||
});
|
||||
|
||||
it('should reject invalid user data', async () => {
|
||||
const userData = createTestUser(invalidUser);
|
||||
// Error testing
|
||||
});
|
||||
});
|
||||
```
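The fixtures module imported above is not shown in this rule; a minimal sketch of what `tests/fixtures/users.ts` might look like, assuming the email/password shape and named presets used in these examples:

```typescript
// tests/fixtures/users.ts - hypothetical sketch of the shared fixtures
export interface TestUserData {
  email: string;
  password: string;
  role?: 'user' | 'admin';
}

// Named presets referenced in the examples above
export const adminUser: Partial<TestUserData> = { role: 'admin' };
export const invalidUser: Partial<TestUserData> = { email: 'not-an-email', password: '' };

// Build a unique, valid user per test and apply any overrides last
export const createTestUser = (overrides: Partial<TestUserData> = {}): TestUserData => ({
  email: `user-${Date.now()}-${Math.floor(Math.random() * 1000)}@example.com`,
  password: 'Str0ng!Passw0rd',
  role: 'user',
  ...overrides,
});
```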
|
||||
|
||||
## **Coverage Standards & Monitoring**

### **Coverage Thresholds**
- **Global Standards**: 80% lines/functions, 70% branches
- **Critical Code**: 90% for utils, 85% for middleware
- **New Features**: Must meet or exceed global thresholds
- **Legacy Code**: Gradual improvement with each change

These thresholds are enforced through the Jest coverage configuration; a minimal sketch follows.
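The thresholds above can be enforced mechanically through Jest's `coverageThreshold` option, so `npm run test:coverage` fails when a module drops below its target. The snippet below is a minimal sketch shown in TypeScript config form; the `src/utils/` and `src/middleware/` path keys are assumptions about the layout, and the project's actual settings live in the `jest.config.js` referenced at the end of this document.

```typescript
// jest.config.ts: minimal sketch of per-path coverage thresholds (path keys are assumed).
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 80, functions: 80, branches: 70 },
    './src/utils/': { lines: 90, functions: 90 },
    './src/middleware/': { lines: 85, functions: 85 }
  }
};

export default config;
```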

### **Coverage Reporting & Analysis**
```bash
# Generate coverage reports
npm run test:coverage

# View detailed HTML report
open coverage/lcov-report/index.html

# Coverage files generated:
# - coverage/lcov-report/index.html   # Detailed HTML report
# - coverage/lcov.info                # LCOV format for IDE integration
# - coverage/coverage-final.json      # JSON format for tooling
```

### **Coverage Quality Checks**
```typescript
// ✅ DO: Test all code paths
describe('validateInput', () => {
  it('should return true for valid input', () => {
    expect(validateInput('valid')).toBe(true);
  });

  it('should return false for various invalid inputs', () => {
    expect(validateInput('')).toBe(false);        // Empty string
    expect(validateInput(null)).toBe(false);      // Null value
    expect(validateInput(undefined)).toBe(false); // Undefined
  });

  it('should throw for unexpected input types', () => {
    expect(() => validateInput(123)).toThrow('Invalid input type');
  });
});
```

## **Testing During Development Phases**

### **Feature Development Phase**
```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress

# 2. Begin TDD cycle
npm run test:watch

# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"

# 4. Verify coverage before completion
npm run test:coverage

# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```

### **Integration Testing Phase**
```bash
# After API endpoints are implemented
npm run test:integration

# Update integration test templates
# Replace placeholder tests with real endpoint calls

# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```

### **Pre-Commit Testing Phase**
```bash
# Before committing code
npm run test:coverage      # Verify all tests pass with coverage
npm run test:unit          # Quick unit test verification
npm run test:integration   # Integration test verification (if applicable)

# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y

- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established

Task X: Feature Y - Testing complete"
```

## **Error Handling & Debugging**

### **Test Debugging Techniques**
```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';

it('should debug complex operation', () => {
  testUtils.withConsole(() => {
    // Console output visible only for this test
    console.log('Debug info:', complexData);
    service.complexOperation();
  });
});

// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
  const promise = service.asyncOperation();

  // Test intermediate state
  expect(service.isProcessing()).toBe(true);

  const result = await promise;
  expect(result).toBe('expected');
  expect(service.isProcessing()).toBe(false);
});
```

### **Common Test Issues & Solutions**
```bash
# Hanging tests (common with open database connections)
npm run test:integration -- --detectOpenHandles

# Memory leaks in tests
npm run test:unit -- --logHeapUsage

# Identify slow tests
npm run test:coverage -- --verbose

# Mock not working properly
# Check: jest.mock() is declared at the top of the file, before the imports it replaces
# Check: jest.clearAllMocks() runs in beforeEach
# Check: the TypeScript typing of the mocked function is correct
```

## **Continuous Integration (CI)**

### **CI/CD Pipeline Testing**
```yaml
# Example GitHub Actions integration
- name: Run tests
  run: |
    npm ci
    npm run test:coverage

- name: Upload coverage reports
  uses: codecov/codecov-action@v3
  with:
    file: ./coverage/lcov.info
```

### **Pre-commit Hooks**
```bash
# Set up pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"

# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```

## **Test Maintenance & Evolution**

### **Adding Tests for New Features**
1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module (a skeleton following these steps is sketched below)
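A minimal skeleton that follows steps 1-5 above. The `formatDuration` utility, its path, and its error message are hypothetical placeholders used only for illustration:

```typescript
// tests/unit/format-duration.test.ts: illustrative skeleton only.
import { formatDuration } from '../../src/utils/format-duration'; // hypothetical utility

describe('formatDuration', () => {
  it('should format whole minutes', () => {
    expect(formatDuration(120_000)).toBe('2m');
  });

  it('should throw for negative durations', () => {
    expect(() => formatDuration(-1)).toThrow('Duration must be non-negative');
  });
});
```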

### **Updating Integration/E2E Tests**
1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities (see the cleanup sketch below)
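For step 4, the sketch below shows the kind of cleanup utility that the integration example earlier calls as `integrationTestUtils.cleanupTestData()`. The model names and the `@example.test` email convention are assumptions, not the project's actual schema:

```typescript
// tests/setup/integration-test-utils.ts: hedged sketch of a cleanup utility.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

export const integrationTestUtils = {
  /** Remove data created by integration tests, children before parents. */
  async cleanupTestData(): Promise<void> {
    await prisma.$transaction([
      prisma.session.deleteMany(), // assumed model
      prisma.user.deleteMany({ where: { email: { endsWith: '@example.test' } } })
    ]);
  },

  async disconnect(): Promise<void> {
    await prisma.$disconnect();
  }
};
```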

### **Test Performance Optimization**
- **Parallel execution**: Jest runs test files in parallel by default
- **Test isolation**: Use proper setup/teardown so tests stay independent
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible (see the sketch below)
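One way to apply transaction rollbacks with Prisma is to run a test body inside an interactive transaction and force a rollback at the end, so every test starts from a clean database without explicit deletes. This is a sketch of the pattern under that assumption, not an established project helper:

```typescript
// Transaction-rollback test isolation using Prisma interactive transactions (sketch).
import { PrismaClient, Prisma } from '@prisma/client';

const prisma = new PrismaClient();
const ROLLBACK = Symbol('rollback');

/** Run `fn` against a transactional client, then roll everything back. */
export async function withRollback(
  fn: (tx: Prisma.TransactionClient) => Promise<void>
): Promise<void> {
  try {
    await prisma.$transaction(async (tx) => {
      await fn(tx);
      throw ROLLBACK; // force the transaction to roll back after the test body
    });
  } catch (err) {
    if (err !== ROLLBACK) throw err; // real test failures still propagate
  }
}

// Usage inside a test:
// await withRollback(async (tx) => {
//   const user = await tx.user.create({ data: createTestUser() });
//   expect(user.id).toBeDefined();
// });
```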

---

**Key References:**
- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)
@@ -4,11 +4,9 @@ PERPLEXITY_API_KEY=YOUR_PERPLEXITY_KEY_HERE
OPENAI_API_KEY=YOUR_OPENAI_KEY_HERE
GOOGLE_API_KEY=YOUR_GOOGLE_KEY_HERE
MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE
GROQ_API_KEY=YOUR_GROQ_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE

# Google Vertex AI Configuration
VERTEX_PROJECT_ID=your-gcp-project-id
45 .github/PULL_REQUEST_TEMPLATE.md (vendored)
@@ -1,45 +0,0 @@
|
||||
# What type of PR is this?
|
||||
<!-- Check one -->
|
||||
|
||||
- [ ] 🐛 Bug fix
|
||||
- [ ] ✨ Feature
|
||||
- [ ] 🔌 Integration
|
||||
- [ ] 📝 Docs
|
||||
- [ ] 🧹 Refactor
|
||||
- [ ] Other:
|
||||
## Description
|
||||
<!-- What does this PR do? -->
|
||||
|
||||
## Related Issues
|
||||
<!-- Link issues: Fixes #123 -->
|
||||
|
||||
## How to Test This
|
||||
<!-- Quick steps to verify the changes work -->
|
||||
```bash
|
||||
# Example commands or steps
|
||||
```
|
||||
|
||||
**Expected result:**
|
||||
<!-- What should happen? -->
|
||||
|
||||
## Contributor Checklist
|
||||
|
||||
- [ ] Created changeset: `npm run changeset`
|
||||
- [ ] Tests pass: `npm test`
|
||||
- [ ] Format check passes: `npm run format-check` (or `npm run format` to fix)
|
||||
- [ ] Addressed CodeRabbit comments (if any)
|
||||
- [ ] Linked related issues (if any)
|
||||
- [ ] Manually tested the changes
|
||||
|
||||
## Changelog Entry
|
||||
<!-- One line describing the change for users -->
|
||||
<!-- Example: "Added Kiro IDE integration with automatic task status updates" -->
|
||||
|
||||
---
|
||||
|
||||
### For Maintainers
|
||||
|
||||
- [ ] PR title follows conventional commits
|
||||
- [ ] Target branch correct
|
||||
- [ ] Labels added
|
||||
- [ ] Milestone assigned (if applicable)
|
||||
39 .github/PULL_REQUEST_TEMPLATE/bugfix.md (vendored)
@@ -1,39 +0,0 @@
|
||||
## 🐛 Bug Fix
|
||||
|
||||
### 🔍 Bug Description
|
||||
<!-- Describe the bug -->
|
||||
|
||||
### 🔗 Related Issues
|
||||
<!-- Fixes #123 -->
|
||||
|
||||
### ✨ Solution
|
||||
<!-- How does this PR fix the bug? -->
|
||||
|
||||
## How to Test
|
||||
|
||||
### Steps that caused the bug:
|
||||
1.
|
||||
2.
|
||||
|
||||
**Before fix:**
|
||||
**After fix:**
|
||||
|
||||
### Quick verification:
|
||||
```bash
|
||||
# Commands to verify the fix
|
||||
```
|
||||
|
||||
## Contributor Checklist
|
||||
- [ ] Created changeset: `npm run changeset`
|
||||
- [ ] Tests pass: `npm test`
|
||||
- [ ] Format check passes: `npm run format-check`
|
||||
- [ ] Addressed CodeRabbit comments
|
||||
- [ ] Added unit tests (if applicable)
|
||||
- [ ] Manually verified the fix works
|
||||
|
||||
---
|
||||
|
||||
### For Maintainers
|
||||
- [ ] Root cause identified
|
||||
- [ ] Fix doesn't introduce new issues
|
||||
- [ ] CI passes
|
||||
11 .github/PULL_REQUEST_TEMPLATE/config.yml (vendored)
@@ -1,11 +0,0 @@
|
||||
blank_issues_enabled: false
|
||||
contact_links:
|
||||
- name: 🐛 Bug Fix
|
||||
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=bugfix.md
|
||||
about: Fix a bug in Task Master
|
||||
- name: ✨ New Feature
|
||||
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=feature.md
|
||||
about: Add a new feature to Task Master
|
||||
- name: 🔌 New Integration
|
||||
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=integration.md
|
||||
about: Add support for a new tool, IDE, or platform
|
||||
49 .github/PULL_REQUEST_TEMPLATE/feature.md (vendored)
@@ -1,49 +0,0 @@
|
||||
## ✨ New Feature
|
||||
|
||||
### 📋 Feature Description
|
||||
<!-- Brief description -->
|
||||
|
||||
### 🎯 Problem Statement
|
||||
<!-- What problem does this feature solve? Why is it needed? -->
|
||||
|
||||
### 💡 Solution
|
||||
<!-- How does this feature solve the problem? What's the approach? -->
|
||||
|
||||
### 🔗 Related Issues
|
||||
<!-- Link related issues: Fixes #123, Part of #456 -->
|
||||
|
||||
## How to Use It
|
||||
|
||||
### Quick Start
|
||||
```bash
|
||||
# Basic usage example
|
||||
```
|
||||
|
||||
### Example
|
||||
<!-- Show a real use case -->
|
||||
```bash
|
||||
# Practical example
|
||||
```
|
||||
|
||||
**What you should see:**
|
||||
<!-- Expected behavior -->
|
||||
|
||||
## Contributor Checklist
|
||||
- [ ] Created changeset: `npm run changeset`
|
||||
- [ ] Tests pass: `npm test`
|
||||
- [ ] Format check passes: `npm run format-check`
|
||||
- [ ] Addressed CodeRabbit comments
|
||||
- [ ] Added tests for new functionality
|
||||
- [ ] Manually tested in CLI mode
|
||||
- [ ] Manually tested in MCP mode (if applicable)
|
||||
|
||||
## Changelog Entry
|
||||
<!-- One-liner for release notes -->
|
||||
|
||||
---
|
||||
|
||||
### For Maintainers
|
||||
|
||||
- [ ] Feature aligns with project vision
|
||||
- [ ] CIs pass
|
||||
- [ ] Changeset file exists
|
||||
53 .github/PULL_REQUEST_TEMPLATE/integration.md (vendored)
@@ -1,53 +0,0 @@
|
||||
# 🔌 New Integration
|
||||
|
||||
## What tool/IDE is being integrated?
|
||||
|
||||
<!-- Name and brief description -->
|
||||
|
||||
## What can users do with it?
|
||||
|
||||
<!-- Key benefits -->
|
||||
|
||||
## How to Enable
|
||||
|
||||
### Setup
|
||||
|
||||
```bash
|
||||
task-master rules add [name]
|
||||
# Any other setup steps
|
||||
```
|
||||
|
||||
### Example Usage
|
||||
|
||||
<!-- Show it in action -->
|
||||
|
||||
```bash
|
||||
# Real example
|
||||
```
|
||||
|
||||
### Natural Language Hooks (if applicable)
|
||||
|
||||
```
|
||||
"When tests pass, mark task as done"
|
||||
# Other examples
|
||||
```
|
||||
|
||||
## Contributor Checklist
|
||||
|
||||
- [ ] Created changeset: `npm run changeset`
|
||||
- [ ] Tests pass: `npm test`
|
||||
- [ ] Format check passes: `npm run format-check`
|
||||
- [ ] Addressed CodeRabbit comments
|
||||
- [ ] Integration fully tested with target tool/IDE
|
||||
- [ ] Error scenarios tested
|
||||
- [ ] Added integration tests
|
||||
- [ ] Documentation includes setup guide
|
||||
- [ ] Examples are working and clear
|
||||
|
||||
---
|
||||
|
||||
## For Maintainers
|
||||
|
||||
- [ ] Integration stability verified
|
||||
- [ ] Documentation comprehensive
|
||||
- [ ] Examples working
|
||||
259 .github/scripts/auto-close-duplicates.mjs (vendored)
@@ -1,259 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
async function githubRequest(endpoint, token, method = 'GET', body) {
|
||||
const response = await fetch(`https://api.github.com${endpoint}`, {
|
||||
method,
|
||||
headers: {
|
||||
Authorization: `Bearer ${token}`,
|
||||
Accept: 'application/vnd.github.v3+json',
|
||||
'User-Agent': 'auto-close-duplicates-script',
|
||||
...(body && { 'Content-Type': 'application/json' })
|
||||
},
|
||||
...(body && { body: JSON.stringify(body) })
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(
|
||||
`GitHub API request failed: ${response.status} ${response.statusText}`
|
||||
);
|
||||
}
|
||||
|
||||
return response.json();
|
||||
}
|
||||
|
||||
function extractDuplicateIssueNumber(commentBody) {
|
||||
const match = commentBody.match(/#(\d+)/);
|
||||
return match ? parseInt(match[1], 10) : null;
|
||||
}
|
||||
|
||||
async function closeIssueAsDuplicate(
|
||||
owner,
|
||||
repo,
|
||||
issueNumber,
|
||||
duplicateOfNumber,
|
||||
token
|
||||
) {
|
||||
await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issueNumber}`,
|
||||
token,
|
||||
'PATCH',
|
||||
{
|
||||
state: 'closed',
|
||||
state_reason: 'not_planned',
|
||||
labels: ['duplicate']
|
||||
}
|
||||
);
|
||||
|
||||
await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
|
||||
token,
|
||||
'POST',
|
||||
{
|
||||
body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.
|
||||
|
||||
If this is incorrect, please re-open this issue or create a new one.
|
||||
|
||||
🤖 Generated with [Task Master Bot]`
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
async function autoCloseDuplicates() {
|
||||
console.log('[DEBUG] Starting auto-close duplicates script');
|
||||
|
||||
const token = process.env.GITHUB_TOKEN;
|
||||
if (!token) {
|
||||
throw new Error('GITHUB_TOKEN environment variable is required');
|
||||
}
|
||||
console.log('[DEBUG] GitHub token found');
|
||||
|
||||
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
|
||||
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
|
||||
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
|
||||
|
||||
const threeDaysAgo = new Date();
|
||||
threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
|
||||
console.log(
|
||||
`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
|
||||
);
|
||||
|
||||
console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
|
||||
const allIssues = [];
|
||||
let page = 1;
|
||||
const perPage = 100;
|
||||
|
||||
const MAX_PAGES = 50; // Increase limit for larger repos
|
||||
let foundRecentIssue = false;
|
||||
|
||||
while (true) {
|
||||
const pageIssues = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
|
||||
token
|
||||
);
|
||||
|
||||
if (pageIssues.length === 0) break;
|
||||
|
||||
// Filter for issues created more than 3 days ago
|
||||
const oldEnoughIssues = pageIssues.filter(
|
||||
(issue) => new Date(issue.created_at) <= threeDaysAgo
|
||||
);
|
||||
|
||||
allIssues.push(...oldEnoughIssues);
|
||||
|
||||
// If all issues on this page are newer than 3 days, we can stop
|
||||
if (oldEnoughIssues.length === 0 && page === 1) {
|
||||
foundRecentIssue = true;
|
||||
break;
|
||||
}
|
||||
|
||||
// If we found some old issues but not all, continue to next page
|
||||
// as there might be more old issues
|
||||
page++;
|
||||
|
||||
// Safety limit to avoid infinite loops
|
||||
if (page > MAX_PAGES) {
|
||||
console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
const issues = allIssues;
|
||||
console.log(`[DEBUG] Found ${issues.length} open issues`);
|
||||
|
||||
let processedCount = 0;
|
||||
let candidateCount = 0;
|
||||
|
||||
for (const issue of issues) {
|
||||
processedCount++;
|
||||
console.log(
|
||||
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
|
||||
);
|
||||
|
||||
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
|
||||
const comments = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
|
||||
);
|
||||
|
||||
const dupeComments = comments.filter(
|
||||
(comment) =>
|
||||
comment.body.includes('Found') &&
|
||||
comment.body.includes('possible duplicate') &&
|
||||
comment.user.type === 'Bot'
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
|
||||
);
|
||||
|
||||
if (dupeComments.length === 0) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
const lastDupeComment = dupeComments[dupeComments.length - 1];
|
||||
const dupeCommentDate = new Date(lastDupeComment.created_at);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${
|
||||
issue.number
|
||||
} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
|
||||
);
|
||||
|
||||
if (dupeCommentDate > threeDaysAgo) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
console.log(
|
||||
`[DEBUG] Issue #${
|
||||
issue.number
|
||||
} - duplicate comment is old enough (${Math.floor(
|
||||
(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
|
||||
)} days)`
|
||||
);
|
||||
|
||||
const commentsAfterDupe = comments.filter(
|
||||
(comment) => new Date(comment.created_at) > dupeCommentDate
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
|
||||
);
|
||||
|
||||
if (commentsAfterDupe.length > 0) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
|
||||
);
|
||||
const reactions = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
|
||||
);
|
||||
|
||||
const authorThumbsDown = reactions.some(
|
||||
(reaction) =>
|
||||
reaction.user.id === issue.user.id && reaction.content === '-1'
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
|
||||
);
|
||||
|
||||
if (authorThumbsDown) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
const duplicateIssueNumber = extractDuplicateIssueNumber(
|
||||
lastDupeComment.body
|
||||
);
|
||||
if (!duplicateIssueNumber) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
candidateCount++;
|
||||
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
|
||||
|
||||
try {
|
||||
console.log(
|
||||
`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
|
||||
);
|
||||
await closeIssueAsDuplicate(
|
||||
owner,
|
||||
repo,
|
||||
issue.number,
|
||||
duplicateIssueNumber,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
|
||||
);
|
||||
} catch (error) {
|
||||
console.error(
|
||||
`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
|
||||
);
|
||||
}
|
||||
|
||||
autoCloseDuplicates().catch(console.error);
|
||||
178 .github/scripts/backfill-duplicate-comments.mjs (vendored)
@@ -1,178 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
async function githubRequest(endpoint, token, method = 'GET', body) {
|
||||
const response = await fetch(`https://api.github.com${endpoint}`, {
|
||||
method,
|
||||
headers: {
|
||||
Authorization: `Bearer ${token}`,
|
||||
Accept: 'application/vnd.github.v3+json',
|
||||
'User-Agent': 'backfill-duplicate-comments-script',
|
||||
...(body && { 'Content-Type': 'application/json' })
|
||||
},
|
||||
...(body && { body: JSON.stringify(body) })
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(
|
||||
`GitHub API request failed: ${response.status} ${response.statusText}`
|
||||
);
|
||||
}
|
||||
|
||||
return response.json();
|
||||
}
|
||||
|
||||
async function triggerDedupeWorkflow(
|
||||
owner,
|
||||
repo,
|
||||
issueNumber,
|
||||
token,
|
||||
dryRun = true
|
||||
) {
|
||||
if (dryRun) {
|
||||
console.log(
|
||||
`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
await githubRequest(
|
||||
`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
|
||||
token,
|
||||
'POST',
|
||||
{
|
||||
ref: 'main',
|
||||
inputs: {
|
||||
issue_number: issueNumber.toString()
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
async function backfillDuplicateComments() {
|
||||
console.log('[DEBUG] Starting backfill duplicate comments script');
|
||||
|
||||
const token = process.env.GITHUB_TOKEN;
|
||||
if (!token) {
|
||||
throw new Error(`GITHUB_TOKEN environment variable is required
|
||||
|
||||
Usage:
|
||||
node .github/scripts/backfill-duplicate-comments.mjs
|
||||
|
||||
Environment Variables:
|
||||
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
|
||||
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
|
||||
DAYS_BACK - How many days back to look for old issues (default: 90)`);
|
||||
}
|
||||
console.log('[DEBUG] GitHub token found');
|
||||
|
||||
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
|
||||
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
|
||||
const dryRun = process.env.DRY_RUN !== 'false';
|
||||
const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);
|
||||
|
||||
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
|
||||
console.log(`[DEBUG] Dry run mode: ${dryRun}`);
|
||||
console.log(`[DEBUG] Looking back ${daysBack} days`);
|
||||
|
||||
const cutoffDate = new Date();
|
||||
cutoffDate.setDate(cutoffDate.getDate() - daysBack);
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
|
||||
);
|
||||
const allIssues = [];
|
||||
let page = 1;
|
||||
const perPage = 100;
|
||||
|
||||
while (true) {
|
||||
const pageIssues = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
|
||||
token
|
||||
);
|
||||
|
||||
if (pageIssues.length === 0) break;
|
||||
|
||||
allIssues.push(...pageIssues);
|
||||
page++;
|
||||
|
||||
// Safety limit to avoid infinite loops
|
||||
if (page > 100) {
|
||||
console.log('[DEBUG] Reached page limit, stopping pagination');
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
|
||||
);
|
||||
|
||||
let processedCount = 0;
|
||||
let candidateCount = 0;
|
||||
let triggeredCount = 0;
|
||||
|
||||
for (const issue of allIssues) {
|
||||
processedCount++;
|
||||
console.log(
|
||||
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
|
||||
);
|
||||
|
||||
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
|
||||
const comments = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
|
||||
);
|
||||
|
||||
// Look for existing duplicate detection comments (from the dedupe bot)
|
||||
const dupeDetectionComments = comments.filter(
|
||||
(comment) =>
|
||||
comment.body.includes('Found') &&
|
||||
comment.body.includes('possible duplicate') &&
|
||||
comment.user.type === 'Bot'
|
||||
);
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
|
||||
);
|
||||
|
||||
// Skip if there's already a duplicate detection comment
|
||||
if (dupeDetectionComments.length > 0) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
candidateCount++;
|
||||
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
|
||||
|
||||
try {
|
||||
console.log(
|
||||
`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
|
||||
);
|
||||
await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);
|
||||
|
||||
if (!dryRun) {
|
||||
console.log(
|
||||
`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
|
||||
);
|
||||
}
|
||||
triggeredCount++;
|
||||
} catch (error) {
|
||||
console.error(
|
||||
`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
|
||||
);
|
||||
}
|
||||
|
||||
// Add a delay between workflow triggers to avoid overwhelming the system
|
||||
await new Promise((resolve) => setTimeout(resolve, 1000));
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
|
||||
);
|
||||
}
|
||||
|
||||
backfillDuplicateComments().catch(console.error);
|
||||
102 .github/scripts/check-pre-release-mode.mjs (vendored)
@@ -1,102 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
import { readFileSync, existsSync } from 'node:fs';
|
||||
import { join, dirname, resolve } from 'node:path';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
// Get context from command line argument or environment
|
||||
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';
|
||||
|
||||
function findRootDir(startDir) {
|
||||
let currentDir = resolve(startDir);
|
||||
while (currentDir !== '/') {
|
||||
if (existsSync(join(currentDir, 'package.json'))) {
|
||||
try {
|
||||
const pkg = JSON.parse(
|
||||
readFileSync(join(currentDir, 'package.json'), 'utf8')
|
||||
);
|
||||
if (pkg.name === 'task-master-ai' || pkg.repository) {
|
||||
return currentDir;
|
||||
}
|
||||
} catch {}
|
||||
}
|
||||
currentDir = dirname(currentDir);
|
||||
}
|
||||
throw new Error('Could not find root directory');
|
||||
}
|
||||
|
||||
function checkPreReleaseMode() {
|
||||
console.log('🔍 Checking if branch is in pre-release mode...');
|
||||
|
||||
const rootDir = findRootDir(__dirname);
|
||||
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
|
||||
|
||||
// Check if pre.json exists
|
||||
if (!existsSync(preJsonPath)) {
|
||||
console.log('✅ Not in active pre-release mode - safe to proceed');
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
try {
|
||||
// Read and parse pre.json
|
||||
const preJsonContent = readFileSync(preJsonPath, 'utf8');
|
||||
const preJson = JSON.parse(preJsonContent);
|
||||
|
||||
// Check if we're in active pre-release mode
|
||||
if (preJson.mode === 'pre') {
|
||||
console.error('❌ ERROR: This branch is in active pre-release mode!');
|
||||
console.error('');
|
||||
|
||||
// Provide context-specific error messages
|
||||
if (context === 'Release Check' || context === 'pull_request') {
|
||||
console.error(
|
||||
'Pre-release mode must be exited before merging to main.'
|
||||
);
|
||||
console.error('');
|
||||
console.error(
|
||||
'To fix this, run the following commands in your branch:'
|
||||
);
|
||||
console.error(' npx changeset pre exit');
|
||||
console.error(' git add -u');
|
||||
console.error(' git commit -m "chore: exit pre-release mode"');
|
||||
console.error(' git push');
|
||||
console.error('');
|
||||
console.error('Then update this pull request.');
|
||||
} else if (context === 'Release' || context === 'main') {
|
||||
console.error(
|
||||
'Pre-release mode should only be used on feature branches, not main.'
|
||||
);
|
||||
console.error('');
|
||||
console.error('To fix this, run the following commands locally:');
|
||||
console.error(' npx changeset pre exit');
|
||||
console.error(' git add -u');
|
||||
console.error(' git commit -m "chore: exit pre-release mode"');
|
||||
console.error(' git push origin main');
|
||||
console.error('');
|
||||
console.error('Then re-run this workflow.');
|
||||
} else {
|
||||
console.error('Pre-release mode must be exited before proceeding.');
|
||||
console.error('');
|
||||
console.error('To fix this, run the following commands:');
|
||||
console.error(' npx changeset pre exit');
|
||||
console.error(' git add -u');
|
||||
console.error(' git commit -m "chore: exit pre-release mode"');
|
||||
console.error(' git push');
|
||||
}
|
||||
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
console.log('✅ Not in active pre-release mode - safe to proceed');
|
||||
process.exit(0);
|
||||
} catch (error) {
|
||||
console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
|
||||
console.error(`Error details: ${error.message}`);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
// Run the check
|
||||
checkPreReleaseMode();
|
||||
157 .github/scripts/parse-metrics.mjs (vendored)
@@ -1,157 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
import { readFileSync, existsSync, writeFileSync } from 'fs';
|
||||
|
||||
function parseMetricsTable(content, metricName) {
|
||||
const lines = content.split('\n');
|
||||
|
||||
for (let i = 0; i < lines.length; i++) {
|
||||
const line = lines[i].trim();
|
||||
// Match a markdown table row like: | Metric Name | value | ...
|
||||
const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
|
||||
const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
|
||||
const match = line.match(re);
|
||||
if (match) {
|
||||
return match[1].trim() || 'N/A';
|
||||
}
|
||||
}
|
||||
return 'N/A';
|
||||
}
|
||||
|
||||
function parseCountMetric(content, metricName) {
|
||||
const result = parseMetricsTable(content, metricName);
|
||||
// Extract number from string, handling commas and spaces
|
||||
const numberMatch = result.toString().match(/[\d,]+/);
|
||||
if (numberMatch) {
|
||||
const number = parseInt(numberMatch[0].replace(/,/g, ''));
|
||||
return isNaN(number) ? 0 : number;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
function main() {
|
||||
const metrics = {
|
||||
issues_created: 0,
|
||||
issues_closed: 0,
|
||||
prs_created: 0,
|
||||
prs_merged: 0,
|
||||
issue_avg_first_response: 'N/A',
|
||||
issue_avg_time_to_close: 'N/A',
|
||||
pr_avg_first_response: 'N/A',
|
||||
pr_avg_merge_time: 'N/A'
|
||||
};
|
||||
|
||||
// Parse issue metrics
|
||||
if (existsSync('issue_metrics.md')) {
|
||||
console.log('📄 Found issue_metrics.md, parsing...');
|
||||
const issueContent = readFileSync('issue_metrics.md', 'utf8');
|
||||
|
||||
metrics.issues_created = parseCountMetric(
|
||||
issueContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
metrics.issues_closed = parseCountMetric(
|
||||
issueContent,
|
||||
'Number of items closed'
|
||||
);
|
||||
metrics.issue_avg_first_response = parseMetricsTable(
|
||||
issueContent,
|
||||
'Time to first response'
|
||||
);
|
||||
metrics.issue_avg_time_to_close = parseMetricsTable(
|
||||
issueContent,
|
||||
'Time to close'
|
||||
);
|
||||
} else {
|
||||
console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
|
||||
}
|
||||
|
||||
// Parse PR created metrics
|
||||
if (existsSync('pr_created_metrics.md')) {
|
||||
console.log('📄 Found pr_created_metrics.md, parsing...');
|
||||
const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');
|
||||
|
||||
metrics.prs_created = parseCountMetric(
|
||||
prCreatedContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
metrics.pr_avg_first_response = parseMetricsTable(
|
||||
prCreatedContent,
|
||||
'Time to first response'
|
||||
);
|
||||
} else {
|
||||
console.warn(
|
||||
'[parse-metrics] pr_created_metrics.md not found; using defaults.'
|
||||
);
|
||||
}
|
||||
|
||||
// Parse PR merged metrics (for more accurate merge data)
|
||||
if (existsSync('pr_merged_metrics.md')) {
|
||||
console.log('📄 Found pr_merged_metrics.md, parsing...');
|
||||
const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');
|
||||
|
||||
metrics.prs_merged = parseCountMetric(
|
||||
prMergedContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
// For merged PRs, "Time to close" is actually time to merge
|
||||
metrics.pr_avg_merge_time = parseMetricsTable(
|
||||
prMergedContent,
|
||||
'Time to close'
|
||||
);
|
||||
} else {
|
||||
console.warn(
|
||||
'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
|
||||
);
|
||||
// Fallback: try old pr_metrics.md if it exists
|
||||
if (existsSync('pr_metrics.md')) {
|
||||
console.log('📄 Falling back to pr_metrics.md...');
|
||||
const prContent = readFileSync('pr_metrics.md', 'utf8');
|
||||
|
||||
const mergedCount = parseCountMetric(prContent, 'Number of items merged');
|
||||
metrics.prs_merged =
|
||||
mergedCount || parseCountMetric(prContent, 'Number of items closed');
|
||||
|
||||
const maybeMergeTime = parseMetricsTable(
|
||||
prContent,
|
||||
'Average time to merge'
|
||||
);
|
||||
metrics.pr_avg_merge_time =
|
||||
maybeMergeTime !== 'N/A'
|
||||
? maybeMergeTime
|
||||
: parseMetricsTable(prContent, 'Time to close');
|
||||
} else {
|
||||
console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
|
||||
}
|
||||
}
|
||||
|
||||
// Output for GitHub Actions
|
||||
const output = Object.entries(metrics)
|
||||
.map(([key, value]) => `${key}=${value}`)
|
||||
.join('\n');
|
||||
|
||||
// Always output to stdout for debugging
|
||||
console.log('\n=== FINAL METRICS ===');
|
||||
Object.entries(metrics).forEach(([key, value]) => {
|
||||
console.log(`${key}: ${value}`);
|
||||
});
|
||||
|
||||
// Write to GITHUB_OUTPUT if in GitHub Actions
|
||||
if (process.env.GITHUB_OUTPUT) {
|
||||
try {
|
||||
writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
|
||||
console.log(
|
||||
`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
|
||||
);
|
||||
} catch (error) {
|
||||
console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
|
||||
process.exit(1);
|
||||
}
|
||||
} else {
|
||||
console.log(
|
||||
'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
main();
|
||||
30 .github/scripts/release.mjs (vendored)
@@ -1,30 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
import { existsSync, unlinkSync } from 'node:fs';
|
||||
import { join, dirname } from 'node:path';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
import { findRootDir, runCommand } from './utils.mjs';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
const rootDir = findRootDir(__dirname);
|
||||
|
||||
console.log('🚀 Starting release process...');
|
||||
|
||||
// Double-check we're not in pre-release mode (safety net)
|
||||
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
|
||||
if (existsSync(preJsonPath)) {
|
||||
console.log('⚠️ Warning: pre.json still exists. Removing it...');
|
||||
unlinkSync(preJsonPath);
|
||||
}
|
||||
|
||||
// Check if the extension version has changed and tag it
|
||||
// This prevents changeset from trying to publish the private package
|
||||
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);
|
||||
|
||||
// Run changeset publish for npm packages
|
||||
runCommand('npx', ['changeset', 'publish']);
|
||||
|
||||
console.log('✅ Release process completed!');
|
||||
|
||||
// The extension tag (if created) will trigger the extension-release workflow
|
||||
33 .github/scripts/tag-extension.mjs (vendored)
@@ -1,33 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
import assert from 'node:assert/strict';
|
||||
import { readFileSync } from 'node:fs';
|
||||
import { join, dirname } from 'node:path';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
import { findRootDir, createAndPushTag } from './utils.mjs';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
const rootDir = findRootDir(__dirname);
|
||||
|
||||
// Read the extension's package.json
|
||||
const extensionDir = join(rootDir, 'apps', 'extension');
|
||||
const pkgPath = join(extensionDir, 'package.json');
|
||||
|
||||
let pkg;
|
||||
try {
|
||||
const pkgContent = readFileSync(pkgPath, 'utf8');
|
||||
pkg = JSON.parse(pkgContent);
|
||||
} catch (error) {
|
||||
console.error('Failed to read package.json:', error.message);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Ensure we have required fields
|
||||
assert(pkg.name, 'package.json must have a name field');
|
||||
assert(pkg.version, 'package.json must have a version field');
|
||||
|
||||
const tag = `${pkg.name}@${pkg.version}`;
|
||||
|
||||
// Create and push the tag if it doesn't exist
|
||||
createAndPushTag(tag);
|
||||
88 .github/scripts/utils.mjs (vendored)
@@ -1,88 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
import { spawnSync } from 'node:child_process';
|
||||
import { readFileSync } from 'node:fs';
|
||||
import { join, dirname, resolve } from 'node:path';
|
||||
|
||||
// Find the root directory by looking for package.json with task-master-ai
|
||||
export function findRootDir(startDir) {
|
||||
let currentDir = resolve(startDir);
|
||||
while (currentDir !== '/') {
|
||||
const pkgPath = join(currentDir, 'package.json');
|
||||
try {
|
||||
const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
|
||||
if (pkg.name === 'task-master-ai' || pkg.repository) {
|
||||
return currentDir;
|
||||
}
|
||||
} catch {}
|
||||
currentDir = dirname(currentDir);
|
||||
}
|
||||
throw new Error('Could not find root directory');
|
||||
}
|
||||
|
||||
// Run a command with proper error handling
|
||||
export function runCommand(command, args = [], options = {}) {
|
||||
console.log(`Running: ${command} ${args.join(' ')}`);
|
||||
const result = spawnSync(command, args, {
|
||||
encoding: 'utf8',
|
||||
stdio: 'inherit',
|
||||
...options
|
||||
});
|
||||
|
||||
if (result.status !== 0) {
|
||||
console.error(`Command failed with exit code ${result.status}`);
|
||||
process.exit(result.status);
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
// Get package version from a package.json file
|
||||
export function getPackageVersion(packagePath) {
|
||||
try {
|
||||
const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
|
||||
return pkg.version;
|
||||
} catch (error) {
|
||||
console.error(
|
||||
`Failed to read package version from ${packagePath}:`,
|
||||
error.message
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
// Check if a git tag exists on remote
|
||||
export function tagExistsOnRemote(tag, remote = 'origin') {
|
||||
const result = spawnSync('git', ['ls-remote', remote, tag], {
|
||||
encoding: 'utf8'
|
||||
});
|
||||
|
||||
return result.status === 0 && result.stdout.trim() !== '';
|
||||
}
|
||||
|
||||
// Create and push a git tag if it doesn't exist
|
||||
export function createAndPushTag(tag, remote = 'origin') {
|
||||
// Check if tag already exists
|
||||
if (tagExistsOnRemote(tag, remote)) {
|
||||
console.log(`Tag ${tag} already exists on remote, skipping`);
|
||||
return false;
|
||||
}
|
||||
|
||||
console.log(`Creating new tag: ${tag}`);
|
||||
|
||||
// Create the tag locally
|
||||
const tagResult = spawnSync('git', ['tag', tag]);
|
||||
if (tagResult.status !== 0) {
|
||||
console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Push the tag to remote
|
||||
const pushResult = spawnSync('git', ['push', remote, tag]);
|
||||
if (pushResult.status !== 0) {
|
||||
console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
console.log(`✅ Successfully created and pushed tag: ${tag}`);
|
||||
return true;
|
||||
}
|
||||
31 .github/workflows/auto-close-duplicates.yml (vendored)
@@ -1,31 +0,0 @@
|
||||
name: Auto-close duplicate issues
|
||||
# description: Auto-closes issues that are duplicates of existing issues
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 9 * * *" # Runs daily at 9 AM UTC
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
auto-close-duplicates:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write # Need write permission to close issues and add comments
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Auto-close duplicate issues
|
||||
run: node .github/scripts/auto-close-duplicates.mjs
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
|
||||
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
|
||||
@@ -1,46 +0,0 @@
|
||||
name: Backfill Duplicate Comments
|
||||
# description: Triggers duplicate detection for old issues that don't have duplicate comments
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
days_back:
|
||||
description: "How many days back to look for old issues"
|
||||
required: false
|
||||
default: "90"
|
||||
type: string
|
||||
dry_run:
|
||||
description: "Dry run mode (true to only log what would be done)"
|
||||
required: false
|
||||
default: "true"
|
||||
type: choice
|
||||
options:
|
||||
- "true"
|
||||
- "false"
|
||||
|
||||
jobs:
|
||||
backfill-duplicate-comments:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 30
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
actions: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Backfill duplicate comments
|
||||
run: node .github/scripts/backfill-duplicate-comments.mjs
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
|
||||
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
|
||||
DAYS_BACK: ${{ inputs.days_back }}
|
||||
DRY_RUN: ${{ inputs.dry_run }}
|
||||
123 .github/workflows/ci.yml (vendored)
@@ -9,124 +9,70 @@ on:
|
||||
branches:
|
||||
- main
|
||||
- next
|
||||
workflow_dispatch:
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
|
||||
cancel-in-progress: true
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
env:
|
||||
DO_NOT_TRACK: 1
|
||||
NODE_ENV: development
|
||||
|
||||
jobs:
|
||||
# Fast checks that can run in parallel
|
||||
format-check:
|
||||
name: Format Check
|
||||
setup:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
fetch-depth: 0
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
cache: 'npm'
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
- name: Install Dependencies
|
||||
id: install
|
||||
run: npm ci
|
||||
timeout-minutes: 2
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
format-check:
|
||||
needs: setup
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
- name: Format Check
|
||||
run: npm run format-check
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
typecheck:
|
||||
name: Typecheck
|
||||
timeout-minutes: 10
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Typecheck
|
||||
run: npm run turbo:typecheck
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
# Build job to ensure everything compiles
|
||||
build:
|
||||
name: Build
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Build
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Upload build artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
retention-days: 1
|
||||
|
||||
test:
|
||||
name: Test
|
||||
timeout-minutes: 15
|
||||
needs: setup
|
||||
runs-on: ubuntu-latest
|
||||
needs: [format-check, typecheck, build]
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Download build artifacts
|
||||
uses: actions/download-artifact@v4
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
- name: Run Tests
|
||||
run: |
|
||||
@@ -135,6 +81,7 @@ jobs:
|
||||
NODE_ENV: test
|
||||
CI: true
|
||||
FORCE_COLOR: 1
|
||||
timeout-minutes: 10
|
||||
|
||||
- name: Upload Test Results
|
||||
if: always()
|
||||
|
||||
81 .github/workflows/claude-dedupe-issues.yml (vendored)
@@ -1,81 +0,0 @@
|
||||
name: Claude Issue Dedupe
|
||||
# description: Automatically dedupe GitHub issues using Claude Code
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
issue_number:
|
||||
description: "Issue number to process for duplicate detection"
|
||||
required: true
|
||||
type: string
|
||||
|
||||
jobs:
|
||||
claude-dedupe-issues:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Run Claude Code slash command
|
||||
uses: anthropics/claude-code-base-action@beta
|
||||
with:
|
||||
prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
claude_env: |
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Log duplicate comment event to Statsig
|
||||
if: always()
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
|
||||
REPO=${{ github.repository }}
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg triggered_by "${{ github.event_name }}" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_duplicate_comment_added",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
triggered_by: $triggered_by,
|
||||
workflow_run_id: "${{ github.run_id }}"
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
|
||||
else
|
||||
echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
Some files were not shown because too many files have changed in this diff.