Compare commits
111 Commits
task-maste...v017-adds-
| Author | SHA1 | Date |
|---|---|---|
| | ea6a58ef90 | |
| | f07945734d | |
| | d5360f625f | |
| | 2e2d290c63 | |
| | d3edd24b5a | |
| | 60c0f26f3c | |
| | be0bf18f41 | |
| | 32236a0bc5 | |
| | 668b22e615 | |
| | 4901908f5d | |
| | a047886910 | |
| | 92234323d7 | |
| | 514fdb0b78 | |
| | 932825c2d6 | |
| | 3c8c62434f | |
| | 9d755b9e79 | |
| | fcc2351b3d | |
| | 3db62b5b88 | |
| | 6b929fa9fa | |
| | ddaa1dceef | |
| | cac8c234d6 | |
| | b205e52d08 | |
| | 4585a6bbc7 | |
| | b3ec151b27 | |
| | 5d9748af89 | |
| | b28479a09d | |
| | 9efbd38f10 | |
| | dc7a5414c0 | |
| | 75e017e371 | |
| | a84cd1a492 | |
| | 3888ef41d4 | |
| | 83d6405b17 | |
| | bb775e3180 | |
| | 2328efe482 | |
| | f3fe481f3f | |
| | f43b5fdd75 | |
| | 153b190e0d | |
| | 0be5ae59fe | |
| | f1d593f887 | |
| | efd14544f0 | |
| | ef9439d441 | |
| | a49071a6b8 | |
| | c2d83690a1 | |
| | b40a94c4ac | |
| | 05a389e171 | |
| | 3352a6a99f | |
| | d391f3b5b3 | |
| | f2c5911e58 | |
| | bb5a0211f4 | |
| | 4234cc3d87 | |
| | d942db4868 | |
| | 3cf718a718 | |
| | a2ff8a97b7 | |
| | b1b888a5f3 | |
| | f817de9da6 | |
| | 806c505aac | |
| | 6f225cf81a | |
| | 74eb9907f3 | |
| | 5c29969741 | |
| | 8e794e18ac | |
| | 3ce4d2cc74 | |
| | 2d85fcc6a7 | |
| | 0102be4f3b | |
| | b6f1376625 | |
| | d4f21be1a3 | |
| | f28de8b729 | |
| | e50230f9ce | |
| | 01992ebd0b | |
| | af652978a0 | |
| | 54005d5486 | |
| | 65b70d746a | |
| | f533fd0931 | |
| | 7db7cf3859 | |
| | 2434b97247 | |
| | bac58c606d | |
| | 89f8bff219 | |
| | 366cd161da | |
| | a346dd5020 | |
| | c2709edd78 | |
| | e53006066e | |
| | 2d11b94804 | |
| | a5e36cf7b4 | |
| | 9cd18caa3c | |
| | 9058d7dfdd | |
| | 199e32c2d1 | |
| | a874a12e17 | |
| | eb343287ae | |
| | 94eeb5117b | |
| | 87c85d3d66 | |
| | 0b8f594ac7 | |
| | 15b190b87b | |
| | 9ae255ccb4 | |
| | 518f73eefa | |
| | 40a52385ba | |
| | 78397fe0be | |
| | f9b89dc25c | |
| | ca69e1294f | |
| | ac36e2497e | |
| | 1d4b80fe6f | |
| | 023f51c579 | |
| | 1e020023ed | |
| | 325f5a2aa3 | |
| | de46bfd84b | |
| | cc26c36366 | |
| | 15ad34928d | |
| | f74d639110 | |
| | de58e9ede5 | |
| | 947541e4ee | |
| | 275cd55da7 | |
| | 67ac212973 | |
| | 235371ff47 | |
77 .changeset/bright-windows-sing.md Normal file
@@ -0,0 +1,77 @@
---
"task-master-ai": minor
---

Add comprehensive AI-powered research command with intelligent context gathering and interactive follow-ups.

The new `research` command provides AI-powered research capabilities that automatically gather relevant project context to answer your questions. The command intelligently selects context from multiple sources and supports interactive follow-up questions in CLI mode.

**Key Features:**

- **Intelligent Task Discovery**: Automatically finds relevant tasks and subtasks using fuzzy search based on your query keywords, supplementing any explicitly provided task IDs
- **Multi-Source Context**: Gathers context from tasks, files, project structure, and custom text to provide comprehensive answers
- **Interactive Follow-ups**: CLI users can ask follow-up questions that build on the conversation history while allowing fresh context discovery for each question
- **Flexible Detail Levels**: Choose from low (concise), medium (balanced), or high (comprehensive) response detail levels
- **Token Transparency**: Displays detailed token breakdown showing context size, sources, and estimated costs
- **Enhanced Display**: Syntax-highlighted code blocks and structured output with clear visual separation

**Usage Examples:**

```bash
# Basic research with auto-discovered context
task-master research "How should I implement user authentication?"

# Research with specific task context
task-master research "What's the best approach for this?" --id=15,23.2

# Research with file context and project tree
task-master research "How does the current auth system work?" --files=src/auth.js,config/auth.json --tree

# Research with custom context and low detail
task-master research "Quick implementation steps?" --context="Using JWT tokens" --detail=low
```

**Context Sources:**

- **Tasks**: Automatically discovers relevant tasks/subtasks via fuzzy search, plus any explicitly specified via `--id`
- **Files**: Include specific files via `--files` for code-aware responses
- **Project Tree**: Add `--tree` to include a project structure overview
- **Custom Context**: Provide additional context via `--context` for domain-specific information

**Interactive Features (CLI only):**

- Follow-up questions that maintain conversation history
- Fresh fuzzy search for each follow-up to discover newly relevant tasks
- Cumulative context building across the conversation
- Clean visual separation between exchanges
- **Save to Tasks**: Save entire research conversations (including follow-ups) directly to task or subtask details with timestamps
- **Clean Menu Interface**: Streamlined inquirer-based menu for follow-up actions without redundant UI elements

**Save Functionality:**

The research command now supports saving complete conversation threads to tasks or subtasks:

- Save research results and follow-up conversations to any task (e.g., "15") or subtask (e.g., "15.2")
- Automatic timestamping and formatting of conversation history
- Validation of task/subtask existence before saving
- Appends to existing task details without overwriting content
- Supports both CLI interactive mode and MCP programmatic access via `--save-to` flag

**Enhanced CLI Options:**

```bash
# Auto-save research results to a task
task-master research "Implementation approach?" --save-to=15

# Combine auto-save with context gathering
task-master research "How to optimize this?" --id=23 --save-to=23.1
```

**MCP Integration:**

- `saveTo` parameter for automatic saving to a specified task/subtask ID
- Structured response format with telemetry data
- Silent operation mode for programmatic usage
- Full feature parity with CLI except interactive follow-ups

The research command integrates with the existing AI service layer and supports all configured AI providers. Both CLI and MCP interfaces provide comprehensive research capabilities with intelligent context gathering and flexible output options.
5 .changeset/chatty-rats-talk.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix Cursor deeplink installation by providing copy-paste instructions for GitHub compatibility
13 .changeset/cold-pears-poke.md Normal file
@@ -0,0 +1,13 @@
---
'task-master-ai': patch
---

Fix critical bugs in task move functionality:

- **Fixed moving tasks to become subtasks of empty parents**: When moving a task to become a subtask of a parent that had no existing subtasks (e.g., task 89 → task 98.1), the operation would fail with validation errors.
- **Fixed moving subtasks between parents**: Subtasks can now be properly moved between different parent tasks, including to parents that previously had no subtasks.
- **Improved comma-separated batch moves**: Multiple tasks can now be moved simultaneously using comma-separated IDs (e.g., "88,90" → "92,93") with proper error handling and atomic operations.

These fixes enable proper task hierarchy reorganization for corner cases that were previously broken.
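A minimal usage sketch of the moves described above. The `move` command with `--from`/`--to` flags is assumed here for illustration; the changeset itself only specifies the ID syntax.

```bash
# Assumed invocation; only the ID syntax ("88,90" -> "92,93") comes from the changeset
task-master move --from=89 --to=98.1      # make task 89 the first subtask of task 98
task-master move --from=88,90 --to=92,93  # batch move with comma-separated IDs
```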
@@ -2,16 +2,13 @@
   "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
   "changelog": [
     "@changesets/changelog-github",
-    {
-      "repo": "eyaltoledano/claude-task-master"
-    }
+    { "repo": "eyaltoledano/claude-task-master" }
   ],
   "commit": false,
   "fixed": [],
+  "linked": [],
   "access": "public",
   "baseBranch": "main",
-  "ignore": [
-    "docs",
-    "@tm/claude-code-plugin"
-  ]
+  "updateInternalDependencies": "patch",
+  "ignore": []
 }
5 .changeset/curly-dragons-design.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Improve the findTasks algorithm for resolving the tasks path
5 .changeset/eleven-news-check.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix the update tool on MCP giving `No valid tasks found`
@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---

Enable Task Master commands to traverse parent directories to find the project root from nested paths

Fixes #1301
@@ -1,5 +0,0 @@
---
"@tm/cli": patch
---

Fix warning message box width to match dashboard box width for consistent UI alignment
5 .changeset/fluffy-waves-allow.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Adds the ability to automatically create/switch tags to match the current git branch. The configuration to enable the git workflow and auto-switching is in config.json.
39 .changeset/four-cups-enter.md Normal file
@@ -0,0 +1,39 @@
---
"task-master-ai": patch
---

Enhanced add-task fuzzy search intelligence and improved user experience

**Smarter Task Discovery:**

- Remove hardcoded category system that always matched "Task management"
- Eliminate arbitrary limits on fuzzy search results (5→25 high relevance, 3→10 medium relevance, 8→20 detailed tasks)
- Improve semantic weighting in Fuse.js search (details=3, description=2, title=1.5) for better relevance
- Generate context-driven task recommendations based on true semantic similarity

**Enhanced Terminal Experience:**

- Fix duplicate banner display issue that was "eating" terminal history (closes #553)
- Remove console.clear() and redundant displayBanner() calls from UI functions
- Preserve command history for better development workflow
- Streamline banner display across all commands (list, next, show, set-status, clear-subtasks, dependency commands)

**Visual Improvements:**

- Replace emoji complexity indicators with clean filled circle characters (●) for a professional appearance
- Improve consistency and readability of task complexity display

**AI Provider Compatibility:**

- Change generateObject mode from 'tool' to 'auto' for better cross-provider compatibility
- Add qwen3-235b-a22b:free model support (closes #687)
- Add smart warnings for free OpenRouter models with limitations (rate limits, restricted context, no tool_use)

**Technical Improvements:**

- Enhanced context generation in add-task to rely on semantic similarity rather than rigid pattern matching
- Improved dependency analysis and common pattern detection
- Better handling of task relationships and relevance scoring
- More intelligent task suggestion algorithms

The add-task system now provides truly relevant task context based on semantic understanding rather than arbitrary categories and limits, while maintaining a cleaner and more professional terminal experience.
18 .changeset/free-pants-rescue.md Normal file
@@ -0,0 +1,18 @@
---
"task-master-ai": minor
---

Enhance update-task with --append flag for timestamped task updates

Adds the `--append` flag to the `update-task` command, enabling it to behave like `update-subtask` with timestamped information appending. This provides more flexible task updating options:

**CLI Enhancement:**
- `task-master update-task --id=5 --prompt="New info"` - Full task update (existing behavior)
- `task-master update-task --id=5 --append --prompt="Progress update"` - Append timestamped info to task details

**Full MCP Integration:**
- MCP tool `update_task` now supports `append` parameter
- Seamless integration with Cursor and other MCP clients
- Consistent behavior between CLI and MCP interfaces

Instead of requiring separate subtask creation for progress tracking, you can now append timestamped information directly to parent tasks while preserving the option for comprehensive task updates.
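The two invocation forms from the bullets above, side by side:

```bash
# Full task update (existing behavior)
task-master update-task --id=5 --prompt="New info"

# Append timestamped info to the task's details instead of rewriting the task
task-master update-task --id=5 --append --prompt="Progress update"
```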
@@ -1,21 +0,0 @@
---
"task-master-ai": patch
---

Fix MCP server compatibility with Draft-07 clients (Augment IDE, gemini-cli, gemini code assist)

- Resolves #1284

**Problem:**

- MCP tools were using Zod v4, which outputs JSON Schema Draft 2020-12
- MCP clients only support Draft-07
- Tools were not discoverable in gemini-cli and other clients

**Solution:**

- Updated all MCP tools to import from `zod/v3` instead of `zod`
- Zod v3 schemas convert to Draft-07 via FastMCP's zod-to-json-schema
- Fixed logger to use stderr instead of stdout (MCP protocol requirement)

This is a temporary workaround until FastMCP adds JSON Schema version configuration.
@@ -2,4 +2,4 @@
 "task-master-ai": patch
 ---
 
-Improve auth token refresh flow
+Update o3 model price
11 .changeset/late-dryers-relax.md Normal file
@@ -0,0 +1,11 @@
---
"task-master-ai": minor
---

Add --tag flag support to core commands for multi-context task management. Commands like parse-prd, analyze-complexity, and others now support targeting specific task lists, enabling rapid prototyping and parallel development workflows.

Key features:
- parse-prd --tag=feature-name: Parse PRDs into separate task contexts on the fly
- analyze-complexity --tag=branch: Generate tag-specific complexity reports
- All task operations can target specific contexts while preserving other lists
- Non-existent tags are created automatically for seamless workflow
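A brief sketch of the flag in context, using invocation forms assumed from examples elsewhere in this release; the PRD path and tag names are placeholders:

```bash
# Parse a PRD into its own task context (the tag is created if it does not exist)
task-master parse-prd docs/feature-prd.txt --tag=feature-name

# Generate a complexity report scoped to one tag
task-master analyze-complexity --tag=feature-branch
```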
@@ -1,35 +0,0 @@
---
"task-master-ai": minor
---

Add configurable MCP tool loading to optimize LLM context usage

You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.

**Configuration Options:**

- `all` (default): Load all 36 tools
- `core` or `lean`: Load only 7 essential tools for daily development
  - Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
  - Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)

**Example .mcp.json configuration:**

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "standard",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
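For the custom-list option above, a hedged one-liner showing how the same variable could be set when launching the server manually; the `.mcp.json` example is the documented route, and this form is only illustrative:

```bash
# Illustrative only: set a custom tool list via the environment before starting the MCP server
TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" npx -y task-master-ai
```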
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Improve next command to work with remote
8 .changeset/nasty-chefs-add.md Normal file
@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---

Fixes an issue with the expand CLI command reporting "Complexity report not found"

- Closes #735
- Closes #728
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Add 4.5 haiku and sonnet to supported models for claude-code and anthropic ai providers
7 .changeset/pink-houses-lay.md Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Fix double .taskmaster directory paths in file resolution utilities

- Closes #636
5 .changeset/polite-areas-shave.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Add one-click MCP server installation for Cursor
@@ -1,22 +1,11 @@
 {
-  "mode": "pre",
+  "mode": "exit",
   "tag": "rc",
   "initialVersions": {
-    "task-master-ai": "0.29.0",
-    "@tm/cli": "",
-    "docs": "0.0.6",
-    "extension": "0.25.6",
-    "@tm/mcp": "0.28.0-rc.2",
-    "@tm/ai-sdk-provider-grok-cli": "",
-    "@tm/build-config": "",
-    "@tm/claude-code-plugin": "0.0.2",
-    "@tm/core": ""
+    "task-master-ai": "0.16.1"
   },
   "changesets": [
-    "dirty-hairs-know",
-    "fix-parent-directory-traversal",
-    "fix-warning-box-alignment",
-    "light-owls-stay",
-    "metal-rocks-help"
+    "pink-houses-lay",
+    "polite-areas-shave"
   ]
 }
5 .changeset/quick-flies-sniff.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix issue with the generate command, which was creating tasks in the legacy tasks location.
136 .changeset/six-cups-see.md Normal file
@@ -0,0 +1,136 @@
---
"task-master-ai": minor
---

Introduces Tagged Lists: AI Multi-Context Task Management System

This major release introduces Tagged Lists, a comprehensive system that transforms Task Master into a multi-context task management powerhouse. You can now organize tasks into completely isolated contexts, enabling parallel (agentic) development workflows, team collaboration, and project experimentation without conflicts.

**🏷️ Tagged Task Lists Architecture:**

The new tagged system fundamentally changes how tasks are organized:
- **Legacy Format**: `{ "tasks": [...] }`
- **New Tagged Format**: `{ "master": { "tasks": [...], "metadata": {...} }, "feature-xyz": { "tasks": [...], "metadata": {...} } }`
- **Automatic Migration**: Existing projects seamlessly migrate to the tagged format with zero user intervention
- **State Management**: New `.taskmaster/state.json` tracks current tag, last switched time, and migration status
- **Configuration Integration**: Enhanced `.taskmaster/config.json` with tag-specific settings and defaults

**🚀 Complete Tag Management Suite:**

**Core Tag Commands:**
- `task-master tags [--show-metadata]` - List all tags with task counts, completion stats, and metadata
- `task-master add-tag <name> [options]` - Create new tag contexts with optional task copying
- `task-master delete-tag <name> [--yes]` - Delete tags with double confirmation protection
- `task-master use-tag <name>` - Switch contexts and immediately see the next available task
- `task-master rename-tag <old> <new>` - Rename tags with automatic current tag reference updates
- `task-master copy-tag <source> <target> [options]` - Duplicate tag contexts for experimentation

**🤖 Full MCP Integration for Tag Management:**

Task Master's multi-context capabilities are now fully exposed through the MCP server, enabling powerful agentic workflows:
- **`list_tags`**: List all available tag contexts.
- **`add_tag`**: Programmatically create new tags.
- **`delete_tag`**: Remove tag contexts.
- **`use_tag`**: Switch the agent's active task context.
- **`rename_tag`**: Rename existing tags.
- **`copy_tag`**: Duplicate entire task contexts for experimentation.

**Tag Creation Options:**
- `--copy-from-current` - Copy tasks from the currently active tag
- `--copy-from=<tag>` - Copy tasks from a specific tag
- `--from-branch` - Creates a new tag using the active git branch name (for `add-tag` only)
- `--description="<text>"` - Add custom tag descriptions
- Empty tag creation for fresh contexts

**🎯 Universal --tag Flag Support:**

Every task operation now supports tag-specific execution:
- `task-master list --tag=feature-branch` - View tasks in a specific context
- `task-master add-task --tag=experiment --prompt="..."` - Create tasks in a specific tag
- `task-master parse-prd document.txt --tag=v2-redesign` - Parse PRDs into dedicated contexts
- `task-master analyze-complexity --tag=performance-work` - Generate tag-specific reports
- `task-master set-status --tag=hotfix --id=5 --status=done` - Update tasks in specific contexts
- `task-master expand --tag=research --id=3` - Break down tasks within tag contexts

**📊 Enhanced Workflow Features:**

**Smart Context Switching:**
- `use-tag` command shows the immediate next task after switching
- Automatic tag creation when targeting non-existent tags
- Current tag persistence across terminal sessions
- Branch-tag mapping for future Git integration

**Intelligent File Management:**
- Tag-specific complexity reports: `task-complexity-report_tagname.json`
- Master tag uses default filenames: `task-complexity-report.json`
- Automatic file isolation prevents cross-tag contamination

**Advanced Confirmation Logic:**
- Commands only prompt when the target tag has existing tasks
- Empty tags allow immediate operations without confirmation
- Smart append vs overwrite detection

**🔄 Seamless Migration & Compatibility:**

**Zero-Disruption Migration:**
- Existing `tasks.json` files automatically migrate on first command
- Master tag receives proper metadata (creation date, description)
- Migration notice shown once with a helpful explanation
- All existing commands work identically to before

**State Management:**
- `.taskmaster/state.json` tracks current tag and migration status
- Automatic state creation and maintenance
- Branch-tag mapping foundation for Git integration
- Migration notice tracking to avoid repeated notifications
- Grounds for future context additions

**Backward Compatibility:**
- All existing workflows continue unchanged
- Legacy commands work exactly as before
- Gradual adoption - users can ignore tags entirely if desired
- No breaking changes to existing tasks or file formats

**💡 Real-World Use Cases:**

**Team Collaboration:**
- `task-master add-tag alice --copy-from-current` - Create teammate-specific contexts
- `task-master add-tag bob --copy-from=master` - Onboard new team members
- `task-master use-tag alice` - Switch to a teammate's work context

**Feature Development:**
- `task-master parse-prd feature-spec.txt --tag=user-auth` - Dedicated feature planning
- `task-master add-tag experiment --copy-from=user-auth` - Safe experimentation
- `task-master analyze-complexity --tag=user-auth` - Feature-specific analysis

**Release Management:**
- `task-master add-tag v2.0 --description="Next major release"` - Version-specific planning
- `task-master copy-tag master v2.1` - Release branch preparation
- `task-master use-tag hotfix` - Emergency fix context

**Project Phases:**
- `task-master add-tag research --description="Discovery phase"` - Research tasks
- `task-master add-tag implementation --copy-from=research` - Development phase
- `task-master add-tag testing --copy-from=implementation` - QA phase

**🛠️ Technical Implementation:**

**Data Structure:**
- Tagged format with complete isolation between contexts
- Rich metadata per tag (creation date, description, update tracking)
- Automatic metadata enhancement for existing tags
- Clean separation of tag data and internal state

**Performance Optimizations:**
- Dynamic task counting without stored counters
- Efficient tag resolution and caching
- Minimal file I/O with smart data loading
- Responsive table layouts adapting to terminal width

**Error Handling:**
- Comprehensive validation for tag names (alphanumeric, hyphens, underscores)
- Reserved name protection (master, main, default)
- Graceful handling of missing tags and corrupted data
- Detailed error messages with suggested corrections

This release establishes the foundation for advanced multi-context workflows while maintaining the simplicity and power that makes Task Master effective for individual developers.
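A short end-to-end sequence assembled from the commands documented above; the tag name and PRD path are placeholders:

```bash
# Create an isolated context, switch to it, plan it from a PRD, then review it
task-master add-tag feature-user-auth --description="User auth work"
task-master use-tag feature-user-auth
task-master parse-prd .taskmaster/docs/user-auth-prd.txt --tag=feature-user-auth
task-master list --tag=feature-user-auth
```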
24 .changeset/slick-webs-lead.md Normal file
@@ -0,0 +1,24 @@
---
"task-master-ai": minor
---

Research Save-to-File Feature & Critical MCP Tag Corruption Fix

**🔬 New Research Save-to-File Functionality:**

Added comprehensive save-to-file capability to the research command, enabling users to preserve research sessions for future reference and documentation.

**CLI Integration:**
- New `--save-file` flag for the `task-master research` command
- Consistent with existing `--save` and `--save-to` flags for intuitive usage
- Interactive "Save to file" option in the follow-up questions menu

**MCP Integration:**
- New `saveToFile` boolean parameter for the `research` MCP tool
- Enables programmatic research saving for AI agents and integrated tools

**File Management:**
- Automatically creates the `.taskmaster/docs/research/` directory structure
- Generates timestamped, slugified filenames (e.g., `2025-01-13_what-is-typescript.md`)
- Comprehensive Markdown format with metadata headers including query, timestamp, and context sources
- Clean conversation history formatting without duplicate information
.changeset/slow-lies-make.md
Normal file
5
.changeset/slow-lies-make.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
---
|
||||||
|
"task-master-ai": minor
|
||||||
|
---
|
||||||
|
|
||||||
|
No longer automatically creates individual task files as they are not used by the applicatoin. You can still generate them anytime using the `generate` command.
|
||||||
@@ -1,36 +0,0 @@
---
"task-master-ai": minor
---

Add autonomous TDD workflow automation system with new `tm autopilot` commands and MCP tools for AI-driven test-driven development.

**New CLI Commands:**

- `tm autopilot start <taskId>` - Initialize TDD workflow
- `tm autopilot next` - Get next action in workflow
- `tm autopilot status` - Check workflow progress
- `tm autopilot complete` - Advance phase with test results
- `tm autopilot commit` - Save progress with metadata
- `tm autopilot resume` - Continue from checkpoint
- `tm autopilot abort` - Cancel workflow

**New MCP Tools:**
Seven new autopilot tools for programmatic control: `autopilot_start`, `autopilot_next`, `autopilot_status`, `autopilot_complete_phase`, `autopilot_commit`, `autopilot_resume`, `autopilot_abort`

**Features:**

- Complete RED → GREEN → COMMIT cycle enforcement
- Intelligent commit message generation with metadata
- Activity logging and state persistence
- Configurable workflow settings via `.taskmaster/config.json`
- Comprehensive AI agent integration documentation

**Documentation:**

- AI Agent Integration Guide (2,800+ lines)
- TDD Quick Start Guide
- Example prompts and integration patterns

> **Learn more:** [TDD Workflow Quickstart Guide](https://dev.task-master.dev/tdd-workflow/quickstart)

This release enables AI agents to autonomously execute test-driven development workflows with full state management and recovery capabilities.
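A typical session using only the commands listed above; the task ID is a placeholder:

```bash
# Start a TDD workflow for a task, then step through it
tm autopilot start 42
tm autopilot next      # get the next action in the RED -> GREEN -> COMMIT cycle
tm autopilot status    # check workflow progress
tm autopilot commit    # save progress with generated commit metadata
```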
19 .changeset/stale-bats-sin.md Normal file
@@ -0,0 +1,19 @@
---
'task-master-ai': minor
---

Enhanced get-task/show command to support comma-separated task IDs for efficient batch operations

**New Features:**
- **Multiple Task Retrieval**: Pass comma-separated IDs to get/show multiple tasks at once (e.g., `task-master show 1,3,5` or MCP `get_task` with `id: "1,3,5"`)
- **Smart Display Logic**: Single ID shows detailed view, multiple IDs show compact summary table with interactive options
- **Batch Action Menu**: Interactive menu for multiple tasks with copy-paste ready commands for common operations (mark as done/in-progress, expand all, view dependencies, etc.)
- **MCP Array Response**: MCP tool returns structured array of task objects for efficient AI agent context gathering

**Benefits:**
- **Faster Context Gathering**: AI agents can collect multiple tasks/subtasks in one call instead of iterating
- **Improved Workflow**: Interactive batch operations reduce repetitive command execution
- **Better UX**: Responsive layout adapts to terminal width, maintains consistency with existing UI patterns
- **API Efficiency**: RESTful array responses in MCP format enable more sophisticated integrations

This enhancement maintains full backward compatibility while significantly improving efficiency for both human users and AI agents working with multiple tasks.
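The display logic described above, shown with the example IDs from the changeset:

```bash
task-master show 5       # single ID: detailed view
task-master show 1,3,5   # multiple IDs: compact summary table with batch action menu
```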
7 .changeset/tiny-ads-decide.md Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": minor
---

Adds support for filtering tasks by multiple statuses at once using comma-separated statuses.

Example: `cancelled,deferred`
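A plausible invocation, assuming the value above is passed to the `list` command's existing `--status` filter:

```bash
# Assumed flag placement; the changeset only specifies the comma-separated value
task-master list --status=cancelled,deferred
```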
5 .changeset/two-lies-start.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Improves dependency management when moving tasks by updating subtask dependencies that reference sibling subtasks by their old parent-based ID
22 .changeset/vast-shrimps-happen.md Normal file
@@ -0,0 +1,22 @@
---
"task-master-ai": patch
---

Add sync-readme command for exporting tasks to the GitHub README

Introduces a new `sync-readme` command that exports your task list to your project's README.md file.

**Features:**

- **Flexible filtering**: Supports `--status` filtering (e.g., pending, done) and a `--with-subtasks` flag
- **Smart content management**: Automatically replaces existing exports or appends to new READMEs
- **Metadata display**: Shows export timestamp, subtask inclusion status, and filter settings

**Usage:**

- `task-master sync-readme` - Export tasks without subtasks
- `task-master sync-readme --with-subtasks` - Include subtasks in export
- `task-master sync-readme --status=pending` - Only export pending tasks
- `task-master sync-readme --status=done --with-subtasks` - Export completed tasks with subtasks

Perfect for showcasing project progress on GitHub. Experimental. Open to feedback.
5 .changeset/yellow-olives-admire.md Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---

Adds the current tag to CLI output so you know which tag you are performing operations on. This is already supported in the MCP response.
@@ -1,32 +0,0 @@
{
  "name": "taskmaster",
  "owner": {
    "name": "Hamster",
    "email": "ralph@tryhamster.com"
  },
  "metadata": {
    "description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
    "version": "1.0.0"
  },
  "plugins": [
    {
      "name": "taskmaster",
      "source": "./packages/claude-code-plugin",
      "description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
      "author": {
        "name": "Hamster"
      },
      "homepage": "https://github.com/eyaltoledano/claude-task-master",
      "repository": "https://github.com/eyaltoledano/claude-task-master",
      "keywords": [
        "task-management",
        "ai",
        "workflow",
        "orchestration",
        "automation",
        "mcp"
      ],
      "category": "productivity"
    }
  ]
}
@@ -1,147 +0,0 @@
# Task Master Commands for Claude Code

Complete guide to using Task Master through Claude Code's slash commands.

## Overview

All Task Master functionality is available through the `/project:tm/` namespace with natural language support and intelligent features.

## Quick Start

```bash
# Install Task Master
/project:tm/setup/quick-install

# Initialize project
/project:tm/init/quick

# Parse requirements
/project:tm/parse-prd requirements.md

# Start working
/project:tm/next
```

## Command Structure

Commands are organized hierarchically to match Task Master's CLI:
- Main commands at `/project:tm/[command]`
- Subcommands for specific operations `/project:tm/[command]/[subcommand]`
- Natural language arguments accepted throughout

## Complete Command Reference

### Setup & Configuration
- `/project:tm/setup/install` - Full installation guide
- `/project:tm/setup/quick-install` - One-line install
- `/project:tm/init` - Initialize project
- `/project:tm/init/quick` - Quick init with -y
- `/project:tm/models` - View AI config
- `/project:tm/models/setup` - Configure AI

### Task Generation
- `/project:tm/parse-prd` - Generate from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files

### Task Management
- `/project:tm/list` - List with natural language filters
- `/project:tm/list/with-subtasks` - Hierarchical view
- `/project:tm/list/by-status <status>` - Filter by status
- `/project:tm/show <id>` - Task details
- `/project:tm/add-task` - Create task
- `/project:tm/update` - Update tasks
- `/project:tm/remove-task` - Delete task

### Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`

### Task Analysis
- `/project:tm/analyze-complexity` - AI analysis
- `/project:tm/complexity-report` - View report
- `/project:tm/expand <id>` - Break down task
- `/project:tm/expand/all` - Expand all complex

### Dependencies
- `/project:tm/add-dependency` - Add dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check issues
- `/project:tm/fix-dependencies` - Auto-fix

### Workflows
- `/project:tm/workflows/smart-flow` - Adaptive workflows
- `/project:tm/workflows/pipeline` - Chain commands
- `/project:tm/workflows/auto-implement` - AI implementation

### Utilities
- `/project:tm/status` - Project dashboard
- `/project:tm/next` - Next task recommendation
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/learn` - Interactive help

## Key Features

### Natural Language Support
All commands understand natural language:
```
/project:tm/list pending high priority
/project:tm/update mark 23 as done
/project:tm/add-task implement OAuth login
```

### Smart Context
Commands analyze project state and provide intelligent suggestions based on:
- Current task status
- Dependencies
- Team patterns
- Project phase

### Visual Enhancements
- Progress bars and indicators
- Status badges
- Organized displays
- Clear hierarchies

## Common Workflows

### Daily Development
```
/project:tm/workflows/smart-flow morning
/project:tm/next
/project:tm/set-status/to-in-progress <id>
/project:tm/set-status/to-done <id>
```

### Task Breakdown
```
/project:tm/show <id>
/project:tm/expand <id>
/project:tm/list/with-subtasks
```

### Sprint Planning
```
/project:tm/analyze-complexity
/project:tm/workflows/pipeline init → expand/all → status
```

## Migration from Old Commands

| Old | New |
|-----|-----|
| `/project:task-master:list` | `/project:tm/list` |
| `/project:task-master:complete` | `/project:tm/set-status/to-done` |
| `/project:workflows:auto-implement` | `/project:tm/workflows/auto-implement` |

## Tips

1. Use `/project:tm/` + Tab for command discovery
2. Natural language is supported everywhere
3. Commands provide smart defaults
4. Chain commands for automation
5. Check `/project:tm/learn` for interactive help
@@ -1,38 +0,0 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #2
4. Next, feed the results from #2 and #3 into another agent, so that it can filter out false positives that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with Github, rather than web fetch
- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]

---
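Illustrative `gh` calls an agent might run for the steps above; the issue number, search terms, and comment text are placeholders:

```bash
# Step 2: summarize the issue
gh issue view 1234 --json title,body,state,labels

# Step 3: search for candidate duplicates
gh search issues "cursor deeplink install fails" --repo eyaltoledano/claude-task-master --limit 10

# Step 5: comment back with the findings
gh issue comment 1234 --body "Found 3 possible duplicate issues: ..."
```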
@@ -1,7 +0,0 @@
reviews:
  profile: chill
  poem: false
  auto_review:
    enabled: true
    base_branches:
      - ".*"
@@ -2,13 +2,12 @@
   "mcpServers": {
     "task-master-ai": {
       "command": "node",
-      "args": ["./dist/mcp-server.js"],
+      "args": ["./mcp-server/server.js"],
       "env": {
         "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
         "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
         "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
         "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
-        "GROQ_API_KEY": "GROQ_API_KEY_HERE",
         "XAI_API_KEY": "XAI_API_KEY_HERE",
         "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
         "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
@@ -523,7 +523,7 @@ For AI-powered commands that benefit from project context, follow the research c
   .option('--details <details>', 'Implementation details for the new subtask, optional')
   .option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on')
   .option('--status <status>', 'Initial status for the subtask', 'pending')
-  .option('--generate', 'Regenerate task files after adding subtask')
+  .option('--skip-generate', 'Skip regenerating task files')
   .action(async (options) => {
     // Validate required parameters
     if (!options.parent) {
@@ -545,7 +545,7 @@ For AI-powered commands that benefit from project context, follow the research c
   .option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
   .option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required')
   .option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting')
-  .option('--generate', 'Regenerate task files after removing subtask')
+  .option('--skip-generate', 'Skip regenerating task files')
   .action(async (options) => {
     // Implementation with detailed error handling
   })
@@ -633,11 +633,11 @@ function showAddSubtaskHelp() {
   ' --dependencies <ids> Comma-separated list of dependency IDs\n' +
   ' -s, --status <status> Status for the new subtask (default: "pending")\n' +
   ' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
-  ' --generate Regenerate task files after adding subtask\n\n' +
+  ' --skip-generate Skip regenerating task files\n\n' +
   chalk.cyan('Examples:') + '\n' +
   ' task-master add-subtask --parent=\'5\' --task-id=\'8\'\n' +
   ' task-master add-subtask -p \'5\' -t \'Implement login UI\' -d \'Create the login form\'\n' +
-  ' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details "Handle 401 Unauthorized.\\nHandle 500 Server Error." --generate',
+  ' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details $\'Handle 401 Unauthorized.\nHandle 500 Server Error.\'',
   { padding: 1, borderColor: 'blue', borderStyle: 'round' }
   ));
 }
@@ -652,7 +652,7 @@ function showRemoveSubtaskHelp() {
   ' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
   ' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' +
   ' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
-  ' --generate Regenerate task files after removing subtask\n\n' +
+  ' --skip-generate Skip regenerating task files\n\n' +
   chalk.cyan('Examples:') + '\n' +
   ' task-master remove-subtask --id=\'5.2\'\n' +
   ' task-master remove-subtask --id=\'5.2,6.3,7.1\'\n' +
@@ -1,200 +1,19 @@
|
|||||||
---
|
---
|
||||||
description: Guide for using Taskmaster to manage task-driven development workflows
|
description: Guide for using Task Master to manage task-driven development workflows
|
||||||
globs: **/*
|
globs: **/*
|
||||||
alwaysApply: true
|
alwaysApply: true
|
||||||
---
|
---
|
||||||
|
# Task Master Development Workflow
|
||||||
|
|
||||||
# Taskmaster Development Workflow
|
This guide outlines the typical process for using Task Master to manage software development projects.
|
||||||
|
|
||||||
This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.
|
|
||||||
|
|
||||||
- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
|
|
||||||
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.
|
|
||||||
|
|
||||||
## The Basic Loop
|
|
||||||
The fundamental development cycle you will facilitate is:
|
|
||||||
1. **`list`**: Show the user what needs to be done.
|
|
||||||
2. **`next`**: Help the user decide what to work on.
|
|
||||||
3. **`show <id>`**: Provide details for a specific task.
|
|
||||||
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
|
|
||||||
5. **Implement**: The user writes the code and tests.
|
|
||||||
6. **`update-subtask`**: Log progress and findings on behalf of the user.
|
|
||||||
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
|
|
||||||
8. **Repeat**.
|
|
||||||
|
|
||||||
All your standard command executions should operate on the user's current task context, which defaults to `master`.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Standard Development Workflow Process
|
|
||||||
|
|
||||||
### Simple Workflow (Default Starting Point)
|
|
||||||
|
|
||||||
For new projects or when users are getting started, operate within the `master` tag context:
|
|
||||||
|
|
||||||
- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json with tagged structure
|
|
||||||
- Configure rule sets during initialization with `--rules` flag (e.g., `task-master init --rules cursor,windsurf`) or manage them later with `task-master rules add/remove` commands
|
|
||||||
- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs
|
|
||||||
- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
|
||||||
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
|
|
||||||
- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
|
||||||
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
|
|
||||||
- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements
|
|
||||||
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
|
|
||||||
- Implement code following task details, dependencies, and project standards
|
|
||||||
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
|
||||||
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Leveling Up: Agent-Led Multi-Context Workflows
|
|
||||||
|
|
||||||
While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session.
|
|
||||||
|
|
||||||
**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.
|
|
||||||
|
|
||||||
### When to Introduce Tags: Your Decision Patterns
|
|
||||||
|
|
||||||
Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user.
|
|
||||||
|
|
||||||
#### Pattern 1: Simple Git Feature Branching

This is the most common and direct use case for tags.

- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
- **Tool to Use**: `task-master add-tag --from-branch`
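
As a sketch, the branch-to-tag handoff looks like this (the branch name, and the tag name derived from it, are illustrative):

```bash
# User starts a feature branch
git checkout -b feature/user-auth

# Mirror it with an isolated task context and switch to it
task-master add-tag --from-branch
task-master use-tag feature-user-auth
```
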
#### Pattern 2: Team Collaboration

- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with the shared master context.
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`

#### Pattern 3: Experiments or Risky Refactors

- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`
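
A sketch of the full experiment lifecycle, assuming the experiment is ultimately abandoned:

```bash
# Sandbox the experimental work in its own tag
task-master add-tag experiment-zustand --description="Exploring Zustand migration"
task-master use-tag experiment-zustand

# If the experiment is dropped, switch back and delete the tag;
# the main task list is untouched
task-master use-tag master
task-master delete-tag experiment-zustand --yes
```
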
#### Pattern 4: Large Feature Initiatives (PRD-Driven)

This is a more structured approach for significant new features or epics.

- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
- **Your Implementation Flow**:
  1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
  2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
  3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
  4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag.
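
In CLI terms, that flow is roughly the following sketch (the tag and file names are illustrative):

```bash
# 1. Create the dedicated context (optionally after creating a git branch)
task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"

# 2. Draft the PRD with the user, then parse it into the new tag
task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz

# 3. Prepare the new list for implementation
task-master use-tag feature-xyz
task-master analyze-complexity --research
task-master complexity-report
```
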
#### Pattern 5: Version-Based Development

Tailor your approach based on the project maturity indicated by tag names.

- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
  - **Your Approach**: Focus on speed and functionality over perfection
  - **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
  - **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
  - **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
  - **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*

- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
  - **Your Approach**: Emphasize robustness, testing, and maintainability
  - **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
  - **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
  - **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
  - **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*

### Advanced Workflow (Tag-Based & PRD-Driven)

**When to Transition**: Recognize when the project has evolved (or the user has initialized Taskmaster on an existing codebase) beyond simple task management. Look for these indicators:

- User mentions teammates or collaboration needs
- Project has grown to 15+ tasks with mixed priorities
- User creates feature branches or mentions major initiatives
- User initializes Taskmaster on an existing, complex codebase
- User describes large features that would benefit from dedicated planning

**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.

#### Master List Strategy (High-Value Focus)

Once you transition to tag-based workflows, the `master` tag should ideally contain only:

- **High-level deliverables** that provide significant business value
- **Major milestones** and epic-level features
- **Critical infrastructure** work that affects the entire project
- **Release-blocking** items

**What NOT to put in master**:

- Detailed implementation subtasks (these go in feature-specific tags' parent tasks)
- Refactoring work (create dedicated tags like `refactor-auth`)
- Experimental features (use `experiment-*` tags)
- Team member-specific tasks (use person-specific tags)

#### PRD-Driven Feature Development

**For New Major Features**:

1. **Identify the Initiative**: When user describes a significant feature
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
4. **Parse & Prepare**:
   - `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
   - `analyze_project_complexity --tag=feature-[name] --research`
   - `expand_all --tag=feature-[name] --research`
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag

**For Existing Codebase Analysis**:

When users initialize Taskmaster on existing projects:

1. **Codebase Discovery**: Use your native tools to build deep context about the codebase. You may also use the `research` tool with `--tree` and `--files` to collect up-to-date information using the existing architecture as context.
2. **Collaborative Assessment**: Work with the user to identify improvement areas, technical debt, or new features
3. **Strategic PRD Creation**: Co-author PRDs that include:
   - Current state analysis (based on your codebase research)
   - Proposed improvements or new features
   - Implementation strategy considering existing code
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master

The parse-prd `--append` flag lets the user parse multiple PRDs into the same tag or across tags. PRDs should be focused, and the number of tasks they are parsed into should be chosen strategically relative to each PRD's complexity and level of detail.

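
For instance, feeding two focused PRDs into the same tag might look like this sketch (the file and tag names are illustrative):

```bash
# The first PRD seeds the tag
task-master parse-prd .taskmaster/docs/refactor-api-prd.txt --tag refactor-api

# A follow-up PRD appends tasks to the same tag instead of replacing them
task-master parse-prd .taskmaster/docs/refactor-api-phase2-prd.txt --tag refactor-api --append
```
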
### Workflow Transition Examples

**Example 1: Simple → Team-Based**

```
User: "Alice is going to help with the API work"
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
```

**Example 2: Simple → PRD-Driven**

```
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
Actions:
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
2. Collaborate on PRD creation
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
4. Add high-level "User Dashboard" task to master
```

**Example 3: Existing Project → Strategic Planning**

```
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
Actions:
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
2. Collaborate on improvement PRD based on findings
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
4. Keep only major improvement initiatives in master
```

---

## Primary Interaction: MCP Server vs. CLI

Task Master offers two primary ways to interact:

1. **MCP Server (Recommended for Integrated Tools)**:
   - For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**.
   - The MCP server exposes Task Master functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
   - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
   - Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details on the MCP architecture and available tools.
   - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).
2. **`task-master` CLI**:
   - Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a detailed command reference.
   - **Tagged Task Lists**: The CLI fully supports the new tagged system with seamless migration.

## Tagged Task Lists System

Task Master now supports **tagged task lists** for multi-context task management:

- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0"
- **Seamless Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption
- **Backward Compatibility**: All existing commands continue to work exactly as before
- **Context Isolation**: Tasks in different tags are completely separate and isolated; changes in one tag do not affect any other tag
- **Manual Control**: The user is always in control; there is no automatic switching. You facilitate switching by using `use-tag <name>`
- **Silent Migration**: The first time you run any Task Master command, your existing tasks.json will be automatically migrated to the new tagged format
- **Migration Notice**: You'll see a friendly FYI notice after migration explaining the new system
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools. Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a full command list.

**Migration Example**:

```json
// Before (legacy format)
{
  "tasks": [
    { "id": 1, "title": "Setup API", ... }
  ]
}

// After (tagged format - automatic)
{
  "master": {
    "tasks": [
      { "id": 1, "title": "Setup API", ... }
    ]
  }
}
```

**Tag Management**: CLI commands for tag management (`add-tag`, `use-tag`, `list-tags`, `delete-tag`, `rename-tag`, `copy-tag`) are now available with manual git integration via the `--from-branch` option.

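
As a sketch of day-to-day tag management (the tag names are illustrative, and the positional argument form shown for `copy-tag` is assumed to mirror `rename-tag`):

```bash
# Inspect existing contexts
task-master tags --show-metadata

# Switch, rename, copy, or remove contexts as the work evolves
task-master use-tag feature-user-auth
task-master rename-tag feature-user-auth feature-auth
task-master copy-tag feature-auth feature-auth-backup    # argument form assumed
task-master delete-tag feature-auth-backup --yes
```
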
## Standard Development Workflow Process

- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json with tagged structure
- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- Clarify tasks by checking task files in tasks/ directory or asking for user input
- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
- Clear existing subtasks if needed using `clear_subtasks` / `task-master clear-subtasks --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before regenerating
- Implement code following task details, dependencies, and project standards
- Verify tasks according to test strategies before marking as complete (see [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..." --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Add new subtasks as needed using `add_subtask` / `task-master add-subtask --parent=<id> --title="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Append notes or details to subtasks using `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='Add implementation notes here...\nMore details...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Generate task files with `generate` / `task-master generate` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) after updating tasks.json
- Maintain valid dependency structure with `add_dependency`/`remove_dependency` tools or `task-master add-dependency`/`remove-dependency` commands, `validate_dependencies` / `task-master validate-dependencies`, and `fix_dependencies` / `task-master fix-dependencies` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) when needed
- Respect dependency chains and task priorities when selecting work
- Report progress regularly using `get_tasks` / `task-master list`
- Reorganize tasks as needed using `move_task` / `task-master move --from=<id> --to=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to change task hierarchy or ordering

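
As a concrete sketch of the dependency-maintenance commands above (the task IDs are illustrative):

```bash
# Task 15 cannot start until task 12 is done
task-master add-dependency --id=15 --depends-on=12

# Audit the dependency graph and repair any invalid references
task-master validate-dependencies
task-master fix-dependencies
```
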

## Task Complexity Analysis

**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.

**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project.

## Rules Management

Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:

- **Available Profiles**: Claude Code, Cline, Codex, Cursor, Roo Code, Trae, Windsurf (claude, cline, codex, cursor, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules cursor,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.cursor/rules`, `.roo/rules`) with appropriate configuration files
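
A minimal sketch of these rules commands in practice (the profile selections are illustrative):

```bash
# Scope rule profiles at init time
task-master init --rules cursor,windsurf

# Adjust profiles later, or pick them interactively
task-master rules add roo
task-master rules remove windsurf
task-master rules setup
```
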
## Determining the Next Task

- Run `next_task` / `task-master next` to show the next task to work on.

@@ -26,7 +26,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
|
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
|
||||||
* `--description <text>`: `Provide a brief description for your project.`
|
* `--description <text>`: `Provide a brief description for your project.`
|
||||||
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
|
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
|
||||||
* `--no-git`: `Skip initializing a Git repository entirely.`
|
|
||||||
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
|
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
|
||||||
* **Usage:** Run this once at the beginning of a new project.
|
* **Usage:** Run this once at the beginning of a new project.
|
||||||
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
|
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
|
||||||
@@ -37,7 +36,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* `authorName`: `Author name.` (CLI: `--author <author>`)
|
* `authorName`: `Author name.` (CLI: `--author <author>`)
|
||||||
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
|
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
|
||||||
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
|
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
|
||||||
* `noGit`: `Skip initializing a Git repository entirely. Default is false.` (CLI: `--no-git`)
|
|
||||||
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
|
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
|
||||||
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
|
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
|
||||||
* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.
|
* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.
|
||||||
@@ -79,7 +77,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* `--set-fallback <model_id>`: `Set the fallback model.`
|
* `--set-fallback <model_id>`: `Set the fallback model.`
|
||||||
* `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
|
* `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
|
||||||
* `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.`
|
* `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.`
|
||||||
* `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).`
|
|
||||||
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
|
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
|
||||||
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
|
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
|
||||||
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
|
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
|
||||||
@@ -111,7 +108,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
|
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
|
||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* **Usage:** Identify what to work on next according to the plan.
|
* **Usage:** Identify what to work on next according to the plan.
|
||||||
|
|
||||||
### 5. Get Task Details (`get_task`)
|
### 5. Get Task Details (`get_task`)
|
||||||
@@ -140,7 +136,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
|
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
|
||||||
* `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
|
* `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
|
||||||
* `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
|
* `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
|
||||||
* `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Quickly add newly identified tasks during development.
|
* **Usage:** Quickly add newly identified tasks during development.
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
||||||
@@ -158,8 +153,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
|
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
|
||||||
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
|
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
|
||||||
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
|
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
|
||||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
|
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Break down tasks manually or reorganize existing tasks.
|
* **Usage:** Break down tasks manually or reorganize existing tasks.
|
||||||
|
|
||||||
@@ -172,7 +166,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
|
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
|
||||||
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
|
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
|
||||||
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
|
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
|
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
||||||
@@ -198,13 +191,12 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

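
For example, logging progress against subtask 5.2 might look like this (the ID and note text are illustrative):

```bash
task-master update-subtask --id=5.2 --prompt='Implemented JWT refresh flow; still need a test for expired tokens during rotation.'
```
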
### 11. Set Task Status (`set_task_status`)
|
### 11. Set Task Status (`set_task_status`)
|
||||||
|
|
||||||
@@ -214,7 +206,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
|
* `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
|
||||||
* `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
|
* `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Mark progress as tasks move through the development cycle.
|
* **Usage:** Mark progress as tasks move through the development cycle.
|
||||||
|
|
||||||
@@ -226,7 +217,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
|
* `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
|
||||||
* `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
|
* `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
|
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
|
||||||
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.
|
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.
|
||||||
@@ -272,9 +262,8 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **CLI Command:** `task-master clear-subtasks [options]`
|
* **CLI Command:** `task-master clear-subtasks [options]`
|
||||||
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
|
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
|
||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
|
||||||
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
|
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
|
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
|
||||||
|
|
||||||
@@ -286,8 +275,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
|
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
|
||||||
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
|
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
|
||||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
|
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
|
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
|
||||||
|
|
||||||
@@ -299,7 +287,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
|
* `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
|
||||||
* `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
|
* `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
|
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
|
||||||
* Moving a task to become a subtask
|
* Moving a task to become a subtask
|
||||||
@@ -329,7 +316,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
|
* `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
|
||||||
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
|
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
|
||||||
* **Usage:** Establish the correct order of execution between tasks.
|
* **Usage:** Establish the correct order of execution between tasks.
|
||||||
|
|
||||||
@@ -341,7 +327,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
|
* `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
|
||||||
* `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
|
* `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Update task relationships when the order of execution changes.
|
* **Usage:** Update task relationships when the order of execution changes.
|
||||||
|
|
||||||
@@ -351,7 +336,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **CLI Command:** `task-master validate-dependencies [options]`
|
* **CLI Command:** `task-master validate-dependencies [options]`
|
||||||
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
|
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
|
||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Audit the integrity of your task dependencies.
|
* **Usage:** Audit the integrity of your task dependencies.
|
||||||
|
|
||||||
@@ -389,7 +373,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **CLI Command:** `task-master complexity-report [options]`
|
* **CLI Command:** `task-master complexity-report [options]`
|
||||||
* **Description:** `Display the task complexity analysis report in a readable format.`
|
* **Description:** `Display the task complexity analysis report in a readable format.`
|
||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
|
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
|
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
|
||||||
|
|
||||||
@@ -461,7 +444,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
|||||||
* **CLI Command:** `task-master tags [options]`
|
* **CLI Command:** `task-master tags [options]`
|
||||||
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
|
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
|
||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)
|
* `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)
|
||||||
|
|
||||||
### 27. Add Tag (`add_tag`)
|
### 27. Add Tag (`add_tag`)
|
||||||
@@ -475,7 +457,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
|||||||
* `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
|
* `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
|
||||||
* `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
|
* `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
|
||||||
* `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
|
* `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
|
|
||||||
### 28. Delete Tag (`delete_tag`)
|
### 28. Delete Tag (`delete_tag`)
|
||||||
|
|
||||||
@@ -485,7 +466,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
|
* `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
|
||||||
* `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
|
* `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
|
|
||||||
### 29. Use Tag (`use_tag`)
|
### 29. Use Tag (`use_tag`)
|
||||||
|
|
||||||
@@ -494,7 +474,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
|||||||
* **Description:** `Switch your active task context to a different tag.`
|
* **Description:** `Switch your active task context to a different tag.`
|
||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
|
* `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
|
|
||||||
### 30. Rename Tag (`rename_tag`)
|
### 30. Rename Tag (`rename_tag`)
|
||||||
|
|
||||||
@@ -504,7 +483,6 @@ This new suite of commands allows you to manage different task contexts (tags).
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
|
* `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
|
||||||
* `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
|
* `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
|
|
||||||
### 31. Copy Tag (`copy_tag`)
|
### 31. Copy Tag (`copy_tag`)
|
||||||
|
|
||||||

---

For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.cursor/rules/dev_workflow.mdc).

---
description:
globs:
alwaysApply: true
---

# Test Workflow & Development Process

## **Initial Testing Framework Setup**

Before implementing the TDD workflow, ensure your project has a proper testing framework configured. This section covers setup for different technology stacks.

### **Detecting Project Type & Framework Needs**

**AI Agent Assessment Checklist:**

1. **Language Detection**: Check for `package.json` (Node.js/JavaScript), `requirements.txt` (Python), `Cargo.toml` (Rust), etc.
2. **Existing Tests**: Look for test files (`.test.`, `.spec.`, `_test.`) or test directories
3. **Framework Detection**: Check for existing test runners in dependencies
4. **Project Structure**: Analyze directory structure for testing patterns

### **JavaScript/Node.js Projects (Jest Setup)**

#### **Prerequisites Check**

```bash
# Verify Node.js project
ls package.json  # Should exist

# Check for existing testing setup
ls jest.config.js jest.config.ts  # Check for Jest config
grep -E "(jest|vitest|mocha)" package.json  # Check for test runners
```

#### **Jest Installation & Configuration**

**Step 1: Install Dependencies**

```bash
# Core Jest dependencies
npm install --save-dev jest

# TypeScript support (if using TypeScript)
npm install --save-dev ts-jest @types/jest

# Additional useful packages
npm install --save-dev supertest @types/supertest  # For API testing
npm install --save-dev jest-watch-typeahead        # Enhanced watch mode
```

**Step 2: Create Jest Configuration**

Create `jest.config.js` with the following production-ready configuration:

```javascript
/** @type {import('jest').Config} */
module.exports = {
  // Use ts-jest preset for TypeScript support
  preset: 'ts-jest',

  // Test environment
  testEnvironment: 'node',

  // Roots for test discovery
  roots: ['<rootDir>/src', '<rootDir>/tests'],

  // Test file patterns
  testMatch: ['**/__tests__/**/*.ts', '**/?(*.)+(spec|test).ts'],

  // Transform files
  transform: {
    '^.+\\.ts$': [
      'ts-jest',
      {
        tsconfig: {
          target: 'es2020',
          module: 'commonjs',
          esModuleInterop: true,
          allowSyntheticDefaultImports: true,
          skipLibCheck: true,
          strict: false,
          noImplicitAny: false,
        },
      },
    ],
    '^.+\\.js$': [
      'ts-jest',
      {
        useESM: false,
        tsconfig: {
          target: 'es2020',
          module: 'commonjs',
          esModuleInterop: true,
          allowSyntheticDefaultImports: true,
          allowJs: true,
        },
      },
    ],
  },

  // Module file extensions
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],

  // Transform ignore patterns - adjust for ES modules
  transformIgnorePatterns: ['node_modules/(?!(your-es-module-deps|.*\\.mjs$))'],

  // Coverage configuration
  collectCoverage: true,
  coverageDirectory: 'coverage',
  coverageReporters: [
    'text', // Console output
    'text-summary', // Brief summary
    'lcov', // For IDE integration
    'html', // Detailed HTML report
  ],

  // Files to collect coverage from
  collectCoverageFrom: [
    'src/**/*.ts',
    '!src/**/*.d.ts',
    '!src/**/*.test.ts',
    '!src/**/index.ts', // Often just exports
    '!src/generated/**', // Generated code
    '!src/config/database.ts', // Database config (tested via integration)
  ],

  // Coverage thresholds - TaskMaster standards
  coverageThreshold: {
    global: {
      branches: 70,
      functions: 80,
      lines: 80,
      statements: 80,
    },
    // Higher standards for critical business logic
    './src/utils/': {
      branches: 85,
      functions: 90,
      lines: 90,
      statements: 90,
    },
    './src/middleware/': {
      branches: 80,
      functions: 85,
      lines: 85,
      statements: 85,
    },
  },

  // Setup files
  setupFilesAfterEnv: ['<rootDir>/tests/setup.ts'],

  // Global teardown to prevent worker process leaks
  globalTeardown: '<rootDir>/tests/teardown.ts',

  // Module path mapping (if needed)
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1',
  },

  // Clear mocks between tests
  clearMocks: true,

  // Restore mocks after each test
  restoreMocks: true,

  // Global test timeout
  testTimeout: 10000,

  // Projects for different test types
  projects: [
    // Unit tests - for pure functions only
    {
      displayName: 'unit',
      testMatch: ['<rootDir>/src/**/*.test.ts'],
      testPathIgnorePatterns: ['.*\\.integration\\.test\\.ts$', '/tests/'],
      preset: 'ts-jest',
      testEnvironment: 'node',
      collectCoverageFrom: [
        'src/**/*.ts',
        '!src/**/*.d.ts',
        '!src/**/*.test.ts',
        '!src/**/*.integration.test.ts',
      ],
      coverageThreshold: {
        global: {
          branches: 70,
          functions: 80,
          lines: 80,
          statements: 80,
        },
      },
    },
    // Integration tests - real database/services
    {
      displayName: 'integration',
      testMatch: [
        '<rootDir>/src/**/*.integration.test.ts',
        '<rootDir>/tests/integration/**/*.test.ts',
      ],
      preset: 'ts-jest',
      testEnvironment: 'node',
      setupFilesAfterEnv: ['<rootDir>/tests/setup/integration.ts'],
      testTimeout: 10000,
    },
    // E2E tests - full workflows
    {
      displayName: 'e2e',
      testMatch: ['<rootDir>/tests/e2e/**/*.test.ts'],
      preset: 'ts-jest',
      testEnvironment: 'node',
      setupFilesAfterEnv: ['<rootDir>/tests/setup/e2e.ts'],
      testTimeout: 30000,
    },
  ],

  // Verbose output for better debugging
  verbose: true,

  // Run projects sequentially to avoid conflicts
  maxWorkers: 1,

  // Enable watch mode plugins
  watchPlugins: ['jest-watch-typeahead/filename', 'jest-watch-typeahead/testname'],
};
```
|
|
||||||
|
|
||||||
**Step 3: Update package.json Scripts**
|
|
||||||
|
|
||||||
Add these scripts to your `package.json`:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"scripts": {
|
|
||||||
"test": "jest",
|
|
||||||
"test:watch": "jest --watch",
|
|
||||||
"test:coverage": "jest --coverage",
|
|
||||||
"test:unit": "jest --selectProjects unit",
|
|
||||||
"test:integration": "jest --selectProjects integration",
|
|
||||||
"test:e2e": "jest --selectProjects e2e",
|
|
||||||
"test:ci": "jest --ci --coverage --watchAll=false"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Step 4: Create Test Setup Files**
|
|
||||||
|
|
||||||
Create essential test setup files:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// tests/setup.ts - Global setup
|
|
||||||
import { jest } from '@jest/globals';
|
|
||||||
|
|
||||||
// Global test configuration
|
|
||||||
beforeAll(() => {
|
|
||||||
// Set test timeout
|
|
||||||
jest.setTimeout(10000);
|
|
||||||
});
|
|
||||||
|
|
||||||
afterEach(() => {
|
|
||||||
// Clean up mocks after each test
|
|
||||||
jest.clearAllMocks();
|
|
||||||
});
|
|
||||||
```
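
The debugging section later in this guide imports a `testUtils` helper from this setup file, but its implementation is not shown in the source. A minimal, hypothetical sketch of what such an export could look like, assuming the global setup normally silences console output, is:

```typescript
// tests/setup.ts (continued) - hypothetical `testUtils` export referenced later
const realConsole = { log: console.log, warn: console.warn, error: console.error };

export const testUtils = {
  // Make console output visible for the duration of `fn`, then restore
  // whatever (possibly silenced) implementations were active before.
  withConsole(fn: () => void): void {
    const previous = { log: console.log, warn: console.warn, error: console.error };
    Object.assign(console, realConsole);
    try {
      fn();
    } finally {
      Object.assign(console, previous);
    }
  }
};
```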

```typescript
// tests/setup/integration.ts - Integration test setup
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

beforeAll(async () => {
  // Connect to test database
  await prisma.$connect();
});

afterAll(async () => {
  // Cleanup and disconnect
  await prisma.$disconnect();
});

beforeEach(async () => {
  // Clean test data before each test
  // Add your cleanup logic here
});
```

```typescript
// tests/teardown.ts - Global teardown
export default async () => {
  // Global cleanup after all tests
  console.log('Global test teardown complete');
};
```
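
The Jest configuration above also references `tests/setup/e2e.ts`, which the guide does not show. A minimal sketch, assuming the E2E suite talks to the same database as the integration suite, could be:

```typescript
// tests/setup/e2e.ts - hypothetical E2E setup (not shown in the original guide)
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

beforeAll(async () => {
  // E2E tests exercise the full stack, so verify the database is reachable up front
  await prisma.$connect();
});

afterAll(async () => {
  await prisma.$disconnect();
});
```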

**Step 5: Create Initial Test Structure**

```bash
# Create test directories
mkdir -p tests/{setup,fixtures,unit,integration,e2e}
mkdir -p tests/unit/src/{utils,services,middleware}

# Create sample test fixtures directory (no-op if already created above)
mkdir -p tests/fixtures
```

### **Generic Testing Framework Setup (Any Language)**

#### **Framework Selection Guide**

**Python Projects:**
- **pytest**: Recommended for most Python projects
- **unittest**: Built-in, suitable for simple projects
- **Coverage**: Use `coverage.py` for code coverage

```bash
# Python setup example
pip install pytest pytest-cov
echo "[tool:pytest]" > pytest.ini
echo "testpaths = tests" >> pytest.ini
echo "addopts = --cov=src --cov-report=html --cov-report=term" >> pytest.ini
```

**Go Projects:**
- **Built-in testing**: Use Go's built-in `testing` package
- **Coverage**: Built-in with `go test -cover`

```bash
# Go setup example
go mod init your-project
mkdir -p tests
# Tests are typically *_test.go files alongside source
```

**Rust Projects:**
- **Built-in testing**: Use Rust's built-in test framework
- **cargo-tarpaulin**: For coverage analysis

```bash
# Rust setup example
cargo new your-project
cd your-project
cargo install cargo-tarpaulin # For coverage
```

**Java Projects:**
- **JUnit 5**: Modern testing framework
- **Maven/Gradle**: Build tools with testing integration

```xml
<!-- Maven pom.xml example -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter</artifactId>
  <version>5.9.2</version>
  <scope>test</scope>
</dependency>
```

#### **Universal Testing Principles**

**Coverage Standards (Adapt to Your Language):**
- **Global Minimum**: 70-80% line coverage
- **Critical Code**: 85-90% coverage
- **New Features**: Must meet or exceed standards
- **Legacy Code**: Gradual improvement strategy

**Test Organization:**
- **Unit Tests**: Fast, isolated, no external dependencies
- **Integration Tests**: Test component interactions
- **E2E Tests**: Test complete user workflows
- **Performance Tests**: Load and stress testing (if applicable)

**Naming Conventions:**
- **Test Files**: `*.test.*`, `*_test.*`, or language-specific patterns
- **Test Functions**: Descriptive names (e.g., `should_return_error_for_invalid_input`)
- **Test Directories**: Organized by test type and mirroring source structure

#### **TaskMaster Integration for Any Framework**

**Document Testing Setup in Subtasks:**
```bash
# Update subtask with testing framework setup
task-master update-subtask --id=X.Y --prompt="Testing framework setup:
- Installed [Framework Name] with coverage support
- Configured [Coverage Tool] with thresholds: 80% lines, 70% branches
- Created test directory structure: unit/, integration/, e2e/
- Added test scripts to build configuration
- All setup tests passing"
```

**Testing Framework Verification:**
```bash
# Verify setup works
[test-command] # e.g., npm test, pytest, go test, cargo test

# Check coverage reporting
[coverage-command] # e.g., npm run test:coverage

# Update task with verification
task-master update-subtask --id=X.Y --prompt="Testing framework verified:
- Sample tests running successfully
- Coverage reporting functional
- CI/CD integration ready
- Ready to begin TDD workflow"
```

## **Test-Driven Development (TDD) Integration**

### **Core TDD Cycle with Jest**
```bash
# 1. Start development with watch mode
npm run test:watch

# 2. Write failing test first
# Create test file: src/utils/newFeature.test.ts
# Write test that describes expected behavior

# 3. Implement minimum code to make test pass
# 4. Refactor while keeping tests green
# 5. Add edge cases and error scenarios
```
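
To make steps 2 and 3 concrete, here is a hedged example of what a first failing test for a hypothetical `slugify` utility (not part of the original guide) could look like, followed by the minimum implementation that turns it green.

```typescript
// src/utils/slugify.test.ts - written before the implementation exists
import { slugify } from './slugify'; // hypothetical utility under test

describe('slugify', () => {
  it('should lower-case and hyphenate words', () => {
    expect(slugify('Task Master Rules')).toBe('task-master-rules');
  });

  it('should strip characters that are not alphanumeric, spaces, or hyphens', () => {
    expect(slugify('v0.17 (beta)!')).toBe('v017-beta');
  });
});
```

```typescript
// src/utils/slugify.ts - minimum code to make the tests pass
export function slugify(input: string): string {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation
    .trim()
    .replace(/\s+/g, '-'); // collapse whitespace into single hyphens
}
```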

### **TDD Workflow Per Subtask**
```bash
# When starting a new subtask:
task-master set-status --id=4.1 --status=in-progress

# Begin TDD cycle:
npm run test:watch # Keep running during development

# Document TDD progress in subtask:
task-master update-subtask --id=4.1 --prompt="TDD Progress:
- Written 3 failing tests for core functionality
- Implemented basic feature, tests now passing
- Adding edge case tests for error handling"

# Complete subtask with test summary:
task-master update-subtask --id=4.1 --prompt="Implementation complete:
- Feature implemented with 8 unit tests
- Coverage: 95% statements, 88% branches
- All tests passing, TDD cycle complete"
```

## **Testing Commands & Usage**

### **Development Commands**
```bash
# Primary development command - use during coding
npm run test:watch # Watch mode with Jest
npm run test:watch -- --testNamePattern="auth" # Watch specific tests

# Targeted testing during development
npm run test:unit # Run only unit tests
npm run test:unit -- --coverage # Unit tests with coverage

# Integration testing when APIs are ready
npm run test:integration # Run integration tests
npm run test:integration -- --detectOpenHandles # Debug hanging tests

# End-to-end testing for workflows
npm run test:e2e # Run E2E tests
npm run test:e2e -- --timeout=30000 # Extended timeout for E2E
```

### **Quality Assurance Commands**
```bash
# Full test suite with coverage (before commits)
npm run test:coverage # Complete coverage analysis

# All tests (CI/CD pipeline)
npm test # Run all test projects

# Specific test file execution
npm test -- auth.test.ts # Run specific test file
npm test -- --testNamePattern="should handle errors" # Run specific tests
```

## **Test Implementation Patterns**

### **Unit Test Development**
```typescript
// ✅ DO: Follow established patterns from auth.test.ts
describe('FeatureName', () => {
  beforeEach(() => {
    jest.clearAllMocks();
    // Setup mocks with proper typing
  });

  describe('functionName', () => {
    it('should handle normal case', () => {
      // Test implementation with specific assertions
    });

    it('should throw error for invalid input', async () => {
      // Error scenario testing
      await expect(functionName(invalidInput))
        .rejects.toThrow('Specific error message');
    });
  });
});
```

### **Integration Test Development**
```typescript
// ✅ DO: Use supertest for API endpoint testing
import request from 'supertest';
import { app } from '../../src/app';

describe('POST /api/auth/register', () => {
  beforeEach(async () => {
    await integrationTestUtils.cleanupTestData();
  });

  it('should register user successfully', async () => {
    const userData = createTestUser();

    const response = await request(app)
      .post('/api/auth/register')
      .send(userData)
      .expect(201);

    expect(response.body).toMatchObject({
      id: expect.any(String),
      email: userData.email
    });

    // Verify database state
    const user = await prisma.user.findUnique({
      where: { email: userData.email }
    });
    expect(user).toBeTruthy();
  });
});
```

### **E2E Test Development**
```typescript
// ✅ DO: Test complete user workflows
describe('User Authentication Flow', () => {
  it('should complete registration → login → protected access', async () => {
    // Step 1: Register
    const userData = createTestUser();
    await request(app)
      .post('/api/auth/register')
      .send(userData)
      .expect(201);

    // Step 2: Login
    const loginResponse = await request(app)
      .post('/api/auth/login')
      .send({ email: userData.email, password: userData.password })
      .expect(200);

    const { token } = loginResponse.body;

    // Step 3: Access protected resource
    await request(app)
      .get('/api/profile')
      .set('Authorization', `Bearer ${token}`)
      .expect(200);
  }, 30000); // Extended timeout for E2E
});
```

## **Mocking & Test Utilities**

### **Established Mocking Patterns**
```typescript
// ✅ DO: Use established bcrypt mocking pattern
jest.mock('bcrypt');
import bcrypt from 'bcrypt';
const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;

// ✅ DO: Use Prisma mocking for unit tests
jest.mock('@prisma/client', () => ({
  PrismaClient: jest.fn().mockImplementation(() => ({
    user: {
      create: jest.fn(),
      findUnique: jest.fn(),
    },
    $connect: jest.fn(),
    $disconnect: jest.fn(),
  })),
}));
```
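
As a brief illustration of how these mocks are used inside a test body, the hedged sketch below resolves `mockHash` and asserts on the mocked call. `hashPassword` is a hypothetical wrapper for illustration only, not a function defined in this guide.

```typescript
// Hypothetical usage of the mocked bcrypt.hash declared above
it('should hash the password before saving', async () => {
  mockHash.mockResolvedValue('hashed-password' as never);

  const result = await hashPassword('plain-text'); // hypothetical utility under test

  expect(mockHash).toHaveBeenCalledWith('plain-text', expect.any(Number));
  expect(result).toBe('hashed-password');
});
```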

### **Test Fixtures Usage**
```typescript
// ✅ DO: Use centralized test fixtures
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';

describe('User Service', () => {
  it('should handle admin user creation', async () => {
    const userData = createTestUser(adminUser);
    // Test implementation
  });

  it('should reject invalid user data', async () => {
    const userData = createTestUser(invalidUser);
    // Error testing
  });
});
```
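
The guide does not show `tests/fixtures/users.ts` itself. A minimal sketch of what such a fixture module could look like, with field names chosen purely for illustration, is:

```typescript
// tests/fixtures/users.ts - hypothetical fixture module matching the import above
export interface TestUser {
  email: string;
  password: string;
  role: 'user' | 'admin';
}

export const adminUser: Partial<TestUser> = { role: 'admin' };
export const invalidUser: Partial<TestUser> = { email: 'not-an-email' };

// Factory: sensible defaults, overridable per test
export function createTestUser(overrides: Partial<TestUser> = {}): TestUser {
  return {
    email: `user-${Date.now()}@example.com`,
    password: 'Str0ng!Passw0rd',
    role: 'user',
    ...overrides
  };
}
```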

## **Coverage Standards & Monitoring**

### **Coverage Thresholds**
- **Global Standards**: 80% lines/functions, 70% branches
- **Critical Code**: 90% utils, 85% middleware
- **New Features**: Must meet or exceed global thresholds
- **Legacy Code**: Gradual improvement with each change

### **Coverage Reporting & Analysis**
```bash
# Generate coverage reports
npm run test:coverage

# View detailed HTML report
open coverage/lcov-report/index.html

# Coverage files generated:
# - coverage/lcov-report/index.html # Detailed HTML report
# - coverage/lcov.info # LCOV format for IDE integration
# - coverage/coverage-final.json # JSON format for tooling
```

### **Coverage Quality Checks**
```typescript
// ✅ DO: Test all code paths
describe('validateInput', () => {
  it('should return true for valid input', () => {
    expect(validateInput('valid')).toBe(true);
  });

  it('should return false for various invalid inputs', () => {
    expect(validateInput('')).toBe(false); // Empty string
    expect(validateInput(null)).toBe(false); // Null value
    expect(validateInput(undefined)).toBe(false); // Undefined
  });

  it('should throw for unexpected input types', () => {
    expect(() => validateInput(123)).toThrow('Invalid input type');
  });
});
```

## **Testing During Development Phases**

### **Feature Development Phase**
```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress

# 2. Begin TDD cycle
npm run test:watch

# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"

# 4. Verify coverage before completion
npm run test:coverage

# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```

### **Integration Testing Phase**
```bash
# After API endpoints are implemented
npm run test:integration

# Update integration test templates
# Replace placeholder tests with real endpoint calls

# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```

### **Pre-Commit Testing Phase**
```bash
# Before committing code
npm run test:coverage # Verify all tests pass with coverage
npm run test:unit # Quick unit test verification
npm run test:integration # Integration test verification (if applicable)

# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y

- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established

Task X: Feature Y - Testing complete"
```

## **Error Handling & Debugging**

### **Test Debugging Techniques**
```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';

it('should debug complex operation', () => {
  testUtils.withConsole(() => {
    // Console output visible only for this test
    console.log('Debug info:', complexData);
    service.complexOperation();
  });
});

// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
  const promise = service.asyncOperation();

  // Test intermediate state
  expect(service.isProcessing()).toBe(true);

  const result = await promise;
  expect(result).toBe('expected');
  expect(service.isProcessing()).toBe(false);
});
```

### **Common Test Issues & Solutions**
```bash
# Hanging tests (common with database connections)
npm run test:integration -- --detectOpenHandles

# Memory leaks in tests
npm run test:unit -- --logHeapUsage

# Slow tests identification
npm run test:coverage -- --verbose

# Mock not working properly
# Check: mock is declared before imports
# Check: jest.clearAllMocks() in beforeEach
# Check: TypeScript typing is correct
```

## **Continuous Integration Integration**

### **CI/CD Pipeline Testing**
```yaml
# Example GitHub Actions integration
- name: Run tests
  run: |
    npm ci
    npm run test:coverage

- name: Upload coverage reports
  uses: codecov/codecov-action@v3
  with:
    file: ./coverage/lcov.info
```

### **Pre-commit Hooks**
```bash
# Setup pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"

# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```

## **Test Maintenance & Evolution**

### **Adding Tests for New Features**
1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module

### **Updating Integration/E2E Tests**
1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities

### **Test Performance Optimization**
- **Parallel execution**: Jest runs tests in parallel by default (note that the configuration above sets `maxWorkers: 1` to avoid conflicts between projects; raise it once your suites are isolated)
- **Test isolation**: Use proper setup/teardown for independence
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible

---

**Key References:**
- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)

@@ -4,14 +4,12 @@ PERPLEXITY_API_KEY=YOUR_PERPLEXITY_KEY_HERE
 OPENAI_API_KEY=YOUR_OPENAI_KEY_HERE
 GOOGLE_API_KEY=YOUR_GOOGLE_KEY_HERE
 MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE
-GROQ_API_KEY=YOUR_GROQ_KEY_HERE
 OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
 XAI_API_KEY=YOUR_XAI_KEY_HERE
 AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
-OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE

 # Google Vertex AI Configuration
 VERTEX_PROJECT_ID=your-gcp-project-id
 VERTEX_LOCATION=us-central1
 # Optional: Path to service account credentials JSON file (alternative to API key)
 GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json

45 .github/PULL_REQUEST_TEMPLATE.md (vendored)
@@ -1,45 +0,0 @@
# What type of PR is this?
<!-- Check one -->

- [ ] 🐛 Bug fix
- [ ] ✨ Feature
- [ ] 🔌 Integration
- [ ] 📝 Docs
- [ ] 🧹 Refactor
- [ ] Other:
## Description
<!-- What does this PR do? -->

## Related Issues
<!-- Link issues: Fixes #123 -->

## How to Test This
<!-- Quick steps to verify the changes work -->
```bash
# Example commands or steps
```

**Expected result:**
<!-- What should happen? -->

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check` (or `npm run format` to fix)
- [ ] Addressed CodeRabbit comments (if any)
- [ ] Linked related issues (if any)
- [ ] Manually tested the changes

## Changelog Entry
<!-- One line describing the change for users -->
<!-- Example: "Added Kiro IDE integration with automatic task status updates" -->

---

### For Maintainers

- [ ] PR title follows conventional commits
- [ ] Target branch correct
- [ ] Labels added
- [ ] Milestone assigned (if applicable)

39 .github/PULL_REQUEST_TEMPLATE/bugfix.md (vendored)
@@ -1,39 +0,0 @@
## 🐛 Bug Fix

### 🔍 Bug Description
<!-- Describe the bug -->

### 🔗 Related Issues
<!-- Fixes #123 -->

### ✨ Solution
<!-- How does this PR fix the bug? -->

## How to Test

### Steps that caused the bug:
1.
2.

**Before fix:**
**After fix:**

### Quick verification:
```bash
# Commands to verify the fix
```

## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added unit tests (if applicable)
- [ ] Manually verified the fix works

---

### For Maintainers
- [ ] Root cause identified
- [ ] Fix doesn't introduce new issues
- [ ] CI passes

11 .github/PULL_REQUEST_TEMPLATE/config.yml (vendored)
@@ -1,11 +0,0 @@
blank_issues_enabled: false
contact_links:
  - name: 🐛 Bug Fix
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=bugfix.md
    about: Fix a bug in Task Master
  - name: ✨ New Feature
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=feature.md
    about: Add a new feature to Task Master
  - name: 🔌 New Integration
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=integration.md
    about: Add support for a new tool, IDE, or platform

49 .github/PULL_REQUEST_TEMPLATE/feature.md (vendored)
@@ -1,49 +0,0 @@
## ✨ New Feature

### 📋 Feature Description
<!-- Brief description -->

### 🎯 Problem Statement
<!-- What problem does this feature solve? Why is it needed? -->

### 💡 Solution
<!-- How does this feature solve the problem? What's the approach? -->

### 🔗 Related Issues
<!-- Link related issues: Fixes #123, Part of #456 -->

## How to Use It

### Quick Start
```bash
# Basic usage example
```

### Example
<!-- Show a real use case -->
```bash
# Practical example
```

**What you should see:**
<!-- Expected behavior -->

## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added tests for new functionality
- [ ] Manually tested in CLI mode
- [ ] Manually tested in MCP mode (if applicable)

## Changelog Entry
<!-- One-liner for release notes -->

---

### For Maintainers

- [ ] Feature aligns with project vision
- [ ] CIs pass
- [ ] Changeset file exists

53 .github/PULL_REQUEST_TEMPLATE/integration.md (vendored)
@@ -1,53 +0,0 @@
# 🔌 New Integration

## What tool/IDE is being integrated?

<!-- Name and brief description -->

## What can users do with it?

<!-- Key benefits -->

## How to Enable

### Setup

```bash
task-master rules add [name]
# Any other setup steps
```

### Example Usage

<!-- Show it in action -->

```bash
# Real example
```

### Natural Language Hooks (if applicable)

```
"When tests pass, mark task as done"
# Other examples
```

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Integration fully tested with target tool/IDE
- [ ] Error scenarios tested
- [ ] Added integration tests
- [ ] Documentation includes setup guide
- [ ] Examples are working and clear

---

## For Maintainers

- [ ] Integration stability verified
- [ ] Documentation comprehensive
- [ ] Examples working
259
.github/scripts/auto-close-duplicates.mjs
vendored
259
.github/scripts/auto-close-duplicates.mjs
vendored
@@ -1,259 +0,0 @@
|
|||||||
#!/usr/bin/env node
|
|
||||||
|
|
||||||
async function githubRequest(endpoint, token, method = 'GET', body) {
|
|
||||||
const response = await fetch(`https://api.github.com${endpoint}`, {
|
|
||||||
method,
|
|
||||||
headers: {
|
|
||||||
Authorization: `Bearer ${token}`,
|
|
||||||
Accept: 'application/vnd.github.v3+json',
|
|
||||||
'User-Agent': 'auto-close-duplicates-script',
|
|
||||||
...(body && { 'Content-Type': 'application/json' })
|
|
||||||
},
|
|
||||||
...(body && { body: JSON.stringify(body) })
|
|
||||||
});
|
|
||||||
|
|
||||||
if (!response.ok) {
|
|
||||||
throw new Error(
|
|
||||||
`GitHub API request failed: ${response.status} ${response.statusText}`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
return response.json();
|
|
||||||
}
|
|
||||||
|
|
||||||
function extractDuplicateIssueNumber(commentBody) {
|
|
||||||
const match = commentBody.match(/#(\d+)/);
|
|
||||||
return match ? parseInt(match[1], 10) : null;
|
|
||||||
}
|
|
||||||
|
|
||||||
async function closeIssueAsDuplicate(
|
|
||||||
owner,
|
|
||||||
repo,
|
|
||||||
issueNumber,
|
|
||||||
duplicateOfNumber,
|
|
||||||
token
|
|
||||||
) {
|
|
||||||
await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/issues/${issueNumber}`,
|
|
||||||
token,
|
|
||||||
'PATCH',
|
|
||||||
{
|
|
||||||
state: 'closed',
|
|
||||||
state_reason: 'not_planned',
|
|
||||||
labels: ['duplicate']
|
|
||||||
}
|
|
||||||
);
|
|
||||||
|
|
||||||
await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
|
|
||||||
token,
|
|
||||||
'POST',
|
|
||||||
{
|
|
||||||
body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.
|
|
||||||
|
|
||||||
If this is incorrect, please re-open this issue or create a new one.
|
|
||||||
|
|
||||||
🤖 Generated with [Task Master Bot]`
|
|
||||||
}
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
async function autoCloseDuplicates() {
|
|
||||||
console.log('[DEBUG] Starting auto-close duplicates script');
|
|
||||||
|
|
||||||
const token = process.env.GITHUB_TOKEN;
|
|
||||||
if (!token) {
|
|
||||||
throw new Error('GITHUB_TOKEN environment variable is required');
|
|
||||||
}
|
|
||||||
console.log('[DEBUG] GitHub token found');
|
|
||||||
|
|
||||||
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
|
|
||||||
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
|
|
||||||
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
|
|
||||||
|
|
||||||
const threeDaysAgo = new Date();
|
|
||||||
threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
|
|
||||||
);
|
|
||||||
|
|
||||||
console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
|
|
||||||
const allIssues = [];
|
|
||||||
let page = 1;
|
|
||||||
const perPage = 100;
|
|
||||||
|
|
||||||
const MAX_PAGES = 50; // Increase limit for larger repos
|
|
||||||
let foundRecentIssue = false;
|
|
||||||
|
|
||||||
while (true) {
|
|
||||||
const pageIssues = await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
|
|
||||||
token
|
|
||||||
);
|
|
||||||
|
|
||||||
if (pageIssues.length === 0) break;
|
|
||||||
|
|
||||||
// Filter for issues created more than 3 days ago
|
|
||||||
const oldEnoughIssues = pageIssues.filter(
|
|
||||||
(issue) => new Date(issue.created_at) <= threeDaysAgo
|
|
||||||
);
|
|
||||||
|
|
||||||
allIssues.push(...oldEnoughIssues);
|
|
||||||
|
|
||||||
// If all issues on this page are newer than 3 days, we can stop
|
|
||||||
if (oldEnoughIssues.length === 0 && page === 1) {
|
|
||||||
foundRecentIssue = true;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
|
|
||||||
// If we found some old issues but not all, continue to next page
|
|
||||||
// as there might be more old issues
|
|
||||||
page++;
|
|
||||||
|
|
||||||
// Safety limit to avoid infinite loops
|
|
||||||
if (page > MAX_PAGES) {
|
|
||||||
console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
const issues = allIssues;
|
|
||||||
console.log(`[DEBUG] Found ${issues.length} open issues`);
|
|
||||||
|
|
||||||
let processedCount = 0;
|
|
||||||
let candidateCount = 0;
|
|
||||||
|
|
||||||
for (const issue of issues) {
|
|
||||||
processedCount++;
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
|
|
||||||
);
|
|
||||||
|
|
||||||
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
|
|
||||||
const comments = await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
|
|
||||||
token
|
|
||||||
);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
|
|
||||||
);
|
|
||||||
|
|
||||||
const dupeComments = comments.filter(
|
|
||||||
(comment) =>
|
|
||||||
comment.body.includes('Found') &&
|
|
||||||
comment.body.includes('possible duplicate') &&
|
|
||||||
comment.user.type === 'Bot'
|
|
||||||
);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
|
|
||||||
);
|
|
||||||
|
|
||||||
if (dupeComments.length === 0) {
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
|
|
||||||
);
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
const lastDupeComment = dupeComments[dupeComments.length - 1];
|
|
||||||
const dupeCommentDate = new Date(lastDupeComment.created_at);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${
|
|
||||||
issue.number
|
|
||||||
} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
|
|
||||||
);
|
|
||||||
|
|
||||||
if (dupeCommentDate > threeDaysAgo) {
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
|
|
||||||
);
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${
|
|
||||||
issue.number
|
|
||||||
} - duplicate comment is old enough (${Math.floor(
|
|
||||||
(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
|
|
||||||
)} days)`
|
|
||||||
);
|
|
||||||
|
|
||||||
const commentsAfterDupe = comments.filter(
|
|
||||||
(comment) => new Date(comment.created_at) > dupeCommentDate
|
|
||||||
);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
|
|
||||||
);
|
|
||||||
|
|
||||||
if (commentsAfterDupe.length > 0) {
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
|
|
||||||
);
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
|
|
||||||
);
|
|
||||||
const reactions = await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
|
|
||||||
token
|
|
||||||
);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
|
|
||||||
);
|
|
||||||
|
|
||||||
const authorThumbsDown = reactions.some(
|
|
||||||
(reaction) =>
|
|
||||||
reaction.user.id === issue.user.id && reaction.content === '-1'
|
|
||||||
);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
|
|
||||||
);
|
|
||||||
|
|
||||||
if (authorThumbsDown) {
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
|
|
||||||
);
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
const duplicateIssueNumber = extractDuplicateIssueNumber(
|
|
||||||
lastDupeComment.body
|
|
||||||
);
|
|
||||||
if (!duplicateIssueNumber) {
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
|
|
||||||
);
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
candidateCount++;
|
|
||||||
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
|
|
||||||
|
|
||||||
try {
|
|
||||||
console.log(
|
|
||||||
`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
|
|
||||||
);
|
|
||||||
await closeIssueAsDuplicate(
|
|
||||||
owner,
|
|
||||||
repo,
|
|
||||||
issue.number,
|
|
||||||
duplicateIssueNumber,
|
|
||||||
token
|
|
||||||
);
|
|
||||||
console.log(
|
|
||||||
`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
|
|
||||||
);
|
|
||||||
} catch (error) {
|
|
||||||
console.error(
|
|
||||||
`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
autoCloseDuplicates().catch(console.error);
|
|
||||||
178
.github/scripts/backfill-duplicate-comments.mjs
vendored
178
.github/scripts/backfill-duplicate-comments.mjs
vendored
@@ -1,178 +0,0 @@
|
|||||||
#!/usr/bin/env node
|
|
||||||
|
|
||||||
async function githubRequest(endpoint, token, method = 'GET', body) {
|
|
||||||
const response = await fetch(`https://api.github.com${endpoint}`, {
|
|
||||||
method,
|
|
||||||
headers: {
|
|
||||||
Authorization: `Bearer ${token}`,
|
|
||||||
Accept: 'application/vnd.github.v3+json',
|
|
||||||
'User-Agent': 'backfill-duplicate-comments-script',
|
|
||||||
...(body && { 'Content-Type': 'application/json' })
|
|
||||||
},
|
|
||||||
...(body && { body: JSON.stringify(body) })
|
|
||||||
});
|
|
||||||
|
|
||||||
if (!response.ok) {
|
|
||||||
throw new Error(
|
|
||||||
`GitHub API request failed: ${response.status} ${response.statusText}`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
return response.json();
|
|
||||||
}
|
|
||||||
|
|
||||||
async function triggerDedupeWorkflow(
|
|
||||||
owner,
|
|
||||||
repo,
|
|
||||||
issueNumber,
|
|
||||||
token,
|
|
||||||
dryRun = true
|
|
||||||
) {
|
|
||||||
if (dryRun) {
|
|
||||||
console.log(
|
|
||||||
`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
|
|
||||||
);
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
|
|
||||||
token,
|
|
||||||
'POST',
|
|
||||||
{
|
|
||||||
ref: 'main',
|
|
||||||
inputs: {
|
|
||||||
issue_number: issueNumber.toString()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
async function backfillDuplicateComments() {
|
|
||||||
console.log('[DEBUG] Starting backfill duplicate comments script');
|
|
||||||
|
|
||||||
const token = process.env.GITHUB_TOKEN;
|
|
||||||
if (!token) {
|
|
||||||
throw new Error(`GITHUB_TOKEN environment variable is required
|
|
||||||
|
|
||||||
Usage:
|
|
||||||
node .github/scripts/backfill-duplicate-comments.mjs
|
|
||||||
|
|
||||||
Environment Variables:
|
|
||||||
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
|
|
||||||
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
|
|
||||||
DAYS_BACK - How many days back to look for old issues (default: 90)`);
|
|
||||||
}
|
|
||||||
console.log('[DEBUG] GitHub token found');
|
|
||||||
|
|
||||||
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
|
|
||||||
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
|
|
||||||
const dryRun = process.env.DRY_RUN !== 'false';
|
|
||||||
const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);
|
|
||||||
|
|
||||||
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
|
|
||||||
console.log(`[DEBUG] Dry run mode: ${dryRun}`);
|
|
||||||
console.log(`[DEBUG] Looking back ${daysBack} days`);
|
|
||||||
|
|
||||||
const cutoffDate = new Date();
|
|
||||||
cutoffDate.setDate(cutoffDate.getDate() - daysBack);
|
|
||||||
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
|
|
||||||
);
|
|
||||||
const allIssues = [];
|
|
||||||
let page = 1;
|
|
||||||
const perPage = 100;
|
|
||||||
|
|
||||||
while (true) {
|
|
||||||
const pageIssues = await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
|
|
||||||
token
|
|
||||||
);
|
|
||||||
|
|
||||||
if (pageIssues.length === 0) break;
|
|
||||||
|
|
||||||
allIssues.push(...pageIssues);
|
|
||||||
page++;
|
|
||||||
|
|
||||||
// Safety limit to avoid infinite loops
|
|
||||||
if (page > 100) {
|
|
||||||
console.log('[DEBUG] Reached page limit, stopping pagination');
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
|
|
||||||
);
|
|
||||||
|
|
||||||
let processedCount = 0;
|
|
||||||
let candidateCount = 0;
|
|
||||||
let triggeredCount = 0;
|
|
||||||
|
|
||||||
for (const issue of allIssues) {
|
|
||||||
processedCount++;
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
|
|
||||||
);
|
|
||||||
|
|
||||||
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
|
|
||||||
const comments = await githubRequest(
|
|
||||||
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
|
|
||||||
token
|
|
||||||
);
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
|
|
||||||
);
|
|
||||||
|
|
||||||
// Look for existing duplicate detection comments (from the dedupe bot)
|
|
||||||
const dupeDetectionComments = comments.filter(
|
|
||||||
(comment) =>
|
|
||||||
comment.body.includes('Found') &&
|
|
||||||
comment.body.includes('possible duplicate') &&
|
|
||||||
comment.user.type === 'Bot'
|
|
||||||
);
|
|
||||||
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
|
|
||||||
);
|
|
||||||
|
|
||||||
// Skip if there's already a duplicate detection comment
|
|
||||||
if (dupeDetectionComments.length > 0) {
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
|
|
||||||
);
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
candidateCount++;
|
|
||||||
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
|
|
||||||
|
|
||||||
try {
|
|
||||||
console.log(
|
|
||||||
`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
|
|
||||||
);
|
|
||||||
await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);
|
|
||||||
|
|
||||||
if (!dryRun) {
|
|
||||||
console.log(
|
|
||||||
`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
triggeredCount++;
|
|
||||||
} catch (error) {
|
|
||||||
console.error(
|
|
||||||
`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add a delay between workflow triggers to avoid overwhelming the system
|
|
||||||
await new Promise((resolve) => setTimeout(resolve, 1000));
|
|
||||||
}
|
|
||||||
|
|
||||||
console.log(
|
|
||||||
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
backfillDuplicateComments().catch(console.error);
|
|
||||||
102
.github/scripts/check-pre-release-mode.mjs
vendored
102
.github/scripts/check-pre-release-mode.mjs
vendored
@@ -1,102 +0,0 @@
|
|||||||
#!/usr/bin/env node
|
|
||||||
import { readFileSync, existsSync } from 'node:fs';
|
|
||||||
import { join, dirname, resolve } from 'node:path';
|
|
||||||
import { fileURLToPath } from 'node:url';
|
|
||||||
|
|
||||||
const __filename = fileURLToPath(import.meta.url);
|
|
||||||
const __dirname = dirname(__filename);
|
|
||||||
|
|
||||||
// Get context from command line argument or environment
|
|
||||||
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';
|
|
||||||
|
|
||||||
function findRootDir(startDir) {
|
|
||||||
let currentDir = resolve(startDir);
|
|
||||||
while (currentDir !== '/') {
|
|
||||||
if (existsSync(join(currentDir, 'package.json'))) {
|
|
||||||
try {
|
|
||||||
const pkg = JSON.parse(
|
|
||||||
readFileSync(join(currentDir, 'package.json'), 'utf8')
|
|
||||||
);
|
|
||||||
if (pkg.name === 'task-master-ai' || pkg.repository) {
|
|
||||||
return currentDir;
|
|
||||||
}
|
|
||||||
} catch {}
|
|
||||||
}
|
|
||||||
currentDir = dirname(currentDir);
|
|
||||||
}
|
|
||||||
throw new Error('Could not find root directory');
|
|
||||||
}
|
|
||||||
|
|
||||||
function checkPreReleaseMode() {
|
|
||||||
console.log('🔍 Checking if branch is in pre-release mode...');
|
|
||||||
|
|
||||||
const rootDir = findRootDir(__dirname);
|
|
||||||
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
|
|
||||||
|
|
||||||
// Check if pre.json exists
|
|
||||||
if (!existsSync(preJsonPath)) {
|
|
||||||
console.log('✅ Not in active pre-release mode - safe to proceed');
|
|
||||||
process.exit(0);
|
|
||||||
}
|
|
||||||
|
|
||||||
try {
|
|
||||||
// Read and parse pre.json
|
|
||||||
const preJsonContent = readFileSync(preJsonPath, 'utf8');
|
|
||||||
const preJson = JSON.parse(preJsonContent);
|
|
||||||
|
|
||||||
// Check if we're in active pre-release mode
|
|
||||||
if (preJson.mode === 'pre') {
|
|
||||||
console.error('❌ ERROR: This branch is in active pre-release mode!');
|
|
||||||
console.error('');
|
|
||||||
|
|
||||||
// Provide context-specific error messages
|
|
||||||
if (context === 'Release Check' || context === 'pull_request') {
|
|
||||||
console.error(
|
|
||||||
'Pre-release mode must be exited before merging to main.'
|
|
||||||
);
|
|
||||||
console.error('');
|
|
||||||
console.error(
|
|
||||||
'To fix this, run the following commands in your branch:'
|
|
||||||
);
|
|
||||||
console.error(' npx changeset pre exit');
|
|
||||||
console.error(' git add -u');
|
|
||||||
console.error(' git commit -m "chore: exit pre-release mode"');
|
|
||||||
console.error(' git push');
|
|
||||||
console.error('');
|
|
||||||
console.error('Then update this pull request.');
|
|
||||||
} else if (context === 'Release' || context === 'main') {
|
|
||||||
console.error(
|
|
||||||
'Pre-release mode should only be used on feature branches, not main.'
|
|
||||||
);
|
|
||||||
console.error('');
|
|
||||||
console.error('To fix this, run the following commands locally:');
|
|
||||||
console.error(' npx changeset pre exit');
|
|
||||||
console.error(' git add -u');
|
|
||||||
console.error(' git commit -m "chore: exit pre-release mode"');
|
|
||||||
console.error(' git push origin main');
|
|
||||||
console.error('');
|
|
||||||
console.error('Then re-run this workflow.');
|
|
||||||
} else {
|
|
||||||
console.error('Pre-release mode must be exited before proceeding.');
|
|
||||||
console.error('');
|
|
||||||
console.error('To fix this, run the following commands:');
|
|
||||||
console.error(' npx changeset pre exit');
|
|
||||||
console.error(' git add -u');
|
|
||||||
console.error(' git commit -m "chore: exit pre-release mode"');
|
|
||||||
console.error(' git push');
|
|
||||||
}
|
|
||||||
|
|
||||||
process.exit(1);
|
|
||||||
}
|
|
||||||
|
|
||||||
console.log('✅ Not in active pre-release mode - safe to proceed');
|
|
||||||
process.exit(0);
|
|
||||||
} catch (error) {
|
|
||||||
console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
|
|
||||||
console.error(`Error details: ${error.message}`);
|
|
||||||
process.exit(1);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Run the check
|
|
||||||
checkPreReleaseMode();
|
|
||||||
157  .github/scripts/parse-metrics.mjs  (vendored)
@@ -1,157 +0,0 @@
#!/usr/bin/env node

import { readFileSync, existsSync, writeFileSync } from 'fs';

function parseMetricsTable(content, metricName) {
	const lines = content.split('\n');

	for (let i = 0; i < lines.length; i++) {
		const line = lines[i].trim();
		// Match a markdown table row like: | Metric Name | value | ...
		const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
		const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
		const match = line.match(re);
		if (match) {
			return match[1].trim() || 'N/A';
		}
	}
	return 'N/A';
}

function parseCountMetric(content, metricName) {
	const result = parseMetricsTable(content, metricName);
	// Extract number from string, handling commas and spaces
	const numberMatch = result.toString().match(/[\d,]+/);
	if (numberMatch) {
		const number = parseInt(numberMatch[0].replace(/,/g, ''));
		return isNaN(number) ? 0 : number;
	}
	return 0;
}

function main() {
	const metrics = {
		issues_created: 0,
		issues_closed: 0,
		prs_created: 0,
		prs_merged: 0,
		issue_avg_first_response: 'N/A',
		issue_avg_time_to_close: 'N/A',
		pr_avg_first_response: 'N/A',
		pr_avg_merge_time: 'N/A'
	};

	// Parse issue metrics
	if (existsSync('issue_metrics.md')) {
		console.log('📄 Found issue_metrics.md, parsing...');
		const issueContent = readFileSync('issue_metrics.md', 'utf8');

		metrics.issues_created = parseCountMetric(
			issueContent,
			'Total number of items created'
		);
		metrics.issues_closed = parseCountMetric(
			issueContent,
			'Number of items closed'
		);
		metrics.issue_avg_first_response = parseMetricsTable(
			issueContent,
			'Time to first response'
		);
		metrics.issue_avg_time_to_close = parseMetricsTable(
			issueContent,
			'Time to close'
		);
	} else {
		console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
	}

	// Parse PR created metrics
	if (existsSync('pr_created_metrics.md')) {
		console.log('📄 Found pr_created_metrics.md, parsing...');
		const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');

		metrics.prs_created = parseCountMetric(
			prCreatedContent,
			'Total number of items created'
		);
		metrics.pr_avg_first_response = parseMetricsTable(
			prCreatedContent,
			'Time to first response'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_created_metrics.md not found; using defaults.'
		);
	}

	// Parse PR merged metrics (for more accurate merge data)
	if (existsSync('pr_merged_metrics.md')) {
		console.log('📄 Found pr_merged_metrics.md, parsing...');
		const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');

		metrics.prs_merged = parseCountMetric(
			prMergedContent,
			'Total number of items created'
		);
		// For merged PRs, "Time to close" is actually time to merge
		metrics.pr_avg_merge_time = parseMetricsTable(
			prMergedContent,
			'Time to close'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
		);
		// Fallback: try old pr_metrics.md if it exists
		if (existsSync('pr_metrics.md')) {
			console.log('📄 Falling back to pr_metrics.md...');
			const prContent = readFileSync('pr_metrics.md', 'utf8');

			const mergedCount = parseCountMetric(prContent, 'Number of items merged');
			metrics.prs_merged =
				mergedCount || parseCountMetric(prContent, 'Number of items closed');

			const maybeMergeTime = parseMetricsTable(
				prContent,
				'Average time to merge'
			);
			metrics.pr_avg_merge_time =
				maybeMergeTime !== 'N/A'
					? maybeMergeTime
					: parseMetricsTable(prContent, 'Time to close');
		} else {
			console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
		}
	}

	// Output for GitHub Actions
	const output = Object.entries(metrics)
		.map(([key, value]) => `${key}=${value}`)
		.join('\n');

	// Always output to stdout for debugging
	console.log('\n=== FINAL METRICS ===');
	Object.entries(metrics).forEach(([key, value]) => {
		console.log(`${key}: ${value}`);
	});

	// Write to GITHUB_OUTPUT if in GitHub Actions
	if (process.env.GITHUB_OUTPUT) {
		try {
			writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
			console.log(
				`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
			);
		} catch (error) {
			console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
			process.exit(1);
		}
	} else {
		console.log(
			'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
		);
	}
}

main();
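To make the table parsing above concrete, here is a small usage sketch; the markdown rows are invented sample data, and the two helper functions are assumed to be in scope (they are not exported from the script).

```js
// Usage sketch for parseMetricsTable / parseCountMetric (sample input is made up).
const sample = [
	'| Metric | Average |',
	'| --- | --- |',
	'| Time to first response | 4:10:22 |',
	'| Total number of items created | 1,204 |'
].join('\n');

console.log(parseMetricsTable(sample, 'Time to first response')); // "4:10:22"
console.log(parseCountMetric(sample, 'Total number of items created')); // 1204
```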
30  .github/scripts/release.mjs  (vendored)
@@ -1,30 +0,0 @@
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

console.log('🚀 Starting release process...');

// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
	console.log('⚠️ Warning: pre.json still exists. Removing it...');
	unlinkSync(preJsonPath);
}

// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);

// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);

console.log('✅ Release process completed!');

// The extension tag (if created) will trigger the extension-release workflow
33  .github/scripts/tag-extension.mjs  (vendored)
@@ -1,33 +0,0 @@
#!/usr/bin/env node
import assert from 'node:assert/strict';
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, createAndPushTag } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

// Read the extension's package.json
const extensionDir = join(rootDir, 'apps', 'extension');
const pkgPath = join(extensionDir, 'package.json');

let pkg;
try {
	const pkgContent = readFileSync(pkgPath, 'utf8');
	pkg = JSON.parse(pkgContent);
} catch (error) {
	console.error('Failed to read package.json:', error.message);
	process.exit(1);
}

// Ensure we have required fields
assert(pkg.name, 'package.json must have a name field');
assert(pkg.version, 'package.json must have a version field');

const tag = `${pkg.name}@${pkg.version}`;

// Create and push the tag if it doesn't exist
createAndPushTag(tag);
88  .github/scripts/utils.mjs  (vendored)
@@ -1,88 +0,0 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';

// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		const pkgPath = join(currentDir, 'package.json');
		try {
			const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
			if (pkg.name === 'task-master-ai' || pkg.repository) {
				return currentDir;
			}
		} catch {}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
	console.log(`Running: ${command} ${args.join(' ')}`);
	const result = spawnSync(command, args, {
		encoding: 'utf8',
		stdio: 'inherit',
		...options
	});

	if (result.status !== 0) {
		console.error(`Command failed with exit code ${result.status}`);
		process.exit(result.status);
	}

	return result;
}

// Get package version from a package.json file
export function getPackageVersion(packagePath) {
	try {
		const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
		return pkg.version;
	} catch (error) {
		console.error(
			`Failed to read package version from ${packagePath}:`,
			error.message
		);
		process.exit(1);
	}
}

// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
	const result = spawnSync('git', ['ls-remote', remote, tag], {
		encoding: 'utf8'
	});

	return result.status === 0 && result.stdout.trim() !== '';
}

// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
	// Check if tag already exists
	if (tagExistsOnRemote(tag, remote)) {
		console.log(`Tag ${tag} already exists on remote, skipping`);
		return false;
	}

	console.log(`Creating new tag: ${tag}`);

	// Create the tag locally
	const tagResult = spawnSync('git', ['tag', tag]);
	if (tagResult.status !== 0) {
		console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
		process.exit(1);
	}

	// Push the tag to remote
	const pushResult = spawnSync('git', ['push', remote, tag]);
	if (pushResult.status !== 0) {
		console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
		process.exit(1);
	}

	console.log(`✅ Successfully created and pushed tag: ${tag}`);
	return true;
}
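A brief usage sketch of how these helpers compose; the package path and tag string are examples only, and the real callers are the release.mjs and tag-extension.mjs scripts above.

```js
// Illustrative only: read a version and tag it, skipping if the tag already exists on origin.
import { createAndPushTag, getPackageVersion } from './utils.mjs';

const version = getPackageVersion('apps/extension/package.json');
const created = createAndPushTag(`extension@${version}`);
console.log(created ? 'Tag created and pushed' : 'Tag already existed, nothing to do');
```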
31  .github/workflows/auto-close-duplicates.yml  (vendored)
@@ -1,31 +0,0 @@
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues

on:
  schedule:
    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
  workflow_dispatch:

jobs:
  auto-close-duplicates:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write # Need write permission to close issues and add comments

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Auto-close duplicate issues
        run: node .github/scripts/auto-close-duplicates.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
@@ -1,46 +0,0 @@
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments

on:
  workflow_dispatch:
    inputs:
      days_back:
        description: "How many days back to look for old issues"
        required: false
        default: "90"
        type: string
      dry_run:
        description: "Dry run mode (true to only log what would be done)"
        required: false
        default: "true"
        type: choice
        options:
          - "true"
          - "false"

jobs:
  backfill-duplicate-comments:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: read
      issues: read
      actions: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Backfill duplicate comments
        run: node .github/scripts/backfill-duplicate-comments.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
          DAYS_BACK: ${{ inputs.days_back }}
          DRY_RUN: ${{ inputs.dry_run }}
126  .github/workflows/ci.yml  (vendored)
@@ -6,124 +6,73 @@ on:
       - main
       - next
   pull_request:
-  workflow_dispatch:
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
-  cancel-in-progress: true
+    branches:
+      - main
+      - next
 
 permissions:
   contents: read
 
-env:
-  DO_NOT_TRACK: 1
-  NODE_ENV: development
-
 jobs:
-  # Fast checks that can run in parallel
-  format-check:
-    name: Format Check
+  setup:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
         with:
-          fetch-depth: 2
+          fetch-depth: 0
 
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'
 
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
+      - name: Install Dependencies
+        id: install
+        run: npm ci
+        timeout-minutes: 2
+
+      - name: Cache node_modules
+        uses: actions/cache@v4
+        with:
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
+
+  format-check:
+    needs: setup
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-node@v4
+        with:
+          node-version: 20
+
+      - name: Restore node_modules
+        uses: actions/cache@v4
+        with:
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
 
       - name: Format Check
         run: npm run format-check
         env:
           FORCE_COLOR: 1
 
-  typecheck:
-    name: Typecheck
-    timeout-minutes: 10
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20
-          cache: "npm"
-
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Typecheck
-        run: npm run turbo:typecheck
-        env:
-          FORCE_COLOR: 1
-
-  # Build job to ensure everything compiles
-  build:
-    name: Build
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20
-          cache: "npm"
-
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Build
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
-
-      - name: Upload build artifacts
-        uses: actions/upload-artifact@v4
-        with:
-          name: build-artifacts
-          path: dist/
-          retention-days: 1
-
   test:
-    name: Test
-    timeout-minutes: 15
+    needs: setup
     runs-on: ubuntu-latest
-    needs: [format-check, typecheck, build]
     steps:
       - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
 
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
 
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Download build artifacts
-        uses: actions/download-artifact@v4
+      - name: Restore node_modules
+        uses: actions/cache@v4
         with:
-          name: build-artifacts
-          path: dist/
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
 
       - name: Run Tests
         run: |
@@ -132,6 +81,7 @@ jobs:
           NODE_ENV: test
           CI: true
           FORCE_COLOR: 1
+        timeout-minutes: 10
 
       - name: Upload Test Results
         if: always()
81  .github/workflows/claude-dedupe-issues.yml  (vendored)
@@ -1,81 +0,0 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code

on:
  issues:
    types: [opened]
  workflow_dispatch:
    inputs:
      issue_number:
        description: "Issue number to process for duplicate detection"
        required: true
        type: string

jobs:
  claude-dedupe-issues:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code slash command
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Log duplicate comment event to Statsig
        if: always()
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
          REPO=${{ github.repository }}

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg triggered_by "${{ github.event_name }}" \
            '{
              events: [{
                eventName: "github_duplicate_comment_added",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  triggered_by: $triggered_by,
                  workflow_run_id: "${{ github.run_id }}"
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
57  .github/workflows/claude-docs-trigger.yml  (vendored)
@@ -1,57 +0,0 @@
name: Trigger Claude Documentation Update

on:
  push:
    branches:
      - next
    paths-ignore:
      - "apps/docs/**"
      - "*.md"
      - ".github/workflows/**"

jobs:
  trigger-docs-update:
    # Only run if changes were merged (not direct pushes from bots)
    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need previous commit for comparison

      - name: Get changed files
        id: changed-files
        run: |
          echo "Changed files in this push:"
          git diff --name-only HEAD^ HEAD | tee changed_files.txt

          # Store changed files for Claude to analyze (escaped for JSON)
          CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
          echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT

          # Get the commit message (escaped for JSON)
          COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
          echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT

          # Get diff for documentation context (escaped for JSON)
          COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
          echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT

          # Get commit SHA
          echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT

      - name: Trigger Claude workflow
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Trigger the Claude docs updater workflow with the change information
          gh workflow run claude-docs-updater.yml \
            --ref next \
            -f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
            -f commit_message=${{ steps.changed-files.outputs.commit_message }} \
            -f changed_files=${{ steps.changed-files.outputs.changed_files }} \
            -f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
145  .github/workflows/claude-docs-updater.yml  (vendored)
@@ -1,145 +0,0 @@
name: Claude Documentation Updater

on:
  workflow_dispatch:
    inputs:
      commit_sha:
        description: 'The commit SHA that triggered this update'
        required: true
        type: string
      commit_message:
        description: 'The commit message'
        required: true
        type: string
      changed_files:
        description: 'List of changed files'
        required: true
        type: string
      commit_diff:
        description: 'Diff summary of changes'
        required: true
        type: string

jobs:
  update-docs:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: next
          fetch-depth: 0 # Need full history to checkout specific commit

      - name: Create docs update branch
        id: create-branch
        run: |
          BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
          git checkout -b $BRANCH_NAME
          echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT

      - name: Run Claude Code to Update Documentation
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          timeout_minutes: "30"
          mode: "agent"
          github_token: ${{ secrets.GITHUB_TOKEN }}
          experimental_allowed_domains: |
            .anthropic.com
            .github.com
            api.github.com
            .githubusercontent.com
            registry.npmjs.org
            .task-master.dev
          base_branch: "next"
          direct_prompt: |
            You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.

            Recent changes:
            - Commit: ${{ inputs.commit_message }}
            - Changed files:
            ${{ inputs.changed_files }}

            - Changes summary:
            ${{ inputs.commit_diff }}

            Your task:
            1. Analyze the changes to understand what functionality was added, modified, or removed
            2. Check if these changes require documentation updates in apps/docs/
            3. If documentation updates are needed:
               - Update relevant documentation files in apps/docs/
               - Ensure examples are updated if APIs changed
               - Update any configuration documentation if config options changed
               - Add new documentation pages if new features were added
               - Update the changelog or release notes if applicable
            4. If no documentation updates are needed, skip creating changes

            Guidelines:
            - Focus only on user-facing changes that need documentation
            - Keep documentation clear, concise, and helpful
            - Include code examples where appropriate
            - Maintain consistent documentation style with existing docs
            - Don't document internal implementation details unless they affect users
            - Update navigation/menu files if new pages are added

            Only make changes if the documentation truly needs updating based on the code changes.

      - name: Check if changes were made
        id: check-changes
        run: |
          if git diff --quiet; then
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "has_changes=true" >> $GITHUB_OUTPUT
            git add -A
            git config --local user.email "github-actions[bot]@users.noreply.github.com"
            git config --local user.name "github-actions[bot]"
            git commit -m "docs: auto-update documentation based on changes in next branch

          This PR was automatically generated to update documentation based on recent changes.

          Original commit: ${{ inputs.commit_message }}

          Co-authored-by: Claude <claude-assistant@anthropic.com>"
          fi

      - name: Push changes and create PR
        if: steps.check-changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          git push origin ${{ steps.create-branch.outputs.branch_name }}

          # Create PR using GitHub CLI
          gh pr create \
            --title "docs: update documentation for recent changes" \
            --body "## 📚 Documentation Update

          This PR automatically updates documentation based on recent changes merged to the \`next\` branch.

          ### Original Changes
          **Commit:** ${{ inputs.commit_sha }}
          **Message:** ${{ inputs.commit_message }}

          ### Changed Files in Original Commit
          \`\`\`
          ${{ inputs.changed_files }}
          \`\`\`

          ### Documentation Updates
          This PR includes documentation updates to reflect the changes above. Please review to ensure:
          - [ ] Documentation accurately reflects the changes
          - [ ] Examples are correct and working
          - [ ] No important details are missing
          - [ ] Style is consistent with existing documentation

          ---
          *This PR was automatically generated by Claude Code GitHub Action*" \
            --base next \
            --head ${{ steps.create-branch.outputs.branch_name }} \
            --label "documentation" \
            --label "automated"
107  .github/workflows/claude-issue-triage.yml  (vendored)
@@ -1,107 +0,0 @@
name: Claude Issue Triage
# description: Automatically triage GitHub issues using Claude Code

on:
  issues:
    types: [opened]

jobs:
  triage-issue:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Create triage prompt
        run: |
          mkdir -p /tmp/claude-prompts
          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

          Issue Information:
          - REPO: ${{ github.repository }}
          - ISSUE_NUMBER: ${{ github.event.issue.number }}

          TASK OVERVIEW:

          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

          2. Next, use the GitHub tools to get context about the issue:
             - You have access to these tools:
             - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
             - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
             - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
             - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
             - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
             - Start by using mcp__github__get_issue to get the issue details

          3. Analyze the issue content, considering:
             - The issue title and description
             - The type of issue (bug report, feature request, question, etc.)
             - Technical areas mentioned
             - Severity or priority indicators
             - User impact
             - Components affected

          4. Select appropriate labels from the available labels list provided above:
             - Choose labels that accurately reflect the issue's nature
             - Be specific but comprehensive
             - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
             - Consider platform labels (android, ios) if applicable
             - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

          5. Apply the selected labels:
             - Use mcp__github__update_issue to apply your selected labels
             - DO NOT post any comments explaining your decision
             - DO NOT communicate directly with users
             - If no labels are clearly applicable, do not apply any labels

          IMPORTANT GUIDELINES:
          - Be thorough in your analysis
          - Only select labels from the provided list above
          - DO NOT post any comments to the issue
          - Your ONLY action should be to apply labels using mcp__github__update_issue
          - It's okay to not add any labels if none are clearly applicable
          EOF

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt_file: /tmp/claude-prompts/triage-prompt.txt
          allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
          timeout_minutes: "5"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          mcp_config: /tmp/mcp-config/mcp-servers.json
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
36  .github/workflows/claude.yml  (vendored)
@@ -1,36 +0,0 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
140  .github/workflows/extension-ci.yml  (vendored)
@@ -1,140 +0,0 @@
name: Extension CI

on:
  push:
    branches:
      - main
      - next
    paths:
      - 'apps/extension/**'
      - '.github/workflows/extension-ci.yml'
  pull_request:
    branches:
      - main
      - next
    paths:
      - 'apps/extension/**'
      - '.github/workflows/extension-ci.yml'

permissions:
  contents: read

jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Monorepo Dependencies
        run: npm ci
        timeout-minutes: 5

  typecheck:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Restore node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install if cache miss
        run: npm ci
        timeout-minutes: 3

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

  build:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Restore node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install if cache miss
        run: npm ci
        timeout-minutes: 3

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Verify Package Contents
        working-directory: apps/extension
        run: |
          echo "Checking vsix-build contents..."
          ls -la vsix-build/
          echo "Checking dist contents..."
          ls -la vsix-build/dist/
          echo "Checking package.json exists..."
          test -f vsix-build/package.json

      - name: Create VSIX Package (Test)
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies
        env:
          FORCE_COLOR: 1

      - name: Upload Extension Artifact
        uses: actions/upload-artifact@v4
        with:
          name: extension-package
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 30
110  .github/workflows/extension-release.yml  (vendored)
@@ -1,110 +0,0 @@
name: Extension Release

on:
  push:
    tags:
      - "extension@*"

permissions:
  contents: write

concurrency: extension-release-${{ github.ref }}

jobs:
  publish-extension:
    runs-on: ubuntu-latest
    environment: extension-release
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Monorepo Dependencies
        run: npm ci
        timeout-minutes: 5

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Create VSIX Package
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies
        env:
          FORCE_COLOR: 1

      - name: Get VSIX filename
        id: vsix-info
        working-directory: apps/extension/vsix-build
        run: |
          VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
          if [ -z "$VSIX_FILE" ]; then
            echo "Error: No VSIX file found"
            exit 1
          fi
          echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
          echo "Found VSIX: $VSIX_FILE"

      - name: Publish to VS Code Marketplace
        working-directory: apps/extension/vsix-build
        run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}"
        env:
          VSCE_PAT: ${{ secrets.VSCE_PAT }}
          FORCE_COLOR: 1

      - name: Install Open VSX CLI
        run: npm install -g ovsx

      - name: Publish to Open VSX Registry
        working-directory: apps/extension/vsix-build
        run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}"
        env:
          OVSX_PAT: ${{ secrets.OVSX_PAT }}
          FORCE_COLOR: 1

      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: extension-release-${{ github.ref_name }}
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 90

  notify-success:
    needs: publish-extension
    if: success()
    runs-on: ubuntu-latest
    steps:
      - name: Success Notification
        run: |
          echo "🎉 Extension ${{ github.ref_name }} successfully published!"
          echo "📦 Available on VS Code Marketplace"
          echo "🌍 Available on Open VSX Registry"
          echo "🏷️ GitHub release created: ${{ github.ref_name }}"
176  .github/workflows/log-issue-events.yml  (vendored)
@@ -1,176 +0,0 @@
name: Log GitHub Issue Events

on:
  issues:
    types: [opened, closed]

jobs:
  log-issue-created:
    if: github.event.action == 'opened'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue creation to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          AUTHOR="${{ github.event.issue.user.login }}"
          CREATED_AT="${{ github.event.issue.created_at }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg author "$AUTHOR" \
            --arg created_at "$CREATED_AT" \
            '{
              events: [{
                eventName: "github_issue_created",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  issue_author: $author,
                  created_at: $created_at
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi

  log-issue-closed:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue closure to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          CLOSED_BY="${{ github.event.issue.closed_by.login }}"
          CLOSED_AT="${{ github.event.issue.closed_at }}"
          STATE_REASON="${{ github.event.issue.state_reason }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Get additional issue data via GitHub API
          echo "Fetching additional issue data for #${ISSUE_NUMBER}"
          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")

          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')

          # Get reactions data
          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")

          REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')

          # Check if issue was closed automatically (by checking if closed_by is a bot)
          CLOSED_AUTOMATICALLY="false"
          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
            CLOSED_AUTOMATICALLY="true"
          fi

          # Check if closed as duplicate by state_reason
          CLOSED_AS_DUPLICATE="false"
          if [ "$STATE_REASON" = "duplicate" ]; then
            CLOSED_AS_DUPLICATE="true"
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg closed_by "$CLOSED_BY" \
            --arg closed_at "$CLOSED_AT" \
            --arg state_reason "$STATE_REASON" \
            --arg comments_count "$COMMENTS_COUNT" \
            --arg reactions_count "$REACTIONS_COUNT" \
            --arg closed_automatically "$CLOSED_AUTOMATICALLY" \
            --arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
            '{
              events: [{
                eventName: "github_issue_closed",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  closed_by: $closed_by,
                  closed_at: $closed_at,
                  state_reason: $state_reason,
                  comments_count: ($comments_count | tonumber),
                  reactions_count: ($reactions_count | tonumber),
                  closed_automatically: ($closed_automatically | test("true")),
                  closed_as_duplicate: ($closed_as_duplicate | test("true"))
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
            echo "Closed by: $CLOSED_BY"
            echo "Comments: $COMMENTS_COUNT"
            echo "Reactions: $REACTIONS_COUNT"
            echo "Closed automatically: $CLOSED_AUTOMATICALLY"
            echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
          else
            echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
55  .github/workflows/pre-release.yml  (vendored)
@@ -3,13 +3,11 @@ name: Pre-Release (RC)
 on:
   workflow_dispatch: # Allows manual triggering from GitHub UI/API
 
-concurrency: pre-release-${{ github.ref_name }}
+concurrency: pre-release-${{ github.ref }}
 
 jobs:
   rc:
     runs-on: ubuntu-latest
-    # Only allow pre-releases on non-main branches
-    if: github.ref != 'refs/heads/main'
-    environment: extension-release
     steps:
       - uses: actions/checkout@v4
         with:
@@ -18,7 +16,7 @@ jobs:
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'
 
       - name: Cache node_modules
         uses: actions/cache@v4
@@ -34,30 +32,10 @@ jobs:
         run: npm ci
         timeout-minutes: 2
 
-      - name: Enter RC mode (if not already in RC mode)
+      - name: Enter RC mode
         run: |
-          # Check if we're in pre-release mode with the "rc" tag
-          if [ -f .changeset/pre.json ]; then
-            MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
-            TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
-
-            if [ "$MODE" = "exit" ]; then
-              echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
-              echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
-              npx changeset pre exit
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
-              echo "Already in RC pre-release mode"
-            else
-              echo "Unknown mode state: $MODE, entering RC mode..."
-              npx changeset pre enter rc
-            fi
-          else
-            echo "No pre.json found, entering RC mode..."
-            npx changeset pre enter rc
-          fi
+          npx changeset pre exit || true
+          npx changeset pre enter rc
 
       - name: Version RC packages
         run: npx changeset version
@@ -65,31 +43,20 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
 
-      - name: Run format
-        run: npm run format
-        env:
-          FORCE_COLOR: 1
-
-      - name: Build packages
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
-
       - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
         uses: changesets/action@v1
         with:
-          publish: npx changeset publish
+          publish: npm run release
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
 
+      - name: Exit RC mode
+        run: npx changeset pre exit
+
       - name: Commit & Push changes
         uses: actions-js/push@master
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           branch: ${{ github.ref }}
-          message: "chore: rc version bump"
+          message: 'chore: rc version bump'
21  .github/workflows/release-check.yml  (vendored)
@@ -1,21 +0,0 @@
name: Release Check

on:
  pull_request:
    branches:
      - main

concurrency:
  group: release-check-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  check-release-mode:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Check release mode
        run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"
22  .github/workflows/release.yml  (vendored)
@@ -6,11 +6,6 @@ on:
 
 concurrency: ${{ github.workflow }}-${{ github.ref }}
 
-permissions:
-  contents: write
-  pull-requests: write
-  id-token: write
-
 jobs:
   release:
     runs-on: ubuntu-latest
@@ -22,7 +17,7 @@ jobs:
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'
 
       - name: Cache node_modules
         uses: actions/cache@v4
@@ -38,22 +33,13 @@ jobs:
         run: npm ci
         timeout-minutes: 2
 
-      - name: Check pre-release mode
-        run: node ./.github/scripts/check-pre-release-mode.mjs "main"
-
-      - name: Build packages
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
+      - name: Exit pre-release mode (safety check)
+        run: npx changeset pre exit || true
 
       - name: Create Release Pull Request or Publish to npm
         uses: changesets/action@v1
         with:
-          publish: node ./.github/scripts/release.mjs
+          publish: npm run release
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
108 .github/workflows/weekly-metrics-discord.yml (vendored, deleted)
@@ -1,108 +0,0 @@
name: Weekly Metrics to Discord
# description: Sends weekly metrics summary to Discord channel

on:
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9 AM
  workflow_dispatch:

permissions:
  contents: read
  issues: read
  pull-requests: read

jobs:
  weekly-metrics:
    runs-on: ubuntu-latest
    env:
      DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Get dates for last 14 days
        run: |
          set -Eeuo pipefail
          # Last 14 days
          first_day=$(date -d "14 days ago" +%Y-%m-%d)
          last_day=$(date +%Y-%m-%d)

          echo "first_day=$first_day" >> $GITHUB_ENV
          echo "last_day=$last_day" >> $GITHUB_ENV
          echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
          echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV

      - name: Generate issue metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
          HIDE_TIME_TO_ANSWER: true
          HIDE_LABEL_METRICS: false
          OUTPUT_FILE: issue_metrics.md

      - name: Generate PR created metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_created_metrics.md

      - name: Generate PR merged metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_merged_metrics.md

      - name: Debug generated metrics
        run: |
          set -Eeuo pipefail
          echo "Listing markdown files in workspace:"
          ls -la *.md || true
          for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
            if [ -f "$f" ]; then
              echo "== $f (first 10 lines) =="
              head -n 10 "$f"
            else
              echo "Missing $f"
            fi
          done

      - name: Parse metrics
        id: metrics
        run: node .github/scripts/parse-metrics.mjs

      - name: Send to Discord
        uses: sarisia/actions-status-discord@v1
        if: env.DISCORD_WEBHOOK != ''
        with:
          webhook: ${{ env.DISCORD_WEBHOOK }}
          status: Success
          title: "📊 Weekly Metrics Report"
          description: |
            **${{ env.week_of }}**
            *${{ env.date_range }}*

            **🎯 Issues**
            • Created: ${{ steps.metrics.outputs.issues_created }}
            • Closed: ${{ steps.metrics.outputs.issues_closed }}
            • Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
            • Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}

            **🔀 Pull Requests**
            • Created: ${{ steps.metrics.outputs.prs_created }}
            • Merged: ${{ steps.metrics.outputs.prs_merged }}
            • Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
            • Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}

            **📈 Visual Analytics**
            https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
          color: 0x58AFFF
          username: Task Master Metrics Bot
          avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
23 .gitignore (vendored)
@@ -77,26 +77,3 @@ dev-debug.log

 # NPMRC
 .npmrc
-
-# Added by Task Master AI
-# Editor directories and files
-.idea
-.vscode
-*.suo
-*.ntvs*
-*.njsproj
-*.sln
-*.sw?
-
-# VS Code extension test files
-.vscode-test/
-apps/extension/.vscode-test/
-
-# apps/extension
-apps/extension/vsix-build/
-
-# turbo
-.turbo
-
-# TaskMaster Workflow State (now stored in ~/.taskmaster/sessions/)
-# No longer needed in .gitignore as state is stored globally
@@ -1,23 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Code Change Task Tracker",
  "description": "Track implementation progress by monitoring code changes",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/*.{js,ts,jsx,tsx,py,go,rs,java,cpp,c,h,hpp,cs,rb,php,swift,kt,scala,clj}",
      "!**/node_modules/**",
      "!**/vendor/**",
      "!**/.git/**",
      "!**/build/**",
      "!**/dist/**",
      "!**/target/**",
      "!**/__pycache__/**"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "I just saved a source code file. Please:\n\n1. Check what task is currently 'in-progress' using 'tm list --status=in-progress'\n2. Look at the file I saved and summarize what was changed (considering the programming language and context)\n3. Update the task's notes with: 'tm update-subtask --id=<task_id> --prompt=\"Implemented: <summary_of_changes> in <file_path>\"'\n4. If the changes seem to complete the task based on its description, ask if I want to mark it as done"
  }
}
@@ -1,16 +0,0 @@
{
  "enabled": false,
  "name": "[TM] Complexity Analyzer",
  "description": "Analyze task complexity when new tasks are added",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      ".taskmaster/tasks/tasks.json"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "New tasks were added to tasks.json. For each new task:\n\n1. Run 'tm analyze-complexity --id=<task_id>'\n2. If complexity score is > 7, automatically expand it: 'tm expand --id=<task_id> --num=5'\n3. Show the complexity analysis results\n4. Suggest task dependencies based on the expanded subtasks"
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Daily Standup Assistant",
  "description": "Morning workflow summary and task selection",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Good morning! Please provide my daily standup summary:\n\n1. Run 'tm list --status=done' and show tasks completed in the last 24 hours\n2. Run 'tm list --status=in-progress' to show current work\n3. Run 'tm next' to suggest the highest priority task to start\n4. Show the dependency graph for upcoming work\n5. Ask which task I'd like to focus on today"
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Git Commit Task Linker",
  "description": "Link commits to tasks for traceability",
  "version": "1",
  "when": {
    "type": "manual"
  },
  "then": {
    "type": "askAgent",
    "prompt": "I'm about to commit code. Please:\n\n1. Run 'git diff --staged' to see what's being committed\n2. Analyze the changes and suggest which tasks they relate to\n3. Generate a commit message in format: 'feat(task-<id>): <description>'\n4. Update the relevant tasks with a note about this commit\n5. Show the proposed commit message for approval"
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "[TM] PR Readiness Checker",
  "description": "Validate tasks before creating a pull request",
  "version": "1",
  "when": {
    "type": "manual"
  },
  "then": {
    "type": "askAgent",
    "prompt": "I'm about to create a PR. Please:\n\n1. List all tasks marked as 'done' in this branch\n2. For each done task, verify:\n - All subtasks are also done\n - Test files exist for new functionality\n - No TODO comments remain related to the task\n3. Generate a PR description listing completed tasks\n4. Suggest a PR title based on the main tasks completed"
  }
}
@@ -1,17 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Task Dependency Auto-Progression",
  "description": "Automatically progress tasks when dependencies are completed",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      ".taskmaster/tasks/tasks.json",
      ".taskmaster/tasks/*.json"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Check the tasks.json file for any tasks that just changed status to 'done'. For each completed task:\n\n1. Find all tasks that depend on it\n2. Check if those dependent tasks now have all their dependencies satisfied\n3. If a task has all dependencies met and is still 'pending', use the command 'tm set-status --id=<task_id> --status=in-progress' to start it\n4. Show me which tasks were auto-started and why"
  }
}
@@ -1,23 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Test Success Task Completer",
  "description": "Mark tasks as done when their tests pass",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/*test*.{js,ts,jsx,tsx,py,go,java,rb,php,rs,cpp,cs}",
      "**/*spec*.{js,ts,jsx,tsx,rb}",
      "**/test_*.py",
      "**/*_test.go",
      "**/*Test.java",
      "**/*Tests.cs",
      "!**/node_modules/**",
      "!**/vendor/**"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "A test file was just saved. Please:\n\n1. Identify the test framework/language and run the appropriate test command for this file (npm test, pytest, go test, cargo test, dotnet test, mvn test, etc.)\n2. If all tests pass, check which tasks mention this functionality\n3. For any matching tasks that are 'in-progress', ask if the passing tests mean the task is complete\n4. If confirmed, mark the task as done with 'tm set-status --id=<task_id> --status=done'"
  }
}
@@ -1,19 +0,0 @@
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
      }
    }
  }
}
@@ -1,422 +0,0 @@
---
inclusion: always
---

# Taskmaster Development Workflow

This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.

- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.

## The Basic Loop
The fundamental development cycle you will facilitate is:
1. **`list`**: Show the user what needs to be done.
2. **`next`**: Help the user decide what to work on.
3. **`show <id>`**: Provide details for a specific task.
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
5. **Implement**: The user writes the code and tests.
6. **`update-subtask`**: Log progress and findings on behalf of the user.
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
8. **Repeat**.

All your standard command executions should operate on the user's current task context, which defaults to `master`.
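A typical pass through this loop, written out as CLI calls, might look like the sketch below (the task and subtask IDs are placeholders):

```bash
# One illustrative pass through the basic loop (IDs are placeholders)
task-master list                              # show what needs to be done
task-master next                              # pick the next unblocked task
task-master show 3                            # read the details for task 3
task-master expand --id=3 --research          # break task 3 into subtasks
# ...implement and test the code for subtask 3.1...
task-master update-subtask --id=3.1 --prompt="Implemented the parser; notes on approach"
task-master set-status --id=3.1 --status=done
```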
---

## Standard Development Workflow Process

### Simple Workflow (Default Starting Point)

For new projects or when users are getting started, operate within the `master` tag context:

- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see @`taskmaster.md`) to generate initial tasks.json with tagged structure
- Configure rule sets during initialization with `--rules` flag (e.g., `task-master init --rules kiro,windsurf`) or manage them later with `task-master rules add/remove` commands
- Begin coding sessions with `get_tasks` / `task-master list` (see @`taskmaster.md`) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see @`taskmaster.md`)
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.md`) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see @`taskmaster.md`)
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- View specific task details using `get_task` / `task-master show <id>` (see @`taskmaster.md`) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see @`taskmaster.md`) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
- Implement code following task details, dependencies, and project standards
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see @`taskmaster.md`)
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see @`taskmaster.md`)
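Put together, a minimal session following this simple workflow might look like the following sketch (file names and task IDs are placeholders):

```bash
# Illustrative simple-workflow session (paths and IDs are placeholders)
task-master init --rules kiro,windsurf          # set up Taskmaster in the repo
task-master parse-prd --input=prd.txt           # generate initial tasks from a PRD
task-master analyze-complexity --research       # score task complexity
task-master complexity-report                   # review the formatted report
task-master next                                # pick the next eligible task
task-master expand --id=2 --force --research    # break the chosen task down
task-master set-status --id=2 --status=done     # mark it done once implemented
```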
---

## Leveling Up: Agent-Led Multi-Context Workflows

While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session.

**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.

### When to Introduce Tags: Your Decision Patterns

Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user.

#### Pattern 1: Simple Git Feature Branching
This is the most common and direct use case for tags.

- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
- **Tool to Use**: `task-master add-tag --from-branch`

#### Pattern 2: Team Collaboration
- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with shared master context.
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`

#### Pattern 3: Experiments or Risky Refactors
- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`

#### Pattern 4: Large Feature Initiatives (PRD-Driven)
This is a more structured approach for significant new features or epics.

- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
- **Your Implementation Flow**:
  1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
  2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
  3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
  4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag.
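Taken together, Pattern 4 can be run as a short command sequence like the sketch below (the tag name and PRD path are placeholders, and it assumes the `--tag` flag is accepted by these commands, as the command reference notes for most commands):

```bash
# Illustrative PRD-driven flow for a new feature tag (names are placeholders)
task-master add-tag feature-xyz --description="Tasks for the new XYZ feature"
# ...draft .taskmaster/docs/feature-xyz-prd.txt together with the user...
task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz
task-master analyze-complexity --tag=feature-xyz --research
task-master expand --all --tag=feature-xyz --research
```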
#### Pattern 5: Version-Based Development
Tailor your approach based on the project maturity indicated by tag names.

- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
  - **Your Approach**: Focus on speed and functionality over perfection
  - **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
  - **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
  - **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
  - **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*

- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
  - **Your Approach**: Emphasize robustness, testing, and maintainability
  - **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
  - **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
  - **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
  - **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*

### Advanced Workflow (Tag-Based & PRD-Driven)

**When to Transition**: Recognize when the project has evolved (or when the user has initiated a project with existing code) beyond simple task management. Look for these indicators:
- User mentions teammates or collaboration needs
- Project has grown to 15+ tasks with mixed priorities
- User creates feature branches or mentions major initiatives
- User initializes Taskmaster on an existing, complex codebase
- User describes large features that would benefit from dedicated planning

**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.

#### Master List Strategy (High-Value Focus)
Once you transition to tag-based workflows, the `master` tag should ideally contain only:
- **High-level deliverables** that provide significant business value
- **Major milestones** and epic-level features
- **Critical infrastructure** work that affects the entire project
- **Release-blocking** items

**What NOT to put in master**:
- Detailed implementation subtasks (these go in feature-specific tags' parent tasks)
- Refactoring work (create dedicated tags like `refactor-auth`)
- Experimental features (use `experiment-*` tags)
- Team member-specific tasks (use person-specific tags)

#### PRD-Driven Feature Development

**For New Major Features**:
1. **Identify the Initiative**: When user describes a significant feature
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
4. **Parse & Prepare**:
   - `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
   - `analyze_project_complexity --tag=feature-[name] --research`
   - `expand_all --tag=feature-[name] --research`
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag

**For Existing Codebase Analysis**:
When users initialize Taskmaster on existing projects:
1. **Codebase Discovery**: Use your native tools for producing deep context about the code base. You may use `research` tool with `--tree` and `--files` to collect up to date information using the existing architecture as context.
2. **Collaborative Assessment**: Work with user to identify improvement areas, technical debt, or new features
3. **Strategic PRD Creation**: Co-author PRDs that include:
   - Current state analysis (based on your codebase research)
   - Proposed improvements or new features
   - Implementation strategy considering existing code
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master

The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
### Workflow Transition Examples

**Example 1: Simple → Team-Based**
```
User: "Alice is going to help with the API work"
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
```

**Example 2: Simple → PRD-Driven**
```
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
Actions:
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
2. Collaborate on PRD creation
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
4. Add high-level "User Dashboard" task to master
```

**Example 3: Existing Project → Strategic Planning**
```
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
Actions:
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
2. Collaborate on improvement PRD based on findings
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
4. Keep only major improvement initiatives in master
```
---

## Primary Interaction: MCP Server vs. CLI

Taskmaster offers two primary ways to interact:

1. **MCP Server (Recommended for Integrated Tools)**:
   - For AI agents and integrated development environments (like Kiro), interacting via the **MCP server is the preferred method**.
   - The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
   - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
   - Refer to @`mcp.md` for details on the MCP architecture and available tools.
   - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in @`taskmaster.md`.
   - **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change.
   - **Note**: MCP tools fully support tagged task lists with complete tag management capabilities.

2. **`task-master` CLI (For Users & Fallback)**:
   - The global `task-master` command provides a user-friendly interface for direct terminal interaction.
   - It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP.
   - Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
   - The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`).
   - Refer to @`taskmaster.md` for a detailed command reference.
   - **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration.

## How the Tag System Works (For Your Reference)

- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0".
- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption.
- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag.
- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`.
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to @`taskmaster.md` for a full command list.
---

## Task Complexity Analysis

- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.md`) for comprehensive analysis
- Review complexity report via `complexity_report` / `task-master complexity-report` (see @`taskmaster.md`) for a formatted, readable version.
- Focus on tasks with highest complexity scores (8-10) for detailed breakdown
- Use analysis results to determine appropriate subtask allocation
- Note that reports are automatically used by the `expand_task` tool/command

## Task Breakdown Process

- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates a default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.

## Implementation Drift Handling

- When implementation differs significantly from planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.

## Task Status Management

- Use 'pending' for tasks ready to be worked on
- Use 'done' for completed and verified tasks
- Use 'deferred' for postponed tasks
- Add custom status values as needed for project-specific workflows
## Task Structure Fields

- **id**: Unique identifier for the task (Example: `1`, `1.1`)
- **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`)
  - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
  - This helps quickly identify which prerequisite tasks are blocking work
- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`)
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to task structure details (previously linked to `tasks.md`).
## Configuration Management (Updated)

Taskmaster configuration is managed through the following mechanisms:

1. **`.taskmaster/config.json` File (Primary):**
   * Located in the project root directory.
   * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
   * **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and `tags` section for tag management configuration.
   * **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing.
   * **View/Set specific models via `task-master models` command or `models` MCP tool.**
   * Created automatically when you run `task-master models --setup` for the first time or during tagged system migration.

2. **Environment Variables (`.env` / `mcp.json`):**
   * Used **only** for sensitive API keys and specific endpoint URLs.
   * Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
   * For MCP/Kiro integration, configure these keys in the `env` section of `.kiro/mcp.json`.
   * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.md`).

3. **`.taskmaster/state.json` File (Tagged System State):**
   * Tracks current tag context and migration status.
   * Automatically created during tagged system migration.
   * Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`.

**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.kiro/mcp.json`.
**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project.
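For CLI usage, a minimal `.env` in the project root might look like the sketch below; the key names match those used in the `mcp.json` example above, and the values are placeholders that should never be committed:

```bash
# Illustrative .env for CLI usage -- placeholder values only, keep this file out of git
cat > .env <<'EOF'
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY_HERE
PERPLEXITY_API_KEY=YOUR_PERPLEXITY_API_KEY_HERE
EOF
```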
## Rules Management

Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:

- **Available Profiles**: Claude Code, Cline, Codex, Kiro, Roo Code, Trae, Windsurf (claude, cline, codex, kiro, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules kiro,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.kiro/steering`, `.roo/rules`) with appropriate configuration files
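For instance, rule profiles could be adjusted after initialization roughly like this:

```bash
# Illustrative rule-profile management (profile names from the list above)
task-master rules add windsurf       # add the Windsurf profile
task-master rules remove trae        # remove the Trae profile
task-master rules setup              # or pick profiles interactively
```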
## Determining the Next Task

- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied
- Tasks are prioritized by priority level, dependency count, and ID
- The command shows comprehensive task information including:
  - Basic task details and description
  - Implementation details
  - Subtasks (if they exist)
  - Contextual suggested actions
- Recommended before starting any new development work
- Respects your project's dependency structure
- Ensures tasks are completed in the appropriate sequence
- Provides ready-to-use commands for common task actions

## Viewing Specific Task Details

- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1)
- Displays comprehensive information similar to the next command, but for a specific task
- For parent tasks, shows all subtasks and their current status
- For subtasks, shows parent task information and relationship
- Provides contextual suggested actions appropriate for the specific task
- Useful for examining task details before implementation or checking status

## Managing Task Dependencies

- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries
- Dependencies are checked for existence before being added or removed
- Task files are automatically regenerated after dependency changes
- Dependencies are visualized with status indicators in task listings and files

## Task Reorganization

- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy
- This command supports several use cases:
  - Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`)
  - Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`)
  - Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`)
  - Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`)
  - Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`)
  - Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`)
- The system includes validation to prevent data loss:
  - Allows moving to non-existent IDs by creating placeholder tasks
  - Prevents moving to existing task IDs that have content (to avoid overwriting)
  - Validates source tasks exist before attempting to move them
- The system maintains proper parent-child relationships and dependency integrity
- Task files are automatically regenerated after the move operation
- This provides greater flexibility in organizing and refining your task structure as project understanding evolves
- This is especially useful when dealing with potential merge conflicts arising from teams creating tasks on separate branches. Solve these conflicts very easily by moving your tasks and keeping theirs.
## Iterative Subtask Implementation

Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation:

1. **Understand the Goal (Preparation):**
   * Use `get_task` / `task-master show <subtaskId>` (see @`taskmaster.md`) to thoroughly understand the specific goals and requirements of the subtask.

2. **Initial Exploration & Planning (Iteration 1):**
   * This is the first attempt at creating a concrete implementation plan.
   * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification.
   * Determine the intended code changes (diffs) and their locations.
   * Gather *all* relevant details from this exploration phase.

3. **Log the Plan:**
   * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
   * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`.

4. **Verify the Plan:**
   * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.

5. **Begin Implementation:**
   * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
   * Start coding based on the logged plan.

6. **Refine and Log Progress (Iteration 2+):**
   * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches.
   * **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy.
   * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What didn't work...'` to append new findings.
   * **Crucially, log:**
     * What worked ("fundamental truths" discovered).
     * What didn't work and why (to avoid repeating mistakes).
     * Specific code snippets or configurations that were successful.
     * Decisions made, especially if confirmed with user input.
     * Any deviations from the initial plan and the reasoning.
   * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors.

7. **Review & Update Rules (Post-Implementation):**
   * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
   * Identify any new or modified code patterns, conventions, or best practices established during the implementation.
   * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.md` and `self_improve.md`).

8. **Mark Task Complete:**
   * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.

9. **Commit Changes (If using Git):**
   * Stage the relevant code changes and any updated/new rule files (`git add .`).
   * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
   * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`).
   * Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.md`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.

10. **Proceed to Next Subtask:**
    * Identify the next subtask (e.g., using `next_task` / `task-master next`).
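Condensed into commands, one iteration of this cycle might look like the sketch below (the subtask ID, file path, and messages are placeholders):

```bash
# Illustrative subtask iteration (IDs, paths, and messages are placeholders)
task-master show 5.2                                   # understand the goal
task-master update-subtask --id=5.2 --prompt="Plan: modify src/auth.ts, add token refresh"
task-master set-status --id=5.2 --status=in-progress
# ...implement, then log what worked and what didn't...
task-master update-subtask --id=5.2 --prompt="Worked: refresh flow. Didn't work: first retry approach."
task-master set-status --id=5.2 --status=done
git add . && git commit -m "feat(auth): implement token refresh for subtask 5.2"
```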
## Code Analysis & Refactoring Techniques

- **Top-Level Function Search**:
  - Useful for understanding module structure or planning refactors.
  - Use grep/ripgrep to find exported functions/constants:
    `rg "export (async function|function|const) \w+"` or similar patterns.
  - Can help compare functions between files during migrations or identify potential naming conflicts.

---
*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.*
@@ -1,51 +0,0 @@
---
inclusion: always
---

- **Required Rule Structure:**
  ```markdown
  ---
  description: Clear, one-line description of what the rule enforces
  globs: path/to/files/*.ext, other/path/**/*
  alwaysApply: boolean
  ---

  - **Main Points in Bold**
    - Sub-points with details
    - Examples and explanations
  ```

- **File References:**
  - Use `[filename](mdc:path/to/file)` ([filename](mdc:filename)) to reference files
  - Example: [prisma.md](.kiro/steering/prisma.md) for rule references
  - Example: [schema.prisma](mdc:prisma/schema.prisma) for code references

- **Code Examples:**
  - Use language-specific code blocks
  ```typescript
  // ✅ DO: Show good examples
  const goodExample = true;

  // ❌ DON'T: Show anti-patterns
  const badExample = false;
  ```

- **Rule Content Guidelines:**
  - Start with high-level overview
  - Include specific, actionable requirements
  - Show examples of correct implementation
  - Reference existing code when possible
  - Keep rules DRY by referencing other rules

- **Rule Maintenance:**
  - Update rules when new patterns emerge
  - Add examples from actual codebase
  - Remove outdated patterns
  - Cross-reference related rules

- **Best Practices:**
  - Use bullet points for clarity
  - Keep descriptions concise
  - Include both DO and DON'T examples
  - Reference actual code over theoretical examples
  - Use consistent formatting across rules
@@ -1,70 +0,0 @@
---
inclusion: always
---

- **Rule Improvement Triggers:**
  - New code patterns not covered by existing rules
  - Repeated similar implementations across files
  - Common error patterns that could be prevented
  - New libraries or tools being used consistently
  - Emerging best practices in the codebase

- **Analysis Process:**
  - Compare new code with existing rules
  - Identify patterns that should be standardized
  - Look for references to external documentation
  - Check for consistent error handling patterns
  - Monitor test patterns and coverage

- **Rule Updates:**
  - **Add New Rules When:**
    - A new technology/pattern is used in 3+ files
    - Common bugs could be prevented by a rule
    - Code reviews repeatedly mention the same feedback
    - New security or performance patterns emerge

  - **Modify Existing Rules When:**
    - Better examples exist in the codebase
    - Additional edge cases are discovered
    - Related rules have been updated
    - Implementation details have changed

- **Example Pattern Recognition:**
  ```typescript
  // If you see repeated patterns like:
  const data = await prisma.user.findMany({
    select: { id: true, email: true },
    where: { status: 'ACTIVE' }
  });

  // Consider adding to [prisma.md](.kiro/steering/prisma.md):
  // - Standard select fields
  // - Common where conditions
  // - Performance optimization patterns
  ```

- **Rule Quality Checks:**
  - Rules should be actionable and specific
  - Examples should come from actual code
  - References should be up to date
  - Patterns should be consistently enforced

- **Continuous Improvement:**
  - Monitor code review comments
  - Track common development questions
  - Update rules after major refactors
  - Add links to relevant documentation
  - Cross-reference related rules

- **Rule Deprecation:**
  - Mark outdated patterns as deprecated
  - Remove rules that no longer apply
  - Update references to deprecated rules
  - Document migration paths for old patterns

- **Documentation Updates:**
  - Keep examples synchronized with code
  - Update references to external docs
  - Maintain links between related rules
  - Document breaking changes

Follow [kiro_rules.md](.kiro/steering/kiro_rules.md) for proper rule formatting and structure.
@@ -1,556 +0,0 @@
|
|||||||
---
|
|
||||||
inclusion: always
|
|
||||||
---
|
|
||||||
|
|
||||||
# Taskmaster Tool & Command Reference
|
|
||||||
|
|
||||||
This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Kiro, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback.
|
|
||||||
|
|
||||||
**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback.
|
|
||||||
|
|
||||||
**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.
|
|
||||||
|
|
||||||
**🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Initialization & Setup
|
|
||||||
|
|
||||||
### 1. Initialize Project (`init`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `initialize_project`
|
|
||||||
* **CLI Command:** `task-master init [options]`
|
|
||||||
* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.`
|
|
||||||
* **Key CLI Options:**
|
|
||||||
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
|
|
||||||
* `--description <text>`: `Provide a brief description for your project.`
|
|
||||||
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
|
|
||||||
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
|
|
||||||
* **Usage:** Run this once at the beginning of a new project.
|
|
||||||
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
|
|
||||||
* **Key MCP Parameters/Options:**
|
|
||||||
* `projectName`: `Set the name for your project.` (CLI: `--name <name>`)
|
|
||||||
* `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`)
|
|
||||||
* `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`)
|
|
||||||
* `authorName`: `Author name.` (CLI: `--author <author>`)
|
|
||||||
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
|
|
||||||
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
|
|
||||||
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
|
|
||||||
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Kiro. Operates on the current working directory of the MCP server.
|
|
||||||
* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.
|
|
||||||
* **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`.
|
|
||||||
|
|
||||||
### 2. Parse PRD (`parse_prd`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `parse_prd`
|
|
||||||
* **CLI Command:** `task-master parse-prd [file] [options]`
|
|
||||||
* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
|
|
||||||
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`)
|
|
||||||
* `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
|
|
||||||
* `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
|
|
||||||
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
|
|
||||||
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
|
|
||||||
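* **Example (illustrative sketch):** Parsing a PRD with the options documented above; the PRD path matches the default location referenced later in this document, and the task count is a placeholder.

```bash
# Generate roughly 12 top-level tasks from the PRD, overwriting any existing tasks.json
task-master parse-prd .taskmaster/docs/prd.txt --num-tasks=12 --force
```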
|
|
||||||
---
|
|
||||||
|
|
||||||
## AI Model Configuration
|
|
||||||
|
|
||||||
### 2. Manage Models (`models`)
|
|
||||||
* **MCP Tool:** `models`
|
|
||||||
* **CLI Command:** `task-master models [options]`
|
|
||||||
* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.`
|
|
||||||
* **Key MCP Parameters/Options:**
|
|
||||||
* `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`)
|
|
||||||
* `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`)
|
|
||||||
* `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`)
|
|
||||||
* `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`)
|
|
||||||
* `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`)
|
|
||||||
* `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically)
|
|
||||||
* `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically)
|
|
||||||
* **Key CLI Options:**
|
|
||||||
* `--set-main <model_id>`: `Set the primary model.`
|
|
||||||
* `--set-research <model_id>`: `Set the research model.`
|
|
||||||
* `--set-fallback <model_id>`: `Set the fallback model.`
|
|
||||||
* `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
|
|
||||||
* `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.`
|
|
||||||
* `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).`
|
|
||||||
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
|
|
||||||
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
|
|
||||||
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
|
|
||||||
* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
|
|
||||||
* **API note:** API keys for the selected AI providers (based on their models) must be present in the mcp.json file to be accessible in the MCP context, and in the local .env file for the CLI to be able to read them.
|
|
||||||
* **Model costs:** The costs listed for supported models are expressed in dollars: an input/output value of 3 means $3.00, and a value of 0.8 means $0.80.
|
|
||||||
* **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
|
|
||||||
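* **Example (illustrative sketch):** Viewing and updating the model configuration with the flags documented above; `gpt-4o-mini` reuses a model ID that appears later in this document, and the Ollama model ID is a hypothetical placeholder.

```bash
# View the current configuration and available models
task-master models

# Set the fallback model
task-master models --set-fallback gpt-4o-mini

# Set a custom Ollama model as the main model (model ID is hypothetical)
task-master models --set-main my-local-model --ollama
```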
|
|
||||||
---
|
|
||||||
|
|
||||||
## Task Listing & Viewing
|
|
||||||
|
|
||||||
### 3. Get Tasks (`get_tasks`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `get_tasks`
|
|
||||||
* **CLI Command:** `task-master list [options]`
|
|
||||||
* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`)
|
|
||||||
* `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`)
|
|
||||||
* `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Get an overview of the project status, often used at the start of a work session.
|
|
||||||
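* **Example (illustrative sketch):** Listing tasks with the filters documented above; the tag name reuses the example tag from the initialization section.

```bash
# Show pending and in-progress tasks, including subtasks, from a specific tag context
task-master list --status=pending,in-progress --with-subtasks --tag=new-feature
```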
|
|
||||||
### 4. Get Next Task (`next_task`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `next_task`
|
|
||||||
* **CLI Command:** `task-master next [options]`
|
|
||||||
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* **Usage:** Identify what to work on next according to the plan.
|
|
||||||
|
|
||||||
### 5. Get Task Details (`get_task`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `get_task`
|
|
||||||
* **CLI Command:** `task-master show [id] [options]`
|
|
||||||
* **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
|
|
||||||
* `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown.
|
|
||||||
* **CRITICAL INFORMATION** If you need to collect information from multiple tasks, use comma-separated IDs (e.g., '1,2,3') to receive an array of tasks in a single call. Do not fetch tasks one at a time when you need several; that is wasteful.
|
|
||||||
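* **Example (illustrative sketch):** Fetching several tasks in one call using the comma-separated ID form documented above (the IDs are illustrative).

```bash
# Show a summary table for tasks 1 and 5 and subtask 10.2 in a single call
task-master show 1,5,10.2
```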
|
|
||||||
---
|
|
||||||
|
|
||||||
## Task Creation & Modification
|
|
||||||
|
|
||||||
### 6. Add Task (`add_task`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `add_task`
|
|
||||||
* **CLI Command:** `task-master add-task [options]`
|
|
||||||
* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`)
|
|
||||||
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
|
|
||||||
* `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
|
|
||||||
* `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
|
|
||||||
* `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Quickly add newly identified tasks during development.
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
|
||||||
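* **Example (illustrative sketch):** Adding a task with the options documented above; the prompt and dependency IDs reuse the examples given for each parameter.

```bash
# Create a high-priority task that depends on tasks 12 and 14, using the research role
task-master add-task --prompt="Implement user authentication using JWT" \
  --dependencies=12,14 --priority=high --research
```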
|
|
||||||
### 7. Add Subtask (`add_subtask`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `add_subtask`
|
|
||||||
* **CLI Command:** `task-master add-subtask [options]`
|
|
||||||
* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`)
|
|
||||||
* `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`)
|
|
||||||
* `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`)
|
|
||||||
* `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`)
|
|
||||||
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
|
|
||||||
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
|
|
||||||
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
|
|
||||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Break down tasks manually or reorganize existing tasks.
|
|
||||||
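* **Example (illustrative sketch):** Creating a new subtask and converting an existing task, using only the flags documented above; the IDs and title are placeholders.

```bash
# Add a new subtask under parent task 15 (title is a hypothetical example)
task-master add-subtask --parent=15 --title="Write unit tests" --status=pending

# Convert existing top-level task 23 into a subtask of task 15
task-master add-subtask --parent=15 --task-id=23
```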
|
|
||||||
### 8. Update Tasks (`update`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `update`
|
|
||||||
* **CLI Command:** `task-master update [options]`
|
|
||||||
* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
|
|
||||||
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
|
|
||||||
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
|
||||||
|
|
||||||
### 9. Update Task (`update_task`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `update_task`
|
|
||||||
* **CLI Command:** `task-master update-task [options]`
|
|
||||||
* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`)
|
|
||||||
* `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
|
|
||||||
* `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`)
|
|
||||||
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
|
|
||||||
* `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks.
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
|
||||||
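* **Example (illustrative sketch):** Appending progress notes to task 15 instead of replacing its details, per the `--append` behavior described above (the prompt text is a placeholder).

```bash
# Append a timestamped note to task 15's details rather than rewriting them
task-master update-task --id=15 --prompt="Decided to use bcrypt for password hashing" --append
```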
|
|
||||||
### 10. Update Subtask (`update_subtask`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `update_subtask`
|
|
||||||
* **CLI Command:** `task-master update-subtask [options]`
|
|
||||||
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
|
|
||||||
* `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
|
|
||||||
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
|
|
||||||
* `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
|
||||||
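* **Example (illustrative sketch):** Logging an implementation finding against subtask 5.2, the ID used in the parameter description above (the note text is a placeholder).

```bash
# Append a timestamped progress note to subtask 5.2
task-master update-subtask --id=5.2 --prompt="Login endpoint works; token refresh still failing in tests"
```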
|
|
||||||
### 11. Set Task Status (`set_task_status`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `set_task_status`
|
|
||||||
* **CLI Command:** `task-master set-status [options]`
|
|
||||||
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
|
|
||||||
* `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Mark progress as tasks move through the development cycle.
|
|
||||||
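* **Example (illustrative sketch):** Updating several items at once using the comma-separated ID form documented above (the IDs are illustrative).

```bash
# Mark task 16 and subtask 17.1 as done in one call
task-master set-status --id=16,17.1 --status=done
```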
|
|
||||||
### 12. Remove Task (`remove_task`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `remove_task`
|
|
||||||
* **CLI Command:** `task-master remove-task [options]`
|
|
||||||
* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
|
|
||||||
* `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
|
|
||||||
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Task Structure & Breakdown
|
|
||||||
|
|
||||||
### 13. Expand Task (`expand_task`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `expand_task`
|
|
||||||
* **CLI Command:** `task-master expand [options]`
|
|
||||||
* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
|
|
||||||
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
|
|
||||||
* `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
|
|
||||||
* `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
|
|
||||||
* `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
|
|
||||||
* `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
|
||||||
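* **Example (illustrative sketch):** Breaking a task into subtasks with the flags documented above; the task ID and subtask count are placeholders.

```bash
# Replace task 8's existing subtasks with roughly five research-informed subtasks
task-master expand --id=8 --num=5 --research --force
```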
|
|
||||||
### 14. Expand All Tasks (`expand_all`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `expand_all`
|
|
||||||
* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
|
|
||||||
* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
|
|
||||||
* `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
|
|
||||||
* `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
|
|
||||||
* `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
|
|
||||||
* `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
|
||||||
|
|
||||||
### 15. Clear Subtasks (`clear_subtasks`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `clear_subtasks`
|
|
||||||
* **CLI Command:** `task-master clear-subtasks [options]`
|
|
||||||
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
|
|
||||||
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
|
|
||||||
|
|
||||||
### 16. Remove Subtask (`remove_subtask`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `remove_subtask`
|
|
||||||
* **CLI Command:** `task-master remove-subtask [options]`
|
|
||||||
* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
|
|
||||||
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
|
|
||||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
|
|
||||||
|
|
||||||
### 17. Move Task (`move_task`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `move_task`
|
|
||||||
* **CLI Command:** `task-master move [options]`
|
|
||||||
* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
|
|
||||||
* `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
|
|
||||||
* Moving a task to become a subtask
|
|
||||||
* Moving a subtask to become a standalone task
|
|
||||||
* Moving a subtask to a different parent
|
|
||||||
* Reordering subtasks within the same parent
|
|
||||||
* Moving a task to a new, non-existent ID (automatically creates placeholders)
|
|
||||||
* Moving multiple tasks at once with comma-separated IDs
|
|
||||||
* **Validation Features:**
|
|
||||||
* Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
|
|
||||||
* Prevents moving to existing task IDs that already have content (to avoid overwriting)
|
|
||||||
* Validates that source tasks exist before attempting to move them
|
|
||||||
* Maintains proper parent-child relationships
|
|
||||||
* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
|
|
||||||
* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
|
|
||||||
* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Dependency Management
|
|
||||||
|
|
||||||
### 18. Add Dependency (`add_dependency`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `add_dependency`
|
|
||||||
* **CLI Command:** `task-master add-dependency [options]`
|
|
||||||
* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
|
|
||||||
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
|
|
||||||
* **Usage:** Establish the correct order of execution between tasks.
|
|
||||||
|
|
||||||
### 19. Remove Dependency (`remove_dependency`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `remove_dependency`
|
|
||||||
* **CLI Command:** `task-master remove-dependency [options]`
|
|
||||||
* **Description:** `Remove a dependency relationship between two Taskmaster tasks.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
|
|
||||||
* `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
|
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Update task relationships when the order of execution changes.
|
|
||||||
|
|
||||||
### 20. Validate Dependencies (`validate_dependencies`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `validate_dependencies`
|
|
||||||
* **CLI Command:** `task-master validate-dependencies [options]`
|
|
||||||
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Audit the integrity of your task dependencies.
|
|
||||||
|
|
||||||
### 21. Fix Dependencies (`fix_dependencies`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `fix_dependencies`
|
|
||||||
* **CLI Command:** `task-master fix-dependencies [options]`
|
|
||||||
* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Clean up dependency errors automatically.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Analysis & Reporting
|
|
||||||
|
|
||||||
### 22. Analyze Project Complexity (`analyze_project_complexity`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `analyze_project_complexity`
|
|
||||||
* **CLI Command:** `task-master analyze-complexity [options]`
|
|
||||||
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`)
|
|
||||||
* `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
|
|
||||||
* `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
|
|
||||||
* `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
|
|
||||||
|
|
||||||
### 23. View Complexity Report (`complexity_report`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `complexity_report`
|
|
||||||
* **CLI Command:** `task-master complexity-report [options]`
|
|
||||||
* **Description:** `Display the task complexity analysis report in a readable format.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## File Management
|
|
||||||
|
|
||||||
### 24. Generate Task Files (`generate`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `generate`
|
|
||||||
* **CLI Command:** `task-master generate [options]`
|
|
||||||
* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`)
|
|
||||||
* `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## AI-Powered Research
|
|
||||||
|
|
||||||
### 25. Research (`research`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `research`
|
|
||||||
* **CLI Command:** `task-master research [options]`
|
|
||||||
* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`)
|
|
||||||
* `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`)
|
|
||||||
* `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`)
|
|
||||||
* `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`)
|
|
||||||
* `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`)
|
|
||||||
* `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`)
|
|
||||||
* `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`)
|
|
||||||
* `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`)
|
|
||||||
* `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`)
|
|
||||||
* `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
* `projectRoot`: `The directory of the project. Must be an absolute path.` (CLI: Determined automatically)
|
|
||||||
* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to:
|
|
||||||
* Get fresh information beyond knowledge cutoff dates
|
|
||||||
* Research latest best practices, library updates, security patches
|
|
||||||
* Find implementation examples for specific technologies
|
|
||||||
* Validate approaches against current industry standards
|
|
||||||
* Get contextual advice based on project files and tasks
|
|
||||||
* **When to Consider Using Research:**
|
|
||||||
* **Before implementing any task** - Research current best practices
|
|
||||||
* **When encountering new technologies** - Get up-to-date implementation guidance (libraries, apis, etc)
|
|
||||||
* **For security-related tasks** - Find latest security recommendations
|
|
||||||
* **When updating dependencies** - Research breaking changes and migration guides
|
|
||||||
* **For performance optimization** - Get current performance best practices
|
|
||||||
* **When debugging complex issues** - Research known solutions and workarounds
|
|
||||||
* **Research + Action Pattern:**
|
|
||||||
* Use `research` to gather fresh information
|
|
||||||
* Use `update_subtask` to commit findings with timestamps
|
|
||||||
* Use `update_task` to incorporate research into task details
|
|
||||||
* Use `add_task` with research flag for informed task creation
|
|
||||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments.
|
|
||||||
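* **Example (illustrative sketch):** A research query combining the context options documented above; the query, IDs, and file path reuse the examples given for each parameter.

```bash
# Research with task and file context, then save the conversation to subtask 15.2
task-master research "What are the latest best practices for React Query v5?" \
  --id=15,16.2 --files=src/api.js --detail=high --save-to=15.2
```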
|
|
||||||
---
|
|
||||||
|
|
||||||
## Tag Management
|
|
||||||
|
|
||||||
This new suite of commands allows you to manage different task contexts (tags).
|
|
||||||
|
|
||||||
### 26. List Tags (`tags`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `list_tags`
|
|
||||||
* **CLI Command:** `task-master tags [options]`
|
|
||||||
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
* `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)
|
|
||||||
|
|
||||||
### 27. Add Tag (`add_tag`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `add_tag`
|
|
||||||
* **CLI Command:** `task-master add-tag <tagName> [options]`
|
|
||||||
* **Description:** `Create a new, empty tag context, or copy tasks from another tag.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional)
|
|
||||||
* `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`)
|
|
||||||
* `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
|
|
||||||
* `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
|
|
||||||
* `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
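* **Example (illustrative sketch):** Creating a tag context with the flags documented above; `new-feature` reuses the tag name from the initialization example, and the description is a placeholder.

```bash
# Create a new tag that copies tasks from the currently active tag
task-master add-tag new-feature --copy-from-current --description="Work for the new feature branch"
```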
|
|
||||||
### 28. Delete Tag (`delete_tag`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `delete_tag`
|
|
||||||
* **CLI Command:** `task-master delete-tag <tagName> [options]`
|
|
||||||
* **Description:** `Permanently delete a tag and all of its associated tasks.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
|
|
||||||
* `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
|
|
||||||
### 29. Use Tag (`use_tag`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `use_tag`
|
|
||||||
* **CLI Command:** `task-master use-tag <tagName>`
|
|
||||||
* **Description:** `Switch your active task context to a different tag.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
|
|
||||||
### 30. Rename Tag (`rename_tag`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `rename_tag`
|
|
||||||
* **CLI Command:** `task-master rename-tag <oldName> <newName>`
|
|
||||||
* **Description:** `Rename an existing tag.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
|
|
||||||
* `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
|
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
|
||||||
|
|
||||||
### 31. Copy Tag (`copy_tag`)
|
|
||||||
|
|
||||||
* **MCP Tool:** `copy_tag`
|
|
||||||
* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]`
|
|
||||||
* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional)
|
|
||||||
* `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional)
|
|
||||||
* `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Miscellaneous
|
|
||||||
|
|
||||||
### 32. Sync Readme (`sync-readme`) -- experimental
|
|
||||||
|
|
||||||
* **MCP Tool:** N/A
|
|
||||||
* **CLI Command:** `task-master sync-readme [options]`
|
|
||||||
* **Description:** `Exports your task list to your project's README.md file, useful for showcasing progress.`
|
|
||||||
* **Key Parameters/Options:**
|
|
||||||
* `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`)
|
|
||||||
* `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`)
|
|
||||||
* `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Environment Variables Configuration (Updated)
|
|
||||||
|
|
||||||
Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
|
|
||||||
|
|
||||||
Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
|
|
||||||
|
|
||||||
* **API Keys (Required for corresponding provider):**
|
|
||||||
* `ANTHROPIC_API_KEY`
|
|
||||||
* `PERPLEXITY_API_KEY`
|
|
||||||
* `OPENAI_API_KEY`
|
|
||||||
* `GOOGLE_API_KEY`
|
|
||||||
* `MISTRAL_API_KEY`
|
|
||||||
* `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
|
|
||||||
* `OPENROUTER_API_KEY`
|
|
||||||
* `XAI_API_KEY`
|
|
||||||
* `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
|
|
||||||
* **Endpoints (optional, provider-specific; may also be configured inside `.taskmaster/config.json`):**
|
|
||||||
* `AZURE_OPENAI_ENDPOINT`
|
|
||||||
* `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)
|
|
||||||
|
|
||||||
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.kiro/mcp.json`** file (for MCP/Kiro integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via `task-master models` command or `models` MCP tool.
|
|
||||||
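A minimal `.env` sketch for CLI usage, assuming only Anthropic and Perplexity are used; the values are placeholders, and any of the other keys listed above can be added the same way.

```bash
# .env (project root) - placeholder values only
ANTHROPIC_API_KEY=your_key_here
PERPLEXITY_API_KEY=your_key_here
```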
|
|
||||||
---
|
|
||||||
|
|
||||||
For details on how these commands fit into the development process, see the [dev_workflow.md](.kiro/steering/dev_workflow.md).
|
|
||||||
@@ -1,59 +0,0 @@
|
|||||||
---
|
|
||||||
inclusion: always
|
|
||||||
---
|
|
||||||
|
|
||||||
# Taskmaster Hook-Driven Workflow
|
|
||||||
|
|
||||||
## Core Principle: Hooks Automate Task Management
|
|
||||||
|
|
||||||
When working with Taskmaster in Kiro, **avoid manually marking tasks as done**. The hook system automatically handles task completion based on:
|
|
||||||
|
|
||||||
- **Test Success**: `[TM] Test Success Task Completer` detects passing tests and prompts for task completion
|
|
||||||
- **Code Changes**: `[TM] Code Change Task Tracker` monitors implementation progress
|
|
||||||
- **Dependency Chains**: `[TM] Task Dependency Auto-Progression` auto-starts dependent tasks
|
|
||||||
|
|
||||||
## AI Assistant Workflow
|
|
||||||
|
|
||||||
Follow this pattern when implementing features:
|
|
||||||
|
|
||||||
1. **Implement First**: Write code, create tests, make changes
|
|
||||||
2. **Save Frequently**: Hooks trigger on file saves to track progress automatically
|
|
||||||
3. **Let Hooks Decide**: Allow hooks to detect completion rather than manually setting status
|
|
||||||
4. **Respond to Prompts**: Confirm when hooks suggest task completion
|
|
||||||
|
|
||||||
## Key Rules for AI Assistants
|
|
||||||
|
|
||||||
- **Never use `tm set-status --status=done`** unless hooks fail to detect completion
|
|
||||||
- **Always write tests** - they provide the most reliable completion signal
|
|
||||||
- **Save files after implementation** - this triggers progress tracking
|
|
||||||
- **Trust hook suggestions** - if no completion prompt appears, more work may be needed
|
|
||||||
|
|
||||||
## Automatic Behaviors
|
|
||||||
|
|
||||||
The hook system provides:
|
|
||||||
|
|
||||||
- **Progress Logging**: Implementation details automatically added to task notes
|
|
||||||
- **Evidence-Based Completion**: Tasks marked done only when criteria are met
|
|
||||||
- **Dependency Management**: Next tasks auto-started when dependencies complete
|
|
||||||
- **Natural Flow**: Focus on coding, not task management overhead
|
|
||||||
|
|
||||||
## Manual Override Cases
|
|
||||||
|
|
||||||
Only manually set task status for:
|
|
||||||
|
|
||||||
- Documentation-only tasks
|
|
||||||
- Tasks without testable outcomes
|
|
||||||
- Emergency fixes without proper test coverage
|
|
||||||
|
|
||||||
Use `tm set-status` sparingly - prefer hook-driven completion.
|
|
||||||
|
|
||||||
## Implementation Pattern
|
|
||||||
|
|
||||||
```
|
|
||||||
1. Implement feature → Save file
|
|
||||||
2. Write tests → Save test file
|
|
||||||
3. Tests pass → Hook prompts completion
|
|
||||||
4. Confirm completion → Next task auto-starts
|
|
||||||
```
|
|
||||||
|
|
||||||
This workflow ensures proper task tracking while maintaining development flow.
|
|
||||||
@@ -1,6 +0,0 @@
|
|||||||
{
|
|
||||||
"$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
|
|
||||||
"defaultBranch": "main",
|
|
||||||
"ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
|
|
||||||
"ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
|
|
||||||
}
|
|
||||||
@@ -1,9 +0,0 @@
|
|||||||
{
|
|
||||||
"mcpServers": {
|
|
||||||
"task-master-ai": {
|
|
||||||
"type": "stdio",
|
|
||||||
"command": "npx",
|
|
||||||
"args": ["-y", "task-master-ai"]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,417 +0,0 @@
|
|||||||
# Task Master AI - Agent Integration Guide
|
|
||||||
|
|
||||||
## Essential Commands
|
|
||||||
|
|
||||||
### Core Workflow Commands
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Project Setup
|
|
||||||
task-master init # Initialize Task Master in current project
|
|
||||||
task-master parse-prd .taskmaster/docs/prd.txt # Generate tasks from PRD document
|
|
||||||
task-master models --setup # Configure AI models interactively
|
|
||||||
|
|
||||||
# Daily Development Workflow
|
|
||||||
task-master list # Show all tasks with status
|
|
||||||
task-master next # Get next available task to work on
|
|
||||||
task-master show <id> # View detailed task information (e.g., task-master show 1.2)
|
|
||||||
task-master set-status --id=<id> --status=done # Mark task complete
|
|
||||||
|
|
||||||
# Task Management
|
|
||||||
task-master add-task --prompt="description" --research # Add new task with AI assistance
|
|
||||||
task-master expand --id=<id> --research --force # Break task into subtasks
|
|
||||||
task-master update-task --id=<id> --prompt="changes" # Update specific task
|
|
||||||
task-master update --from=<id> --prompt="changes" # Update multiple tasks from ID onwards
|
|
||||||
task-master update-subtask --id=<id> --prompt="notes" # Add implementation notes to subtask
|
|
||||||
|
|
||||||
# Analysis & Planning
|
|
||||||
task-master analyze-complexity --research # Analyze task complexity
|
|
||||||
task-master complexity-report # View complexity analysis
|
|
||||||
task-master expand --all --research # Expand all eligible tasks
|
|
||||||
|
|
||||||
# Dependencies & Organization
|
|
||||||
task-master add-dependency --id=<id> --depends-on=<id> # Add task dependency
|
|
||||||
task-master move --from=<id> --to=<id> # Reorganize task hierarchy
|
|
||||||
task-master validate-dependencies # Check for dependency issues
|
|
||||||
task-master generate # Update task markdown files (usually auto-called)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Key Files & Project Structure
|
|
||||||
|
|
||||||
### Core Files
|
|
||||||
|
|
||||||
- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
|
|
||||||
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
|
|
||||||
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
|
|
||||||
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
|
|
||||||
- `.env` - API keys for CLI usage
|
|
||||||
|
|
||||||
### Claude Code Integration Files
|
|
||||||
|
|
||||||
- `CLAUDE.md` - Auto-loaded context for Claude Code (this file)
|
|
||||||
- `.claude/settings.json` - Claude Code tool allowlist and preferences
|
|
||||||
- `.claude/commands/` - Custom slash commands for repeated workflows
|
|
||||||
- `.mcp.json` - MCP server configuration (project-specific)
|
|
||||||
|
|
||||||
### Directory Structure
|
|
||||||
|
|
||||||
```
|
|
||||||
project/
|
|
||||||
├── .taskmaster/
|
|
||||||
│ ├── tasks/ # Task files directory
|
|
||||||
│ │ ├── tasks.json # Main task database
|
|
||||||
│ │ ├── task-1.md # Individual task files
|
|
||||||
│ │ └── task-2.md
|
|
||||||
│ ├── docs/ # Documentation directory
|
|
||||||
│ │ ├── prd.txt # Product requirements
|
|
||||||
│ ├── reports/ # Analysis reports directory
|
|
||||||
│ │ └── task-complexity-report.json
|
|
||||||
│ ├── templates/ # Template files
|
|
||||||
│ │ └── example_prd.txt # Example PRD template
|
|
||||||
│ └── config.json # AI models & settings
|
|
||||||
├── .claude/
|
|
||||||
│ ├── settings.json # Claude Code configuration
|
|
||||||
│ └── commands/ # Custom slash commands
|
|
||||||
├── .env # API keys
|
|
||||||
├── .mcp.json # MCP configuration
|
|
||||||
└── CLAUDE.md # This file - auto-loaded by Claude Code
|
|
||||||
```
|
|
||||||
|
|
||||||
## MCP Integration
|
|
||||||
|
|
||||||
Task Master provides an MCP server that Claude Code can connect to. Configure in `.mcp.json`:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"mcpServers": {
|
|
||||||
"task-master-ai": {
|
|
||||||
"command": "npx",
|
|
||||||
"args": ["-y", "task-master-ai"],
|
|
||||||
"env": {
|
|
||||||
"ANTHROPIC_API_KEY": "your_key_here",
|
|
||||||
"PERPLEXITY_API_KEY": "your_key_here",
|
|
||||||
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
|
|
||||||
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
|
|
||||||
"XAI_API_KEY": "XAI_API_KEY_HERE",
|
|
||||||
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
|
|
||||||
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
|
|
||||||
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
|
|
||||||
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Essential MCP Tools
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
help; // = shows available taskmaster commands
|
|
||||||
// Project setup
|
|
||||||
initialize_project; // = task-master init
|
|
||||||
parse_prd; // = task-master parse-prd
|
|
||||||
|
|
||||||
// Daily workflow
|
|
||||||
get_tasks; // = task-master list
|
|
||||||
next_task; // = task-master next
|
|
||||||
get_task; // = task-master show <id>
|
|
||||||
set_task_status; // = task-master set-status
|
|
||||||
|
|
||||||
// Task management
|
|
||||||
add_task; // = task-master add-task
|
|
||||||
expand_task; // = task-master expand
|
|
||||||
update_task; // = task-master update-task
|
|
||||||
update_subtask; // = task-master update-subtask
|
|
||||||
update; // = task-master update
|
|
||||||
|
|
||||||
// Analysis
|
|
||||||
analyze_project_complexity; // = task-master analyze-complexity
|
|
||||||
complexity_report; // = task-master complexity-report
|
|
||||||
```
|
|
||||||
|
|
||||||
## Claude Code Workflow Integration
|
|
||||||
|
|
||||||
### Standard Development Workflow
|
|
||||||
|
|
||||||
#### 1. Project Initialization
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Initialize Task Master
|
|
||||||
task-master init
|
|
||||||
|
|
||||||
# Create or obtain PRD, then parse it
|
|
||||||
task-master parse-prd .taskmaster/docs/prd.txt
|
|
||||||
|
|
||||||
# Analyze complexity and expand tasks
|
|
||||||
task-master analyze-complexity --research
|
|
||||||
task-master expand --all --research
|
|
||||||
```
|
|
||||||
|
|
||||||
If tasks already exist, another PRD can be parsed (with new information only!) using `parse-prd` with the `--append` flag. This will add the generated tasks to the existing list of tasks.
|
|
||||||
|
|
||||||
#### 2. Daily Development Loop
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Start each session
|
|
||||||
task-master next # Find next available task
|
|
||||||
task-master show <id> # Review task details
|
|
||||||
|
|
||||||
# During implementation, check in code context into the tasks and subtasks
|
|
||||||
task-master update-subtask --id=<id> --prompt="implementation notes..."
|
|
||||||
|
|
||||||
# Complete tasks
|
|
||||||
task-master set-status --id=<id> --status=done
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 3. Multi-Claude Workflows
|
|
||||||
|
|
||||||
For complex projects, use multiple Claude Code sessions:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Terminal 1: Main implementation
|
|
||||||
cd project && claude
|
|
||||||
|
|
||||||
# Terminal 2: Testing and validation
|
|
||||||
cd project-test-worktree && claude
|
|
||||||
|
|
||||||
# Terminal 3: Documentation updates
|
|
||||||
cd project-docs-worktree && claude
|
|
||||||
```
|
|
||||||
|
|
||||||
### Custom Slash Commands
|
|
||||||
|
|
||||||
Create `.claude/commands/taskmaster-next.md`:
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
Find the next available Task Master task and show its details.
|
|
||||||
|
|
||||||
Steps:
|
|
||||||
|
|
||||||
1. Run `task-master next` to get the next task
|
|
||||||
2. If a task is available, run `task-master show <id>` for full details
|
|
||||||
3. Provide a summary of what needs to be implemented
|
|
||||||
4. Suggest the first implementation step
|
|
||||||
```
|
|
||||||
|
|
||||||
Create `.claude/commands/taskmaster-complete.md`:
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
Complete a Task Master task: $ARGUMENTS
|
|
||||||
|
|
||||||
Steps:
|
|
||||||
|
|
||||||
1. Review the current task with `task-master show $ARGUMENTS`
|
|
||||||
2. Verify all implementation is complete
|
|
||||||
3. Run any tests related to this task
|
|
||||||
4. Mark as complete: `task-master set-status --id=$ARGUMENTS --status=done`
|
|
||||||
5. Show the next available task with `task-master next`
|
|
||||||
```
|
|
||||||
|
|
||||||
## Tool Allowlist Recommendations
|
|
||||||
|
|
||||||
Add to `.claude/settings.json`:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"allowedTools": [
|
|
||||||
"Edit",
|
|
||||||
"Bash(task-master *)",
|
|
||||||
"Bash(git commit:*)",
|
|
||||||
"Bash(git add:*)",
|
|
||||||
"Bash(npm run *)",
|
|
||||||
"mcp__task_master_ai__*"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Configuration & Setup
|
|
||||||
|
|
||||||
### API Keys Required
|
|
||||||
|
|
||||||
At least **one** of these API keys must be configured:
|
|
||||||
|
|
||||||
- `ANTHROPIC_API_KEY` (Claude models) - **Recommended**
|
|
||||||
- `PERPLEXITY_API_KEY` (Research features) - **Highly recommended**
|
|
||||||
- `OPENAI_API_KEY` (GPT models)
|
|
||||||
- `GOOGLE_API_KEY` (Gemini models)
|
|
||||||
- `MISTRAL_API_KEY` (Mistral models)
|
|
||||||
- `OPENROUTER_API_KEY` (Multiple models)
|
|
||||||
- `XAI_API_KEY` (Grok models)
|
|
||||||
|
|
||||||
An API key is required for any provider used across any of the 3 roles defined in the `models` command.
|
|
||||||
|
|
||||||
### Model Configuration
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Interactive setup (recommended)
|
|
||||||
task-master models --setup
|
|
||||||
|
|
||||||
# Set specific models
|
|
||||||
task-master models --set-main claude-3-5-sonnet-20241022
|
|
||||||
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
|
|
||||||
task-master models --set-fallback gpt-4o-mini
|
|
||||||
```
|
|
||||||
|
|
||||||
## Task Structure & IDs
|
|
||||||
|
|
||||||
### Task ID Format
|
|
||||||
|
|
||||||
- Main tasks: `1`, `2`, `3`, etc.
|
|
||||||
- Subtasks: `1.1`, `1.2`, `2.1`, etc.
|
|
||||||
- Sub-subtasks: `1.1.1`, `1.1.2`, etc.
|
|
||||||
|
|
||||||
### Task Status Values
|
|
||||||
|
|
||||||
- `pending` - Ready to work on
|
|
||||||
- `in-progress` - Currently being worked on
|
|
||||||
- `done` - Completed and verified
|
|
||||||
- `deferred` - Postponed
|
|
||||||
- `cancelled` - No longer needed
|
|
||||||
- `blocked` - Waiting on external factors
|
|
||||||
|
|
||||||
### Task Fields
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"id": "1.2",
|
|
||||||
"title": "Implement user authentication",
|
|
||||||
"description": "Set up JWT-based auth system",
|
|
||||||
"status": "pending",
|
|
||||||
"priority": "high",
|
|
||||||
"dependencies": ["1.1"],
|
|
||||||
"details": "Use bcrypt for hashing, JWT for tokens...",
|
|
||||||
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
|
|
||||||
"subtasks": []
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Claude Code Best Practices with Task Master
|
|
||||||
|
|
||||||
### Context Management
|
|
||||||
|
|
||||||
- Use `/clear` between different tasks to maintain focus
|
|
||||||
- This CLAUDE.md file is automatically loaded for context
|
|
||||||
- Use `task-master show <id>` to pull specific task context when needed
|
|
||||||
|
|
||||||
### Iterative Implementation
|
|
||||||
|
|
||||||
1. `task-master show <subtask-id>` - Understand requirements
|
|
||||||
2. Explore codebase and plan implementation
|
|
||||||
3. `task-master update-subtask --id=<id> --prompt="detailed plan"` - Log plan
|
|
||||||
4. `task-master set-status --id=<id> --status=in-progress` - Start work
|
|
||||||
5. Implement code following logged plan
|
|
||||||
6. `task-master update-subtask --id=<id> --prompt="what worked/didn't work"` - Log progress
|
|
||||||
7. `task-master set-status --id=<id> --status=done` - Complete task
|
|
||||||
|
|
||||||
### Complex Workflows with Checklists
|
|
||||||
|
|
||||||
For large migrations or multi-step processes:
|
|
||||||
|
|
||||||
1. Create a markdown PRD file describing the new changes: `touch task-migration-checklist.md` (prds can be .txt or .md)
|
|
||||||
2. Use Taskmaster to parse the new prd with `task-master parse-prd --append` (also available in MCP)
|
|
||||||
3. Use Taskmaster to expand the newly generated tasks into subtasks. Consider running `analyze-complexity` with the correct `--from` and `--to` IDs (the newly added IDs) to identify the ideal number of subtasks for each task, then expand them.
|
|
||||||
4. Work through items systematically, checking them off as completed
|
|
||||||
5. Use `task-master update-subtask` to log progress on each task/subtask, and update or research them before/during implementation if you get stuck (a hypothetical end-to-end sequence is sketched below)
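A hypothetical end-to-end sequence (task IDs and exact flag shapes are illustrative; `parse-prd` is assumed to take the PRD path as its argument):

```bash
task-master parse-prd task-migration-checklist.md --append
task-master analyze-complexity --from=10 --to=15 --research
task-master expand --id=10
task-master update-subtask --id=10.1 --prompt="Notes from the first migration step"
```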
|
|
||||||
|
|
||||||
### Git Integration
|
|
||||||
|
|
||||||
Task Master works well with `gh` CLI:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Create PR for completed task
|
|
||||||
gh pr create --title "Complete task 1.2: User authentication" --body "Implements JWT auth system as specified in task 1.2"
|
|
||||||
|
|
||||||
# Reference task in commits
|
|
||||||
git commit -m "feat: implement JWT auth (task 1.2)"
|
|
||||||
```
|
|
||||||
|
|
||||||
### Parallel Development with Git Worktrees
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Create worktrees for parallel task development
|
|
||||||
git worktree add ../project-auth feature/auth-system
|
|
||||||
git worktree add ../project-api feature/api-refactor
|
|
||||||
|
|
||||||
# Run Claude Code in each worktree
|
|
||||||
cd ../project-auth && claude # Terminal 1: Auth work
|
|
||||||
cd ../project-api && claude # Terminal 2: API work
|
|
||||||
```
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### AI Commands Failing
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Check API keys are configured
|
|
||||||
cat .env # For CLI usage
|
|
||||||
|
|
||||||
# Verify model configuration
|
|
||||||
task-master models
|
|
||||||
|
|
||||||
# Test with different model
|
|
||||||
task-master models --set-fallback gpt-4o-mini
|
|
||||||
```
|
|
||||||
|
|
||||||
### MCP Connection Issues
|
|
||||||
|
|
||||||
- Check `.mcp.json` configuration
|
|
||||||
- Verify Node.js installation
|
|
||||||
- Use `--mcp-debug` flag when starting Claude Code
|
|
||||||
- Use CLI as fallback if MCP unavailable
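Quick checks from the shell:

```bash
cat .mcp.json        # confirm the Task Master server entry is present
node --version       # confirm Node.js is installed and on PATH
claude --mcp-debug   # start Claude Code with MCP debug output
```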
|
|
||||||
|
|
||||||
### Task File Sync Issues
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Regenerate task files from tasks.json
|
|
||||||
task-master generate
|
|
||||||
|
|
||||||
# Fix dependency issues
|
|
||||||
task-master fix-dependencies
|
|
||||||
```
|
|
||||||
|
|
||||||
DO NOT RE-INITIALIZE. That will not do anything beyond re-adding the same Taskmaster core files.
|
|
||||||
|
|
||||||
## Important Notes
|
|
||||||
|
|
||||||
### AI-Powered Operations
|
|
||||||
|
|
||||||
These commands make AI calls and may take up to a minute:
|
|
||||||
|
|
||||||
- `parse_prd` / `task-master parse-prd`
|
|
||||||
- `analyze_project_complexity` / `task-master analyze-complexity`
|
|
||||||
- `expand_task` / `task-master expand`
|
|
||||||
- `expand_all` / `task-master expand --all`
|
|
||||||
- `add_task` / `task-master add-task`
|
|
||||||
- `update` / `task-master update`
|
|
||||||
- `update_task` / `task-master update-task`
|
|
||||||
- `update_subtask` / `task-master update-subtask`
|
|
||||||
|
|
||||||
### File Management
|
|
||||||
|
|
||||||
- Never manually edit `tasks.json` - use commands instead
|
|
||||||
- Never manually edit `.taskmaster/config.json` - use `task-master models`
|
|
||||||
- Task markdown files in `tasks/` are auto-generated
|
|
||||||
- Run `task-master generate` after manual changes to tasks.json
|
|
||||||
|
|
||||||
### Claude Code Session Management
|
|
||||||
|
|
||||||
- Use `/clear` frequently to maintain focused context
|
|
||||||
- Create custom slash commands for repeated Task Master workflows
|
|
||||||
- Configure tool allowlist to streamline permissions
|
|
||||||
- Use headless mode for automation: `claude -p "task-master next"`
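For example, a recurring Task Master workflow can be saved as a project slash command (assuming the standard `.claude/commands/` directory; the command name and contents here are illustrative):

```bash
mkdir -p .claude/commands
cat > .claude/commands/tm-next.md << 'EOF'
Run `task-master next`, then `task-master show` on the returned ID,
and implement that task following this project's CLAUDE.md.
EOF

# Headless one-off run for automation
claude -p "task-master next"
```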
|
|
||||||
|
|
||||||
### Multi-Task Updates
|
|
||||||
|
|
||||||
- Use `update --from=<id>` to update multiple future tasks
|
|
||||||
- Use `update-task --id=<id>` for single task updates
|
|
||||||
- Use `update-subtask --id=<id>` for implementation logging
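For example (the `--prompt` flag mirrors the `update-subtask` usage shown earlier):

```bash
# Re-plan task 5 and all tasks after it
task-master update --from=5 --prompt="Switch the API layer from REST to tRPC"

# Adjust a single task
task-master update-task --id=5 --prompt="Use tRPC instead of REST"

# Append implementation notes to a subtask
task-master update-subtask --id=5.2 --prompt="tRPC router wired up; auth middleware pending"
```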
|
|
||||||
|
|
||||||
### Research Mode
|
|
||||||
|
|
||||||
- Add `--research` flag for research-based AI enhancement
|
|
||||||
- Requires a research model API key like Perplexity (`PERPLEXITY_API_KEY`) in environment
|
|
||||||
- Provides more informed task creation and updates
|
|
||||||
- Recommended for complex technical tasks
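Illustrative usage (flag shapes follow the patterns above):

```bash
task-master add-task --prompt="Add rate limiting to the public API" --research
task-master expand --id=7 --research
```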
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
_This guide ensures Claude Code has immediate access to Task Master's essential functionality for agentic development workflows._
|
|
||||||
@@ -1,44 +1,34 @@
|
|||||||
{
|
{
|
||||||
"models": {
|
"models": {
|
||||||
"main": {
|
"main": {
|
||||||
"provider": "claude-code",
|
"provider": "anthropic",
|
||||||
"modelId": "sonnet",
|
"modelId": "claude-sonnet-4-20250514",
|
||||||
"maxTokens": 64000,
|
"maxTokens": 50000,
|
||||||
"temperature": 0.2
|
"temperature": 0.2
|
||||||
},
|
},
|
||||||
"research": {
|
"research": {
|
||||||
"provider": "perplexity",
|
"provider": "perplexity",
|
||||||
"modelId": "sonar",
|
"modelId": "sonar-pro",
|
||||||
"maxTokens": 8700,
|
"maxTokens": 8700,
|
||||||
"temperature": 0.1
|
"temperature": 0.1
|
||||||
},
|
},
|
||||||
"fallback": {
|
"fallback": {
|
||||||
"provider": "anthropic",
|
"provider": "anthropic",
|
||||||
"modelId": "claude-3-7-sonnet-20250219",
|
"modelId": "claude-3-7-sonnet-20250219",
|
||||||
"maxTokens": 120000,
|
"maxTokens": 128000,
|
||||||
"temperature": 0.2
|
"temperature": 0.2
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"global": {
|
"global": {
|
||||||
|
"userId": "1234567890",
|
||||||
"logLevel": "info",
|
"logLevel": "info",
|
||||||
"debug": false,
|
"debug": false,
|
||||||
"defaultNumTasks": 10,
|
|
||||||
"defaultSubtasks": 5,
|
"defaultSubtasks": 5,
|
||||||
"defaultPriority": "medium",
|
"defaultPriority": "medium",
|
||||||
"projectName": "Taskmaster",
|
"projectName": "Taskmaster",
|
||||||
"ollamaBaseURL": "http://localhost:11434/api",
|
"ollamaBaseURL": "http://localhost:11434/api",
|
||||||
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
|
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
|
||||||
"responseLanguage": "English",
|
|
||||||
"enableCodebaseAnalysis": true,
|
|
||||||
"userId": "1234567890",
|
|
||||||
"azureBaseURL": "https://your-endpoint.azure.com/",
|
"azureBaseURL": "https://your-endpoint.azure.com/",
|
||||||
"defaultTag": "master"
|
"defaultTag": "master"
|
||||||
},
|
|
||||||
"claudeCode": {},
|
|
||||||
"codexCli": {},
|
|
||||||
"grokCli": {
|
|
||||||
"timeout": 120000,
|
|
||||||
"workingDirectory": null,
|
|
||||||
"defaultModel": "grok-4-latest"
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -1,188 +0,0 @@
|
|||||||
# Task Master Migration Roadmap
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.
|
|
||||||
|
|
||||||
## Architecture Vision
|
|
||||||
|
|
||||||
```
|
|
||||||
┌─────────────────────────────────────────────────┐
|
|
||||||
│ User Interfaces │
|
|
||||||
├──────────┬──────────┬──────────┬────────────────┤
|
|
||||||
│ @tm/cli │ @tm/mcp │ @tm/ext │ @tm/web │
|
|
||||||
│ (CLI) │ (MCP) │ (VSCode)│ (Future) │
|
|
||||||
└──────────┴──────────┴──────────┴────────────────┘
|
|
||||||
│
|
|
||||||
▼
|
|
||||||
┌──────────────────────┐
|
|
||||||
│ @tm/core │
|
|
||||||
│ (Business Logic) │
|
|
||||||
└──────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
## Migration Phases
|
|
||||||
|
|
||||||
### Phase 1: Core Extraction ✅ (In Progress)
|
|
||||||
**Goal**: Move all business logic to @tm/core
|
|
||||||
|
|
||||||
- [x] Create @tm/core package structure
|
|
||||||
- [x] Move types and interfaces
|
|
||||||
- [x] Implement TaskMasterCore facade
|
|
||||||
- [x] Move storage adapters
|
|
||||||
- [x] Move task services
|
|
||||||
- [ ] Move AI providers
|
|
||||||
- [ ] Move parser logic
|
|
||||||
- [ ] Complete test coverage
|
|
||||||
|
|
||||||
### Phase 2: CLI Package Creation 🚧 (Started)
|
|
||||||
**Goal**: Create @tm/cli as a thin presentation layer
|
|
||||||
|
|
||||||
- [x] Create @tm/cli package structure
|
|
||||||
- [x] Implement Command interface pattern
|
|
||||||
- [x] Create CommandRegistry
|
|
||||||
- [x] Build legacy bridge/adapter
|
|
||||||
- [x] Migrate list-tasks command
|
|
||||||
- [ ] Migrate remaining commands one by one
|
|
||||||
- [ ] Remove UI logic from core
|
|
||||||
|
|
||||||
### Phase 3: Transitional Integration
|
|
||||||
**Goal**: Use new packages in existing scripts without breaking changes
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
// scripts/modules/commands.js gradually adopts new commands
|
|
||||||
import { ListTasksCommand } from '@tm/cli';
|
|
||||||
const listCommand = new ListTasksCommand();
|
|
||||||
|
|
||||||
// Old interface remains the same
|
|
||||||
programInstance
|
|
||||||
.command('list')
|
|
||||||
.action(async (options) => {
|
|
||||||
// Use new command internally
|
|
||||||
const result = await listCommand.execute(convertOptions(options));
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Phase 4: MCP Package
|
|
||||||
**Goal**: Separate MCP server as its own package
|
|
||||||
|
|
||||||
- [ ] Create @tm/mcp package
|
|
||||||
- [ ] Move MCP server code
|
|
||||||
- [ ] Use @tm/core for all logic
|
|
||||||
- [ ] MCP becomes a thin RPC layer
|
|
||||||
|
|
||||||
### Phase 5: Complete Migration
|
|
||||||
**Goal**: Remove old scripts, pure monorepo
|
|
||||||
|
|
||||||
- [ ] All commands migrated to @tm/cli
|
|
||||||
- [ ] Remove scripts/modules/task-manager/*
|
|
||||||
- [ ] Remove scripts/modules/commands.js
|
|
||||||
- [ ] Update bin/task-master.js to use @tm/cli
|
|
||||||
- [ ] Clean up dependencies
|
|
||||||
|
|
||||||
## Current Transitional Strategy
|
|
||||||
|
|
||||||
### 1. Adapter Pattern (commands-adapter.js)
|
|
||||||
```javascript
|
|
||||||
// Checks if new CLI is available and uses it
|
|
||||||
// Falls back to legacy implementation if not
|
|
||||||
export async function listTasksAdapter(...args) {
|
|
||||||
if (cliAvailable) {
|
|
||||||
return useNewImplementation(...args);
|
|
||||||
}
|
|
||||||
return useLegacyImplementation(...args);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Command Bridge Pattern
|
|
||||||
```javascript
|
|
||||||
// Allows new commands to work in old code
|
|
||||||
const bridge = new CommandBridge(new ListTasksCommand());
|
|
||||||
const data = await bridge.run(legacyOptions); // Legacy style
|
|
||||||
const result = await bridge.execute(newOptions); // New style
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Gradual File Migration
|
|
||||||
Instead of big-bang refactoring:
|
|
||||||
1. Create new implementation in @tm/cli
|
|
||||||
2. Add adapter in commands-adapter.js
|
|
||||||
3. Update commands.js to use adapter
|
|
||||||
4. Test both paths work
|
|
||||||
5. Eventually remove adapter when all migrated
|
|
||||||
|
|
||||||
## Benefits of This Approach
|
|
||||||
|
|
||||||
1. **No Breaking Changes**: Existing CLI continues to work
|
|
||||||
2. **Incremental PRs**: Each command can be migrated separately
|
|
||||||
3. **Parallel Development**: New features can use new architecture
|
|
||||||
4. **Easy Rollback**: Can disable new implementation if issues
|
|
||||||
5. **Clear Separation**: Business logic (core) vs presentation (cli/mcp/etc)
|
|
||||||
|
|
||||||
## Example PR Sequence
|
|
||||||
|
|
||||||
### PR 1: Core Package Setup ✅
|
|
||||||
- Create @tm/core
|
|
||||||
- Move types and interfaces
|
|
||||||
- Basic TaskMasterCore implementation
|
|
||||||
|
|
||||||
### PR 2: CLI Package Foundation ✅
|
|
||||||
- Create @tm/cli
|
|
||||||
- Command interface and registry
|
|
||||||
- Legacy bridge utilities
|
|
||||||
|
|
||||||
### PR 3: First Command Migration
|
|
||||||
- Migrate list-tasks to new system
|
|
||||||
- Add adapter in scripts
|
|
||||||
- Test both implementations
|
|
||||||
|
|
||||||
### PR 4-N: Migrate Commands One by One
|
|
||||||
- Each PR migrates 1-2 related commands
|
|
||||||
- Small, reviewable changes
|
|
||||||
- Continuous delivery
|
|
||||||
|
|
||||||
### Final PR: Cleanup
|
|
||||||
- Remove legacy implementations
|
|
||||||
- Remove adapters
|
|
||||||
- Update documentation
|
|
||||||
|
|
||||||
## Testing Strategy
|
|
||||||
|
|
||||||
### Dual Testing During Migration
|
|
||||||
```javascript
|
|
||||||
describe('List Tasks', () => {
|
|
||||||
it('works with legacy implementation', async () => {
|
|
||||||
// Force legacy
|
|
||||||
const result = await legacyListTasks(...);
|
|
||||||
expect(result).toBeDefined();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('works with new implementation', async () => {
|
|
||||||
// Force new
|
|
||||||
const command = new ListTasksCommand();
|
|
||||||
const result = await command.execute(...);
|
|
||||||
expect(result.success).toBe(true);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('adapter chooses correctly', async () => {
|
|
||||||
// Let adapter decide
|
|
||||||
const result = await listTasksAdapter(...);
|
|
||||||
expect(result).toBeDefined();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
## Success Metrics
|
|
||||||
|
|
||||||
- [ ] All commands migrated without breaking changes
|
|
||||||
- [ ] Test coverage maintained or improved
|
|
||||||
- [ ] Performance maintained or improved
|
|
||||||
- [ ] Cleaner, more maintainable codebase
|
|
||||||
- [ ] Easy to add new interfaces (web, desktop, etc.)
|
|
||||||
|
|
||||||
## Notes for Contributors
|
|
||||||
|
|
||||||
1. **Keep PRs Small**: Migrate one command at a time
|
|
||||||
2. **Test Both Paths**: Ensure legacy and new both work
|
|
||||||
3. **Document Changes**: Update this roadmap as you go
|
|
||||||
4. **Communicate**: Discuss in PRs if architecture needs adjustment
|
|
||||||
|
|
||||||
This is a living document - update as the migration progresses!
|
|
||||||
@@ -1,912 +0,0 @@
|
|||||||
## Summary
|
|
||||||
|
|
||||||
- Put the existing git and test workflows on rails: a repeatable, automated process that can run autonomously, with guardrails and a compact TUI for visibility.
|
|
||||||
|
|
||||||
- Flow: for a selected task, create a branch named with the tag + task id → generate tests for the first subtask (red) using the Surgical Test Generator → implement code (green) → verify tests → commit → repeat per subtask → final verify → push → open PR against the default branch.
|
|
||||||
|
|
||||||
- Build on existing rules: .cursor/rules/git_workflow.mdc, .cursor/rules/test_workflow.mdc, .claude/agents/surgical-test-generator.md, and existing CLI/core services.
|
|
||||||
|
|
||||||
## Goals
|
|
||||||
|
|
||||||
- Deterministic, resumable automation to execute the TDD loop per subtask with minimal human intervention.
|
|
||||||
|
|
||||||
- Strong guardrails: never commit to the default branch; only commit when tests pass; enforce status transitions; persist logs/state for debuggability.
|
|
||||||
|
|
||||||
- Visibility: a compact terminal UI (like lazygit) to pick tag, view tasks, and start work; right-side pane opens an executor terminal (via tmux) for agent coding.
|
|
||||||
|
|
||||||
- Extensible: framework-agnostic test generation via the Surgical Test Generator; detect and use the repo’s test command for execution with coverage thresholds.
|
|
||||||
|
|
||||||
## Non‑Goals (initial)
|
|
||||||
|
|
||||||
- Full multi-language runner parity beyond detection and executing the project’s test command.
|
|
||||||
|
|
||||||
- Complex GUI; start with CLI/TUI + tmux pane. IDE/extension can hook into the same state later.
|
|
||||||
|
|
||||||
- Rich executor selection UX (codex/gemini/claude) — we’ll prompt per run; defaults can come later.
|
|
||||||
|
|
||||||
## Success Criteria
|
|
||||||
|
|
||||||
- One command can autonomously complete a task's subtasks via TDD and open a PR when done.
|
|
||||||
|
|
||||||
- All commits made on a branch that includes the tag and task id (see Branch Naming); no commits to the default branch directly.
|
|
||||||
|
|
||||||
- Every subtask iteration: failing tests added first (red), then code added to pass them (green), commit only after green.
|
|
||||||
|
|
||||||
- End-to-end logs + artifacts stored in .taskmaster/reports/runs/<timestamp-or-id>/.
|
|
||||||
|
|
||||||
## Success Metrics (Phase 1)
|
|
||||||
|
|
||||||
- **Adoption**: 80% of tasks in a pilot repo completed via `tm autopilot`
|
|
||||||
- **Safety**: 0 commits to default branch; 100% of commits have green tests
|
|
||||||
- **Efficiency**: Average time from task start to PR < 30min for simple subtasks
|
|
||||||
- **Reliability**: < 5% of runs require manual intervention (timeout/conflicts)
|
|
||||||
|
|
||||||
## User Stories
|
|
||||||
|
|
||||||
- As a developer, I can run tm autopilot <taskId> and watch a structured, safe workflow execute.
|
|
||||||
|
|
||||||
- As a reviewer, I can inspect commits per subtask, and a PR summarizing the work when the task completes.
|
|
||||||
|
|
||||||
- As an operator, I can see current step, active subtask, tests status, and logs in a compact CLI view and read a final run report.
|
|
||||||
|
|
||||||
## Example Workflow Traces
|
|
||||||
|
|
||||||
### Happy Path: Complete a 3-subtask feature
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Developer starts
|
|
||||||
$ tm autopilot 42
|
|
||||||
→ Checks preflight: ✓ clean tree, ✓ npm test detected
|
|
||||||
→ Creates branch: analytics/task-42-user-metrics
|
|
||||||
→ Subtask 42.1: "Add metrics schema"
|
|
||||||
RED: generates test_metrics_schema.test.js → 3 failures
|
|
||||||
GREEN: implements schema.js → all pass
|
|
||||||
COMMIT: "feat(metrics): add metrics schema (task 42.1)"
|
|
||||||
→ Subtask 42.2: "Add collection endpoint"
|
|
||||||
RED: generates test_metrics_endpoint.test.js → 5 failures
|
|
||||||
GREEN: implements api/metrics.js → all pass
|
|
||||||
COMMIT: "feat(metrics): add collection endpoint (task 42.2)"
|
|
||||||
→ Subtask 42.3: "Add dashboard widget"
|
|
||||||
RED: generates test_metrics_widget.test.js → 4 failures
|
|
||||||
GREEN: implements components/MetricsWidget.jsx → all pass
|
|
||||||
COMMIT: "feat(metrics): add dashboard widget (task 42.3)"
|
|
||||||
→ Final: all 3 subtasks complete
|
|
||||||
✓ Run full test suite → all pass
|
|
||||||
✓ Coverage check → 85% (meets 80% threshold)
|
|
||||||
PUSH: confirms with user → pushed to origin
|
|
||||||
PR: opens #123 "Task #42 [analytics]: User metrics tracking"
|
|
||||||
|
|
||||||
✓ Task 42 complete. PR: https://github.com/org/repo/pull/123
|
|
||||||
Run report: .taskmaster/reports/runs/2025-01-15-142033/
|
|
||||||
```
|
|
||||||
|
|
||||||
### Error Recovery: Failing tests timeout
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ tm autopilot 42
|
|
||||||
→ Subtask 42.2 GREEN phase: attempt 1 fails (2 tests still red)
|
|
||||||
→ Subtask 42.2 GREEN phase: attempt 2 fails (1 test still red)
|
|
||||||
→ Subtask 42.2 GREEN phase: attempt 3 fails (1 test still red)
|
|
||||||
|
|
||||||
⚠️ Paused: Could not achieve green state after 3 attempts
|
|
||||||
📋 State saved to: .taskmaster/reports/runs/2025-01-15-142033/
|
|
||||||
Last error: "POST /api/metrics returns 500 instead of 201"
|
|
||||||
|
|
||||||
Next steps:
|
|
||||||
- Review diff: git diff HEAD
|
|
||||||
- Inspect logs: cat .taskmaster/reports/runs/2025-01-15-142033/log.jsonl
|
|
||||||
- Check test output: cat .taskmaster/reports/runs/2025-01-15-142033/test-results/subtask-42.2-green-attempt3.json
|
|
||||||
- Resume after manual fix: tm autopilot --resume
|
|
||||||
|
|
||||||
# Developer manually fixes the issue, then:
|
|
||||||
$ tm autopilot --resume
|
|
||||||
→ Resuming subtask 42.2 GREEN phase
|
|
||||||
GREEN: all tests pass
|
|
||||||
COMMIT: "feat(metrics): add collection endpoint (task 42.2)"
|
|
||||||
→ Continuing to subtask 42.3...
|
|
||||||
```
|
|
||||||
|
|
||||||
### Dry Run: Preview before execution
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ tm autopilot 42 --dry-run
|
|
||||||
Autopilot Plan for Task #42 [analytics]: User metrics tracking
|
|
||||||
─────────────────────────────────────────────────────────────
|
|
||||||
Preflight:
|
|
||||||
✓ Working tree is clean
|
|
||||||
✓ Test command detected: npm test
|
|
||||||
✓ Tools available: git, gh, node, npm
|
|
||||||
✓ Current branch: main (will create new branch)
|
|
||||||
|
|
||||||
Branch & Tag:
|
|
||||||
→ Create branch: analytics/task-42-user-metrics
|
|
||||||
→ Set active tag: analytics
|
|
||||||
|
|
||||||
Subtasks (3 pending):
|
|
||||||
1. 42.1: Add metrics schema
|
|
||||||
- RED: generate tests in src/__tests__/schema.test.js
|
|
||||||
- GREEN: implement src/schema.js
|
|
||||||
- COMMIT: "feat(metrics): add metrics schema (task 42.1)"
|
|
||||||
|
|
||||||
2. 42.2: Add collection endpoint [depends on 42.1]
|
|
||||||
- RED: generate tests in src/api/__tests__/metrics.test.js
|
|
||||||
- GREEN: implement src/api/metrics.js
|
|
||||||
- COMMIT: "feat(metrics): add collection endpoint (task 42.2)"
|
|
||||||
|
|
||||||
3. 42.3: Add dashboard widget [depends on 42.2]
|
|
||||||
- RED: generate tests in src/components/__tests__/MetricsWidget.test.jsx
|
|
||||||
- GREEN: implement src/components/MetricsWidget.jsx
|
|
||||||
- COMMIT: "feat(metrics): add dashboard widget (task 42.3)"
|
|
||||||
|
|
||||||
Finalization:
|
|
||||||
→ Run full test suite with coverage
|
|
||||||
→ Push branch to origin (will confirm)
|
|
||||||
→ Create PR targeting main
|
|
||||||
|
|
||||||
Run without --dry-run to execute.
|
|
||||||
```
|
|
||||||
|
|
||||||
## High‑Level Workflow
|
|
||||||
|
|
||||||
1) Pre‑flight
|
|
||||||
|
|
||||||
- Verify clean working tree or confirm staging/commit policy (configurable).
|
|
||||||
|
|
||||||
- Detect repo type and the project’s test command (e.g., npm test, pnpm test, pytest, go test).
|
|
||||||
|
|
||||||
- Validate tools: git, gh (optional for PR), node/npm, and (if used) claude CLI.
|
|
||||||
|
|
||||||
- Load TaskMaster state and selected task; if no subtasks exist, automatically run “expand” before working.
|
|
||||||
|
|
||||||
2) Branch & Tag Setup
|
|
||||||
|
|
||||||
- Checkout default branch and update (optional), then create a branch using Branch Naming (below).
|
|
||||||
|
|
||||||
- Map branch ↔ tag via existing tag management; explicitly set active tag to the branch’s tag.
|
|
||||||
|
|
||||||
3) Subtask Loop (for each pending/in-progress subtask in dependency order)
|
|
||||||
|
|
||||||
- Select next eligible subtask using tm-core TaskService getNextTask() and subtask eligibility logic.
|
|
||||||
|
|
||||||
- Red: generate or update failing tests for the subtask
|
|
||||||
|
|
||||||
- Use the Surgical Test Generator system prompt (`.claude/agents/surgical-test-generator.md`) to produce high-signal tests following project conventions.
|
|
||||||
|
|
||||||
- Run tests to confirm red; record results. If not red (already passing), skip to next subtask or escalate.
|
|
||||||
|
|
||||||
- Green: implement code to pass tests
|
|
||||||
|
|
||||||
- Use executor to implement changes (initial: claude CLI prompt with focused context).
|
|
||||||
|
|
||||||
- Re-run tests until green or timeout/backoff policy triggers.
|
|
||||||
|
|
||||||
- Commit: when green
|
|
||||||
|
|
||||||
- Commit tests + code with conventional commit message. Optionally update subtask status to done.
|
|
||||||
|
|
||||||
- Persist run step metadata/logs.
|
|
||||||
|
|
||||||
4) Finalization
|
|
||||||
|
|
||||||
- Run full test suite and coverage (if configured); optionally lint/format.
|
|
||||||
|
|
||||||
- Commit any final adjustments.
|
|
||||||
|
|
||||||
- Push branch (ask user to confirm); create PR (via gh pr create) targeting the default branch. Title format: Task #<id> [<tag>]: <title>.
|
|
||||||
|
|
||||||
5) Post‑Run
|
|
||||||
|
|
||||||
- Update task status if desired (e.g., review).
|
|
||||||
|
|
||||||
- Persist run report (JSON + markdown summary) to .taskmaster/reports/runs/<run-id>/.
|
|
||||||
|
|
||||||
## Guardrails
|
|
||||||
|
|
||||||
- Never commit to the default branch.
|
|
||||||
|
|
||||||
- Commit only if all tests (targeted and suite) pass; allow override flags.
|
|
||||||
|
|
||||||
- Enforce 80% coverage thresholds (lines/branches/functions/statements) by default; configurable.
|
|
||||||
|
|
||||||
- Time-box model operations and retries; if not green within N attempts, pause with actionable state so the run can be resumed.
|
|
||||||
|
|
||||||
- Always log actions, commands, and outcomes; include dry-run mode.
|
|
||||||
|
|
||||||
- Ask before branch creation, pushing, and opening a PR unless --no-confirm is set.
|
|
||||||
|
|
||||||
## Integration Points (Current Repo)
|
|
||||||
|
|
||||||
- CLI: apps/cli provides command structure and UI components.
|
|
||||||
|
|
||||||
- New command: tm autopilot (alias: task-master autopilot).
|
|
||||||
|
|
||||||
- Reuse UI components under apps/cli/src/ui/components/ for headers/task details/next-task.
|
|
||||||
|
|
||||||
- Core services: packages/tm-core
|
|
||||||
|
|
||||||
- TaskService for selection, status, tags.
|
|
||||||
|
|
||||||
- TaskExecutionService for prompt formatting and executor prep.
|
|
||||||
|
|
||||||
- Executors: claude executor and ExecutorFactory to run external tools.
|
|
||||||
|
|
||||||
- Proposed new: WorkflowOrchestrator to drive the autonomous loop and emit progress events.
|
|
||||||
|
|
||||||
- Tag/Git utilities: scripts/modules/utils/git-utils.js and scripts/modules/task-manager/tag-management.js for branch→tag mapping and explicit tag switching.
|
|
||||||
|
|
||||||
- Rules: .cursor/rules/git_workflow.mdc and .cursor/rules/test_workflow.mdc to steer behavior and ensure consistency.
|
|
||||||
|
|
||||||
- Test generation prompt: .claude/agents/surgical-test-generator.md.
|
|
||||||
|
|
||||||
## Proposed Components
|
|
||||||
|
|
||||||
- Orchestrator (tm-core): WorkflowOrchestrator (new)
|
|
||||||
|
|
||||||
- State machine driving phases: Preflight → Branch/Tag → SubtaskIter (Red/Green/Commit) → Finalize → PR.
|
|
||||||
|
|
||||||
- Exposes an evented API (progress events) that the CLI can render.
|
|
||||||
|
|
||||||
- Stores run state artifacts.
|
|
||||||
|
|
||||||
- Test Runner Adapter
|
|
||||||
|
|
||||||
- Detects and runs tests via the project’s test command (e.g., npm test), with targeted runs where feasible.
|
|
||||||
|
|
||||||
- API: runTargeted(files/pattern), runAll(), report summary (failures, duration, coverage), enforce 80% threshold by default.
|
|
||||||
|
|
||||||
- Git/PR Adapter
|
|
||||||
|
|
||||||
- Encapsulates git ops: branch create/checkout, add/commit, push.
|
|
||||||
|
|
||||||
- Optional gh integration to open PR; fallback to instructions if gh unavailable.
|
|
||||||
|
|
||||||
- Confirmation gates for branch creation and pushes.
|
|
||||||
|
|
||||||
- Prompt/Exec Adapter
|
|
||||||
|
|
||||||
- Uses existing executor service to call the selected coding assistant (initially claude) with tight prompts: task/subtask context, surgical tests first, then minimal code to green.
|
|
||||||
|
|
||||||
- Run State + Reporting
|
|
||||||
|
|
||||||
- JSONL log of steps, timestamps, commands, test results.
|
|
||||||
|
|
||||||
- Markdown summary for PR description and post-run artifact.
|
|
||||||
|
|
||||||
## CLI UX (MVP)
|
|
||||||
|
|
||||||
- Command: tm autopilot [taskId]
|
|
||||||
|
|
||||||
- Flags: --dry-run, --no-push, --no-pr, --no-confirm, --force, --max-attempts <n>, --runner <auto|custom>, --commit-scope <scope>
|
|
||||||
|
|
||||||
- Output: compact header (project, tag, branch), current phase, subtask line, last test summary, next actions.
|
|
||||||
|
|
||||||
- Resume: If interrupted, tm autopilot --resume picks up from last checkpoint in run state.
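Illustrative invocations combining the flags above:

```bash
# Preview the plan without touching the repo
tm autopilot 42 --dry-run

# Run without opening a PR and allow more green attempts
tm autopilot 42 --no-pr --max-attempts 5

# Pick up an interrupted run from its last checkpoint
tm autopilot --resume
```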
|
|
||||||
|
|
||||||
### TUI with tmux (Linear Execution)
|
|
||||||
|
|
||||||
- Left pane: Tag selector, task list (status/priority), start/expand shortcuts; "Start" triggers the next task or a selected task.
|
|
||||||
|
|
||||||
- Right pane: Executor terminal (tmux split) that runs the coding agent (claude-code/codex). Autopilot can hand over to the right pane during green.
|
|
||||||
|
|
||||||
- MCP integration: use MCP tools for task queries/updates and for shell/test invocations where available.
|
|
||||||
|
|
||||||
## TUI Layout (tmux-based)
|
|
||||||
|
|
||||||
### Pane Structure
|
|
||||||
|
|
||||||
```
|
|
||||||
┌─────────────────────────────────────┬──────────────────────────────────┐
|
|
||||||
│ Task Navigator (left) │ Executor Terminal (right) │
|
|
||||||
│ │ │
|
|
||||||
│ Project: my-app │ $ tm autopilot --executor-mode │
|
|
||||||
│ Branch: analytics/task-42 │ > Running subtask 42.2 GREEN... │
|
|
||||||
│ Tag: analytics │ > Implementing endpoint... │
|
|
||||||
│ │ > Tests: 3 passed, 0 failed │
|
|
||||||
│ Tasks: │ > Ready to commit │
|
|
||||||
│ → 42 [in-progress] User metrics │ │
|
|
||||||
│ → 42.1 [done] Schema │ [Live output from Claude Code] │
|
|
||||||
│ → 42.2 [active] Endpoint ◀ │ │
|
|
||||||
│ → 42.3 [pending] Dashboard │ │
|
|
||||||
│ │ │
|
|
||||||
│ [s] start [p] pause [q] quit │ │
|
|
||||||
└─────────────────────────────────────┴──────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
### Implementation Notes
|
|
||||||
|
|
||||||
- **Left pane**: `apps/cli/src/ui/tui/navigator.ts` (new, uses `blessed` or `ink`)
|
|
||||||
- **Right pane**: spawned via `tmux split-window -h` running `tm autopilot --executor-mode`
|
|
||||||
- **Communication**: shared state file `.taskmaster/state/current-run.json` + file watching or event stream
|
|
||||||
- **Keybindings**:
|
|
||||||
- `s` - Start selected task
|
|
||||||
- `p` - Pause/resume current run
|
|
||||||
- `q` - Quit (with confirmation if run active)
|
|
||||||
- `↑/↓` - Navigate task list
|
|
||||||
- `Enter` - Expand/collapse subtasks
|
|
||||||
|
|
||||||
## Prompt Composition (Detailed)
|
|
||||||
|
|
||||||
### System Prompt Assembly
|
|
||||||
|
|
||||||
Prompts are composed in three layers:
|
|
||||||
|
|
||||||
1. **Base rules** (loaded in order from `.cursor/rules/` and `.claude/agents/`):
|
|
||||||
- `git_workflow.mdc` → git commit conventions, branch policy, PR guidelines
|
|
||||||
- `test_workflow.mdc` → TDD loop requirements, coverage thresholds, test structure
|
|
||||||
- `surgical-test-generator.md` → test generation methodology, project-specific test patterns
|
|
||||||
|
|
||||||
2. **Task context injection**:
|
|
||||||
```
|
|
||||||
You are implementing:
|
|
||||||
Task #42 [analytics]: User metrics tracking
|
|
||||||
Subtask 42.2: Add collection endpoint
|
|
||||||
|
|
||||||
Description:
|
|
||||||
Implement POST /api/metrics endpoint to collect user metrics events
|
|
||||||
|
|
||||||
Acceptance criteria:
|
|
||||||
- POST /api/metrics accepts { userId, eventType, timestamp }
|
|
||||||
- Validates input schema (reject missing/invalid fields)
|
|
||||||
- Persists to database
|
|
||||||
- Returns 201 on success with created record
|
|
||||||
- Returns 400 on validation errors
|
|
||||||
|
|
||||||
Dependencies:
|
|
||||||
- Subtask 42.1 (metrics schema) is complete
|
|
||||||
|
|
||||||
Current phase: RED (generate failing tests)
|
|
||||||
Test command: npm test
|
|
||||||
Test file convention: src/**/*.test.js (vitest framework detected)
|
|
||||||
Branch: analytics/task-42-user-metrics
|
|
||||||
Project language: JavaScript (Node.js)
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Phase-specific instructions**:
|
|
||||||
- **RED phase**: "Generate minimal failing tests for this subtask. Do NOT implement any production code. Only create test files. Confirm tests fail with clear error messages indicating missing implementation."
|
|
||||||
- **GREEN phase**: "Implement minimal code to pass the failing tests. Follow existing project patterns in `src/`. Only modify files necessary for this subtask. Keep changes focused and reviewable."
|
|
||||||
|
|
||||||
### Example Full Prompt (RED Phase)
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
<SYSTEM PROMPT>
|
|
||||||
[Contents of .cursor/rules/git_workflow.mdc]
|
|
||||||
[Contents of .cursor/rules/test_workflow.mdc]
|
|
||||||
[Contents of .claude/agents/surgical-test-generator.md]
|
|
||||||
|
|
||||||
<TASK CONTEXT>
|
|
||||||
You are implementing:
|
|
||||||
Task #42.2: Add collection endpoint
|
|
||||||
|
|
||||||
Description:
|
|
||||||
Implement POST /api/metrics endpoint to collect user metrics events
|
|
||||||
|
|
||||||
Acceptance criteria:
|
|
||||||
- POST /api/metrics accepts { userId, eventType, timestamp }
|
|
||||||
- Validates input schema (reject missing/invalid fields)
|
|
||||||
- Persists to database using MetricsSchema from subtask 42.1
|
|
||||||
- Returns 201 on success with created record
|
|
||||||
- Returns 400 on validation errors with details
|
|
||||||
|
|
||||||
Dependencies: Subtask 42.1 (metrics schema) is complete
|
|
||||||
|
|
||||||
<INSTRUCTION>
|
|
||||||
Generate failing tests for this subtask. Follow project conventions:
|
|
||||||
- Test file: src/api/__tests__/metrics.test.js
|
|
||||||
- Framework: vitest (detected from package.json)
|
|
||||||
- Test cases to cover:
|
|
||||||
* POST /api/metrics with valid payload → should return 201 (will fail: endpoint not implemented)
|
|
||||||
* POST /api/metrics with missing userId → should return 400 (will fail: validation not implemented)
|
|
||||||
* POST /api/metrics with invalid timestamp → should return 400 (will fail: validation not implemented)
|
|
||||||
* POST /api/metrics should persist to database → should save record (will fail: persistence not implemented)
|
|
||||||
|
|
||||||
Do NOT implement the endpoint code yet. Only create test file(s).
|
|
||||||
Confirm tests fail with messages like "Cannot POST /api/metrics" or "endpoint not defined".
|
|
||||||
|
|
||||||
Output format:
|
|
||||||
1. File path to create: src/api/__tests__/metrics.test.js
|
|
||||||
2. Complete test code
|
|
||||||
3. Command to run: npm test src/api/__tests__/metrics.test.js
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example Full Prompt (GREEN Phase)
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
<SYSTEM PROMPT>
|
|
||||||
[Contents of .cursor/rules/git_workflow.mdc]
|
|
||||||
[Contents of .cursor/rules/test_workflow.mdc]
|
|
||||||
|
|
||||||
<TASK CONTEXT>
|
|
||||||
Task #42.2: Add collection endpoint
|
|
||||||
[same context as RED phase]
|
|
||||||
|
|
||||||
<CURRENT STATE>
|
|
||||||
Tests created in RED phase:
|
|
||||||
- src/api/__tests__/metrics.test.js
|
|
||||||
- 5 tests written, all failing as expected
|
|
||||||
|
|
||||||
Test output:
|
|
||||||
```
|
|
||||||
FAIL src/api/__tests__/metrics.test.js
|
|
||||||
POST /api/metrics
|
|
||||||
✗ should return 201 with valid payload (endpoint not found)
|
|
||||||
✗ should return 400 with missing userId (endpoint not found)
|
|
||||||
✗ should return 400 with invalid timestamp (endpoint not found)
|
|
||||||
✗ should persist to database (endpoint not found)
|
|
||||||
```
|
|
||||||
|
|
||||||
<INSTRUCTION>
|
|
||||||
Implement minimal code to make all tests pass.
|
|
||||||
|
|
||||||
Guidelines:
|
|
||||||
- Create/modify file: src/api/metrics.js
|
|
||||||
- Use existing patterns from src/api/ (e.g., src/api/users.js for reference)
|
|
||||||
- Import MetricsSchema from subtask 42.1 (src/models/schema.js)
|
|
||||||
- Implement validation, persistence, and response handling
|
|
||||||
- Follow project error handling conventions
|
|
||||||
- Keep implementation focused on this subtask only
|
|
||||||
|
|
||||||
After implementation:
|
|
||||||
1. Run tests: npm test src/api/__tests__/metrics.test.js
|
|
||||||
2. Confirm all 5 tests pass
|
|
||||||
3. Report results
|
|
||||||
|
|
||||||
Output format:
|
|
||||||
1. File(s) created/modified
|
|
||||||
2. Implementation code
|
|
||||||
3. Test command and results
|
|
||||||
```
|
|
||||||
|
|
||||||
### Prompt Loading Configuration
|
|
||||||
|
|
||||||
See `.taskmaster/config.json` → `prompts` section for paths and load order.
|
|
||||||
|
|
||||||
## Configuration Schema
|
|
||||||
|
|
||||||
### .taskmaster/config.json
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"autopilot": {
|
|
||||||
"enabled": true,
|
|
||||||
"requireCleanWorkingTree": true,
|
|
||||||
"commitTemplate": "{type}({scope}): {msg}",
|
|
||||||
"defaultCommitType": "feat",
|
|
||||||
"maxGreenAttempts": 3,
|
|
||||||
"testTimeout": 300000
|
|
||||||
},
|
|
||||||
"test": {
|
|
||||||
"runner": "auto",
|
|
||||||
"coverageThresholds": {
|
|
||||||
"lines": 80,
|
|
||||||
"branches": 80,
|
|
||||||
"functions": 80,
|
|
||||||
"statements": 80
|
|
||||||
},
|
|
||||||
"targetedRunPattern": "**/*.test.js"
|
|
||||||
},
|
|
||||||
"git": {
|
|
||||||
"branchPattern": "{tag}/task-{id}-{slug}",
|
|
||||||
"pr": {
|
|
||||||
"enabled": true,
|
|
||||||
"base": "default",
|
|
||||||
"bodyTemplate": ".taskmaster/templates/pr-body.md"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"prompts": {
|
|
||||||
"rulesPath": ".cursor/rules",
|
|
||||||
"testGeneratorPath": ".claude/agents/surgical-test-generator.md",
|
|
||||||
"loadOrder": ["git_workflow.mdc", "test_workflow.mdc"]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Configuration Fields
|
|
||||||
|
|
||||||
#### autopilot
|
|
||||||
- `enabled` (boolean): Enable/disable autopilot functionality
|
|
||||||
- `requireCleanWorkingTree` (boolean): Require clean git state before starting
|
|
||||||
- `commitTemplate` (string): Template for commit messages (tokens: `{type}`, `{scope}`, `{msg}`)
|
|
||||||
- `defaultCommitType` (string): Default commit type (feat, fix, chore, etc.)
|
|
||||||
- `maxGreenAttempts` (number): Maximum retry attempts to achieve green tests (default: 3)
|
|
||||||
- `testTimeout` (number): Timeout in milliseconds per test run (default: 300000 = 5min)
|
|
||||||
|
|
||||||
#### test
|
|
||||||
- `runner` (string): Test runner detection mode (`"auto"` or explicit command like `"npm test"`)
|
|
||||||
- `coverageThresholds` (object): Minimum coverage percentages required
|
|
||||||
- `lines`, `branches`, `functions`, `statements` (number): Threshold percentages (0-100)
|
|
||||||
- `targetedRunPattern` (string): Glob pattern for targeted subtask test runs
|
|
||||||
|
|
||||||
#### git
|
|
||||||
- `branchPattern` (string): Branch naming pattern (tokens: `{tag}`, `{id}`, `{slug}`)
|
|
||||||
- `pr.enabled` (boolean): Enable automatic PR creation
|
|
||||||
- `pr.base` (string): Target branch for PRs (`"default"` uses repo default, or specify like `"main"`)
|
|
||||||
- `pr.bodyTemplate` (string): Path to PR body template file (optional)
|
|
||||||
|
|
||||||
#### prompts
|
|
||||||
- `rulesPath` (string): Directory containing rule files (e.g., `.cursor/rules`)
|
|
||||||
- `testGeneratorPath` (string): Path to test generator prompt file
|
|
||||||
- `loadOrder` (array): Order to load rule files from `rulesPath`
|
|
||||||
|
|
||||||
### Environment Variables
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Required for executor
|
|
||||||
ANTHROPIC_API_KEY=sk-ant-... # Claude API key
|
|
||||||
|
|
||||||
# Optional: for PR creation
|
|
||||||
GITHUB_TOKEN=ghp_... # GitHub personal access token
|
|
||||||
|
|
||||||
# Optional: for other executors (future)
|
|
||||||
OPENAI_API_KEY=sk-...
|
|
||||||
GOOGLE_API_KEY=...
|
|
||||||
```
|
|
||||||
|
|
||||||
## Run Artifacts & Observability
|
|
||||||
|
|
||||||
### Per-Run Artifact Structure
|
|
||||||
|
|
||||||
Each autopilot run creates a timestamped directory with complete traceability:
|
|
||||||
|
|
||||||
```
|
|
||||||
.taskmaster/reports/runs/2025-01-15-142033/
|
|
||||||
├── manifest.json # run metadata (task id, start/end time, status)
|
|
||||||
├── log.jsonl # timestamped event stream
|
|
||||||
├── commits.txt # list of commit SHAs made during run
|
|
||||||
├── test-results/
|
|
||||||
│ ├── subtask-42.1-red.json
|
|
||||||
│ ├── subtask-42.1-green.json
|
|
||||||
│ ├── subtask-42.2-red.json
|
|
||||||
│ ├── subtask-42.2-green-attempt1.json
|
|
||||||
│ ├── subtask-42.2-green-attempt2.json
|
|
||||||
│ ├── subtask-42.2-green-attempt3.json
|
|
||||||
│ └── final-suite.json
|
|
||||||
└── pr.md # generated PR body
|
|
||||||
```
|
|
||||||
|
|
||||||
### manifest.json Format
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"runId": "2025-01-15-142033",
|
|
||||||
"taskId": "42",
|
|
||||||
"tag": "analytics",
|
|
||||||
"branch": "analytics/task-42-user-metrics",
|
|
||||||
"startTime": "2025-01-15T14:20:33Z",
|
|
||||||
"endTime": "2025-01-15T14:45:12Z",
|
|
||||||
"status": "completed",
|
|
||||||
"subtasksCompleted": ["42.1", "42.2", "42.3"],
|
|
||||||
"subtasksFailed": [],
|
|
||||||
"totalCommits": 3,
|
|
||||||
"prUrl": "https://github.com/org/repo/pull/123",
|
|
||||||
"finalCoverage": {
|
|
||||||
"lines": 85.3,
|
|
||||||
"branches": 82.1,
|
|
||||||
"functions": 88.9,
|
|
||||||
"statements": 85.0
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### log.jsonl Format
|
|
||||||
|
|
||||||
Event stream in JSON Lines format for easy parsing and debugging:
|
|
||||||
|
|
||||||
```jsonl
|
|
||||||
{"ts":"2025-01-15T14:20:33Z","phase":"preflight","status":"ok","details":{"testCmd":"npm test","gitClean":true}}
|
|
||||||
{"ts":"2025-01-15T14:20:45Z","phase":"branch","status":"ok","branch":"analytics/task-42-user-metrics"}
|
|
||||||
{"ts":"2025-01-15T14:21:00Z","phase":"red","subtask":"42.1","status":"ok","tests":{"failed":3,"passed":0}}
|
|
||||||
{"ts":"2025-01-15T14:22:15Z","phase":"green","subtask":"42.1","status":"ok","tests":{"passed":3,"failed":0},"attempts":2}
|
|
||||||
{"ts":"2025-01-15T14:22:20Z","phase":"commit","subtask":"42.1","status":"ok","sha":"a1b2c3d","message":"feat(metrics): add metrics schema (task 42.1)"}
|
|
||||||
{"ts":"2025-01-15T14:23:00Z","phase":"red","subtask":"42.2","status":"ok","tests":{"failed":5,"passed":0}}
|
|
||||||
{"ts":"2025-01-15T14:25:30Z","phase":"green","subtask":"42.2","status":"error","tests":{"passed":3,"failed":2},"attempts":3,"error":"Max attempts reached"}
|
|
||||||
{"ts":"2025-01-15T14:25:35Z","phase":"pause","reason":"max_attempts","nextAction":"manual_review"}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Test Results Format
|
|
||||||
|
|
||||||
Each test run stores detailed results:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"subtask": "42.2",
|
|
||||||
"phase": "green",
|
|
||||||
"attempt": 3,
|
|
||||||
"timestamp": "2025-01-15T14:25:30Z",
|
|
||||||
"command": "npm test src/api/__tests__/metrics.test.js",
|
|
||||||
"exitCode": 1,
|
|
||||||
"duration": 2340,
|
|
||||||
"summary": {
|
|
||||||
"total": 5,
|
|
||||||
"passed": 3,
|
|
||||||
"failed": 2,
|
|
||||||
"skipped": 0
|
|
||||||
},
|
|
||||||
"failures": [
|
|
||||||
{
|
|
||||||
"test": "POST /api/metrics should return 201 with valid payload",
|
|
||||||
"error": "Expected status 201, got 500",
|
|
||||||
"stack": "..."
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"coverage": {
|
|
||||||
"lines": 78.5,
|
|
||||||
"branches": 75.0,
|
|
||||||
"functions": 80.0,
|
|
||||||
"statements": 78.5
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Execution Model
|
|
||||||
|
|
||||||
### Orchestration vs Direct Execution
|
|
||||||
|
|
||||||
The autopilot system uses an **orchestration model** rather than direct code execution:
|
|
||||||
|
|
||||||
**Orchestrator Role** (tm-core WorkflowOrchestrator):
|
|
||||||
- Maintains state machine tracking current phase (RED/GREEN/COMMIT) per subtask
|
|
||||||
- Validates preconditions (tests pass, git state clean, etc.)
|
|
||||||
- Returns "work units" describing what needs to be done next
|
|
||||||
- Records completion and advances to next phase
|
|
||||||
- Persists state for resumability
|
|
||||||
|
|
||||||
**Executor Role** (Claude Code/AI session via MCP):
|
|
||||||
- Queries orchestrator for next work unit
|
|
||||||
- Executes the work (generates tests, writes code, runs tests, makes commits)
|
|
||||||
- Reports results back to orchestrator
|
|
||||||
- Handles file operations and tool invocations
|
|
||||||
|
|
||||||
**Why This Approach?**
|
|
||||||
- Leverages existing AI capabilities (Claude Code) rather than duplicating them
|
|
||||||
- MCP protocol provides clean separation between state management and execution
|
|
||||||
- Allows human oversight and intervention at each phase
|
|
||||||
- Simpler to implement: orchestrator is pure state logic, no code generation needed
|
|
||||||
- Enables multiple executor types (Claude Code, other AI tools, human developers)
|
|
||||||
|
|
||||||
**Example Flow**:
|
|
||||||
```typescript
|
|
||||||
// Claude Code (via MCP) queries orchestrator
|
|
||||||
const workUnit = await orchestrator.getNextWorkUnit('42');
|
|
||||||
// => {
|
|
||||||
// phase: 'RED',
|
|
||||||
// subtask: '42.1',
|
|
||||||
// action: 'Generate failing tests for metrics schema',
|
|
||||||
// context: { title, description, dependencies, testFile: 'src/__tests__/schema.test.js' }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// Claude Code executes the work (writes test file, runs tests)
|
|
||||||
// Then reports back
|
|
||||||
await orchestrator.completeWorkUnit('42', '42.1', 'RED', {
|
|
||||||
success: true,
|
|
||||||
testsCreated: ['src/__tests__/schema.test.js'],
|
|
||||||
testsFailed: 3
|
|
||||||
});
|
|
||||||
|
|
||||||
// Query again for next phase
|
|
||||||
const nextWorkUnit = await orchestrator.getNextWorkUnit('42');
|
|
||||||
// => { phase: 'GREEN', subtask: '42.1', action: 'Implement code to pass tests', ... }
|
|
||||||
```
|
|
||||||
|
|
||||||
## Design Decisions
|
|
||||||
|
|
||||||
### Why commit per subtask instead of per task?
|
|
||||||
|
|
||||||
**Decision**: Commit after each subtask's green state, not after the entire task.
|
|
||||||
|
|
||||||
**Rationale**:
|
|
||||||
- Atomic commits make code review easier (reviewers can see logical progression)
|
|
||||||
- Easier to revert a single subtask if it causes issues downstream
|
|
||||||
- Matches the TDD loop's natural checkpoint and cognitive boundary
|
|
||||||
- Provides resumability points if the run is interrupted
|
|
||||||
|
|
||||||
**Trade-off**: More commits per task (can use squash-merge in PRs if desired)
|
|
||||||
|
|
||||||
### Why not support parallel subtask execution?
|
|
||||||
|
|
||||||
**Decision**: Sequential subtask execution in Phase 1; parallel execution deferred to Phase 3.
|
|
||||||
|
|
||||||
**Rationale**:
|
|
||||||
- Subtasks often have implicit dependencies (e.g., schema before endpoint, endpoint before UI)
|
|
||||||
- Simpler orchestrator state machine (less complexity = faster to ship)
|
|
||||||
- Parallel execution requires explicit dependency DAG and conflict resolution
|
|
||||||
- Can be added in Phase 3 once core workflow is proven stable
|
|
||||||
|
|
||||||
**Trade-off**: Slower for truly independent subtasks (mitigated by keeping subtasks small and focused)
|
|
||||||
|
|
||||||
### Why require 80% coverage by default?
|
|
||||||
|
|
||||||
**Decision**: Enforce 80% coverage threshold (lines/branches/functions/statements) before allowing commits.
|
|
||||||
|
|
||||||
**Rationale**:
|
|
||||||
- Industry standard baseline for production code quality
|
|
||||||
- Forces test generation to be comprehensive, not superficial
|
|
||||||
- Configurable per project via `.taskmaster/config.json` if too strict
|
|
||||||
- Prevents "green tests" that only test happy paths
|
|
||||||
|
|
||||||
**Trade-off**: May require more test generation iterations; can be lowered per project
|
|
||||||
|
|
||||||
### Why use tmux instead of a rich GUI?
|
|
||||||
|
|
||||||
**Decision**: MVP uses tmux split panes for TUI, not Electron/web-based GUI.
|
|
||||||
|
|
||||||
**Rationale**:
|
|
||||||
- Tmux is universally available on dev machines; no installation burden
|
|
||||||
- Terminal-first workflows match developer mental model (no context switching)
|
|
||||||
- Simpler to implement and maintain; can add GUI later via extensions
|
|
||||||
- State stored in files allows IDE/extension integration without coupling
|
|
||||||
|
|
||||||
**Trade-off**: Less visual polish than GUI; requires tmux familiarity
|
|
||||||
|
|
||||||
### Why not support multiple executors (codex/gemini/claude) in Phase 1?
|
|
||||||
|
|
||||||
**Decision**: Start with Claude executor only; add others in Phase 2+.
|
|
||||||
|
|
||||||
**Rationale**:
|
|
||||||
- Reduces scope and complexity for initial delivery
|
|
||||||
- Claude Code already integrated with existing executor service
|
|
||||||
- Executor abstraction already exists; adding more is straightforward later
|
|
||||||
- Different executors may need different prompt strategies (requires experimentation)
|
|
||||||
|
|
||||||
**Trade-off**: Users locked to Claude initially; can work around with manual executor selection
|
|
||||||
|
|
||||||
## Risks and Mitigations
|
|
||||||
|
|
||||||
- Model hallucination/large diffs: restrict prompt scope; enforce minimal changes; show diff previews (optional) before commit.
|
|
||||||
|
|
||||||
- Flaky tests: allow retries, isolate targeted runs for speed, then full suite before commit.
|
|
||||||
|
|
||||||
- Environment variability: detect runners/tools; provide fallbacks and actionable errors.
|
|
||||||
|
|
||||||
- PR creation fails: still push and print manual commands; persist PR body to reuse.
|
|
||||||
|
|
||||||
## Open Questions
|
|
||||||
|
|
||||||
1) Slugging rules for branch names: are there length limits or normalization beyond sanitizing the `{slug}` token?
|
|
||||||
|
|
||||||
2) PR body standard sections beyond run report (e.g., checklist, coverage table)?
|
|
||||||
|
|
||||||
3) Default executor prompt fine-tuning once codex/gemini integration is available.
|
|
||||||
|
|
||||||
4) Where to store persistent TUI state (pane layout, last selection) in .taskmaster/state.json?
|
|
||||||
|
|
||||||
## Branch Naming
|
|
||||||
|
|
||||||
- Include both the tag and the task id in the branch name to make lineage explicit.
|
|
||||||
|
|
||||||
- Default pattern: <tag>/task-<id>[-slug] (e.g., master/task-12, tag-analytics/task-4-user-auth).
|
|
||||||
|
|
||||||
- Configurable via .taskmaster/config.json: git.branchPattern supports tokens {tag}, {id}, {slug}.
|
|
||||||
|
|
||||||
## PR Base Branch
|
|
||||||
|
|
||||||
- Use the repository’s default branch (detected via git) unless overridden.
|
|
||||||
|
|
||||||
- Title format: Task #<id> [<tag>]: <title>.
|
|
||||||
|
|
||||||
## RPG Mapping (Repository Planning Graph)
|
|
||||||
|
|
||||||
Functional nodes (capabilities):
|
|
||||||
|
|
||||||
- Autopilot Orchestration → drives TDD loop and lifecycle
|
|
||||||
|
|
||||||
- Test Generation (Surgical) → produces failing tests from subtask context
|
|
||||||
|
|
||||||
- Test Execution + Coverage → runs suite, enforces thresholds
|
|
||||||
|
|
||||||
- Git/Branch/PR Management → safe operations and PR creation
|
|
||||||
|
|
||||||
- TUI/Terminal Integration → interactive control and visibility via tmux
|
|
||||||
|
|
||||||
- MCP Integration → structured task/status/context operations
|
|
||||||
|
|
||||||
Structural nodes (code organization):
|
|
||||||
|
|
||||||
- packages/tm-core:
|
|
||||||
|
|
||||||
- services/workflow-orchestrator.ts (new)
|
|
||||||
|
|
||||||
- services/test-runner-adapter.ts (new)
|
|
||||||
|
|
||||||
- services/git-adapter.ts (new)
|
|
||||||
|
|
||||||
- existing: task-service.ts, task-execution-service.ts, executors/*
|
|
||||||
|
|
||||||
- apps/cli:
|
|
||||||
|
|
||||||
- src/commands/autopilot.command.ts (new)
|
|
||||||
|
|
||||||
- src/ui/tui/ (new tmux/TUI helpers)
|
|
||||||
|
|
||||||
- scripts/modules:
|
|
||||||
|
|
||||||
- reuse utils/git-utils.js, task-manager/tag-management.js
|
|
||||||
|
|
||||||
- .claude/agents/:
|
|
||||||
|
|
||||||
- surgical-test-generator.md
|
|
||||||
|
|
||||||
Edges (data/control flow):
|
|
||||||
|
|
||||||
- Autopilot → Test Generation → Test Execution → Git Commit → loop
|
|
||||||
|
|
||||||
- Autopilot → Git Adapter (branch, tag, PR)
|
|
||||||
|
|
||||||
- Autopilot → TUI (event stream) → tmux pane control
|
|
||||||
|
|
||||||
- Autopilot → MCP tools for task/status updates
|
|
||||||
|
|
||||||
- Test Execution → Coverage gate → Autopilot decision
|
|
||||||
|
|
||||||
Topological traversal (implementation order):
|
|
||||||
|
|
||||||
1) Git/Test adapters (foundations)
|
|
||||||
|
|
||||||
2) Orchestrator skeleton + events
|
|
||||||
|
|
||||||
3) CLI autopilot command and dry-run
|
|
||||||
|
|
||||||
4) Surgical test-gen integration and execution gate
|
|
||||||
|
|
||||||
5) PR creation, run reports, resumability
|
|
||||||
|
|
||||||
## Phased Roadmap
|
|
||||||
|
|
||||||
- Phase 0: Spike
|
|
||||||
|
|
||||||
- Implement CLI skeleton tm autopilot with dry-run showing planned steps from a real task + subtasks.
|
|
||||||
|
|
||||||
- Detect test runner (package.json) and git state; render a preflight report.
|
|
||||||
|
|
||||||
- Phase 1: Core Rails (State Machine & Orchestration)
|
|
||||||
|
|
||||||
- Implement WorkflowOrchestrator in tm-core as a **state machine** that tracks TDD phases per subtask.
|
|
||||||
|
|
||||||
- Orchestrator **guides** the current AI session (Claude Code/MCP client) rather than executing code itself.
|
|
||||||
|
|
||||||
- Add Git/Test adapters for status checks and validation (not direct execution).
|
|
||||||
|
|
||||||
- WorkflowOrchestrator API:
|
|
||||||
- `getNextWorkUnit(taskId)` → returns next phase to execute (RED/GREEN/COMMIT) with context
|
|
||||||
- `completeWorkUnit(taskId, subtaskId, phase, result)` → records completion and advances state
|
|
||||||
- `getRunState(taskId)` → returns current progress and resumability data
|
|
||||||
|
|
||||||
- MCP integration: expose work unit endpoints so Claude Code can query "what to do next" and report back.
|
|
||||||
|
|
||||||
- Branch/tag mapping via existing tag-management APIs.
|
|
||||||
|
|
||||||
- Run report persisted under .taskmaster/reports/runs/ with state checkpoints for resumability.
|
|
||||||
|
|
||||||
- Phase 2: PR + Resumability
|
|
||||||
|
|
||||||
- Add gh PR creation with well-formed body using the run report.
|
|
||||||
|
|
||||||
- Introduce resumable checkpoints and --resume flag.
|
|
||||||
|
|
||||||
- Add coverage enforcement and optional lint/format step.
|
|
||||||
|
|
||||||
- Phase 3: Extensibility + Guardrails
|
|
||||||
|
|
||||||
- Add support for basic pytest/go test adapters.
|
|
||||||
|
|
||||||
- Add safeguards: diff preview mode, manual confirm gates, aggressive minimal-change prompts.
|
|
||||||
|
|
||||||
- Optional: small TUI panel and extension panel leveraging the same run state file.
|
|
||||||
|
|
||||||
## References (Repo)
|
|
||||||
|
|
||||||
- Test Workflow: .cursor/rules/test_workflow.mdc
|
|
||||||
|
|
||||||
- Git Workflow: .cursor/rules/git_workflow.mdc
|
|
||||||
|
|
||||||
- CLI: apps/cli/src/commands/start.command.ts, apps/cli/src/ui/components/*.ts
|
|
||||||
|
|
||||||
- Core Services: packages/tm-core/src/services/task-service.ts, task-execution-service.ts
|
|
||||||
|
|
||||||
- Executors: packages/tm-core/src/executors/*
|
|
||||||
|
|
||||||
- Git Utilities: scripts/modules/utils/git-utils.js
|
|
||||||
|
|
||||||
- Tag Management: scripts/modules/task-manager/tag-management.js
|
|
||||||
|
|
||||||
- Surgical Test Generator: .claude/agents/surgical-test-generator.md
|
|
||||||
|
|
||||||
@@ -1,91 +0,0 @@
|
|||||||
<context>
|
|
||||||
# Overview
|
|
||||||
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.
|
|
||||||
|
|
||||||
We follow the Commander class pattern and reuse task retrieval from the `show` command flow. Extremely minimal for a 1-hour hackathon timeline.
|
|
||||||
|
|
||||||
# Core Features
|
|
||||||
- `start` command (Commander class style)
|
|
||||||
- Hard-coded executor: `claude-code`
|
|
||||||
- Standardized prompt designed for minimal changes following existing patterns
|
|
||||||
- Shows claude-code output (no streaming)
|
|
||||||
- Git status check for success detection
|
|
||||||
- Auto-mark task done if successful
|
|
||||||
|
|
||||||
# User Experience
|
|
||||||
```
|
|
||||||
task-master start 12
|
|
||||||
```
|
|
||||||
1) Fetches Task #12 details
|
|
||||||
2) Builds standardized prompt with task context
|
|
||||||
3) Runs claude-code with the prompt
|
|
||||||
4) Shows output
|
|
||||||
5) Checks git status for changes
|
|
||||||
6) Auto-marks task done if changes detected
|
|
||||||
</context>
|
|
||||||
|
|
||||||
<PRD>
|
|
||||||
# Technical Architecture
|
|
||||||
|
|
||||||
- Command pattern:
|
|
||||||
- Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) and task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)
|
|
||||||
|
|
||||||
- Task retrieval:
|
|
||||||
- Use `@tm/core` via `createTaskMasterCore` to get task by ID
|
|
||||||
- Extract: id, title, description, details
|
|
||||||
|
|
||||||
- Executor (ultra-simple approach):
|
|
||||||
- Execute `claude "full prompt here"` command directly
|
|
||||||
- The prompt tells Claude to first run `tm show <task_id>` to get task details
|
|
||||||
- Then tells Claude to implement the code changes
|
|
||||||
- This opens Claude CLI interface naturally in the current terminal
|
|
||||||
- No subprocess management needed - just execute the command
|
|
||||||
|
|
||||||
- Execution flow:
|
|
||||||
1) Validate `<task_id>` exists; exit with error if not
|
|
||||||
2) Build standardized prompt that includes instructions to run `tm show <task_id>`
|
|
||||||
3) Execute `claude "prompt"` command directly in terminal
|
|
||||||
4) Claude CLI opens, runs `tm show`, then implements changes
|
|
||||||
5) After Claude session ends, run `git status --porcelain` to detect changes
|
|
||||||
6) If changes detected, auto-run `task-master set-status --id=<task_id> --status=done`
|
|
||||||
|
|
||||||
- Success criteria:
|
|
||||||
- Success = exit code 0 AND git shows modified/created files
|
|
||||||
- Print changed file paths; warn if no changes detected
|
|
||||||
|
|
||||||
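A minimal sketch of this flow, assuming `claude` and `task-master` are available on the PATH (the real implementation would live in `start.command.ts` as a Commander class; all names here are illustrative):

```ts
// Minimal sketch of the start-command flow (command names and wiring are illustrative)
import { execSync, spawnSync } from 'node:child_process';

function startTask(taskId: string): void {
  const prompt = [
    'You are an AI coding assistant with access to this repository.',
    `First, run: tm show ${taskId}`,
    'Then implement the task with the smallest possible change set.'
  ].join('\n');

  // Opens the Claude CLI in the current terminal with the standardized prompt
  spawnSync('claude', [prompt], { stdio: 'inherit' });

  // After the Claude session ends, detect changes via git
  const changes = execSync('git status --porcelain', { encoding: 'utf8' }).trim();

  if (changes) {
    console.log('Changed files:\n' + changes);
    // Auto-mark the task as done
    execSync(`task-master set-status --id=${taskId} --status=done`, { stdio: 'inherit' });
  } else {
    console.warn('No changes detected; task was not marked done.');
  }
}
```

Using `stdio: 'inherit'` is what lets the Claude CLI take over the current terminal with no extra subprocess management.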
# Development Roadmap
|
|
||||||
|
|
||||||
MVP (ship in ~1 hour):
|
|
||||||
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
|
|
||||||
2) Validate task exists via tm-core
|
|
||||||
3) Build prompt that tells Claude to run `tm show <task_id>` then implement
|
|
||||||
4) Execute `claude "prompt"` command, then check git status and auto-mark done
|
|
||||||
|
|
||||||
# Risks and Mitigations
|
|
||||||
- Executor availability: Error clearly if `claude-code` provider fails
|
|
||||||
- False success: Git-change heuristic acceptable for hackathon MVP
|
|
||||||
|
|
||||||
# Appendix
|
|
||||||
|
|
||||||
**Standardized Prompt Template:**
|
|
||||||
```
|
|
||||||
You are an AI coding assistant with access to this repository's codebase.
|
|
||||||
|
|
||||||
First, run this command to get the task details:
|
|
||||||
tm show <task_id>
|
|
||||||
|
|
||||||
Then implement the task with these requirements:
|
|
||||||
- Make the SMALLEST number of code changes possible
|
|
||||||
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
|
|
||||||
- Do NOT over-engineer the solution
|
|
||||||
- Use existing files/functions/patterns wherever possible
|
|
||||||
- When complete, print: COMPLETED: <brief summary of changes>
|
|
||||||
|
|
||||||
Begin by running tm show <task_id> to understand what needs to be implemented.
|
|
||||||
```
|
|
||||||
|
|
||||||
**Key References:**
|
|
||||||
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
|
|
||||||
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
|
|
||||||
- Node.js `child_process.exec()` - For executing the `claude "prompt"` command
|
|
||||||
</PRD>
|
|
||||||
42
.taskmaster/docs/research/2025-06-13_what-is-typescript.md
Normal file
@@ -0,0 +1,42 @@
|
|||||||
|
---
|
||||||
|
title: Research Session
|
||||||
|
query: "What is TypeScript?"
|
||||||
|
date: 6/13/2025
|
||||||
|
time: 3:07:41 PM
|
||||||
|
timestamp: 2025-06-13T19:07:41.313Z
|
||||||
|
exchanges: 2
|
||||||
|
---
|
||||||
|
|
||||||
|
# Research Session
|
||||||
|
|
||||||
|
**Query:** What is TypeScript?
|
||||||
|
**Date:** 6/13/2025 3:07:41 PM
|
||||||
|
**Exchanges:** 2
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Initial Query
|
||||||
|
|
||||||
|
**Question:** What is TypeScript?
|
||||||
|
|
||||||
|
**Response:**
|
||||||
|
|
||||||
|
TypeScript is a strongly typed, object-oriented programming language that builds on JavaScript by adding static typing and type annotations. It is a syntactic superset of JavaScript, meaning all JavaScript code is valid TypeScript, but TypeScript adds extra syntax to specify types, enabling compile-time type checking and better tooling support. This helps catch errors early before running the code and improves code quality, especially in large-scale applications. TypeScript code is transpiled to plain JavaScript, which can run anywhere JavaScript runs, such as browsers, Node.js, or Deno. It is developed and maintained by Microsoft as open-source software under the Apache 2.0 license[1][2][3][4][5].
|
||||||
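A small example of the added type syntax and compile-time checking:

```ts
// Type annotations are checked at compile time, then erased when transpiled to JavaScript
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}!`;
}

greet({ id: 1, name: 'Ada' });        // OK
// greet({ id: '1', name: 'Ada' });   // Compile-time error: 'string' is not assignable to 'number'
```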
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Follow-up 1
|
||||||
|
|
||||||
|
**Question:** And why is the sky blue?
|
||||||
|
|
||||||
|
**Response:**
|
||||||
|
|
||||||
|
- The sky appears blue because sunlight interacts with Earth's atmosphere, and the shorter wavelengths of light (blue and violet) are scattered in all directions by air molecules—a process called Rayleigh scattering[1][2][3][5].
|
||||||
|
- Blue light is scattered more than red light because it has a shorter wavelength, making the sky look blue to our eyes during the day[2][3][5].
|
||||||
|
- If Earth's atmosphere didn't scatter sunlight, the sky would appear dark, as it does on the Moon, which lacks an atmosphere[5].
|
||||||
|
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
*Generated by Task Master Research Command*
|
||||||
|
*Timestamp: 2025-06-13T19:07:41.313Z*
|
||||||
@@ -1,128 +0,0 @@
|
|||||||
---
|
|
||||||
title: Research Session
|
|
||||||
query: "how can i improve the scope-up and scope-down command"
|
|
||||||
date: 6/14/2025
|
|
||||||
time: 4:35:03 PM
|
|
||||||
timestamp: 2025-06-14T20:35:03.231Z
|
|
||||||
exchanges: 2
|
|
||||||
---
|
|
||||||
|
|
||||||
# Research Session
|
|
||||||
|
|
||||||
## Initial Query
|
|
||||||
|
|
||||||
**Question:** how can i improve the scope-up and scope-down command
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
## Improving the `scope-up` and `scope-down` Commands
|
|
||||||
|
|
||||||
**Clarify Command Purpose and User Experience**
|
|
||||||
|
|
||||||
- Ensure the commands have clear, intuitive behavior. For example, `scope-up` should consistently broaden the context or task view, while `scope-down` should reliably narrow it.
|
|
||||||
- Provide immediate, user-friendly feedback after each command, indicating what changed (e.g., "Now viewing parent task and all subtasks" or "Focused on subtask: X").
|
|
||||||
- Document the commands with examples in your `docs/command-reference.md` and `docs/examples.md` to help users understand their effects and use cases.
|
|
||||||
|
|
||||||
**Enhance Context Awareness**
|
|
||||||
|
|
||||||
- Make the commands context-sensitive: if the user is already at the top or bottom of the scope hierarchy, return a helpful message instead of failing silently.
|
|
||||||
- Allow optional arguments to specify the exact level or target (e.g., `scope-up 2` to move up two levels, or `scope-down <subtask-id>` to jump directly to a subtask).
|
|
||||||
|
|
||||||
**Robust Error Handling and Validation**
|
|
||||||
|
|
||||||
- Implement checks to prevent invalid operations, such as attempting to scope up from the root or scope down when no subtasks exist.
|
|
||||||
- Provide clear error messages and suggestions for next steps, improving user trust and reducing confusion[1][5].
|
|
||||||
|
|
||||||
**Integrate with Task and Tag Systems**
|
|
||||||
|
|
||||||
- Ensure `scope-up` and `scope-down` interact smoothly with your tagging and task hierarchy features. For example, scoping up from a subtask should show the parent task and its context, while scoping down should allow selection from available subtasks or related tags.
|
|
||||||
- Consider supporting batch operations, such as scoping down into all subtasks and displaying them in a summarized view.
|
|
||||||
|
|
||||||
**Improve Performance and Synchronization**
|
|
||||||
|
|
||||||
- Optimize the underlying logic to minimize delays, especially in large task trees. Use asynchronous operations where possible to keep the UI responsive.
|
|
||||||
- If the commands trigger data loading or context switching, ensure synchronization so that subsequent commands operate on the correct, updated context[2][4].
|
|
||||||
|
|
||||||
**Testing and Stakeholder Feedback**
|
|
||||||
|
|
||||||
- Add or expand unit and integration tests for these commands (see `tests/unit/commands.test.js` and `integration/cli/commands.test.js`) to cover edge cases and ensure reliability.
|
|
||||||
- Gather feedback from users and stakeholders to identify pain points and desired enhancements, then iterate on the command design and implementation[1][5].
|
|
||||||
|
|
||||||
**Example Improvements Table**
|
|
||||||
|
|
||||||
| Area | Current Issue | Suggested Improvement |
|
|
||||||
|---------------------|-----------------------------|----------------------------------------------|
|
|
||||||
| User Feedback | Minimal or unclear output | Add explicit, contextual messages |
|
|
||||||
| Error Handling | Silent failures | Informative errors and recovery suggestions |
|
|
||||||
| Flexibility | Only single-level movement | Support multi-level and targeted navigation |
|
|
||||||
| Integration | Limited context awareness | Sync with tags, parent/child relationships |
|
|
||||||
| Performance | Slow on large hierarchies | Optimize data access and UI updates |
|
|
||||||
|
|
||||||
By focusing on these areas, you can make `scope-up` and `scope-down` more robust, user-friendly, and aligned with best practices in scope management and command design[1][5].
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Follow-up 1
|
|
||||||
|
|
||||||
**Question:** how might i do this relative to the tag system?
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
## Integrating Scope-Up and Scope-Down with the Tag System
|
|
||||||
|
|
||||||
**Conceptual Mapping**
|
|
||||||
|
|
||||||
- In your project, *scope-up* and *scope-down* commands currently operate on the task hierarchy, broadening or narrowing the context by moving between parent and child tasks.
|
|
||||||
- The tag system provides a parallel, orthogonal way to group and filter tasks—tags can represent categories, priorities, or any custom grouping, independent of strict parent/child relationships.
|
|
||||||
|
|
||||||
**How to Make Scope Commands Tag-Aware**
|
|
||||||
|
|
||||||
- **Scoping by Tag**: Allow users to "scope-down" into a tag, viewing only tasks associated with that tag. Conversely, "scope-up" could move from a specific tag context back to the broader task list or to a parent tag grouping if you support tag hierarchies.
|
|
||||||
- **Tag-Scoped Views**: When a tag is active as the current scope, all navigation and operations (listing, editing, etc.) are limited to tasks with that tag. This is similar to how some systems let you set the "Tag Scope" to filter navigation and search to only tagged items[1][5].
|
|
||||||
- **Combining Hierarchy and Tags**: Enable compound scoping, such as "all subtasks of Task X that also have Tag Y." This can be implemented by intersecting the results of hierarchical and tag-based queries.
|
|
||||||
|
|
||||||
**Implementation Strategies**
|
|
||||||
|
|
||||||
- **Scope Stack or Context Object**: Maintain a stack or context object that tracks both the current task scope and any active tag filters. Each scope change (up/down or tag in/out) updates this context.
|
|
||||||
- **Commands**:
|
|
||||||
- `scope-down --tag <tag>`: Narrows the view to tasks with the specified tag.
|
|
||||||
- `scope-up --tag`: Removes the tag filter, broadening the view.
|
|
||||||
- `scope-down <subtask-id> --tag <tag>`: Narrows to a specific subtask *and* applies a tag filter.
|
|
||||||
- **Tag Hierarchies (Optional)**: If your tag system supports parent/child tags or tag groups, allow "scope-up" to move from a specific tag to its parent tag or group, and "scope-down" to move into a child tag[4][5].
|
|
||||||
|
|
||||||
**Example User Flows**
|
|
||||||
|
|
||||||
- *Scoping Down by Tag*:
|
|
||||||
User runs `scope-down --tag urgent` → Only tasks tagged "urgent" are shown.
|
|
||||||
- *Combining Task and Tag*:
|
|
||||||
User runs `scope-down <task-id> --tag frontend` → Only subtasks of `<task-id>` that are tagged "frontend" are shown.
|
|
||||||
- *Scoping Up from Tag*:
|
|
||||||
User runs `scope-up --tag` → Removes the tag filter, returning to the previous broader context.
|
|
||||||
|
|
||||||
**Technical Considerations**
|
|
||||||
|
|
||||||
- **Efficient Filtering**: Use indexed queries or in-memory filtering to efficiently combine tag and hierarchy filters, especially for large task sets.
|
|
||||||
- **Feedback and UI**: Always display the current scope context (e.g., "Viewing tasks tagged 'urgent' under 'Feature X'") so users understand what they're seeing.
|
|
||||||
- **Extensibility**: Design the scope system so new types of scopes (e.g., by status, assignee) can be added in the future, not just tags and hierarchy.
|
|
||||||
|
|
||||||
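A minimal TypeScript sketch of the scope stack idea from the implementation strategies above (names and shapes are illustrative, not existing Task Master internals):

```ts
// Minimal sketch of a scope context stack combining task hierarchy and tag filters
interface ScopeFrame {
  taskId?: string;   // current task/subtask scope, if any
  tag?: string;      // active tag filter, if any
}

class ScopeContext {
  private stack: ScopeFrame[] = [{}]; // root scope: everything visible

  current(): ScopeFrame {
    return this.stack[this.stack.length - 1];
  }

  scopeDown(frame: ScopeFrame): void {
    // Inherit the previous frame so task scope and tag filter compose
    this.stack.push({ ...this.current(), ...frame });
  }

  scopeUp(): ScopeFrame {
    if (this.stack.length > 1) this.stack.pop();
    return this.current();
  }
}

// Usage: scope-down --tag urgent, then scope-down <task-id>
const ctx = new ScopeContext();
ctx.scopeDown({ tag: 'urgent' });
ctx.scopeDown({ taskId: '42' });
console.log(ctx.current()); // { tag: 'urgent', taskId: '42' }
ctx.scopeUp();              // removes the task scope, keeps the tag filter
```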
**Relevant Precedents**
|
|
||||||
|
|
||||||
- Tag-based file systems and management tools often implement "scopes" as dynamic groups defined by tag criteria, allowing users to build up complex views by combining tags and other filters[5].
|
|
||||||
- Some systems allow scoping into tag management itself, where you can create, edit, or view tags and their associated objects, which could be mirrored in your CLI or UI[4].
|
|
||||||
|
|
||||||
**Summary Table**
|
|
||||||
|
|
||||||
| Command Example | Resulting Scope |
|
|
||||||
|----------------------------------|--------------------------------------------------|
|
|
||||||
| `scope-down --tag bug` | All tasks tagged "bug" |
|
|
||||||
| `scope-down <task-id> --tag api` | Subtasks of `<task-id>` tagged "api" |
|
|
||||||
| `scope-up --tag` | Removes tag filter, returns to previous scope |
|
|
||||||
| `scope-down --tag frontend` | All tasks tagged "frontend" |
|
|
||||||
|
|
||||||
By integrating tag-based scoping, you make your scope commands more flexible and powerful, supporting both hierarchical and ad-hoc organizational strategies[1][5].
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
*Generated by Task Master Research Command*
|
|
||||||
*Timestamp: 2025-06-14T20:35:03.231Z*
|
|
||||||
@@ -1,158 +0,0 @@
|
|||||||
---
|
|
||||||
title: Research Session
|
|
||||||
query: "should i be using any specific libraries for this?"
|
|
||||||
date: 6/14/2025
|
|
||||||
time: 5:20:00 PM
|
|
||||||
timestamp: 2025-06-14T21:20:00.561Z
|
|
||||||
exchanges: 2
|
|
||||||
---
|
|
||||||
|
|
||||||
# Research Session
|
|
||||||
|
|
||||||
## Initial Query
|
|
||||||
|
|
||||||
**Question:** should i be using any specific libraries for this?
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
HELL YEAH PARTNER
|
|
||||||
|
|
||||||
For implementing a GitHub Issue export feature with bidirectional linking in your project, you absolutely should use a dedicated library for interacting with the GitHub API. This will save you a ton of time, reduce boilerplate, and help you avoid common pitfalls with authentication, pagination, and error handling.
|
|
||||||
|
|
||||||
## Recommended Library: PyGithub
|
|
||||||
|
|
||||||
- **PyGithub** is the most widely used and well-documented Python library for accessing the GitHub REST API v3. It allows you to manage repositories, issues, user profiles, and more, all with a clean, object-oriented interface[1][2][3][5].
|
|
||||||
- It supports all the operations you’ll need for your export feature, including creating issues, updating them, and managing references for bidirectional linking.
|
|
||||||
|
|
||||||
### Key Features
|
|
||||||
|
|
||||||
- **Authentication**: Easily authenticate with a personal access token.
|
|
||||||
- **Issue Management**: Create, update, and link issues programmatically.
|
|
||||||
- **Repository Access**: Fetch and manipulate repository data.
|
|
||||||
- **User/Org Support**: Works for both user and organization repos.
|
|
||||||
- **Pagination & Rate Limiting**: Handles these for you.
|
|
||||||
|
|
||||||
### Example Usage
|
|
||||||
|
|
||||||
```python
|
|
||||||
from github import Github, Auth
|
|
||||||
|
|
||||||
# Authenticate with a personal access token
|
|
||||||
auth = Auth.Token("your_github_token")
|
|
||||||
g = Github(auth=auth)
|
|
||||||
|
|
||||||
# Get the repo
|
|
||||||
repo = g.get_repo("owner/repo_name")
|
|
||||||
|
|
||||||
# Create a new issue
|
|
||||||
issue = repo.create_issue(
|
|
||||||
title="Exported from Task Master",
|
|
||||||
body="This issue was created by Task Master. [Task ID: 123]"
|
|
||||||
)
|
|
||||||
|
|
||||||
# Add a comment or update the issue to include a backlink
|
|
||||||
issue.create_comment("Linked to Task Master task #123")
|
|
||||||
```
|
|
||||||
[3][5]
|
|
||||||
|
|
||||||
## Why Not Use Raw Requests?
|
|
||||||
|
|
||||||
- While you *can* use the `requests` library to call the GitHub API directly[4], PyGithub abstracts away authentication, error handling, and pagination, making your code much cleaner and less error-prone[2][5].
|
|
||||||
|
|
||||||
## Integration Tips
|
|
||||||
|
|
||||||
- Store your GitHub token securely (e.g., in environment variables).
|
|
||||||
- Use PyGithub’s objects to fetch, create, and update issues.
|
|
||||||
- For bidirectional linking, store the GitHub issue URL in your task data and the Task Master task ID in the GitHub issue body or as a label/comment.
|
|
||||||
- PyGithub is actively maintained and widely used, so you’ll find plenty of support and documentation.
|
|
||||||
|
|
||||||
## Installation
|
|
||||||
|
|
||||||
```bash
|
|
||||||
pip install PyGithub
|
|
||||||
```
|
|
||||||
[3]
|
|
||||||
|
|
||||||
## Summary Table
|
|
||||||
|
|
||||||
| Library | Language | API Coverage | Ease of Use | Maintained | Docs/Support |
|
|
||||||
|-----------|----------|--------------|-------------|------------|--------------|
|
|
||||||
| PyGithub | Python | Full | High | Yes | Excellent |
|
|
||||||
| requests | Python | Full (manual)| Medium | Yes | Good |
|
|
||||||
|
|
||||||
**Use PyGithub for all GitHub API interactions in your export/import features.** It’s the standard for Python projects and will make your implementation faster, safer, and more maintainable[1][2][3][5].
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Follow-up 1
|
|
||||||
|
|
||||||
**Question:** we don't want to use py github because we have a node project
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
HELL YEAH PARTNER
|
|
||||||
|
|
||||||
For a Node.js project, you should use a Node-native library to interact with the GitHub API instead of PyGithub, which is Python-only. The industry standard for Node.js is **Octokit**, the official GitHub SDK for JavaScript/TypeScript[1][2].
|
|
||||||
|
|
||||||
## Why Use Octokit?
|
|
||||||
|
|
||||||
- **Official & Complete**: Octokit is maintained by GitHub and covers all REST and GraphQL API endpoints, including issues, repositories, users, and more[1][2].
|
|
||||||
- **Universal**: Works in Node.js, browsers, and Deno environments[2].
|
|
||||||
- **Typed & Tested**: Comes with TypeScript declarations and 100% test coverage[2].
|
|
||||||
- **Authentication**: Supports personal access tokens, GitHub Apps, OAuth, and more.
|
|
||||||
- **Pagination & Error Handling**: Handles pagination and common error scenarios out of the box.
|
|
||||||
- **Extendable**: Supports plugins and custom hooks for advanced use cases[2].
|
|
||||||
|
|
||||||
## How to Use Octokit for GitHub Issue Export
|
|
||||||
|
|
||||||
Install Octokit in your Node.js project:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
npm install @octokit/rest
|
|
||||||
```
|
|
||||||
|
|
||||||
Basic usage for creating and linking issues:
|
|
||||||
|
|
||||||
```js
|
|
||||||
const { Octokit } = require("@octokit/rest");
|
|
||||||
|
|
||||||
// Authenticate with a personal access token
|
|
||||||
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
|
|
||||||
|
|
||||||
// Create a new issue
|
|
||||||
const { data: issue } = await octokit.issues.create({
|
|
||||||
owner: "owner",
|
|
||||||
repo: "repo_name",
|
|
||||||
title: "Exported from Task Master",
|
|
||||||
body: "This issue was created by Task Master. [Task ID: 123]"
|
|
||||||
});
|
|
||||||
|
|
||||||
// Add a comment for bidirectional linking
|
|
||||||
await octokit.issues.createComment({
|
|
||||||
owner: "owner",
|
|
||||||
repo: "repo_name",
|
|
||||||
issue_number: issue.number,
|
|
||||||
body: "Linked to Task Master task #123"
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
## Alternatives
|
|
||||||
|
|
||||||
- **node-github**: An older library, less maintained than Octokit, but still functional for basic API access[5].
|
|
||||||
- **Raw HTTP Requests**: You can use `node-fetch` or `axios` to call the GitHub API directly, but you’ll need to handle authentication, pagination, and error handling yourself[3]. Octokit abstracts all of this for you.
|
|
||||||
|
|
||||||
## Summary Table
|
|
||||||
|
|
||||||
| Library | Language | API Coverage | Maintained | TypeScript | Docs/Support |
|
|
||||||
|-----------------|----------|--------------|------------|------------|--------------|
|
|
||||||
| Octokit | Node.js | Full | Yes | Yes | Excellent |
|
|
||||||
| node-github | Node.js | Full | No | No | Limited |
|
|
||||||
| Raw HTTP | Node.js | Full (manual)| N/A | N/A | N/A |
|
|
||||||
|
|
||||||
**Recommendation:** Use Octokit for all GitHub API interactions in your Node.js project. It’s the official, most robust, and best-supported option for your use case[1][2].
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
*Generated by Task Master Research Command*
|
|
||||||
*Timestamp: 2025-06-14T21:20:00.561Z*
|
|
||||||
@@ -1,76 +0,0 @@
|
|||||||
---
|
|
||||||
title: Research Session
|
|
||||||
query: "test save functionality"
|
|
||||||
date: 6/14/2025
|
|
||||||
time: 5:50:23 PM
|
|
||||||
timestamp: 2025-06-14T21:50:23.797Z
|
|
||||||
exchanges: 1
|
|
||||||
---
|
|
||||||
|
|
||||||
# Research Session
|
|
||||||
|
|
||||||
## Initial Query
|
|
||||||
|
|
||||||
**Question:** test save functionality
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
## Testing Save Functionality in a Task Management System
|
|
||||||
|
|
||||||
**Core Steps for Testing Save Functionality**
|
|
||||||
|
|
||||||
- Create or update a task in the system using the CLI or UI.
|
|
||||||
- Trigger the save operation (e.g., by issuing a command or clicking a save button).
|
|
||||||
- Verify that the data is correctly written to the persistent storage (e.g., `tasks.json`).
|
|
||||||
- Reload or re-query the data to ensure the saved changes persist and are accurately reflected.
|
|
||||||
- Validate that all required fields and relationships (such as dependencies and subtasks) are preserved and correctly structured.
|
|
||||||
|
|
||||||
**Key Test Scenarios**
|
|
||||||
|
|
||||||
- **Basic Save:** Add a new task and confirm it appears in `tasks.json` with all fields populated.
|
|
||||||
- **Update Save:** Modify an existing task (e.g., change status, description, or priority) and verify the changes are saved.
|
|
||||||
- **Subtask and Dependency Preservation:** Update a parent task and ensure that completed subtasks and dependency links remain intact, especially after using commands like `updateTask`.
|
|
||||||
- **Batch Operations:** Perform bulk updates or additions and confirm all affected tasks are saved correctly, with no data loss or corruption.
|
|
||||||
- **Validation:** Attempt to save invalid data and ensure the system rejects it or provides meaningful error messages, confirming validation logic is enforced.
|
|
||||||
|
|
||||||
**Recommended Practices**
|
|
||||||
|
|
||||||
- Use automated tests to simulate save operations for various scenarios, including edge cases (e.g., circular dependencies, missing required fields).
|
|
||||||
- After each save, reload the data and compare it to the expected state to ensure integrity.
|
|
||||||
- Log all save operations and errors for traceability and debugging.
|
|
||||||
- Regularly back up the `tasks.json` file to prevent data loss during testing and development.
|
|
||||||
|
|
||||||
**Relevant Features from Industry Standards**
|
|
||||||
|
|
||||||
- Task management systems typically provide robust save and update mechanisms, ensuring that all task attributes (including dependencies, subtasks, and comments) are consistently maintained[1][2][5].
|
|
||||||
- Validation and error handling are critical to prevent corrupt or incomplete saves, especially when handling complex relationships or batch operations[1][5].
|
|
||||||
- User feedback (such as success messages or error alerts) is important for confirming save actions and guiding users in case of issues[1][2].
|
|
||||||
|
|
||||||
**Example Test Case**
|
|
||||||
|
|
||||||
1. Create a task with subtasks and dependencies.
|
|
||||||
2. Mark some subtasks as complete.
|
|
||||||
3. Use the `updateTask` command to modify the parent task.
|
|
||||||
4. Save and reload the data.
|
|
||||||
5. Assert that:
|
|
||||||
- The parent task reflects the updates.
|
|
||||||
- Completed subtasks remain marked as complete.
|
|
||||||
- Dependencies are unchanged and valid.
|
|
||||||
|
|
||||||
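A minimal sketch of the example test case above, assuming a Jest-style runner and illustrative `readTasks`/`writeTasks` helpers (not actual project APIs):

```ts
// Minimal Jest-style sketch of the save round-trip test described above
import { promises as fs } from 'node:fs';

async function writeTasks(path: string, data: unknown): Promise<void> {
  await fs.writeFile(path, JSON.stringify(data, null, 2), 'utf8');
}

async function readTasks(path: string): Promise<any> {
  return JSON.parse(await fs.readFile(path, 'utf8'));
}

test('updating a parent task preserves completed subtasks and dependencies', async () => {
  const path = 'tasks.test.json';
  await writeTasks(path, {
    tasks: [
      {
        id: 1,
        title: 'Parent',
        dependencies: [],
        subtasks: [{ id: '1.1', title: 'Child', status: 'done' }]
      }
    ]
  });

  // Simulate an update that only changes the parent title
  const data = await readTasks(path);
  data.tasks[0].title = 'Parent (updated)';
  await writeTasks(path, data);

  // Reload and assert the save preserved subtask status and dependencies
  const reloaded = await readTasks(path);
  expect(reloaded.tasks[0].title).toBe('Parent (updated)');
  expect(reloaded.tasks[0].subtasks[0].status).toBe('done');
  expect(reloaded.tasks[0].dependencies).toEqual([]);

  await fs.unlink(path);
});
```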
**Summary Table: Save Functionality Test Aspects**
|
|
||||||
|
|
||||||
| Aspect | What to Test | Expected Outcome |
|
|
||||||
|-----------------------|---------------------------------------------------|-----------------------------------------|
|
|
||||||
| Basic Save | Add/update task, save, reload | Data persists and matches expectations |
|
|
||||||
| Subtask Preservation | Update parent, check subtask status | Completed subtasks remain complete |
|
|
||||||
| Dependency Handling | Save with dependencies, reload | Dependencies are intact and valid |
|
|
||||||
| Batch Operations | Bulk save/update, reload | All changes persist, no data loss |
|
|
||||||
| Validation | Save invalid data | Error is raised, data not corrupted |
|
|
||||||
|
|
||||||
These steps and scenarios ensure that the save functionality is robust, reliable, and meets the requirements of a modern task management system[1][2][5].
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
*Generated by Task Master Research Command*
|
|
||||||
*Timestamp: 2025-06-14T21:50:23.797Z*
|
|
||||||
@@ -1,81 +0,0 @@
|
|||||||
---
|
|
||||||
title: Research Session
|
|
||||||
query: "test the fix for duplicate saves - final test"
|
|
||||||
date: 6/14/2025
|
|
||||||
time: 6:10:00 PM
|
|
||||||
timestamp: 2025-06-14T22:10:00.394Z
|
|
||||||
exchanges: 1
|
|
||||||
---
|
|
||||||
|
|
||||||
# Research Session
|
|
||||||
|
|
||||||
## Initial Query
|
|
||||||
|
|
||||||
**Question:** test the fix for duplicate saves - final test
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
## Steps to Test the Fix for Duplicate Saves
|
|
||||||
|
|
||||||
**1. Prepare a Clean Test Environment**
|
|
||||||
- Ensure the tasks.json file and any related data stores are in a known, clean state with no pre-existing duplicate entries.
|
|
||||||
- Back up the current tasks.json in case rollback is needed.
|
|
||||||
|
|
||||||
**2. Define Test Scenarios**
|
|
||||||
- Attempt to save a new task with unique data.
|
|
||||||
- Attempt to save a task with the same ID or unique identifier as an existing task (simulate a duplicate).
|
|
||||||
- Attempt to save a task with the same title or content but a different ID (to test if the fix is based on ID or content).
|
|
||||||
- Simultaneously trigger multiple save operations (if the system supports concurrency) to test race conditions.
|
|
||||||
|
|
||||||
**3. Execute Manual and Automated Tests**
|
|
||||||
- Use the application's UI or API to create and save tasks as per the scenarios above.
|
|
||||||
- For each save operation, check the tasks.json file or the data store to verify:
|
|
||||||
- No duplicate entries are created.
|
|
||||||
- Existing tasks are not overwritten unless intended (e.g., for updates).
|
|
||||||
- The system returns appropriate error messages or warnings when a duplicate save is attempted.
|
|
||||||
|
|
||||||
**4. Validate System Behavior**
|
|
||||||
- Confirm that after all test operations, each task in tasks.json is unique according to the defined criteria (ID, title, or other unique fields).
|
|
||||||
- If the system is designed to merge duplicates, verify that merging occurs as expected.
|
|
||||||
- If the system is designed to reject duplicates, ensure that the rejection is clear and no duplicate is present.
|
|
||||||
|
|
||||||
**5. Test Edge Cases**
|
|
||||||
- Attempt to save tasks with minor variations (e.g., whitespace differences, case sensitivity) to ensure the duplicate detection logic is robust.
|
|
||||||
- Test with large numbers of tasks to ensure performance and correctness at scale.
|
|
||||||
|
|
||||||
**6. Review Logs and Error Handling**
|
|
||||||
- Check application logs for any errors or warnings related to duplicate saves.
|
|
||||||
- Ensure that error handling is user-friendly and provides actionable feedback.
|
|
||||||
|
|
||||||
**7. Regression Testing**
|
|
||||||
- Run the full suite of task operations (create, update, delete) to ensure the fix for duplicate saves does not introduce new issues elsewhere in the system.
|
|
||||||
|
|
||||||
## Additional Best Practices
|
|
||||||
|
|
||||||
- Communicate test results with the team to confirm the fix is effective and to prevent future regressions[1][2][5].
|
|
||||||
- Consider implementing automated duplicate detection as part of the save operation to prevent future issues[2][3].
|
|
||||||
- Document the test cases and results for future reference and audits.
|
|
||||||
|
|
||||||
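A minimal sketch of such a duplicate check in the save path, assuming tasks are keyed by a numeric `id` (shapes are illustrative):

```ts
// Minimal sketch of duplicate detection before saving (task shape is illustrative)
interface Task {
  id: number;
  title: string;
}

function saveTask(tasks: Task[], incoming: Task): Task[] {
  const existing = tasks.find((t) => t.id === incoming.id);
  if (existing) {
    // Reject (or merge) instead of appending a second entry with the same ID
    throw new Error(`Duplicate save rejected: task ${incoming.id} already exists`);
  }
  return [...tasks, incoming];
}

// Usage
let tasks: Task[] = [{ id: 1, title: 'Set up project' }];
tasks = saveTask(tasks, { id: 2, title: 'Add CLI command' }); // OK
// saveTask(tasks, { id: 2, title: 'Add CLI command' });      // throws: duplicate ID
```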
## Example Test Case Table
|
|
||||||
|
|
||||||
| Test Scenario | Expected Result | Actual Result | Pass/Fail |
|
|
||||||
|--------------------------------------|-------------------------------|---------------|-----------|
|
|
||||||
| Save unique task | Task saved | | |
|
|
||||||
| Save duplicate task (same ID) | Duplicate rejected/merged | | |
|
|
||||||
| Save duplicate task (same title) | Duplicate rejected/merged | | |
|
|
||||||
| Simultaneous saves (race condition) | Only one unique task present | | |
|
|
||||||
| Save with minor variations | No false positives/negatives | | |
|
|
||||||
|
|
||||||
Fill in the "Actual Result" and "Pass/Fail" columns during testing.
|
|
||||||
|
|
||||||
## Action Items
|
|
||||||
|
|
||||||
- Complete all test scenarios above.
|
|
||||||
- Document any issues found and retest after fixes.
|
|
||||||
- Confirm with stakeholders before closing the issue.
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
*Generated by Task Master Research Command*
|
|
||||||
*Timestamp: 2025-06-14T22:10:00.394Z*
|
|
||||||
@@ -1,153 +0,0 @@
|
|||||||
---
|
|
||||||
title: Research Session
|
|
||||||
query: "do we need to add new commands or can we just weaponize the gh cli tool"
|
|
||||||
date: 2025-08-01
|
|
||||||
time: 2:27:52 p.m.
|
|
||||||
timestamp: 2025-08-01T11:27:52.249Z
|
|
||||||
exchanges: 3
|
|
||||||
---
|
|
||||||
|
|
||||||
# Research Session
|
|
||||||
|
|
||||||
## Initial Query
|
|
||||||
|
|
||||||
**Question:** do we need to add new commands or can we just weaponize the gh cli tool
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
For the implementation of the `task-master git` command suite to automate git workflows (Task 97), it is possible to leverage the existing **GitHub CLI (`gh`) tool** rather than creating entirely new git commands from scratch. The GitHub CLI provides a rich set of commands that cover many GitHub-related operations such as branch management, pull request handling, workflow triggering, and release management, which can be scripted and integrated into your CLI tool[1][2][3][4].
|
|
||||||
|
|
||||||
### Key Points Supporting Weaponizing GitHub CLI:
|
|
||||||
|
|
||||||
- **Comprehensive GitHub Operations**: The `gh` CLI supports creating branches, managing pull requests, releases, tags, and even triggering and monitoring GitHub Actions workflows. This aligns well with the requirements of automated branch management, commit generation, and PR description generation in Task 97[1][3].
|
|
||||||
|
|
||||||
- **Scripting and Automation Friendly**: GitHub CLI commands can be scripted and integrated into custom workflows or CLI tools. You can create aliases or wrap `gh` commands inside your own CLI commands to provide a seamless user experience while relying on the battle-tested GitHub CLI for underlying operations[1][3].
|
|
||||||
|
|
||||||
- **Authentication and Permissions**: `gh` uses environment variables like `GH_TOKEN` or `GITHUB_TOKEN` for authentication, which can be securely managed in CI/CD pipelines or local environments. This simplifies integration without needing to implement custom authentication logic[3][4].
|
|
||||||
|
|
||||||
- **Extensibility**: If some specific git workflow features are not directly supported by `gh`, you can extend it with custom scripts or combine it with native git commands. This hybrid approach reduces the need to reinvent core git operations while allowing customization[1].
|
|
||||||
|
|
||||||
### Considerations for Your Project:
|
|
||||||
|
|
||||||
- Since Task 97 requires **automated branch management** (branch creation, validation, switching, cleanup), **intelligent commit generation**, and **PR description generation**, you can implement these by invoking `gh` commands programmatically within your `task-master git` commands.
|
|
||||||
|
|
||||||
- For example, branch creation and switching can be done with native git commands wrapped in your CLI, while PR creation and management can be handled via `gh pr create`, `gh pr merge`, etc.
|
|
||||||
|
|
||||||
- You may still need to implement some **custom logic** for commit type detection, branch name validation, and PR description generation based on your task metadata, but the underlying git and GitHub operations can be delegated to `gh`.
|
|
||||||
|
|
||||||
- This approach aligns with your existing CLI foundation (Task 2) and leverages a well-supported tool, reducing development effort and increasing reliability.
|
|
||||||
|
|
||||||
### Summary
|
|
||||||
|
|
||||||
You **do not necessarily need to add entirely new git commands** for Task 97. Instead, you can **weaponize the GitHub CLI (`gh`) tool** by wrapping and scripting its commands within your `task-master git` command suite. This will allow you to automate git workflows efficiently, maintain consistency, and leverage GitHub’s API capabilities without reinventing the wheel[1][2][3][4].
|
|
||||||
|
|
||||||
If there are gaps in `gh` functionality for your specific workflow needs, you can supplement with native git commands or custom scripts, but the core git and GitHub operations should be handled by `gh` for best results.
|
|
||||||
|
|
||||||
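As a rough sketch of this wrapping approach, a `task-master git` command could shell out to `gh` and native git like this (the wrapper functions and branch names are illustrative; the `gh pr create` flags shown are standard):

```ts
// Minimal sketch of wrapping the GitHub CLI inside a task-master git command
import { execFileSync } from 'node:child_process';

function createBranch(name: string): void {
  // Branch management can stay on native git while PR handling goes through gh
  execFileSync('git', ['checkout', '-b', name], { stdio: 'inherit' });
}

function createPullRequest(title: string, body: string, base = 'main'): void {
  // Requires `gh` to be installed and authenticated (GH_TOKEN or `gh auth login`)
  execFileSync(
    'gh',
    ['pr', 'create', '--title', title, '--body', body, '--base', base],
    { stdio: 'inherit' }
  );
}

// Usage inside a hypothetical `task-master git pr` command
createBranch('task/97-git-workflow');
createPullRequest('Task 97: git workflow automation', 'Generated from the task run report.');
```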
---
|
|
||||||
|
|
||||||
## Follow-up 1
|
|
||||||
|
|
||||||
**Question:** how does this relate to git worktrees? i hear that they get super complicated when trying to implement them.
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
Git worktrees allow you to have multiple working directories attached to the same Git repository, each checked out to different branches simultaneously. This enables parallel development workflows without the overhead of cloning the repository multiple times or constantly switching branches in a single working directory[2][3].
|
|
||||||
|
|
||||||
However, **implementing or managing git worktrees can get complicated** for several reasons:
|
|
||||||
|
|
||||||
- **Complexity for users and tooling:** Managing multiple active worktrees requires careful tracking of which directory corresponds to which branch. This can be confusing and error-prone, especially for novices or when many worktrees are in use simultaneously. Mistakes like committing to the wrong branch or working in the wrong directory are common risks[1].
|
|
||||||
|
|
||||||
- **Maintenance overhead:** Worktrees accumulate over time and need to be pruned or cleaned up regularly (`git worktree prune`) to avoid clutter and potential confusion. This adds an extra maintenance step to the workflow[1].
|
|
||||||
|
|
||||||
- **Syncing and update challenges:** Each worktree is a separate checkout and can become outdated if not regularly updated from the main repository. This can lead to merge conflicts or redundant work if branches diverge significantly[1].
|
|
||||||
|
|
||||||
- **Tooling and integration limitations:** Not all Git tools, scripts, or GUIs fully support worktrees or are optimized for them. This can cause inconsistencies or require additional custom tooling to manage worktrees effectively[1].
|
|
||||||
|
|
||||||
- **Performance considerations:** Managing multiple worktrees simultaneously can have performance implications, especially in very large repositories, due to the overhead of maintaining multiple checkouts and metadata[1].
|
|
||||||
|
|
||||||
In the context of your project, which involves automating git workflows and possibly wrapping or extending git/GitHub CLI commands, these complexities imply:
|
|
||||||
|
|
||||||
- If you consider supporting git worktrees in your `task-master git` command suite, you will need to handle the complexity of managing multiple directories and branches carefully.
|
|
||||||
|
|
||||||
- You might need to implement additional logic or tooling to track and manage worktrees, avoid user confusion, and ensure synchronization between them.
|
|
||||||
|
|
||||||
- Since your project already plans to leverage the GitHub CLI (`gh`) and native git commands, you could integrate worktree commands (`git worktree add`, `git worktree remove`, `git worktree list`) but should be prepared for the added complexity in workflow management and user documentation.
|
|
||||||
|
|
||||||
- Custom wrappers or managers (like the one described in [3]) can help reduce friction by automating worktree creation, switching, and cleanup, which might be a useful approach to consider.
|
|
||||||
|
|
||||||
Overall, **git worktrees provide powerful parallel branch management but introduce complexity in implementation and user experience** that must be carefully managed through tooling, automation, and clear documentation[1][3][4].
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Summary Table of Git Worktree Challenges and Considerations**
|
|
||||||
|
|
||||||
| Aspect | Description | Implication for Project |
|
|
||||||
|-------------------------|-----------------------------------------------------------------------------------------------|----------------------------------------------------------|
|
|
||||||
| User Complexity | Confusing multiple directories and branches | Need clear UI/UX and safeguards |
|
|
||||||
| Maintenance Overhead | Requires pruning and cleanup of stale worktrees | Automate cleanup or provide commands |
|
|
||||||
| Syncing Issues | Worktrees can become outdated or diverge | Implement update checks and sync workflows |
|
|
||||||
| Tooling Support | Limited support in some Git tools and GUIs | May require custom tooling or wrappers |
|
|
||||||
| Performance | Overhead in large repos with many worktrees | Monitor and optimize performance |
|
|
||||||
| Workflow Automation | Manual worktree management is tedious | Build custom managers or wrappers to simplify usage |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
Given your project’s focus on automating git workflows and leveraging existing CLI tools, **implementing git worktree support is feasible but will require careful design to handle these complexities effectively**. You may want to start with basic worktree commands and incrementally add automation and safeguards as needed.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Follow-up 2
|
|
||||||
|
|
||||||
**Question:** since we already have a way to have separated task lists per branch, couldn't we do the same thing for worktrees? each taskmaster list (tag) would have its own worktree based on the name of the tag?
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
|
|
||||||
Yes, you can apply a similar approach used for separated task lists per branch to git worktrees by associating each taskmaster list (tag) with its own git worktree named after the tag. This means each tag would correspond to a dedicated worktree directory checked out to a branch or commit related to that tag, enabling isolated parallel work environments for each task list.
|
|
||||||
|
|
||||||
### How This Maps to Git Worktrees
|
|
||||||
|
|
||||||
- **Git worktrees** allow multiple working directories linked to the same repository, each checked out to a different branch or commit[1][2]. This lets you work on multiple branches simultaneously without switching contexts in a single directory.
|
|
||||||
|
|
||||||
- When you create a worktree with `git worktree add <path> [<branch>]`, Git creates a new linked working tree at `<path>`, optionally checking out a specified branch or creating a new one[1][2]. Naming the worktree directory after the tag (taskmaster list) is straightforward and semantically clear.
|
|
||||||
|
|
||||||
- This setup mirrors your existing branch-based task list separation but extends it to physical directories, which can improve workflow isolation and reduce context switching.
|
|
||||||
|
|
||||||
### Benefits for Your Project
|
|
||||||
|
|
||||||
- **Isolated environments per task list/tag:** Each tag’s worktree can have its own checked-out branch, allowing independent development, testing, or review without interference.
|
|
||||||
|
|
||||||
- **Parallel workflows:** Developers can switch between worktrees (directories) instead of switching branches in one directory, reducing errors and improving productivity.
|
|
||||||
|
|
||||||
- **Automation potential:** Your `task-master git` commands can automate creating, switching, and cleaning up these worktrees based on tags, similar to how you manage branches and task lists.
|
|
||||||
|
|
||||||
### Challenges and Considerations
|
|
||||||
|
|
||||||
- **Complexity in management:** Git worktrees add complexity in tracking multiple directories and ensuring users operate in the correct one[1]. Your CLI should provide clear commands and feedback to avoid confusion.
|
|
||||||
|
|
||||||
- **Maintenance overhead:** Worktrees need pruning and cleanup (`git worktree prune` or `git worktree remove`) to avoid clutter[1]. Automating cleanup in your tool would be beneficial.
|
|
||||||
|
|
||||||
- **Syncing and updates:** Each worktree is a separate checkout and can become outdated. Your automation should include commands to update or rebase worktrees as needed.
|
|
||||||
|
|
||||||
- **Tooling support:** Not all Git tools or scripts fully support worktrees, so your CLI wrappers should handle edge cases and provide a consistent experience[1][3].
|
|
||||||
|
|
||||||
### Practical Implementation Ideas
|
|
||||||
|
|
||||||
- When a new tag (taskmaster list) is created, run `git worktree add` to create a new worktree directory named after the tag.
|
|
||||||
|
|
||||||
- Store metadata linking tags to worktree paths in your taskmaster configuration to track and manage them.
|
|
||||||
|
|
||||||
- Provide commands like `task-master worktree list`, `task-master worktree add <tag>`, and `task-master worktree remove <tag>` to manage these worktrees.
|
|
||||||
|
|
||||||
- Integrate with GitHub CLI (`gh`) and native git commands to automate branch creation, PR handling, and syncing within each worktree.
|
|
||||||
|
|
||||||
- Consider inspiration from existing tools like [wt](https://github.com/taecontrol/wt) that automate worktree environment setup and teardown[3].
|
|
||||||
|
|
||||||
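A minimal sketch of the tag-to-worktree mapping, assuming worktrees live in a sibling `worktrees/` directory and branches are named after tags (all names are illustrative):

```ts
// Minimal sketch of mapping a taskmaster tag to a dedicated git worktree
import { execFileSync } from 'node:child_process';
import * as path from 'node:path';

function addWorktreeForTag(tag: string, repoRoot: string): string {
  const worktreePath = path.join(repoRoot, '..', 'worktrees', tag);
  // Creates a new branch named after the tag and checks it out in its own directory
  execFileSync('git', ['worktree', 'add', '-b', tag, worktreePath], { stdio: 'inherit' });
  return worktreePath;
}

function removeWorktreeForTag(tag: string, repoRoot: string): void {
  const worktreePath = path.join(repoRoot, '..', 'worktrees', tag);
  execFileSync('git', ['worktree', 'remove', worktreePath], { stdio: 'inherit' });
  execFileSync('git', ['worktree', 'prune'], { stdio: 'inherit' });
}

// Usage: `task-master worktree add feature-auth` could call:
addWorktreeForTag('feature-auth', process.cwd());
```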
### Summary
|
|
||||||
|
|
||||||
Using git worktrees per taskmaster tag is a feasible and potentially powerful extension of your current branch-based task list separation. It offers isolated, parallel working directories aligned with your tags, improving workflow clarity and reducing branch-switching overhead. However, it requires careful tooling and automation to manage complexity, maintenance, and synchronization effectively. Given your project’s focus on automating git workflows and leveraging CLI tools, implementing this feature incrementally with robust user feedback and cleanup commands would be a practical approach.
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
*Generated by Task Master Research Command*
|
|
||||||
*Timestamp: 2025-08-01T11:27:52.249Z*
|
|
||||||
@@ -1,471 +0,0 @@
|
|||||||
# Task Template Importing System - Product Requirements Document
|
|
||||||
|
|
||||||
<context>
|
|
||||||
# Overview
|
|
||||||
The Task Template Importing system enables seamless integration of external task templates into the Task Master CLI through automatic file discovery. This system allows users to drop task template files into the tasks directory and immediately access them as new tag contexts without manual import commands or configuration. The solution addresses the need for multi-project task management, team collaboration through shared templates, and clean separation between permanent tasks and temporary project contexts.
|
|
||||||
|
|
||||||
# Core Features
|
|
||||||
## Silent Task Template Discovery
|
|
||||||
- **What it does**: Automatically scans for `tasks_*.json` files in the tasks directory during tag operations
|
|
||||||
- **Why it's important**: Eliminates friction in adding new task contexts and enables zero-configuration workflow
|
|
||||||
- **How it works**: File pattern matching extracts tag names from filenames and validates against internal tag keys
|
|
||||||
|
|
||||||
## External Tag Resolution System
|
|
||||||
- **What it does**: Provides fallback mechanism to external files when tags are not found in main tasks.json
|
|
||||||
- **Why it's important**: Maintains clean separation between core tasks and project-specific templates
|
|
||||||
- **How it works**: Tag resolution logic checks external files as secondary source while preserving main file precedence
|
|
||||||
|
|
||||||
## Read-Only External Tag Access
|
|
||||||
- **What it does**: Allows viewing and switching to external tags while preventing modifications
|
|
||||||
- **Why it's important**: Protects template integrity and prevents accidental changes to shared templates
|
|
||||||
- **How it works**: All task modifications route to main tasks.json regardless of current tag context
|
|
||||||
|
|
||||||
## Tag Precedence Management
|
|
||||||
- **What it does**: Ensures main tasks.json tags override external files with same tag names
|
|
||||||
- **Why it's important**: Prevents conflicts and maintains data integrity
|
|
||||||
- **How it works**: Priority system where main file tags take precedence over external file tags
|
|
||||||
|
|
||||||
# User Experience
|
|
||||||
## User Personas
|
|
||||||
- **Solo Developer**: Manages multiple projects with different task contexts
|
|
||||||
- **Team Lead**: Shares standardized task templates across team members
|
|
||||||
- **Project Manager**: Organizes tasks by project phases or feature branches
|
|
||||||
|
|
||||||
## Key User Flows
|
|
||||||
### Template Addition Flow
|
|
||||||
1. User receives or creates a `tasks_projectname.json` file
|
|
||||||
2. User drops file into `.taskmaster/tasks/` directory
|
|
||||||
3. Tag becomes immediately available via `task-master use-tag projectname`
|
|
||||||
4. User can list, view, and switch to external tag without configuration
|
|
||||||
|
|
||||||
### Template Usage Flow
|
|
||||||
1. User runs `task-master tags` to see available tags including external ones
|
|
||||||
2. External tags display with `(imported)` indicator
|
|
||||||
3. User switches to external tag with `task-master use-tag projectname`
|
|
||||||
4. User can view tasks but modifications are routed to main tasks.json
|
|
||||||
|
|
||||||
## UI/UX Considerations
|
|
||||||
- External tags clearly marked with `(imported)` suffix in listings
|
|
||||||
- Visual indicators distinguish between main and external tags
|
|
||||||
- Error messages guide users when external files are malformed
|
|
||||||
- Read-only warnings when attempting to modify external tag contexts
|
|
||||||
</context>
|
|
||||||
|
|
||||||
<PRD>
|
|
||||||
# Technical Architecture
|
|
||||||
## System Components
|
|
||||||
1. **External File Discovery Engine**
|
|
||||||
- File pattern scanner for `tasks_*.json` files
|
|
||||||
- Tag name extraction from filenames using regex
|
|
||||||
- Dynamic tag registry combining main and external sources
|
|
||||||
- Error handling for malformed external files
|
|
||||||
|
|
||||||
2. **Enhanced Tag Resolution System**
|
|
||||||
- Fallback mechanism to external files when tags not found in main tasks.json
|
|
||||||
- Precedence management ensuring main file tags override external files
|
|
||||||
- Read-only access enforcement for external tags
|
|
||||||
- Tag metadata preservation during discovery operations
|
|
||||||
|
|
||||||
3. **Silent Discovery Integration**
|
|
||||||
- Automatic scanning during tag-related operations
|
|
||||||
- Seamless integration with existing tag management functions
|
|
||||||
- Zero-configuration workflow requiring no manual import commands
|
|
||||||
- Dynamic tag availability without restart requirements
|
|
||||||
|
|
||||||
## Data Models
|
|
||||||
|
|
||||||
### External Task File Structure
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"meta": {
|
|
||||||
"projectName": "External Project Name",
|
|
||||||
"version": "1.0.0",
|
|
||||||
"templateSource": "external",
|
|
||||||
"createdAt": "ISO-8601 timestamp"
|
|
||||||
},
|
|
||||||
"tags": {
|
|
||||||
"projectname": {
|
|
||||||
"meta": {
|
|
||||||
"name": "Project Name",
|
|
||||||
"description": "Project description",
|
|
||||||
"createdAt": "ISO-8601 timestamp"
|
|
||||||
},
|
|
||||||
"tasks": [
|
|
||||||
// Array of task objects
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"master": {
|
|
||||||
// This section is ignored to prevent conflicts
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Enhanced Tag Registry Model
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"mainTags": [
|
|
||||||
{
|
|
||||||
"name": "master",
|
|
||||||
"source": "main",
|
|
||||||
"taskCount": 150,
|
|
||||||
"isActive": true
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"externalTags": [
|
|
||||||
{
|
|
||||||
"name": "projectname",
|
|
||||||
"source": "external",
|
|
||||||
"filename": "tasks_projectname.json",
|
|
||||||
"taskCount": 25,
|
|
||||||
"isReadOnly": true
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## APIs and Integrations
|
|
||||||
1. **File System Discovery API**
|
|
||||||
- Directory scanning with pattern matching
|
|
||||||
- JSON file validation and parsing
|
|
||||||
- Error handling for corrupted or malformed files
|
|
||||||
- File modification time tracking for cache invalidation
|
|
||||||
|
|
||||||
2. **Enhanced Tag Management API**
|
|
||||||
- `scanForExternalTaskFiles(projectRoot)` - Discover external template files
|
|
||||||
- `getExternalTagsFromFiles(projectRoot)` - Extract tag names from external files
|
|
||||||
- `readExternalTagData(projectRoot, tagName)` - Read specific external tag data
|
|
||||||
- `getAvailableTags(projectRoot)` - Combined main and external tag listing
|
|
||||||
|
|
||||||
3. **Tag Resolution Enhancement**
|
|
||||||
- Modified `readJSON()` with external file fallback
|
|
||||||
- Enhanced `tags()` function with external tag display
|
|
||||||
- Updated `useTag()` function supporting external tag switching
|
|
||||||
- Read-only enforcement for external tag operations
|
|
||||||
|
|
||||||
## Infrastructure Requirements
|
|
||||||
1. **File System Access**
|
|
||||||
- Read permissions for tasks directory
|
|
||||||
- JSON parsing capabilities
|
|
||||||
- Pattern matching and regex support
|
|
||||||
- Error handling for file system operations
|
|
||||||
|
|
||||||
2. **Backward Compatibility**
|
|
||||||
- Existing tag operations continue unchanged
|
|
||||||
- Main tasks.json structure preserved
|
|
||||||
- No breaking changes to current workflows
|
|
||||||
- Graceful degradation when external files unavailable
|
|
||||||
|
|
||||||
# Development Roadmap
|
|
||||||
## Phase 1: Core External File Discovery (Foundation)
|
|
||||||
1. **External File Scanner Implementation**
|
|
||||||
- Create `scanForExternalTaskFiles()` function in utils.js
|
|
||||||
- Implement file pattern matching for `tasks_*.json` files
|
|
||||||
- Add error handling for file system access issues
|
|
||||||
- Test with various filename patterns and edge cases
|
|
||||||
|
|
||||||
2. **Tag Name Extraction System**
|
|
||||||
- Implement `getExternalTagsFromFiles()` function
|
|
||||||
- Create regex pattern for extracting tag names from filenames
|
|
||||||
- Add validation to ensure tag names match internal tag key format
|
|
||||||
- Handle special characters and invalid filename patterns
|
|
||||||
|
|
||||||
3. **External Tag Data Reader**
|
|
||||||
- Create `readExternalTagData()` function
|
|
||||||
- Implement JSON parsing with error handling
|
|
||||||
- Add validation for required tag structure
|
|
||||||
- Ignore 'master' key in external files to prevent conflicts
|
|
||||||
|
|
||||||
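A minimal sketch of the Phase 1 discovery functions described above (function names mirror the PRD; the regex and directory layout are illustrative):

```ts
// Minimal sketch of external template discovery and tag-name extraction
import * as fs from 'node:fs';
import * as path from 'node:path';

const EXTERNAL_FILE_PATTERN = /^tasks_([a-z0-9-]+)\.json$/i;

function scanForExternalTaskFiles(projectRoot: string): string[] {
  const tasksDir = path.join(projectRoot, '.taskmaster', 'tasks');
  if (!fs.existsSync(tasksDir)) return [];
  return fs
    .readdirSync(tasksDir)
    .filter((file) => EXTERNAL_FILE_PATTERN.test(file))
    .map((file) => path.join(tasksDir, file));
}

function getExternalTagsFromFiles(projectRoot: string): string[] {
  return scanForExternalTaskFiles(projectRoot)
    .map((file) => path.basename(file).match(EXTERNAL_FILE_PATTERN)?.[1])
    .filter((tag): tag is string => Boolean(tag));
}

// Usage: tasks_projectname.json in .taskmaster/tasks/ yields the external tag "projectname"
console.log(getExternalTagsFromFiles(process.cwd()));
```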
## Phase 2: Tag Resolution Enhancement (Core Integration)
|
|
||||||
1. **Enhanced Tag Registry**
|
|
||||||
- Implement `getAvailableTags()` function combining main and external sources
|
|
||||||
- Create tag metadata structure including source information
|
|
||||||
- Add deduplication logic prioritizing main tags over external
|
|
||||||
- Implement caching mechanism for performance optimization
|
|
||||||
|
|
||||||
2. **Modified readJSON Function**
|
|
||||||
- Add external file fallback when tag not found in main tasks.json
|
|
||||||
- Maintain precedence rule: main tasks.json overrides external files
|
|
||||||
- Preserve existing error handling and validation patterns
|
|
||||||
- Ensure read-only access for external tags
|
|
||||||
|
|
||||||
3. **Tag Listing Enhancement**
|
|
||||||
- Update `tags()` function to display external tags with `(imported)` indicator
|
|
||||||
- Show external tag metadata and task counts
|
|
||||||
- Maintain current tag highlighting and sorting functionality
|
|
||||||
- Add visual distinction between main and external tags
|
|
||||||
|
|
||||||
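A minimal sketch of the readJSON fallback described in Phase 2, with main-file precedence (the external data shape follows the file structure above; helper names and the main tasks.json layout are illustrative assumptions):

```ts
// Minimal sketch of tag resolution with external-file fallback and main-file precedence
import * as fs from 'node:fs';
import * as path from 'node:path';

interface TagData {
  tasks: unknown[];
  meta?: Record<string, unknown>;
}

function readTagData(projectRoot: string, tagName: string): TagData | null {
  const tasksDir = path.join(projectRoot, '.taskmaster', 'tasks');
  const mainPath = path.join(tasksDir, 'tasks.json');
  // Assumption: the tagged tasks.json keys each tag at the top level
  const main = JSON.parse(fs.readFileSync(mainPath, 'utf8'));

  // Main tasks.json always wins when the tag exists there
  if (main[tagName]) return main[tagName];

  // Fallback: look for tasks_<tag>.json and read the matching tag (read-only)
  const externalPath = path.join(tasksDir, `tasks_${tagName}.json`);
  if (!fs.existsSync(externalPath)) return null;

  const external = JSON.parse(fs.readFileSync(externalPath, 'utf8'));
  return external.tags?.[tagName] ?? null;
}
```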
## Phase 3: User Interface Integration (User Experience)
|
|
||||||
1. **Tag Switching Enhancement**
|
|
||||||
- Update `useTag()` function to support external tag switching
|
|
||||||
- Add read-only warnings when switching to external tags
|
|
||||||
- Update state.json with external tag context information
|
|
||||||
- Maintain current tag switching behavior for main tags
|
|
||||||
|
|
||||||
2. **Error Handling and User Feedback**
|
|
||||||
- Implement comprehensive error messages for malformed external files
|
|
||||||
- Add user guidance for proper external file structure
|
|
||||||
- Create warnings for read-only operations on external tags
|
|
||||||
- Ensure graceful degradation when external files are corrupted
|
|
||||||
|
|
||||||
3. **Documentation and Help Integration**
|
|
||||||
- Update command help text to include external tag information
|
|
||||||
- Add examples of external file structure and usage
|
|
||||||
- Create troubleshooting guide for common external file issues
|
|
||||||
- Document file naming conventions and best practices
|
|
||||||
|
|
||||||
## Phase 4: Advanced Features and Optimization (Enhancement)
|
|
||||||
1. **Performance Optimization**
|
|
||||||
- Implement file modification time caching
|
|
||||||
- Add lazy loading for external tag data
|
|
||||||
- Optimize file scanning for directories with many files
|
|
||||||
- Create efficient tag resolution caching mechanism
|
|
||||||
|
|
||||||
2. **Advanced External File Features**
|
|
||||||
- Support for nested external file directories
|
|
||||||
- Batch external file validation and reporting
|
|
||||||
- External file metadata display and management
|
|
||||||
- Integration with version control ignore patterns
|
|
||||||
|
|
||||||
3. **Team Collaboration Features**
|
|
||||||
- Shared external file validation
|
|
||||||
- External file conflict detection and resolution
|
|
||||||
- Team template sharing guidelines and documentation
|
|
||||||
- Integration with git workflows for template management
|
|
||||||
|
|
||||||
# Logical Dependency Chain
## Foundation Layer (Must Be Built First)
1. **External File Scanner**
   - Core requirement for all other functionality
   - Provides the discovery mechanism for external template files
   - Must handle file system access and pattern matching reliably

2. **Tag Name Extraction**
   - Depends on file scanner functionality
   - Required for identifying available external tags
   - Must validate tag names against internal format requirements

3. **External Tag Data Reader**
   - Depends on tag name extraction
   - Provides access to external tag content
   - Must handle JSON parsing and validation safely

## Integration Layer (Builds on Foundation)
4. **Enhanced Tag Registry**
   - Depends on all foundation components
   - Combines main and external tag sources
   - Required for unified tag management across the system

5. **Modified readJSON Function**
   - Depends on enhanced tag registry
   - Provides fallback mechanism for tag resolution
   - Critical for maintaining backward compatibility

6. **Tag Listing Enhancement**
   - Depends on enhanced tag registry
   - Provides user visibility into external tags
   - Required for user discovery of available templates

## User Experience Layer (Completes the Feature)
7. **Tag Switching Enhancement**
   - Depends on modified readJSON and tag listing
   - Enables user interaction with external tags
   - Must enforce read-only access properly

8. **Error Handling and User Feedback**
   - Can be developed in parallel with other UX components
   - Enhances reliability and user experience
   - Should be integrated throughout the development process

9. **Documentation and Help Integration**
   - Should be developed alongside implementation
   - Required for user adoption and proper usage
   - Can be completed in parallel with advanced features

## Optimization Layer (Performance and Advanced Features)
10. **Performance Optimization**
    - Can be developed after core functionality is stable
    - Improves user experience with large numbers of external files
    - Not blocking for initial release

11. **Advanced External File Features**
    - Can be developed independently after core features
    - Enhances power user workflows
    - Optional for initial release

12. **Team Collaboration Features**
    - Depends on stable core functionality
    - Enhances team workflows and template sharing
    - Can be prioritized based on user feedback

# Risks and Mitigations
## Technical Challenges

### File System Performance
**Risk**: Scanning for external files on every tag operation could impact performance with large directories.
**Mitigation**:
- Implement file modification time caching to avoid unnecessary rescans
- Use lazy loading for external tag data - only read when accessed
- Add configurable limits on number of external files to scan
- Optimize file pattern matching with efficient regex patterns

### External File Corruption
**Risk**: Malformed or corrupted external JSON files could break tag operations.
**Mitigation**:
- Implement robust JSON parsing with comprehensive error handling
- Add file validation before attempting to parse external files
- Gracefully skip corrupted files and continue with valid ones
- Provide clear error messages guiding users to fix malformed files

### Tag Name Conflicts
**Risk**: External files might contain tag names that conflict with main tasks.json tags.
**Mitigation**:
- Implement strict precedence rule: main tasks.json always overrides external files
- Add warnings when external tags are ignored due to conflicts
- Document naming conventions to avoid common conflicts
- Provide validation tools to check for potential conflicts

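A sketch of the "skip corrupted files, warn on conflicts" behaviour described in the last two mitigations; the warning text and the specific validation checks are illustrative assumptions:

```typescript
import fs from 'node:fs';
import path from 'node:path';

// Parse one external template defensively: return null instead of throwing so a
// single corrupted file cannot break tag resolution for everything else.
function readExternalTagFileSafe(filePath: string, mainTagNames: string[]): unknown | null {
  let parsed: any;
  try {
    parsed = JSON.parse(fs.readFileSync(filePath, 'utf8'));
  } catch (err) {
    console.warn(`Skipping ${path.basename(filePath)}: invalid JSON (${(err as Error).message})`);
    return null;
  }

  if (!parsed?.tags || typeof parsed.tags !== 'object') {
    console.warn(`Skipping ${path.basename(filePath)}: missing required "tags" object`);
    return null;
  }

  for (const tagName of Object.keys(parsed.tags)) {
    if (mainTagNames.includes(tagName)) {
      console.warn(
        `Tag "${tagName}" in ${path.basename(filePath)} conflicts with tasks.json and will be ignored`
      );
    }
  }
  return parsed;
}
```
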
## MVP Definition

### Core Feature Scope
**Risk**: Including too many advanced features could delay the core functionality.
**Mitigation**:
- Define MVP as basic external file discovery + tag switching
- Focus on the silent discovery mechanism as the primary value proposition
- Defer advanced features like nested directories and batch operations
- Ensure each phase delivers complete, usable functionality

### User Experience Complexity
**Risk**: The read-only nature of external tags might confuse users.
**Mitigation**:
- Provide clear visual indicators for external tags in all interfaces
- Add explicit warnings when users attempt to modify external tag contexts
- Document the read-only behavior and its rationale clearly
- Consider future enhancement for external tag modification workflows

### Backward Compatibility
**Risk**: Changes to tag resolution logic might break existing workflows.
**Mitigation**:
- Maintain existing tag operations unchanged for main tasks.json
- Add external file support as enhancement, not replacement
- Test thoroughly with existing task structures and workflows
- Provide migration path if any breaking changes are necessary

## Resource Constraints

### Development Complexity
**Risk**: Integration with existing tag management system could be complex.
**Mitigation**:
- Phase implementation to minimize risk of breaking existing functionality
- Create comprehensive test suite covering both main and external tag scenarios
- Use feature flags to enable/disable external file support during development
- Implement thorough error handling to prevent system failures

### File System Dependencies
**Risk**: Different operating systems might handle file operations differently.
**Mitigation**:
- Use Node.js built-in file system APIs for cross-platform compatibility
- Test on multiple operating systems (Windows, macOS, Linux)
- Handle file path separators and naming conventions properly
- Add fallback mechanisms for file system access issues

### User Adoption
**Risk**: Users might not understand or adopt the external file template system.
**Mitigation**:
- Create clear documentation with practical examples
- Provide sample external template files for common use cases
- Integrate help and guidance directly into the CLI interface
- Gather user feedback early and iterate on the user experience

# Appendix
## External File Naming Convention

### Filename Pattern
- **Format**: `tasks_[tagname].json`
- **Examples**: `tasks_feature-auth.json`, `tasks_v2-migration.json`, `tasks_project-alpha.json`
- **Validation**: Tag name must match internal tag key format (alphanumeric, hyphens, underscores)

### File Structure Requirements
```json
{
  "meta": {
    "projectName": "Required: Human-readable project name",
    "version": "Optional: Template version",
    "templateSource": "Optional: Source identifier",
    "createdAt": "Optional: ISO-8601 timestamp"
  },
  "tags": {
    "[tagname]": {
      "meta": {
        "name": "Required: Tag display name",
        "description": "Optional: Tag description",
        "createdAt": "Optional: ISO-8601 timestamp"
      },
      "tasks": [
        // Required: Array of task objects following standard task structure
      ]
    }
  }
}
```

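For reference, a minimal `tasks_feature-auth.json` following this structure might look like the example below; the task fields shown are a simplified assumption of the standard task shape, not a complete specification:

```json
{
  "meta": {
    "projectName": "Auth Feature Templates",
    "version": "1.0.0"
  },
  "tags": {
    "feature-auth": {
      "meta": {
        "name": "feature-auth",
        "description": "Starter tasks for adding authentication"
      },
      "tasks": [
        {
          "id": 1,
          "title": "Add login endpoint",
          "status": "pending",
          "dependencies": []
        }
      ]
    }
  }
}
```
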
## Implementation Functions Specification

### Core Discovery Functions
```javascript
// Scan tasks directory for external template files
function scanForExternalTaskFiles(projectRoot) {
  // Returns: Array of external file paths
}

// Extract tag names from external filenames
function getExternalTagsFromFiles(projectRoot) {
  // Returns: Array of external tag names
}

// Read specific external tag data
function readExternalTagData(projectRoot, tagName) {
  // Returns: Tag data object or null if not found
}

// Get combined main and external tags
function getAvailableTags(projectRoot) {
  // Returns: Combined tag registry with metadata
}
```

### Integration Points
```javascript
// Enhanced readJSON with external fallback
function readJSON(projectRoot, tag = null) {
  // Modified to check external files when tag not found in main
}

// Enhanced tags listing with external indicators
function tags(projectRoot, options = {}) {
  // Modified to display external tags with (imported) suffix
}

// Enhanced tag switching with external support
function useTag(projectRoot, tagName) {
  // Modified to support switching to external tags (read-only)
}
```

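One way the `readJSON()` fallback could be layered on, shown as a self-contained sketch. The merged-object shape, the `_source` marker, and the assumption that main tags live at the top level of tasks.json are illustrative, not the project's actual schema:

```typescript
import fs from 'node:fs';
import path from 'node:path';

// Sketch only: consult main tasks.json first, then fall back to an external file.
function readJSON(projectRoot: string, tag: string | null = null): any {
  const tasksDir = path.join(projectRoot, '.taskmaster', 'tasks');
  const mainData = JSON.parse(fs.readFileSync(path.join(tasksDir, 'tasks.json'), 'utf8'));

  // Precedence rule: main tasks.json always overrides external files.
  if (!tag || mainData[tag]) return mainData;

  const externalPath = path.join(tasksDir, `tasks_${tag}.json`);
  if (fs.existsSync(externalPath)) {
    const external = JSON.parse(fs.readFileSync(externalPath, 'utf8'));
    const tagData = external?.tags?.[tag];
    if (tagData) {
      // Surface the external tag alongside main data, marked as read-only.
      return { ...mainData, [tag]: { ...tagData, _source: 'external' } };
    }
  }
  throw new Error(`Tag "${tag}" not found in tasks.json or any external template file`);
}
```
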
## Error Handling Specifications

### File System Errors
- **ENOENT**: External file not found - gracefully skip and continue
- **EACCES**: Permission denied - warn user and continue with available files
- **EISDIR**: Directory instead of file - skip and continue scanning

### JSON Parsing Errors
- **SyntaxError**: Malformed JSON - skip file and log warning with filename
- **Missing required fields**: Skip file and provide specific error message
- **Invalid tag structure**: Skip file and guide user to correct format

### Tag Conflict Resolution
- **Duplicate tag names**: Main tasks.json takes precedence, log warning
- **Invalid tag names**: Skip external file and provide naming guidance
- **Master key in external**: Ignore master key, process other tags normally
</PRD>

@@ -1,130 +0,0 @@
# Phase 0: Spike - Autonomous TDD Workflow ✅ COMPLETE

## Objective
Validate feasibility and build foundational understanding before full implementation.

## Status
**COMPLETED** - All deliverables implemented and validated.

See `apps/cli/src/commands/autopilot.command.ts` for implementation.

## Scope
- Implement CLI skeleton `tm autopilot` with dry-run mode
- Show planned steps from a real task with subtasks
- Detect test runner from package.json
- Detect git state and render preflight report

## Deliverables

### 1. CLI Command Skeleton
- Create `apps/cli/src/commands/autopilot.command.ts`
- Support `tm autopilot <taskId>` command
- Implement `--dry-run` flag
- Basic help text and usage information

### 2. Preflight Detection System
- Detect test runner from package.json (npm test, pnpm test, etc.)
- Check git working tree state (clean/dirty)
- Validate required tools are available (git, gh, node/npm)
- Detect default branch

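A rough sketch of how these preflight checks could be assembled with Node built-ins; the command strings, the returned shape, and the default-branch heuristic are illustrative assumptions rather than the shipped implementation:

```typescript
import fs from 'node:fs';
import path from 'node:path';
import { execSync } from 'node:child_process';

interface PreflightReport {
  testCommand: string | null;
  workingTreeClean: boolean;
  missingTools: string[];
  defaultBranch: string | null;
}

function runPreflight(projectRoot: string): PreflightReport {
  // Test runner: package.json -> scripts.test.
  const pkgPath = path.join(projectRoot, 'package.json');
  const pkg = fs.existsSync(pkgPath) ? JSON.parse(fs.readFileSync(pkgPath, 'utf8')) : {};
  const testCommand = pkg.scripts?.test ? 'npm test' : null;

  // Git state: empty porcelain output means a clean working tree.
  const status = execSync('git status --porcelain', { cwd: projectRoot }).toString();
  const workingTreeClean = status.trim().length === 0;

  // Required tools: probe each with --version.
  const missingTools = ['git', 'gh', 'node', 'npm'].filter((tool) => {
    try {
      execSync(`${tool} --version`, { stdio: 'ignore' });
      return false;
    } catch {
      return true;
    }
  });

  // Default branch, e.g. "origin/main" -> "main".
  let defaultBranch: string | null = null;
  try {
    defaultBranch = execSync('git symbolic-ref --short refs/remotes/origin/HEAD', { cwd: projectRoot })
      .toString()
      .trim()
      .replace(/^origin\//, '');
  } catch {
    defaultBranch = null;
  }

  return { testCommand, workingTreeClean, missingTools, defaultBranch };
}
```
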
### 3. Dry-Run Execution Plan Display
Display planned execution for a task including:
- Preflight checks status
- Branch name that would be created
- Tag that would be set
- List of subtasks in execution order
- For each subtask:
  - RED phase: test file that would be created
  - GREEN phase: implementation files that would be modified
  - COMMIT: commit message that would be used
- Finalization steps: test suite run, coverage check, push, PR creation

### 4. Task Loading & Validation
- Load task from TaskMaster state
- Validate task exists and has subtasks
- If no subtasks, show message about needing to expand first
- Show dependency order for subtasks

## Example Output

```bash
$ tm autopilot 42 --dry-run

Autopilot Plan for Task #42 [analytics]: User metrics tracking
─────────────────────────────────────────────────────────────

Preflight Checks:
  ✓ Working tree is clean
  ✓ Test command detected: npm test
  ✓ Tools available: git, gh, node, npm
  ✓ Current branch: main (will create new branch)
  ✓ Task has 3 subtasks ready to execute

Branch & Tag:
  → Will create branch: analytics/task-42-user-metrics
  → Will set active tag: analytics

Execution Plan (3 subtasks):

  1. Subtask 42.1: Add metrics schema
     RED: Generate tests → src/__tests__/schema.test.js
     GREEN: Implement code → src/schema.js
     COMMIT: "feat(metrics): add metrics schema (task 42.1)"

  2. Subtask 42.2: Add collection endpoint [depends on 42.1]
     RED: Generate tests → src/api/__tests__/metrics.test.js
     GREEN: Implement code → src/api/metrics.js
     COMMIT: "feat(metrics): add collection endpoint (task 42.2)"

  3. Subtask 42.3: Add dashboard widget [depends on 42.2]
     RED: Generate tests → src/components/__tests__/MetricsWidget.test.jsx
     GREEN: Implement code → src/components/MetricsWidget.jsx
     COMMIT: "feat(metrics): add dashboard widget (task 42.3)"

Finalization:
  → Run full test suite with coverage (threshold: 80%)
  → Push branch to origin (will confirm)
  → Create PR targeting main

Estimated commits: 3
Estimated duration: ~20-30 minutes (depends on implementation complexity)

Run without --dry-run to execute.
```

## Success Criteria
- Dry-run output is clear and matches expected workflow
- Preflight detection works correctly on the project repo
- Task loading integrates with existing TaskMaster state
- No actual git operations or file modifications occur in dry-run mode

## Out of Scope
- Actual test generation
- Actual code implementation
- Git operations (branch creation, commits, push)
- PR creation
- Test execution

## Implementation Notes
- Reuse existing `TaskService` from `packages/tm-core`
- Use existing git utilities from `scripts/modules/utils/git-utils.js`
- Load task/subtask data from `.taskmaster/tasks/tasks.json`
- Detect test command via package.json → scripts.test field

## Dependencies
- Existing TaskMaster CLI structure
- Existing task storage format
- Git utilities

## Estimated Effort
2-3 days

## Validation
Test dry-run mode with:
- Task with 1 subtask
- Task with multiple subtasks
- Task with dependencies between subtasks
- Task without subtasks (should show warning)
- Dirty git working tree (should warn)
- Missing tools (should error with helpful message)

File diff suppressed because it is too large

@@ -1,369 +0,0 @@

# Phase 1: Core Rails - State Machine & Orchestration

## Objective
Build the WorkflowOrchestrator as a state machine that guides AI sessions through TDD workflow, rather than directly executing code.

## Architecture Overview

### Execution Model
The orchestrator acts as a **state manager and guide**, not a code executor:

```
┌──────────────────────────────────────────────────────┐
│ Claude Code (MCP Client)                             │
│ - Queries "what to do next"                          │
│ - Executes work (writes tests, code, runs commands)  │
│ - Reports completion                                 │
└────────────────┬─────────────────────────────────────┘
                 │ MCP Protocol
                 ▼
┌──────────────────────────────────────────────────────┐
│ WorkflowOrchestrator (tm-core)                       │
│ - Maintains state machine (RED → GREEN → COMMIT)     │
│ - Returns work units with context                    │
│ - Validates preconditions                            │
│ - Records progress                                   │
│ - Persists state for resumability                    │
└──────────────────────────────────────────────────────┘
```

### Why This Approach?
1. **Separation of Concerns**: State management separate from code execution
2. **Leverage Existing Tools**: Uses Claude Code's capabilities instead of reimplementing
3. **Human-in-the-Loop**: Easy to inspect state and intervene at any phase
4. **Simpler Implementation**: Orchestrator is pure logic, no AI model integration needed
5. **Flexible Executors**: Any tool (Claude Code, human, other AI) can execute work units

## Core Components

### 1. WorkflowOrchestrator Service
**Location**: `packages/tm-core/src/services/workflow-orchestrator.service.ts`

**Responsibilities**:
- Track current phase (RED/GREEN/COMMIT) per subtask
- Generate work units with context for each phase
- Validate phase completion criteria
- Advance state machine on successful completion
- Handle errors and retry logic
- Persist run state for resumability

**API**:
```typescript
interface WorkflowOrchestrator {
  // Start a new autopilot run
  startRun(taskId: string, options?: RunOptions): Promise<RunContext>;

  // Get next work unit to execute
  getNextWorkUnit(runId: string): Promise<WorkUnit | null>;

  // Report work unit completion
  completeWorkUnit(
    runId: string,
    workUnitId: string,
    result: WorkUnitResult
  ): Promise<void>;

  // Get current run state
  getRunState(runId: string): Promise<RunState>;

  // Pause/resume
  pauseRun(runId: string): Promise<void>;
  resumeRun(runId: string): Promise<void>;
}

interface WorkUnit {
  id: string; // Unique work unit ID
  phase: 'RED' | 'GREEN' | 'COMMIT';
  subtaskId: string; // e.g., "42.1"
  action: string; // Human-readable description
  context: WorkUnitContext; // All info needed to execute
  preconditions: Precondition[]; // Checks before execution
}

interface WorkUnitContext {
  taskId: string;
  taskTitle: string;
  subtaskTitle: string;
  subtaskDescription: string;
  dependencies: string[]; // Completed subtask IDs
  testCommand: string; // e.g., "npm test"

  // Phase-specific context
  redPhase?: {
    testFile: string; // Where to create test
    testFramework: string; // e.g., "vitest"
    acceptanceCriteria: string[];
  };

  greenPhase?: {
    testFile: string; // Test to make pass
    implementationHints: string[];
    expectedFiles: string[]; // Files likely to modify
  };

  commitPhase?: {
    commitMessage: string; // Pre-generated message
    filesToCommit: string[]; // Files modified in RED+GREEN
  };
}

interface WorkUnitResult {
  success: boolean;
  phase: 'RED' | 'GREEN' | 'COMMIT';

  // RED phase results
  testsCreated?: string[];
  testsFailed?: number;

  // GREEN phase results
  testsPassed?: number;
  filesModified?: string[];
  attempts?: number;

  // COMMIT phase results
  commitSha?: string;

  // Common
  error?: string;
  logs?: string;
}

interface RunState {
  runId: string;
  taskId: string;
  status: 'running' | 'paused' | 'completed' | 'failed';
  currentPhase: 'RED' | 'GREEN' | 'COMMIT';
  currentSubtask: string;
  completedSubtasks: string[];
  failedSubtasks: string[];
  startTime: Date;
  lastUpdateTime: Date;

  // Resumability
  checkpoint: {
    subtaskId: string;
    phase: 'RED' | 'GREEN' | 'COMMIT';
    attemptNumber: number;
  };
}
```

### 2. State Machine Logic

**Phase Transitions**:
```
START → RED(subtask 1) → GREEN(subtask 1) → COMMIT(subtask 1)
                                                    ↓
                        RED(subtask 2) ← ─ ─ ─ ─ ─ ─┘
                               ↓
                        GREEN(subtask 2)
                               ↓
                        COMMIT(subtask 2)
                               ↓
                  (repeat for remaining subtasks)
                               ↓
                        FINALIZE → END
```

**Phase Rules**:
- **RED**: Can only transition to GREEN if tests created and failing
- **GREEN**: Can only transition to COMMIT if tests passing (attempt < maxAttempts)
- **COMMIT**: Can only transition to next RED if commit successful
- **FINALIZE**: Can only start if all subtasks completed

**Preconditions**:
- RED: No uncommitted changes (or staged from previous GREEN that failed)
- GREEN: RED phase complete, tests exist and are failing
- COMMIT: GREEN phase complete, all tests passing, coverage meets threshold

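A minimal sketch of how these rules might be encoded as a transition guard inside the orchestrator, reusing the `WorkUnitResult` interface from the API above; the function name, the return shape, and the default `maxAttempts` are assumptions:

```typescript
type Phase = 'RED' | 'GREEN' | 'COMMIT';

interface TransitionCheck {
  allowed: boolean;
  reason?: string;
}

// Guard the RED → GREEN → COMMIT progression using the executor-reported results.
function canAdvance(phase: Phase, result: WorkUnitResult, maxAttempts = 3): TransitionCheck {
  switch (phase) {
    case 'RED':
      return (result.testsCreated?.length ?? 0) > 0 && (result.testsFailed ?? 0) > 0
        ? { allowed: true }
        : { allowed: false, reason: 'RED requires newly created tests that are currently failing' };
    case 'GREEN':
      if ((result.attempts ?? 1) > maxAttempts) {
        return { allowed: false, reason: `exceeded ${maxAttempts} implementation attempts` };
      }
      return (result.testsPassed ?? 0) > 0
        ? { allowed: true }
        : { allowed: false, reason: 'GREEN requires the failing tests to pass' };
    case 'COMMIT':
      return result.commitSha
        ? { allowed: true }
        : { allowed: false, reason: 'COMMIT requires a successful commit SHA' };
  }
}
```
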
### 3. MCP Integration

**New MCP Tools** (expose WorkflowOrchestrator via MCP):
```typescript
// Start an autopilot run
mcp__task_master_ai__autopilot_start(taskId: string, dryRun?: boolean)

// Get next work unit
mcp__task_master_ai__autopilot_next_work_unit(runId: string)

// Complete current work unit
mcp__task_master_ai__autopilot_complete_work_unit(
  runId: string,
  workUnitId: string,
  result: WorkUnitResult
)

// Get run state
mcp__task_master_ai__autopilot_get_state(runId: string)

// Pause/resume
mcp__task_master_ai__autopilot_pause(runId: string)
mcp__task_master_ai__autopilot_resume(runId: string)
```

### 4. Git/Test Adapters

**GitAdapter** (`packages/tm-core/src/services/git-adapter.service.ts`):
- Check working tree status
- Validate branch state
- Read git config (user, remote, default branch)
- **Does NOT execute** git commands (that's executor's job)

**TestAdapter** (`packages/tm-core/src/services/test-adapter.service.ts`):
- Detect test framework from package.json
- Parse test output (failures, passes, coverage)
- Validate coverage thresholds
- **Does NOT run** tests (that's executor's job)

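To illustrate the parsing-only role of the TestAdapter, a small sketch that extracts pass/fail counts from a captured runner summary; the regex assumes Jest/Vitest-style "Tests: X failed, Y passed, Z total" output and would need adjusting per framework:

```typescript
interface TestSummary {
  passed: number;
  failed: number;
  total: number;
}

// Parse captured test-run output; nothing is executed here.
function parseTestOutput(output: string): TestSummary | null {
  const match = output.match(/Tests:\s+(?:(\d+)\s+failed,\s+)?(\d+)\s+passed,\s+(\d+)\s+total/);
  if (!match) return null;
  return {
    failed: Number(match[1] ?? 0),
    passed: Number(match[2]),
    total: Number(match[3])
  };
}
```
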
### 5. Run State Persistence

**Storage Location**: `.taskmaster/reports/runs/<runId>/`

**Files**:
- `state.json` - Current run state (for resumability)
- `log.jsonl` - Event stream (timestamped work unit completions)
- `manifest.json` - Run metadata
- `work-units.json` - All work units generated for this run

**Example `state.json`**:
```json
{
  "runId": "2025-01-15-142033",
  "taskId": "42",
  "status": "paused",
  "currentPhase": "GREEN",
  "currentSubtask": "42.2",
  "completedSubtasks": ["42.1"],
  "failedSubtasks": [],
  "checkpoint": {
    "subtaskId": "42.2",
    "phase": "GREEN",
    "attemptNumber": 2
  },
  "startTime": "2025-01-15T14:20:33Z",
  "lastUpdateTime": "2025-01-15T14:35:12Z"
}
```

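A small sketch of how the orchestrator might write these files after each work unit, assuming the directory layout above; the event shape appended to `log.jsonl` is illustrative:

```typescript
import fs from 'node:fs';
import path from 'node:path';

function runDir(projectRoot: string, runId: string): string {
  return path.join(projectRoot, '.taskmaster', 'reports', 'runs', runId);
}

// Overwrite state.json with the latest checkpoint after every work unit.
function saveRunState(projectRoot: string, runId: string, state: object): void {
  const dir = runDir(projectRoot, runId);
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(path.join(dir, 'state.json'), JSON.stringify(state, null, 2));
}

// Append one timestamped event per line to log.jsonl.
function appendRunEvent(projectRoot: string, runId: string, event: object): void {
  const line = JSON.stringify({ timestamp: new Date().toISOString(), ...event });
  fs.appendFileSync(path.join(runDir(projectRoot, runId), 'log.jsonl'), line + '\n');
}
```
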
## Implementation Plan

### Step 1: WorkflowOrchestrator Skeleton
- [ ] Create `workflow-orchestrator.service.ts` with interfaces
- [ ] Implement state machine logic (phase transitions)
- [ ] Add run state persistence (state.json, log.jsonl)
- [ ] Write unit tests for state machine

### Step 2: Work Unit Generation
- [ ] Implement `getNextWorkUnit()` with context assembly
- [ ] Generate RED phase work units (test file paths, criteria)
- [ ] Generate GREEN phase work units (implementation hints)
- [ ] Generate COMMIT phase work units (commit messages)

### Step 3: Git/Test Adapters
- [ ] Create GitAdapter for status checks only
- [ ] Create TestAdapter for output parsing only
- [ ] Add precondition validation using adapters
- [ ] Write adapter unit tests

### Step 4: MCP Integration
- [ ] Add MCP tool definitions in `packages/mcp-server/src/tools/`
- [ ] Wire up WorkflowOrchestrator to MCP tools
- [ ] Test MCP tools via Claude Code
- [ ] Document MCP workflow in CLAUDE.md

### Step 5: CLI Integration
- [ ] Update `autopilot.command.ts` to call WorkflowOrchestrator
- [ ] Add `--interactive` mode that shows work units and waits for completion
- [ ] Add `--resume` flag to continue paused runs
- [ ] Test end-to-end flow

### Step 6: Integration Testing
- [ ] Create test task with 2-3 subtasks
- [ ] Run autopilot start → get work unit → complete → repeat
- [ ] Verify state persistence and resumability
- [ ] Test failure scenarios (test failures, git issues)

## Success Criteria
- [ ] WorkflowOrchestrator can generate work units for all phases
- [ ] MCP tools allow Claude Code to query and complete work units
- [ ] State persists correctly between work unit completions
- [ ] Run can be paused and resumed from checkpoint
- [ ] Adapters validate preconditions without executing commands
- [ ] End-to-end: Claude Code can complete a simple task via work units

## Out of Scope (Phase 1)
- Actual git operations (branch creation, commits) - executor handles this
- Actual test execution - executor handles this
- PR creation - deferred to Phase 2
- TUI interface - deferred to Phase 3
- Coverage enforcement - deferred to Phase 2

## Example Usage Flow

```bash
# Terminal 1: Claude Code session
$ claude

# In Claude Code (via MCP):
> Start autopilot for task 42
[Calls mcp__task_master_ai__autopilot_start(42)]
→ Run started: run-2025-01-15-142033

> Get next work unit
[Calls mcp__task_master_ai__autopilot_next_work_unit(run-2025-01-15-142033)]
→ Work unit: RED phase for subtask 42.1
→ Action: Generate failing tests for metrics schema
→ Test file: src/__tests__/schema.test.js
→ Framework: vitest

> [Claude Code creates test file, runs tests]

> Complete work unit
[Calls mcp__task_master_ai__autopilot_complete_work_unit(
  run-2025-01-15-142033,
  workUnit-42.1-RED,
  { success: true, testsCreated: ['src/__tests__/schema.test.js'], testsFailed: 3 }
)]
→ Work unit completed. State saved.

> Get next work unit
[Calls mcp__task_master_ai__autopilot_next_work_unit(run-2025-01-15-142033)]
→ Work unit: GREEN phase for subtask 42.1
→ Action: Implement code to pass failing tests
→ Test file: src/__tests__/schema.test.js
→ Expected implementation: src/schema.js

> [Claude Code implements schema.js, runs tests, confirms all pass]

> Complete work unit
[...]
→ Work unit completed. Ready for COMMIT.

> Get next work unit
[...]
→ Work unit: COMMIT phase for subtask 42.1
→ Commit message: "feat(metrics): add metrics schema (task 42.1)"
→ Files to commit: src/__tests__/schema.test.js, src/schema.js

> [Claude Code stages files and commits]

> Complete work unit
[...]
→ Subtask 42.1 complete! Moving to 42.2...
```

## Dependencies
- Existing TaskService (task loading, status updates)
- Existing PreflightChecker (environment validation)
- Existing TaskLoaderService (dependency ordering)
- MCP server infrastructure

## Estimated Effort
7-10 days

## Next Phase
Phase 2 will add:
- PR creation via gh CLI
- Coverage enforcement
- Enhanced error recovery
- Full resumability testing

Some files were not shown because too many files have changed in this diff