Compare commits
2 Commits
task-maste... chore/fix.

| Author | SHA1 | Date |
|---|---|---|
| | 565090257c | |
| | 2c27aeda20 | |
@@ -2,16 +2,13 @@
   "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
   "changelog": [
     "@changesets/changelog-github",
-    {
-      "repo": "eyaltoledano/claude-task-master"
-    }
+    { "repo": "eyaltoledano/claude-task-master" }
   ],
   "commit": false,
   "fixed": [],
   "linked": [],
   "access": "public",
   "baseBranch": "main",
-  "ignore": [
-    "docs",
-    "@tm/claude-code-plugin"
-  ]
+  "updateInternalDependencies": "patch",
+  "ignore": []
 }
@@ -1,5 +0,0 @@
----
-"task-master-ai": patch
----
-
-Improve auth token refresh flow
@@ -1,7 +0,0 @@
----
-"task-master-ai": patch
----
-
-Enable Task Master commands to traverse parent directories to find project root from nested paths
-
-Fixes #1301
@@ -1,5 +0,0 @@
----
-"@tm/cli": patch
----
-
-Fix warning message box width to match dashboard box width for consistent UI alignment
@@ -1,21 +0,0 @@
----
-"task-master-ai": patch
----
-
-Fix MCP server compatibility with Draft-07 clients (Augment IDE, gemini-cli, gemini code assist)
-
-- Resolves #1284
-
-**Problem:**
-
-- MCP tools were using Zod v4, which outputs JSON Schema Draft 2020-12
-- MCP clients only support Draft-07
-- Tools were not discoverable in gemini-cli and other clients
-
-**Solution:**
-
-- Updated all MCP tools to import from `zod/v3` instead of `zod`
-- Zod v3 schemas convert to Draft-07 via FastMCP's zod-to-json-schema
-- Fixed logger to use stderr instead of stdout (MCP protocol requirement)
-
-This is a temporary workaround until FastMCP adds JSON Schema version configuration.
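The stderr requirement in the fix above can be illustrated with a minimal, hypothetical sketch (not the project's actual logger): MCP clients read protocol frames from the server's stdout, so any human-readable diagnostics must go to stderr or they corrupt the stream.

```javascript
// Minimal illustration: keep stdout clean for MCP protocol frames and
// send diagnostics to stderr instead of console.log (which writes to stdout).
function log(level, message) {
  process.stderr.write(`[${level}] ${message}\n`);
}

log('info', 'MCP server started'); // appears on stderr, not stdout
```

Using `process.stderr.write` directly (rather than `console.log`) is the key design choice here; `console.error` would work equally well since it also targets stderr.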
@@ -1,35 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add configurable MCP tool loading to optimize LLM context usage
-
-You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.
-
-**Configuration Options:**
-
-- `all` (default): Load all 36 tools
-- `core` or `lean`: Load only 7 essential tools for daily development
-  - Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
-- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
-  - Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
-- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)
-
-**Example .mcp.json configuration:**
-
-```json
-{
-  "mcpServers": {
-    "task-master-ai": {
-      "command": "npx",
-      "args": ["-y", "task-master-ai"],
-      "env": {
-        "TASK_MASTER_TOOLS": "standard",
-        "ANTHROPIC_API_KEY": "your_key_here"
-      }
-    }
-  }
-}
-```
-
-For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
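The selection rules in that changeset can be sketched roughly as follows. This is an illustrative TypeScript sketch of the described behavior, not the package's actual implementation; the function and variable names are hypothetical.

```typescript
// Hypothetical sketch of tool selection driven by TASK_MASTER_TOOLS.
const CORE_TOOLS = [
  'get_tasks', 'next_task', 'get_task', 'set_task_status',
  'update_subtask', 'parse_prd', 'expand_task'
];
const STANDARD_EXTRAS = [
  'initialize_project', 'analyze_project_complexity', 'expand_all',
  'add_subtask', 'remove_task', 'generate', 'add_task', 'complexity_report'
];

function resolveTools(envValue: string | undefined, allTools: string[]): string[] {
  const value = (envValue ?? 'all').trim().toLowerCase();
  if (value === 'all') return allTools;                               // default: every tool
  if (value === 'core' || value === 'lean') return CORE_TOOLS;        // 7 essential tools
  if (value === 'standard') return [...CORE_TOOLS, ...STANDARD_EXTRAS]; // 15 tools
  // Otherwise treat the value as a custom comma-separated tool list.
  return value.split(',').map((name) => name.trim()).filter(Boolean);
}
```

For example, `resolveTools('get_tasks, next_task', allTools)` would yield just those two tools, matching the custom-list option described above.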
@@ -1,5 +0,0 @@
----
-"task-master-ai": minor
----
-
-Improve next command to work with remote
@@ -1,5 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add 4.5 haiku and sonnet to supported models for claude-code and anthropic ai providers
@@ -1,22 +0,0 @@
-{
-  "mode": "pre",
-  "tag": "rc",
-  "initialVersions": {
-    "task-master-ai": "0.29.0",
-    "@tm/cli": "",
-    "docs": "0.0.6",
-    "extension": "0.25.6",
-    "@tm/mcp": "0.28.0-rc.2",
-    "@tm/ai-sdk-provider-grok-cli": "",
-    "@tm/build-config": "",
-    "@tm/claude-code-plugin": "0.0.2",
-    "@tm/core": ""
-  },
-  "changesets": [
-    "dirty-hairs-know",
-    "fix-parent-directory-traversal",
-    "fix-warning-box-alignment",
-    "light-owls-stay",
-    "metal-rocks-help"
-  ]
-}
@@ -1,36 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add autonomous TDD workflow automation system with new `tm autopilot` commands and MCP tools for AI-driven test-driven development.
-
-**New CLI Commands:**
-
-- `tm autopilot start <taskId>` - Initialize TDD workflow
-- `tm autopilot next` - Get next action in workflow
-- `tm autopilot status` - Check workflow progress
-- `tm autopilot complete` - Advance phase with test results
-- `tm autopilot commit` - Save progress with metadata
-- `tm autopilot resume` - Continue from checkpoint
-- `tm autopilot abort` - Cancel workflow
-
-**New MCP Tools:**
-Seven new autopilot tools for programmatic control: `autopilot_start`, `autopilot_next`, `autopilot_status`, `autopilot_complete_phase`, `autopilot_commit`, `autopilot_resume`, `autopilot_abort`
-
-**Features:**
-
-- Complete RED → GREEN → COMMIT cycle enforcement
-- Intelligent commit message generation with metadata
-- Activity logging and state persistence
-- Configurable workflow settings via `.taskmaster/config.json`
-- Comprehensive AI agent integration documentation
-
-**Documentation:**
-
-- AI Agent Integration Guide (2,800+ lines)
-- TDD Quick Start Guide
-- Example prompts and integration patterns
-
-> **Learn more:** [TDD Workflow Quickstart Guide](https://dev.task-master.dev/tdd-workflow/quickstart)
-
-This release enables AI agents to autonomously execute test-driven development workflows with full state management and recovery capabilities.
@@ -1,32 +0,0 @@
-{
-  "name": "taskmaster",
-  "owner": {
-    "name": "Hamster",
-    "email": "ralph@tryhamster.com"
-  },
-  "metadata": {
-    "description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
-    "version": "1.0.0"
-  },
-  "plugins": [
-    {
-      "name": "taskmaster",
-      "source": "./packages/claude-code-plugin",
-      "description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
-      "author": {
-        "name": "Hamster"
-      },
-      "homepage": "https://github.com/eyaltoledano/claude-task-master",
-      "repository": "https://github.com/eyaltoledano/claude-task-master",
-      "keywords": [
-        "task-management",
-        "ai",
-        "workflow",
-        "orchestration",
-        "automation",
-        "mcp"
-      ],
-      "category": "productivity"
-    }
-  ]
-}
@@ -1,38 +0,0 @@
----
-allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
-description: Find duplicate GitHub issues
----
-
-Find up to 3 likely duplicate issues for a given GitHub issue.
-
-To do this, follow these steps precisely:
-
-1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
-2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
-3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #1
-4. Next, feed the results from #1 and #2 into another agent, so that it can filter out false positives, that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
-5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)
-
-Notes (be sure to tell this to your agents, too):
-
-- Use `gh` to interact with Github, rather than web fetch
-- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
-- Make a todo list first
-- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):
-
----
-
-Found 3 possible duplicate issues:
-
-1. <link to issue>
-2. <link to issue>
-3. <link to issue>
-
-This issue will be automatically closed as a duplicate in 3 days.
-
-- If your issue is a duplicate, please close it and 👍 the existing issue instead
-- To prevent auto-closure, add a comment or 👎 this comment
-
-🤖 Generated with \[Task Master Bot\]
-
----
@@ -48,7 +48,7 @@ After adding dependency:
 ## Example Flows
 
 ```
-/taskmaster:add-dependency 5 needs 3
+/project:tm/add-dependency 5 needs 3
 → Task #5 now depends on Task #3
 → Task #5 is now blocked until #3 completes
 → Suggested: Also consider if #5 needs #4
@@ -56,12 +56,12 @@ task-master add-subtask --parent=<id> --task-id=<existing-id>
 ## Example Flows
 
 ```
-/taskmaster:add-subtask to 5: implement user authentication
+/project:tm/add-subtask to 5: implement user authentication
 → Created subtask #5.1: "implement user authentication"
 → Parent task #5 now has 1 subtask
 → Suggested next subtasks: tests, documentation
 
-/taskmaster:add-subtask 5: setup, implement, test
+/project:tm/add-subtask 5: setup, implement, test
 → Created 3 subtasks:
   #5.1: setup
   #5.2: implement
@@ -53,7 +53,7 @@ task-master add-subtask --parent=<parent-id> --task-id=<task-to-convert>
 ## Example
 
 ```
-/taskmaster:add-subtask/from-task 5 8
+/project:tm/add-subtask/from-task 5 8
 → Converting: Task #8 becomes subtask #5.1
 → Updated: 3 dependency references
 → Parent task #5 now has 1 subtask
@@ -115,7 +115,7 @@ Results are:
 
 After analysis:
 ```
-/taskmaster:expand 5          # Expand specific task
-/taskmaster:expand-all        # Expand all recommended
-/taskmaster:complexity-report # View detailed report
+/project:tm/expand 5          # Expand specific task
+/project:tm/expand/all        # Expand all recommended
+/project:tm/complexity-report # View detailed report
 ```
@@ -77,7 +77,7 @@ Suggest alternatives:
 ## Example
 
 ```
-/taskmaster:clear-subtasks 5
+/project:tm/clear-subtasks 5
 → Found 4 subtasks to remove
 → Warning: Subtask #5.2 is in-progress
 → Cleared all subtasks from task #5
@@ -105,13 +105,13 @@ Use report for:
 ## Example Usage
 
 ```
-/taskmaster:complexity-report
+/project:tm/complexity-report
 → Opens latest analysis
 
-/taskmaster:complexity-report --file=archived/2024-01-01.md
+/project:tm/complexity-report --file=archived/2024-01-01.md
 → View historical analysis
 
 After viewing:
-/taskmaster:expand 5
+/project:tm/expand 5
 → Expand high-complexity task
 ```
@@ -70,7 +70,7 @@ Manual Review Needed:
 ⚠️ Task #45 has 8 dependencies
 Suggestion: Break into subtasks
 
-Run '/taskmaster:validate-dependencies' to verify fixes
+Run '/project:tm/validate-dependencies' to verify fixes
 ```
 
 ## Safety
81 .claude/commands/tm/help.md Normal file
@@ -0,0 +1,81 @@
+Show help for Task Master commands.
+
+Arguments: $ARGUMENTS
+
+Display help for Task Master commands. If arguments provided, show specific command help.
+
+## Task Master Command Help
+
+### Quick Navigation
+
+Type `/project:tm/` and use tab completion to explore all commands.
+
+### Command Categories
+
+#### 🚀 Setup & Installation
+- `/project:tm/setup/install` - Comprehensive installation guide
+- `/project:tm/setup/quick-install` - One-line global install
+
+#### 📋 Project Setup
+- `/project:tm/init` - Initialize new project
+- `/project:tm/init/quick` - Quick setup with auto-confirm
+- `/project:tm/models` - View AI configuration
+- `/project:tm/models/setup` - Configure AI providers
+
+#### 🎯 Task Generation
+- `/project:tm/parse-prd` - Generate tasks from PRD
+- `/project:tm/parse-prd/with-research` - Enhanced parsing
+- `/project:tm/generate` - Create task files
+
+#### 📝 Task Management
+- `/project:tm/list` - List tasks (natural language filters)
+- `/project:tm/show <id>` - Display task details
+- `/project:tm/add-task` - Create new task
+- `/project:tm/update` - Update tasks naturally
+- `/project:tm/next` - Get next task recommendation
+
+#### 🔄 Status Management
+- `/project:tm/set-status/to-pending <id>`
+- `/project:tm/set-status/to-in-progress <id>`
+- `/project:tm/set-status/to-done <id>`
+- `/project:tm/set-status/to-review <id>`
+- `/project:tm/set-status/to-deferred <id>`
+- `/project:tm/set-status/to-cancelled <id>`
+
+#### 🔍 Analysis & Breakdown
+- `/project:tm/analyze-complexity` - Analyze task complexity
+- `/project:tm/expand <id>` - Break down complex task
+- `/project:tm/expand/all` - Expand all eligible tasks
+
+#### 🔗 Dependencies
+- `/project:tm/add-dependency` - Add task dependency
+- `/project:tm/remove-dependency` - Remove dependency
+- `/project:tm/validate-dependencies` - Check for issues
+
+#### 🤖 Workflows
+- `/project:tm/workflows/smart-flow` - Intelligent workflows
+- `/project:tm/workflows/pipeline` - Command chaining
+- `/project:tm/workflows/auto-implement` - Auto-implementation
+
+#### 📊 Utilities
+- `/project:tm/utils/analyze` - Project analysis
+- `/project:tm/status` - Project dashboard
+- `/project:tm/learn` - Interactive learning
+
+### Natural Language Examples
+
+```
+/project:tm/list pending high priority
+/project:tm/update mark all API tasks as done
+/project:tm/add-task create login system with OAuth
+/project:tm/show current
+```
+
+### Getting Started
+
+1. Install: `/project:tm/setup/quick-install`
+2. Initialize: `/project:tm/init/quick`
+3. Learn: `/project:tm/learn start`
+4. Work: `/project:tm/workflows/smart-flow`
+
+For detailed command info: `/project:tm/help <command-name>`
@@ -30,17 +30,17 @@ task-master init -y
 After quick init:
 1. Configure AI models if needed:
 ```
-/taskmaster:models/setup
+/project:tm/models/setup
 ```
 
 2. Parse PRD if available:
 ```
-/taskmaster:parse-prd <file>
+/project:tm/parse-prd <file>
 ```
 
 3. Or create first task:
 ```
-/taskmaster:add-task create initial setup
+/project:tm/add-task create initial setup
 ```
 
 Perfect for rapid project setup!
@@ -45,6 +45,6 @@ After successful init:
 
 If PRD file provided:
 ```
-/taskmaster:init my-prd.md
+/project:tm/init my-prd.md
 → Automatically runs parse-prd after init
 ```
@@ -55,7 +55,7 @@ After removing:
 ## Example
 
 ```
-/taskmaster:remove-dependency 5 from 3
+/project:tm/remove-dependency 5 from 3
 → Removed: Task #5 no longer depends on #3
 → Task #5 is now UNBLOCKED and ready to start
 → Warning: Consider if #5 still needs #2 completed first
@@ -63,13 +63,13 @@ task-master remove-subtask --id=<parentId.subtaskId> --convert
 ## Example Flows
 
 ```
-/taskmaster:remove-subtask 5.1
+/project:tm/remove-subtask 5.1
 → Warning: Subtask #5.1 is in-progress
 → This will delete all subtask data
 → Parent task #5 will be updated
 Confirm deletion? (y/n)
 
-/taskmaster:remove-subtask 5.1 convert
+/project:tm/remove-subtask 5.1 convert
 → Converting subtask #5.1 to standalone task #89
 → Preserved: All task data and history
 → Updated: 2 dependency references
@@ -85,17 +85,17 @@ Suggest before deletion:
 ## Example Flows
 
 ```
-/taskmaster:remove-task 5
+/project:tm/remove-task 5
 → Task #5 is in-progress with 8 hours logged
 → 3 other tasks depend on this
 → Suggestion: Mark as cancelled instead?
 Remove anyway? (y/n)
 
-/taskmaster:remove-task 5 -y
+/project:tm/remove-task 5 -y
 → Removed: Task #5 and 4 subtasks
 → Updated: 3 task dependencies
 → Warning: Tasks #7, #8, #9 now have missing dependency
-→ Run /taskmaster:fix-dependencies to resolve
+→ Run /project:tm/fix-dependencies to resolve
 ```
 
 ## Safety Features
@@ -8,11 +8,11 @@ Commands are organized hierarchically to match Task Master's CLI structure while
 
 ## Project Setup & Configuration
 
-### `/taskmaster:init`
+### `/project:tm/init`
 - `init-project` - Initialize new project (handles PRD files intelligently)
 - `init-project-quick` - Quick setup with auto-confirmation (-y flag)
 
-### `/taskmaster:models`
+### `/project:tm/models`
 - `view-models` - View current AI model configuration
 - `setup-models` - Interactive model configuration
 - `set-main` - Set primary generation model
@@ -21,21 +21,21 @@ Commands are organized hierarchically to match Task Master's CLI structure while
 
 ## Task Generation
 
-### `/taskmaster:parse-prd`
+### `/project:tm/parse-prd`
 - `parse-prd` - Generate tasks from PRD document
 - `parse-prd-with-research` - Enhanced parsing with research mode
 
-### `/taskmaster:generate`
+### `/project:tm/generate`
 - `generate-tasks` - Create individual task files from tasks.json
 
 ## Task Management
 
-### `/taskmaster:list`
+### `/project:tm/list`
 - `list-tasks` - Smart listing with natural language filters
 - `list-tasks-with-subtasks` - Include subtasks in hierarchical view
 - `list-tasks-by-status` - Filter by specific status
 
-### `/taskmaster:set-status`
+### `/project:tm/set-status`
 - `to-pending` - Reset task to pending
 - `to-in-progress` - Start working on task
 - `to-done` - Mark task complete
|
|||||||
- `to-deferred` - Defer task
|
- `to-deferred` - Defer task
|
||||||
- `to-cancelled` - Cancel task
|
- `to-cancelled` - Cancel task
|
||||||
|
|
||||||
### `/taskmaster:sync-readme`
|
### `/project:tm/sync-readme`
|
||||||
- `sync-readme` - Export tasks to README.md with formatting
|
- `sync-readme` - Export tasks to README.md with formatting
|
||||||
|
|
||||||
### `/taskmaster:update`
|
### `/project:tm/update`
|
||||||
- `update-task` - Update tasks with natural language
|
- `update-task` - Update tasks with natural language
|
||||||
- `update-tasks-from-id` - Update multiple tasks from a starting point
|
- `update-tasks-from-id` - Update multiple tasks from a starting point
|
||||||
- `update-single-task` - Update specific task
|
- `update-single-task` - Update specific task
|
||||||
|
|
||||||
### `/taskmaster:add-task`
|
### `/project:tm/add-task`
|
||||||
- `add-task` - Add new task with AI assistance
|
- `add-task` - Add new task with AI assistance
|
||||||
|
|
||||||
### `/taskmaster:remove-task`
|
### `/project:tm/remove-task`
|
||||||
- `remove-task` - Remove task with confirmation
|
- `remove-task` - Remove task with confirmation
|
||||||
|
|
||||||
## Subtask Management
|
## Subtask Management
|
||||||
|
|
||||||
### `/taskmaster:add-subtask`
|
### `/project:tm/add-subtask`
|
||||||
- `add-subtask` - Add new subtask to parent
|
- `add-subtask` - Add new subtask to parent
|
||||||
- `convert-task-to-subtask` - Convert existing task to subtask
|
- `convert-task-to-subtask` - Convert existing task to subtask
|
||||||
|
|
||||||
### `/taskmaster:remove-subtask`
|
### `/project:tm/remove-subtask`
|
||||||
- `remove-subtask` - Remove subtask (with optional conversion)
|
- `remove-subtask` - Remove subtask (with optional conversion)
|
||||||
|
|
||||||
### `/taskmaster:clear-subtasks`
|
### `/project:tm/clear-subtasks`
|
||||||
- `clear-subtasks` - Clear subtasks from specific task
|
- `clear-subtasks` - Clear subtasks from specific task
|
||||||
- `clear-all-subtasks` - Clear all subtasks globally
|
- `clear-all-subtasks` - Clear all subtasks globally
|
||||||
|
|
||||||
## Task Analysis & Breakdown
|
## Task Analysis & Breakdown
|
||||||
|
|
||||||
### `/taskmaster:analyze-complexity`
|
### `/project:tm/analyze-complexity`
|
||||||
- `analyze-complexity` - Analyze and generate expansion recommendations
|
- `analyze-complexity` - Analyze and generate expansion recommendations
|
||||||
|
|
||||||
### `/taskmaster:complexity-report`
|
### `/project:tm/complexity-report`
|
||||||
- `complexity-report` - Display complexity analysis report
|
- `complexity-report` - Display complexity analysis report
|
||||||
|
|
||||||
### `/taskmaster:expand`
|
### `/project:tm/expand`
|
||||||
- `expand-task` - Break down specific task
|
- `expand-task` - Break down specific task
|
||||||
- `expand-all-tasks` - Expand all eligible tasks
|
- `expand-all-tasks` - Expand all eligible tasks
|
||||||
- `with-research` - Enhanced expansion
|
- `with-research` - Enhanced expansion
|
||||||
|
|
||||||
## Task Navigation
|
## Task Navigation
|
||||||
|
|
||||||
### `/taskmaster:next`
|
### `/project:tm/next`
|
||||||
- `next-task` - Intelligent next task recommendation
|
- `next-task` - Intelligent next task recommendation
|
||||||
|
|
||||||
### `/taskmaster:show`
|
### `/project:tm/show`
|
||||||
- `show-task` - Display detailed task information
|
- `show-task` - Display detailed task information
|
||||||
|
|
||||||
### `/taskmaster:status`
|
### `/project:tm/status`
|
||||||
- `project-status` - Comprehensive project dashboard
|
- `project-status` - Comprehensive project dashboard
|
||||||
|
|
||||||
## Dependency Management
|
## Dependency Management
|
||||||
|
|
||||||
### `/taskmaster:add-dependency`
|
### `/project:tm/add-dependency`
|
||||||
- `add-dependency` - Add task dependency
|
- `add-dependency` - Add task dependency
|
||||||
|
|
||||||
### `/taskmaster:remove-dependency`
|
### `/project:tm/remove-dependency`
|
||||||
- `remove-dependency` - Remove task dependency
|
- `remove-dependency` - Remove task dependency
|
||||||
|
|
||||||
### `/taskmaster:validate-dependencies`
|
### `/project:tm/validate-dependencies`
|
||||||
- `validate-dependencies` - Check for dependency issues
|
- `validate-dependencies` - Check for dependency issues
|
||||||
|
|
||||||
### `/taskmaster:fix-dependencies`
|
### `/project:tm/fix-dependencies`
|
||||||
- `fix-dependencies` - Automatically fix dependency problems
|
- `fix-dependencies` - Automatically fix dependency problems
|
||||||
|
|
||||||
## Workflows & Automation
|
## Workflows & Automation
|
||||||
|
|
||||||
### `/taskmaster:workflows`
|
### `/project:tm/workflows`
|
||||||
- `smart-workflow` - Context-aware intelligent workflow execution
|
- `smart-workflow` - Context-aware intelligent workflow execution
|
||||||
- `command-pipeline` - Chain multiple commands together
|
- `command-pipeline` - Chain multiple commands together
|
||||||
- `auto-implement-tasks` - Advanced auto-implementation with code generation
|
- `auto-implement-tasks` - Advanced auto-implementation with code generation
|
||||||
|
|
||||||
## Utilities
|
## Utilities
|
||||||
|
|
||||||
### `/taskmaster:utils`
|
### `/project:tm/utils`
|
||||||
- `analyze-project` - Deep project analysis and insights
|
- `analyze-project` - Deep project analysis and insights
|
||||||
|
|
||||||
### `/taskmaster:setup`
|
### `/project:tm/setup`
|
||||||
- `install-taskmaster` - Comprehensive installation guide
|
- `install-taskmaster` - Comprehensive installation guide
|
||||||
- `quick-install-taskmaster` - One-line global installation
|
- `quick-install-taskmaster` - One-line global installation
|
||||||
|
|
||||||
@@ -129,17 +129,17 @@ Commands are organized hierarchically to match Task Master's CLI structure while
 ### Natural Language
 Most commands accept natural language arguments:
 ```
-/taskmaster:add-task create user authentication system
-/taskmaster:update mark all API tasks as high priority
-/taskmaster:list show blocked tasks
+/project:tm/add-task create user authentication system
+/project:tm/update mark all API tasks as high priority
+/project:tm/list show blocked tasks
 ```
 
 ### ID-Based Commands
 Commands requiring IDs intelligently parse from $ARGUMENTS:
 ```
-/taskmaster:show 45
-/taskmaster:expand 23
-/taskmaster:set-status/to-done 67
+/project:tm/show 45
+/project:tm/expand 23
+/project:tm/set-status/to-done 67
 ```
 
 ### Smart Defaults
@@ -66,7 +66,7 @@ The AI:
 ## Example Updates
 
 ```
-/taskmaster:update/single 5: add rate limiting
+/project:tm/update/single 5: add rate limiting
 → Updating Task #5: "Implement API endpoints"
 
 Current: Basic CRUD endpoints
@@ -77,7 +77,7 @@ AI analyzes the update context and:
 ## Example Updates
 
 ```
-/taskmaster:update/from-id 5: change database to PostgreSQL
+/project:tm/update/from-id 5: change database to PostgreSQL
 → Analyzing impact starting from task #5
 → Found 6 related tasks to update
 → Updates will maintain consistency
@@ -66,6 +66,6 @@ For each issue found:
 ## Next Steps
 
 After validation:
-- Run `/taskmaster:fix-dependencies` to auto-fix
+- Run `/project:tm/fix-dependencies` to auto-fix
 - Manually adjust problematic dependencies
 - Rerun to verify fixes
@@ -1,7 +1,10 @@
 reviews:
-  profile: chill
+  profile: assertive
   poem: false
   auto_review:
-    enabled: true
     base_branches:
-      - ".*"
+      - rc
+      - beta
+      - alpha
+      - production
+      - next
@@ -2,7 +2,7 @@
 	"mcpServers": {
 		"task-master-ai": {
 			"command": "node",
-			"args": ["./dist/mcp-server.js"],
+			"args": ["./mcp-server/server.js"],
 			"env": {
 				"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
 				"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
.github/scripts/auto-close-duplicates.mjs (vendored, 259 lines)
@@ -1,259 +0,0 @@

```javascript
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

async function closeIssueAsDuplicate(
	owner,
	repo,
	issueNumber,
	duplicateOfNumber,
	token
) {
	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}`,
		token,
		'PATCH',
		{
			state: 'closed',
			state_reason: 'not_planned',
			labels: ['duplicate']
		}
	);

	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
		token,
		'POST',
		{
			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
		}
	);
}

async function autoCloseDuplicates() {
	console.log('[DEBUG] Starting auto-close duplicates script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error('GITHUB_TOKEN environment variable is required');
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	console.log(`[DEBUG] Repository: ${owner}/${repo}`);

	const threeDaysAgo = new Date();
	threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
	console.log(
		`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
	);

	console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	const MAX_PAGES = 50; // Increase limit for larger repos
	let foundRecentIssue = false;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
			token
		);

		if (pageIssues.length === 0) break;

		// Filter for issues created more than 3 days ago
		const oldEnoughIssues = pageIssues.filter(
			(issue) => new Date(issue.created_at) <= threeDaysAgo
		);

		allIssues.push(...oldEnoughIssues);

		// If all issues on this page are newer than 3 days, we can stop
		if (oldEnoughIssues.length === 0 && page === 1) {
			foundRecentIssue = true;
			break;
		}

		// If we found some old issues but not all, continue to next page
		// as there might be more old issues
		page++;

		// Safety limit to avoid infinite loops
		if (page > MAX_PAGES) {
			console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
			break;
		}
	}

	const issues = allIssues;
	console.log(`[DEBUG] Found ${issues.length} open issues`);

	let processedCount = 0;
	let candidateCount = 0;

	for (const issue of issues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		const dupeComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
		);

		if (dupeComments.length === 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
			);
			continue;
		}

		const lastDupeComment = dupeComments[dupeComments.length - 1];
		const dupeCommentDate = new Date(lastDupeComment.created_at);
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
		);

		if (dupeCommentDate > threeDaysAgo) {
			console.log(
				`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
			);
			continue;
		}
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - duplicate comment is old enough (${Math.floor(
				(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
			)} days)`
		);

		const commentsAfterDupe = comments.filter(
			(comment) => new Date(comment.created_at) > dupeCommentDate
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
		);

		if (commentsAfterDupe.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
			);
			continue;
		}

		console.log(
			`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
		);
		const reactions = await githubRequest(
			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
		);

		const authorThumbsDown = reactions.some(
			(reaction) =>
				reaction.user.id === issue.user.id && reaction.content === '-1'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
		);

		if (authorThumbsDown) {
			console.log(
				`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
			);
			continue;
		}

		const duplicateIssueNumber = extractDuplicateIssueNumber(
			lastDupeComment.body
		);
		if (!duplicateIssueNumber) {
			console.log(
				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
			);
			await closeIssueAsDuplicate(
				owner,
				repo,
				issue.number,
				duplicateIssueNumber,
				token
			);
			console.log(
				`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
			);
		} catch (error) {
			console.error(
				`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
			);
		}
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
	);
}

autoCloseDuplicates().catch(console.error);
```
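The issue-number parsing in the deleted script above is small enough to exercise standalone. A minimal sketch: the function body is copied verbatim from the script, while the sample comment strings are invented for illustration. It captures the first `#NNN` reference in a comment body and returns `null` when none is present.

```javascript
// Copied from the deleted auto-close-duplicates.mjs: extract the first
// "#NNN" issue reference from a comment body, or null if there is none.
function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

// Sample inputs (invented) showing both branches:
console.log(extractDuplicateIssueNumber('Found 1 possible duplicate: #1234')); // → 1234
console.log(extractDuplicateIssueNumber('no issue reference here')); // → null
```

Note the regex is greedy about position, not content: a comment that mentions an unrelated `#NNN` before the duplicate reference would return the wrong number, which is why the script only runs it on bot-authored duplicate-detection comments.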
.github/scripts/backfill-duplicate-comments.mjs (vendored, 178 lines)
@@ -1,178 +0,0 @@

```javascript
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'backfill-duplicate-comments-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

async function triggerDedupeWorkflow(
	owner,
	repo,
	issueNumber,
	token,
	dryRun = true
) {
	if (dryRun) {
		console.log(
			`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
		);
		return;
	}

	await githubRequest(
		`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
		token,
		'POST',
		{
			ref: 'main',
			inputs: {
				issue_number: issueNumber.toString()
			}
		}
	);
}

async function backfillDuplicateComments() {
	console.log('[DEBUG] Starting backfill duplicate comments script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
DAYS_BACK - How many days back to look for old issues (default: 90)`);
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	const dryRun = process.env.DRY_RUN !== 'false';
	const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);

	console.log(`[DEBUG] Repository: ${owner}/${repo}`);
	console.log(`[DEBUG] Dry run mode: ${dryRun}`);
	console.log(`[DEBUG] Looking back ${daysBack} days`);

	const cutoffDate = new Date();
	cutoffDate.setDate(cutoffDate.getDate() - daysBack);

	console.log(
		`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
	);
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
			token
		);

		if (pageIssues.length === 0) break;

		allIssues.push(...pageIssues);
		page++;

		// Safety limit to avoid infinite loops
		if (page > 100) {
			console.log('[DEBUG] Reached page limit, stopping pagination');
			break;
		}
	}

	console.log(
		`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
	);

	let processedCount = 0;
	let candidateCount = 0;
	let triggeredCount = 0;

	for (const issue of allIssues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		// Look for existing duplicate detection comments (from the dedupe bot)
		const dupeDetectionComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);

		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
		);

		// Skip if there's already a duplicate detection comment
		if (dupeDetectionComments.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
			);
			await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);

			if (!dryRun) {
				console.log(
					`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
				);
			}
			triggeredCount++;
		} catch (error) {
			console.error(
				`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
			);
		}

		// Add a delay between workflow triggers to avoid overwhelming the system
		await new Promise((resolve) => setTimeout(resolve, 1000));
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
	);
}

backfillDuplicateComments().catch(console.error);
```
.github/scripts/check-pre-release-mode.mjs (vendored, 102 lines)
@@ -1,102 +0,0 @@

```javascript
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Get context from command line argument or environment
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';

function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		if (existsSync(join(currentDir, 'package.json'))) {
			try {
				const pkg = JSON.parse(
					readFileSync(join(currentDir, 'package.json'), 'utf8')
				);
				if (pkg.name === 'task-master-ai' || pkg.repository) {
					return currentDir;
				}
			} catch {}
		}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

function checkPreReleaseMode() {
	console.log('🔍 Checking if branch is in pre-release mode...');

	const rootDir = findRootDir(__dirname);
	const preJsonPath = join(rootDir, '.changeset', 'pre.json');

	// Check if pre.json exists
	if (!existsSync(preJsonPath)) {
		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	}

	try {
		// Read and parse pre.json
		const preJsonContent = readFileSync(preJsonPath, 'utf8');
		const preJson = JSON.parse(preJsonContent);

		// Check if we're in active pre-release mode
		if (preJson.mode === 'pre') {
			console.error('❌ ERROR: This branch is in active pre-release mode!');
			console.error('');

			// Provide context-specific error messages
			if (context === 'Release Check' || context === 'pull_request') {
				console.error(
					'Pre-release mode must be exited before merging to main.'
				);
				console.error('');
				console.error(
					'To fix this, run the following commands in your branch:'
				);
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push');
				console.error('');
				console.error('Then update this pull request.');
			} else if (context === 'Release' || context === 'main') {
				console.error(
					'Pre-release mode should only be used on feature branches, not main.'
				);
				console.error('');
				console.error('To fix this, run the following commands locally:');
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push origin main');
				console.error('');
				console.error('Then re-run this workflow.');
			} else {
				console.error('Pre-release mode must be exited before proceeding.');
				console.error('');
				console.error('To fix this, run the following commands:');
				console.error('  npx changeset pre exit');
				console.error('  git add -u');
				console.error('  git commit -m "chore: exit pre-release mode"');
				console.error('  git push');
			}

			process.exit(1);
		}

		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	} catch (error) {
		console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
		console.error(`Error details: ${error.message}`);
		process.exit(1);
	}
}

// Run the check
checkPreReleaseMode();
```
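For reference, the `.changeset/pre.json` file this script inspects follows the Changesets pre-release format. A sketch of its shape (package names and version values here are illustrative, not taken from the repository):

```json
{
	"mode": "pre",
	"tag": "rc",
	"initialVersions": {
		"task-master-ai": "0.20.0"
	},
	"changesets": []
}
```

Running `npx changeset pre exit` flips `mode` to `"exit"`, which the check above treats as safe, since it only fails when `mode === 'pre'`.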
.github/scripts/parse-metrics.mjs (vendored, 157 lines)
@@ -1,157 +0,0 @@

```javascript
#!/usr/bin/env node

import { readFileSync, existsSync, writeFileSync } from 'fs';

function parseMetricsTable(content, metricName) {
	const lines = content.split('\n');

	for (let i = 0; i < lines.length; i++) {
		const line = lines[i].trim();
		// Match a markdown table row like: | Metric Name | value | ...
		const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
		const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
		const match = line.match(re);
		if (match) {
			return match[1].trim() || 'N/A';
		}
	}
	return 'N/A';
}

function parseCountMetric(content, metricName) {
	const result = parseMetricsTable(content, metricName);
	// Extract number from string, handling commas and spaces
	const numberMatch = result.toString().match(/[\d,]+/);
	if (numberMatch) {
		const number = parseInt(numberMatch[0].replace(/,/g, ''));
		return isNaN(number) ? 0 : number;
	}
	return 0;
}

function main() {
	const metrics = {
		issues_created: 0,
		issues_closed: 0,
		prs_created: 0,
		prs_merged: 0,
		issue_avg_first_response: 'N/A',
		issue_avg_time_to_close: 'N/A',
		pr_avg_first_response: 'N/A',
		pr_avg_merge_time: 'N/A'
	};

	// Parse issue metrics
	if (existsSync('issue_metrics.md')) {
		console.log('📄 Found issue_metrics.md, parsing...');
		const issueContent = readFileSync('issue_metrics.md', 'utf8');

		metrics.issues_created = parseCountMetric(
			issueContent,
			'Total number of items created'
		);
		metrics.issues_closed = parseCountMetric(
			issueContent,
			'Number of items closed'
		);
		metrics.issue_avg_first_response = parseMetricsTable(
			issueContent,
			'Time to first response'
		);
		metrics.issue_avg_time_to_close = parseMetricsTable(
			issueContent,
			'Time to close'
		);
	} else {
		console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
	}

	// Parse PR created metrics
	if (existsSync('pr_created_metrics.md')) {
		console.log('📄 Found pr_created_metrics.md, parsing...');
		const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');

		metrics.prs_created = parseCountMetric(
			prCreatedContent,
			'Total number of items created'
		);
		metrics.pr_avg_first_response = parseMetricsTable(
			prCreatedContent,
			'Time to first response'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_created_metrics.md not found; using defaults.'
		);
	}

	// Parse PR merged metrics (for more accurate merge data)
	if (existsSync('pr_merged_metrics.md')) {
		console.log('📄 Found pr_merged_metrics.md, parsing...');
		const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');

		metrics.prs_merged = parseCountMetric(
			prMergedContent,
			'Total number of items created'
		);
		// For merged PRs, "Time to close" is actually time to merge
		metrics.pr_avg_merge_time = parseMetricsTable(
			prMergedContent,
			'Time to close'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
		);
		// Fallback: try old pr_metrics.md if it exists
		if (existsSync('pr_metrics.md')) {
			console.log('📄 Falling back to pr_metrics.md...');
			const prContent = readFileSync('pr_metrics.md', 'utf8');

			const mergedCount = parseCountMetric(prContent, 'Number of items merged');
			metrics.prs_merged =
				mergedCount || parseCountMetric(prContent, 'Number of items closed');

			const maybeMergeTime = parseMetricsTable(
				prContent,
				'Average time to merge'
			);
			metrics.pr_avg_merge_time =
				maybeMergeTime !== 'N/A'
					? maybeMergeTime
					: parseMetricsTable(prContent, 'Time to close');
		} else {
			console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
		}
	}

	// Output for GitHub Actions
	const output = Object.entries(metrics)
		.map(([key, value]) => `${key}=${value}`)
		.join('\n');

	// Always output to stdout for debugging
	console.log('\n=== FINAL METRICS ===');
	Object.entries(metrics).forEach(([key, value]) => {
		console.log(`${key}: ${value}`);
	});

	// Write to GITHUB_OUTPUT if in GitHub Actions
	if (process.env.GITHUB_OUTPUT) {
		try {
			writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
			console.log(
				`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
			);
		} catch (error) {
			console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
			process.exit(1);
		}
	} else {
		console.log(
			'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
		);
	}
}

main();
```
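The table parser in the deleted script above can be exercised standalone. A minimal sketch: `parseMetricsTable` is copied verbatim from the script, while the sample markdown table is invented for illustration. It escapes the metric name, anchors it as the first cell of a table row, and returns the trimmed second cell (or `'N/A'` when the metric is absent).

```javascript
// Copied from the deleted parse-metrics.mjs: find a metric row in a
// markdown table and return the value in the next cell.
function parseMetricsTable(content, metricName) {
	const lines = content.split('\n');

	for (let i = 0; i < lines.length; i++) {
		const line = lines[i].trim();
		// Match a markdown table row like: | Metric Name | value | ...
		const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
		const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
		const match = line.match(re);
		if (match) {
			return match[1].trim() || 'N/A';
		}
	}
	return 'N/A';
}

// Sample table (invented) in the shape the metrics action emits:
const sample = [
	'| Metric | Average |',
	'| --- | --- |',
	'| Time to first response | 1 day, 2:34:56 |',
	'| Time to close | 3 days, 4:05:06 |'
].join('\n');

console.log(parseMetricsTable(sample, 'Time to first response')); // → 1 day, 2:34:56
console.log(parseMetricsTable(sample, 'Missing metric')); // → N/A
```

Escaping the metric name before building the `RegExp` matters here: names like "Time to close (avg.)" would otherwise inject regex metacharacters into the pattern.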
.github/scripts/release.mjs (vendored, 30 lines)
@@ -1,30 +0,0 @@

```javascript
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

console.log('🚀 Starting release process...');

// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
	console.log('⚠️ Warning: pre.json still exists. Removing it...');
	unlinkSync(preJsonPath);
}

// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);

// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);

console.log('✅ Release process completed!');

// The extension tag (if created) will trigger the extension-release workflow
```
.github/scripts/release.sh (vendored, executable file, 21 lines)
@@ -0,0 +1,21 @@

```bash
#!/bin/bash
set -e

echo "🚀 Starting release process..."

# Double-check we're not in pre-release mode (safety net)
if [ -f .changeset/pre.json ]; then
	echo "⚠️ Warning: pre.json still exists. Removing it..."
	rm -f .changeset/pre.json
fi

# Check if the extension version has changed and tag it
# This prevents changeset from trying to publish the private package
node .github/scripts/tag-extension.mjs

# Run changeset publish for npm packages
npx changeset publish

echo "✅ Release process completed!"

# The extension tag (if created) will trigger the extension-release workflow
```
.github/scripts/tag-extension.mjs (vendored, 76 changes; mode changed from executable file to normal file)

@@ -1,13 +1,33 @@
 #!/usr/bin/env node
 import assert from 'node:assert/strict';
-import { readFileSync } from 'node:fs';
-import { join, dirname } from 'node:path';
+import { spawnSync } from 'node:child_process';
+import { readFileSync, existsSync } from 'node:fs';
+import { join, dirname, resolve } from 'node:path';
 import { fileURLToPath } from 'node:url';
-import { findRootDir, createAndPushTag } from './utils.mjs';

 const __filename = fileURLToPath(import.meta.url);
 const __dirname = dirname(__filename);

+// Find the root directory by looking for package.json
+function findRootDir(startDir) {
+	let currentDir = resolve(startDir);
+	while (currentDir !== '/') {
+		if (existsSync(join(currentDir, 'package.json'))) {
+			// Verify it's the root package.json by checking for expected fields
+			try {
+				const pkg = JSON.parse(
+					readFileSync(join(currentDir, 'package.json'), 'utf8')
+				);
+				if (pkg.name === 'task-master-ai' || pkg.repository) {
+					return currentDir;
+				}
+			} catch {}
+		}
+		currentDir = dirname(currentDir);
+	}
+	throw new Error('Could not find root directory');
+}
+
 const rootDir = findRootDir(__dirname);

 // Read the extension's package.json
@@ -23,11 +43,57 @@ try {
 	process.exit(1);
 }

+// Read root package.json for repository info
+const rootPkgPath = join(rootDir, 'package.json');
+let rootPkg;
+try {
+	const rootPkgContent = readFileSync(rootPkgPath, 'utf8');
+	rootPkg = JSON.parse(rootPkgContent);
+} catch (error) {
+	console.error('Failed to read root package.json:', error.message);
+	process.exit(1);
+}
+
 // Ensure we have required fields
 assert(pkg.name, 'package.json must have a name field');
 assert(pkg.version, 'package.json must have a version field');
+assert(rootPkg.repository, 'root package.json must have a repository field');

 const tag = `${pkg.name}@${pkg.version}`;

-// Create and push the tag if it doesn't exist
-createAndPushTag(tag);
+// Get repository URL from root package.json
+const repoUrl = rootPkg.repository.url;
+
+const { status, stdout, error } = spawnSync('git', ['ls-remote', repoUrl, tag]);
+
+assert.equal(status, 0, error);
+
+const exists = String(stdout).trim() !== '';
+
+if (!exists) {
+	console.log(`Creating new extension tag: ${tag}`);
+
+	// Create the tag
+	const tagResult = spawnSync('git', ['tag', tag]);
+	if (tagResult.status !== 0) {
+		console.error(
+			'Failed to create tag:',
+			tagResult.error || tagResult.stderr.toString()
|
||||||
|
);
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
|
||||||
|
console.log(`✅ Successfully created and pushed tag: ${tag}`);
|
||||||
|
} else {
|
||||||
|
console.log(`Extension tag already exists: ${tag}`);
|
||||||
|
}
|
||||||
|
|||||||
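The inlined `findRootDir` above walks upward from the script's directory until it finds a `package.json` that looks like the repository root. A minimal standalone sketch of the same traversal, with the root test injected so it can be exercised without touching the filesystem (`findRootDirWith` and `isRoot` are illustrative names, not part of the diff):

```javascript
import { dirname, resolve } from 'node:path';

// Walk up from startDir until isRoot(dir) is satisfied.
// Stopping when dirname(dir) === dir handles the filesystem root portably,
// unlike the POSIX-only `currentDir !== '/'` check in the script above.
function findRootDirWith(startDir, isRoot) {
	let currentDir = resolve(startDir);
	while (currentDir !== dirname(currentDir)) {
		if (isRoot(currentDir)) return currentDir;
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}
```

In the real script the root test reads `package.json` and accepts a directory when `pkg.name === 'task-master-ai' || pkg.repository`.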
.github/scripts/utils.mjs (deleted)

@@ -1,88 +0,0 @@
-#!/usr/bin/env node
-import { spawnSync } from 'node:child_process';
-import { readFileSync } from 'node:fs';
-import { join, dirname, resolve } from 'node:path';
-
-// Find the root directory by looking for package.json with task-master-ai
-export function findRootDir(startDir) {
-	let currentDir = resolve(startDir);
-	while (currentDir !== '/') {
-		const pkgPath = join(currentDir, 'package.json');
-		try {
-			const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
-			if (pkg.name === 'task-master-ai' || pkg.repository) {
-				return currentDir;
-			}
-		} catch {}
-		currentDir = dirname(currentDir);
-	}
-	throw new Error('Could not find root directory');
-}
-
-// Run a command with proper error handling
-export function runCommand(command, args = [], options = {}) {
-	console.log(`Running: ${command} ${args.join(' ')}`);
-	const result = spawnSync(command, args, {
-		encoding: 'utf8',
-		stdio: 'inherit',
-		...options
-	});
-
-	if (result.status !== 0) {
-		console.error(`Command failed with exit code ${result.status}`);
-		process.exit(result.status);
-	}
-
-	return result;
-}
-
-// Get package version from a package.json file
-export function getPackageVersion(packagePath) {
-	try {
-		const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
-		return pkg.version;
-	} catch (error) {
-		console.error(
-			`Failed to read package version from ${packagePath}:`,
-			error.message
-		);
-		process.exit(1);
-	}
-}
-
-// Check if a git tag exists on remote
-export function tagExistsOnRemote(tag, remote = 'origin') {
-	const result = spawnSync('git', ['ls-remote', remote, tag], {
-		encoding: 'utf8'
-	});
-
-	return result.status === 0 && result.stdout.trim() !== '';
-}
-
-// Create and push a git tag if it doesn't exist
-export function createAndPushTag(tag, remote = 'origin') {
-	// Check if tag already exists
-	if (tagExistsOnRemote(tag, remote)) {
-		console.log(`Tag ${tag} already exists on remote, skipping`);
-		return false;
-	}
-
-	console.log(`Creating new tag: ${tag}`);
-
-	// Create the tag locally
-	const tagResult = spawnSync('git', ['tag', tag]);
-	if (tagResult.status !== 0) {
-		console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
-		process.exit(1);
-	}
-
-	// Push the tag to remote
-	const pushResult = spawnSync('git', ['push', remote, tag]);
-	if (pushResult.status !== 0) {
-		console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
-		process.exit(1);
-	}
-
-	console.log(`✅ Successfully created and pushed tag: ${tag}`);
-	return true;
-}
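The deleted `tagExistsOnRemote` helper treats a tag as present when `git ls-remote <remote> <tag>` exits 0 and prints a ref line. The same check can be sketched with the process runner injected so it is testable without a real remote (the injection is an assumption for illustration; the script called `spawnSync` directly):

```javascript
// run must behave like child_process.spawnSync: return { status, stdout }.
// git ls-remote prints "<sha>\trefs/tags/<tag>" only when the ref exists,
// so an empty stdout on a zero exit code means the tag is absent.
function tagExistsOnRemoteWith(tag, remote, run) {
	const result = run('git', ['ls-remote', remote, tag], { encoding: 'utf8' });
	return result.status === 0 && result.stdout.trim() !== '';
}
```

`createAndPushTag` builds on this: it skips the `git tag` / `git push` pair whenever the check returns true, which is what makes the tagging workflow re-runnable.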
.github/workflows/auto-close-duplicates.yml (deleted)

@@ -1,31 +0,0 @@
-name: Auto-close duplicate issues
-# description: Auto-closes issues that are duplicates of existing issues
-
-on:
-  schedule:
-    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
-  workflow_dispatch:
-
-jobs:
-  auto-close-duplicates:
-    runs-on: ubuntu-latest
-    timeout-minutes: 10
-    permissions:
-      contents: read
-      issues: write # Need write permission to close issues and add comments
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-
-      - name: Setup Node.js
-        uses: actions/setup-node@v4
-        with:
-          node-version: 20
-
-      - name: Auto-close duplicate issues
-        run: node .github/scripts/auto-close-duplicates.mjs
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
-          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
@@ -1,46 +0,0 @@
-name: Backfill Duplicate Comments
-# description: Triggers duplicate detection for old issues that don't have duplicate comments
-
-on:
-  workflow_dispatch:
-    inputs:
-      days_back:
-        description: "How many days back to look for old issues"
-        required: false
-        default: "90"
-        type: string
-      dry_run:
-        description: "Dry run mode (true to only log what would be done)"
-        required: false
-        default: "true"
-        type: choice
-        options:
-          - "true"
-          - "false"
-
-jobs:
-  backfill-duplicate-comments:
-    runs-on: ubuntu-latest
-    timeout-minutes: 30
-    permissions:
-      contents: read
-      issues: read
-      actions: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-
-      - name: Setup Node.js
-        uses: actions/setup-node@v4
-        with:
-          node-version: 20
-
-      - name: Backfill duplicate comments
-        run: node .github/scripts/backfill-duplicate-comments.mjs
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
-          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
-          DAYS_BACK: ${{ inputs.days_back }}
-          DRY_RUN: ${{ inputs.dry_run }}
.github/workflows/ci.yml

@@ -6,124 +6,73 @@ on:
     - main
     - next
   pull_request:
-  workflow_dispatch:
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
-  cancel-in-progress: true
+    branches:
+      - main
+      - next
 
 permissions:
   contents: read
 
-env:
-  DO_NOT_TRACK: 1
-  NODE_ENV: development
-
 jobs:
-  # Fast checks that can run in parallel
-  format-check:
-    name: Format Check
+  setup:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
         with:
-          fetch-depth: 2
+          fetch-depth: 0
 
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'
 
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
+      - name: Install Dependencies
+        id: install
+        run: npm ci
+        timeout-minutes: 2
+
+      - name: Cache node_modules
+        uses: actions/cache@v4
+        with:
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
+
+  format-check:
+    needs: setup
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-node@v4
+        with:
+          node-version: 20
+
+      - name: Restore node_modules
+        uses: actions/cache@v4
+        with:
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
 
       - name: Format Check
         run: npm run format-check
         env:
           FORCE_COLOR: 1
 
-  typecheck:
-    name: Typecheck
-    timeout-minutes: 10
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20
-          cache: "npm"
-
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Typecheck
-        run: npm run turbo:typecheck
-        env:
-          FORCE_COLOR: 1
-
-  # Build job to ensure everything compiles
-  build:
-    name: Build
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20
-          cache: "npm"
-
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Build
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
-
-      - name: Upload build artifacts
-        uses: actions/upload-artifact@v4
-        with:
-          name: build-artifacts
-          path: dist/
-          retention-days: 1
-
   test:
-    name: Test
-    timeout-minutes: 15
+    needs: setup
     runs-on: ubuntu-latest
-    needs: [format-check, typecheck, build]
     steps:
       - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
 
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
 
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Download build artifacts
-        uses: actions/download-artifact@v4
+      - name: Restore node_modules
+        uses: actions/cache@v4
         with:
-          name: build-artifacts
-          path: dist/
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
 
       - name: Run Tests
         run: |
@@ -132,6 +81,7 @@ jobs:
           NODE_ENV: test
           CI: true
           FORCE_COLOR: 1
+        timeout-minutes: 10
 
       - name: Upload Test Results
         if: always()
.github/workflows/claude-dedupe-issues.yml (deleted)

@@ -1,81 +0,0 @@
-name: Claude Issue Dedupe
-# description: Automatically dedupe GitHub issues using Claude Code
-
-on:
-  issues:
-    types: [opened]
-  workflow_dispatch:
-    inputs:
-      issue_number:
-        description: "Issue number to process for duplicate detection"
-        required: true
-        type: string
-
-jobs:
-  claude-dedupe-issues:
-    runs-on: ubuntu-latest
-    timeout-minutes: 10
-    permissions:
-      contents: read
-      issues: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-
-      - name: Run Claude Code slash command
-        uses: anthropics/claude-code-base-action@beta
-        with:
-          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
-          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
-          claude_env: |
-            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Log duplicate comment event to Statsig
-        if: always()
-        env:
-          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
-        run: |
-          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
-          REPO=${{ github.repository }}
-
-          if [ -z "$STATSIG_API_KEY" ]; then
-            echo "STATSIG_API_KEY not found, skipping Statsig logging"
-            exit 0
-          fi
-
-          # Prepare the event payload
-          EVENT_PAYLOAD=$(jq -n \
-            --arg issue_number "$ISSUE_NUMBER" \
-            --arg repo "$REPO" \
-            --arg triggered_by "${{ github.event_name }}" \
-            '{
-              events: [{
-                eventName: "github_duplicate_comment_added",
-                value: 1,
-                metadata: {
-                  repository: $repo,
-                  issue_number: ($issue_number | tonumber),
-                  triggered_by: $triggered_by,
-                  workflow_run_id: "${{ github.run_id }}"
-                },
-                time: (now | floor | tostring)
-              }]
-            }')
-
-          # Send to Statsig API
-          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"
-
-          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
-            -H "Content-Type: application/json" \
-            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
-            -d "$EVENT_PAYLOAD")
-
-          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
-          BODY=$(echo "$RESPONSE" | head -n-1)
-
-          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
-            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
-          else
-            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
-          fi
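The Statsig step above appends the HTTP status to the response body with curl's `-w "\n%{http_code}"` and then separates them again using `tail -n1` / `head -n-1`. The same parsing, sketched in Node purely for clarity (`splitCurlResponse` is a hypothetical helper, not part of the workflow):

```javascript
// Split a curl response captured with -w "\n%{http_code}": everything up to
// the last line is the body, and the last line is the HTTP status code.
function splitCurlResponse(raw) {
	const lines = raw.split('\n');
	const httpCode = Number(lines[lines.length - 1]);
	const body = lines.slice(0, -1).join('\n');
	return { httpCode, body };
}
```

The workflow then accepts 200 or 202 as success and only logs a warning on failure, so a Statsig outage cannot fail the dedupe job.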
.github/workflows/claude-docs-trigger.yml (deleted)

@@ -1,57 +0,0 @@
-name: Trigger Claude Documentation Update
-
-on:
-  push:
-    branches:
-      - next
-    paths-ignore:
-      - "apps/docs/**"
-      - "*.md"
-      - ".github/workflows/**"
-
-jobs:
-  trigger-docs-update:
-    # Only run if changes were merged (not direct pushes from bots)
-    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      actions: write
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-        with:
-          fetch-depth: 2 # Need previous commit for comparison
-
-      - name: Get changed files
-        id: changed-files
-        run: |
-          echo "Changed files in this push:"
-          git diff --name-only HEAD^ HEAD | tee changed_files.txt
-
-          # Store changed files for Claude to analyze (escaped for JSON)
-          CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
-          echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT
-
-          # Get the commit message (escaped for JSON)
-          COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
-          echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT
-
-          # Get diff for documentation context (escaped for JSON)
-          COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
-          echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT
-
-          # Get commit SHA
-          echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT
-
-      - name: Trigger Claude workflow
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        run: |
-          # Trigger the Claude docs updater workflow with the change information
-          gh workflow run claude-docs-updater.yml \
-            --ref next \
-            -f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
-            -f commit_message=${{ steps.changed-files.outputs.commit_message }} \
-            -f changed_files=${{ steps.changed-files.outputs.changed_files }} \
-            -f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
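The trigger workflow above pipes multi-line git output through `jq -Rs .` so it survives being written to `$GITHUB_OUTPUT` and later substituted into `gh workflow run -f` arguments. `jq -Rs` slurps raw input into a single string and JSON-encodes it; in Node the equivalent is simply `JSON.stringify` (shown here only to illustrate the escaping, not as part of the workflow):

```javascript
// Mirror `jq -Rs .`: encode a raw multi-line string as one JSON string,
// turning newlines into \n so the value fits on a single output line.
function escapeLikeJqRs(raw) {
	return JSON.stringify(raw);
}
```

Without this escaping, the embedded newlines in `git diff --name-only` output would break the `key=value` format that `$GITHUB_OUTPUT` expects.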
.github/workflows/claude-docs-updater.yml (deleted)

@@ -1,145 +0,0 @@
-name: Claude Documentation Updater
-
-on:
-  workflow_dispatch:
-    inputs:
-      commit_sha:
-        description: 'The commit SHA that triggered this update'
-        required: true
-        type: string
-      commit_message:
-        description: 'The commit message'
-        required: true
-        type: string
-      changed_files:
-        description: 'List of changed files'
-        required: true
-        type: string
-      commit_diff:
-        description: 'Diff summary of changes'
-        required: true
-        type: string
-
-jobs:
-  update-docs:
-    runs-on: ubuntu-latest
-    permissions:
-      contents: write
-      pull-requests: write
-      issues: write
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-        with:
-          ref: next
-          fetch-depth: 0 # Need full history to checkout specific commit
-
-      - name: Create docs update branch
-        id: create-branch
-        run: |
-          BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
-          git checkout -b $BRANCH_NAME
-          echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
-
-      - name: Run Claude Code to Update Documentation
-        uses: anthropics/claude-code-action@beta
-        with:
-          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
-          timeout_minutes: "30"
-          mode: "agent"
-          github_token: ${{ secrets.GITHUB_TOKEN }}
-          experimental_allowed_domains: |
-            .anthropic.com
-            .github.com
-            api.github.com
-            .githubusercontent.com
-            registry.npmjs.org
-            .task-master.dev
-          base_branch: "next"
-          direct_prompt: |
-            You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.
-
-            Recent changes:
-            - Commit: ${{ inputs.commit_message }}
-            - Changed files:
-            ${{ inputs.changed_files }}
-
-            - Changes summary:
-            ${{ inputs.commit_diff }}
-
-            Your task:
-            1. Analyze the changes to understand what functionality was added, modified, or removed
-            2. Check if these changes require documentation updates in apps/docs/
-            3. If documentation updates are needed:
-               - Update relevant documentation files in apps/docs/
-               - Ensure examples are updated if APIs changed
-               - Update any configuration documentation if config options changed
-               - Add new documentation pages if new features were added
-               - Update the changelog or release notes if applicable
-            4. If no documentation updates are needed, skip creating changes
-
-            Guidelines:
-            - Focus only on user-facing changes that need documentation
-            - Keep documentation clear, concise, and helpful
-            - Include code examples where appropriate
-            - Maintain consistent documentation style with existing docs
-            - Don't document internal implementation details unless they affect users
-            - Update navigation/menu files if new pages are added
-
-            Only make changes if the documentation truly needs updating based on the code changes.
-
-      - name: Check if changes were made
-        id: check-changes
-        run: |
-          if git diff --quiet; then
-            echo "has_changes=false" >> $GITHUB_OUTPUT
-          else
-            echo "has_changes=true" >> $GITHUB_OUTPUT
-            git add -A
-            git config --local user.email "github-actions[bot]@users.noreply.github.com"
-            git config --local user.name "github-actions[bot]"
-            git commit -m "docs: auto-update documentation based on changes in next branch
-
-          This PR was automatically generated to update documentation based on recent changes.
-
-          Original commit: ${{ inputs.commit_message }}
-
-          Co-authored-by: Claude <claude-assistant@anthropic.com>"
-          fi
-
-      - name: Push changes and create PR
-        if: steps.check-changes.outputs.has_changes == 'true'
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        run: |
-          git push origin ${{ steps.create-branch.outputs.branch_name }}
-
-          # Create PR using GitHub CLI
-          gh pr create \
-            --title "docs: update documentation for recent changes" \
-            --body "## 📚 Documentation Update
-
-          This PR automatically updates documentation based on recent changes merged to the \`next\` branch.
-
-          ### Original Changes
-          **Commit:** ${{ inputs.commit_sha }}
-          **Message:** ${{ inputs.commit_message }}
-
-          ### Changed Files in Original Commit
-          \`\`\`
-          ${{ inputs.changed_files }}
-          \`\`\`
-
-          ### Documentation Updates
-          This PR includes documentation updates to reflect the changes above. Please review to ensure:
-          - [ ] Documentation accurately reflects the changes
-          - [ ] Examples are correct and working
-          - [ ] No important details are missing
-          - [ ] Style is consistent with existing documentation
-
-          ---
-          *This PR was automatically generated by Claude Code GitHub Action*" \
-            --base next \
-            --head ${{ steps.create-branch.outputs.branch_name }} \
-            --label "documentation" \
-            --label "automated"
.github/workflows/claude-issue-triage.yml (deleted)

@@ -1,107 +0,0 @@
-name: Claude Issue Triage
-# description: Automatically triage GitHub issues using Claude Code
-
-on:
-  issues:
-    types: [opened]
-
-jobs:
-  triage-issue:
-    runs-on: ubuntu-latest
-    timeout-minutes: 10
-    permissions:
-      contents: read
-      issues: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-
-      - name: Create triage prompt
-        run: |
-          mkdir -p /tmp/claude-prompts
-          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
-          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
-
-          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
-
-          Issue Information:
-          - REPO: ${{ github.repository }}
-          - ISSUE_NUMBER: ${{ github.event.issue.number }}
-
-          TASK OVERVIEW:
-
-          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
-
-          2. Next, use the GitHub tools to get context about the issue:
-             - You have access to these tools:
-               - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
-               - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
-               - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
-               - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
-               - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
-             - Start by using mcp__github__get_issue to get the issue details
-
-          3. Analyze the issue content, considering:
-             - The issue title and description
-             - The type of issue (bug report, feature request, question, etc.)
-             - Technical areas mentioned
-             - Severity or priority indicators
-             - User impact
-             - Components affected
-
-          4. Select appropriate labels from the available labels list provided above:
-             - Choose labels that accurately reflect the issue's nature
-             - Be specific but comprehensive
-             - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
-             - Consider platform labels (android, ios) if applicable
-             - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
-
-          5. Apply the selected labels:
-             - Use mcp__github__update_issue to apply your selected labels
-             - DO NOT post any comments explaining your decision
-             - DO NOT communicate directly with users
-             - If no labels are clearly applicable, do not apply any labels
-
-          IMPORTANT GUIDELINES:
-          - Be thorough in your analysis
-          - Only select labels from the provided list above
-          - DO NOT post any comments to the issue
-          - Your ONLY action should be to apply labels using mcp__github__update_issue
-          - It's okay to not add any labels if none are clearly applicable
-          EOF
-
-      - name: Setup GitHub MCP Server
-        run: |
-          mkdir -p /tmp/mcp-config
-          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
-          {
-            "mcpServers": {
-              "github": {
-                "command": "docker",
-                "args": [
-                  "run",
-                  "-i",
-                  "--rm",
-                  "-e",
-                  "GITHUB_PERSONAL_ACCESS_TOKEN",
-                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
-                ],
-                "env": {
-                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
-                }
-              }
-            }
-          }
-          EOF
-
-      - name: Run Claude Code for Issue Triage
-        uses: anthropics/claude-code-base-action@beta
-        with:
-          prompt_file: /tmp/claude-prompts/triage-prompt.txt
-          allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
-          timeout_minutes: "5"
-          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
-          mcp_config: /tmp/mcp-config/mcp-servers.json
-          claude_env: |
-            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/claude.yml (deleted)

@@ -1,36 +0,0 @@
-name: Claude Code
-
-on:
-  issue_comment:
-    types: [created]
-  pull_request_review_comment:
-    types: [created]
-  issues:
-    types: [opened, assigned]
-  pull_request_review:
-    types: [submitted]
-
-jobs:
-  claude:
-    if: |
-      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
-      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
-      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
-      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      pull-requests: read
-      issues: read
-      id-token: write
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
-        with:
-          fetch-depth: 1
-
-      - name: Run Claude Code
-        id: claude
-        uses: anthropics/claude-code-action@beta
-        with:
-          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
.github/workflows/extension-ci.yml

@@ -41,7 +41,8 @@ jobs:
          restore-keys: |
            ${{ runner.os }}-node-
 
-      - name: Install Monorepo Dependencies
+      - name: Install Extension Dependencies
+        working-directory: apps/extension
         run: npm ci
         timeout-minutes: 5
 
@@ -67,6 +68,7 @@ jobs:
            ${{ runner.os }}-node-
 
       - name: Install if cache miss
+        working-directory: apps/extension
         run: npm ci
         timeout-minutes: 3
 
@@ -98,6 +100,7 @@ jobs:
            ${{ runner.os }}-node-
 
       - name: Install if cache miss
+        working-directory: apps/extension
         run: npm ci
         timeout-minutes: 3
 
29 changes: .github/workflows/extension-release.yml (vendored)
@@ -31,7 +31,8 @@ jobs:
          restore-keys: |
            ${{ runner.os }}-node-

-      - name: Install Monorepo Dependencies
+      - name: Install Extension Dependencies
+        working-directory: apps/extension
         run: npm ci
         timeout-minutes: 5

@@ -88,6 +89,32 @@ jobs:
           OVSX_PAT: ${{ secrets.OVSX_PAT }}
           FORCE_COLOR: 1

+      - name: Create GitHub Release
+        uses: actions/create-release@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          tag_name: ${{ github.ref_name }}
+          release_name: Extension ${{ github.ref_name }}
+          body: |
+            VS Code Extension Release ${{ github.ref_name }}
+
+            **Marketplaces:**
+            - [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=Hamster.task-master-hamster)
+            - [Open VSX Registry](https://open-vsx.org/extension/Hamster/task-master-hamster)
+          draft: false
+          prerelease: false
+
+      - name: Upload VSIX to Release
+        uses: actions/upload-release-asset@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          upload_url: ${{ steps.create_release.outputs.upload_url }}
+          asset_path: apps/extension/vsix-build/${{ steps.vsix-info.outputs.vsix-filename }}
+          asset_name: ${{ steps.vsix-info.outputs.vsix-filename }}
+          asset_content_type: application/zip
+
       - name: Upload Build Artifacts
         uses: actions/upload-artifact@v4
         with:
176 changes: .github/workflows/log-issue-events.yml (vendored)
@@ -1,176 +0,0 @@
-name: Log GitHub Issue Events
-
-on:
-  issues:
-    types: [opened, closed]
-
-jobs:
-  log-issue-created:
-    if: github.event.action == 'opened'
-    runs-on: ubuntu-latest
-    timeout-minutes: 5
-    permissions:
-      contents: read
-      issues: read
-
-    steps:
-      - name: Log issue creation to Statsig
-        env:
-          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
-        run: |
-          ISSUE_NUMBER=${{ github.event.issue.number }}
-          REPO=${{ github.repository }}
-          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
-          AUTHOR="${{ github.event.issue.user.login }}"
-          CREATED_AT="${{ github.event.issue.created_at }}"
-
-          if [ -z "$STATSIG_API_KEY" ]; then
-            echo "STATSIG_API_KEY not found, skipping Statsig logging"
-            exit 0
-          fi
-
-          # Prepare the event payload
-          EVENT_PAYLOAD=$(jq -n \
-            --arg issue_number "$ISSUE_NUMBER" \
-            --arg repo "$REPO" \
-            --arg title "$ISSUE_TITLE" \
-            --arg author "$AUTHOR" \
-            --arg created_at "$CREATED_AT" \
-            '{
-              events: [{
-                eventName: "github_issue_created",
-                value: 1,
-                metadata: {
-                  repository: $repo,
-                  issue_number: ($issue_number | tonumber),
-                  issue_title: $title,
-                  issue_author: $author,
-                  created_at: $created_at
-                },
-                time: (now | floor | tostring)
-              }]
-            }')
-
-          # Send to Statsig API
-          echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"
-
-          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
-            -H "Content-Type: application/json" \
-            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
-            -d "$EVENT_PAYLOAD")
-
-          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
-          BODY=$(echo "$RESPONSE" | head -n-1)
-
-          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
-            echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
-          else
-            echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
-          fi
-
-  log-issue-closed:
-    if: github.event.action == 'closed'
-    runs-on: ubuntu-latest
-    timeout-minutes: 5
-    permissions:
-      contents: read
-      issues: read
-
-    steps:
-      - name: Log issue closure to Statsig
-        env:
-          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        run: |
-          ISSUE_NUMBER=${{ github.event.issue.number }}
-          REPO=${{ github.repository }}
-          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
-          CLOSED_BY="${{ github.event.issue.closed_by.login }}"
-          CLOSED_AT="${{ github.event.issue.closed_at }}"
-          STATE_REASON="${{ github.event.issue.state_reason }}"
-
-          if [ -z "$STATSIG_API_KEY" ]; then
-            echo "STATSIG_API_KEY not found, skipping Statsig logging"
-            exit 0
-          fi
-
-          # Get additional issue data via GitHub API
-          echo "Fetching additional issue data for #${ISSUE_NUMBER}"
-          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
-            -H "Accept: application/vnd.github.v3+json" \
-            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")
-
-          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')
-
-          # Get reactions data
-          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
-            -H "Accept: application/vnd.github.v3+json" \
-            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")
-
-          REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')
-
-          # Check if issue was closed automatically (by checking if closed_by is a bot)
-          CLOSED_AUTOMATICALLY="false"
-          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
-            CLOSED_AUTOMATICALLY="true"
-          fi
-
-          # Check if closed as duplicate by state_reason
-          CLOSED_AS_DUPLICATE="false"
-          if [ "$STATE_REASON" = "duplicate" ]; then
-            CLOSED_AS_DUPLICATE="true"
-          fi
-
-          # Prepare the event payload
-          EVENT_PAYLOAD=$(jq -n \
-            --arg issue_number "$ISSUE_NUMBER" \
-            --arg repo "$REPO" \
-            --arg title "$ISSUE_TITLE" \
-            --arg closed_by "$CLOSED_BY" \
-            --arg closed_at "$CLOSED_AT" \
-            --arg state_reason "$STATE_REASON" \
-            --arg comments_count "$COMMENTS_COUNT" \
-            --arg reactions_count "$REACTIONS_COUNT" \
-            --arg closed_automatically "$CLOSED_AUTOMATICALLY" \
-            --arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
-            '{
-              events: [{
-                eventName: "github_issue_closed",
-                value: 1,
-                metadata: {
-                  repository: $repo,
-                  issue_number: ($issue_number | tonumber),
-                  issue_title: $title,
-                  closed_by: $closed_by,
-                  closed_at: $closed_at,
-                  state_reason: $state_reason,
-                  comments_count: ($comments_count | tonumber),
-                  reactions_count: ($reactions_count | tonumber),
-                  closed_automatically: ($closed_automatically | test("true")),
-                  closed_as_duplicate: ($closed_as_duplicate | test("true"))
-                },
-                time: (now | floor | tostring)
-              }]
-            }')
-
-          # Send to Statsig API
-          echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"
-
-          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
-            -H "Content-Type: application/json" \
-            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
-            -d "$EVENT_PAYLOAD")
-
-          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
-          BODY=$(echo "$RESPONSE" | head -n-1)
-
-          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
-            echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
-            echo "Closed by: $CLOSED_BY"
-            echo "Comments: $COMMENTS_COUNT"
-            echo "Reactions: $REACTIONS_COUNT"
-            echo "Closed automatically: $CLOSED_AUTOMATICALLY"
-            echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
-          else
-            echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
-          fi
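The deleted workflow builds its Statsig payload with `jq -n`, piping every numeric field through `tonumber` because GitHub Actions interpolation hands the shell only strings. A minimal JavaScript replica of that payload construction (sample values fabricated; only the field names come from the workflow above):

```javascript
// Replica of the `jq -n` payload shape from the deleted workflow.
// Number() mirrors jq's `($issue_number | tonumber)`; the `time` field
// mirrors `(now | floor | tostring)` — Unix seconds as a string.
function buildIssueCreatedEvent({ repo, issueNumber, title, author, createdAt }) {
  return {
    events: [
      {
        eventName: 'github_issue_created',
        value: 1,
        metadata: {
          repository: repo,
          issue_number: Number(issueNumber), // string arg -> number
          issue_title: title,
          issue_author: author,
          created_at: createdAt
        },
        time: String(Math.floor(Date.now() / 1000))
      }
    ]
  };
}

// Fabricated sample invocation:
const evt = buildIssueCreatedEvent({
  repo: 'eyaltoledano/claude-task-master',
  issueNumber: '1301',
  title: 'Example issue',
  author: 'octocat',
  createdAt: '2025-01-01T00:00:00Z'
});
console.log(typeof evt.events[0].metadata.issue_number);
```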
45 changes: .github/workflows/pre-release.yml (vendored)
@@ -3,13 +3,11 @@ name: Pre-Release (RC)
 on:
   workflow_dispatch: # Allows manual triggering from GitHub UI/API

-concurrency: pre-release-${{ github.ref_name }}
+concurrency: pre-release-${{ github.ref }}

 jobs:
   rc:
     runs-on: ubuntu-latest
-    # Only allow pre-releases on non-main branches
-    if: github.ref != 'refs/heads/main'
-    environment: extension-release
     steps:
       - uses: actions/checkout@v4
         with:
@@ -36,26 +34,9 @@ jobs:

       - name: Enter RC mode (if not already in RC mode)
         run: |
-          # Check if we're in pre-release mode with the "rc" tag
-          if [ -f .changeset/pre.json ]; then
-            MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
-            TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
-
-            if [ "$MODE" = "exit" ]; then
-              echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
-              echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
-              npx changeset pre exit
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
-              echo "Already in RC pre-release mode"
-            else
-              echo "Unknown mode state: $MODE, entering RC mode..."
-              npx changeset pre enter rc
-            fi
-          else
-            echo "No pre.json found, entering RC mode..."
+          # ensure we're in the right pre-mode (tag "rc")
+          if [ ! -f .changeset/pre.json ] \
+            || [ "$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')" != "rc" ]; then
             npx changeset pre enter rc
           fi

@@ -65,24 +46,10 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

-      - name: Run format
-        run: npm run format
-        env:
-          FORCE_COLOR: 1
-
-      - name: Build packages
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
-
       - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
         uses: changesets/action@v1
         with:
-          publish: npx changeset publish
+          publish: npm run release
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
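The old side of the "Enter RC mode" step encodes a small decision table over the `mode` and `tag` fields of `.changeset/pre.json`. Condensed as a function for clarity (illustrative sketch only: `preJsonExists`, `mode`, and `tag` stand in for the file check and the two `jq` lookups in the step):

```javascript
// Decision table of the old "Enter RC mode" step: what action the step
// takes for a given pre.json state before publishing an RC.
function rcAction(preJsonExists, mode, tag) {
  if (!preJsonExists) return 'enter-rc';                 // no pre.json yet
  if (mode === 'pre' && tag === 'rc') return 'noop';     // already in RC mode
  if (mode === 'pre') return 'exit-then-enter-rc';       // wrong tag: switch
  return 'enter-rc';                                     // "exit" or unknown state
}

console.log(rcAction(false, '', ''));       // enter-rc
console.log(rcAction(true, 'pre', 'rc'));   // noop
console.log(rcAction(true, 'pre', 'beta')); // exit-then-enter-rc
console.log(rcAction(true, 'exit', 'rc'));  // enter-rc
```

The new side of the diff collapses this to a single check on the `tag` field, entering RC mode whenever the file is missing or the tag is not `rc`.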
21 changes: .github/workflows/release-check.yml (vendored)
@@ -1,21 +0,0 @@
-name: Release Check
-
-on:
-  pull_request:
-    branches:
-      - main
-
-concurrency:
-  group: release-check-${{ github.head_ref }}
-  cancel-in-progress: true
-
-jobs:
-  check-release-mode:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-
-      - name: Check release mode
-        run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"
33 changes: .github/workflows/release.yml (vendored)
@@ -22,7 +22,7 @@ jobs:
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'

       - name: Cache node_modules
         uses: actions/cache@v4
@@ -38,22 +38,31 @@ jobs:
         run: npm ci
         timeout-minutes: 2

-      - name: Check pre-release mode
-        run: node ./.github/scripts/check-pre-release-mode.mjs "main"
-
-      - name: Build packages
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
+      - name: Exit pre-release mode and clean up
+        run: |
+          echo "🔄 Ensuring we're not in pre-release mode for main branch..."
+
+          # Exit pre-release mode if we're in it
+          npx changeset pre exit || echo "Not in pre-release mode"
+
+          # Remove pre.json file if it exists (belt and suspenders approach)
+          if [ -f .changeset/pre.json ]; then
+            echo "🧹 Removing pre.json file..."
+            rm -f .changeset/pre.json
+          fi
+
+          # Verify the file is gone
+          if [ ! -f .changeset/pre.json ]; then
+            echo "✅ pre.json successfully removed"
+          else
+            echo "❌ Failed to remove pre.json"
+            exit 1
+          fi

       - name: Create Release Pull Request or Publish to npm
         uses: changesets/action@v1
         with:
-          publish: node ./.github/scripts/release.mjs
+          publish: ./.github/scripts/release.sh
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
108 changes: .github/workflows/weekly-metrics-discord.yml (vendored)
@@ -1,108 +0,0 @@
-name: Weekly Metrics to Discord
-# description: Sends weekly metrics summary to Discord channel
-
-on:
-  schedule:
-    - cron: "0 9 * * 1" # Every Monday at 9 AM
-  workflow_dispatch:
-
-permissions:
-  contents: read
-  issues: read
-  pull-requests: read
-
-jobs:
-  weekly-metrics:
-    runs-on: ubuntu-latest
-    env:
-      DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-
-      - name: Setup Node.js
-        uses: actions/setup-node@v4
-        with:
-          node-version: '20'
-
-      - name: Get dates for last 14 days
-        run: |
-          set -Eeuo pipefail
-          # Last 14 days
-          first_day=$(date -d "14 days ago" +%Y-%m-%d)
-          last_day=$(date +%Y-%m-%d)
-
-          echo "first_day=$first_day" >> $GITHUB_ENV
-          echo "last_day=$last_day" >> $GITHUB_ENV
-          echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
-          echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
-
-      - name: Generate issue metrics
-        uses: github/issue-metrics@v3
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
-          HIDE_TIME_TO_ANSWER: true
-          HIDE_LABEL_METRICS: false
-          OUTPUT_FILE: issue_metrics.md
-
-      - name: Generate PR created metrics
-        uses: github/issue-metrics@v3
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
-          OUTPUT_FILE: pr_created_metrics.md
-
-      - name: Generate PR merged metrics
-        uses: github/issue-metrics@v3
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
-          OUTPUT_FILE: pr_merged_metrics.md
-
-      - name: Debug generated metrics
-        run: |
-          set -Eeuo pipefail
-          echo "Listing markdown files in workspace:"
-          ls -la *.md || true
-          for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
-            if [ -f "$f" ]; then
-              echo "== $f (first 10 lines) =="
-              head -n 10 "$f"
-            else
-              echo "Missing $f"
-            fi
-          done
-
-      - name: Parse metrics
-        id: metrics
-        run: node .github/scripts/parse-metrics.mjs
-
-      - name: Send to Discord
-        uses: sarisia/actions-status-discord@v1
-        if: env.DISCORD_WEBHOOK != ''
-        with:
-          webhook: ${{ env.DISCORD_WEBHOOK }}
-          status: Success
-          title: "📊 Weekly Metrics Report"
-          description: |
-            **${{ env.week_of }}**
-            *${{ env.date_range }}*
-
-            **🎯 Issues**
-            • Created: ${{ steps.metrics.outputs.issues_created }}
-            • Closed: ${{ steps.metrics.outputs.issues_closed }}
-            • Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
-            • Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
-
-            **🔀 Pull Requests**
-            • Created: ${{ steps.metrics.outputs.prs_created }}
-            • Merged: ${{ steps.metrics.outputs.prs_merged }}
-            • Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
-            • Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
-
-            **📈 Visual Analytics**
-            https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
-          color: 0x58AFFF
-          username: Task Master Metrics Bot
-          avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
6 changes: .gitignore (vendored)
@@ -94,9 +94,3 @@ apps/extension/.vscode-test/

 # apps/extension
 apps/extension/vsix-build/
-
-# turbo
-.turbo
-
-# TaskMaster Workflow State (now stored in ~/.taskmaster/sessions/)
-# No longer needed in .gitignore as state is stored globally
@@ -2,7 +2,7 @@
   "mcpServers": {
     "task-master-ai": {
       "command": "npx",
-      "args": ["-y", "task-master-ai"],
+      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
       "env": {
         "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
         "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
@@ -1,6 +0,0 @@
-{
-  "$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
-  "defaultBranch": "main",
-  "ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
-  "ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
-}
@@ -85,7 +85,7 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
   "mcpServers": {
     "task-master-ai": {
       "command": "npx",
-      "args": ["-y", "task-master-ai"],
+      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
       "env": {
         "ANTHROPIC_API_KEY": "your_key_here",
         "PERPLEXITY_API_KEY": "your_key_here",
@@ -1,9 +1,9 @@
 {
   "models": {
     "main": {
-      "provider": "claude-code",
-      "modelId": "sonnet",
-      "maxTokens": 64000,
+      "provider": "anthropic",
+      "modelId": "claude-3-7-sonnet-20250219",
+      "maxTokens": 120000,
       "temperature": 0.2
     },
     "research": {
@@ -14,8 +14,8 @@
     },
     "fallback": {
       "provider": "anthropic",
-      "modelId": "claude-3-7-sonnet-20250219",
-      "maxTokens": 120000,
+      "modelId": "claude-3-5-sonnet-20241022",
+      "maxTokens": 8192,
       "temperature": 0.2
     }
   },
@@ -29,16 +29,9 @@
     "ollamaBaseURL": "http://localhost:11434/api",
     "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
     "responseLanguage": "English",
-    "enableCodebaseAnalysis": true,
     "userId": "1234567890",
     "azureBaseURL": "https://your-endpoint.azure.com/",
     "defaultTag": "master"
   },
-  "claudeCode": {},
-  "codexCli": {},
-  "grokCli": {
-    "timeout": 120000,
-    "workingDirectory": null,
-    "defaultModel": "grok-4-latest"
-  }
+  "claudeCode": {}
 }
@@ -1,188 +0,0 @@
-# Task Master Migration Roadmap
-
-## Overview
-Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.
-
-## Architecture Vision
-
-```
-┌─────────────────────────────────────────────────┐
-│                User Interfaces                  │
-├──────────┬──────────┬──────────┬────────────────┤
-│  @tm/cli │  @tm/mcp │  @tm/ext │    @tm/web     │
-│   (CLI)  │   (MCP)  │  (VSCode)│    (Future)    │
-└──────────┴──────────┴──────────┴────────────────┘
-                     │
-                     ▼
-          ┌──────────────────────┐
-          │       @tm/core       │
-          │   (Business Logic)   │
-          └──────────────────────┘
-```
-
-## Migration Phases
-
-### Phase 1: Core Extraction ✅ (In Progress)
-**Goal**: Move all business logic to @tm/core
-
-- [x] Create @tm/core package structure
-- [x] Move types and interfaces
-- [x] Implement TaskMasterCore facade
-- [x] Move storage adapters
-- [x] Move task services
-- [ ] Move AI providers
-- [ ] Move parser logic
-- [ ] Complete test coverage
-
-### Phase 2: CLI Package Creation 🚧 (Started)
-**Goal**: Create @tm/cli as a thin presentation layer
-
-- [x] Create @tm/cli package structure
-- [x] Implement Command interface pattern
-- [x] Create CommandRegistry
-- [x] Build legacy bridge/adapter
-- [x] Migrate list-tasks command
-- [ ] Migrate remaining commands one by one
-- [ ] Remove UI logic from core
-
-### Phase 3: Transitional Integration
-**Goal**: Use new packages in existing scripts without breaking changes
-
-```javascript
-// scripts/modules/commands.js gradually adopts new commands
-import { ListTasksCommand } from '@tm/cli';
-const listCommand = new ListTasksCommand();
-
-// Old interface remains the same
-programInstance
-  .command('list')
-  .action(async (options) => {
-    // Use new command internally
-    const result = await listCommand.execute(convertOptions(options));
-  });
-```
-
-### Phase 4: MCP Package
-**Goal**: Separate MCP server as its own package
-
-- [ ] Create @tm/mcp package
-- [ ] Move MCP server code
-- [ ] Use @tm/core for all logic
-- [ ] MCP becomes a thin RPC layer
-
-### Phase 5: Complete Migration
-**Goal**: Remove old scripts, pure monorepo
-
-- [ ] All commands migrated to @tm/cli
-- [ ] Remove scripts/modules/task-manager/*
-- [ ] Remove scripts/modules/commands.js
-- [ ] Update bin/task-master.js to use @tm/cli
-- [ ] Clean up dependencies
-
-## Current Transitional Strategy
-
-### 1. Adapter Pattern (commands-adapter.js)
-```javascript
-// Checks if new CLI is available and uses it
-// Falls back to legacy implementation if not
-export async function listTasksAdapter(...args) {
-  if (cliAvailable) {
-    return useNewImplementation(...args);
-  }
-  return useLegacyImplementation(...args);
-}
-```
-
-### 2. Command Bridge Pattern
-```javascript
-// Allows new commands to work in old code
-const bridge = new CommandBridge(new ListTasksCommand());
-const data = await bridge.run(legacyOptions); // Legacy style
-const result = await bridge.execute(newOptions); // New style
-```
-
-### 3. Gradual File Migration
-Instead of big-bang refactoring:
-1. Create new implementation in @tm/cli
-2. Add adapter in commands-adapter.js
-3. Update commands.js to use adapter
-4. Test both paths work
-5. Eventually remove adapter when all migrated
-
-## Benefits of This Approach
-
-1. **No Breaking Changes**: Existing CLI continues to work
-2. **Incremental PRs**: Each command can be migrated separately
-3. **Parallel Development**: New features can use new architecture
-4. **Easy Rollback**: Can disable new implementation if issues
-5. **Clear Separation**: Business logic (core) vs presentation (cli/mcp/etc)
-
-## Example PR Sequence
-
-### PR 1: Core Package Setup ✅
-- Create @tm/core
-- Move types and interfaces
-- Basic TaskMasterCore implementation
-
-### PR 2: CLI Package Foundation ✅
-- Create @tm/cli
-- Command interface and registry
-- Legacy bridge utilities
-
-### PR 3: First Command Migration
-- Migrate list-tasks to new system
-- Add adapter in scripts
-- Test both implementations
-
-### PR 4-N: Migrate Commands One by One
-- Each PR migrates 1-2 related commands
-- Small, reviewable changes
-- Continuous delivery
-
-### Final PR: Cleanup
-- Remove legacy implementations
-- Remove adapters
-- Update documentation
-
-## Testing Strategy
-
-### Dual Testing During Migration
-```javascript
-describe('List Tasks', () => {
-  it('works with legacy implementation', async () => {
-    // Force legacy
-    const result = await legacyListTasks(...);
-    expect(result).toBeDefined();
-  });
-
-  it('works with new implementation', async () => {
-    // Force new
-    const command = new ListTasksCommand();
-    const result = await command.execute(...);
-    expect(result.success).toBe(true);
-  });
-
-  it('adapter chooses correctly', async () => {
-    // Let adapter decide
-    const result = await listTasksAdapter(...);
-    expect(result).toBeDefined();
-  });
-});
-```
-
-## Success Metrics
-
-- [ ] All commands migrated without breaking changes
-- [ ] Test coverage maintained or improved
-- [ ] Performance maintained or improved
-- [ ] Cleaner, more maintainable codebase
-- [ ] Easy to add new interfaces (web, desktop, etc.)
-
-## Notes for Contributors
-
-1. **Keep PRs Small**: Migrate one command at a time
-2. **Test Both Paths**: Ensure legacy and new both work
-3. **Document Changes**: Update this roadmap as you go
-4. **Communicate**: Discuss in PRs if architecture needs adjustment
-
-This is a living document - update as the migration progresses!
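The adapter pattern the deleted roadmap describes reduces to a few lines: prefer the new implementation when it is available, otherwise fall back to the legacy one. A minimal self-contained sketch (names here are illustrative, not the repo's actual exports):

```javascript
// Sketch of the commands-adapter fallback: `newImpl` stands in for a
// @tm/cli command, `legacyImpl` for the scripts/modules implementation.
function makeAdapter(newImpl, legacyImpl) {
  return (...args) => (newImpl ? newImpl(...args) : legacyImpl(...args));
}

const legacyList = () => ({ source: 'legacy' });
const newList = () => ({ source: 'new' });

console.log(makeAdapter(newList, legacyList)().source); // "new"
console.log(makeAdapter(null, legacyList)().source);    // "legacy"
```

Because the adapter is the only call site the old code sees, each command can flip to the new implementation (or roll back) without touching its callers.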

@@ -1,912 +0,0 @@

## Summary

- Put the existing git and test workflows on rails: a repeatable, automated process that can run autonomously, with guardrails and a compact TUI for visibility.

- Flow: for a selected task, create a branch named with the tag + task id → generate tests for the first subtask (red) using the Surgical Test Generator → implement code (green) → verify tests → commit → repeat per subtask → final verify → push → open PR against the default branch.

- Build on existing rules: .cursor/rules/git_workflow.mdc, .cursor/rules/test_workflow.mdc, .claude/agents/surgical-test-generator.md, and existing CLI/core services.

## Goals

- Deterministic, resumable automation to execute the TDD loop per subtask with minimal human intervention.

- Strong guardrails: never commit to the default branch; only commit when tests pass; enforce status transitions; persist logs/state for debuggability.

- Visibility: a compact terminal UI (like lazygit) to pick a tag, view tasks, and start work; a right-side pane opens an executor terminal (via tmux) for agent coding.

- Extensible: framework-agnostic test generation via the Surgical Test Generator; detect and use the repo’s test command for execution with coverage thresholds.

## Non‑Goals (initial)

- Full multi-language runner parity beyond detection and executing the project’s test command.

- Complex GUI; start with CLI/TUI + tmux pane. IDE/extension can hook into the same state later.

- Rich executor selection UX (codex/gemini/claude) — we’ll prompt per run; defaults can come later.

## Success Criteria

- One command can autonomously complete a task's subtasks via TDD and open a PR when done.

- All commits made on a branch that includes the tag and task id (see Branch Naming); no commits to the default branch directly.

- Every subtask iteration: failing tests added first (red), then code added to pass them (green); commit only after green.

- End-to-end logs + artifacts stored in .taskmaster/reports/runs/<timestamp-or-id>/.

## Success Metrics (Phase 1)

- **Adoption**: 80% of tasks in a pilot repo completed via `tm autopilot`
- **Safety**: 0 commits to default branch; 100% of commits have green tests
- **Efficiency**: Average time from task start to PR < 30min for simple subtasks
- **Reliability**: < 5% of runs require manual intervention (timeout/conflicts)

## User Stories

- As a developer, I can run tm autopilot <taskId> and watch a structured, safe workflow execute.

- As a reviewer, I can inspect commits per subtask, and a PR summarizing the work when the task completes.

- As an operator, I can see the current step, active subtask, test status, and logs in a compact CLI view and read a final run report.

## Example Workflow Traces

### Happy Path: Complete a 3-subtask feature

```bash
# Developer starts
$ tm autopilot 42
→ Checks preflight: ✓ clean tree, ✓ npm test detected
→ Creates branch: analytics/task-42-user-metrics
→ Subtask 42.1: "Add metrics schema"
   RED: generates test_metrics_schema.test.js → 3 failures
   GREEN: implements schema.js → all pass
   COMMIT: "feat(metrics): add metrics schema (task 42.1)"
→ Subtask 42.2: "Add collection endpoint"
   RED: generates test_metrics_endpoint.test.js → 5 failures
   GREEN: implements api/metrics.js → all pass
   COMMIT: "feat(metrics): add collection endpoint (task 42.2)"
→ Subtask 42.3: "Add dashboard widget"
   RED: generates test_metrics_widget.test.js → 4 failures
   GREEN: implements components/MetricsWidget.jsx → all pass
   COMMIT: "feat(metrics): add dashboard widget (task 42.3)"
→ Final: all 3 subtasks complete
   ✓ Run full test suite → all pass
   ✓ Coverage check → 85% (meets 80% threshold)
   PUSH: confirms with user → pushed to origin
   PR: opens #123 "Task #42 [analytics]: User metrics tracking"

✓ Task 42 complete. PR: https://github.com/org/repo/pull/123
  Run report: .taskmaster/reports/runs/2025-01-15-142033/
```

### Error Recovery: Failing tests timeout

```bash
$ tm autopilot 42
→ Subtask 42.2 GREEN phase: attempt 1 fails (2 tests still red)
→ Subtask 42.2 GREEN phase: attempt 2 fails (1 test still red)
→ Subtask 42.2 GREEN phase: attempt 3 fails (1 test still red)

⚠️ Paused: Could not achieve green state after 3 attempts
📋 State saved to: .taskmaster/reports/runs/2025-01-15-142033/
   Last error: "POST /api/metrics returns 500 instead of 201"

Next steps:
  - Review diff: git diff HEAD
  - Inspect logs: cat .taskmaster/reports/runs/2025-01-15-142033/log.jsonl
  - Check test output: cat .taskmaster/reports/runs/2025-01-15-142033/test-results/subtask-42.2-green-attempt3.json
  - Resume after manual fix: tm autopilot --resume

# Developer manually fixes the issue, then:
$ tm autopilot --resume
→ Resuming subtask 42.2 GREEN phase
   GREEN: all tests pass
   COMMIT: "feat(metrics): add collection endpoint (task 42.2)"
→ Continuing to subtask 42.3...
```

### Dry Run: Preview before execution

```bash
$ tm autopilot 42 --dry-run
Autopilot Plan for Task #42 [analytics]: User metrics tracking
─────────────────────────────────────────────────────────────
Preflight:
  ✓ Working tree is clean
  ✓ Test command detected: npm test
  ✓ Tools available: git, gh, node, npm
  ✓ Current branch: main (will create new branch)

Branch & Tag:
  → Create branch: analytics/task-42-user-metrics
  → Set active tag: analytics

Subtasks (3 pending):
  1. 42.1: Add metrics schema
     - RED: generate tests in src/__tests__/schema.test.js
     - GREEN: implement src/schema.js
     - COMMIT: "feat(metrics): add metrics schema (task 42.1)"

  2. 42.2: Add collection endpoint [depends on 42.1]
     - RED: generate tests in src/api/__tests__/metrics.test.js
     - GREEN: implement src/api/metrics.js
     - COMMIT: "feat(metrics): add collection endpoint (task 42.2)"

  3. 42.3: Add dashboard widget [depends on 42.2]
     - RED: generate tests in src/components/__tests__/MetricsWidget.test.jsx
     - GREEN: implement src/components/MetricsWidget.jsx
     - COMMIT: "feat(metrics): add dashboard widget (task 42.3)"

Finalization:
  → Run full test suite with coverage
  → Push branch to origin (will confirm)
  → Create PR targeting main

Run without --dry-run to execute.
```

## High‑Level Workflow

1) Pre‑flight

- Verify clean working tree or confirm staging/commit policy (configurable).

- Detect repo type and the project’s test command (e.g., npm test, pnpm test, pytest, go test).

- Validate tools: git, gh (optional for PR), node/npm, and (if used) claude CLI.

- Load TaskMaster state and the selected task; if no subtasks exist, automatically run “expand” before working.

2) Branch & Tag Setup

- Check out the default branch and update (optional), then create a branch using Branch Naming (below).

- Map branch ↔ tag via existing tag management; explicitly set the active tag to the branch’s tag.

3) Subtask Loop (for each pending/in-progress subtask in dependency order)

- Select the next eligible subtask using tm-core TaskService getNextTask() and subtask eligibility logic.

- Red: generate or update failing tests for the subtask

  - Use the Surgical Test Generator system prompt (.claude/agents/surgical-test-generator.md) to produce high-signal tests following project conventions.

  - Run tests to confirm red; record results. If not red (already passing), skip to the next subtask or escalate.

- Green: implement code to pass tests

  - Use the executor to implement changes (initial: claude CLI prompt with focused context).

  - Re-run tests until green or the timeout/backoff policy triggers.

- Commit: when green

  - Commit tests + code with a conventional commit message. Optionally update subtask status to done.

  - Persist run step metadata/logs.

4) Finalization

- Run full test suite and coverage (if configured); optionally lint/format.

- Commit any final adjustments.

- Push branch (ask user to confirm); create PR (via gh pr create) targeting the default branch. Title format: Task #<id> [<tag>]: <title>.

5) Post‑Run

- Update task status if desired (e.g., review).

- Persist run report (JSON + markdown summary) to .taskmaster/reports/runs/<run-id>/.

## Guardrails

- Never commit to the default branch.

- Commit only if all tests (targeted and suite) pass; allow override flags.

- Enforce 80% coverage thresholds (lines/branches/functions/statements) by default; configurable.

- Timebox model ops and retries; if not green within N attempts, pause with actionable state for resume.

- Always log actions, commands, and outcomes; include a dry-run mode.

- Ask before branch creation, pushing, and opening a PR unless --no-confirm is set.
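
The bounded-retry guardrail above can be sketched as a small attempt loop. This is a sketch under stated assumptions, not the shipped implementation: `GreenRunner` is a hypothetical stand-in for one GREEN-phase attempt (implement + re-run tests), and the result shape is illustrative.

```typescript
type AttemptResult = { green: boolean; failed: number };

// Hypothetical executor call: one GREEN-phase attempt.
type GreenRunner = (attempt: number) => AttemptResult;

// Bounded retry: return 'green' on success, or 'paused' with actionable
// state once maxAttempts is exhausted (never loop forever).
function runGreenWithRetries(
  run: GreenRunner,
  maxAttempts = 3
): { status: 'green' | 'paused'; attempts: number; lastFailed: number } {
  let lastFailed = 0;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = run(attempt);
    lastFailed = result.failed;
    if (result.green) return { status: 'green', attempts: attempt, lastFailed: 0 };
  }
  return { status: 'paused', attempts: maxAttempts, lastFailed };
}
```

The 'paused' branch is what drives the error-recovery trace shown earlier: the run stops with the last failure count preserved so `--resume` has something concrete to report.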

## Integration Points (Current Repo)

- CLI: apps/cli provides command structure and UI components.

  - New command: tm autopilot (alias: task-master autopilot).

  - Reuse UI components under apps/cli/src/ui/components/ for headers/task details/next-task.

- Core services: packages/tm-core

  - TaskService for selection, status, tags.

  - TaskExecutionService for prompt formatting and executor prep.

  - Executors: claude executor and ExecutorFactory to run external tools.

  - Proposed new: WorkflowOrchestrator to drive the autonomous loop and emit progress events.

- Tag/Git utilities: scripts/modules/utils/git-utils.js and scripts/modules/task-manager/tag-management.js for branch→tag mapping and explicit tag switching.

- Rules: .cursor/rules/git_workflow.mdc and .cursor/rules/test_workflow.mdc to steer behavior and ensure consistency.

- Test generation prompt: .claude/agents/surgical-test-generator.md.

## Proposed Components

- Orchestrator (tm-core): WorkflowOrchestrator (new)

  - State machine driving phases: Preflight → Branch/Tag → SubtaskIter (Red/Green/Commit) → Finalize → PR.

  - Exposes an evented API (progress events) that the CLI can render.

  - Stores run state artifacts.

- Test Runner Adapter

  - Detects and runs tests via the project’s test command (e.g., npm test), with targeted runs where feasible.

  - API: runTargeted(files/pattern), runAll(), report summary (failures, duration, coverage); enforce 80% threshold by default.

- Git/PR Adapter

  - Encapsulates git ops: branch create/checkout, add/commit, push.

  - Optional gh integration to open a PR; fall back to instructions if gh is unavailable.

  - Confirmation gates for branch creation and pushes.

- Prompt/Exec Adapter

  - Uses the existing executor service to call the selected coding assistant (initially claude) with tight prompts: task/subtask context, surgical tests first, then minimal code to green.

- Run State + Reporting

  - JSONL log of steps, timestamps, commands, test results.

  - Markdown summary for PR description and post-run artifact.
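
The Test Runner Adapter surface described above could take roughly this shape in tm-core. A sketch only: the interface names follow the bullet list, but the exact types are assumptions.

```typescript
interface TestSummary {
  passed: number;
  failed: number;
  duration: number; // ms
  coverage?: { lines: number; branches: number; functions: number; statements: number };
}

// Sketch of the adapter API named in the component list above.
interface TestRunnerAdapter {
  runTargeted(pattern: string): Promise<TestSummary>;
  runAll(): Promise<TestSummary>;
}

// Default gate: all four coverage metrics must meet the threshold (80 by default).
function meetsCoverage(summary: TestSummary, threshold = 80): boolean {
  const c = summary.coverage;
  if (!c) return false;
  return (
    c.lines >= threshold &&
    c.branches >= threshold &&
    c.functions >= threshold &&
    c.statements >= threshold
  );
}
```

Treating missing coverage data as a failed gate (rather than a pass) keeps the guardrail conservative when a runner doesn't report coverage.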

## CLI UX (MVP)

- Command: tm autopilot [taskId]

- Flags: --dry-run, --no-push, --no-pr, --no-confirm, --force, --max-attempts <n>, --runner <auto|custom>, --commit-scope <scope>

- Output: compact header (project, tag, branch), current phase, subtask line, last test summary, next actions.

- Resume: if interrupted, tm autopilot --resume picks up from the last checkpoint in run state.

### TUI with tmux (Linear Execution)

- Left pane: tag selector, task list (status/priority), start/expand shortcuts; "Start" triggers the next task or a selected task.

- Right pane: executor terminal (tmux split) that runs the coding agent (claude-code/codex). Autopilot can hand over to the right pane during green.

- MCP integration: use MCP tools for task queries/updates and for shell/test invocations where available.

## TUI Layout (tmux-based)

### Pane Structure

```
┌─────────────────────────────────────┬──────────────────────────────────┐
│ Task Navigator (left)               │ Executor Terminal (right)        │
│                                     │                                  │
│ Project: my-app                     │ $ tm autopilot --executor-mode   │
│ Branch: analytics/task-42           │ > Running subtask 42.2 GREEN...  │
│ Tag: analytics                      │ > Implementing endpoint...       │
│                                     │ > Tests: 3 passed, 0 failed      │
│ Tasks:                              │ > Ready to commit                │
│ → 42 [in-progress] User metrics     │                                  │
│   → 42.1 [done] Schema              │ [Live output from Claude Code]   │
│   → 42.2 [active] Endpoint ◀        │                                  │
│   → 42.3 [pending] Dashboard        │                                  │
│                                     │                                  │
│ [s] start [p] pause [q] quit        │                                  │
└─────────────────────────────────────┴──────────────────────────────────┘
```

### Implementation Notes

- **Left pane**: `apps/cli/src/ui/tui/navigator.ts` (new, uses `blessed` or `ink`)
- **Right pane**: spawned via `tmux split-window -h` running `tm autopilot --executor-mode`
- **Communication**: shared state file `.taskmaster/state/current-run.json` + file watching or an event stream
- **Keybindings**:
  - `s` - Start selected task
  - `p` - Pause/resume current run
  - `q` - Quit (with confirmation if run active)
  - `↑/↓` - Navigate task list
  - `Enter` - Expand/collapse subtasks

## Prompt Composition (Detailed)

### System Prompt Assembly

Prompts are composed in three layers:

1. **Base rules** (loaded in order from `.cursor/rules/` and `.claude/agents/`):
   - `git_workflow.mdc` → git commit conventions, branch policy, PR guidelines
   - `test_workflow.mdc` → TDD loop requirements, coverage thresholds, test structure
   - `surgical-test-generator.md` → test generation methodology, project-specific test patterns

2. **Task context injection**:

   ```
   You are implementing:
   Task #42 [analytics]: User metrics tracking
   Subtask 42.2: Add collection endpoint

   Description:
   Implement POST /api/metrics endpoint to collect user metrics events

   Acceptance criteria:
   - POST /api/metrics accepts { userId, eventType, timestamp }
   - Validates input schema (reject missing/invalid fields)
   - Persists to database
   - Returns 201 on success with created record
   - Returns 400 on validation errors

   Dependencies:
   - Subtask 42.1 (metrics schema) is complete

   Current phase: RED (generate failing tests)
   Test command: npm test
   Test file convention: src/**/*.test.js (vitest framework detected)
   Branch: analytics/task-42-user-metrics
   Project language: JavaScript (Node.js)
   ```

3. **Phase-specific instructions**:
   - **RED phase**: "Generate minimal failing tests for this subtask. Do NOT implement any production code. Only create test files. Confirm tests fail with clear error messages indicating missing implementation."
   - **GREEN phase**: "Implement minimal code to pass the failing tests. Follow existing project patterns in `src/`. Only modify files necessary for this subtask. Keep changes focused and reviewable."

### Example Full Prompt (RED Phase)

```markdown
<SYSTEM PROMPT>
[Contents of .cursor/rules/git_workflow.mdc]
[Contents of .cursor/rules/test_workflow.mdc]
[Contents of .claude/agents/surgical-test-generator.md]

<TASK CONTEXT>
You are implementing:
Task #42.2: Add collection endpoint

Description:
Implement POST /api/metrics endpoint to collect user metrics events

Acceptance criteria:
- POST /api/metrics accepts { userId, eventType, timestamp }
- Validates input schema (reject missing/invalid fields)
- Persists to database using MetricsSchema from subtask 42.1
- Returns 201 on success with created record
- Returns 400 on validation errors with details

Dependencies: Subtask 42.1 (metrics schema) is complete

<INSTRUCTION>
Generate failing tests for this subtask. Follow project conventions:
- Test file: src/api/__tests__/metrics.test.js
- Framework: vitest (detected from package.json)
- Test cases to cover:
  * POST /api/metrics with valid payload → should return 201 (will fail: endpoint not implemented)
  * POST /api/metrics with missing userId → should return 400 (will fail: validation not implemented)
  * POST /api/metrics with invalid timestamp → should return 400 (will fail: validation not implemented)
  * POST /api/metrics should persist to database → should save record (will fail: persistence not implemented)

Do NOT implement the endpoint code yet. Only create test file(s).
Confirm tests fail with messages like "Cannot POST /api/metrics" or "endpoint not defined".

Output format:
1. File path to create: src/api/__tests__/metrics.test.js
2. Complete test code
3. Command to run: npm test src/api/__tests__/metrics.test.js
```
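
The three-layer assembly described above amounts to ordered concatenation. A sketch, assuming a simple loader: `loadRule` stands in for reading a file from `rulesPath`, and the exact section separators are illustrative, not the real CLI's format.

```typescript
// Hypothetical file loader; the real CLI would read from rulesPath on disk.
type RuleLoader = (name: string) => string;

// Compose the system prompt in the documented order:
// base rules (loadOrder), then task context, then phase instructions.
function composePrompt(
  loadRule: RuleLoader,
  loadOrder: string[],
  taskContext: string,
  phaseInstruction: string
): string {
  const rules = loadOrder.map(loadRule).join('\n');
  return [
    '<SYSTEM PROMPT>',
    rules,
    '<TASK CONTEXT>',
    taskContext,
    '<INSTRUCTION>',
    phaseInstruction
  ].join('\n');
}
```

Because `loadOrder` is explicit in config, reordering rules never requires a code change, only a config edit.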

### Example Full Prompt (GREEN Phase)

```markdown
<SYSTEM PROMPT>
[Contents of .cursor/rules/git_workflow.mdc]
[Contents of .cursor/rules/test_workflow.mdc]

<TASK CONTEXT>
Task #42.2: Add collection endpoint
[same context as RED phase]

<CURRENT STATE>
Tests created in RED phase:
- src/api/__tests__/metrics.test.js
- 5 tests written, all failing as expected

Test output:
  FAIL src/api/__tests__/metrics.test.js
    POST /api/metrics
      ✗ should return 201 with valid payload (endpoint not found)
      ✗ should return 400 with missing userId (endpoint not found)
      ✗ should return 400 with invalid timestamp (endpoint not found)
      ✗ should persist to database (endpoint not found)

<INSTRUCTION>
Implement minimal code to make all tests pass.

Guidelines:
- Create/modify file: src/api/metrics.js
- Use existing patterns from src/api/ (e.g., src/api/users.js for reference)
- Import MetricsSchema from subtask 42.1 (src/models/schema.js)
- Implement validation, persistence, and response handling
- Follow project error handling conventions
- Keep implementation focused on this subtask only

After implementation:
1. Run tests: npm test src/api/__tests__/metrics.test.js
2. Confirm all 5 tests pass
3. Report results

Output format:
1. File(s) created/modified
2. Implementation code
3. Test command and results
```

### Prompt Loading Configuration

See `.taskmaster/config.json` → `prompts` section for paths and load order.

## Configuration Schema

### .taskmaster/config.json

```json
{
  "autopilot": {
    "enabled": true,
    "requireCleanWorkingTree": true,
    "commitTemplate": "{type}({scope}): {msg}",
    "defaultCommitType": "feat",
    "maxGreenAttempts": 3,
    "testTimeout": 300000
  },
  "test": {
    "runner": "auto",
    "coverageThresholds": {
      "lines": 80,
      "branches": 80,
      "functions": 80,
      "statements": 80
    },
    "targetedRunPattern": "**/*.test.js"
  },
  "git": {
    "branchPattern": "{tag}/task-{id}-{slug}",
    "pr": {
      "enabled": true,
      "base": "default",
      "bodyTemplate": ".taskmaster/templates/pr-body.md"
    }
  },
  "prompts": {
    "rulesPath": ".cursor/rules",
    "testGeneratorPath": ".claude/agents/surgical-test-generator.md",
    "loadOrder": ["git_workflow.mdc", "test_workflow.mdc"]
  }
}
```

### Configuration Fields

#### autopilot
- `enabled` (boolean): Enable/disable autopilot functionality
- `requireCleanWorkingTree` (boolean): Require clean git state before starting
- `commitTemplate` (string): Template for commit messages (tokens: `{type}`, `{scope}`, `{msg}`)
- `defaultCommitType` (string): Default commit type (feat, fix, chore, etc.)
- `maxGreenAttempts` (number): Maximum retry attempts to achieve green tests (default: 3)
- `testTimeout` (number): Timeout in milliseconds per test run (default: 300000 = 5min)
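
Rendering the `commitTemplate` tokens is simple substitution. A minimal sketch; the helper name is hypothetical, but the token set matches the config above:

```typescript
// Render "{type}({scope}): {msg}" style templates from the autopilot config.
function renderCommitMessage(
  template: string,
  tokens: { type: string; scope: string; msg: string }
): string {
  return template
    .replace('{type}', tokens.type)
    .replace('{scope}', tokens.scope)
    .replace('{msg}', tokens.msg);
}
```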

#### test
- `runner` (string): Test runner detection mode (`"auto"` or an explicit command like `"npm test"`)
- `coverageThresholds` (object): Minimum coverage percentages required
  - `lines`, `branches`, `functions`, `statements` (number): Threshold percentages (0-100)
- `targetedRunPattern` (string): Glob pattern for targeted subtask test runs

#### git
- `branchPattern` (string): Branch naming pattern (tokens: `{tag}`, `{id}`, `{slug}`)
- `pr.enabled` (boolean): Enable automatic PR creation
- `pr.base` (string): Target branch for PRs (`"default"` uses repo default, or specify like `"main"`)
- `pr.bodyTemplate` (string): Path to PR body template file (optional)
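
The `branchPattern` tokens can be rendered the same way as the commit template; the `{slug}` derivation is not specified in this document, so the slug rule below is an assumption (lowercase, non-alphanumeric runs collapsed to `-`):

```typescript
// Assumed slug rule for {slug}: lowercase, non-alphanumeric runs become '-'.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}

// Render the branchPattern tokens from the git config above.
function renderBranchName(pattern: string, tag: string, id: string, title: string): string {
  return pattern
    .replace('{tag}', tag)
    .replace('{id}', id)
    .replace('{slug}', slugify(title));
}
```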

#### prompts
- `rulesPath` (string): Directory containing rule files (e.g., `.cursor/rules`)
- `testGeneratorPath` (string): Path to the test generator prompt file
- `loadOrder` (array): Order to load rule files from `rulesPath`

### Environment Variables

```bash
# Required for executor
ANTHROPIC_API_KEY=sk-ant-...   # Claude API key

# Optional: for PR creation
GITHUB_TOKEN=ghp_...           # GitHub personal access token

# Optional: for other executors (future)
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
```

## Run Artifacts & Observability

### Per-Run Artifact Structure

Each autopilot run creates a timestamped directory with complete traceability:

```
.taskmaster/reports/runs/2025-01-15-142033/
├── manifest.json        # run metadata (task id, start/end time, status)
├── log.jsonl            # timestamped event stream
├── commits.txt          # list of commit SHAs made during run
├── test-results/
│   ├── subtask-42.1-red.json
│   ├── subtask-42.1-green.json
│   ├── subtask-42.2-red.json
│   ├── subtask-42.2-green-attempt1.json
│   ├── subtask-42.2-green-attempt2.json
│   ├── subtask-42.2-green-attempt3.json
│   └── final-suite.json
└── pr.md                # generated PR body
```
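
The run id in the layout above looks like a UTC timestamp (`YYYY-MM-DD-HHMMSS`). A sketch of deriving the id and artifact paths from the run's start time; the exact format is an assumption based on the example directory name:

```typescript
// Build the run id (YYYY-MM-DD-HHMMSS, UTC) and artifact paths for a run.
function runPaths(start: Date) {
  const p = (n: number, w = 2) => String(n).padStart(w, '0');
  const runId =
    `${start.getUTCFullYear()}-${p(start.getUTCMonth() + 1)}-${p(start.getUTCDate())}` +
    `-${p(start.getUTCHours())}${p(start.getUTCMinutes())}${p(start.getUTCSeconds())}`;
  const root = `.taskmaster/reports/runs/${runId}`;
  return {
    runId,
    root,
    manifest: `${root}/manifest.json`,
    log: `${root}/log.jsonl`,
    testResults: `${root}/test-results`
  };
}
```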

### manifest.json Format

```json
{
  "runId": "2025-01-15-142033",
  "taskId": "42",
  "tag": "analytics",
  "branch": "analytics/task-42-user-metrics",
  "startTime": "2025-01-15T14:20:33Z",
  "endTime": "2025-01-15T14:45:12Z",
  "status": "completed",
  "subtasksCompleted": ["42.1", "42.2", "42.3"],
  "subtasksFailed": [],
  "totalCommits": 3,
  "prUrl": "https://github.com/org/repo/pull/123",
  "finalCoverage": {
    "lines": 85.3,
    "branches": 82.1,
    "functions": 88.9,
    "statements": 85.0
  }
}
```

### log.jsonl Format

Event stream in JSON Lines format for easy parsing and debugging:

```jsonl
{"ts":"2025-01-15T14:20:33Z","phase":"preflight","status":"ok","details":{"testCmd":"npm test","gitClean":true}}
{"ts":"2025-01-15T14:20:45Z","phase":"branch","status":"ok","branch":"analytics/task-42-user-metrics"}
{"ts":"2025-01-15T14:21:00Z","phase":"red","subtask":"42.1","status":"ok","tests":{"failed":3,"passed":0}}
{"ts":"2025-01-15T14:22:15Z","phase":"green","subtask":"42.1","status":"ok","tests":{"passed":3,"failed":0},"attempts":2}
{"ts":"2025-01-15T14:22:20Z","phase":"commit","subtask":"42.1","status":"ok","sha":"a1b2c3d","message":"feat(metrics): add metrics schema (task 42.1)"}
{"ts":"2025-01-15T14:23:00Z","phase":"red","subtask":"42.2","status":"ok","tests":{"failed":5,"passed":0}}
{"ts":"2025-01-15T14:25:30Z","phase":"green","subtask":"42.2","status":"error","tests":{"passed":3,"failed":2},"attempts":3,"error":"Max attempts reached"}
{"ts":"2025-01-15T14:25:35Z","phase":"pause","reason":"max_attempts","nextAction":"manual_review"}
```
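
Because each line is standalone JSON, summarizing a run takes only a few lines. A sketch that counts events per phase and surfaces errors; the event fields follow the log above:

```typescript
type LogEvent = { ts: string; phase: string; status?: string; subtask?: string };

// Parse a log.jsonl payload: count events per phase and collect error events.
function summarizeLog(jsonl: string): { byPhase: Record<string, number>; errors: LogEvent[] } {
  const events = jsonl
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as LogEvent);
  const byPhase: Record<string, number> = {};
  for (const e of events) byPhase[e.phase] = (byPhase[e.phase] ?? 0) + 1;
  return { byPhase, errors: events.filter((e) => e.status === 'error') };
}
```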

### Test Results Format

Each test run stores detailed results:

```json
{
  "subtask": "42.2",
  "phase": "green",
  "attempt": 3,
  "timestamp": "2025-01-15T14:25:30Z",
  "command": "npm test src/api/__tests__/metrics.test.js",
  "exitCode": 1,
  "duration": 2340,
  "summary": {
    "total": 5,
    "passed": 3,
    "failed": 2,
    "skipped": 0
  },
  "failures": [
    {
      "test": "POST /api/metrics should return 201 with valid payload",
      "error": "Expected status 201, got 500",
      "stack": "..."
    }
  ],
  "coverage": {
    "lines": 78.5,
    "branches": 75.0,
    "functions": 80.0,
    "statements": 78.5
  }
}
```
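
A result file in this shape is what the commit guardrail inspects. A sketch of that gate, assuming the fields above; the function names are illustrative:

```typescript
type TestResult = {
  exitCode: number;
  summary: { total: number; passed: number; failed: number; skipped: number };
  failures: { test: string; error: string }[];
};

// Commit gate: only a fully green run (exit 0, zero failures) may proceed.
function canCommit(result: TestResult): boolean {
  return result.exitCode === 0 && result.summary.failed === 0;
}

// Surface failure lines for the pause report shown to the developer.
function failureLines(result: TestResult): string[] {
  return result.failures.map((f) => `${f.test}: ${f.error}`);
}
```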

## Execution Model

### Orchestration vs Direct Execution

The autopilot system uses an **orchestration model** rather than direct code execution:

**Orchestrator Role** (tm-core WorkflowOrchestrator):
- Maintains a state machine tracking the current phase (RED/GREEN/COMMIT) per subtask
- Validates preconditions (tests pass, git state clean, etc.)
- Returns "work units" describing what needs to be done next
- Records completion and advances to the next phase
- Persists state for resumability

**Executor Role** (Claude Code/AI session via MCP):
- Queries the orchestrator for the next work unit
- Executes the work (generates tests, writes code, runs tests, makes commits)
- Reports results back to the orchestrator
- Handles file operations and tool invocations

**Why This Approach?**
- Leverages existing AI capabilities (Claude Code) rather than duplicating them
- The MCP protocol provides clean separation between state management and execution
- Allows human oversight and intervention at each phase
- Simpler to implement: the orchestrator is pure state logic, no code generation needed
- Enables multiple executor types (Claude Code, other AI tools, human developers)

**Example Flow**:
```typescript
// Claude Code (via MCP) queries the orchestrator
const workUnit = await orchestrator.getNextWorkUnit('42');
// => {
//   phase: 'RED',
//   subtask: '42.1',
//   action: 'Generate failing tests for metrics schema',
//   context: { title, description, dependencies, testFile: 'src/__tests__/schema.test.js' }
// }

// Claude Code executes the work (writes the test file, runs tests),
// then reports back
await orchestrator.completeWorkUnit('42', '42.1', 'RED', {
  success: true,
  testsCreated: ['src/__tests__/schema.test.js'],
  testsFailed: 3
});

// Query again for the next phase
const nextWorkUnit = await orchestrator.getNextWorkUnit('42');
// => { phase: 'GREEN', subtask: '42.1', action: 'Implement code to pass tests', ... }
```
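
The phase advance behind `getNextWorkUnit` can be sketched as a pure transition function (RED → GREEN → COMMIT, then the next subtask). This is a minimal sketch: real work-unit construction, precondition checks, and persistence are omitted, and `nextPhase` is a hypothetical name.

```typescript
type Phase = 'RED' | 'GREEN' | 'COMMIT';

// Advance within a subtask, or move to the next subtask after COMMIT.
// Returns null when the last subtask's COMMIT completes (task done).
function nextPhase(
  current: { subtask: string; phase: Phase },
  subtasks: string[]
): { subtask: string; phase: Phase } | null {
  if (current.phase === 'RED') return { subtask: current.subtask, phase: 'GREEN' };
  if (current.phase === 'GREEN') return { subtask: current.subtask, phase: 'COMMIT' };
  const i = subtasks.indexOf(current.subtask);
  return i < subtasks.length - 1 ? { subtask: subtasks[i + 1], phase: 'RED' } : null;
}
```

Keeping the transition pure is what makes the orchestrator "pure state logic": resumability is just replaying to the last recorded state.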
|
|
||||||
|
|
||||||
## Design Decisions

### Why commit per subtask instead of per task?

**Decision**: Commit after each subtask reaches its green state, not after the entire task.

**Rationale**:
- Atomic commits make code review easier (reviewers can follow the logical progression)
- Easier to revert a single subtask if it causes issues downstream
- Matches the TDD loop's natural checkpoint and cognitive boundary
- Provides resumability points if the run is interrupted

**Trade-off**: More commits per task (squash-merge in PRs can consolidate them if desired)

### Why not support parallel subtask execution?

**Decision**: Sequential subtask execution in Phase 1; parallel execution deferred to Phase 3.

**Rationale**:
- Subtasks often have implicit dependencies (e.g., schema before endpoint, endpoint before UI)
- Simpler orchestrator state machine (less complexity means faster to ship)
- Parallel execution requires an explicit dependency DAG and conflict resolution
- Can be added in Phase 3 once the core workflow is proven stable

**Trade-off**: Slower for truly independent subtasks (mitigated by keeping subtasks small and focused)

### Why require 80% coverage by default?

**Decision**: Enforce an 80% coverage threshold (lines/branches/functions/statements) before allowing commits.

**Rationale**:
- Industry-standard baseline for production code quality
- Forces test generation to be comprehensive, not superficial
- Configurable per project via `.taskmaster/config.json` if too strict
- Prevents "green tests" that only cover happy paths

**Trade-off**: May require more test-generation iterations; the threshold can be lowered per project

### Why use tmux instead of a rich GUI?

**Decision**: The MVP uses tmux split panes for the TUI, not an Electron or web-based GUI.

**Rationale**:
- tmux is universally available on dev machines; no installation burden
- Terminal-first workflows match the developer mental model (no context switching)
- Simpler to implement and maintain; a GUI can be added later via extensions
- State stored in files allows IDE/extension integration without coupling

**Trade-off**: Less visual polish than a GUI; requires tmux familiarity

### Why not support multiple executors (codex/gemini/claude) in Phase 1?

**Decision**: Start with the Claude executor only; add others in Phase 2+.

**Rationale**:
- Reduces scope and complexity for the initial delivery
- Claude Code is already integrated with the existing executor service
- The executor abstraction already exists; adding more is straightforward later
- Different executors may need different prompt strategies (requires experimentation)

**Trade-off**: Users are locked to Claude initially; this can be worked around with manual executor selection

## Risks and Mitigations

- Model hallucination/large diffs: restrict prompt scope; enforce minimal changes; show diff previews (optional) before commit.
- Flaky tests: allow retries; isolate targeted runs for speed, then run the full suite before commit.
- Environment variability: detect runners/tools; provide fallbacks and actionable errors.
- PR creation fails: still push and print manual commands; persist the PR body for reuse.

## Open Questions

1) Slugging rules for branch names: any length limits or normalization beyond sanitizing the `{slug}` token?
2) Standard PR body sections beyond the run report (e.g., checklist, coverage table)?
3) Default executor prompt fine-tuning once codex/gemini integration is available.
4) Where to store persistent TUI state (pane layout, last selection): in `.taskmaster/state.json`?

## Branch Naming

- Include both the tag and the task id in the branch name to make lineage explicit.
- Default pattern: `<tag>/task-<id>[-slug]` (e.g., `master/task-12`, `tag-analytics/task-4-user-auth`).
- Configurable via `.taskmaster/config.json`: `git.branchPattern` supports the tokens `{tag}`, `{id}`, `{slug}`.

## PR Base Branch

- Use the repository's default branch (detected via git) unless overridden.
- Title format: `Task #<id> [<tag>]: <title>`.

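The token substitution and title format described above can be sketched as follows. The sanitization rules (lowercase, hyphen-collapsing, a length cap) and the helper names (`slugify`, `buildBranchName`, `buildPrTitle`) are illustrative assumptions, not the shipped implementation:

```typescript
// Hypothetical sketch of the {tag}/{id}/{slug} token substitution described above.
// Sanitize rules (lowercase, alphanumerics and hyphens, length cap) are assumptions.
function slugify(text: string, maxLength = 40): string {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into single hyphens
    .replace(/^-+|-+$/g, "")     // trim leading/trailing hyphens
    .slice(0, maxLength);
}

function buildBranchName(
  tag: string,
  id: string,
  title?: string,
  pattern = "{tag}/task-{id}{slug}" // assumed default; configurable via git.branchPattern
): string {
  const slug = title ? `-${slugify(title)}` : "";
  return pattern.replace("{tag}", tag).replace("{id}", id).replace("{slug}", slug);
}

function buildPrTitle(id: string, tag: string, title: string): string {
  return `Task #${id} [${tag}]: ${title}`;
}

console.log(buildBranchName("analytics", "4", "User Auth")); // "analytics/task-4-user-auth"
console.log(buildPrTitle("12", "master", "Add metrics schema"));
```

A real implementation would also guard against git ref-name restrictions (no `..`, no trailing `.lock`, etc.), which this sketch skips.
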
## RPG Mapping (Repository Planning Graph)

Functional nodes (capabilities):

- Autopilot Orchestration → drives TDD loop and lifecycle
- Test Generation (Surgical) → produces failing tests from subtask context
- Test Execution + Coverage → runs suite, enforces thresholds
- Git/Branch/PR Management → safe operations and PR creation
- TUI/Terminal Integration → interactive control and visibility via tmux
- MCP Integration → structured task/status/context operations

Structural nodes (code organization):

- packages/tm-core:
  - services/workflow-orchestrator.ts (new)
  - services/test-runner-adapter.ts (new)
  - services/git-adapter.ts (new)
  - existing: task-service.ts, task-execution-service.ts, executors/*
- apps/cli:
  - src/commands/autopilot.command.ts (new)
  - src/ui/tui/ (new tmux/TUI helpers)
- scripts/modules:
  - reuse utils/git-utils.js, task-manager/tag-management.js
- .claude/agents/:
  - surgical-test-generator.md

Edges (data/control flow):

- Autopilot → Test Generation → Test Execution → Git Commit → loop
- Autopilot → Git Adapter (branch, tag, PR)
- Autopilot → TUI (event stream) → tmux pane control
- Autopilot → MCP tools for task/status updates
- Test Execution → Coverage gate → Autopilot decision

Topological traversal (implementation order):

1) Git/Test adapters (foundations)
2) Orchestrator skeleton + events
3) CLI autopilot command and dry-run
4) Surgical test-gen integration and execution gate
5) PR creation, run reports, resumability

## Phased Roadmap

- Phase 0: Spike
  - Implement CLI skeleton `tm autopilot` with dry-run showing planned steps from a real task + subtasks.
  - Detect test runner (package.json) and git state; render a preflight report.
- Phase 1: Core Rails (State Machine & Orchestration)
  - Implement WorkflowOrchestrator in tm-core as a **state machine** that tracks TDD phases per subtask.
  - The orchestrator **guides** the current AI session (Claude Code/MCP client) rather than executing code itself.
  - Add Git/Test adapters for status checks and validation (not direct execution).
  - WorkflowOrchestrator API:
    - `getNextWorkUnit(taskId)` → returns the next phase to execute (RED/GREEN/COMMIT) with context
    - `completeWorkUnit(taskId, subtaskId, phase, result)` → records completion and advances state
    - `getRunState(taskId)` → returns current progress and resumability data
  - MCP integration: expose work-unit endpoints so Claude Code can query "what to do next" and report back.
  - Branch/tag mapping via existing tag-management APIs.
  - Run report persisted under `.taskmaster/reports/runs/` with state checkpoints for resumability.
- Phase 2: PR + Resumability
  - Add `gh` PR creation with a well-formed body using the run report.
  - Introduce resumable checkpoints and a `--resume` flag.
  - Add coverage enforcement and an optional lint/format step.
- Phase 3: Extensibility + Guardrails
  - Add support for basic pytest/go test adapters.
  - Add safeguards: diff preview mode, manual confirm gates, aggressive minimal-change prompts.
  - Optional: small TUI panel and extension panel leveraging the same run-state file.

## References (Repo)

- Test Workflow: .cursor/rules/test_workflow.mdc
- Git Workflow: .cursor/rules/git_workflow.mdc
- CLI: apps/cli/src/commands/start.command.ts, apps/cli/src/ui/components/*.ts
- Core Services: packages/tm-core/src/services/task-service.ts, task-execution-service.ts
- Executors: packages/tm-core/src/executors/*
- Git Utilities: scripts/modules/utils/git-utils.js
- Tag Management: scripts/modules/task-manager/tag-management.js
- Surgical Test Generator: .claude/agents/surgical-test-generator.md

@@ -1,91 +0,0 @@
<context>
# Overview
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.

We follow the Commander class pattern and reuse task retrieval from the `show` command flow. Extremely minimal for a 1-hour hackathon timeline.

# Core Features
- `start` command (Commander class style)
- Hard-coded executor: `claude-code`
- Standardized prompt designed for minimal changes following existing patterns
- Shows claude-code output (no streaming)
- Git status check for success detection
- Auto-mark task done if successful

# User Experience
```
task-master start 12
```
1) Fetches Task #12 details
2) Builds a standardized prompt with task context
3) Runs claude-code with the prompt
4) Shows output
5) Checks git status for changes
6) Auto-marks the task done if changes are detected
</context>

<PRD>
# Technical Architecture

- Command pattern:
  - Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts), with task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)

- Task retrieval:
  - Use `@tm/core` via `createTaskMasterCore` to get the task by ID
  - Extract: id, title, description, details

- Executor (ultra-simple approach):
  - Execute the `claude "full prompt here"` command directly
  - The prompt tells Claude to first run `tm show <task_id>` to get task details
  - Then tells Claude to implement the code changes
  - This opens the Claude CLI interface naturally in the current terminal
  - No subprocess management needed; just execute the command

- Execution flow:
  1) Validate `<task_id>` exists; exit with an error if not
  2) Build a standardized prompt that includes instructions to run `tm show <task_id>`
  3) Execute the `claude "prompt"` command directly in the terminal
  4) Claude CLI opens, runs `tm show`, then implements changes
  5) After the Claude session ends, run `git status --porcelain` to detect changes
  6) If changes are detected, auto-run `task-master set-status --id=<task_id> --status=done`

- Success criteria:
  - Success = exit code 0 AND git shows modified/created files
  - Print changed file paths; warn if no changes are detected

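The git-change heuristic in the flow above can be sketched as follows. Parsing `git status --porcelain` output (two status columns, a space, then the path) is standard porcelain v1 behavior, but the helper names and the surrounding wiring are illustrative:

```typescript
// Sketch of the success heuristic: parse `git status --porcelain` output and
// report which paths changed. Helper names are illustrative, not the shipped code.
function parsePorcelain(output: string): string[] {
  return output
    .split("\n")
    .filter((line) => line.trim().length > 0)
    // porcelain v1 lines look like "XY <path>"; the path starts at column 3
    .map((line) => line.slice(3).trim());
}

function detectSuccess(
  exitCode: number,
  porcelainOutput: string
): { success: boolean; changedFiles: string[] } {
  const changedFiles = parsePorcelain(porcelainOutput);
  return { success: exitCode === 0 && changedFiles.length > 0, changedFiles };
}

const sample = " M src/schema.js\n?? src/__tests__/schema.test.js\n";
console.log(detectSuccess(0, sample));
```

In the real command, `porcelainOutput` would come from running `git status --porcelain` via `child_process` after the Claude session ends.
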
# Development Roadmap

MVP (ship in ~1 hour):
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
2) Validate the task exists via tm-core
3) Build a prompt that tells Claude to run `tm show <task_id>` then implement
4) Execute the `claude "prompt"` command, then check git status and auto-mark done

# Risks and Mitigations
- Executor availability: error clearly if the `claude-code` provider fails
- False success: the git-change heuristic is acceptable for a hackathon MVP

# Appendix

**Standardized Prompt Template:**
```
You are an AI coding assistant with access to this repository's codebase.

First, run this command to get the task details:
tm show <task_id>

Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>

Begin by running tm show <task_id> to understand what needs to be implemented.
```

**Key References:**
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
- Node.js `child_process.exec()` - For executing the `claude "prompt"` command
</PRD>

@@ -1,130 +0,0 @@
# Phase 0: Spike - Autonomous TDD Workflow ✅ COMPLETE

## Objective
Validate feasibility and build foundational understanding before full implementation.

## Status
**COMPLETED** - All deliverables implemented and validated.

See `apps/cli/src/commands/autopilot.command.ts` for the implementation.

## Scope
- Implement CLI skeleton `tm autopilot` with dry-run mode
- Show planned steps from a real task with subtasks
- Detect test runner from package.json
- Detect git state and render a preflight report

## Deliverables

### 1. CLI Command Skeleton
- Create `apps/cli/src/commands/autopilot.command.ts`
- Support the `tm autopilot <taskId>` command
- Implement the `--dry-run` flag
- Basic help text and usage information

### 2. Preflight Detection System
- Detect test runner from package.json (npm test, pnpm test, etc.)
- Check git working tree state (clean/dirty)
- Validate required tools are available (git, gh, node/npm)
- Detect the default branch

### 3. Dry-Run Execution Plan Display
Display the planned execution for a task, including:
- Preflight check status
- Branch name that would be created
- Tag that would be set
- List of subtasks in execution order
- For each subtask:
  - RED phase: test file that would be created
  - GREEN phase: implementation files that would be modified
  - COMMIT: commit message that would be used
- Finalization steps: test suite run, coverage check, push, PR creation

### 4. Task Loading & Validation
- Load the task from TaskMaster state
- Validate the task exists and has subtasks
- If there are no subtasks, show a message about needing to expand first
- Show dependency order for subtasks

## Example Output

```bash
$ tm autopilot 42 --dry-run

Autopilot Plan for Task #42 [analytics]: User metrics tracking
─────────────────────────────────────────────────────────────

Preflight Checks:
  ✓ Working tree is clean
  ✓ Test command detected: npm test
  ✓ Tools available: git, gh, node, npm
  ✓ Current branch: main (will create new branch)
  ✓ Task has 3 subtasks ready to execute

Branch & Tag:
  → Will create branch: analytics/task-42-user-metrics
  → Will set active tag: analytics

Execution Plan (3 subtasks):

  1. Subtask 42.1: Add metrics schema
     RED: Generate tests → src/__tests__/schema.test.js
     GREEN: Implement code → src/schema.js
     COMMIT: "feat(metrics): add metrics schema (task 42.1)"

  2. Subtask 42.2: Add collection endpoint [depends on 42.1]
     RED: Generate tests → src/api/__tests__/metrics.test.js
     GREEN: Implement code → src/api/metrics.js
     COMMIT: "feat(metrics): add collection endpoint (task 42.2)"

  3. Subtask 42.3: Add dashboard widget [depends on 42.2]
     RED: Generate tests → src/components/__tests__/MetricsWidget.test.jsx
     GREEN: Implement code → src/components/MetricsWidget.jsx
     COMMIT: "feat(metrics): add dashboard widget (task 42.3)"

Finalization:
  → Run full test suite with coverage (threshold: 80%)
  → Push branch to origin (will confirm)
  → Create PR targeting main

Estimated commits: 3
Estimated duration: ~20-30 minutes (depends on implementation complexity)

Run without --dry-run to execute.
```

## Success Criteria
- Dry-run output is clear and matches the expected workflow
- Preflight detection works correctly on the project repo
- Task loading integrates with existing TaskMaster state
- No actual git operations or file modifications occur in dry-run mode

## Out of Scope
- Actual test generation
- Actual code implementation
- Git operations (branch creation, commits, push)
- PR creation
- Test execution

## Implementation Notes
- Reuse the existing `TaskService` from `packages/tm-core`
- Use existing git utilities from `scripts/modules/utils/git-utils.js`
- Load task/subtask data from `.taskmaster/tasks/tasks.json`
- Detect the test command via the package.json `scripts.test` field

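The test-command detection note above can be sketched roughly as below. Reading `scripts.test` from a parsed package.json is standard; choosing the package manager from which lockfile is present is an assumption made for illustration:

```typescript
// Sketch of test-command detection from a parsed package.json object.
// The lockfile-based package-manager choice is an illustrative assumption.
interface PackageJson {
  scripts?: Record<string, string>;
}

function detectTestCommand(
  pkg: PackageJson,
  lockfiles: string[] // filenames present in the repo root
): string | null {
  if (!pkg.scripts?.test) return null; // no test script configured
  if (lockfiles.includes("pnpm-lock.yaml")) return "pnpm test";
  if (lockfiles.includes("yarn.lock")) return "yarn test";
  return "npm test";
}

console.log(detectTestCommand({ scripts: { test: "vitest run" } }, ["pnpm-lock.yaml"]));
// "pnpm test"
```

The preflight report would surface the `null` case as a failed check rather than guessing a command.
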
## Dependencies
- Existing TaskMaster CLI structure
- Existing task storage format
- Git utilities

## Estimated Effort
2-3 days

## Validation
Test dry-run mode with:
- Task with 1 subtask
- Task with multiple subtasks
- Task with dependencies between subtasks
- Task without subtasks (should show a warning)
- Dirty git working tree (should warn)
- Missing tools (should error with a helpful message)

@@ -1,369 +0,0 @@
# Phase 1: Core Rails - State Machine & Orchestration

## Objective
Build the WorkflowOrchestrator as a state machine that guides AI sessions through the TDD workflow, rather than directly executing code.

## Architecture Overview

### Execution Model
The orchestrator acts as a **state manager and guide**, not a code executor:

```
┌─────────────────────────────────────────────────────────────┐
│  Claude Code (MCP Client)                                   │
│  - Queries "what to do next"                                │
│  - Executes work (writes tests, code, runs commands)        │
│  - Reports completion                                       │
└────────────────┬────────────────────────────────────────────┘
                 │ MCP Protocol
                 ▼
┌─────────────────────────────────────────────────────────────┐
│  WorkflowOrchestrator (tm-core)                             │
│  - Maintains state machine (RED → GREEN → COMMIT)           │
│  - Returns work units with context                          │
│  - Validates preconditions                                  │
│  - Records progress                                         │
│  - Persists state for resumability                          │
└─────────────────────────────────────────────────────────────┘
```

### Why This Approach?
1. **Separation of Concerns**: State management is separate from code execution
2. **Leverage Existing Tools**: Uses Claude Code's capabilities instead of reimplementing them
3. **Human-in-the-Loop**: Easy to inspect state and intervene at any phase
4. **Simpler Implementation**: The orchestrator is pure logic; no AI model integration needed
5. **Flexible Executors**: Any tool (Claude Code, a human, another AI) can execute work units

## Core Components

### 1. WorkflowOrchestrator Service
**Location**: `packages/tm-core/src/services/workflow-orchestrator.service.ts`

**Responsibilities**:
- Track the current phase (RED/GREEN/COMMIT) per subtask
- Generate work units with context for each phase
- Validate phase completion criteria
- Advance the state machine on successful completion
- Handle errors and retry logic
- Persist run state for resumability

**API**:
```typescript
interface WorkflowOrchestrator {
  // Start a new autopilot run
  startRun(taskId: string, options?: RunOptions): Promise<RunContext>;

  // Get the next work unit to execute
  getNextWorkUnit(runId: string): Promise<WorkUnit | null>;

  // Report work unit completion
  completeWorkUnit(
    runId: string,
    workUnitId: string,
    result: WorkUnitResult
  ): Promise<void>;

  // Get the current run state
  getRunState(runId: string): Promise<RunState>;

  // Pause/resume
  pauseRun(runId: string): Promise<void>;
  resumeRun(runId: string): Promise<void>;
}

interface WorkUnit {
  id: string;                    // Unique work unit ID
  phase: 'RED' | 'GREEN' | 'COMMIT';
  subtaskId: string;             // e.g., "42.1"
  action: string;                // Human-readable description
  context: WorkUnitContext;      // All info needed to execute
  preconditions: Precondition[]; // Checks before execution
}

interface WorkUnitContext {
  taskId: string;
  taskTitle: string;
  subtaskTitle: string;
  subtaskDescription: string;
  dependencies: string[]; // Completed subtask IDs
  testCommand: string;    // e.g., "npm test"

  // Phase-specific context
  redPhase?: {
    testFile: string;      // Where to create the test
    testFramework: string; // e.g., "vitest"
    acceptanceCriteria: string[];
  };

  greenPhase?: {
    testFile: string; // Test to make pass
    implementationHints: string[];
    expectedFiles: string[]; // Files likely to modify
  };

  commitPhase?: {
    commitMessage: string;   // Pre-generated message
    filesToCommit: string[]; // Files modified in RED+GREEN
  };
}

interface WorkUnitResult {
  success: boolean;
  phase: 'RED' | 'GREEN' | 'COMMIT';

  // RED phase results
  testsCreated?: string[];
  testsFailed?: number;

  // GREEN phase results
  testsPassed?: number;
  filesModified?: string[];
  attempts?: number;

  // COMMIT phase results
  commitSha?: string;

  // Common
  error?: string;
  logs?: string;
}

interface RunState {
  runId: string;
  taskId: string;
  status: 'running' | 'paused' | 'completed' | 'failed';
  currentPhase: 'RED' | 'GREEN' | 'COMMIT';
  currentSubtask: string;
  completedSubtasks: string[];
  failedSubtasks: string[];
  startTime: Date;
  lastUpdateTime: Date;

  // Resumability
  checkpoint: {
    subtaskId: string;
    phase: 'RED' | 'GREEN' | 'COMMIT';
    attemptNumber: number;
  };
}
```

|
|
||||||
### 2. State Machine Logic
|
|
||||||
|
|
||||||
**Phase Transitions**:
|
|
||||||
```
|
|
||||||
START → RED(subtask 1) → GREEN(subtask 1) → COMMIT(subtask 1)
|
|
||||||
↓
|
|
||||||
RED(subtask 2) ← ─ ─ ─ ┘
|
|
||||||
↓
|
|
||||||
GREEN(subtask 2)
|
|
||||||
↓
|
|
||||||
COMMIT(subtask 2)
|
|
||||||
↓
|
|
||||||
(repeat for remaining subtasks)
|
|
||||||
↓
|
|
||||||
FINALIZE → END
|
|
||||||
```
|
|
||||||
|
|
||||||
**Phase Rules**:
|
|
||||||
- **RED**: Can only transition to GREEN if tests created and failing
|
|
||||||
- **GREEN**: Can only transition to COMMIT if tests passing (attempt < maxAttempts)
|
|
||||||
- **COMMIT**: Can only transition to next RED if commit successful
|
|
||||||
- **FINALIZE**: Can only start if all subtasks completed
|
|
||||||
|
|
||||||
**Preconditions**:
|
|
||||||
- RED: No uncommitted changes (or staged from previous GREEN that failed)
|
|
||||||
- GREEN: RED phase complete, tests exist and are failing
|
|
||||||
- COMMIT: GREEN phase complete, all tests passing, coverage meets threshold
|
|
||||||
|
|
||||||
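A minimal sketch of the transition rules above; the function shape and result field names are illustrative, not the orchestrator's actual API:

```typescript
// Minimal sketch of the RED → GREEN → COMMIT transition rules described above.
// Field names on PhaseResult are illustrative.
type Phase = "RED" | "GREEN" | "COMMIT";

interface PhaseResult {
  testsCreated?: number;
  testsFailing?: number;
  testsPassing?: boolean;
  committed?: boolean;
}

function nextPhase(current: Phase, result: PhaseResult): Phase | "NEXT_SUBTASK" {
  switch (current) {
    case "RED":
      // advance only if tests were created and are currently failing
      if ((result.testsCreated ?? 0) > 0 && (result.testsFailing ?? 0) > 0) return "GREEN";
      throw new Error("RED phase incomplete: tests must exist and fail");
    case "GREEN":
      if (result.testsPassing) return "COMMIT";
      throw new Error("GREEN phase incomplete: tests must pass");
    case "COMMIT":
      if (result.committed) return "NEXT_SUBTASK";
      throw new Error("COMMIT phase incomplete: commit must succeed");
  }
  throw new Error("unreachable");
}

console.log(nextPhase("RED", { testsCreated: 1, testsFailing: 3 })); // "GREEN"
```

Throwing on an unmet rule mirrors the design's intent that a phase cannot be skipped; the real orchestrator would record the failure and retry instead of crashing.
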
### 3. MCP Integration

**New MCP Tools** (expose the WorkflowOrchestrator via MCP):
```typescript
// Start an autopilot run
mcp__task_master_ai__autopilot_start(taskId: string, dryRun?: boolean)

// Get the next work unit
mcp__task_master_ai__autopilot_next_work_unit(runId: string)

// Complete the current work unit
mcp__task_master_ai__autopilot_complete_work_unit(
  runId: string,
  workUnitId: string,
  result: WorkUnitResult
)

// Get run state
mcp__task_master_ai__autopilot_get_state(runId: string)

// Pause/resume
mcp__task_master_ai__autopilot_pause(runId: string)
mcp__task_master_ai__autopilot_resume(runId: string)
```

|
|
||||||
### 4. Git/Test Adapters
|
|
||||||
|
|
||||||
**GitAdapter** (`packages/tm-core/src/services/git-adapter.service.ts`):
|
|
||||||
- Check working tree status
|
|
||||||
- Validate branch state
|
|
||||||
- Read git config (user, remote, default branch)
|
|
||||||
- **Does NOT execute** git commands (that's executor's job)
|
|
||||||
|
|
||||||
**TestAdapter** (`packages/tm-core/src/services/test-adapter.service.ts`):
|
|
||||||
- Detect test framework from package.json
|
|
||||||
- Parse test output (failures, passes, coverage)
|
|
||||||
- Validate coverage thresholds
|
|
||||||
- **Does NOT run** tests (that's executor's job)
|
|
||||||
|
|
||||||
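The TestAdapter's threshold validation could look roughly like this. The input shape assumes an Istanbul-style `coverage-summary.json` (a `total` object with a `pct` per metric), which is an assumption about the test runner's output format:

```typescript
// Sketch of coverage-threshold validation over an Istanbul-style summary.
// The 80% default matches the design decision above; the shape is an assumption.
interface CoverageSummary {
  total: Record<string, { pct: number }>; // lines, branches, functions, statements
}

function validateCoverage(
  summary: CoverageSummary,
  threshold = 80
): { ok: boolean; failing: string[] } {
  const failing = Object.entries(summary.total)
    .filter(([, metric]) => metric.pct < threshold)
    .map(([name]) => name);
  return { ok: failing.length === 0, failing };
}

const summary: CoverageSummary = {
  total: {
    lines: { pct: 91.2 },
    branches: { pct: 76.4 }, // below threshold
    functions: { pct: 88.0 },
    statements: { pct: 90.5 }
  }
};
console.log(validateCoverage(summary)); // ok is false; "branches" is failing
```

The orchestrator would use the `failing` list to explain *why* the COMMIT precondition was rejected, not just that it was.
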
### 5. Run State Persistence

**Storage Location**: `.taskmaster/reports/runs/<runId>/`

**Files**:
- `state.json` - Current run state (for resumability)
- `log.jsonl` - Event stream (timestamped work unit completions)
- `manifest.json` - Run metadata
- `work-units.json` - All work units generated for this run

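Writing to `log.jsonl` amounts to serializing one JSON object per line and appending it; the event field names here are illustrative:

```typescript
// Sketch of serializing one event per line for log.jsonl.
// Field names are illustrative; one-JSON-object-per-line is the point.
interface RunEvent {
  timestamp: string;
  subtaskId: string;
  phase: "RED" | "GREEN" | "COMMIT";
  success: boolean;
}

function toJsonlLine(event: RunEvent): string {
  // appended to .taskmaster/reports/runs/<runId>/log.jsonl
  return JSON.stringify(event) + "\n";
}

const line = toJsonlLine({
  timestamp: new Date(0).toISOString(),
  subtaskId: "42.1",
  phase: "RED",
  success: true
});
console.log(line.trimEnd());
```

Append-only JSONL keeps the event stream cheap to write and trivially replayable when rebuilding a run report.
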
**Example `state.json`**:
```json
{
  "runId": "2025-01-15-142033",
  "taskId": "42",
  "status": "paused",
  "currentPhase": "GREEN",
  "currentSubtask": "42.2",
  "completedSubtasks": ["42.1"],
  "failedSubtasks": [],
  "checkpoint": {
    "subtaskId": "42.2",
    "phase": "GREEN",
    "attemptNumber": 2
  },
  "startTime": "2025-01-15T14:20:33Z",
  "lastUpdateTime": "2025-01-15T14:35:12Z"
}
```

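Resuming from a persisted checkpoint like the one above could be sketched as below: the orchestrator re-enters the state machine at the checkpointed subtask and phase instead of restarting the run. The `maxAttempts` handling and the helper name are assumptions:

```typescript
// Sketch of resuming from state.json: re-issue the checkpointed work unit
// rather than restarting the run. maxAttempts handling is an assumption.
interface Checkpoint {
  subtaskId: string;
  phase: "RED" | "GREEN" | "COMMIT";
  attemptNumber: number;
}

interface PersistedState {
  status: "running" | "paused" | "completed" | "failed";
  completedSubtasks: string[];
  checkpoint: Checkpoint;
}

function resumePoint(state: PersistedState, maxAttempts = 3): Checkpoint {
  if (state.status === "completed") throw new Error("run already completed");
  if (state.checkpoint.attemptNumber >= maxAttempts) {
    throw new Error(`subtask ${state.checkpoint.subtaskId} exceeded ${maxAttempts} attempts`);
  }
  return state.checkpoint; // re-issue the same work unit
}

const state: PersistedState = {
  status: "paused",
  completedSubtasks: ["42.1"],
  checkpoint: { subtaskId: "42.2", phase: "GREEN", attemptNumber: 2 }
};
console.log(resumePoint(state));
```

For the example state above, this resumes at subtask 42.2 in its GREEN phase, attempt 2, matching what the `--resume` flag in Phase 2 is meant to do.
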
|
|
||||||
## Implementation Plan
|
|
||||||
|
|
||||||
### Step 1: WorkflowOrchestrator Skeleton
|
|
||||||
- [ ] Create `workflow-orchestrator.service.ts` with interfaces
|
|
||||||
- [ ] Implement state machine logic (phase transitions)
|
|
||||||
- [ ] Add run state persistence (state.json, log.jsonl)
|
|
||||||
- [ ] Write unit tests for state machine
|
|
||||||
|
|
||||||
### Step 2: Work Unit Generation
|
|
||||||
- [ ] Implement `getNextWorkUnit()` with context assembly
|
|
||||||
- [ ] Generate RED phase work units (test file paths, criteria)
|
|
||||||
- [ ] Generate GREEN phase work units (implementation hints)
|
|
||||||
- [ ] Generate COMMIT phase work units (commit messages)
|
|
||||||
|
|
||||||
### Step 3: Git/Test Adapters
|
|
||||||
- [ ] Create GitAdapter for status checks only
|
|
||||||
- [ ] Create TestAdapter for output parsing only
|
|
||||||
- [ ] Add precondition validation using adapters
|
|
||||||
- [ ] Write adapter unit tests
|
|
||||||
|
|
||||||
### Step 4: MCP Integration
|
|
||||||
- [ ] Add MCP tool definitions in `packages/mcp-server/src/tools/`
|
|
||||||
- [ ] Wire up WorkflowOrchestrator to MCP tools
|
|
||||||
- [ ] Test MCP tools via Claude Code
|
|
||||||
- [ ] Document MCP workflow in CLAUDE.md
|
|
||||||
|
|
||||||
### Step 5: CLI Integration
|
|
||||||
- [ ] Update `autopilot.command.ts` to call WorkflowOrchestrator
|
|
||||||
- [ ] Add `--interactive` mode that shows work units and waits for completion
|
|
||||||
- [ ] Add `--resume` flag to continue paused runs
|
|
||||||
- [ ] Test end-to-end flow
|
|
||||||
|
|
||||||
### Step 6: Integration Testing
|
|
||||||
- [ ] Create test task with 2-3 subtasks
|
|
||||||
- [ ] Run autopilot start → get work unit → complete → repeat
|
|
||||||
- [ ] Verify state persistence and resumability
|
|
||||||
- [ ] Test failure scenarios (test failures, git issues)
|
|
||||||
|
|
||||||
## Success Criteria
- [ ] WorkflowOrchestrator can generate work units for all phases
- [ ] MCP tools allow Claude Code to query and complete work units
- [ ] State persists correctly between work unit completions
- [ ] A run can be paused and resumed from its checkpoint
- [ ] Adapters validate preconditions without executing commands
- [ ] End-to-end: Claude Code can complete a simple task via work units

## Out of Scope (Phase 1)
- Actual git operations (branch creation, commits) - the executor handles this
- Actual test execution - the executor handles this
- PR creation - deferred to Phase 2
- TUI interface - deferred to Phase 3
- Coverage enforcement - deferred to Phase 2

## Example Usage Flow

```bash
# Terminal 1: Claude Code session
$ claude

# In Claude Code (via MCP):
> Start autopilot for task 42
[Calls mcp__task_master_ai__autopilot_start(42)]
→ Run started: run-2025-01-15-142033

> Get next work unit
[Calls mcp__task_master_ai__autopilot_next_work_unit(run-2025-01-15-142033)]
→ Work unit: RED phase for subtask 42.1
→ Action: Generate failing tests for metrics schema
→ Test file: src/__tests__/schema.test.js
→ Framework: vitest

> [Claude Code creates test file, runs tests]

> Complete work unit
[Calls mcp__task_master_ai__autopilot_complete_work_unit(
  run-2025-01-15-142033,
  workUnit-42.1-RED,
  { success: true, testsCreated: ['src/__tests__/schema.test.js'], testsFailed: 3 }
)]
→ Work unit completed. State saved.

> Get next work unit
[Calls mcp__task_master_ai__autopilot_next_work_unit(run-2025-01-15-142033)]
→ Work unit: GREEN phase for subtask 42.1
→ Action: Implement code to pass failing tests
→ Test file: src/__tests__/schema.test.js
→ Expected implementation: src/schema.js

> [Claude Code implements schema.js, runs tests, confirms all pass]

> Complete work unit
[...]
→ Work unit completed. Ready for COMMIT.

> Get next work unit
[...]
→ Work unit: COMMIT phase for subtask 42.1
→ Commit message: "feat(metrics): add metrics schema (task 42.1)"
→ Files to commit: src/__tests__/schema.test.js, src/schema.js

> [Claude Code stages files and commits]

> Complete work unit
[...]
→ Subtask 42.1 complete! Moving to 42.2...
```

|
|
||||||
## Dependencies
|
|
||||||
- Existing TaskService (task loading, status updates)
|
|
||||||
- Existing PreflightChecker (environment validation)
|
|
||||||
- Existing TaskLoaderService (dependency ordering)
|
|
||||||
- MCP server infrastructure
|
|
||||||
|
|
||||||
## Estimated Effort
|
|
||||||
7-10 days
|
|
||||||
|
|
||||||
## Next Phase
|
|
||||||
Phase 2 will add:
|
|
||||||
- PR creation via gh CLI
|
|
||||||
- Coverage enforcement
|
|
||||||
- Enhanced error recovery
|
|
||||||
- Full resumability testing
|
|
||||||
@@ -1,433 +0,0 @@
|
|||||||
# Phase 2: PR + Resumability - Autonomous TDD Workflow

## Objective

Add PR creation with GitHub CLI integration, resumable checkpoints for interrupted runs, and enhanced guardrails with coverage enforcement.

## Scope

- GitHub PR creation via `gh` CLI
- Well-formed PR body using the run report
- Resumable checkpoints and a `--resume` flag
- Coverage enforcement before finalization
- Optional lint/format step
- Enhanced error recovery

## Deliverables

### 1. PR Creation Integration

**PRAdapter** (`packages/tm-core/src/services/pr-adapter.ts`):

```typescript
class PRAdapter {
  async isGHAvailable(): Promise<boolean>
  async createPR(options: PROptions): Promise<PRResult>
  async getPRTemplate(runReport: RunReport): Promise<string>

  // Fallback for missing gh CLI
  async getManualPRInstructions(options: PROptions): Promise<string>
}

interface PROptions {
  branch: string
  base: string
  title: string
  body: string
  draft?: boolean
}

interface PRResult {
  url: string
  number: number
}
```

**PR Title Format:**

```
Task #<id> [<tag>]: <title>
```

Example: `Task #42 [analytics]: User metrics tracking`

**PR Body Template:**

Located at `.taskmaster/templates/pr-body.md`:

```markdown
## Summary

Implements Task #42 from the TaskMaster autonomous workflow.

**Branch:** {branch}
**Tag:** {tag}
**Subtasks completed:** {subtaskCount}

{taskDescription}

## Subtasks

{subtasksList}

## Test Coverage

| Metric | Coverage |
|--------|----------|
| Lines | {lines}% |
| Branches | {branches}% |
| Functions | {functions}% |
| Statements | {statements}% |

**All subtasks passed with {totalTests} tests.**

## Commits

{commitsList}

## Run Report

Full execution report: `.taskmaster/reports/runs/{runId}/`

---

🤖 Generated with [Task Master](https://github.com/eyaltoledano/claude-task-master) autonomous TDD workflow
```

**Token replacement:**

- `{branch}` → branch name
- `{tag}` → active tag
- `{subtaskCount}` → number of completed subtasks
- `{taskDescription}` → task description from TaskMaster
- `{subtasksList}` → markdown list of subtask titles
- `{lines}`, `{branches}`, `{functions}`, `{statements}` → coverage percentages
- `{totalTests}` → total test count
- `{commitsList}` → markdown list of commit SHAs and messages
- `{runId}` → run ID timestamp
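As a sketch, the token substitution above can be a single regex pass over the template. `renderPrBody` and its inputs are illustrative names under that assumption, not the actual Task Master API:

```typescript
// Hypothetical helper: fill the PR body template's {token} placeholders.
type TemplateValues = Record<string, string | number>

function renderPrBody(template: string, values: TemplateValues): string {
  // Replace each known {token}; unknown tokens are left intact so a
  // customized template fails visibly rather than silently.
  return template.replace(/\{(\w+)\}/g, (match: string, key: string) =>
    key in values ? String(values[key]) : match
  )
}

const body = renderPrBody('**Branch:** {branch} ({subtaskCount} subtasks)', {
  branch: 'analytics/task-42-user-metrics',
  subtaskCount: 3
})
// body === '**Branch:** analytics/task-42-user-metrics (3 subtasks)'
```

Leaving unknown tokens in place (rather than emptying them) makes a typo in a customized template show up in the rendered PR body, where a reviewer will catch it.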
### 2. GitHub CLI Integration

**Detection:**

```bash
which gh
```

If not found, show fallback instructions:

```bash
✓ Branch pushed: analytics/task-42-user-metrics
✗ gh CLI not found - cannot create PR automatically

To create PR manually:
  gh pr create \
    --base main \
    --head analytics/task-42-user-metrics \
    --title "Task #42 [analytics]: User metrics tracking" \
    --body-file .taskmaster/reports/runs/2025-01-15-142033/pr.md

Or visit:
  https://github.com/org/repo/compare/main...analytics/task-42-user-metrics
```

**Confirmation gate** (unless the `--no-confirm` flag is set):

```bash
Ready to create PR:
  Title: Task #42 [analytics]: User metrics tracking
  Base: main
  Head: analytics/task-42-user-metrics

Create PR? [Y/n]
```

### 3. Resumable Workflow

**State Checkpoint** (`state.json`):

```json
{
  "runId": "2025-01-15-142033",
  "taskId": "42",
  "phase": "subtask-loop",
  "currentSubtask": "42.2",
  "currentPhase": "green",
  "attempts": 2,
  "completedSubtasks": ["42.1"],
  "commits": ["a1b2c3d"],
  "branch": "analytics/task-42-user-metrics",
  "tag": "analytics",
  "canResume": true,
  "pausedAt": "2025-01-15T14:25:35Z",
  "pausedReason": "max_attempts_reached",
  "nextAction": "manual_review_required"
}
```

**Resume Command:**

```bash
$ tm autopilot --resume

Resuming run: 2025-01-15-142033
  Task: #42 [analytics] User metrics tracking
  Branch: analytics/task-42-user-metrics
  Last subtask: 42.2 (GREEN phase, attempt 2/3 failed)
  Paused: 5 minutes ago

  Reason: Could not achieve green state after 3 attempts
  Last error: POST /api/metrics returns 500 instead of 201

Resume from subtask 42.2 GREEN phase? [Y/n]
```

**Resume logic:**

1. Load state from `.taskmaster/reports/runs/<runId>/state.json`
2. Verify branch still exists and is checked out
3. Verify no uncommitted changes (unless `--force`)
4. Continue from last checkpoint phase
5. Update state file as execution progresses
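Step 1 above implies the loaded checkpoint must be validated before resuming (the "corrupted state" risk noted later). A minimal sketch, assuming the state shape mirrors the checkpoint JSON shown earlier; `validateRunState` is an illustrative name, not the real implementation:

```typescript
// Shape mirrors the state.json checkpoint above (subset of fields).
interface RunState {
  runId: string
  taskId: string
  branch: string
  currentSubtask: string
  currentPhase: string
  canResume: boolean
}

function validateRunState(raw: unknown): RunState {
  const s = raw as Partial<RunState>
  // Reject corrupted checkpoints instead of resuming into a bad state.
  for (const field of ['runId', 'taskId', 'branch', 'currentSubtask', 'currentPhase'] as const) {
    if (typeof s[field] !== 'string') {
      throw new Error(`Corrupt state: missing ${field}`)
    }
  }
  if (s.canResume !== true) {
    throw new Error(`Run ${s.runId} is not resumable`)
  }
  return s as RunState
}
```

Failing loudly here is what makes a `--force-reset` escape hatch meaningful: the user sees exactly which field was lost rather than a confusing mid-run error.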
**Multiple interrupted runs:**

```bash
$ tm autopilot --resume

Found 2 resumable runs:
  1. 2025-01-15-142033 - Task #42 (paused 5 min ago at subtask 42.2 GREEN)
  2. 2025-01-14-103022 - Task #38 (paused 2 hours ago at subtask 38.3 RED)

Select run to resume [1-2]:
```

### 4. Coverage Enforcement

**Coverage Check Phase** (before finalization):

```typescript
async function enforceCoverage(runId: string): Promise<void> {
  await testRunner.runAll()
  const coverage = await testRunner.getCoverage()

  const thresholds = config.test.coverageThresholds
  const failures: string[] = []

  if (coverage.lines < thresholds.lines) {
    failures.push(`Lines: ${coverage.lines}% < ${thresholds.lines}%`)
  }
  // ... check branches, functions, statements

  if (failures.length > 0) {
    throw new CoverageError(
      `Coverage thresholds not met:\n${failures.join('\n')}`
    )
  }

  // Store coverage in run report
  await storeRunArtifact(runId, 'coverage.json', coverage)
}
```

**Handling coverage failures:**

```bash
⚠️ Coverage check failed:
  Lines: 78.5% < 80%
  Branches: 75.0% < 80%

Options:
  1. Add more tests and resume
  2. Lower thresholds in .taskmaster/config.json
  3. Skip coverage check: tm autopilot --resume --skip-coverage

Run paused. Fix coverage and resume with:
  tm autopilot --resume
```

### 5. Optional Lint/Format Step

**Configuration:**

```json
{
  "autopilot": {
    "finalization": {
      "lint": {
        "enabled": true,
        "command": "npm run lint",
        "fix": true,
        "failOnError": false
      },
      "format": {
        "enabled": true,
        "command": "npm run format",
        "commitChanges": true
      }
    }
  }
}
```

**Execution:**

```bash
Finalization Steps:

✓ All tests passing (12 tests, 0 failures)
✓ Coverage thresholds met (85% lines, 82% branches)

LINT   Running linter... ⏳
LINT   ✓ No lint errors

FORMAT Running formatter... ⏳
FORMAT ✓ Formatted 3 files
FORMAT ✓ Committed formatting changes: "chore: auto-format code"

PUSH   Pushing to origin... ⏳
PUSH   ✓ Pushed analytics/task-42-user-metrics

PR     Creating pull request... ⏳
PR     ✓ Created PR #123
       https://github.com/org/repo/pull/123
```

### 6. Enhanced Error Recovery

**Pause Points:**

- Max GREEN attempts reached (current)
- Coverage check failed (new)
- Lint errors (if `failOnError: true`)
- Git push failed (new)
- PR creation failed (new)

**Each pause saves:**

- Full state checkpoint
- Last command output
- Suggested next actions
- Resume instructions

**Automatic recovery attempts:**

- Git push: retry up to 3 times with backoff
- PR creation: fall back to manual instructions
- Lint: auto-fix if enabled, otherwise pause
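The push retry policy above can be captured in a small generic helper. A sketch, assuming the `pushRetries`/`pushRetryDelay` config keys and a linearly growing delay; `withRetry` and its callback are illustrative, not the actual implementation:

```typescript
// Retry an async operation up to `retries` times with growing backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  delayMs = 5000
): Promise<T> {
  let lastError: unknown
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < retries) {
        // Back off before the next attempt (5s, 10s, ... by default).
        await new Promise((resolve) => setTimeout(resolve, delayMs * attempt))
      }
    }
  }
  // All attempts failed: surface the last error so the pause point can
  // record it in the checkpoint.
  throw lastError
}
```

Usage would wrap the actual push, e.g. `await withRetry(() => git.push(branch), config.git.pushRetries, config.git.pushRetryDelay)`.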
### 7. Finalization Phase Enhancement

**Updated workflow:**

1. Run full test suite
2. Check coverage thresholds → pause if failed
3. Run lint (if enabled) → pause if failed and `failOnError: true`
4. Run format (if enabled) → auto-commit changes
5. Confirm push (unless `--no-confirm`)
6. Push branch → retry on failure
7. Generate PR body from template
8. Create PR via gh → fall back to manual instructions
9. Update task status to 'review' (configurable)
10. Save final run report
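The workflow above is an ordered pipeline where any step may pause the run. A minimal sketch of that control flow, with placeholder step implementations (the names and `StepResult` type are assumptions, not the real API):

```typescript
type StepResult = 'ok' | 'pause'

interface FinalizationStep {
  name: string
  run: () => Promise<StepResult>
}

// Run steps in order; a 'pause' result stops the pipeline so a checkpoint
// can be saved and `tm autopilot --resume` can pick up from that step.
async function runFinalization(steps: FinalizationStep[]): Promise<string[]> {
  const completed: string[] = []
  for (const step of steps) {
    const result = await step.run()
    if (result === 'pause') {
      return completed
    }
    completed.push(step.name)
  }
  return completed
}
```

Modeling each step as data rather than inline code is what makes the `--skip-coverage` / `--skip-lint` flags cheap to implement: skipping is just filtering the step list.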
**Final output:**

```bash
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

✅ Task #42 [analytics]: User metrics tracking - COMPLETE

Branch: analytics/task-42-user-metrics
Subtasks completed: 3/3
Commits: 3
Total tests: 12 (12 passed, 0 failed)
Coverage: 85% lines, 82% branches, 88% functions, 85% statements

PR #123: https://github.com/org/repo/pull/123

Run report: .taskmaster/reports/runs/2025-01-15-142033/

Next steps:
  - Review PR and request changes if needed
  - Merge when ready
  - Task status updated to 'review'

Completed in 24 minutes
```

## CLI Updates

**New flags:**

- `--resume` → Resume from last checkpoint
- `--skip-coverage` → Skip coverage checks
- `--skip-lint` → Skip lint step
- `--skip-format` → Skip format step
- `--skip-pr` → Push branch but don't create PR
- `--draft-pr` → Create draft PR instead of ready-for-review

## Configuration Updates

**Add to `.taskmaster/config.json`:**

```json
{
  "autopilot": {
    "finalization": {
      "lint": {
        "enabled": false,
        "command": "npm run lint",
        "fix": true,
        "failOnError": false
      },
      "format": {
        "enabled": false,
        "command": "npm run format",
        "commitChanges": true
      },
      "updateTaskStatus": "review"
    }
  },
  "git": {
    "pr": {
      "enabled": true,
      "base": "default",
      "bodyTemplate": ".taskmaster/templates/pr-body.md",
      "draft": false
    },
    "pushRetries": 3,
    "pushRetryDelay": 5000
  }
}
```

## Success Criteria

- Can create a PR automatically with a well-formed body
- Can resume interrupted runs from any checkpoint
- Coverage checks prevent low-quality code from being merged
- Clear error messages and recovery paths for all failure modes
- Run reports include full PR context for review

## Out of Scope (defer to Phase 3)

- Multiple test framework support (pytest, go test)
- Diff preview before commits
- TUI panel implementation
- Extension/IDE integration

## Testing Strategy

- Mock the `gh` CLI for PR creation tests
- Test resume from each possible pause point
- Test coverage failure scenarios
- Test lint/format integration with mock commands
- End-to-end test with PR creation on a test repo

## Dependencies

- Phase 1 completed (core workflow)
- GitHub CLI (`gh`) installed (optional; fallback provided)
- Test framework supports coverage output

## Estimated Effort

1-2 weeks

## Risks & Mitigations

- **Risk:** GitHub CLI auth issues
  - **Mitigation:** Clear auth setup docs; fall back to manual instructions
- **Risk:** PR body template doesn't match all project needs
  - **Mitigation:** Make the template customizable via a config path
- **Risk:** Resume state gets corrupted
  - **Mitigation:** Validate state on load; provide a `--force-reset` option
- **Risk:** Coverage calculation differs between runs
  - **Mitigation:** Store coverage with each test run for comparison

## Validation

Test with:

- Successful PR creation end-to-end
- Resume from GREEN attempt failure
- Resume from coverage failure
- Resume from lint failure
- Missing `gh` CLI (fallback to manual)
- Lint/format integration enabled
- Multiple interrupted runs (selection UI)

@@ -1,534 +0,0 @@
# Phase 3: Extensibility + Guardrails - Autonomous TDD Workflow

## Objective

Add multi-language/framework support, enhanced safety guardrails, a TUI interface, and extensibility for IDE/editor integration.

## Scope

- Multi-language test runner support (pytest, go test, etc.)
- Enhanced safety: diff preview, confirmation gates, minimal-change prompts
- Optional TUI panel with tmux integration
- State-based extension API for IDE integration
- Parallel subtask execution (experimental)

## Deliverables

### 1. Multi-Language Test Runner Support

**Extend TestRunnerAdapter:**

```typescript
class TestRunnerAdapter {
  // Existing methods...

  async detectLanguage(): Promise<Language>
  async detectFramework(language: Language): Promise<Framework>
  async getFrameworkAdapter(framework: Framework): Promise<FrameworkAdapter>
}

enum Language {
  JavaScript = 'javascript',
  TypeScript = 'typescript',
  Python = 'python',
  Go = 'go',
  Rust = 'rust'
}

enum Framework {
  Vitest = 'vitest',
  Jest = 'jest',
  Pytest = 'pytest',
  GoTest = 'gotest',
  CargoTest = 'cargotest'
}

interface FrameworkAdapter {
  runTargeted(pattern: string): Promise<TestResults>
  runAll(): Promise<TestResults>
  parseCoverage(output: string): Promise<CoverageReport>
  getTestFilePattern(): string
  getTestFileExtension(): string
}
```

**Framework-specific adapters:**

**PytestAdapter** (`packages/tm-core/src/services/test-adapters/pytest-adapter.ts`):

```typescript
class PytestAdapter implements FrameworkAdapter {
  async runTargeted(pattern: string): Promise<TestResults> {
    const output = await exec(`pytest ${pattern} --json-report`)
    return this.parseResults(output)
  }

  async runAll(): Promise<TestResults> {
    const output = await exec('pytest --cov --json-report')
    return this.parseResults(output)
  }

  async parseCoverage(output: string): Promise<CoverageReport> {
    // Parse pytest-cov XML output
  }

  getTestFilePattern(): string {
    return '**/test_*.py'
  }

  getTestFileExtension(): string {
    return '.py'
  }
}
```

**GoTestAdapter** (`packages/tm-core/src/services/test-adapters/gotest-adapter.ts`):

```typescript
class GoTestAdapter implements FrameworkAdapter {
  async runTargeted(pattern: string): Promise<TestResults> {
    const output = await exec(`go test ${pattern} -json`)
    return this.parseResults(output)
  }

  async runAll(): Promise<TestResults> {
    const output = await exec('go test ./... -coverprofile=coverage.out -json')
    return this.parseResults(output)
  }

  async parseCoverage(output: string): Promise<CoverageReport> {
    // Parse go test coverage output
  }

  getTestFilePattern(): string {
    return '**/*_test.go'
  }

  getTestFileExtension(): string {
    return '_test.go'
  }
}
```

**Detection Logic:**

```typescript
async function detectFramework(): Promise<Framework> {
  // Check for a Node.js project
  if (await exists('package.json')) {
    const pkg = await readJSON('package.json')
    if (pkg.devDependencies?.vitest) return Framework.Vitest
    if (pkg.devDependencies?.jest) return Framework.Jest
  }

  // Check for a Python project
  if (await exists('pytest.ini') || await exists('setup.py')) {
    return Framework.Pytest
  }

  // Check for a Go project
  if (await exists('go.mod')) {
    return Framework.GoTest
  }

  // Check for a Rust project
  if (await exists('Cargo.toml')) {
    return Framework.CargoTest
  }

  throw new Error('Could not detect test framework')
}
```

### 2. Enhanced Safety Guardrails

**Diff Preview Mode:**

```bash
$ tm autopilot 42 --preview-diffs

[2/3] Subtask 42.2: Add collection endpoint

RED   ✓ Tests created: src/api/__tests__/metrics.test.js

GREEN Implementing code...

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Proposed changes (src/api/metrics.js):

+ import { MetricsSchema } from '../models/schema.js'
+
+ export async function createMetric(data) {
+   const validated = MetricsSchema.parse(data)
+   const result = await db.metrics.create(validated)
+   return result
+ }

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Apply these changes? [Y/n/e(dit)/s(kip)]
  Y - Apply and continue
  n - Reject and retry GREEN phase
  e - Open in editor for manual changes
  s - Skip this subtask
```

**Minimal Change Enforcement:**

Add to the system prompt:

```markdown
CRITICAL: Make MINIMAL changes to pass the failing tests.
- Only modify files directly related to the subtask
- Do not refactor existing code unless absolutely necessary
- Do not add features beyond the acceptance criteria
- Keep changes under 50 lines per file when possible
- Prefer composition over modification
```

**Change Size Warnings:**

```bash
⚠️ Large change detected:
  Files modified: 5
  Lines changed: +234, -12

This subtask was expected to be small (~50 lines).
Consider:
  - Breaking into smaller subtasks
  - Reviewing acceptance criteria
  - Checking for unintended changes

Continue anyway? [y/N]
```
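The change-size numbers in the warning above can be derived from `git diff --numstat`, whose output is `added<TAB>deleted<TAB>path` per file. A hedged sketch of that parsing; `parseNumstat` and the threshold check are illustrative, not the actual implementation:

```typescript
interface ChangeStats {
  files: number
  added: number
  deleted: number
}

// Parse `git diff --numstat` output into aggregate change statistics.
function parseNumstat(output: string): ChangeStats {
  const stats: ChangeStats = { files: 0, added: 0, deleted: 0 }
  for (const line of output.trim().split('\n')) {
    if (!line) continue
    const [added, deleted] = line.split('\t')
    stats.files++
    // Binary files show "-" in numstat; count them as zero lines.
    stats.added += added === '-' ? 0 : Number(added)
    stats.deleted += deleted === '-' ? 0 : Number(deleted)
  }
  return stats
}

const stats = parseNumstat('120\t10\tsrc/api/metrics.js\n114\t2\tsrc/db.js')
const isLarge = stats.added + stats.deleted > 100
// stats.files === 2, stats.added === 234, stats.deleted === 12
```

The threshold (100 here) would come from the `maxChangeLinesPerFile` / `warnOnLargeChanges` config shown under Advanced Configuration.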
### 3. TUI Interface with tmux

**Layout:**

```
┌──────────────────────────────────┬─────────────────────────────────┐
│ Task Navigator (left)            │ Executor Terminal (right)       │
│                                  │                                 │
│ Project: my-app                  │ $ tm autopilot --executor-mode  │
│ Branch: analytics/task-42        │ > Running subtask 42.2 GREEN... │
│ Tag: analytics                   │ > Implementing endpoint...      │
│                                  │ > Tests: 3 passed, 0 failed     │
│ Tasks:                           │ > Ready to commit               │
│ → 42 [in-progress] User metrics  │                                 │
│ → 42.1 [done] Schema             │ [Live output from executor]     │
│ → 42.2 [active] Endpoint ◀       │                                 │
│ → 42.3 [pending] Dashboard       │                                 │
│                                  │                                 │
│ [s] start [p] pause [q] quit     │                                 │
└──────────────────────────────────┴─────────────────────────────────┘
```

**Implementation:**

**TUI Navigator** (`apps/cli/src/ui/tui/navigator.ts`):

```typescript
import blessed from 'blessed'

class AutopilotTUI {
  private screen: blessed.Widgets.Screen
  private taskList: blessed.Widgets.ListElement
  private statusBox: blessed.Widgets.BoxElement
  private executorPane: string // tmux pane ID

  async start(taskId?: string) {
    // Create blessed screen
    this.screen = blessed.screen()

    // Create task list widget
    this.taskList = blessed.list({
      label: 'Tasks',
      keys: true,
      vi: true,
      style: { selected: { bg: 'blue' } }
    })

    // Spawn tmux pane for executor
    this.executorPane = await this.spawnExecutorPane()

    // Watch state file for updates
    this.watchStateFile()

    // Handle keybindings
    this.setupKeybindings()
  }

  private async spawnExecutorPane(): Promise<string> {
    const paneId = (await exec('tmux split-window -h -P -F "#{pane_id}"')).trim()
    await exec(`tmux send-keys -t ${paneId} "tm autopilot --executor-mode" Enter`)
    return paneId
  }

  private watchStateFile() {
    watch('.taskmaster/state/current-run.json', () => {
      this.updateDisplay()
    })
  }

  private setupKeybindings() {
    this.screen.key(['s'], () => this.startTask())
    this.screen.key(['p'], () => this.pauseTask())
    this.screen.key(['q'], () => this.quit())
    this.screen.key(['up', 'down'], () => this.navigateTasks())
  }
}
```

**Executor Mode:**

```bash
$ tm autopilot 42 --executor-mode

# Runs in the executor pane and writes state to a shared file;
# the left pane reads the state file and updates the display.
```

**State File** (`.taskmaster/state/current-run.json`):

```json
{
  "runId": "2025-01-15-142033",
  "taskId": "42",
  "status": "running",
  "currentPhase": "green",
  "currentSubtask": "42.2",
  "lastOutput": "Implementing endpoint...",
  "testsStatus": {
    "passed": 3,
    "failed": 0
  }
}
```

### 4. Extension API for IDE Integration

**State-based API:**

Expose run state via JSON files that IDEs can read:

- `.taskmaster/state/current-run.json` - live run state
- `.taskmaster/reports/runs/<runId>/manifest.json` - run metadata
- `.taskmaster/reports/runs/<runId>/log.jsonl` - event stream
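A consumer of the file-based API only needs to read and watch the live state file. A sketch under the path assumptions above; polling is used here because `fs.watch` semantics vary by platform, and the function names are illustrative:

```typescript
import { readFileSync, existsSync } from 'node:fs'

// Read the live run state, or null if no run is active.
function readRunState(path = '.taskmaster/state/current-run.json'): unknown | null {
  if (!existsSync(path)) return null
  return JSON.parse(readFileSync(path, 'utf8'))
}

// Poll the state file and invoke the callback whenever its contents change.
function pollRunState(
  onChange: (state: unknown) => void,
  intervalMs = 1000,
  path = '.taskmaster/state/current-run.json'
): ReturnType<typeof setInterval> {
  let last = ''
  return setInterval(() => {
    try {
      const raw = readFileSync(path, 'utf8')
      if (raw !== last) {
        last = raw
        onChange(JSON.parse(raw))
      }
    } catch {
      // File may not exist yet or be mid-write; skip this tick.
    }
  }, intervalMs)
}
```

Comparing the raw file contents before parsing keeps the hot path cheap and also tolerates a half-written file, since a failed `JSON.parse` just waits for the next tick.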
**WebSocket API (optional):**

```typescript
// packages/tm-core/src/services/autopilot-server.ts
class AutopilotServer {
  private wss: WebSocketServer

  start(port: number = 7890) {
    this.wss = new WebSocketServer({ port })

    this.wss.on('connection', (ws) => {
      // Send current state
      ws.send(JSON.stringify(this.getCurrentState()))

      // Stream events
      this.orchestrator.on('*', (event) => {
        ws.send(JSON.stringify(event))
      })
    })
  }
}
```

**Usage from an IDE extension:**

```typescript
// VS Code extension example
const ws = new WebSocket('ws://localhost:7890')

ws.on('message', (data) => {
  const event = JSON.parse(data)

  if (event.type === 'subtask:complete') {
    vscode.window.showInformationMessage(
      `Subtask ${event.subtaskId} completed`
    )
  }
})
```

### 5. Parallel Subtask Execution (Experimental)

**Dependency Analysis:**

```typescript
class SubtaskScheduler {
  async buildDependencyGraph(subtasks: Subtask[]): Promise<DAG> {
    const graph = new DAG()

    for (const subtask of subtasks) {
      graph.addNode(subtask.id)

      for (const depId of subtask.dependencies) {
        graph.addEdge(depId, subtask.id)
      }
    }

    return graph
  }

  async getParallelBatches(graph: DAG): Promise<Subtask[][]> {
    const batches: Subtask[][] = []
    const completed = new Set<string>()

    while (completed.size < graph.size()) {
      const ready = graph.nodes.filter(node =>
        !completed.has(node.id) &&
        node.dependencies.every(dep => completed.has(dep))
      )

      // Guard against dependency cycles, which would otherwise loop forever
      if (ready.length === 0) {
        throw new Error('Dependency cycle detected among subtasks')
      }

      batches.push(ready)
      ready.forEach(node => completed.add(node.id))
    }

    return batches
  }
}
```
|
|
||||||
|
|
||||||
**Parallel Execution:**
|
|
||||||
```bash
|
|
||||||
$ tm autopilot 42 --parallel
|
|
||||||
|
|
||||||
[Batch 1] Running 2 subtasks in parallel:
|
|
||||||
→ 42.1: Add metrics schema
|
|
||||||
→ 42.4: Add API documentation
|
|
||||||
|
|
||||||
42.1 RED ✓ Tests created
|
|
||||||
42.4 RED ✓ Tests created
|
|
||||||
|
|
||||||
42.1 GREEN ✓ Implementation complete
|
|
||||||
42.4 GREEN ✓ Implementation complete
|
|
||||||
|
|
||||||
42.1 COMMIT ✓ Committed: a1b2c3d
|
|
||||||
42.4 COMMIT ✓ Committed: e5f6g7h
|
|
||||||
|
|
||||||
[Batch 2] Running 2 subtasks in parallel (depend on 42.1):
|
|
||||||
→ 42.2: Add collection endpoint
|
|
||||||
→ 42.3: Add dashboard widget
|
|
||||||
...
|
|
||||||
```
|
|
||||||
|
|
||||||
**Conflict Detection:**
|
|
||||||
```typescript
|
|
||||||
async function detectConflicts(subtasks: Subtask[]): Promise<Conflict[]> {
|
|
||||||
const conflicts: Conflict[] = []
|
|
||||||
|
|
||||||
for (let i = 0; i < subtasks.length; i++) {
|
|
||||||
for (let j = i + 1; j < subtasks.length; j++) {
|
|
||||||
const filesA = await predictAffectedFiles(subtasks[i])
|
|
||||||
const filesB = await predictAffectedFiles(subtasks[j])
|
|
||||||
|
|
||||||
const overlap = filesA.filter(f => filesB.includes(f))
|
|
||||||
|
|
||||||
if (overlap.length > 0) {
|
|
||||||
conflicts.push({
|
|
||||||
subtasks: [subtasks[i].id, subtasks[j].id],
|
|
||||||
files: overlap
|
|
||||||
})
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return conflicts
|
|
||||||
}
|
|
||||||
```
### 6. Advanced Configuration

**Add to `.taskmaster/config.json`:**

```json
{
  "autopilot": {
    "safety": {
      "previewDiffs": false,
      "maxChangeLinesPerFile": 100,
      "warnOnLargeChanges": true,
      "requireConfirmOnLargeChanges": true
    },
    "parallel": {
      "enabled": false,
      "maxConcurrent": 3,
      "detectConflicts": true
    },
    "tui": {
      "enabled": false,
      "tmuxSession": "taskmaster-autopilot"
    },
    "api": {
      "enabled": false,
      "port": 7890,
      "allowRemote": false
    }
  },
  "test": {
    "frameworks": {
      "python": {
        "runner": "pytest",
        "coverageCommand": "pytest --cov",
        "testPattern": "**/test_*.py"
      },
      "go": {
        "runner": "go test",
        "coverageCommand": "go test ./... -coverprofile=coverage.out",
        "testPattern": "**/*_test.go"
      }
    }
  }
}
```
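None of these keys should be mandatory; a plausible loading pattern (the `mergeConfig` name is hypothetical, not from this spec) deep-merges whatever subset the user provides over built-in defaults, so a file containing only `autopilot.parallel.enabled` still resolves every other setting.

```typescript
type Json = { [key: string]: unknown };

// Recursively overlay user-supplied values on top of defaults.
// Objects merge key-by-key; scalars and arrays replace wholesale.
function mergeConfig(defaults: Json, user: Json): Json {
  const out: Json = { ...defaults };
  for (const [k, v] of Object.entries(user)) {
    const d = out[k];
    out[k] =
      v !== null && typeof v === "object" && !Array.isArray(v) &&
      d !== null && typeof d === "object" && !Array.isArray(d)
        ? mergeConfig(d as Json, v as Json)
        : v;
  }
  return out;
}

const defaults: Json = {
  autopilot: {
    parallel: { enabled: false, maxConcurrent: 3, detectConflicts: true }
  }
};

// User overrides a single leaf; siblings keep their defaults.
const resolved = mergeConfig(defaults, {
  autopilot: { parallel: { enabled: true } }
});
```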
## CLI Updates

**New commands:**

```bash
tm autopilot <taskId> --tui             # Launch TUI interface
tm autopilot <taskId> --parallel        # Enable parallel execution
tm autopilot <taskId> --preview-diffs   # Show diffs before applying
tm autopilot <taskId> --executor-mode   # Run as executor pane
tm autopilot-server start               # Start WebSocket API
```
## Success Criteria

- Supports Python projects with pytest
- Supports Go projects with go test
- Diff preview prevents unwanted changes
- TUI provides better visibility for long-running tasks
- IDE extensions can integrate via state files or WebSocket
- Parallel execution reduces total time for independent subtasks
## Out of Scope

- Full Electron/web GUI
- AI executor selection UI (defer to Phase 4)
- Multi-repository support
- Remote execution on cloud runners
## Testing Strategy

- Test with Python project (pytest)
- Test with Go project (go test)
- Test diff preview UI with mock changes
- Test parallel execution with independent subtasks
- Test conflict detection with overlapping file changes
- Test TUI with mock tmux environment
## Dependencies

- Phase 2 completed (PR + resumability)
- tmux installed (for TUI)
- blessed or ink library (for TUI rendering)
## Estimated Effort

3-4 weeks
## Risks & Mitigations

- **Risk:** Parallel execution causes git conflicts
- **Mitigation:** Conservative conflict detection, sequential fallback

- **Risk:** TUI adds complexity and maintenance burden
- **Mitigation:** Keep TUI optional, state-based design allows alternatives

- **Risk:** Framework adapters hard to maintain across versions
- **Mitigation:** Abstract common parsing logic, document adapter interface

- **Risk:** Diff preview slows down workflow
- **Mitigation:** Make optional, use --preview-diffs flag only when needed
## Validation

Test with:

- Python project with pytest and pytest-cov
- Go project with go test
- Large changes requiring confirmation
- Parallel execution with 3+ independent subtasks
- TUI with task selection and live status updates
- VS Code extension reading state files
@@ -1,8 +0,0 @@
Simple Todo App PRD

Create a basic todo list application with the following features:
1. Add new todos
2. Mark todos as complete
3. Delete todos

That's it. Keep it simple.
@@ -1,343 +0,0 @@
|
|||||||
# Product Requirements Document: tm-core Package - Parse PRD Feature
|
|
||||||
|
|
||||||
## Project Overview
|
|
||||||
Create a TypeScript package named `tm-core` at `packages/tm-core` that implements parse-prd functionality using class-based architecture similar to the existing AI providers pattern.
|
|
||||||
|
|
||||||
## Design Patterns & Architecture
|
|
||||||
|
|
||||||
### Patterns to Apply
|
|
||||||
1. **Factory Pattern**: Use for `ProviderFactory` to create AI provider instances
|
|
||||||
2. **Strategy Pattern**: Use for `IAIProvider` implementations and `IStorage` implementations
|
|
||||||
3. **Facade Pattern**: Use for `TaskMasterCore` as the main API entry point
|
|
||||||
4. **Template Method Pattern**: Use for `BaseProvider` abstract class
|
|
||||||
5. **Dependency Injection**: Use throughout for testability (pass dependencies via constructor)
|
|
||||||
6. **Repository Pattern**: Use for `FileStorage` to abstract data persistence
|
|
||||||
|
|
||||||
### Naming Conventions
|
|
||||||
- **Files**: kebab-case (e.g., `task-parser.ts`, `file-storage.ts`)
|
|
||||||
- **Classes**: PascalCase (e.g., `TaskParser`, `FileStorage`)
|
|
||||||
- **Interfaces**: PascalCase with 'I' prefix (e.g., `IStorage`, `IAIProvider`)
|
|
||||||
- **Methods**: camelCase (e.g., `parsePRD`, `loadTasks`)
|
|
||||||
- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_MODEL`)
|
|
||||||
- **Type aliases**: PascalCase (e.g., `TaskStatus`, `ParseOptions`)
|
|
||||||
|
|
||||||
## Exact Folder Structure Required
|
|
||||||
```
|
|
||||||
packages/tm-core/
|
|
||||||
├── src/
|
|
||||||
│ ├── index.ts
|
|
||||||
│ ├── types/
|
|
||||||
│ │ └── index.ts
|
|
||||||
│ ├── interfaces/
|
|
||||||
│ │ ├── index.ts # Barrel export
|
|
||||||
│ │ ├── storage.interface.ts
|
|
||||||
│ │ ├── ai-provider.interface.ts
|
|
||||||
│ │ └── configuration.interface.ts
|
|
||||||
│ ├── tasks/
|
|
||||||
│ │ ├── index.ts # Barrel export
|
|
||||||
│ │ └── task-parser.ts
|
|
||||||
│ ├── ai/
|
|
||||||
│ │ ├── index.ts # Barrel export
|
|
||||||
│ │ ├── base-provider.ts
|
|
||||||
│ │ ├── provider-factory.ts
|
|
||||||
│ │ ├── prompt-builder.ts
|
|
||||||
│ │ └── providers/
|
|
||||||
│ │ ├── index.ts # Barrel export
|
|
||||||
│ │ ├── anthropic-provider.ts
|
|
||||||
│ │ ├── openai-provider.ts
|
|
||||||
│ │ └── google-provider.ts
|
|
||||||
│ ├── storage/
|
|
||||||
│ │ ├── index.ts # Barrel export
|
|
||||||
│ │ └── file-storage.ts
|
|
||||||
│ ├── config/
|
|
||||||
│ │ ├── index.ts # Barrel export
|
|
||||||
│ │ └── config-manager.ts
|
|
||||||
│ ├── utils/
|
|
||||||
│ │ ├── index.ts # Barrel export
|
|
||||||
│ │ └── id-generator.ts
|
|
||||||
│ └── errors/
|
|
||||||
│ ├── index.ts # Barrel export
|
|
||||||
│ └── task-master-error.ts
|
|
||||||
├── tests/
|
|
||||||
│ ├── task-parser.test.ts
|
|
||||||
│ ├── integration/
|
|
||||||
│ │ └── parse-prd.test.ts
|
|
||||||
│ └── mocks/
|
|
||||||
│ └── mock-provider.ts
|
|
||||||
├── package.json
|
|
||||||
├── tsconfig.json
|
|
||||||
├── tsup.config.js
|
|
||||||
└── jest.config.js
|
|
||||||
```
|
|
||||||
|
|
||||||
## Specific Implementation Requirements
|
|
||||||
|
|
||||||
### 1. Create types/index.ts
|
|
||||||
Define these exact TypeScript interfaces:
|
|
||||||
- `Task` interface with fields: id, title, description, status, priority, complexity, dependencies, subtasks, metadata, createdAt, updatedAt, source
|
|
||||||
- `Subtask` interface with fields: id, title, description, completed
|
|
||||||
- `TaskMetadata` interface with fields: parsedFrom, aiProvider, version, tags (optional)
|
|
||||||
- Type literals: `TaskStatus` = 'pending' | 'in-progress' | 'completed' | 'blocked'
|
|
||||||
- Type literals: `TaskPriority` = 'low' | 'medium' | 'high' | 'critical'
|
|
||||||
- Type literals: `TaskComplexity` = 'simple' | 'moderate' | 'complex'
|
|
||||||
- `ParseOptions` interface with fields: dryRun (optional), additionalContext (optional), tag (optional), maxTasks (optional)
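As a sketch, the field lists above translate to roughly the following (exact field types, such as `string` timestamps rather than `Date`, are assumptions not stated in this spec):

```typescript
export type TaskStatus = 'pending' | 'in-progress' | 'completed' | 'blocked';
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
export type TaskComplexity = 'simple' | 'moderate' | 'complex';

export interface Subtask {
  id: string;
  title: string;
  description: string;
  completed: boolean;
}

export interface TaskMetadata {
  parsedFrom: string;
  aiProvider: string;
  version: string;
  tags?: string[];
}

export interface Task {
  id: string;
  title: string;
  description: string;
  status: TaskStatus;
  priority: TaskPriority;
  complexity: TaskComplexity;
  dependencies: string[];
  subtasks: Subtask[];
  metadata: TaskMetadata;
  createdAt: string;
  updatedAt: string;
  source: string;
}

export interface ParseOptions {
  dryRun?: boolean;
  additionalContext?: string;
  tag?: string;
  maxTasks?: number;
}

// A minimal well-typed value, mostly to show the shape compiles.
const example: Task = {
  id: 'task_1', title: 'Demo', description: '', status: 'pending',
  priority: 'low', complexity: 'simple', dependencies: [], subtasks: [],
  metadata: { parsedFrom: 'prd.txt', aiProvider: 'anthropic', version: '1' },
  createdAt: '', updatedAt: '', source: 'prd.txt',
};
```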
### 2. Create interfaces/storage.interface.ts
Define `IStorage` interface with these exact methods:
- `loadTasks(tag?: string): Promise<Task[]>`
- `saveTasks(tasks: Task[], tag?: string): Promise<void>`
- `appendTasks(tasks: Task[], tag?: string): Promise<void>`
- `updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>`
- `deleteTask(id: string, tag?: string): Promise<void>`
- `exists(tag?: string): Promise<boolean>`

### 3. Create interfaces/ai-provider.interface.ts
Define `IAIProvider` interface with these exact methods:
- `generateCompletion(prompt: string, options?: AIOptions): Promise<string>`
- `calculateTokens(text: string): number`
- `getName(): string`
- `getModel(): string`

Define `AIOptions` interface with fields: temperature (optional), maxTokens (optional), systemPrompt (optional)

### 4. Create interfaces/configuration.interface.ts
Define `IConfiguration` interface with fields:
- `projectPath: string`
- `aiProvider: string`
- `apiKey?: string`
- `aiOptions?: AIOptions`
- `mainModel?: string`
- `researchModel?: string`
- `fallbackModel?: string`
- `tasksPath?: string`
- `enableTags?: boolean`

### 5. Create tasks/task-parser.ts
Create class `TaskParser` with:
- Constructor accepting `aiProvider: IAIProvider` and `config: IConfiguration`
- Private property `promptBuilder: PromptBuilder`
- Public method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Private method `readPRD(prdPath: string): Promise<string>`
- Private method `extractTasks(aiResponse: string): Partial<Task>[]`
- Private method `enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[]`
- Apply **Dependency Injection** pattern via constructor
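A condensed sketch of that injection seam, simplified to take PRD text directly rather than a path and with a narrowed provider interface; the `MockProvider` mirrors the canned-response mock specced under `tests/mocks/`:

```typescript
interface IAIProvider {
  generateCompletion(prompt: string): Promise<string>;
}

class TaskParser {
  // Dependency Injection: the provider arrives via the constructor,
  // so tests can pass a mock instead of a real API client.
  constructor(private aiProvider: IAIProvider) {}

  async parsePRD(prdContent: string): Promise<{ title: string }[]> {
    const response = await this.aiProvider.generateCompletion(prdContent);
    return this.extractTasks(response);
  }

  // The real implementation would validate and enrich; here we just parse.
  private extractTasks(aiResponse: string): { title: string }[] {
    return JSON.parse(aiResponse).tasks;
  }
}

// Deterministic provider: no network calls, fixed JSON response.
class MockProvider implements IAIProvider {
  async generateCompletion(_prompt: string): Promise<string> {
    return JSON.stringify({ tasks: [{ title: 'Add new todos' }] });
  }
}

const tasks = await new TaskParser(new MockProvider()).parsePRD('Todo app PRD');
```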
### 6. Create ai/base-provider.ts
Copy existing base-provider.js and convert to TypeScript abstract class:
- Abstract class `BaseProvider` implementing `IAIProvider`
- Protected properties: `apiKey: string`, `model: string`
- Constructor accepting `apiKey: string` and `options: { model?: string }`
- Abstract methods matching IAIProvider interface
- Abstract method `getDefaultModel(): string`
- Apply **Template Method** pattern for common provider logic

### 7. Create ai/provider-factory.ts
Create class `ProviderFactory` with:
- Static method `create(config: { provider: string; apiKey?: string; model?: string }): Promise<IAIProvider>`
- Switch statement for providers: 'anthropic', 'openai', 'google'
- Dynamic imports for each provider
- Throw error for unknown providers
- Apply **Factory** pattern for creating provider instances

Example implementation structure:
```typescript
switch (provider.toLowerCase()) {
  case 'anthropic': {
    const { AnthropicProvider } = await import('./providers/anthropic-provider.js');
    return new AnthropicProvider(apiKey, { model });
  }
}
```

### 8. Create ai/providers/anthropic-provider.ts
Create class `AnthropicProvider` extending `BaseProvider`:
- Import Anthropic SDK: `import { Anthropic } from '@anthropic-ai/sdk'`
- Private property `client: Anthropic`
- Implement all abstract methods from BaseProvider
- Default model: 'claude-3-sonnet-20240229'
- Handle API errors and wrap with meaningful messages

### 9. Create ai/providers/openai-provider.ts (placeholder)
Create class `OpenAIProvider` extending `BaseProvider`:
- Import OpenAI SDK when implemented
- For now, throw error: "OpenAI provider not yet implemented"

### 10. Create ai/providers/google-provider.ts (placeholder)
Create class `GoogleProvider` extending `BaseProvider`:
- Import Google Generative AI SDK when implemented
- For now, throw error: "Google provider not yet implemented"

### 11. Create ai/prompt-builder.ts
Create class `PromptBuilder` with:
- Method `buildParsePrompt(prdContent: string, options: ParseOptions = {}): string`
- Method `buildExpandPrompt(task: string, context?: string): string`
- Use template literals for prompt construction
- Include specific JSON format instructions in prompts

### 12. Create storage/file-storage.ts
Create class `FileStorage` implementing `IStorage`:
- Private property `basePath: string` set to `{projectPath}/.taskmaster`
- Constructor accepting `projectPath: string`
- Private method `getTasksPath(tag?: string): string` returning correct path based on tag
- Private method `ensureDirectory(dir: string): Promise<void>`
- Implement all IStorage methods
- Handle ENOENT errors by returning empty arrays
- Use JSON format with structure: `{ tasks: Task[], metadata: { version: string, lastModified: string } }`
- Apply **Repository** pattern for data access abstraction
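The tag-dependent path logic might look like this; the untagged filename and the `tags/{tag}/` layout are assumptions for illustration, not dictated by the spec above:

```typescript
import * as path from "node:path";

// Resolve where tasks for a given tag live under {projectPath}/.taskmaster.
function getTasksPath(projectPath: string, tag?: string): string {
  const basePath = path.join(projectPath, ".taskmaster");
  return tag
    ? path.join(basePath, "tags", tag, "tasks.json") // per-tag task list
    : path.join(basePath, "tasks.json");             // default task list
}
```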
### 13. Create config/config-manager.ts
Create class `ConfigManager`:
- Private property `config: IConfiguration`
- Constructor accepting `options: Partial<IConfiguration>`
- Use Zod for validation with schema matching IConfiguration
- Method `get<K extends keyof IConfiguration>(key: K): IConfiguration[K]`
- Method `getAll(): IConfiguration`
- Method `validate(): boolean`
- Default values: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true

### 14. Create utils/id-generator.ts
Export functions:
- `generateTaskId(index: number = 0): string` returning format `task_{timestamp}_{index}_{random}`
- `generateSubtaskId(parentId: string, index: number = 0): string` returning format `{parentId}_sub_{index}_{random}`
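A direct implementation of those two formats; the length and alphabet of the random suffix are assumptions, since the spec only names the segments:

```typescript
// task_{timestamp}_{index}_{random}
export function generateTaskId(index: number = 0): string {
  const random = Math.random().toString(36).slice(2, 8); // short base36 suffix
  return `task_${Date.now()}_${index}_${random}`;
}

// {parentId}_sub_{index}_{random}
export function generateSubtaskId(parentId: string, index: number = 0): string {
  const random = Math.random().toString(36).slice(2, 8);
  return `${parentId}_sub_${index}_${random}`;
}
```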
### 15. Create src/index.ts
Create main class `TaskMasterCore`:
- Private properties: `config: ConfigManager`, `storage: IStorage`, `aiProvider?: IAIProvider`, `parser?: TaskParser`
- Constructor accepting `options: Partial<IConfiguration>`
- Method `initialize(): Promise<void>` for lazy loading
- Method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Method `getTasks(tag?: string): Promise<Task[]>`
- Apply **Facade** pattern to provide simple API over complex subsystems

Export:
- Class `TaskMasterCore`
- Function `createTaskMaster(options: Partial<IConfiguration>): TaskMasterCore`
- All types from './types'
- All interfaces from './interfaces/*'

Import statements should use kebab-case:
```typescript
import { TaskParser } from './tasks/task-parser';
import { FileStorage } from './storage/file-storage';
import { ConfigManager } from './config/config-manager';
import { ProviderFactory } from './ai/provider-factory';
```

### 16. Configure package.json
Create package.json with:
- name: "@task-master/core"
- version: "0.1.0"
- type: "module"
- main: "./dist/index.js"
- module: "./dist/index.mjs"
- types: "./dist/index.d.ts"
- exports map for proper ESM/CJS support
- scripts: build (tsup), dev (tsup --watch), test (jest), typecheck (tsc --noEmit)
- dependencies: zod@^3.23.8
- peerDependencies: @anthropic-ai/sdk, openai, @google/generative-ai
- devDependencies: typescript, tsup, jest, ts-jest, @types/node, @types/jest

### 17. Configure TypeScript
Create tsconfig.json with:
- target: "ES2022"
- module: "ESNext"
- strict: true (with all strict flags enabled)
- declaration: true
- outDir: "./dist"
- rootDir: "./src"

### 18. Configure tsup
Create tsup.config.js with:
- entry: ['src/index.ts']
- format: ['cjs', 'esm']
- dts: true
- sourcemap: true
- clean: true
- external: AI provider SDKs

### 19. Configure Jest
Create jest.config.js with:
- preset: 'ts-jest'
- testEnvironment: 'node'
- Coverage threshold: 80% for all metrics

## Build Process
1. Use tsup to compile TypeScript to both CommonJS and ESM
2. Generate .d.ts files for TypeScript consumers
3. Output to dist/ directory
4. Ensure tree-shaking works properly

## Testing Requirements
- Create unit tests for TaskParser in tests/task-parser.test.ts
- Create MockProvider class in tests/mocks/mock-provider.ts for testing without API calls
- Test error scenarios (file not found, invalid JSON, etc.)
- Create integration test in tests/integration/parse-prd.test.ts
- Follow kebab-case naming for all test files

## Success Criteria
- TypeScript compilation with zero errors
- No use of 'any' type
- All interfaces properly exported
- Compatible with existing tasks.json format
- Feature flag support via USE_TM_CORE environment variable

## Import/Export Conventions
- Use named exports for all classes and interfaces
- Use barrel exports (index.ts) in each directory
- Import types/interfaces with type-only imports: `import type { Task } from '../types'`
- Group imports in order: Node built-ins, external packages, internal packages, relative imports
- Use .js extension in import paths for ESM compatibility

## Error Handling Patterns
- Create custom error classes in `src/errors/` directory
- All public methods should catch and wrap errors with context
- Use error codes for different error types (e.g., 'FILE_NOT_FOUND', 'PARSE_ERROR')
- Never expose internal implementation details in error messages
- Log errors to console.error only in development mode

## Barrel Exports Content

### interfaces/index.ts
```typescript
export type { IStorage } from './storage.interface';
export type { IAIProvider, AIOptions } from './ai-provider.interface';
export type { IConfiguration } from './configuration.interface';
```

### tasks/index.ts
```typescript
export { TaskParser } from './task-parser';
```

### ai/index.ts
```typescript
export { BaseProvider } from './base-provider';
export { ProviderFactory } from './provider-factory';
export { PromptBuilder } from './prompt-builder';
```

### ai/providers/index.ts
```typescript
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
```

### storage/index.ts
```typescript
export { FileStorage } from './file-storage';
```

### config/index.ts
```typescript
export { ConfigManager } from './config-manager';
```

### utils/index.ts
```typescript
export { generateTaskId, generateSubtaskId } from './id-generator';
```

### errors/index.ts
```typescript
export { TaskMasterError } from './task-master-error';
```
@@ -1,197 +0,0 @@
{
  "meta": {
    "generatedAt": "2025-10-07T09:46:06.248Z",
    "tasksAnalyzed": 23,
    "totalTasks": 23,
    "analysisCount": 23,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": false
  },
  "complexityAnalysis": [
    {
      "taskId": 31,
      "taskTitle": "Create WorkflowOrchestrator service foundation",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the WorkflowOrchestrator foundation into its core architectural components: phase management system, event emitter infrastructure, state management interfaces, service integration, and lifecycle control methods. Each subtask should focus on a specific architectural concern with clear interfaces and testable units.",
      "reasoning": "This is a foundational service requiring state machine implementation, event-driven architecture, and integration with existing services. The complexity is high due to the need for robust phase management, error handling, and service orchestration patterns."
    },
    {
      "taskId": 32,
      "taskTitle": "Implement GitAdapter for repository operations",
      "complexityScore": 6,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Decompose the GitAdapter implementation into: TypeScript wrapper creation around existing git-utils.js, core git operation methods with comprehensive error handling, branch naming pattern system with token replacement, and confirmation gates for destructive operations. Focus on type safety and existing code integration.",
      "reasoning": "Moderate-high complexity due to TypeScript integration over existing JavaScript utilities, branch pattern implementation, and safety mechanisms. The existing git-utils.js provides a solid foundation, reducing complexity."
    },
    {
      "taskId": 33,
      "taskTitle": "Create TestRunnerAdapter for framework detection and execution",
      "complexityScore": 8,
      "recommendedSubtasks": 6,
      "expansionPrompt": "Break down TestRunnerAdapter into framework detection logic, test execution engine with process management, Jest-specific result parsing, Vitest-specific result parsing, unified result interfaces, and final integration. Each framework parser should be separate to handle their unique output formats.",
      "reasoning": "High complexity due to multiple framework support (Jest, Vitest), child process management, result parsing from different formats, coverage reporting, and timeout handling. Each framework has unique output formats requiring specialized parsers."
    },
    {
      "taskId": 34,
      "taskTitle": "Implement autopilot CLI command structure",
      "complexityScore": 5,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Structure the autopilot command into: basic command setup with Commander.js integration, comprehensive flag handling and validation system, preflight check validation with environment validation, and WorkflowOrchestrator integration with dry-run execution planning. Follow existing CLI patterns from the codebase.",
      "reasoning": "Moderate complexity involving CLI structure, flag handling, and integration with WorkflowOrchestrator. The existing CLI patterns and Commander.js usage in the codebase provide good guidance, reducing implementation complexity."
    },
    {
      "taskId": 35,
      "taskTitle": "Integrate surgical test generator with WorkflowOrchestrator",
      "complexityScore": 6,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Decompose the test generation integration into: TaskExecutionService enhancement for test generation mode, TestGenerationService creation using executor framework, prompt composition system for rule integration, and framework-specific test pattern support. Leverage existing executor patterns from the codebase.",
      "reasoning": "Moderate-high complexity due to integration with existing services, prompt composition system, and framework-specific test generation. The existing executor framework and TaskExecutionService provide good integration points."
    },
    {
      "taskId": 36,
      "taskTitle": "Implement subtask TDD loop execution",
      "complexityScore": 9,
      "recommendedSubtasks": 7,
      "expansionPrompt": "Break down the TDD loop into: SubtaskExecutor class architecture, RED phase test generation, GREEN phase code generation, COMMIT phase with conventional commits, retry mechanism for GREEN phase, timeout and backoff policies, and TaskService integration. Each phase should be independently testable.",
      "reasoning": "Very high complexity due to implementing the complete TDD red-green-commit cycle with AI integration, retry logic, timeout handling, and git operations. This is the core autonomous workflow requiring robust error handling and state management."
    },
    {
      "taskId": 37,
      "taskTitle": "Add configuration schema for autopilot settings",
      "complexityScore": 4,
      "recommendedSubtasks": 3,
      "expansionPrompt": "Expand configuration support into: extending configuration interfaces with autopilot settings, updating ConfigManager validation logic, and implementing default configuration values. Build on existing configuration patterns and maintain backward compatibility.",
      "reasoning": "Low-moderate complexity involving schema extension and validation logic. The existing configuration system provides clear patterns to follow, making this primarily an extension task rather than new architecture."
    },
    {
      "taskId": 38,
      "taskTitle": "Implement run state persistence and logging",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Structure run state management into: RunStateManager service class creation, run directory structure and manifest creation, JSONL event logging system, test result and commit tracking storage, and state checkpointing with resume functionality. Focus on data integrity and structured logging.",
      "reasoning": "Moderate-high complexity due to file system operations, structured logging, state serialization, and resume functionality. Requires careful design of data formats and error handling for persistence operations."
    },
    {
      "taskId": 39,
      "taskTitle": "Add GitHub PR creation with run reports",
      "complexityScore": 5,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Decompose PR creation into: PRAdapter service foundation with interfaces, GitHub CLI integration and command execution, PR body generation from run data and test results, and custom PR template system with configuration support. Leverage existing git-utils.js patterns for CLI integration.",
      "reasoning": "Moderate complexity involving GitHub CLI integration, report generation, and template systems. The existing git-utils.js provides patterns for CLI tool integration, reducing implementation complexity."
    },
    {
      "taskId": 40,
      "taskTitle": "Implement task dependency resolution for subtask ordering",
      "complexityScore": 6,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Break down dependency resolution into: dependency resolution algorithm with cycle detection, topological sorting for subtask ordering, task eligibility checking system, and TaskService integration. Implement graph algorithms for dependency management with proper error handling.",
      "reasoning": "Moderate-high complexity due to graph algorithm implementation, cycle detection, and integration with existing task management. Requires careful design of dependency resolution logic and edge case handling."
    },
    {
      "taskId": 41,
      "taskTitle": "Create resume functionality for interrupted runs",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Structure resume functionality into: checkpoint creation in RunStateManager, state restoration logic with validation, state validation for safe resume operations, CLI flag implementation for resume command, and partial phase resume functionality. Focus on data integrity and workflow consistency.",
      "reasoning": "High complexity due to state serialization/deserialization, workflow restoration, validation logic, and CLI integration. Requires robust error handling and state consistency checks for reliable resume operations."
    },
    {
      "taskId": 42,
      "taskTitle": "Add coverage threshold enforcement",
      "complexityScore": 5,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Decompose coverage enforcement into: coverage report parsing from Jest/Vitest, configurable threshold validation logic, coverage gates integration in workflow phases, and detailed coverage failure reporting system. Build on existing TestRunnerAdapter patterns.",
      "reasoning": "Moderate complexity involving coverage report parsing, validation logic, and workflow integration. The existing TestRunnerAdapter provides good foundation for extending coverage capabilities."
    },
    {
      "taskId": 43,
      "taskTitle": "Implement tmux-based TUI navigator",
      "complexityScore": 8,
      "recommendedSubtasks": 6,
      "expansionPrompt": "Break down TUI implementation into: framework selection and basic structure setup, left pane interface layout with status indicators, tmux integration and terminal coordination, navigation system with keybindings, real-time status updates system, and comprehensive event handling with UX polish. Each component should be independently testable.",
      "reasoning": "High complexity due to terminal UI framework integration, tmux session management, real-time updates, keyboard event handling, and terminal interface design. Requires expertise in terminal UI libraries and tmux integration."
    },
    {
      "taskId": 44,
      "taskTitle": "Add prompt composition system for context-aware test generation",
      "complexityScore": 6,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Structure prompt composition into: PromptComposer service foundation, template processing engine with token replacement, rule loading system with precedence handling, and context injection with phase-specific prompt generation. Focus on flexible template system and rule management.",
      "reasoning": "Moderate-high complexity due to template processing, rule precedence systems, and context injection logic. Requires careful design of template syntax and rule loading mechanisms."
    },
    {
      "taskId": 45,
      "taskTitle": "Implement tag-branch mapping and automatic tag switching",
      "complexityScore": 5,
      "recommendedSubtasks": 3,
      "expansionPrompt": "Decompose tag-branch mapping into: GitAdapter enhancement with branch-to-tag extraction logic, automatic tag switching workflow integration, and branch-to-tag mapping persistence with validation. Build on existing git-utils.js and tag management functionality.",
      "reasoning": "Moderate complexity involving pattern matching, tag management integration, and workflow automation. The existing git-utils.js and tag management systems provide good foundation for implementation."
    },
    {
      "taskId": 46,
      "taskTitle": "Add comprehensive error handling and recovery",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Structure error handling into: error classification system with specific error types, recovery suggestion engine with actionable recommendations, error context management and preservation, force flag implementation with selective bypass, and logging/reporting system integration. Focus on actionable error messages and automated recovery where possible.",
      "reasoning": "High complexity due to comprehensive error taxonomy, recovery automation, context preservation, and integration across all workflow components. Requires deep understanding of failure modes and recovery strategies."
    },
    {
      "taskId": 47,
      "taskTitle": "Implement conventional commit message generation",
      "complexityScore": 4,
      "recommendedSubtasks": 3,
      "expansionPrompt": "Break down commit message generation into: template system creation with variable substitution, commit type auto-detection based on task content and file changes, and validation with GitAdapter integration. Follow conventional commit standards and integrate with existing git operations.",
      "reasoning": "Low-moderate complexity involving template processing, pattern matching for commit type detection, and validation logic. Well-defined conventional commit standards provide clear implementation guidance."
    },
    {
      "taskId": 48,
      "taskTitle": "Add multi-framework test execution support",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Expand test framework support into: framework detection system for multiple languages, common adapter interface design, Python pytest adapter implementation, Go and Rust adapter implementations, and integration with existing TestRunnerAdapter. Each language adapter should follow the unified interface pattern.",
      "reasoning": "High complexity due to multi-language support, framework detection across different ecosystems, and adapter pattern implementation. Each language has unique testing conventions and output formats."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 49,
|
|
||||||
"taskTitle": "Implement workflow event streaming for real-time monitoring",
|
|
||||||
"complexityScore": 6,
|
|
||||||
"recommendedSubtasks": 4,
|
|
||||||
"expansionPrompt": "Structure event streaming into: WorkflowOrchestrator EventEmitter enhancement, structured event format with metadata, event persistence to run logs, and optional WebSocket streaming for external monitoring. Focus on event consistency and real-time delivery.",
|
|
||||||
"reasoning": "Moderate-high complexity due to event-driven architecture, structured event formats, persistence integration, and WebSocket implementation. Requires careful design of event schemas and delivery mechanisms."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 50,
|
|
||||||
"taskTitle": "Add intelligent test targeting for faster feedback",
|
|
||||||
"complexityScore": 7,
|
|
||||||
"recommendedSubtasks": 5,
|
|
||||||
"expansionPrompt": "Decompose test targeting into: file change detection system, test dependency analysis engine, framework-specific targeting adapters, test impact calculation algorithm, and fallback integration with TestRunnerAdapter. Focus on accuracy and performance optimization.",
|
|
||||||
"reasoning": "High complexity due to dependency analysis, impact calculation algorithms, framework-specific targeting, and integration with existing test execution. Requires sophisticated analysis of code relationships and test dependencies."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 51,
|
|
||||||
"taskTitle": "Implement dry-run visualization with execution timeline",
|
|
||||||
"complexityScore": 6,
|
|
||||||
"recommendedSubtasks": 4,
|
|
||||||
"expansionPrompt": "Structure dry-run visualization into: timeline calculation engine with duration estimates, estimation algorithms based on task complexity, ASCII art progress visualization with formatting, and resource validation with preflight checks. Focus on accurate planning and clear visual presentation.",
|
|
||||||
"reasoning": "Moderate-high complexity due to timeline calculation, estimation algorithms, ASCII visualization, and resource validation. Requires understanding of workflow timing and visual formatting for terminal output."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 52,
|
|
||||||
"taskTitle": "Add autopilot workflow integration tests",
|
|
||||||
"complexityScore": 8,
|
|
||||||
"recommendedSubtasks": 6,
|
|
||||||
"expansionPrompt": "Structure integration testing into: isolated test environment infrastructure, mock integrations and service stubs, end-to-end workflow test scenarios, performance benchmarking and resource monitoring, test isolation and parallelization strategies, and comprehensive result validation and reporting. Focus on realistic test scenarios and reliable automation.",
|
|
||||||
"reasoning": "High complexity due to end-to-end testing requirements, mock service integration, performance testing, isolation mechanisms, and comprehensive validation. Requires sophisticated test infrastructure and scenario design."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 53,
|
|
||||||
"taskTitle": "Finalize autopilot documentation and examples",
|
|
||||||
"complexityScore": 3,
|
|
||||||
"recommendedSubtasks": 4,
|
|
||||||
"expansionPrompt": "Structure documentation into: comprehensive autopilot documentation covering setup and usage, example PRD files and templates for different project types, troubleshooting guide for common issues and solutions, and demo materials with workflow visualization. Focus on clarity and practical examples.",
|
|
||||||
"reasoning": "Low complexity involving documentation writing, example creation, and demo material production. The main challenge is ensuring accuracy and completeness rather than technical implementation."
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}