Compare commits
14 commits: `docs/auto-...` → `chore/crea...`
| Author | SHA1 | Date |
|---|---|---|
| | b724048033 | |
| | 68f759f215 | |
| | 2ac7f20601 | |
| | e91609a373 | |
| | 9fea22cbc3 | |
| | 6d05e8622c | |
| | c3272736fb | |
| | fedfd6a0f4 | |
| | ab2e946087 | |
| | cc4fe205fb | |
| | 36dc129328 | |
| | 7b4803a479 | |
| | f662654afb | |
| | a8e2d728c9 | |
.changeset/claude-import-fix-new.md (new file, 12 lines)

````diff
@@ -0,0 +1,12 @@
+---
+"task-master-ai": patch
+---
+
+Prevent CLAUDE.md overwrite by using Claude Code's import feature
+
+- Task Master now creates its instructions in `.taskmaster/CLAUDE.md` instead of overwriting the user's `CLAUDE.md`
+- Adds an import section to the user's CLAUDE.md that references the Task Master instructions
+- Preserves existing user content in CLAUDE.md files
+- Provides clean uninstall that only removes Task Master's additions
+
+**Breaking Change**: Task Master instructions for Claude Code are now stored in `.taskmaster/CLAUDE.md` and imported into the main CLAUDE.md file. Users who previously had Task Master content directly in their CLAUDE.md will need to run `task-master rules remove claude` followed by `task-master rules add claude` to migrate to the new structure.
````
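The "import section" mentioned in that changeset presumably relies on Claude Code's `@`-import syntax for CLAUDE.md memory files. A minimal sketch of what the managed block in the user's `CLAUDE.md` might look like after `task-master rules add claude` — the marker comments and exact path form are assumptions, not taken from this diff:

```markdown
<!-- Task Master managed block (markers are hypothetical) -->
@./.taskmaster/CLAUDE.md
<!-- End Task Master managed block -->
```

A clean uninstall would then only need to delete the lines between the markers, leaving the user's own content untouched.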
Changesets configuration (modified):

````diff
@@ -2,16 +2,13 @@
   "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
   "changelog": [
     "@changesets/changelog-github",
-    {
-      "repo": "eyaltoledano/claude-task-master"
-    }
+    { "repo": "eyaltoledano/claude-task-master" }
   ],
   "commit": false,
   "fixed": [],
   "linked": [],
   "access": "public",
   "baseBranch": "main",
-  "ignore": [
-    "docs",
-    "@tm/claude-code-plugin"
-  ]
+  "updateInternalDependencies": "patch",
+  "ignore": []
 }
````
Deleted changeset:

````diff
@@ -1,5 +0,0 @@
----
-"task-master-ai": patch
----
-
-Improve auth token refresh flow
````
Deleted changeset:

````diff
@@ -1,7 +0,0 @@
----
-"task-master-ai": patch
----
-
-Enable Task Master commands to traverse parent directories to find project root from nested paths
-
-Fixes #1301
````
.changeset/fix-show-command-complexity.md (new file, 7 lines)

````diff
@@ -0,0 +1,7 @@
+---
+"task-master-ai": patch
+---
+
+Fix: show command no longer requires complexity report file to exist
+
+The `tm show` command was incorrectly requiring the complexity report file to exist even when not needed. Now it only validates the complexity report path when a custom report file is explicitly provided via the -r/--report option.
````
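Each changeset in this diff follows the same layout: a frontmatter block mapping package names to bump types (`patch`/`minor`/`major`), delimited by `---`, followed by a prose summary. A minimal sketch of parsing that layout — the helper below is illustrative, not part of this repository:

```python
def parse_changeset(text: str) -> tuple[dict[str, str], str]:
    """Split a changeset file into (package -> bump type, summary text)."""
    # Frontmatter sits between the first two "---" delimiters.
    _, frontmatter, summary = text.split("---", 2)
    bumps = {}
    for line in frontmatter.strip().splitlines():
        # Lines look like: "task-master-ai": patch
        package, bump = line.split(":", 1)
        bumps[package.strip().strip('"')] = bump.strip()
    return bumps, summary.strip()


changeset = '''---
"task-master-ai": patch
---

Fix: show command no longer requires complexity report file to exist
'''

bumps, summary = parse_changeset(changeset)
print(bumps)                      # {'task-master-ai': 'patch'}
print(summary.splitlines()[0])    # first line of the summary
```

The changesets tool itself does this parsing (and more) internally; the sketch only shows why the `---` fences and quoted package names matter.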
Deleted changeset:

````diff
@@ -1,5 +0,0 @@
----
-"@tm/cli": patch
----
-
-Fix warning message box width to match dashboard box width for consistent UI alignment
````
.changeset/groq-kimi-k2-support.md (new file, 10 lines)

````diff
@@ -0,0 +1,10 @@
+---
+"task-master-ai": minor
+---
+
+Complete Groq provider integration and add MoonshotAI Kimi K2 model support
+
+- Fixed Groq provider registration
+- Added Groq API key validation
+- Added GROQ_API_KEY to .env.example
+- Added moonshotai/kimi-k2-instruct model with $1/$3 per 1M token pricing and 16k max output
````
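The `.env.example` entry mentioned in the third bullet would look something like this — a sketch with a placeholder value:

```shell
# Groq API key used by the Groq provider (placeholder value)
GROQ_API_KEY=your_groq_api_key_here
```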
Deleted changeset:

````diff
@@ -1,35 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add configurable MCP tool loading to optimize LLM context usage
-
-You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.
-
-**Configuration Options:**
-
-- `all` (default): Load all 36 tools
-- `core` or `lean`: Load only 7 essential tools for daily development
-  - Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
-- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
-  - Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
-- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)
-
-**Example .mcp.json configuration:**
-
-```json
-{
-  "mcpServers": {
-    "task-master-ai": {
-      "command": "npx",
-      "args": ["-y", "task-master-ai"],
-      "env": {
-        "TASK_MASTER_TOOLS": "standard",
-        "ANTHROPIC_API_KEY": "your_key_here"
-      }
-    }
-  }
-}
-```
-
-For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
````
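For comparison, the custom-list form described in that (now removed) changeset uses the same `.mcp.json` shape; this variant, a sketch reusing the example tool names from the changeset, loads only three tools:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "get_tasks,next_task,set_task_status",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```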
Deleted changeset:

````diff
@@ -1,5 +0,0 @@
----
-"task-master-ai": minor
----
-
-Improve next command to work with remote
````
.changeset/public-crabs-ask.md (new file, 5 lines)

````diff
@@ -0,0 +1,5 @@
+---
+"task-master-ai": minor
+---
+
+Add Amp rule profile with AGENT.md and MCP config
````
.changeset/swift-turtles-sit.md (new file, 5 lines)

````diff
@@ -0,0 +1,5 @@
+---
+"task-master-ai": patch
+---
+
+Add MCP configuration support to Claude Code rules
````
.changeset/ten-glasses-feel.md (new file, 7 lines)

````diff
@@ -0,0 +1,7 @@
+---
+"task-master-ai": patch
+---
+
+Fixed the comprehensive Task Master system integration via custom slash commands with proper syntax
+
+- Provides Claude Code with a complete set of commands that can trigger Task Master events directly within Claude Code
````
Deleted plugin marketplace manifest:

````diff
@@ -1,32 +0,0 @@
-{
-  "name": "taskmaster",
-  "owner": {
-    "name": "Hamster",
-    "email": "ralph@tryhamster.com"
-  },
-  "metadata": {
-    "description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
-    "version": "1.0.0"
-  },
-  "plugins": [
-    {
-      "name": "taskmaster",
-      "source": "./packages/claude-code-plugin",
-      "description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
-      "author": {
-        "name": "Hamster"
-      },
-      "homepage": "https://github.com/eyaltoledano/claude-task-master",
-      "repository": "https://github.com/eyaltoledano/claude-task-master",
-      "keywords": [
-        "task-management",
-        "ai",
-        "workflow",
-        "orchestration",
-        "automation",
-        "mcp"
-      ],
-      "category": "productivity"
-    }
-  ]
-}
````
Deleted slash-command definition (find duplicate GitHub issues):

````diff
@@ -1,38 +0,0 @@
----
-allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
-description: Find duplicate GitHub issues
----
-
-Find up to 3 likely duplicate issues for a given GitHub issue.
-
-To do this, follow these steps precisely:
-
-1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
-2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
-3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #1
-4. Next, feed the results from #1 and #2 into another agent, so that it can filter out false positives, that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
-5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)
-
-Notes (be sure to tell this to your agents, too):
-
-- Use `gh` to interact with Github, rather than web fetch
-- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
-- Make a todo list first
-- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):
-
----
-
-Found 3 possible duplicate issues:
-
-1. <link to issue>
-2. <link to issue>
-3. <link to issue>
-
-This issue will be automatically closed as a duplicate in 3 days.
-
-- If your issue is a duplicate, please close it and 👍 the existing issue instead
-- To prevent auto-closure, add a comment or 👎 this comment
-
-🤖 Generated with \[Task Master Bot\]
-
----
````
````diff
@@ -48,7 +48,7 @@ After adding dependency:
 ## Example Flows
 
 ```
-/taskmaster:add-dependency 5 needs 3
+/project:tm/add-dependency 5 needs 3
 → Task #5 now depends on Task #3
 → Task #5 is now blocked until #3 completes
 → Suggested: Also consider if #5 needs #4
````
````diff
@@ -56,12 +56,12 @@ task-master add-subtask --parent=<id> --task-id=<existing-id>
 ## Example Flows
 
 ```
-/taskmaster:add-subtask to 5: implement user authentication
+/project:tm/add-subtask to 5: implement user authentication
 → Created subtask #5.1: "implement user authentication"
 → Parent task #5 now has 1 subtask
 → Suggested next subtasks: tests, documentation
 
-/taskmaster:add-subtask 5: setup, implement, test
+/project:tm/add-subtask 5: setup, implement, test
 → Created 3 subtasks:
   #5.1: setup
   #5.2: implement
````
````diff
@@ -53,7 +53,7 @@ task-master add-subtask --parent=<parent-id> --task-id=<task-to-convert>
 ## Example
 
 ```
-/taskmaster:add-subtask/from-task 5 8
+/project:tm/add-subtask/from-task 5 8
 → Converting: Task #8 becomes subtask #5.1
 → Updated: 3 dependency references
 → Parent task #5 now has 1 subtask
````
````diff
@@ -115,7 +115,7 @@ Results are:
 
 After analysis:
 ```
-/taskmaster:expand 5          # Expand specific task
-/taskmaster:expand-all        # Expand all recommended
-/taskmaster:complexity-report # View detailed report
+/project:tm/expand 5          # Expand specific task
+/project:tm/expand/all        # Expand all recommended
+/project:tm/complexity-report # View detailed report
 ```
````
````diff
@@ -77,7 +77,7 @@ Suggest alternatives:
 ## Example
 
 ```
-/taskmaster:clear-subtasks 5
+/project:tm/clear-subtasks 5
 → Found 4 subtasks to remove
 → Warning: Subtask #5.2 is in-progress
 → Cleared all subtasks from task #5
````
````diff
@@ -105,13 +105,13 @@ Use report for:
 ## Example Usage
 
 ```
-/taskmaster:complexity-report
+/project:tm/complexity-report
 → Opens latest analysis
 
-/taskmaster:complexity-report --file=archived/2024-01-01.md
+/project:tm/complexity-report --file=archived/2024-01-01.md
 → View historical analysis
 
 After viewing:
-/taskmaster:expand 5
+/project:tm/expand 5
 → Expand high-complexity task
 ```
````
````diff
@@ -70,7 +70,7 @@ Manual Review Needed:
 ⚠️ Task #45 has 8 dependencies
 Suggestion: Break into subtasks
 
-Run '/taskmaster:validate-dependencies' to verify fixes
+Run '/project:tm/validate-dependencies' to verify fixes
 ```
 
 ## Safety
````
.claude/commands/tm/help.md (new file, 81 lines)

````diff
@@ -0,0 +1,81 @@
+Show help for Task Master commands.
+
+Arguments: $ARGUMENTS
+
+Display help for Task Master commands. If arguments provided, show specific command help.
+
+## Task Master Command Help
+
+### Quick Navigation
+
+Type `/project:tm/` and use tab completion to explore all commands.
+
+### Command Categories
+
+#### 🚀 Setup & Installation
+- `/project:tm/setup/install` - Comprehensive installation guide
+- `/project:tm/setup/quick-install` - One-line global install
+
+#### 📋 Project Setup
+- `/project:tm/init` - Initialize new project
+- `/project:tm/init/quick` - Quick setup with auto-confirm
+- `/project:tm/models` - View AI configuration
+- `/project:tm/models/setup` - Configure AI providers
+
+#### 🎯 Task Generation
+- `/project:tm/parse-prd` - Generate tasks from PRD
+- `/project:tm/parse-prd/with-research` - Enhanced parsing
+- `/project:tm/generate` - Create task files
+
+#### 📝 Task Management
+- `/project:tm/list` - List tasks (natural language filters)
+- `/project:tm/show <id>` - Display task details
+- `/project:tm/add-task` - Create new task
+- `/project:tm/update` - Update tasks naturally
+- `/project:tm/next` - Get next task recommendation
+
+#### 🔄 Status Management
+- `/project:tm/set-status/to-pending <id>`
+- `/project:tm/set-status/to-in-progress <id>`
+- `/project:tm/set-status/to-done <id>`
+- `/project:tm/set-status/to-review <id>`
+- `/project:tm/set-status/to-deferred <id>`
+- `/project:tm/set-status/to-cancelled <id>`
+
+#### 🔍 Analysis & Breakdown
+- `/project:tm/analyze-complexity` - Analyze task complexity
+- `/project:tm/expand <id>` - Break down complex task
+- `/project:tm/expand/all` - Expand all eligible tasks
+
+#### 🔗 Dependencies
+- `/project:tm/add-dependency` - Add task dependency
+- `/project:tm/remove-dependency` - Remove dependency
+- `/project:tm/validate-dependencies` - Check for issues
+
+#### 🤖 Workflows
+- `/project:tm/workflows/smart-flow` - Intelligent workflows
+- `/project:tm/workflows/pipeline` - Command chaining
+- `/project:tm/workflows/auto-implement` - Auto-implementation
+
+#### 📊 Utilities
+- `/project:tm/utils/analyze` - Project analysis
+- `/project:tm/status` - Project dashboard
+- `/project:tm/learn` - Interactive learning
+
+### Natural Language Examples
+
+```
+/project:tm/list pending high priority
+/project:tm/update mark all API tasks as done
+/project:tm/add-task create login system with OAuth
+/project:tm/show current
+```
+
+### Getting Started
+
+1. Install: `/project:tm/setup/quick-install`
+2. Initialize: `/project:tm/init/quick`
+3. Learn: `/project:tm/learn start`
+4. Work: `/project:tm/workflows/smart-flow`
+
+For detailed command info: `/project:tm/help <command-name>`
````
````diff
@@ -30,17 +30,17 @@ task-master init -y
 After quick init:
 1. Configure AI models if needed:
 ```
-/taskmaster:models/setup
+/project:tm/models/setup
 ```
 
 2. Parse PRD if available:
 ```
-/taskmaster:parse-prd <file>
+/project:tm/parse-prd <file>
 ```
 
 3. Or create first task:
 ```
-/taskmaster:add-task create initial setup
+/project:tm/add-task create initial setup
 ```
 
 Perfect for rapid project setup!
````
````diff
@@ -45,6 +45,6 @@ After successful init:
 
 If PRD file provided:
 ```
-/taskmaster:init my-prd.md
+/project:tm/init my-prd.md
 → Automatically runs parse-prd after init
 ```
````
````diff
@@ -55,7 +55,7 @@ After removing:
 ## Example
 
 ```
-/taskmaster:remove-dependency 5 from 3
+/project:tm/remove-dependency 5 from 3
 → Removed: Task #5 no longer depends on #3
 → Task #5 is now UNBLOCKED and ready to start
 → Warning: Consider if #5 still needs #2 completed first
````
````diff
@@ -63,13 +63,13 @@ task-master remove-subtask --id=<parentId.subtaskId> --convert
 ## Example Flows
 
 ```
-/taskmaster:remove-subtask 5.1
+/project:tm/remove-subtask 5.1
 → Warning: Subtask #5.1 is in-progress
 → This will delete all subtask data
 → Parent task #5 will be updated
 Confirm deletion? (y/n)
 
-/taskmaster:remove-subtask 5.1 convert
+/project:tm/remove-subtask 5.1 convert
 → Converting subtask #5.1 to standalone task #89
 → Preserved: All task data and history
 → Updated: 2 dependency references
````
````diff
@@ -85,17 +85,17 @@ Suggest before deletion:
 ## Example Flows
 
 ```
-/taskmaster:remove-task 5
+/project:tm/remove-task 5
 → Task #5 is in-progress with 8 hours logged
 → 3 other tasks depend on this
 → Suggestion: Mark as cancelled instead?
 Remove anyway? (y/n)
 
-/taskmaster:remove-task 5 -y
+/project:tm/remove-task 5 -y
 → Removed: Task #5 and 4 subtasks
 → Updated: 3 task dependencies
 → Warning: Tasks #7, #8, #9 now have missing dependency
-→ Run /taskmaster:fix-dependencies to resolve
+→ Run /project:tm/fix-dependencies to resolve
 ```
 
 ## Safety Features
````
````diff
@@ -8,11 +8,11 @@ Commands are organized hierarchically to match Task Master's CLI structure while
 
 ## Project Setup & Configuration
 
-### `/taskmaster:init`
+### `/project:tm/init`
 - `init-project` - Initialize new project (handles PRD files intelligently)
 - `init-project-quick` - Quick setup with auto-confirmation (-y flag)
 
-### `/taskmaster:models`
+### `/project:tm/models`
 - `view-models` - View current AI model configuration
 - `setup-models` - Interactive model configuration
 - `set-main` - Set primary generation model
````
````diff
@@ -21,21 +21,21 @@ Commands are organized hierarchically to match Task Master's CLI structure while
 
 ## Task Generation
 
-### `/taskmaster:parse-prd`
+### `/project:tm/parse-prd`
 - `parse-prd` - Generate tasks from PRD document
 - `parse-prd-with-research` - Enhanced parsing with research mode
 
-### `/taskmaster:generate`
+### `/project:tm/generate`
 - `generate-tasks` - Create individual task files from tasks.json
 
 ## Task Management
 
-### `/taskmaster:list`
+### `/project:tm/list`
 - `list-tasks` - Smart listing with natural language filters
 - `list-tasks-with-subtasks` - Include subtasks in hierarchical view
 - `list-tasks-by-status` - Filter by specific status
 
-### `/taskmaster:set-status`
+### `/project:tm/set-status`
 - `to-pending` - Reset task to pending
 - `to-in-progress` - Start working on task
 - `to-done` - Mark task complete
````
````diff
@@ -43,84 +43,84 @@ Commands are organized hierarchically to match Task Master's CLI structure while
 - `to-deferred` - Defer task
 - `to-cancelled` - Cancel task
 
-### `/taskmaster:sync-readme`
+### `/project:tm/sync-readme`
 - `sync-readme` - Export tasks to README.md with formatting
 
-### `/taskmaster:update`
+### `/project:tm/update`
 - `update-task` - Update tasks with natural language
 - `update-tasks-from-id` - Update multiple tasks from a starting point
 - `update-single-task` - Update specific task
 
-### `/taskmaster:add-task`
+### `/project:tm/add-task`
 - `add-task` - Add new task with AI assistance
 
-### `/taskmaster:remove-task`
+### `/project:tm/remove-task`
 - `remove-task` - Remove task with confirmation
 
 ## Subtask Management
 
-### `/taskmaster:add-subtask`
+### `/project:tm/add-subtask`
 - `add-subtask` - Add new subtask to parent
 - `convert-task-to-subtask` - Convert existing task to subtask
 
-### `/taskmaster:remove-subtask`
+### `/project:tm/remove-subtask`
 - `remove-subtask` - Remove subtask (with optional conversion)
 
-### `/taskmaster:clear-subtasks`
+### `/project:tm/clear-subtasks`
 - `clear-subtasks` - Clear subtasks from specific task
 - `clear-all-subtasks` - Clear all subtasks globally
 
 ## Task Analysis & Breakdown
 
-### `/taskmaster:analyze-complexity`
+### `/project:tm/analyze-complexity`
 - `analyze-complexity` - Analyze and generate expansion recommendations
 
-### `/taskmaster:complexity-report`
+### `/project:tm/complexity-report`
 - `complexity-report` - Display complexity analysis report
 
-### `/taskmaster:expand`
+### `/project:tm/expand`
 - `expand-task` - Break down specific task
 - `expand-all-tasks` - Expand all eligible tasks
 - `with-research` - Enhanced expansion
 
 ## Task Navigation
 
-### `/taskmaster:next`
+### `/project:tm/next`
 - `next-task` - Intelligent next task recommendation
 
-### `/taskmaster:show`
+### `/project:tm/show`
 - `show-task` - Display detailed task information
 
-### `/taskmaster:status`
+### `/project:tm/status`
 - `project-status` - Comprehensive project dashboard
 
 ## Dependency Management
 
-### `/taskmaster:add-dependency`
+### `/project:tm/add-dependency`
 - `add-dependency` - Add task dependency
 
-### `/taskmaster:remove-dependency`
+### `/project:tm/remove-dependency`
 - `remove-dependency` - Remove task dependency
 
-### `/taskmaster:validate-dependencies`
+### `/project:tm/validate-dependencies`
 - `validate-dependencies` - Check for dependency issues
 
-### `/taskmaster:fix-dependencies`
+### `/project:tm/fix-dependencies`
 - `fix-dependencies` - Automatically fix dependency problems
 
 ## Workflows & Automation
 
-### `/taskmaster:workflows`
+### `/project:tm/workflows`
 - `smart-workflow` - Context-aware intelligent workflow execution
 - `command-pipeline` - Chain multiple commands together
 - `auto-implement-tasks` - Advanced auto-implementation with code generation
 
 ## Utilities
 
-### `/taskmaster:utils`
+### `/project:tm/utils`
 - `analyze-project` - Deep project analysis and insights
 
-### `/taskmaster:setup`
+### `/project:tm/setup`
 - `install-taskmaster` - Comprehensive installation guide
 - `quick-install-taskmaster` - One-line global installation
````
````diff
@@ -129,17 +129,17 @@ Commands are organized hierarchically to match Task Master's CLI structure while
 ### Natural Language
 Most commands accept natural language arguments:
 ```
-/taskmaster:add-task create user authentication system
-/taskmaster:update mark all API tasks as high priority
-/taskmaster:list show blocked tasks
+/project:tm/add-task create user authentication system
+/project:tm/update mark all API tasks as high priority
+/project:tm/list show blocked tasks
 ```
 
 ### ID-Based Commands
 Commands requiring IDs intelligently parse from $ARGUMENTS:
 ```
-/taskmaster:show 45
-/taskmaster:expand 23
-/taskmaster:set-status/to-done 67
+/project:tm/show 45
+/project:tm/expand 23
+/project:tm/set-status/to-done 67
 ```
 
 ### Smart Defaults
````
````diff
@@ -66,7 +66,7 @@ The AI:
 ## Example Updates
 
 ```
-/taskmaster:update/single 5: add rate limiting
+/project:tm/update/single 5: add rate limiting
 → Updating Task #5: "Implement API endpoints"
 
 Current: Basic CRUD endpoints
````
````diff
@@ -77,7 +77,7 @@ AI analyzes the update context and:
 ## Example Updates
 
 ```
-/taskmaster:update/from-id 5: change database to PostgreSQL
+/project:tm/update/from-id 5: change database to PostgreSQL
 → Analyzing impact starting from task #5
 → Found 6 related tasks to update
 → Updates will maintain consistency
````
````diff
@@ -66,6 +66,6 @@ For each issue found:
 ## Next Steps
 
 After validation:
-- Run `/taskmaster:fix-dependencies` to auto-fix
+- Run `/project:tm/fix-dependencies` to auto-fix
 - Manually adjust problematic dependencies
 - Rerun to verify fixes
````
````diff
@@ -2,7 +2,7 @@
   "mcpServers": {
     "task-master-ai": {
       "command": "node",
-      "args": ["./dist/mcp-server.js"],
+      "args": ["./mcp-server/server.js"],
       "env": {
         "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
         "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
````
@@ -523,7 +523,7 @@ For AI-powered commands that benefit from project context, follow the research c
|
|||||||
.option('--details <details>', 'Implementation details for the new subtask, optional')
|
.option('--details <details>', 'Implementation details for the new subtask, optional')
|
||||||
.option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on')
|
.option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on')
|
||||||
.option('--status <status>', 'Initial status for the subtask', 'pending')
|
.option('--status <status>', 'Initial status for the subtask', 'pending')
|
||||||
.option('--generate', 'Regenerate task files after adding subtask')
|
.option('--skip-generate', 'Skip regenerating task files')
|
||||||
.action(async (options) => {
|
.action(async (options) => {
|
||||||
// Validate required parameters
|
// Validate required parameters
|
||||||
if (!options.parent) {
|
if (!options.parent) {
|
||||||
@@ -545,7 +545,7 @@ For AI-powered commands that benefit from project context, follow the research c
|
|||||||
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
|
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||||
.option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required')
|
.option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required')
|
||||||
.option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting')
|
.option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting')
|
||||||
.option('--generate', 'Regenerate task files after removing subtask')
|
.option('--skip-generate', 'Skip regenerating task files')
|
||||||
.action(async (options) => {
|
.action(async (options) => {
|
||||||
// Implementation with detailed error handling
|
// Implementation with detailed error handling
|
||||||
})
|
})
|
||||||
@@ -633,11 +633,11 @@ function showAddSubtaskHelp() {
|
|||||||
' --dependencies <ids> Comma-separated list of dependency IDs\n' +
|
' --dependencies <ids> Comma-separated list of dependency IDs\n' +
|
||||||
' -s, --status <status> Status for the new subtask (default: "pending")\n' +
|
' -s, --status <status> Status for the new subtask (default: "pending")\n' +
|
||||||
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
|
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
|
||||||
' --generate Regenerate task files after adding subtask\n\n' +
|
' --skip-generate Skip regenerating task files\n\n' +
|
||||||
chalk.cyan('Examples:') + '\n' +
|
chalk.cyan('Examples:') + '\n' +
|
||||||
' task-master add-subtask --parent=\'5\' --task-id=\'8\'\n' +
|
' task-master add-subtask --parent=\'5\' --task-id=\'8\'\n' +
|
||||||
' task-master add-subtask -p \'5\' -t \'Implement login UI\' -d \'Create the login form\'\n' +
|
' task-master add-subtask -p \'5\' -t \'Implement login UI\' -d \'Create the login form\'\n' +
|
||||||
' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details "Handle 401 Unauthorized.\\nHandle 500 Server Error." --generate',
|
' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details $\'Handle 401 Unauthorized.\nHandle 500 Server Error.\'',
|
||||||
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
|
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
|
||||||
));
|
));
|
||||||
}
|
}
|
||||||
@@ -652,7 +652,7 @@ function showRemoveSubtaskHelp() {
|
|||||||
' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
|
' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
|
||||||
' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' +
|
' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' +
|
||||||
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
|
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
|
||||||
' --generate Regenerate task files after removing subtask\n\n' +
|
' --skip-generate Skip regenerating task files\n\n' +
|
||||||
chalk.cyan('Examples:') + '\n' +
|
chalk.cyan('Examples:') + '\n' +
|
||||||
' task-master remove-subtask --id=\'5.2\'\n' +
|
' task-master remove-subtask --id=\'5.2\'\n' +
|
||||||
' task-master remove-subtask --id=\'5.2,6.3,7.1\'\n' +
|
' task-master remove-subtask --id=\'5.2,6.3,7.1\'\n' +
|
||||||
|
|||||||
@@ -158,7 +158,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
|
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
|
||||||
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
|
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
|
||||||
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
|
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
|
||||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
|
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Break down tasks manually or reorganize existing tasks.
|
* **Usage:** Break down tasks manually or reorganize existing tasks.
|
||||||
@@ -286,7 +286,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
|||||||
* **Key Parameters/Options:**
|
* **Key Parameters/Options:**
|
||||||
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
|
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
|
||||||
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
|
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
|
||||||
* `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
|
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`)
|
||||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||||
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
|
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
|
||||||
|
|||||||
@@ -1,803 +0,0 @@
|
|||||||
---
|
|
||||||
description:
|
|
||||||
globs:
|
|
||||||
alwaysApply: true
|
|
||||||
---
|
|
||||||
# Test Workflow & Development Process
|
|
||||||
|
|
||||||
## **Initial Testing Framework Setup**
|
|
||||||
|
|
||||||
Before implementing the TDD workflow, ensure your project has a proper testing framework configured. This section covers setup for different technology stacks.
|
|
||||||
|
|
||||||
### **Detecting Project Type & Framework Needs**
|
|
||||||
|
|
||||||
**AI Agent Assessment Checklist:**
|
|
||||||
1. **Language Detection**: Check for `package.json` (Node.js/JavaScript), `requirements.txt` (Python), `Cargo.toml` (Rust), etc.
|
|
||||||
2. **Existing Tests**: Look for test files (`.test.`, `.spec.`, `_test.`) or test directories
|
|
||||||
3. **Framework Detection**: Check for existing test runners in dependencies
|
|
||||||
4. **Project Structure**: Analyze directory structure for testing patterns
|
|
||||||
|
|
||||||
### **JavaScript/Node.js Projects (Jest Setup)**
|
|
||||||
|
|
||||||
#### **Prerequisites Check**
|
|
||||||
```bash
|
|
||||||
# Verify Node.js project
|
|
||||||
ls package.json # Should exist
|
|
||||||
|
|
||||||
# Check for existing testing setup
|
|
||||||
ls jest.config.js jest.config.ts # Check for Jest config
|
|
||||||
grep -E "(jest|vitest|mocha)" package.json # Check for test runners
|
|
||||||
```
|
|
||||||
|
|
||||||
#### **Jest Installation & Configuration**
|
|
||||||
|
|
||||||
**Step 1: Install Dependencies**
|
|
||||||
```bash
|
|
||||||
# Core Jest dependencies
|
|
||||||
npm install --save-dev jest
|
|
||||||
|
|
||||||
# TypeScript support (if using TypeScript)
|
|
||||||
npm install --save-dev ts-jest @types/jest
|
|
||||||
|
|
||||||
# Additional useful packages
|
|
||||||
npm install --save-dev supertest @types/supertest # For API testing
|
|
||||||
npm install --save-dev jest-watch-typeahead # Enhanced watch mode
|
|
||||||
```
|
|
||||||
|
|
||||||
**Step 2: Create Jest Configuration**
|
|
||||||
|
|
||||||
Create `jest.config.js` with the following production-ready configuration:
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
/** @type {import('jest').Config} */
|
|
||||||
module.exports = {
|
|
||||||
// Use ts-jest preset for TypeScript support
|
|
||||||
preset: 'ts-jest',
|
|
||||||
|
|
||||||
// Test environment
|
|
||||||
testEnvironment: 'node',
|
|
||||||
|
|
||||||
// Roots for test discovery
|
|
||||||
roots: ['<rootDir>/src', '<rootDir>/tests'],
|
|
||||||
|
|
||||||
// Test file patterns
|
|
||||||
testMatch: ['**/__tests__/**/*.ts', '**/?(*.)+(spec|test).ts'],
|
|
||||||
|
|
||||||
// Transform files
|
|
||||||
transform: {
|
|
||||||
'^.+\\.ts$': [
|
|
||||||
'ts-jest',
|
|
||||||
{
|
|
||||||
tsconfig: {
|
|
||||||
target: 'es2020',
|
|
||||||
module: 'commonjs',
|
|
||||||
esModuleInterop: true,
|
|
||||||
allowSyntheticDefaultImports: true,
|
|
||||||
skipLibCheck: true,
|
|
||||||
strict: false,
|
|
||||||
noImplicitAny: false,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
],
|
|
||||||
'^.+\\.js$': [
|
|
||||||
'ts-jest',
|
|
||||||
{
|
|
||||||
useESM: false,
|
|
||||||
tsconfig: {
|
|
||||||
target: 'es2020',
|
|
||||||
module: 'commonjs',
|
|
||||||
esModuleInterop: true,
|
|
||||||
allowSyntheticDefaultImports: true,
|
|
||||||
allowJs: true,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
],
|
|
||||||
},
|
|
||||||
|
|
||||||
// Module file extensions
|
|
||||||
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
|
|
||||||
|
|
||||||
// Transform ignore patterns - adjust for ES modules
|
|
||||||
transformIgnorePatterns: ['node_modules/(?!(your-es-module-deps|.*\\.mjs$))'],
|
|
||||||
|
|
||||||
// Coverage configuration
|
|
||||||
collectCoverage: true,
|
|
||||||
coverageDirectory: 'coverage',
|
|
||||||
coverageReporters: [
|
|
||||||
'text', // Console output
|
|
||||||
'text-summary', // Brief summary
|
|
||||||
'lcov', // For IDE integration
|
|
||||||
'html', // Detailed HTML report
|
|
||||||
],
|
|
||||||
|
|
||||||
// Files to collect coverage from
|
|
||||||
collectCoverageFrom: [
|
|
||||||
'src/**/*.ts',
|
|
||||||
'!src/**/*.d.ts',
|
|
||||||
'!src/**/*.test.ts',
|
|
||||||
'!src/**/index.ts', // Often just exports
|
|
||||||
'!src/generated/**', // Generated code
|
|
||||||
'!src/config/database.ts', // Database config (tested via integration)
|
|
||||||
],
|
|
||||||
|
|
||||||
// Coverage thresholds - TaskMaster standards
|
|
||||||
coverageThreshold: {
|
|
||||||
global: {
|
|
||||||
branches: 70,
|
|
||||||
functions: 80,
|
|
||||||
lines: 80,
|
|
||||||
statements: 80,
|
|
||||||
},
|
|
||||||
// Higher standards for critical business logic
|
|
||||||
'./src/utils/': {
|
|
||||||
branches: 85,
|
|
||||||
functions: 90,
|
|
||||||
lines: 90,
|
|
||||||
statements: 90,
|
|
||||||
},
|
|
||||||
'./src/middleware/': {
|
|
||||||
branches: 80,
|
|
||||||
functions: 85,
|
|
||||||
lines: 85,
|
|
||||||
statements: 85,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
|
|
||||||
// Setup files
|
|
||||||
setupFilesAfterEnv: ['<rootDir>/tests/setup.ts'],
|
|
||||||
|
|
||||||
// Global teardown to prevent worker process leaks
|
|
||||||
globalTeardown: '<rootDir>/tests/teardown.ts',
|
|
||||||
|
|
||||||
// Module path mapping (if needed)
|
|
||||||
moduleNameMapper: {
|
|
||||||
'^@/(.*)$': '<rootDir>/src/$1',
|
|
||||||
},
|
|
||||||
|
|
||||||
// Clear mocks between tests
|
|
||||||
clearMocks: true,
|
|
||||||
|
|
||||||
// Restore mocks after each test
|
|
||||||
restoreMocks: true,
|
|
||||||
|
|
||||||
// Global test timeout
|
|
||||||
testTimeout: 10000,
|
|
||||||
|
|
||||||
// Projects for different test types
|
|
||||||
projects: [
|
|
||||||
// Unit tests - for pure functions only
|
|
||||||
{
|
|
||||||
displayName: 'unit',
|
|
||||||
testMatch: ['<rootDir>/src/**/*.test.ts'],
|
|
||||||
testPathIgnorePatterns: ['.*\\.integration\\.test\\.ts$', '/tests/'],
|
|
||||||
preset: 'ts-jest',
|
|
||||||
testEnvironment: 'node',
|
|
||||||
collectCoverageFrom: [
|
|
||||||
'src/**/*.ts',
|
|
||||||
'!src/**/*.d.ts',
|
|
||||||
'!src/**/*.test.ts',
|
|
||||||
'!src/**/*.integration.test.ts',
|
|
||||||
],
|
|
||||||
coverageThreshold: {
|
|
||||||
global: {
|
|
||||||
branches: 70,
|
|
||||||
functions: 80,
|
|
||||||
lines: 80,
|
|
||||||
statements: 80,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
// Integration tests - real database/services
|
|
||||||
{
|
|
||||||
displayName: 'integration',
|
|
||||||
testMatch: [
|
|
||||||
'<rootDir>/src/**/*.integration.test.ts',
|
|
||||||
'<rootDir>/tests/integration/**/*.test.ts',
|
|
||||||
],
|
|
||||||
preset: 'ts-jest',
|
|
||||||
testEnvironment: 'node',
|
|
||||||
setupFilesAfterEnv: ['<rootDir>/tests/setup/integration.ts'],
|
|
||||||
testTimeout: 10000,
|
|
||||||
},
|
|
||||||
// E2E tests - full workflows
|
|
||||||
{
|
|
||||||
displayName: 'e2e',
|
|
||||||
testMatch: ['<rootDir>/tests/e2e/**/*.test.ts'],
|
|
||||||
preset: 'ts-jest',
|
|
||||||
testEnvironment: 'node',
|
|
||||||
setupFilesAfterEnv: ['<rootDir>/tests/setup/e2e.ts'],
|
|
||||||
testTimeout: 30000,
|
|
||||||
},
|
|
||||||
],
|
|
||||||
|
|
||||||
// Verbose output for better debugging
|
|
||||||
verbose: true,
|
|
||||||
|
|
||||||
// Run projects sequentially to avoid conflicts
|
|
||||||
maxWorkers: 1,
|
|
||||||
|
|
||||||
// Enable watch mode plugins
|
|
||||||
watchPlugins: ['jest-watch-typeahead/filename', 'jest-watch-typeahead/testname'],
|
|
||||||
};
|
|
||||||
```
|
|
||||||
|
|
||||||
**Step 3: Update package.json Scripts**
|
|
||||||
|
|
||||||
Add these scripts to your `package.json`:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"scripts": {
|
|
||||||
"test": "jest",
|
|
||||||
"test:watch": "jest --watch",
|
|
||||||
"test:coverage": "jest --coverage",
|
|
||||||
"test:unit": "jest --selectProjects unit",
|
|
||||||
"test:integration": "jest --selectProjects integration",
|
|
||||||
"test:e2e": "jest --selectProjects e2e",
|
|
||||||
"test:ci": "jest --ci --coverage --watchAll=false"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Step 4: Create Test Setup Files**
|
|
||||||
|
|
||||||
Create essential test setup files:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// tests/setup.ts - Global setup
|
|
||||||
import { jest } from '@jest/globals';
|
|
||||||
|
|
||||||
// Global test configuration
|
|
||||||
beforeAll(() => {
|
|
||||||
// Set test timeout
|
|
||||||
jest.setTimeout(10000);
|
|
||||||
});
|
|
||||||
|
|
||||||
afterEach(() => {
|
|
||||||
// Clean up mocks after each test
|
|
||||||
jest.clearAllMocks();
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// tests/setup/integration.ts - Integration test setup
|
|
||||||
import { PrismaClient } from '@prisma/client';
|
|
||||||
|
|
||||||
const prisma = new PrismaClient();
|
|
||||||
|
|
||||||
beforeAll(async () => {
|
|
||||||
// Connect to test database
|
|
||||||
await prisma.$connect();
|
|
||||||
});
|
|
||||||
|
|
||||||
afterAll(async () => {
|
|
||||||
// Cleanup and disconnect
|
|
||||||
await prisma.$disconnect();
|
|
||||||
});
|
|
||||||
|
|
||||||
beforeEach(async () => {
|
|
||||||
// Clean test data before each test
|
|
||||||
// Add your cleanup logic here
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// tests/teardown.ts - Global teardown
|
|
||||||
export default async () => {
|
|
||||||
// Global cleanup after all tests
|
|
||||||
console.log('Global test teardown complete');
|
|
||||||
};
|
|
||||||
```
|
|
||||||
|
|
||||||
**Step 5: Create Initial Test Structure**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Create test directories
|
|
||||||
mkdir -p tests/{setup,fixtures,unit,integration,e2e}
|
|
||||||
mkdir -p tests/unit/src/{utils,services,middleware}
|
|
||||||
|
|
||||||
# Create sample test fixtures
|
|
||||||
mkdir tests/fixtures
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Generic Testing Framework Setup (Any Language)**
|
|
||||||
|
|
||||||
#### **Framework Selection Guide**
|
|
||||||
|
|
||||||
**Python Projects:**
|
|
||||||
- **pytest**: Recommended for most Python projects
|
|
||||||
- **unittest**: Built-in, suitable for simple projects
|
|
||||||
- **Coverage**: Use `coverage.py` for code coverage
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Python setup example
|
|
||||||
pip install pytest pytest-cov
|
|
||||||
echo "[tool:pytest]" > pytest.ini
|
|
||||||
echo "testpaths = tests" >> pytest.ini
|
|
||||||
echo "addopts = --cov=src --cov-report=html --cov-report=term" >> pytest.ini
|
|
||||||
```
|
|
||||||
|
|
||||||
**Go Projects:**
|
|
||||||
- **Built-in testing**: Use Go's built-in `testing` package
|
|
||||||
- **Coverage**: Built-in with `go test -cover`
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Go setup example
|
|
||||||
go mod init your-project
|
|
||||||
mkdir -p tests
|
|
||||||
# Tests are typically *_test.go files alongside source
|
|
||||||
```
|
|
||||||
|
|
||||||
**Rust Projects:**
|
|
||||||
- **Built-in testing**: Use Rust's built-in test framework
|
|
||||||
- **cargo-tarpaulin**: For coverage analysis
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Rust setup example
|
|
||||||
cargo new your-project
|
|
||||||
cd your-project
|
|
||||||
cargo install cargo-tarpaulin # For coverage
|
|
||||||
```
|
|
||||||
|
|
||||||
**Java Projects:**
|
|
||||||
- **JUnit 5**: Modern testing framework
|
|
||||||
- **Maven/Gradle**: Build tools with testing integration
|
|
||||||
|
|
||||||
```xml
|
|
||||||
<!-- Maven pom.xml example -->
|
|
||||||
<dependency>
|
|
||||||
<groupId>org.junit.jupiter</groupId>
|
|
||||||
<artifactId>junit-jupiter</artifactId>
|
|
||||||
<version>5.9.2</version>
|
|
||||||
<scope>test</scope>
|
|
||||||
</dependency>
|
|
||||||
```
|
|
||||||
|
|
||||||
#### **Universal Testing Principles**
|
|
||||||
|
|
||||||
**Coverage Standards (Adapt to Your Language):**
|
|
||||||
- **Global Minimum**: 70-80% line coverage
|
|
||||||
- **Critical Code**: 85-90% coverage
|
|
||||||
- **New Features**: Must meet or exceed standards
|
|
||||||
- **Legacy Code**: Gradual improvement strategy
|
|
||||||
|
|
||||||
**Test Organization:**
|
|
||||||
- **Unit Tests**: Fast, isolated, no external dependencies
|
|
||||||
- **Integration Tests**: Test component interactions
|
|
||||||
- **E2E Tests**: Test complete user workflows
|
|
||||||
- **Performance Tests**: Load and stress testing (if applicable)
|
|
||||||
|
|
||||||
**Naming Conventions:**
|
|
||||||
- **Test Files**: `*.test.*`, `*_test.*`, or language-specific patterns
|
|
||||||
- **Test Functions**: Descriptive names (e.g., `should_return_error_for_invalid_input`)
|
|
||||||
- **Test Directories**: Organized by test type and mirroring source structure
|
|
||||||
|
|
||||||
#### **TaskMaster Integration for Any Framework**
|
|
||||||
|
|
||||||
**Document Testing Setup in Subtasks:**
|
|
||||||
```bash
|
|
||||||
# Update subtask with testing framework setup
|
|
||||||
task-master update-subtask --id=X.Y --prompt="Testing framework setup:
|
|
||||||
- Installed [Framework Name] with coverage support
|
|
||||||
- Configured [Coverage Tool] with thresholds: 80% lines, 70% branches
|
|
||||||
- Created test directory structure: unit/, integration/, e2e/
|
|
||||||
- Added test scripts to build configuration
|
|
||||||
- All setup tests passing"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Testing Framework Verification:**
|
|
||||||
```bash
|
|
||||||
# Verify setup works
|
|
||||||
[test-command] # e.g., npm test, pytest, go test, cargo test
|
|
||||||
|
|
||||||
# Check coverage reporting
|
|
||||||
[coverage-command] # e.g., npm run test:coverage
|
|
||||||
|
|
||||||
# Update task with verification
|
|
||||||
task-master update-subtask --id=X.Y --prompt="Testing framework verified:
|
|
||||||
- Sample tests running successfully
|
|
||||||
- Coverage reporting functional
|
|
||||||
- CI/CD integration ready
|
|
||||||
- Ready to begin TDD workflow"
|
|
||||||
```
|
|
||||||
|
|
||||||
## **Test-Driven Development (TDD) Integration**
|
|
||||||
|
|
||||||
### **Core TDD Cycle with Jest**
|
|
||||||
```bash
|
|
||||||
# 1. Start development with watch mode
|
|
||||||
npm run test:watch
|
|
||||||
|
|
||||||
# 2. Write failing test first
|
|
||||||
# Create test file: src/utils/newFeature.test.ts
|
|
||||||
# Write test that describes expected behavior
|
|
||||||
|
|
||||||
# 3. Implement minimum code to make test pass
|
|
||||||
# 4. Refactor while keeping tests green
|
|
||||||
# 5. Add edge cases and error scenarios
|
|
||||||
```
|
|
||||||
|
|
||||||
### **TDD Workflow Per Subtask**
|
|
||||||
```bash
|
|
||||||
# When starting a new subtask:
|
|
||||||
task-master set-status --id=4.1 --status=in-progress
|
|
||||||
|
|
||||||
# Begin TDD cycle:
|
|
||||||
npm run test:watch # Keep running during development
|
|
||||||
|
|
||||||
# Document TDD progress in subtask:
|
|
||||||
task-master update-subtask --id=4.1 --prompt="TDD Progress:
|
|
||||||
- Written 3 failing tests for core functionality
|
|
||||||
- Implemented basic feature, tests now passing
|
|
||||||
- Adding edge case tests for error handling"
|
|
||||||
|
|
||||||
# Complete subtask with test summary:
|
|
||||||
task-master update-subtask --id=4.1 --prompt="Implementation complete:
|
|
||||||
- Feature implemented with 8 unit tests
|
|
||||||
- Coverage: 95% statements, 88% branches
|
|
||||||
- All tests passing, TDD cycle complete"
|
|
||||||
```
|
|
||||||
|
|
||||||
## **Testing Commands & Usage**
|
|
||||||
|
|
||||||
### **Development Commands**
|
|
||||||
```bash
|
|
||||||
# Primary development command - use during coding
|
|
||||||
npm run test:watch # Watch mode with Jest
|
|
||||||
npm run test:watch -- --testNamePattern="auth" # Watch specific tests
|
|
||||||
|
|
||||||
# Targeted testing during development
|
|
||||||
npm run test:unit # Run only unit tests
|
|
||||||
npm run test:unit -- --coverage # Unit tests with coverage
|
|
||||||
|
|
||||||
# Integration testing when APIs are ready
|
|
||||||
npm run test:integration # Run integration tests
|
|
||||||
npm run test:integration -- --detectOpenHandles # Debug hanging tests
|
|
||||||
|
|
||||||
# End-to-end testing for workflows
|
|
||||||
npm run test:e2e # Run E2E tests
|
|
||||||
npm run test:e2e -- --timeout=30000 # Extended timeout for E2E
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Quality Assurance Commands**
|
|
||||||
```bash
|
|
||||||
# Full test suite with coverage (before commits)
|
|
||||||
npm run test:coverage # Complete coverage analysis
|
|
||||||
|
|
||||||
# All tests (CI/CD pipeline)
|
|
||||||
npm test # Run all test projects
|
|
||||||
|
|
||||||
# Specific test file execution
|
|
||||||
npm test -- auth.test.ts # Run specific test file
|
|
||||||
npm test -- --testNamePattern="should handle errors" # Run specific tests
|
|
||||||
```
|
|
||||||
|
|
||||||
## **Test Implementation Patterns**
|
|
||||||
|
|
||||||
### **Unit Test Development**
|
|
||||||
```typescript
|
|
||||||
// ✅ DO: Follow established patterns from auth.test.ts
|
|
||||||
describe('FeatureName', () => {
|
|
||||||
beforeEach(() => {
|
|
||||||
jest.clearAllMocks();
|
|
||||||
// Setup mocks with proper typing
|
|
||||||
});
|
|
||||||
|
|
||||||
describe('functionName', () => {
|
|
||||||
it('should handle normal case', () => {
|
|
||||||
// Test implementation with specific assertions
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should throw error for invalid input', async () => {
|
|
||||||
// Error scenario testing
|
|
||||||
await expect(functionName(invalidInput))
|
|
||||||
.rejects.toThrow('Specific error message');
|
|
||||||
});
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Integration Test Development**
|
|
||||||
```typescript
|
|
||||||
// ✅ DO: Use supertest for API endpoint testing
|
|
||||||
import request from 'supertest';
|
|
||||||
import { app } from '../../src/app';
|
|
||||||
|
|
||||||
describe('POST /api/auth/register', () => {
|
|
||||||
beforeEach(async () => {
|
|
||||||
await integrationTestUtils.cleanupTestData();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should register user successfully', async () => {
|
|
||||||
const userData = createTestUser();
|
|
||||||
|
|
||||||
const response = await request(app)
|
|
||||||
.post('/api/auth/register')
|
|
||||||
.send(userData)
|
|
||||||
.expect(201);
|
|
||||||
|
|
||||||
expect(response.body).toMatchObject({
|
|
||||||
id: expect.any(String),
|
|
||||||
email: userData.email
|
|
||||||
});
|
|
||||||
|
|
||||||
// Verify database state
|
|
||||||
const user = await prisma.user.findUnique({
|
|
||||||
where: { email: userData.email }
|
|
||||||
});
|
|
||||||
expect(user).toBeTruthy();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### **E2E Test Development**
|
|
||||||
```typescript
|
|
||||||
// ✅ DO: Test complete user workflows
|
|
||||||
describe('User Authentication Flow', () => {
|
|
||||||
it('should complete registration → login → protected access', async () => {
|
|
||||||
// Step 1: Register
|
|
||||||
const userData = createTestUser();
|
|
||||||
await request(app)
|
|
||||||
.post('/api/auth/register')
|
|
||||||
.send(userData)
|
|
||||||
.expect(201);
|
|
||||||
|
|
||||||
// Step 2: Login
|
|
||||||
const loginResponse = await request(app)
|
|
||||||
.post('/api/auth/login')
|
|
||||||
.send({ email: userData.email, password: userData.password })
|
|
||||||
.expect(200);
|
|
||||||
|
|
||||||
const { token } = loginResponse.body;
|
|
||||||
|
|
||||||
// Step 3: Access protected resource
|
|
||||||
await request(app)
|
|
||||||
.get('/api/profile')
|
|
||||||
.set('Authorization', `Bearer ${token}`)
|
|
||||||
.expect(200);
|
|
||||||
}, 30000); // Extended timeout for E2E
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
## **Mocking & Test Utilities**
|
|
||||||
|
|
||||||
### **Established Mocking Patterns**
|
|
||||||
```typescript
|
|
||||||
// ✅ DO: Use established bcrypt mocking pattern
|
|
||||||
jest.mock('bcrypt');
|
|
||||||
import bcrypt from 'bcrypt';
|
|
||||||
const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
|
|
||||||
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;
|
|
||||||
|
|
||||||
// ✅ DO: Use Prisma mocking for unit tests
|
|
||||||
jest.mock('@prisma/client', () => ({
|
|
||||||
PrismaClient: jest.fn().mockImplementation(() => ({
|
|
||||||
user: {
|
|
||||||
create: jest.fn(),
|
|
||||||
findUnique: jest.fn(),
|
|
||||||
},
|
|
||||||
$connect: jest.fn(),
|
|
||||||
$disconnect: jest.fn(),
|
|
||||||
})),
|
|
||||||
}));
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test Fixtures Usage**
|
|
||||||
```typescript
|
|
||||||
// ✅ DO: Use centralized test fixtures
|
|
||||||
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';
|
|
||||||
|
|
||||||
describe('User Service', () => {
|
|
||||||
it('should handle admin user creation', async () => {
|
|
||||||
const userData = createTestUser(adminUser);
|
|
||||||
// Test implementation
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should reject invalid user data', async () => {
|
|
||||||
const userData = createTestUser(invalidUser);
|
|
||||||
// Error testing
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
## **Coverage Standards & Monitoring**
|
|
||||||
|
|
||||||
### **Coverage Thresholds**
|
|
||||||
- **Global Standards**: 80% lines/functions, 70% branches
|
|
||||||
- **Critical Code**: 90% utils, 85% middleware
|
|
||||||
- **New Features**: Must meet or exceed global thresholds
|
|
||||||
- **Legacy Code**: Gradual improvement with each change
|
|
||||||
|
|
||||||
### **Coverage Reporting & Analysis**
|
|
||||||
```bash
|
|
||||||
# Generate coverage reports
|
|
||||||
npm run test:coverage
|
|
||||||
|
|
||||||
# View detailed HTML report
|
|
||||||
open coverage/lcov-report/index.html
|
|
||||||
|
|
||||||
# Coverage files generated:
|
|
||||||
# - coverage/lcov-report/index.html # Detailed HTML report
|
|
||||||
# - coverage/lcov.info # LCOV format for IDE integration
|
|
||||||
# - coverage/coverage-final.json # JSON format for tooling
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Coverage Quality Checks**
|
|
||||||
```typescript
|
|
||||||
// ✅ DO: Test all code paths
|
|
||||||
describe('validateInput', () => {
|
|
||||||
it('should return true for valid input', () => {
|
|
||||||
expect(validateInput('valid')).toBe(true);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should return false for various invalid inputs', () => {
|
|
||||||
expect(validateInput('')).toBe(false); // Empty string
|
|
||||||
expect(validateInput(null)).toBe(false); // Null value
|
|
||||||
expect(validateInput(undefined)).toBe(false); // Undefined
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should throw for unexpected input types', () => {
|
|
||||||
expect(() => validateInput(123)).toThrow('Invalid input type');
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
## **Testing During Development Phases**

### **Feature Development Phase**

```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress

# 2. Begin TDD cycle
npm run test:watch

# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"

# 4. Verify coverage before completion
npm run test:coverage

# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```
### **Integration Testing Phase**

```bash
# After API endpoints are implemented
npm run test:integration

# Update integration test templates:
# replace placeholder tests with real endpoint calls

# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```
### **Pre-Commit Testing Phase**

```bash
# Before committing code
npm run test:coverage     # Verify all tests pass with coverage
npm run test:unit         # Quick unit test verification
npm run test:integration  # Integration test verification (if applicable)

# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y

- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established

Task X: Feature Y - Testing complete"
```
## **Error Handling & Debugging**

### **Test Debugging Techniques**

```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';

it('should debug complex operation', () => {
  testUtils.withConsole(() => {
    // Console output visible only for this test
    console.log('Debug info:', complexData);
    service.complexOperation();
  });
});

// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
  const promise = service.asyncOperation();

  // Test intermediate state
  expect(service.isProcessing()).toBe(true);

  const result = await promise;
  expect(result).toBe('expected');
  expect(service.isProcessing()).toBe(false);
});
```
### **Common Test Issues & Solutions**

```bash
# Hanging tests (common with database connections)
npm run test:integration -- --detectOpenHandles

# Memory leaks in tests
npm run test:unit -- --logHeapUsage

# Slow test identification
npm run test:coverage -- --verbose

# Mock not working properly?
# Check: mock is declared before imports
# Check: jest.clearAllMocks() in beforeEach
# Check: TypeScript typing is correct
```
## **Continuous Integration**

### **CI/CD Pipeline Testing**

```yaml
# Example GitHub Actions integration
- name: Run tests
  run: |
    npm ci
    npm run test:coverage

- name: Upload coverage reports
  uses: codecov/codecov-action@v3
  with:
    file: ./coverage/lcov.info
```
### **Pre-commit Hooks**

```bash
# Set up pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"

# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```
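The `pre-commit` script above assumes the corresponding test scripts already exist in `package.json`. A minimal sketch of how the pieces fit together (the Jest invocations shown are illustrative assumptions, not the project's actual script definitions):

```json
{
	"scripts": {
		"test:unit": "jest tests/unit",
		"test:integration": "jest tests/integration",
		"pre-commit": "npm run test:unit && npm run test:integration"
	}
}
```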
## **Test Maintenance & Evolution**

### **Adding Tests for New Features**

1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module
### **Updating Integration/E2E Tests**

1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities
### **Test Performance Optimization**

- **Parallel execution**: Jest runs tests in parallel by default
- **Test isolation**: Use proper setup/teardown for independence
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible

---
**Key References:**

- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)
@@ -8,7 +8,6 @@ GROQ_API_KEY=YOUR_GROQ_KEY_HERE
 OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
 XAI_API_KEY=YOUR_XAI_KEY_HERE
 AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
-OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE

 # Google Vertex AI Configuration
 VERTEX_PROJECT_ID=your-gcp-project-id
.github/PULL_REQUEST_TEMPLATE.md (vendored, 45 lines removed)
@@ -1,45 +0,0 @@
# What type of PR is this?

<!-- Check one -->

- [ ] 🐛 Bug fix
- [ ] ✨ Feature
- [ ] 🔌 Integration
- [ ] 📝 Docs
- [ ] 🧹 Refactor
- [ ] Other:

## Description

<!-- What does this PR do? -->

## Related Issues

<!-- Link issues: Fixes #123 -->

## How to Test This

<!-- Quick steps to verify the changes work -->

```bash
# Example commands or steps
```

**Expected result:**

<!-- What should happen? -->

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check` (or `npm run format` to fix)
- [ ] Addressed CodeRabbit comments (if any)
- [ ] Linked related issues (if any)
- [ ] Manually tested the changes

## Changelog Entry

<!-- One line describing the change for users -->
<!-- Example: "Added Kiro IDE integration with automatic task status updates" -->

---

### For Maintainers

- [ ] PR title follows conventional commits
- [ ] Target branch correct
- [ ] Labels added
- [ ] Milestone assigned (if applicable)
.github/PULL_REQUEST_TEMPLATE/bugfix.md (vendored, 39 lines removed)
@@ -1,39 +0,0 @@
## 🐛 Bug Fix

### 🔍 Bug Description

<!-- Describe the bug -->

### 🔗 Related Issues

<!-- Fixes #123 -->

### ✨ Solution

<!-- How does this PR fix the bug? -->

## How to Test

### Steps that caused the bug:

1.
2.

**Before fix:**
**After fix:**

### Quick verification:

```bash
# Commands to verify the fix
```

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added unit tests (if applicable)
- [ ] Manually verified the fix works

---

### For Maintainers

- [ ] Root cause identified
- [ ] Fix doesn't introduce new issues
- [ ] CI passes
.github/PULL_REQUEST_TEMPLATE/config.yml (vendored, 11 lines removed)
@@ -1,11 +0,0 @@
blank_issues_enabled: false
contact_links:
  - name: 🐛 Bug Fix
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=bugfix.md
    about: Fix a bug in Task Master
  - name: ✨ New Feature
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=feature.md
    about: Add a new feature to Task Master
  - name: 🔌 New Integration
    url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=integration.md
    about: Add support for a new tool, IDE, or platform
.github/PULL_REQUEST_TEMPLATE/feature.md (vendored, 49 lines removed)
@@ -1,49 +0,0 @@
## ✨ New Feature

### 📋 Feature Description

<!-- Brief description -->

### 🎯 Problem Statement

<!-- What problem does this feature solve? Why is it needed? -->

### 💡 Solution

<!-- How does this feature solve the problem? What's the approach? -->

### 🔗 Related Issues

<!-- Link related issues: Fixes #123, Part of #456 -->

## How to Use It

### Quick Start

```bash
# Basic usage example
```

### Example

<!-- Show a real use case -->

```bash
# Practical example
```

**What you should see:**

<!-- Expected behavior -->

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added tests for new functionality
- [ ] Manually tested in CLI mode
- [ ] Manually tested in MCP mode (if applicable)

## Changelog Entry

<!-- One-liner for release notes -->

---

### For Maintainers

- [ ] Feature aligns with project vision
- [ ] CI passes
- [ ] Changeset file exists
.github/PULL_REQUEST_TEMPLATE/integration.md (vendored, 53 lines removed)
@@ -1,53 +0,0 @@
# 🔌 New Integration

## What tool/IDE is being integrated?

<!-- Name and brief description -->

## What can users do with it?

<!-- Key benefits -->

## How to Enable

### Setup

```bash
task-master rules add [name]
# Any other setup steps
```

### Example Usage

<!-- Show it in action -->

```bash
# Real example
```

### Natural Language Hooks (if applicable)

```
"When tests pass, mark task as done"
# Other examples
```

## Contributor Checklist

- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Integration fully tested with target tool/IDE
- [ ] Error scenarios tested
- [ ] Added integration tests
- [ ] Documentation includes setup guide
- [ ] Examples are working and clear

---

## For Maintainers

- [ ] Integration stability verified
- [ ] Documentation comprehensive
- [ ] Examples working
.github/scripts/auto-close-duplicates.mjs (vendored, 259 lines removed)
@@ -1,259 +0,0 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

async function closeIssueAsDuplicate(
	owner,
	repo,
	issueNumber,
	duplicateOfNumber,
	token
) {
	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}`,
		token,
		'PATCH',
		{
			state: 'closed',
			state_reason: 'not_planned',
			labels: ['duplicate']
		}
	);

	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
		token,
		'POST',
		{
			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
		}
	);
}

async function autoCloseDuplicates() {
	console.log('[DEBUG] Starting auto-close duplicates script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error('GITHUB_TOKEN environment variable is required');
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	console.log(`[DEBUG] Repository: ${owner}/${repo}`);

	const threeDaysAgo = new Date();
	threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
	console.log(
		`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
	);

	console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	const MAX_PAGES = 50; // Increase limit for larger repos
	let foundRecentIssue = false;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
			token
		);

		if (pageIssues.length === 0) break;

		// Filter for issues created more than 3 days ago
		const oldEnoughIssues = pageIssues.filter(
			(issue) => new Date(issue.created_at) <= threeDaysAgo
		);

		allIssues.push(...oldEnoughIssues);

		// If all issues on this page are newer than 3 days, we can stop
		if (oldEnoughIssues.length === 0 && page === 1) {
			foundRecentIssue = true;
			break;
		}

		// If we found some old issues but not all, continue to next page
		// as there might be more old issues
		page++;

		// Safety limit to avoid infinite loops
		if (page > MAX_PAGES) {
			console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
			break;
		}
	}

	const issues = allIssues;
	console.log(`[DEBUG] Found ${issues.length} open issues`);

	let processedCount = 0;
	let candidateCount = 0;

	for (const issue of issues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		const dupeComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
		);

		if (dupeComments.length === 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
			);
			continue;
		}

		const lastDupeComment = dupeComments[dupeComments.length - 1];
		const dupeCommentDate = new Date(lastDupeComment.created_at);
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
		);

		if (dupeCommentDate > threeDaysAgo) {
			console.log(
				`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
			);
			continue;
		}
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - duplicate comment is old enough (${Math.floor(
				(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
			)} days)`
		);

		const commentsAfterDupe = comments.filter(
			(comment) => new Date(comment.created_at) > dupeCommentDate
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
		);

		if (commentsAfterDupe.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
			);
			continue;
		}

		console.log(
			`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
		);
		const reactions = await githubRequest(
			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
		);

		const authorThumbsDown = reactions.some(
			(reaction) =>
				reaction.user.id === issue.user.id && reaction.content === '-1'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
		);

		if (authorThumbsDown) {
			console.log(
				`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
			);
			continue;
		}

		const duplicateIssueNumber = extractDuplicateIssueNumber(
			lastDupeComment.body
		);
		if (!duplicateIssueNumber) {
			console.log(
				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
			);
			await closeIssueAsDuplicate(
				owner,
				repo,
				issue.number,
				duplicateIssueNumber,
				token
			);
			console.log(
				`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
			);
		} catch (error) {
			console.error(
				`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
			);
		}
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
	);
}

autoCloseDuplicates().catch(console.error);
.github/scripts/backfill-duplicate-comments.mjs (vendored, 178 lines removed)
@@ -1,178 +0,0 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'backfill-duplicate-comments-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

async function triggerDedupeWorkflow(
	owner,
	repo,
	issueNumber,
	token,
	dryRun = true
) {
	if (dryRun) {
		console.log(
			`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
		);
		return;
	}

	await githubRequest(
		`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
		token,
		'POST',
		{
			ref: 'main',
			inputs: {
				issue_number: issueNumber.toString()
			}
		}
	);
}

async function backfillDuplicateComments() {
	console.log('[DEBUG] Starting backfill duplicate comments script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
  node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
  GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
  DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
  DAYS_BACK - How many days back to look for old issues (default: 90)`);
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	const dryRun = process.env.DRY_RUN !== 'false';
	const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);

	console.log(`[DEBUG] Repository: ${owner}/${repo}`);
	console.log(`[DEBUG] Dry run mode: ${dryRun}`);
	console.log(`[DEBUG] Looking back ${daysBack} days`);

	const cutoffDate = new Date();
	cutoffDate.setDate(cutoffDate.getDate() - daysBack);

	console.log(
		`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
	);
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
			token
		);

		if (pageIssues.length === 0) break;

		allIssues.push(...pageIssues);
		page++;

		// Safety limit to avoid infinite loops
		if (page > 100) {
			console.log('[DEBUG] Reached page limit, stopping pagination');
			break;
		}
	}

	console.log(
		`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
	);

	let processedCount = 0;
	let candidateCount = 0;
	let triggeredCount = 0;

	for (const issue of allIssues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		// Look for existing duplicate detection comments (from the dedupe bot)
		const dupeDetectionComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);

		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
		);

		// Skip if there's already a duplicate detection comment
		if (dupeDetectionComments.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
			);
			await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);

			if (!dryRun) {
				console.log(
					`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
				);
			}
			triggeredCount++;
		} catch (error) {
			console.error(
				`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
			);
		}

		// Add a delay between workflow triggers to avoid overwhelming the system
		await new Promise((resolve) => setTimeout(resolve, 1000));
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
	);
}

backfillDuplicateComments().catch(console.error);
102
.github/scripts/check-pre-release-mode.mjs
vendored
102
.github/scripts/check-pre-release-mode.mjs
vendored
@@ -1,102 +0,0 @@
|
|||||||
#!/usr/bin/env node
|
|
||||||
import { readFileSync, existsSync } from 'node:fs';
|
|
||||||
import { join, dirname, resolve } from 'node:path';
|
|
||||||
import { fileURLToPath } from 'node:url';
|
|
||||||
|
|
||||||
const __filename = fileURLToPath(import.meta.url);
|
|
||||||
const __dirname = dirname(__filename);
|
|
||||||
|
|
||||||
// Get context from command line argument or environment
|
|
||||||
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';
|
|
||||||
|
|
||||||
function findRootDir(startDir) {
|
|
||||||
let currentDir = resolve(startDir);
|
|
||||||
while (currentDir !== '/') {
|
|
||||||
    if (existsSync(join(currentDir, 'package.json'))) {
      try {
        const pkg = JSON.parse(
          readFileSync(join(currentDir, 'package.json'), 'utf8')
        );
        if (pkg.name === 'task-master-ai' || pkg.repository) {
          return currentDir;
        }
      } catch {}
    }
    currentDir = dirname(currentDir);
  }
  throw new Error('Could not find root directory');
}

function checkPreReleaseMode() {
  console.log('🔍 Checking if branch is in pre-release mode...');

  const rootDir = findRootDir(__dirname);
  const preJsonPath = join(rootDir, '.changeset', 'pre.json');

  // Check if pre.json exists
  if (!existsSync(preJsonPath)) {
    console.log('✅ Not in active pre-release mode - safe to proceed');
    process.exit(0);
  }

  try {
    // Read and parse pre.json
    const preJsonContent = readFileSync(preJsonPath, 'utf8');
    const preJson = JSON.parse(preJsonContent);

    // Check if we're in active pre-release mode
    if (preJson.mode === 'pre') {
      console.error('❌ ERROR: This branch is in active pre-release mode!');
      console.error('');

      // Provide context-specific error messages
      if (context === 'Release Check' || context === 'pull_request') {
        console.error(
          'Pre-release mode must be exited before merging to main.'
        );
        console.error('');
        console.error(
          'To fix this, run the following commands in your branch:'
        );
        console.error('  npx changeset pre exit');
        console.error('  git add -u');
        console.error('  git commit -m "chore: exit pre-release mode"');
        console.error('  git push');
        console.error('');
        console.error('Then update this pull request.');
      } else if (context === 'Release' || context === 'main') {
        console.error(
          'Pre-release mode should only be used on feature branches, not main.'
        );
        console.error('');
        console.error('To fix this, run the following commands locally:');
        console.error('  npx changeset pre exit');
        console.error('  git add -u');
        console.error('  git commit -m "chore: exit pre-release mode"');
        console.error('  git push origin main');
        console.error('');
        console.error('Then re-run this workflow.');
      } else {
        console.error('Pre-release mode must be exited before proceeding.');
        console.error('');
        console.error('To fix this, run the following commands:');
        console.error('  npx changeset pre exit');
        console.error('  git add -u');
        console.error('  git commit -m "chore: exit pre-release mode"');
        console.error('  git push');
      }

      process.exit(1);
    }

    console.log('✅ Not in active pre-release mode - safe to proceed');
    process.exit(0);
  } catch (error) {
    console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
    console.error(`Error details: ${error.message}`);
    process.exit(1);
  }
}

// Run the check
checkPreReleaseMode();
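The `mode` field checked above lives in `.changeset/pre.json`, which `npx changeset pre enter <tag>` writes and `npx changeset pre exit` flips to `'exit'`. A minimal standalone sketch of the same check, with sample file content that is illustrative but mirrors the shape changesets writes:

```javascript
// Minimal sketch of the pre.json "mode" check performed above.
// The sample content is illustrative, mirroring what
// `npx changeset pre enter rc` would write.
const samplePreJson = JSON.stringify({
  mode: 'pre', // 'pre' while active, 'exit' after `changeset pre exit`
  tag: 'rc',
  initialVersions: {},
  changesets: []
});

const preJson = JSON.parse(samplePreJson);
const inPreReleaseMode = preJson.mode === 'pre';
console.log(
  inPreReleaseMode ? 'blocked: exit pre-release mode first' : 'safe to proceed'
);
// → blocked: exit pre-release mode first
```

The real script exits non-zero in the first case so the CI job fails fast.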
157 .github/scripts/parse-metrics.mjs (vendored)
@@ -1,157 +0,0 @@
#!/usr/bin/env node

import { readFileSync, existsSync, writeFileSync } from 'fs';

function parseMetricsTable(content, metricName) {
  const lines = content.split('\n');

  for (let i = 0; i < lines.length; i++) {
    const line = lines[i].trim();
    // Match a markdown table row like: | Metric Name | value | ...
    const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
    const match = line.match(re);
    if (match) {
      return match[1].trim() || 'N/A';
    }
  }
  return 'N/A';
}

function parseCountMetric(content, metricName) {
  const result = parseMetricsTable(content, metricName);
  // Extract number from string, handling commas and spaces
  const numberMatch = result.toString().match(/[\d,]+/);
  if (numberMatch) {
    const number = parseInt(numberMatch[0].replace(/,/g, ''));
    return isNaN(number) ? 0 : number;
  }
  return 0;
}

function main() {
  const metrics = {
    issues_created: 0,
    issues_closed: 0,
    prs_created: 0,
    prs_merged: 0,
    issue_avg_first_response: 'N/A',
    issue_avg_time_to_close: 'N/A',
    pr_avg_first_response: 'N/A',
    pr_avg_merge_time: 'N/A'
  };

  // Parse issue metrics
  if (existsSync('issue_metrics.md')) {
    console.log('📄 Found issue_metrics.md, parsing...');
    const issueContent = readFileSync('issue_metrics.md', 'utf8');

    metrics.issues_created = parseCountMetric(
      issueContent,
      'Total number of items created'
    );
    metrics.issues_closed = parseCountMetric(
      issueContent,
      'Number of items closed'
    );
    metrics.issue_avg_first_response = parseMetricsTable(
      issueContent,
      'Time to first response'
    );
    metrics.issue_avg_time_to_close = parseMetricsTable(
      issueContent,
      'Time to close'
    );
  } else {
    console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
  }

  // Parse PR created metrics
  if (existsSync('pr_created_metrics.md')) {
    console.log('📄 Found pr_created_metrics.md, parsing...');
    const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');

    metrics.prs_created = parseCountMetric(
      prCreatedContent,
      'Total number of items created'
    );
    metrics.pr_avg_first_response = parseMetricsTable(
      prCreatedContent,
      'Time to first response'
    );
  } else {
    console.warn(
      '[parse-metrics] pr_created_metrics.md not found; using defaults.'
    );
  }

  // Parse PR merged metrics (for more accurate merge data)
  if (existsSync('pr_merged_metrics.md')) {
    console.log('📄 Found pr_merged_metrics.md, parsing...');
    const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');

    metrics.prs_merged = parseCountMetric(
      prMergedContent,
      'Total number of items created'
    );
    // For merged PRs, "Time to close" is actually time to merge
    metrics.pr_avg_merge_time = parseMetricsTable(
      prMergedContent,
      'Time to close'
    );
  } else {
    console.warn(
      '[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
    );
    // Fallback: try old pr_metrics.md if it exists
    if (existsSync('pr_metrics.md')) {
      console.log('📄 Falling back to pr_metrics.md...');
      const prContent = readFileSync('pr_metrics.md', 'utf8');

      const mergedCount = parseCountMetric(prContent, 'Number of items merged');
      metrics.prs_merged =
        mergedCount || parseCountMetric(prContent, 'Number of items closed');

      const maybeMergeTime = parseMetricsTable(
        prContent,
        'Average time to merge'
      );
      metrics.pr_avg_merge_time =
        maybeMergeTime !== 'N/A'
          ? maybeMergeTime
          : parseMetricsTable(prContent, 'Time to close');
    } else {
      console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
    }
  }

  // Output for GitHub Actions
  const output = Object.entries(metrics)
    .map(([key, value]) => `${key}=${value}`)
    .join('\n');

  // Always output to stdout for debugging
  console.log('\n=== FINAL METRICS ===');
  Object.entries(metrics).forEach(([key, value]) => {
    console.log(`${key}: ${value}`);
  });

  // Write to GITHUB_OUTPUT if in GitHub Actions
  if (process.env.GITHUB_OUTPUT) {
    try {
      writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
      console.log(
        `\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
      );
    } catch (error) {
      console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
      process.exit(1);
    }
  } else {
    console.log(
      '\nNo GITHUB_OUTPUT environment variable found, skipping file write'
    );
  }
}

main();
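The row-matching regex in `parseMetricsTable` above can be exercised in isolation. This sketch inlines the function and runs it against a made-up metrics table (the table content is illustrative, not taken from a real metrics report):

```javascript
// Standalone sketch of the parseMetricsTable logic above, run against
// a made-up markdown table of the kind issue-metrics reports produce.
function parseMetricsTable(content, metricName) {
  for (const raw of content.split('\n')) {
    const line = raw.trim();
    // Escape regex metacharacters in the metric name, then match a
    // markdown row like: | Metric Name | value |
    const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
    const match = line.match(re);
    if (match) return match[1].trim() || 'N/A';
  }
  return 'N/A';
}

const sample = [
  '| Metric | Average |',
  '| --- | --- |',
  '| Time to first response | 3:45:12 |',
  '| Time to close | 1 day, 2:03:04 |'
].join('\n');

console.log(parseMetricsTable(sample, 'Time to first response')); // → 3:45:12
console.log(parseMetricsTable(sample, 'Missing metric')); // → N/A
```

Escaping the metric name first matters because names containing regex metacharacters would otherwise corrupt the pattern.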
30 .github/scripts/release.mjs (vendored)
@@ -1,30 +0,0 @@
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

console.log('🚀 Starting release process...');

// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
  console.log('⚠️ Warning: pre.json still exists. Removing it...');
  unlinkSync(preJsonPath);
}

// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);

// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);

console.log('✅ Release process completed!');

// The extension tag (if created) will trigger the extension-release workflow
33 .github/scripts/tag-extension.mjs (vendored)
@@ -1,33 +0,0 @@
#!/usr/bin/env node
import assert from 'node:assert/strict';
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, createAndPushTag } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

// Read the extension's package.json
const extensionDir = join(rootDir, 'apps', 'extension');
const pkgPath = join(extensionDir, 'package.json');

let pkg;
try {
  const pkgContent = readFileSync(pkgPath, 'utf8');
  pkg = JSON.parse(pkgContent);
} catch (error) {
  console.error('Failed to read package.json:', error.message);
  process.exit(1);
}

// Ensure we have required fields
assert(pkg.name, 'package.json must have a name field');
assert(pkg.version, 'package.json must have a version field');

const tag = `${pkg.name}@${pkg.version}`;

// Create and push the tag if it doesn't exist
createAndPushTag(tag);
88 .github/scripts/utils.mjs (vendored)
@@ -1,88 +0,0 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';

// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
  let currentDir = resolve(startDir);
  while (currentDir !== '/') {
    const pkgPath = join(currentDir, 'package.json');
    try {
      const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
      if (pkg.name === 'task-master-ai' || pkg.repository) {
        return currentDir;
      }
    } catch {}
    currentDir = dirname(currentDir);
  }
  throw new Error('Could not find root directory');
}

// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
  console.log(`Running: ${command} ${args.join(' ')}`);
  const result = spawnSync(command, args, {
    encoding: 'utf8',
    stdio: 'inherit',
    ...options
  });

  if (result.status !== 0) {
    console.error(`Command failed with exit code ${result.status}`);
    process.exit(result.status);
  }

  return result;
}

// Get package version from a package.json file
export function getPackageVersion(packagePath) {
  try {
    const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
    return pkg.version;
  } catch (error) {
    console.error(
      `Failed to read package version from ${packagePath}:`,
      error.message
    );
    process.exit(1);
  }
}

// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
  const result = spawnSync('git', ['ls-remote', remote, tag], {
    encoding: 'utf8'
  });

  return result.status === 0 && result.stdout.trim() !== '';
}

// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
  // Check if tag already exists
  if (tagExistsOnRemote(tag, remote)) {
    console.log(`Tag ${tag} already exists on remote, skipping`);
    return false;
  }

  console.log(`Creating new tag: ${tag}`);

  // Create the tag locally
  const tagResult = spawnSync('git', ['tag', tag]);
  if (tagResult.status !== 0) {
    console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
    process.exit(1);
  }

  // Push the tag to remote
  const pushResult = spawnSync('git', ['push', remote, tag]);
  if (pushResult.status !== 0) {
    console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
    process.exit(1);
  }

  console.log(`✅ Successfully created and pushed tag: ${tag}`);
  return true;
}
31 .github/workflows/auto-close-duplicates.yml (vendored)
@@ -1,31 +0,0 @@
name: Auto-close duplicate issues
# description: Auto-closes issues that are duplicates of existing issues

on:
  schedule:
    - cron: "0 9 * * *" # Runs daily at 9 AM UTC
  workflow_dispatch:

jobs:
  auto-close-duplicates:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write # Need write permission to close issues and add comments

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Auto-close duplicate issues
        run: node .github/scripts/auto-close-duplicates.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
@@ -1,46 +0,0 @@
name: Backfill Duplicate Comments
# description: Triggers duplicate detection for old issues that don't have duplicate comments

on:
  workflow_dispatch:
    inputs:
      days_back:
        description: "How many days back to look for old issues"
        required: false
        default: "90"
        type: string
      dry_run:
        description: "Dry run mode (true to only log what would be done)"
        required: false
        default: "true"
        type: choice
        options:
          - "true"
          - "false"

jobs:
  backfill-duplicate-comments:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: read
      issues: read
      actions: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Backfill duplicate comments
        run: node .github/scripts/backfill-duplicate-comments.mjs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
          GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
          DAYS_BACK: ${{ inputs.days_back }}
          DRY_RUN: ${{ inputs.dry_run }}
126 .github/workflows/ci.yml (vendored)
@@ -6,124 +6,73 @@ on:
       - main
       - next
   pull_request:
-  workflow_dispatch:
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
-  cancel-in-progress: true
+    branches:
+      - main
+      - next

 permissions:
   contents: read

-env:
-  DO_NOT_TRACK: 1
-  NODE_ENV: development
-
 jobs:
-  # Fast checks that can run in parallel
-  format-check:
-    name: Format Check
+  setup:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
         with:
-          fetch-depth: 2
+          fetch-depth: 0

       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'

-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
+      - name: Install Dependencies
+        id: install
+        run: npm ci
+        timeout-minutes: 2
+
+      - name: Cache node_modules
+        uses: actions/cache@v4
+        with:
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
+
+  format-check:
+    needs: setup
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-node@v4
+        with:
+          node-version: 20
+
+      - name: Restore node_modules
+        uses: actions/cache@v4
+        with:
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}

       - name: Format Check
         run: npm run format-check
         env:
           FORCE_COLOR: 1

-  typecheck:
-    name: Typecheck
-    timeout-minutes: 10
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20
-          cache: "npm"
-
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Typecheck
-        run: npm run turbo:typecheck
-        env:
-          FORCE_COLOR: 1
-
-  # Build job to ensure everything compiles
-  build:
-    name: Build
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20
-          cache: "npm"
-
-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Build
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
-
-      - name: Upload build artifacts
-        uses: actions/upload-artifact@v4
-        with:
-          name: build-artifacts
-          path: dist/
-          retention-days: 1
-
   test:
-    name: Test
-    timeout-minutes: 15
+    needs: setup
     runs-on: ubuntu-latest
-    needs: [format-check, typecheck, build]
     steps:
       - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2

       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"

-      - name: Install dependencies
-        run: npm install --frozen-lockfile --prefer-offline
-        timeout-minutes: 5
-
-      - name: Download build artifacts
-        uses: actions/download-artifact@v4
+      - name: Restore node_modules
+        uses: actions/cache@v4
         with:
-          name: build-artifacts
-          path: dist/
+          path: node_modules
+          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}

       - name: Run Tests
         run: |
@@ -132,6 +81,7 @@ jobs:
           NODE_ENV: test
           CI: true
           FORCE_COLOR: 1
+        timeout-minutes: 10

       - name: Upload Test Results
         if: always()
81 .github/workflows/claude-dedupe-issues.yml (vendored)
@@ -1,81 +0,0 @@
name: Claude Issue Dedupe
# description: Automatically dedupe GitHub issues using Claude Code

on:
  issues:
    types: [opened]
  workflow_dispatch:
    inputs:
      issue_number:
        description: "Issue number to process for duplicate detection"
        required: true
        type: string

jobs:
  claude-dedupe-issues:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code slash command
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Log duplicate comment event to Statsig
        if: always()
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
          REPO=${{ github.repository }}

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg triggered_by "${{ github.event_name }}" \
            '{
              events: [{
                eventName: "github_duplicate_comment_added",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  triggered_by: $triggered_by,
                  workflow_run_id: "${{ github.run_id }}"
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
57 .github/workflows/claude-docs-trigger.yml (vendored)
@@ -1,57 +0,0 @@
name: Trigger Claude Documentation Update

on:
  push:
    branches:
      - next
    paths-ignore:
      - "apps/docs/**"
      - "*.md"
      - ".github/workflows/**"

jobs:
  trigger-docs-update:
    # Only run if changes were merged (not direct pushes from bots)
    if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # Need previous commit for comparison

      - name: Get changed files
        id: changed-files
        run: |
          echo "Changed files in this push:"
          git diff --name-only HEAD^ HEAD | tee changed_files.txt

          # Store changed files for Claude to analyze (escaped for JSON)
          CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
          echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT

          # Get the commit message (escaped for JSON)
          COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
          echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT

          # Get diff for documentation context (escaped for JSON)
          COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
          echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT

          # Get commit SHA
          echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT

      - name: Trigger Claude workflow
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Trigger the Claude docs updater workflow with the change information
          gh workflow run claude-docs-updater.yml \
            --ref next \
            -f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
            -f commit_message=${{ steps.changed-files.outputs.commit_message }} \
            -f changed_files=${{ steps.changed-files.outputs.changed_files }} \
            -f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
145 .github/workflows/claude-docs-updater.yml (vendored)
@@ -1,145 +0,0 @@
name: Claude Documentation Updater
|
|
||||||
|
|
||||||
on:
|
|
||||||
workflow_dispatch:
|
|
||||||
inputs:
|
|
||||||
commit_sha:
|
|
||||||
description: 'The commit SHA that triggered this update'
|
|
||||||
required: true
|
|
||||||
type: string
|
|
||||||
commit_message:
|
|
||||||
description: 'The commit message'
|
|
||||||
required: true
|
|
||||||
type: string
|
|
||||||
changed_files:
|
|
||||||
description: 'List of changed files'
|
|
||||||
required: true
|
|
||||||
type: string
|
|
||||||
commit_diff:
|
|
||||||
description: 'Diff summary of changes'
|
|
||||||
required: true
|
|
||||||
type: string
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
update-docs:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
permissions:
|
|
||||||
contents: write
|
|
||||||
pull-requests: write
|
|
||||||
issues: write
|
|
||||||
steps:
|
|
||||||
- name: Checkout repository
|
|
||||||
uses: actions/checkout@v4
|
|
||||||
with:
|
|
||||||
ref: next
|
|
||||||
fetch-depth: 0 # Need full history to checkout specific commit
|
|
||||||
|
|
||||||
- name: Create docs update branch
|
|
||||||
id: create-branch
|
|
||||||
run: |
|
|
||||||
BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
|
|
||||||
git checkout -b $BRANCH_NAME
|
|
||||||
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
|
|
||||||
|
|
||||||
- name: Run Claude Code to Update Documentation
|
|
||||||
uses: anthropics/claude-code-action@beta
|
|
||||||
with:
|
|
||||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
|
||||||
timeout_minutes: "30"
|
|
||||||
mode: "agent"
|
|
||||||
github_token: ${{ secrets.GITHUB_TOKEN }}
|
|
||||||
experimental_allowed_domains: |
|
|
||||||
.anthropic.com
|
|
||||||
.github.com
|
|
||||||
api.github.com
|
|
||||||
.githubusercontent.com
|
|
||||||
registry.npmjs.org
|
|
||||||
.task-master.dev
|
|
||||||
base_branch: "next"
|
|
||||||
direct_prompt: |
|
|
||||||
You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.
|
|
||||||
|
|
||||||
Recent changes:
|
|
||||||
- Commit: ${{ inputs.commit_message }}
|
|
||||||
- Changed files:
|
|
||||||
${{ inputs.changed_files }}
|
|
||||||
|
|
||||||
- Changes summary:
|
|
||||||
${{ inputs.commit_diff }}
|
|
||||||
|
|
||||||
Your task:
|
|
||||||
1. Analyze the changes to understand what functionality was added, modified, or removed
|
|
          2. Check if these changes require documentation updates in apps/docs/
          3. If documentation updates are needed:
             - Update relevant documentation files in apps/docs/
             - Ensure examples are updated if APIs changed
             - Update any configuration documentation if config options changed
             - Add new documentation pages if new features were added
             - Update the changelog or release notes if applicable
          4. If no documentation updates are needed, skip creating changes

          Guidelines:
          - Focus only on user-facing changes that need documentation
          - Keep documentation clear, concise, and helpful
          - Include code examples where appropriate
          - Maintain consistent documentation style with existing docs
          - Don't document internal implementation details unless they affect users
          - Update navigation/menu files if new pages are added

          Only make changes if the documentation truly needs updating based on the code changes.

      - name: Check if changes were made
        id: check-changes
        run: |
          if git diff --quiet; then
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "has_changes=true" >> $GITHUB_OUTPUT
            git add -A
            git config --local user.email "github-actions[bot]@users.noreply.github.com"
            git config --local user.name "github-actions[bot]"
            git commit -m "docs: auto-update documentation based on changes in next branch

          This PR was automatically generated to update documentation based on recent changes.

          Original commit: ${{ inputs.commit_message }}

          Co-authored-by: Claude <claude-assistant@anthropic.com>"
          fi

      - name: Push changes and create PR
        if: steps.check-changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          git push origin ${{ steps.create-branch.outputs.branch_name }}

          # Create PR using GitHub CLI
          gh pr create \
            --title "docs: update documentation for recent changes" \
            --body "## 📚 Documentation Update

          This PR automatically updates documentation based on recent changes merged to the \`next\` branch.

          ### Original Changes
          **Commit:** ${{ inputs.commit_sha }}
          **Message:** ${{ inputs.commit_message }}

          ### Changed Files in Original Commit
          \`\`\`
          ${{ inputs.changed_files }}
          \`\`\`

          ### Documentation Updates
          This PR includes documentation updates to reflect the changes above. Please review to ensure:
          - [ ] Documentation accurately reflects the changes
          - [ ] Examples are correct and working
          - [ ] No important details are missing
          - [ ] Style is consistent with existing documentation

          ---
          *This PR was automatically generated by Claude Code GitHub Action*" \
            --base next \
            --head ${{ steps.create-branch.outputs.branch_name }} \
            --label "documentation" \
            --label "automated"
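The `check-changes` step above relies on GitHub Actions' step-output mechanism: writing `key=value` lines to the file named by `$GITHUB_OUTPUT` exposes them to later steps as `steps.<id>.outputs.<key>`. A minimal sketch of that pattern, runnable outside Actions by stubbing the output file with a temp file (the stub is an assumption for illustration, not part of the workflow):

```shell
# Sketch of the step-output pattern used by check-changes.
# Outside GitHub Actions, GITHUB_OUTPUT is unset, so stub it with a temp file.
GITHUB_OUTPUT=$(mktemp)

# `git diff --quiet` exits 0 when the working tree is clean, non-zero otherwise
# (including when run outside a repository, which also counts as "changes" here).
if git diff --quiet 2>/dev/null; then
  echo "has_changes=false" >> "$GITHUB_OUTPUT"
else
  echo "has_changes=true" >> "$GITHUB_OUTPUT"
fi

cat "$GITHUB_OUTPUT"
```

Inside a workflow, a later step then gates on `steps.check-changes.outputs.has_changes == 'true'`, exactly as the `Push changes and create PR` step does.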
.github/workflows/claude-issue-triage.yml (107 lines, deleted)
@@ -1,107 +0,0 @@
name: Claude Issue Triage
# description: Automatically triage GitHub issues using Claude Code

on:
  issues:
    types: [opened]

jobs:
  triage-issue:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Create triage prompt
        run: |
          mkdir -p /tmp/claude-prompts
          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

          Issue Information:
          - REPO: ${{ github.repository }}
          - ISSUE_NUMBER: ${{ github.event.issue.number }}

          TASK OVERVIEW:

          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

          2. Next, use the GitHub tools to get context about the issue:
             - You have access to these tools:
               - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
               - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
               - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
               - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
               - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
             - Start by using mcp__github__get_issue to get the issue details

          3. Analyze the issue content, considering:
             - The issue title and description
             - The type of issue (bug report, feature request, question, etc.)
             - Technical areas mentioned
             - Severity or priority indicators
             - User impact
             - Components affected

          4. Select appropriate labels from the available labels list provided above:
             - Choose labels that accurately reflect the issue's nature
             - Be specific but comprehensive
             - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
             - Consider platform labels (android, ios) if applicable
             - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

          5. Apply the selected labels:
             - Use mcp__github__update_issue to apply your selected labels
             - DO NOT post any comments explaining your decision
             - DO NOT communicate directly with users
             - If no labels are clearly applicable, do not apply any labels

          IMPORTANT GUIDELINES:
          - Be thorough in your analysis
          - Only select labels from the provided list above
          - DO NOT post any comments to the issue
          - Your ONLY action should be to apply labels using mcp__github__update_issue
          - It's okay to not add any labels if none are clearly applicable
          EOF

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-base-action@beta
        with:
          prompt_file: /tmp/claude-prompts/triage-prompt.txt
          allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
          timeout_minutes: "5"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          mcp_config: /tmp/mcp-config/mcp-servers.json
          claude_env: |
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/claude.yml (36 lines, deleted)
@@ -1,36 +0,0 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
.github/workflows/extension-ci.yml (140 lines, deleted)
@@ -1,140 +0,0 @@
name: Extension CI

on:
  push:
    branches:
      - main
      - next
    paths:
      - 'apps/extension/**'
      - '.github/workflows/extension-ci.yml'
  pull_request:
    branches:
      - main
      - next
    paths:
      - 'apps/extension/**'
      - '.github/workflows/extension-ci.yml'

permissions:
  contents: read

jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Monorepo Dependencies
        run: npm ci
        timeout-minutes: 5

  typecheck:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Restore node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install if cache miss
        run: npm ci
        timeout-minutes: 3

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

  build:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Restore node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install if cache miss
        run: npm ci
        timeout-minutes: 3

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Verify Package Contents
        working-directory: apps/extension
        run: |
          echo "Checking vsix-build contents..."
          ls -la vsix-build/
          echo "Checking dist contents..."
          ls -la vsix-build/dist/
          echo "Checking package.json exists..."
          test -f vsix-build/package.json

      - name: Create VSIX Package (Test)
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies
        env:
          FORCE_COLOR: 1

      - name: Upload Extension Artifact
        uses: actions/upload-artifact@v4
        with:
          name: extension-package
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 30
.github/workflows/extension-release.yml (110 lines, deleted)
@@ -1,110 +0,0 @@
name: Extension Release

on:
  push:
    tags:
      - "extension@*"

permissions:
  contents: write

concurrency: extension-release-${{ github.ref }}

jobs:
  publish-extension:
    runs-on: ubuntu-latest
    environment: extension-release
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Monorepo Dependencies
        run: npm ci
        timeout-minutes: 5

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Create VSIX Package
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies
        env:
          FORCE_COLOR: 1

      - name: Get VSIX filename
        id: vsix-info
        working-directory: apps/extension/vsix-build
        run: |
          VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
          if [ -z "$VSIX_FILE" ]; then
            echo "Error: No VSIX file found"
            exit 1
          fi
          echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
          echo "Found VSIX: $VSIX_FILE"

      - name: Publish to VS Code Marketplace
        working-directory: apps/extension/vsix-build
        run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}"
        env:
          VSCE_PAT: ${{ secrets.VSCE_PAT }}
          FORCE_COLOR: 1

      - name: Install Open VSX CLI
        run: npm install -g ovsx

      - name: Publish to Open VSX Registry
        working-directory: apps/extension/vsix-build
        run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}"
        env:
          OVSX_PAT: ${{ secrets.OVSX_PAT }}
          FORCE_COLOR: 1

      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: extension-release-${{ github.ref_name }}
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 90

  notify-success:
    needs: publish-extension
    if: success()
    runs-on: ubuntu-latest
    steps:
      - name: Success Notification
        run: |
          echo "🎉 Extension ${{ github.ref_name }} successfully published!"
          echo "📦 Available on VS Code Marketplace"
          echo "🌍 Available on Open VSX Registry"
          echo "🏷️ GitHub release created: ${{ github.ref_name }}"
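The `Get VSIX filename` step above discovers the packaged file with `find` rather than a glob, so an empty directory fails the job loudly instead of passing a literal `*.vsix` to the publish steps. A self-contained sketch of that discovery logic (the throwaway directory and dummy filename are illustrative assumptions, not taken from the workflow):

```shell
# Sketch of the VSIX-discovery logic from the "Get VSIX filename" step.
# Set up a throwaway directory with a dummy package (illustrative only).
DEMO_DIR=$(mktemp -d)
cd "$DEMO_DIR"
touch taskmaster-extension-1.2.3.vsix

# Find the first .vsix in the current directory and strip its leading "./".
VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
if [ -z "$VSIX_FILE" ]; then
  echo "Error: No VSIX file found"
  exit 1
fi
echo "Found VSIX: $VSIX_FILE"
```

In the workflow the result is written to `$GITHUB_OUTPUT` so the two publish steps can reference it as `steps.vsix-info.outputs.vsix-filename`.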
.github/workflows/log-issue-events.yml (176 lines, deleted)
@@ -1,176 +0,0 @@
name: Log GitHub Issue Events

on:
  issues:
    types: [opened, closed]

jobs:
  log-issue-created:
    if: github.event.action == 'opened'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue creation to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          AUTHOR="${{ github.event.issue.user.login }}"
          CREATED_AT="${{ github.event.issue.created_at }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg author "$AUTHOR" \
            --arg created_at "$CREATED_AT" \
            '{
              events: [{
                eventName: "github_issue_created",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  issue_author: $author,
                  created_at: $created_at
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
          else
            echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi

  log-issue-closed:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
      issues: read

    steps:
      - name: Log issue closure to Statsig
        env:
          STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ISSUE_NUMBER=${{ github.event.issue.number }}
          REPO=${{ github.repository }}
          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
          CLOSED_BY="${{ github.event.issue.closed_by.login }}"
          CLOSED_AT="${{ github.event.issue.closed_at }}"
          STATE_REASON="${{ github.event.issue.state_reason }}"

          if [ -z "$STATSIG_API_KEY" ]; then
            echo "STATSIG_API_KEY not found, skipping Statsig logging"
            exit 0
          fi

          # Get additional issue data via GitHub API
          echo "Fetching additional issue data for #${ISSUE_NUMBER}"
          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")

          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')

          # Get reactions data
          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")

          REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')

          # Check if issue was closed automatically (by checking if closed_by is a bot)
          CLOSED_AUTOMATICALLY="false"
          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
            CLOSED_AUTOMATICALLY="true"
          fi

          # Check if closed as duplicate by state_reason
          CLOSED_AS_DUPLICATE="false"
          if [ "$STATE_REASON" = "duplicate" ]; then
            CLOSED_AS_DUPLICATE="true"
          fi

          # Prepare the event payload
          EVENT_PAYLOAD=$(jq -n \
            --arg issue_number "$ISSUE_NUMBER" \
            --arg repo "$REPO" \
            --arg title "$ISSUE_TITLE" \
            --arg closed_by "$CLOSED_BY" \
            --arg closed_at "$CLOSED_AT" \
            --arg state_reason "$STATE_REASON" \
            --arg comments_count "$COMMENTS_COUNT" \
            --arg reactions_count "$REACTIONS_COUNT" \
            --arg closed_automatically "$CLOSED_AUTOMATICALLY" \
            --arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
            '{
              events: [{
                eventName: "github_issue_closed",
                value: 1,
                metadata: {
                  repository: $repo,
                  issue_number: ($issue_number | tonumber),
                  issue_title: $title,
                  closed_by: $closed_by,
                  closed_at: $closed_at,
                  state_reason: $state_reason,
                  comments_count: ($comments_count | tonumber),
                  reactions_count: ($reactions_count | tonumber),
                  closed_automatically: ($closed_automatically | test("true")),
                  closed_as_duplicate: ($closed_as_duplicate | test("true"))
                },
                time: (now | floor | tostring)
              }]
            }')

          # Send to Statsig API
          echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"

          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
            -H "Content-Type: application/json" \
            -H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
            -d "$EVENT_PAYLOAD")

          HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
          BODY=$(echo "$RESPONSE" | head -n-1)

          if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
            echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
            echo "Closed by: $CLOSED_BY"
            echo "Comments: $COMMENTS_COUNT"
            echo "Reactions: $REACTIONS_COUNT"
            echo "Closed automatically: $CLOSED_AUTOMATICALLY"
            echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
          else
            echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
          fi
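Both jobs split the Statsig response with the same trick: `curl -w "\n%{http_code}"` appends the HTTP status as a final line, so `tail -n1` yields the code and `head -n-1` (GNU coreutils) yields the body. A sketch of just that pattern, with a canned two-line string standing in for the curl call (the response text is a made-up placeholder):

```shell
# Sketch of the response-splitting pattern used after the Statsig curl calls.
# A canned two-line string stands in for `curl -s -w "\n%{http_code}" ...`.
RESPONSE=$(printf '{"success":true}\n202')

HTTP_CODE=$(echo "$RESPONSE" | tail -n1)   # last line: the status code
BODY=$(echo "$RESPONSE" | head -n-1)       # everything before it: the body

if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
  echo "Logged OK (HTTP ${HTTP_CODE})"
else
  echo "Failed. HTTP ${HTTP_CODE}: ${BODY}"
fi
```

This keeps the single `curl` invocation while still letting the script branch on the status code without a second request.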
.github/workflows/pre-release.yml (55 lines changed)
@@ -3,13 +3,11 @@ name: Pre-Release (RC)
 on:
   workflow_dispatch: # Allows manual triggering from GitHub UI/API

-concurrency: pre-release-${{ github.ref_name }}
+concurrency: pre-release-${{ github.ref }}

 jobs:
   rc:
     runs-on: ubuntu-latest
-    # Only allow pre-releases on non-main branches
-    if: github.ref != 'refs/heads/main'
-    environment: extension-release
     steps:
       - uses: actions/checkout@v4
         with:
@@ -18,7 +16,7 @@ jobs:
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'

       - name: Cache node_modules
         uses: actions/cache@v4
@@ -34,30 +32,10 @@ jobs:
         run: npm ci
         timeout-minutes: 2

-      - name: Enter RC mode (if not already in RC mode)
+      - name: Enter RC mode
         run: |
-          # Check if we're in pre-release mode with the "rc" tag
-          if [ -f .changeset/pre.json ]; then
-            MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
-            TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
-
-            if [ "$MODE" = "exit" ]; then
-              echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
-              echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
-              npx changeset pre exit
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
-              echo "Already in RC pre-release mode"
-            else
-              echo "Unknown mode state: $MODE, entering RC mode..."
-              npx changeset pre enter rc
-            fi
-          else
-            echo "No pre.json found, entering RC mode..."
-            npx changeset pre enter rc
-          fi
+          npx changeset pre exit || true
+          npx changeset pre enter rc

       - name: Version RC packages
         run: npx changeset version
@@ -65,31 +43,20 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

-      - name: Run format
-        run: npm run format
-        env:
-          FORCE_COLOR: 1
-
-      - name: Build packages
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
-
       - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
         uses: changesets/action@v1
         with:
-          publish: npx changeset publish
+          publish: npm run release
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

+      - name: Exit RC mode
+        run: npx changeset pre exit
+
       - name: Commit & Push changes
         uses: actions-js/push@master
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           branch: ${{ github.ref }}
-          message: "chore: rc version bump"
+          message: 'chore: rc version bump'
.github/workflows/release-check.yml (21 lines, deleted)
@@ -1,21 +0,0 @@
name: Release Check

on:
  pull_request:
    branches:
      - main

concurrency:
  group: release-check-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  check-release-mode:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Check release mode
        run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"
.github/workflows/release.yml (22 lines changed)
@@ -6,11 +6,6 @@ on:

 concurrency: ${{ github.workflow }}-${{ github.ref }}

-permissions:
-  contents: write
-  pull-requests: write
-  id-token: write
-
 jobs:
   release:
     runs-on: ubuntu-latest
@@ -22,7 +17,7 @@ jobs:
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'

       - name: Cache node_modules
         uses: actions/cache@v4
@@ -38,22 +33,13 @@ jobs:
         run: npm ci
         timeout-minutes: 2

-      - name: Check pre-release mode
-        run: node ./.github/scripts/check-pre-release-mode.mjs "main"
+      - name: Exit pre-release mode (safety check)
+        run: npx changeset pre exit || true

-      - name: Build packages
-        run: npm run turbo:build
-        env:
-          NODE_ENV: production
-          FORCE_COLOR: 1
-          TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
-          TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
-          TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
-
       - name: Create Release Pull Request or Publish to npm
         uses: changesets/action@v1
         with:
-          publish: node ./.github/scripts/release.mjs
+          publish: npm run release
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
.github/workflows/weekly-metrics-discord.yml (108 lines, deleted)
@@ -1,108 +0,0 @@
name: Weekly Metrics to Discord
# description: Sends weekly metrics summary to Discord channel

on:
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9 AM
  workflow_dispatch:

permissions:
  contents: read
  issues: read
  pull-requests: read

jobs:
  weekly-metrics:
    runs-on: ubuntu-latest
    env:
      DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Get dates for last 14 days
        run: |
          set -Eeuo pipefail
          # Last 14 days
          first_day=$(date -d "14 days ago" +%Y-%m-%d)
          last_day=$(date +%Y-%m-%d)

          echo "first_day=$first_day" >> $GITHUB_ENV
          echo "last_day=$last_day" >> $GITHUB_ENV
          echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
          echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV

      - name: Generate issue metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
          HIDE_TIME_TO_ANSWER: true
          HIDE_LABEL_METRICS: false
          OUTPUT_FILE: issue_metrics.md

      - name: Generate PR created metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_created_metrics.md

      - name: Generate PR merged metrics
        uses: github/issue-metrics@v3
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
          OUTPUT_FILE: pr_merged_metrics.md

      - name: Debug generated metrics
        run: |
          set -Eeuo pipefail
          echo "Listing markdown files in workspace:"
          ls -la *.md || true
          for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
            if [ -f "$f" ]; then
              echo "== $f (first 10 lines) =="
              head -n 10 "$f"
            else
              echo "Missing $f"
            fi
          done

      - name: Parse metrics
        id: metrics
        run: node .github/scripts/parse-metrics.mjs

      - name: Send to Discord
        uses: sarisia/actions-status-discord@v1
        if: env.DISCORD_WEBHOOK != ''
        with:
          webhook: ${{ env.DISCORD_WEBHOOK }}
          status: Success
          title: "📊 Weekly Metrics Report"
          description: |
            **${{ env.week_of }}**
            *${{ env.date_range }}*

            **🎯 Issues**
            • Created: ${{ steps.metrics.outputs.issues_created }}
            • Closed: ${{ steps.metrics.outputs.issues_closed }}
            • Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
            • Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}

            **🔀 Pull Requests**
            • Created: ${{ steps.metrics.outputs.prs_created }}
            • Merged: ${{ steps.metrics.outputs.prs_merged }}
            • Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
            • Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}

            **📈 Visual Analytics**
            https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
|
|
||||||
color: 0x58AFFF
|
|
||||||
username: Task Master Metrics Bot
|
|
||||||
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
|
|
||||||
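The `Parse metrics` step above delegates to `.github/scripts/parse-metrics.mjs`, which is not included in this diff. As a hypothetical sketch only: such a script would read the markdown reports that `github/issue-metrics` writes and republish selected values as step outputs via `$GITHUB_OUTPUT`, so the Discord step can reference them as `steps.metrics.outputs.*`. The output names and table labels below are assumptions, not the actual script.

```javascript
// Hypothetical sketch -- the real parse-metrics.mjs is not shown in this diff.
// github/issue-metrics writes markdown tables with rows like:
//   | Average time to first response | 3:12:45 |
// This sketch pulls values out of such rows and exposes them as step outputs
// by appending `key=value` lines to the file named in $GITHUB_OUTPUT.
import { readFileSync, appendFileSync, existsSync } from "node:fs";

// Return the value cell of the table row whose label cell matches `label`,
// or "N/A" when the row is missing.
function extractMetric(markdown, label) {
  const row = markdown.split("\n").find((line) => line.includes(`| ${label} |`));
  if (!row) return "N/A";
  const cells = row.split("|").map((c) => c.trim()).filter(Boolean);
  return cells[1] ?? "N/A";
}

// Append a step output; no-op when run outside the Actions runner.
function setOutput(key, value) {
  if (process.env.GITHUB_OUTPUT) {
    appendFileSync(process.env.GITHUB_OUTPUT, `${key}=${value}\n`);
  }
}

if (existsSync("issue_metrics.md")) {
  const md = readFileSync("issue_metrics.md", "utf8");
  // Output names are assumptions chosen to line up with the workflow's
  // `steps.metrics.outputs.*` references.
  setOutput("issue_avg_first_response", extractMetric(md, "Average time to first response"));
  setOutput("issue_avg_time_to_close", extractMetric(md, "Average time to close"));
}
```

The `if (env.DISCORD_WEBHOOK != '')` guard on the Discord step means a missing `DISCORD_METRICS_WEBHOOK` secret skips the notification rather than failing the job, so a sketch like this only needs to emit outputs, not deliver anything itself.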
10 .gitignore vendored
@@ -87,13 +87,3 @@ dev-debug.log
 *.njsproj
 *.sln
 *.sw?
-
-# VS Code extension test files
-.vscode-test/
-apps/extension/.vscode-test/
-
-# apps/extension
-apps/extension/vsix-build/
-
-# turbo
-.turbo
@@ -1,23 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Code Change Task Tracker",
  "description": "Track implementation progress by monitoring code changes",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/*.{js,ts,jsx,tsx,py,go,rs,java,cpp,c,h,hpp,cs,rb,php,swift,kt,scala,clj}",
      "!**/node_modules/**",
      "!**/vendor/**",
      "!**/.git/**",
      "!**/build/**",
      "!**/dist/**",
      "!**/target/**",
      "!**/__pycache__/**"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "I just saved a source code file. Please:\n\n1. Check what task is currently 'in-progress' using 'tm list --status=in-progress'\n2. Look at the file I saved and summarize what was changed (considering the programming language and context)\n3. Update the task's notes with: 'tm update-subtask --id=<task_id> --prompt=\"Implemented: <summary_of_changes> in <file_path>\"'\n4. If the changes seem to complete the task based on its description, ask if I want to mark it as done"
  }
}
@@ -1,16 +0,0 @@
{
  "enabled": false,
  "name": "[TM] Complexity Analyzer",
  "description": "Analyze task complexity when new tasks are added",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      ".taskmaster/tasks/tasks.json"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "New tasks were added to tasks.json. For each new task:\n\n1. Run 'tm analyze-complexity --id=<task_id>'\n2. If complexity score is > 7, automatically expand it: 'tm expand --id=<task_id> --num=5'\n3. Show the complexity analysis results\n4. Suggest task dependencies based on the expanded subtasks"
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Daily Standup Assistant",
  "description": "Morning workflow summary and task selection",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Good morning! Please provide my daily standup summary:\n\n1. Run 'tm list --status=done' and show tasks completed in the last 24 hours\n2. Run 'tm list --status=in-progress' to show current work\n3. Run 'tm next' to suggest the highest priority task to start\n4. Show the dependency graph for upcoming work\n5. Ask which task I'd like to focus on today"
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Git Commit Task Linker",
  "description": "Link commits to tasks for traceability",
  "version": "1",
  "when": {
    "type": "manual"
  },
  "then": {
    "type": "askAgent",
    "prompt": "I'm about to commit code. Please:\n\n1. Run 'git diff --staged' to see what's being committed\n2. Analyze the changes and suggest which tasks they relate to\n3. Generate a commit message in format: 'feat(task-<id>): <description>'\n4. Update the relevant tasks with a note about this commit\n5. Show the proposed commit message for approval"
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "[TM] PR Readiness Checker",
  "description": "Validate tasks before creating a pull request",
  "version": "1",
  "when": {
    "type": "manual"
  },
  "then": {
    "type": "askAgent",
    "prompt": "I'm about to create a PR. Please:\n\n1. List all tasks marked as 'done' in this branch\n2. For each done task, verify:\n - All subtasks are also done\n - Test files exist for new functionality\n - No TODO comments remain related to the task\n3. Generate a PR description listing completed tasks\n4. Suggest a PR title based on the main tasks completed"
  }
}
@@ -1,17 +0,0 @@
{
  "enabled": true,
  "name": "[TM] Task Dependency Auto-Progression",
  "description": "Automatically progress tasks when dependencies are completed",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      ".taskmaster/tasks/tasks.json",
      ".taskmaster/tasks/*.json"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Check the tasks.json file for any tasks that just changed status to 'done'. For each completed task:\n\n1. Find all tasks that depend on it\n2. Check if those dependent tasks now have all their dependencies satisfied\n3. If a task has all dependencies met and is still 'pending', use the command 'tm set-status --id=<task_id> --status=in-progress' to start it\n4. Show me which tasks were auto-started and why"
  }
}
Some files were not shown because too many files have changed in this diff.