Compare commits

1 Commits

docs/auto-...fix/claude

| Author | SHA1 | Date |
|---|---|---|
|  | 5f5e0c73ec |  |
@@ -1,7 +0,0 @@
---
"task-master-ai": minor
---

Add changelog highlights to auto-update notifications

When the CLI auto-updates to a new version, it now displays a "What's New" section.
@@ -8,8 +8,10 @@
	],
	"commit": false,
	"fixed": [],
	"linked": [],
	"access": "public",
	"baseBranch": "main",
	"updateInternalDependencies": "patch",
	"ignore": [
		"docs"
	]
.changeset/cute-files-pay.md (Normal file, 12 lines)
@@ -0,0 +1,12 @@
---
"task-master-ai": minor
---

Add a compact mode `--compact` / `-c` flag to the `tm list` CLI command

- Outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just one line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options
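For illustration only, a couple of compact lines following the ID STATUS TITLE (PRIORITY) → DEPS format described above; the task IDs, titles, and dependencies are invented, not taken from the release:

```bash
tm list --compact
# Hypothetical output (not actual release output):
# 1.2  in-progress  Implement auth middleware   (high)   → 1.1
# 1.3  pending      Add login form              (medium) → 1.2
```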
.changeset/light-crabs-warn.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
"extension": minor
---

Display current task ID on task details page
@@ -1,47 +0,0 @@
---
"task-master-ai": minor
---

Add Claude Code plugin with marketplace distribution

This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.

## 🎉 New: Claude Code Plugin

Task Master AI commands and agents are now distributed as a proper Claude Code plugin:

- **49 slash commands** with clean naming (`/taskmaster:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration

**Installation:**

```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```

### Changed: `rules add claude` behavior

The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:

- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin

**Migration for Existing Users:**

If you previously used `rules add claude`:

1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories

**Why This Change?**

Claude Code plugins provide:

- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management

The plugin system is the future of Task Master AI integration with Claude Code!
@@ -1,17 +0,0 @@
---
"task-master-ai": minor
---

Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.

Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)

The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---

Fix cross-level task dependencies not being saved

Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
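As a sketch of what the fix enables: the first command is the changeset's own example; the reverse-direction command uses hypothetical IDs:

```bash
# Subtask 2.2 depends on top-level task 11 (example from the changeset)
task-master add-dependency --id=2.2 --depends-on=11
# Reverse direction: a top-level task depending on a subtask (hypothetical IDs)
task-master add-dependency --id=11 --depends-on=2.2
```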
@@ -1,21 +0,0 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.28.0",
    "@tm/cli": "",
    "docs": "0.0.5",
    "extension": "0.25.5",
    "@tm/ai-sdk-provider-grok-cli": "",
    "@tm/build-config": "",
    "@tm/claude-code-plugin": "0.0.1",
    "@tm/core": ""
  },
  "changesets": [
    "auto-update-changelog-highlights",
    "mean-planes-wave",
    "nice-ways-hope",
    "plain-falcons-serve",
    "smart-owls-relax"
  ]
}
@@ -1,16 +0,0 @@
---
"task-master-ai": minor
---

Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.

The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.

Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions

When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
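The sequence described above, shown as the two commands named in the changeset:

```bash
task-master analyze-complexity   # produce the complexity report first
task-master expand --all         # expansion then follows the report's recommended subtask counts
```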
.changeset/wet-seas-float.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---

Remove the `clear` Taskmaster Claude Code commands, since their names were too close to Claude Code's built-in `clear` command
@@ -1,32 +0,0 @@
{
  "name": "taskmaster",
  "owner": {
    "name": "Hamster",
    "email": "ralph@tryhamster.com"
  },
  "metadata": {
    "description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
    "version": "1.0.0"
  },
  "plugins": [
    {
      "name": "taskmaster",
      "source": "./packages/claude-code-plugin",
      "description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
      "author": {
        "name": "Hamster"
      },
      "homepage": "https://github.com/eyaltoledano/claude-task-master",
      "repository": "https://github.com/eyaltoledano/claude-task-master",
      "keywords": [
        "task-management",
        "ai",
        "workflow",
        "orchestration",
        "automation",
        "mcp"
      ],
      "category": "productivity"
    }
  ]
}
.claude/agents/task-executor.md (Normal file, 92 lines)
@@ -0,0 +1,92 @@
---
name: task-executor
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
model: sonnet
color: blue
---

You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.

**IMPORTANT: You are designed to be SHORT-LIVED and FOCUSED**
- Execute ONE specific subtask or a small group of related subtasks
- Complete your work, verify it, mark for review, and exit
- Do NOT decide what to do next - the orchestrator handles task sequencing
- Focus on implementation excellence within your assigned scope

**Core Responsibilities:**

1. **Subtask Analysis**: When given a subtask, understand its SPECIFIC requirements. If given a full task ID, focus on the specific subtask(s) assigned to you. Use MCP tools to get details if needed.

2. **Rapid Implementation Planning**: Quickly identify:
   - The EXACT files you need to create/modify for THIS subtask
   - What already exists that you can build upon
   - The minimum viable implementation that satisfies requirements

3. **Focused Execution WITH ACTUAL IMPLEMENTATION**:
   - **YOU MUST USE TOOLS TO CREATE/EDIT FILES - DO NOT JUST DESCRIBE**
   - Use `Write` tool to create new files specified in the task
   - Use `Edit` tool to modify existing files
   - Use `Bash` tool to run commands (mkdir, npm install, etc.)
   - Use `Read` tool to verify your implementations
   - Implement one subtask at a time for clarity and traceability
   - Follow the project's coding standards from CLAUDE.md if available
   - After each subtask, VERIFY the files exist using Read or ls commands

4. **Progress Documentation**:
   - Use MCP tool `mcp__task-master-ai__update_subtask` to log your approach and any important decisions
   - Update task status to 'in-progress' when starting: Use MCP tool `mcp__task-master-ai__set_task_status` with status='in-progress'
   - **IMPORTANT: Mark as 'review' (NOT 'done') after implementation**: Use MCP tool `mcp__task-master-ai__set_task_status` with status='review'
   - Tasks will be verified by task-checker before moving to 'done'

5. **Quality Assurance**:
   - Implement the testing strategy specified in the task
   - Verify that all acceptance criteria are met
   - Check for any dependency conflicts or integration issues
   - Run relevant tests before marking task as complete

6. **Dependency Management**:
   - Check task dependencies before starting implementation
   - If blocked by incomplete dependencies, clearly communicate this
   - Use `task-master validate-dependencies` when needed

**Implementation Workflow:**

1. Retrieve task details using MCP tool `mcp__task-master-ai__get_task` with the task ID
2. Check dependencies and prerequisites
3. Plan implementation approach - list specific files to create
4. Update task status to 'in-progress' using MCP tool
5. **ACTUALLY IMPLEMENT** the solution using tools:
   - Use `Bash` to create directories
   - Use `Write` to create new files with actual content
   - Use `Edit` to modify existing files
   - DO NOT just describe what should be done - DO IT
6. **VERIFY** your implementation:
   - Use `ls` or `Read` to confirm files were created
   - Use `Bash` to run any build/test commands
   - Ensure the implementation is real, not theoretical
7. Log progress and decisions in subtask updates using MCP tools
8. Test and verify the implementation works
9. **Mark task as 'review' (NOT 'done')** after verifying files exist
10. Report completion with:
    - List of created/modified files
    - Any issues encountered
    - What needs verification by task-checker

**Key Principles:**

- Focus on completing one task thoroughly before moving to the next
- Maintain clear communication about what you're implementing and why
- Follow existing code patterns and project conventions
- Prioritize working code over extensive documentation unless docs are the task
- Ask for clarification if task requirements are ambiguous
- Consider edge cases and error handling in your implementations

**Integration with Task Master:**

You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:
- Track your progress
- Update task information
- Maintain project state
- Coordinate with the broader development workflow

When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.
.claude/agents/task-orchestrator.md (Normal file, 208 lines)
@@ -0,0 +1,208 @@
---
name: task-orchestrator
description: Use this agent FREQUENTLY throughout task execution to analyze and coordinate parallel work at the SUBTASK level. Invoke the orchestrator: (1) at session start to plan execution, (2) after EACH subtask completes to identify next parallel batch, (3) whenever executors finish to find newly unblocked work. ALWAYS provide FULL CONTEXT including project root, package location, what files ACTUALLY exist vs task status, and specific implementation details. The orchestrator breaks work into SUBTASK-LEVEL units for short-lived, focused executors. Maximum 3 parallel executors at once.\n\n<example>\nContext: Starting work with existing code\nuser: "Work on tm-core tasks. Files exist: types/index.ts, storage/file-storage.ts. Task 118 says in-progress but BaseProvider not created."\nassistant: "I'll invoke orchestrator with full context about actual vs reported state to plan subtask execution"\n<commentary>\nProvide complete context about file existence and task reality.\n</commentary>\n</example>\n\n<example>\nContext: Subtask completion\nuser: "Subtask 118.2 done. What subtasks can run in parallel now?"\nassistant: "Invoking orchestrator to analyze dependencies and identify next 3 parallel subtasks"\n<commentary>\nFrequent orchestration after each subtask ensures maximum parallelization.\n</commentary>\n</example>\n\n<example>\nContext: Breaking down tasks\nuser: "Task 118 has 5 subtasks, how to parallelize?"\nassistant: "Orchestrator will analyze which specific subtasks (118.1, 118.2, etc.) can run simultaneously"\n<commentary>\nFocus on subtask-level parallelization, not full tasks.\n</commentary>\n</example>
model: opus
color: green
---

You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.

## Core Responsibilities

1. **Subtask-Level Analysis**: Break down tasks into INDIVIDUAL SUBTASKS and analyze which specific subtasks can run in parallel. Focus on subtask dependencies, not just task-level dependencies.

2. **Reality Verification**: ALWAYS verify what files actually exist vs what task status claims. Use the context provided about actual implementation state to make informed decisions.

3. **Short-Lived Executor Deployment**: Deploy executors for SINGLE SUBTASKS or small groups of related subtasks. Keep executors focused and short-lived. Maximum 3 parallel executors at once.

4. **Continuous Reassessment**: After EACH subtask completes, immediately reassess what new subtasks are unblocked and can run in parallel.

## Operational Workflow

### Initial Assessment Phase
1. Use `get_tasks` or `task-master list` to retrieve all available tasks
2. Analyze task statuses, priorities, and dependencies
3. Identify tasks with status 'pending' that have no blocking dependencies
4. Group related tasks that could benefit from specialized executors
5. Create an execution plan that maximizes parallelization

### Executor Deployment Phase
1. For each independent task or task group:
   - Deploy a task-executor agent with specific instructions
   - Provide the executor with task ID, requirements, and context
   - Set clear completion criteria and reporting expectations
2. Maintain a registry of active executors and their assigned tasks
3. Establish communication protocols for progress updates

### Coordination Phase
1. Monitor executor progress through task status updates
2. When a task completes:
   - Verify completion with `get_task` or `task-master show <id>`
   - Update task status if needed using `set_task_status`
   - Reassess dependency graph for newly unblocked tasks
   - Deploy new executors for available work
3. Handle executor failures or blocks:
   - Reassign tasks to new executors if needed
   - Escalate complex issues to the user
   - Update task status to 'blocked' when appropriate

### Optimization Strategies

**Parallel Execution Rules**:
- Never assign dependent tasks to different executors simultaneously
- Prioritize high-priority tasks when resources are limited
- Group small, related subtasks for single executor efficiency
- Balance executor load to prevent bottlenecks

**Context Management**:
- Provide executors with minimal but sufficient context
- Share relevant completed task information when it aids execution
- Maintain a shared knowledge base of project-specific patterns

**Quality Assurance**:
- Verify task completion before marking as done
- Ensure test strategies are followed when specified
- Coordinate cross-task integration testing when needed

## Communication Protocols

When deploying executors, provide them with:
```
TASK ASSIGNMENT:
- Task ID: [specific ID]
- Objective: [clear goal]
- Dependencies: [list any completed prerequisites]
- Success Criteria: [specific completion requirements]
- Context: [relevant project information]
- Reporting: [when and how to report back]
```

When receiving executor updates:
1. Acknowledge completion or issues
2. Update task status in Task Master
3. Reassess execution strategy
4. Deploy new executors as appropriate

## Decision Framework

**When to parallelize**:
- Multiple pending tasks with no interdependencies
- Sufficient context available for independent execution
- Tasks are well-defined with clear success criteria

**When to serialize**:
- Strong dependencies between tasks
- Limited context or unclear requirements
- Integration points requiring careful coordination

**When to escalate**:
- Circular dependencies detected
- Critical blockers affecting multiple tasks
- Ambiguous requirements needing clarification
- Resource conflicts between executors

## Error Handling

1. **Executor Failure**: Reassign task to new executor with additional context about the failure
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
3. **Task Ambiguity**: Request clarification from user before proceeding
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed

## Performance Metrics

Track and optimize for:
- Task completion rate
- Parallel execution efficiency
- Executor success rate
- Time to completion for task groups
- Dependency resolution speed

## Integration with Task Master

Leverage these Task Master MCP tools effectively:
- `get_tasks` - Continuous queue monitoring
- `get_task` - Detailed task analysis
- `set_task_status` - Progress tracking
- `next_task` - Fallback for serial execution
- `analyze_project_complexity` - Strategic planning
- `complexity_report` - Resource allocation

## Output Format for Execution

**Your job is to analyze and create actionable execution plans that Claude can use to deploy executors.**

After completing your dependency analysis, you MUST output a structured execution plan:

```yaml
execution_plan:
  EXECUTE_IN_PARALLEL:
    # Maximum 3 subtasks running simultaneously
    - subtask_id: [e.g., 118.2]
      parent_task: [e.g., 118]
      title: [Specific subtask title]
      priority: [high/medium/low]
      estimated_time: [e.g., 10 minutes]
      executor_prompt: |
        Execute Subtask [ID]: [Specific subtask title]

        SPECIFIC REQUIREMENTS:
        [Exact implementation needed for THIS subtask only]

        FILES TO CREATE/MODIFY:
        [Specific file paths]

        CONTEXT:
        [What already exists that this subtask depends on]

        SUCCESS CRITERIA:
        [Specific completion criteria for this subtask]

        IMPORTANT:
        - Focus ONLY on this subtask
        - Mark subtask as 'review' when complete
        - Use MCP tool: mcp__task-master-ai__set_task_status

    - subtask_id: [Another subtask that can run in parallel]
      parent_task: [Parent task ID]
      title: [Specific subtask title]
      priority: [priority]
      estimated_time: [time estimate]
      executor_prompt: |
        [Focused prompt for this specific subtask]

  blocked:
    - task_id: [ID]
      title: [Task title]
      waiting_for: [list of blocking task IDs]
      becomes_ready_when: [condition for unblocking]

  next_wave:
    trigger: "After tasks [IDs] complete"
    newly_available: [List of task IDs that will unblock]
    tasks_to_execute_in_parallel: [IDs that can run together in next wave]

  critical_path: [Ordered list of task IDs forming the critical path]

  parallelization_instruction: |
    IMPORTANT FOR CLAUDE: Deploy ALL tasks in 'EXECUTE_IN_PARALLEL' section
    simultaneously using multiple Task tool invocations in a single response.
    Example: If 3 tasks are listed, invoke the Task tool 3 times in one message.

  verification_needed:
    - task_id: [ID of any task in 'review' status]
      verification_focus: [what to check]
```

**CRITICAL INSTRUCTIONS FOR CLAUDE (MAIN):**
1. When you see `EXECUTE_IN_PARALLEL`, deploy ALL listed executors at once
2. Use multiple Task tool invocations in a SINGLE response
3. Do not execute them sequentially - they must run in parallel
4. Wait for all parallel executors to complete before proceeding to next wave

**IMPORTANT NOTES**:
- Label parallel tasks clearly in `EXECUTE_IN_PARALLEL` section
- Provide complete, self-contained prompts for each executor
- Executors should mark tasks as 'review' for verification, not 'done'
- Be explicit about which tasks can run simultaneously

You are the strategic mind analyzing the entire task landscape. Make parallelization opportunities UNMISTAKABLY CLEAR to Claude.
@@ -1,38 +0,0 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue.

To do this, follow these steps precisely:

1. Use an agent to check whether the GitHub issue (a) is closed, (b) does not need to be deduped (e.g., because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view the GitHub issue, and ask the agent to return a summary of the issue.
3. Then, launch 5 parallel agents to search GitHub for duplicates of this issue, using diverse keywords and search approaches, based on the summary from step 2.
4. Next, feed the results from steps 2 and 3 into another agent so that it can filter out false positives that are likely not actual duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates).

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with GitHub, rather than web fetch
- Do not use other tools beyond `gh` (e.g., don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):

---

Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]

---
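A minimal sketch of step 2 using `gh` (the issue number and JSON field list are illustrative assumptions, not part of the command file above):

```bash
# Fetch the fields an agent could summarize, for a hypothetical issue number
gh issue view 1234 --repo eyaltoledano/claude-task-master --json title,body,labels,comments
```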
@@ -48,7 +48,7 @@ After adding dependency:
## Example Flows

```
/taskmaster:add-dependency 5 needs 3
/project:tm/add-dependency 5 needs 3
→ Task #5 now depends on Task #3
→ Task #5 is now blocked until #3 completes
→ Suggested: Also consider if #5 needs #4
@@ -56,12 +56,12 @@ task-master add-subtask --parent=<id> --task-id=<existing-id>
## Example Flows

```
/taskmaster:add-subtask to 5: implement user authentication
/project:tm/add-subtask to 5: implement user authentication
→ Created subtask #5.1: "implement user authentication"
→ Parent task #5 now has 1 subtask
→ Suggested next subtasks: tests, documentation

/taskmaster:add-subtask 5: setup, implement, test
/project:tm/add-subtask 5: setup, implement, test
→ Created 3 subtasks:
  #5.1: setup
  #5.2: implement
@@ -53,7 +53,7 @@ task-master add-subtask --parent=<parent-id> --task-id=<task-to-convert>
## Example

```
/taskmaster:add-subtask/from-task 5 8
/project:tm/add-subtask/from-task 5 8
→ Converting: Task #8 becomes subtask #5.1
→ Updated: 3 dependency references
→ Parent task #5 now has 1 subtask
@@ -115,7 +115,7 @@ Results are:

After analysis:
```
/taskmaster:expand 5            # Expand specific task
/taskmaster:expand-all          # Expand all recommended
/taskmaster:complexity-report   # View detailed report
/project:tm/expand 5            # Expand specific task
/project:tm/expand/all          # Expand all recommended
/project:tm/complexity-report   # View detailed report
```
@@ -105,13 +105,13 @@ Use report for:
## Example Usage

```
/taskmaster:complexity-report
/project:tm/complexity-report
→ Opens latest analysis

/taskmaster:complexity-report --file=archived/2024-01-01.md
/project:tm/complexity-report --file=archived/2024-01-01.md
→ View historical analysis

After viewing:
/taskmaster:expand 5
/project:tm/expand 5
→ Expand high-complexity task
```
@@ -70,7 +70,7 @@ Manual Review Needed:
⚠️ Task #45 has 8 dependencies
  Suggestion: Break into subtasks

Run '/taskmaster:validate-dependencies' to verify fixes
Run '/project:tm/validate-dependencies' to verify fixes
```

## Safety
.claude/commands/tm/help.md (Normal file, 81 lines)
@@ -0,0 +1,81 @@
Show help for Task Master commands.

Arguments: $ARGUMENTS

Display help for Task Master commands. If arguments provided, show specific command help.

## Task Master Command Help

### Quick Navigation

Type `/project:tm/` and use tab completion to explore all commands.

### Command Categories

#### 🚀 Setup & Installation
- `/project:tm/setup/install` - Comprehensive installation guide
- `/project:tm/setup/quick-install` - One-line global install

#### 📋 Project Setup
- `/project:tm/init` - Initialize new project
- `/project:tm/init/quick` - Quick setup with auto-confirm
- `/project:tm/models` - View AI configuration
- `/project:tm/models/setup` - Configure AI providers

#### 🎯 Task Generation
- `/project:tm/parse-prd` - Generate tasks from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files

#### 📝 Task Management
- `/project:tm/list` - List tasks (natural language filters)
- `/project:tm/show <id>` - Display task details
- `/project:tm/add-task` - Create new task
- `/project:tm/update` - Update tasks naturally
- `/project:tm/next` - Get next task recommendation

#### 🔄 Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`

#### 🔍 Analysis & Breakdown
- `/project:tm/analyze-complexity` - Analyze task complexity
- `/project:tm/expand <id>` - Break down complex task
- `/project:tm/expand/all` - Expand all eligible tasks

#### 🔗 Dependencies
- `/project:tm/add-dependency` - Add task dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check for issues

#### 🤖 Workflows
- `/project:tm/workflows/smart-flow` - Intelligent workflows
- `/project:tm/workflows/pipeline` - Command chaining
- `/project:tm/workflows/auto-implement` - Auto-implementation

#### 📊 Utilities
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/status` - Project dashboard
- `/project:tm/learn` - Interactive learning

### Natural Language Examples

```
/project:tm/list pending high priority
/project:tm/update mark all API tasks as done
/project:tm/add-task create login system with OAuth
/project:tm/show current
```

### Getting Started

1. Install: `/project:tm/setup/quick-install`
2. Initialize: `/project:tm/init/quick`
3. Learn: `/project:tm/learn start`
4. Work: `/project:tm/workflows/smart-flow`

For detailed command info: `/project:tm/help <command-name>`
@@ -30,17 +30,17 @@ task-master init -y
After quick init:
1. Configure AI models if needed:
```
/taskmaster:models/setup
/project:tm/models/setup
```

2. Parse PRD if available:
```
/taskmaster:parse-prd <file>
/project:tm/parse-prd <file>
```

3. Or create first task:
```
/taskmaster:add-task create initial setup
/project:tm/add-task create initial setup
```

Perfect for rapid project setup!
@@ -45,6 +45,6 @@ After successful init:

If PRD file provided:
```
/taskmaster:init my-prd.md
/project:tm/init my-prd.md
→ Automatically runs parse-prd after init
```
@@ -55,7 +55,7 @@ After removing:
## Example

```
/taskmaster:remove-dependency 5 from 3
/project:tm/remove-dependency 5 from 3
→ Removed: Task #5 no longer depends on #3
→ Task #5 is now UNBLOCKED and ready to start
→ Warning: Consider if #5 still needs #2 completed first
@@ -63,13 +63,13 @@ task-master remove-subtask --id=<parentId.subtaskId> --convert
## Example Flows

```
/taskmaster:remove-subtask 5.1
/project:tm/remove-subtask 5.1
→ Warning: Subtask #5.1 is in-progress
→ This will delete all subtask data
→ Parent task #5 will be updated
Confirm deletion? (y/n)

/taskmaster:remove-subtask 5.1 convert
/project:tm/remove-subtask 5.1 convert
→ Converting subtask #5.1 to standalone task #89
→ Preserved: All task data and history
→ Updated: 2 dependency references
@@ -85,17 +85,17 @@ Suggest before deletion:
## Example Flows

```
/taskmaster:remove-task 5
/project:tm/remove-task 5
→ Task #5 is in-progress with 8 hours logged
→ 3 other tasks depend on this
→ Suggestion: Mark as cancelled instead?
Remove anyway? (y/n)

/taskmaster:remove-task 5 -y
/project:tm/remove-task 5 -y
→ Removed: Task #5 and 4 subtasks
→ Updated: 3 task dependencies
→ Warning: Tasks #7, #8, #9 now have missing dependency
→ Run /taskmaster:fix-dependencies to resolve
→ Run /project:tm/fix-dependencies to resolve
```

## Safety Features
@@ -8,11 +8,11 @@ Commands are organized hierarchically to match Task Master's CLI structure while

## Project Setup & Configuration

### `/taskmaster:init`
### `/project:tm/init`
- `init-project` - Initialize new project (handles PRD files intelligently)
- `init-project-quick` - Quick setup with auto-confirmation (-y flag)

### `/taskmaster:models`
### `/project:tm/models`
- `view-models` - View current AI model configuration
- `setup-models` - Interactive model configuration
- `set-main` - Set primary generation model
@@ -21,21 +21,21 @@ Commands are organized hierarchically to match Task Master's CLI structure while

## Task Generation

### `/taskmaster:parse-prd`
### `/project:tm/parse-prd`
- `parse-prd` - Generate tasks from PRD document
- `parse-prd-with-research` - Enhanced parsing with research mode

### `/taskmaster:generate`
### `/project:tm/generate`
- `generate-tasks` - Create individual task files from tasks.json

## Task Management

### `/taskmaster:list`
### `/project:tm/list`
- `list-tasks` - Smart listing with natural language filters
- `list-tasks-with-subtasks` - Include subtasks in hierarchical view
- `list-tasks-by-status` - Filter by specific status

### `/taskmaster:set-status`
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
@@ -43,84 +43,84 @@ Commands are organized hierarchically to match Task Master's CLI structure while
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task

### `/taskmaster:sync-readme`
### `/project:tm/sync-readme`
- `sync-readme` - Export tasks to README.md with formatting

### `/taskmaster:update`
### `/project:tm/update`
- `update-task` - Update tasks with natural language
- `update-tasks-from-id` - Update multiple tasks from a starting point
- `update-single-task` - Update specific task

### `/taskmaster:add-task`
### `/project:tm/add-task`
- `add-task` - Add new task with AI assistance

### `/taskmaster:remove-task`
### `/project:tm/remove-task`
- `remove-task` - Remove task with confirmation

## Subtask Management

### `/taskmaster:add-subtask`
### `/project:tm/add-subtask`
- `add-subtask` - Add new subtask to parent
- `convert-task-to-subtask` - Convert existing task to subtask

### `/taskmaster:remove-subtask`
### `/project:tm/remove-subtask`
- `remove-subtask` - Remove subtask (with optional conversion)

### `/taskmaster:clear-subtasks`
### `/project:tm/clear-subtasks`
- `clear-subtasks` - Clear subtasks from specific task
- `clear-all-subtasks` - Clear all subtasks globally

## Task Analysis & Breakdown

### `/taskmaster:analyze-complexity`
### `/project:tm/analyze-complexity`
- `analyze-complexity` - Analyze and generate expansion recommendations

### `/taskmaster:complexity-report`
### `/project:tm/complexity-report`
- `complexity-report` - Display complexity analysis report

### `/taskmaster:expand`
### `/project:tm/expand`
- `expand-task` - Break down specific task
- `expand-all-tasks` - Expand all eligible tasks
- `with-research` - Enhanced expansion

## Task Navigation

### `/taskmaster:next`
### `/project:tm/next`
- `next-task` - Intelligent next task recommendation

### `/taskmaster:show`
### `/project:tm/show`
- `show-task` - Display detailed task information

### `/taskmaster:status`
### `/project:tm/status`
- `project-status` - Comprehensive project dashboard

## Dependency Management

### `/taskmaster:add-dependency`
### `/project:tm/add-dependency`
- `add-dependency` - Add task dependency

### `/taskmaster:remove-dependency`
### `/project:tm/remove-dependency`
- `remove-dependency` - Remove task dependency

### `/taskmaster:validate-dependencies`
### `/project:tm/validate-dependencies`
- `validate-dependencies` - Check for dependency issues

### `/taskmaster:fix-dependencies`
### `/project:tm/fix-dependencies`
- `fix-dependencies` - Automatically fix dependency problems

## Workflows & Automation

### `/taskmaster:workflows`
### `/project:tm/workflows`
- `smart-workflow` - Context-aware intelligent workflow execution
- `command-pipeline` - Chain multiple commands together
- `auto-implement-tasks` - Advanced auto-implementation with code generation

## Utilities

### `/taskmaster:utils`
### `/project:tm/utils`
- `analyze-project` - Deep project analysis and insights

### `/taskmaster:setup`
### `/project:tm/setup`
- `install-taskmaster` - Comprehensive installation guide
- `quick-install-taskmaster` - One-line global installation

@@ -129,17 +129,17 @@ Commands are organized hierarchically to match Task Master's CLI structure while
### Natural Language
Most commands accept natural language arguments:
```
/taskmaster:add-task create user authentication system
/taskmaster:update mark all API tasks as high priority
/taskmaster:list show blocked tasks
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```

### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/taskmaster:show 45
/taskmaster:expand 23
/taskmaster:set-status/to-done 67
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```

### Smart Defaults
@@ -66,7 +66,7 @@ The AI:
## Example Updates

```
/taskmaster:update/single 5: add rate limiting
/project:tm/update/single 5: add rate limiting
→ Updating Task #5: "Implement API endpoints"

Current: Basic CRUD endpoints
@@ -77,7 +77,7 @@ AI analyzes the update context and:
## Example Updates

```
/taskmaster:update/from-id 5: change database to PostgreSQL
/project:tm/update/from-id 5: change database to PostgreSQL
→ Analyzing impact starting from task #5
→ Found 6 related tasks to update
→ Updates will maintain consistency
@@ -66,6 +66,6 @@ For each issue found:
## Next Steps

After validation:
- Run `/taskmaster:fix-dependencies` to auto-fix
- Run `/project:tm/fix-dependencies` to auto-fix
- Manually adjust problematic dependencies
- Rerun to verify fixes
@@ -2,7 +2,7 @@
	"mcpServers": {
		"task-master-ai": {
			"command": "node",
			"args": ["./dist/mcp-server.js"],
			"args": ["./mcp-server/server.js"],
			"env": {
				"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
.github/scripts/auto-close-duplicates.mjs (vendored, 259 lines)
@@ -1,259 +0,0 @@
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

function extractDuplicateIssueNumber(commentBody) {
	const match = commentBody.match(/#(\d+)/);
	return match ? parseInt(match[1], 10) : null;
}

async function closeIssueAsDuplicate(
	owner,
	repo,
	issueNumber,
	duplicateOfNumber,
	token
) {
	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}`,
		token,
		'PATCH',
		{
			state: 'closed',
			state_reason: 'not_planned',
			labels: ['duplicate']
		}
	);

	await githubRequest(
		`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
		token,
		'POST',
		{
			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
		}
	);
}

async function autoCloseDuplicates() {
	console.log('[DEBUG] Starting auto-close duplicates script');

	const token = process.env.GITHUB_TOKEN;
	if (!token) {
		throw new Error('GITHUB_TOKEN environment variable is required');
	}
	console.log('[DEBUG] GitHub token found');

	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
	console.log(`[DEBUG] Repository: ${owner}/${repo}`);

	const threeDaysAgo = new Date();
	threeDaysAgo.setDate(threeDaysAgo.getDate() - 3);
	console.log(
		`[DEBUG] Checking for duplicate comments older than: ${threeDaysAgo.toISOString()}`
	);

	console.log('[DEBUG] Fetching open issues created more than 3 days ago...');
	const allIssues = [];
	let page = 1;
	const perPage = 100;

	const MAX_PAGES = 50; // Increase limit for larger repos
	let foundRecentIssue = false;

	while (true) {
		const pageIssues = await githubRequest(
			`/repos/${owner}/${repo}/issues?state=open&per_page=${perPage}&page=${page}&sort=created&direction=desc`,
			token
		);

		if (pageIssues.length === 0) break;

		// Filter for issues created more than 3 days ago
		const oldEnoughIssues = pageIssues.filter(
			(issue) => new Date(issue.created_at) <= threeDaysAgo
		);

		allIssues.push(...oldEnoughIssues);

		// If all issues on this page are newer than 3 days, we can stop
		if (oldEnoughIssues.length === 0 && page === 1) {
			foundRecentIssue = true;
			break;
		}

		// If we found some old issues but not all, continue to next page
		// as there might be more old issues
		page++;

		// Safety limit to avoid infinite loops
		if (page > MAX_PAGES) {
			console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
			break;
		}
	}

	const issues = allIssues;
	console.log(`[DEBUG] Found ${issues.length} open issues`);

	let processedCount = 0;
	let candidateCount = 0;

	for (const issue of issues) {
		processedCount++;
		console.log(
			`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
		);

		console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
		const comments = await githubRequest(
			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
		);

		const dupeComments = comments.filter(
			(comment) =>
				comment.body.includes('Found') &&
				comment.body.includes('possible duplicate') &&
				comment.user.type === 'Bot'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} has ${dupeComments.length} duplicate detection comments`
		);

		if (dupeComments.length === 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - no duplicate comments found, skipping`
			);
			continue;
		}

		const lastDupeComment = dupeComments[dupeComments.length - 1];
		const dupeCommentDate = new Date(lastDupeComment.created_at);
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - most recent duplicate comment from: ${dupeCommentDate.toISOString()}`
		);

		if (dupeCommentDate > threeDaysAgo) {
			console.log(
				`[DEBUG] Issue #${issue.number} - duplicate comment is too recent, skipping`
			);
			continue;
		}
		console.log(
			`[DEBUG] Issue #${
				issue.number
			} - duplicate comment is old enough (${Math.floor(
				(Date.now() - dupeCommentDate.getTime()) / (1000 * 60 * 60 * 24)
			)} days)`
		);

		const commentsAfterDupe = comments.filter(
			(comment) => new Date(comment.created_at) > dupeCommentDate
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - ${commentsAfterDupe.length} comments after duplicate detection`
		);

		if (commentsAfterDupe.length > 0) {
			console.log(
				`[DEBUG] Issue #${issue.number} - has activity after duplicate comment, skipping`
			);
			continue;
		}

		console.log(
			`[DEBUG] Issue #${issue.number} - checking reactions on duplicate comment...`
		);
		const reactions = await githubRequest(
			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
			token
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
		);

		const authorThumbsDown = reactions.some(
			(reaction) =>
				reaction.user.id === issue.user.id && reaction.content === '-1'
		);
		console.log(
			`[DEBUG] Issue #${issue.number} - author thumbs down reaction: ${authorThumbsDown}`
		);

		if (authorThumbsDown) {
			console.log(
				`[DEBUG] Issue #${issue.number} - author disagreed with duplicate detection, skipping`
			);
			continue;
		}

		const duplicateIssueNumber = extractDuplicateIssueNumber(
			lastDupeComment.body
		);
		if (!duplicateIssueNumber) {
			console.log(
				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
			);
			continue;
		}

		candidateCount++;
		const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;

		try {
			console.log(
				`[INFO] Auto-closing issue #${issue.number} as duplicate of #${duplicateIssueNumber}: ${issueUrl}`
			);
			await closeIssueAsDuplicate(
				owner,
				repo,
				issue.number,
				duplicateIssueNumber,
				token
			);
			console.log(
				`[SUCCESS] Successfully closed issue #${issue.number} as duplicate of #${duplicateIssueNumber}`
			);
		} catch (error) {
			console.error(
				`[ERROR] Failed to close issue #${issue.number} as duplicate: ${error}`
			);
		}
	}

	console.log(
		`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates for auto-close`
	);
}

autoCloseDuplicates().catch(console.error);
178
.github/scripts/backfill-duplicate-comments.mjs
vendored
178
.github/scripts/backfill-duplicate-comments.mjs
vendored
@@ -1,178 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
async function githubRequest(endpoint, token, method = 'GET', body) {
|
||||
const response = await fetch(`https://api.github.com${endpoint}`, {
|
||||
method,
|
||||
headers: {
|
||||
Authorization: `Bearer ${token}`,
|
||||
Accept: 'application/vnd.github.v3+json',
|
||||
'User-Agent': 'backfill-duplicate-comments-script',
|
||||
...(body && { 'Content-Type': 'application/json' })
|
||||
},
|
||||
...(body && { body: JSON.stringify(body) })
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(
|
||||
`GitHub API request failed: ${response.status} ${response.statusText}`
|
||||
);
|
||||
}
|
||||
|
||||
return response.json();
|
||||
}
|
||||
|
||||
async function triggerDedupeWorkflow(
|
||||
owner,
|
||||
repo,
|
||||
issueNumber,
|
||||
token,
|
||||
dryRun = true
|
||||
) {
|
||||
if (dryRun) {
|
||||
console.log(
|
||||
`[DRY RUN] Would trigger dedupe workflow for issue #${issueNumber}`
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
await githubRequest(
|
||||
`/repos/${owner}/${repo}/actions/workflows/claude-dedupe-issues.yml/dispatches`,
|
||||
token,
|
||||
'POST',
|
||||
{
|
||||
ref: 'main',
|
||||
inputs: {
|
||||
issue_number: issueNumber.toString()
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
async function backfillDuplicateComments() {
|
||||
console.log('[DEBUG] Starting backfill duplicate comments script');
|
||||
|
||||
const token = process.env.GITHUB_TOKEN;
|
||||
if (!token) {
|
||||
throw new Error(`GITHUB_TOKEN environment variable is required
|
||||
|
||||
Usage:
|
||||
node .github/scripts/backfill-duplicate-comments.mjs
|
||||
|
||||
Environment Variables:
|
||||
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
|
||||
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
|
||||
DAYS_BACK - How many days back to look for old issues (default: 90)`);
|
||||
}
|
||||
console.log('[DEBUG] GitHub token found');
|
||||
|
||||
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
|
||||
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
|
||||
const dryRun = process.env.DRY_RUN !== 'false';
|
||||
const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);
|
||||
|
||||
console.log(`[DEBUG] Repository: ${owner}/${repo}`);
|
||||
console.log(`[DEBUG] Dry run mode: ${dryRun}`);
|
||||
console.log(`[DEBUG] Looking back ${daysBack} days`);
|
||||
|
||||
const cutoffDate = new Date();
|
||||
cutoffDate.setDate(cutoffDate.getDate() - daysBack);
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Fetching issues created since ${cutoffDate.toISOString()}...`
|
||||
);
|
||||
const allIssues = [];
|
||||
let page = 1;
|
||||
const perPage = 100;
|
||||
|
||||
while (true) {
|
||||
const pageIssues = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues?state=all&per_page=${perPage}&page=${page}&since=${cutoffDate.toISOString()}`,
|
||||
token
|
||||
);
|
||||
|
||||
if (pageIssues.length === 0) break;
|
||||
|
||||
allIssues.push(...pageIssues);
|
||||
page++;
|
||||
|
||||
// Safety limit to avoid infinite loops
|
||||
if (page > 100) {
|
||||
console.log('[DEBUG] Reached page limit, stopping pagination');
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Found ${allIssues.length} issues from the last ${daysBack} days`
|
||||
);
|
||||
|
||||
let processedCount = 0;
|
||||
let candidateCount = 0;
|
||||
let triggeredCount = 0;
|
||||
|
||||
for (const issue of allIssues) {
|
||||
processedCount++;
|
||||
console.log(
|
||||
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${allIssues.length}): ${issue.title}`
|
||||
);
|
||||
|
||||
console.log(`[DEBUG] Fetching comments for issue #${issue.number}...`);
|
||||
const comments = await githubRequest(
|
||||
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
|
||||
token
|
||||
);
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${comments.length} comments`
|
||||
);
|
||||
|
||||
// Look for existing duplicate detection comments (from the dedupe bot)
|
||||
const dupeDetectionComments = comments.filter(
|
||||
(comment) =>
|
||||
comment.body.includes('Found') &&
|
||||
comment.body.includes('possible duplicate') &&
|
||||
comment.user.type === 'Bot'
|
||||
);
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} has ${dupeDetectionComments.length} duplicate detection comments`
|
||||
);
|
||||
|
||||
// Skip if there's already a duplicate detection comment
|
||||
if (dupeDetectionComments.length > 0) {
|
||||
console.log(
|
||||
`[DEBUG] Issue #${issue.number} already has duplicate detection comment, skipping`
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
candidateCount++;
|
||||
const issueUrl = `https://github.com/${owner}/${repo}/issues/${issue.number}`;
|
||||
|
||||
try {
|
||||
console.log(
|
||||
`[INFO] ${dryRun ? '[DRY RUN] ' : ''}Triggering dedupe workflow for issue #${issue.number}: ${issueUrl}`
|
||||
);
|
||||
await triggerDedupeWorkflow(owner, repo, issue.number, token, dryRun);
|
||||
|
||||
if (!dryRun) {
|
||||
console.log(
|
||||
`[SUCCESS] Successfully triggered dedupe workflow for issue #${issue.number}`
|
||||
);
|
||||
}
|
||||
triggeredCount++;
|
||||
} catch (error) {
|
||||
console.error(
|
||||
`[ERROR] Failed to trigger workflow for issue #${issue.number}: ${error}`
|
||||
);
|
||||
}
|
||||
|
||||
// Add a delay between workflow triggers to avoid overwhelming the system
|
||||
await new Promise((resolve) => setTimeout(resolve, 1000));
|
||||
}
|
||||
|
||||
console.log(
|
||||
`[DEBUG] Script completed. Processed ${processedCount} issues, found ${candidateCount} candidates without duplicate comments, ${dryRun ? 'would trigger' : 'triggered'} ${triggeredCount} workflows`
|
||||
);
|
||||
}
|
||||
|
||||
backfillDuplicateComments().catch(console.error);
157
.github/scripts/parse-metrics.mjs
vendored
@@ -1,157 +0,0 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
import { readFileSync, existsSync, writeFileSync } from 'fs';
|
||||
|
||||
function parseMetricsTable(content, metricName) {
|
||||
const lines = content.split('\n');
|
||||
|
||||
for (let i = 0; i < lines.length; i++) {
|
||||
const line = lines[i].trim();
|
||||
// Match a markdown table row like: | Metric Name | value | ...
|
||||
const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
|
||||
const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
|
||||
const match = line.match(re);
|
||||
if (match) {
|
||||
return match[1].trim() || 'N/A';
|
||||
}
|
||||
}
|
||||
return 'N/A';
|
||||
}
|
||||
|
||||
function parseCountMetric(content, metricName) {
|
||||
const result = parseMetricsTable(content, metricName);
|
||||
// Extract number from string, handling commas and spaces
|
||||
const numberMatch = result.toString().match(/[\d,]+/);
|
||||
if (numberMatch) {
|
||||
const number = parseInt(numberMatch[0].replace(/,/g, ''));
|
||||
return isNaN(number) ? 0 : number;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
function main() {
|
||||
const metrics = {
|
||||
issues_created: 0,
|
||||
issues_closed: 0,
|
||||
prs_created: 0,
|
||||
prs_merged: 0,
|
||||
issue_avg_first_response: 'N/A',
|
||||
issue_avg_time_to_close: 'N/A',
|
||||
pr_avg_first_response: 'N/A',
|
||||
pr_avg_merge_time: 'N/A'
|
||||
};
|
||||
|
||||
// Parse issue metrics
|
||||
if (existsSync('issue_metrics.md')) {
|
||||
console.log('📄 Found issue_metrics.md, parsing...');
|
||||
const issueContent = readFileSync('issue_metrics.md', 'utf8');
|
||||
|
||||
metrics.issues_created = parseCountMetric(
|
||||
issueContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
metrics.issues_closed = parseCountMetric(
|
||||
issueContent,
|
||||
'Number of items closed'
|
||||
);
|
||||
metrics.issue_avg_first_response = parseMetricsTable(
|
||||
issueContent,
|
||||
'Time to first response'
|
||||
);
|
||||
metrics.issue_avg_time_to_close = parseMetricsTable(
|
||||
issueContent,
|
||||
'Time to close'
|
||||
);
|
||||
} else {
|
||||
console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
|
||||
}
|
||||
|
||||
// Parse PR created metrics
|
||||
if (existsSync('pr_created_metrics.md')) {
|
||||
console.log('📄 Found pr_created_metrics.md, parsing...');
|
||||
const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');
|
||||
|
||||
metrics.prs_created = parseCountMetric(
|
||||
prCreatedContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
metrics.pr_avg_first_response = parseMetricsTable(
|
||||
prCreatedContent,
|
||||
'Time to first response'
|
||||
);
|
||||
} else {
|
||||
console.warn(
|
||||
'[parse-metrics] pr_created_metrics.md not found; using defaults.'
|
||||
);
|
||||
}
|
||||
|
||||
// Parse PR merged metrics (for more accurate merge data)
|
||||
if (existsSync('pr_merged_metrics.md')) {
|
||||
console.log('📄 Found pr_merged_metrics.md, parsing...');
|
||||
const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');
|
||||
|
||||
metrics.prs_merged = parseCountMetric(
|
||||
prMergedContent,
|
||||
'Total number of items created'
|
||||
);
|
||||
// For merged PRs, "Time to close" is actually time to merge
|
||||
metrics.pr_avg_merge_time = parseMetricsTable(
|
||||
prMergedContent,
|
||||
'Time to close'
|
||||
);
|
||||
} else {
|
||||
console.warn(
|
||||
'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
|
||||
);
|
||||
// Fallback: try old pr_metrics.md if it exists
|
||||
if (existsSync('pr_metrics.md')) {
|
||||
console.log('📄 Falling back to pr_metrics.md...');
|
||||
const prContent = readFileSync('pr_metrics.md', 'utf8');
|
||||
|
||||
const mergedCount = parseCountMetric(prContent, 'Number of items merged');
|
||||
metrics.prs_merged =
|
||||
mergedCount || parseCountMetric(prContent, 'Number of items closed');
|
||||
|
||||
const maybeMergeTime = parseMetricsTable(
|
||||
prContent,
|
||||
'Average time to merge'
|
||||
);
|
||||
metrics.pr_avg_merge_time =
|
||||
maybeMergeTime !== 'N/A'
|
||||
? maybeMergeTime
|
||||
: parseMetricsTable(prContent, 'Time to close');
|
||||
} else {
|
||||
console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
|
||||
}
|
||||
}
|
||||
|
||||
// Output for GitHub Actions
|
||||
const output = Object.entries(metrics)
|
||||
.map(([key, value]) => `${key}=${value}`)
|
||||
.join('\n');
|
||||
|
||||
// Always output to stdout for debugging
|
||||
console.log('\n=== FINAL METRICS ===');
|
||||
Object.entries(metrics).forEach(([key, value]) => {
|
||||
console.log(`${key}: ${value}`);
|
||||
});
|
||||
|
||||
// Write to GITHUB_OUTPUT if in GitHub Actions
|
||||
if (process.env.GITHUB_OUTPUT) {
|
||||
try {
|
||||
writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
|
||||
console.log(
|
||||
`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
|
||||
);
|
||||
} catch (error) {
|
||||
console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
|
||||
process.exit(1);
|
||||
}
|
||||
} else {
|
||||
console.log(
|
||||
'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
main();
|
||||
54
.github/scripts/pre-release.mjs
vendored
Executable file
@@ -0,0 +1,54 @@
|
||||
#!/usr/bin/env node
|
||||
import { readFileSync, existsSync } from 'node:fs';
|
||||
import { join, dirname } from 'node:path';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
import {
|
||||
findRootDir,
|
||||
runCommand,
|
||||
getPackageVersion,
|
||||
createAndPushTag
|
||||
} from './utils.mjs';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
const rootDir = findRootDir(__dirname);
|
||||
const extensionPkgPath = join(rootDir, 'apps', 'extension', 'package.json');
|
||||
|
||||
console.log('🚀 Starting pre-release process...');
|
||||
|
||||
// Check if we're in RC mode
|
||||
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
|
||||
if (!existsSync(preJsonPath)) {
|
||||
console.error('⚠️ Not in RC mode. Run "npx changeset pre enter rc" first.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
try {
|
||||
const preJson = JSON.parse(readFileSync(preJsonPath, 'utf8'));
|
||||
if (preJson.tag !== 'rc') {
|
||||
console.error(`⚠️ Not in RC mode. Current tag: ${preJson.tag}`);
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to read pre.json:', error.message);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Get current extension version
|
||||
const extensionVersion = getPackageVersion(extensionPkgPath);
|
||||
console.log(`Extension version: ${extensionVersion}`);
|
||||
|
||||
// Run changeset publish for npm packages
|
||||
console.log('📦 Publishing npm packages...');
|
||||
runCommand('npx', ['changeset', 'publish']);
|
||||
|
||||
// Create tag for extension pre-release if it doesn't exist
|
||||
const extensionTag = `extension-rc@${extensionVersion}`;
|
||||
const tagCreated = createAndPushTag(extensionTag);
|
||||
|
||||
if (tagCreated) {
|
||||
console.log('This will trigger the extension-pre-release workflow...');
|
||||
}
|
||||
|
||||
console.log('✅ Pre-release process completed!');
|
||||
31
.github/workflows/auto-close-duplicates.yml
vendored
@@ -1,31 +0,0 @@
|
||||
name: Auto-close duplicate issues
|
||||
# description: Auto-closes issues that are duplicates of existing issues
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 9 * * *" # Runs daily at 9 AM UTC
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
auto-close-duplicates:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write # Need write permission to close issues and add comments
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Auto-close duplicate issues
|
||||
run: node .github/scripts/auto-close-duplicates.mjs
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
|
||||
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
|
||||
@@ -1,46 +0,0 @@
|
||||
name: Backfill Duplicate Comments
|
||||
# description: Triggers duplicate detection for old issues that don't have duplicate comments
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
days_back:
|
||||
description: "How many days back to look for old issues"
|
||||
required: false
|
||||
default: "90"
|
||||
type: string
|
||||
dry_run:
|
||||
description: "Dry run mode (true to only log what would be done)"
|
||||
required: false
|
||||
default: "true"
|
||||
type: choice
|
||||
options:
|
||||
- "true"
|
||||
- "false"
|
||||
|
||||
jobs:
|
||||
backfill-duplicate-comments:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 30
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
actions: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Backfill duplicate comments
|
||||
run: node .github/scripts/backfill-duplicate-comments.mjs
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
|
||||
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
|
||||
DAYS_BACK: ${{ inputs.days_back }}
|
||||
DRY_RUN: ${{ inputs.dry_run }}
|
||||
126
.github/workflows/ci.yml
vendored
@@ -6,124 +6,73 @@ on:
|
||||
- main
|
||||
- next
|
||||
pull_request:
|
||||
workflow_dispatch:
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
|
||||
cancel-in-progress: true
|
||||
branches:
|
||||
- main
|
||||
- next
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
env:
|
||||
DO_NOT_TRACK: 1
|
||||
NODE_ENV: development
|
||||
|
||||
jobs:
|
||||
# Fast checks that can run in parallel
|
||||
format-check:
|
||||
name: Format Check
|
||||
setup:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
fetch-depth: 0
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
cache: 'npm'
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
- name: Install Dependencies
|
||||
id: install
|
||||
run: npm ci
|
||||
timeout-minutes: 2
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
format-check:
|
||||
needs: setup
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
- name: Format Check
|
||||
run: npm run format-check
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
typecheck:
|
||||
name: Typecheck
|
||||
timeout-minutes: 10
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Typecheck
|
||||
run: npm run turbo:typecheck
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
# Build job to ensure everything compiles
|
||||
build:
|
||||
name: Build
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Build
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Upload build artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
retention-days: 1
|
||||
|
||||
test:
|
||||
name: Test
|
||||
timeout-minutes: 15
|
||||
needs: setup
|
||||
runs-on: ubuntu-latest
|
||||
needs: [format-check, typecheck, build]
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm install --frozen-lockfile --prefer-offline
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Download build artifacts
|
||||
uses: actions/download-artifact@v4
|
||||
- name: Restore node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
name: build-artifacts
|
||||
path: dist/
|
||||
path: node_modules
|
||||
key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
|
||||
|
||||
- name: Run Tests
|
||||
run: |
|
||||
@@ -132,6 +81,7 @@ jobs:
|
||||
NODE_ENV: test
|
||||
CI: true
|
||||
FORCE_COLOR: 1
|
||||
timeout-minutes: 10
|
||||
|
||||
- name: Upload Test Results
|
||||
if: always()
|
||||
|
||||
81
.github/workflows/claude-dedupe-issues.yml
vendored
@@ -1,81 +0,0 @@
|
||||
name: Claude Issue Dedupe
|
||||
# description: Automatically dedupe GitHub issues using Claude Code
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
issue_number:
|
||||
description: "Issue number to process for duplicate detection"
|
||||
required: true
|
||||
type: string
|
||||
|
||||
jobs:
|
||||
claude-dedupe-issues:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Run Claude Code slash command
|
||||
uses: anthropics/claude-code-base-action@beta
|
||||
with:
|
||||
prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
claude_env: |
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Log duplicate comment event to Statsig
|
||||
if: always()
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number || inputs.issue_number }}
|
||||
REPO=${{ github.repository }}
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg triggered_by "${{ github.event_name }}" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_duplicate_comment_added",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
triggered_by: $triggered_by,
|
||||
workflow_run_id: "${{ github.run_id }}"
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging duplicate comment event to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged duplicate comment event for issue #${ISSUE_NUMBER}"
|
||||
else
|
||||
echo "Failed to log duplicate comment event for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
57
.github/workflows/claude-docs-trigger.yml
vendored
@@ -1,57 +0,0 @@
|
||||
name: Trigger Claude Documentation Update
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- next
|
||||
paths-ignore:
|
||||
- "apps/docs/**"
|
||||
- "*.md"
|
||||
- ".github/workflows/**"
|
||||
|
||||
jobs:
|
||||
trigger-docs-update:
|
||||
# Only run if changes were merged (not direct pushes from bots)
|
||||
if: github.actor != 'github-actions[bot]' && github.actor != 'dependabot[bot]'
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
actions: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 2 # Need previous commit for comparison
|
||||
|
||||
- name: Get changed files
|
||||
id: changed-files
|
||||
run: |
|
||||
echo "Changed files in this push:"
|
||||
git diff --name-only HEAD^ HEAD | tee changed_files.txt
|
||||
|
||||
# Store changed files for Claude to analyze (escaped for JSON)
|
||||
CHANGED_FILES=$(git diff --name-only HEAD^ HEAD | jq -Rs .)
|
||||
echo "changed_files=$CHANGED_FILES" >> $GITHUB_OUTPUT
|
||||
|
||||
# Get the commit message (escaped for JSON)
|
||||
COMMIT_MSG=$(git log -1 --pretty=%B | jq -Rs .)
|
||||
echo "commit_message=$COMMIT_MSG" >> $GITHUB_OUTPUT
|
||||
|
||||
# Get diff for documentation context (escaped for JSON)
|
||||
COMMIT_DIFF=$(git diff HEAD^ HEAD --stat | jq -Rs .)
|
||||
echo "commit_diff=$COMMIT_DIFF" >> $GITHUB_OUTPUT
|
||||
|
||||
# Get commit SHA
|
||||
echo "commit_sha=${{ github.sha }}" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Trigger Claude workflow
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
# Trigger the Claude docs updater workflow with the change information
|
||||
gh workflow run claude-docs-updater.yml \
|
||||
--ref next \
|
||||
-f commit_sha="${{ steps.changed-files.outputs.commit_sha }}" \
|
||||
-f commit_message=${{ steps.changed-files.outputs.commit_message }} \
|
||||
-f changed_files=${{ steps.changed-files.outputs.changed_files }} \
|
||||
-f commit_diff=${{ steps.changed-files.outputs.commit_diff }}
|
||||
145
.github/workflows/claude-docs-updater.yml
vendored
@@ -1,145 +0,0 @@
|
||||
name: Claude Documentation Updater
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
commit_sha:
|
||||
description: 'The commit SHA that triggered this update'
|
||||
required: true
|
||||
type: string
|
||||
commit_message:
|
||||
description: 'The commit message'
|
||||
required: true
|
||||
type: string
|
||||
changed_files:
|
||||
description: 'List of changed files'
|
||||
required: true
|
||||
type: string
|
||||
commit_diff:
|
||||
description: 'Diff summary of changes'
|
||||
required: true
|
||||
type: string
|
||||
|
||||
jobs:
|
||||
update-docs:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: write
|
||||
pull-requests: write
|
||||
issues: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
ref: next
|
||||
fetch-depth: 0 # Need full history to checkout specific commit
|
||||
|
||||
- name: Create docs update branch
|
||||
id: create-branch
|
||||
run: |
|
||||
BRANCH_NAME="docs/auto-update-$(date +%Y%m%d-%H%M%S)"
|
||||
git checkout -b $BRANCH_NAME
|
||||
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Run Claude Code to Update Documentation
|
||||
uses: anthropics/claude-code-action@beta
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
timeout_minutes: "30"
|
||||
mode: "agent"
|
||||
github_token: ${{ secrets.GITHUB_TOKEN }}
|
||||
experimental_allowed_domains: |
|
||||
.anthropic.com
|
||||
.github.com
|
||||
api.github.com
|
||||
.githubusercontent.com
|
||||
registry.npmjs.org
|
||||
.task-master.dev
|
||||
base_branch: "next"
|
||||
direct_prompt: |
|
||||
You are a documentation specialist. Analyze the recent changes pushed to the 'next' branch and update the documentation accordingly.
|
||||
|
||||
Recent changes:
|
||||
- Commit: ${{ inputs.commit_message }}
|
||||
- Changed files:
|
||||
${{ inputs.changed_files }}
|
||||
|
||||
- Changes summary:
|
||||
${{ inputs.commit_diff }}
|
||||
|
||||
Your task:
|
||||
1. Analyze the changes to understand what functionality was added, modified, or removed
|
||||
2. Check if these changes require documentation updates in apps/docs/
|
||||
3. If documentation updates are needed:
|
||||
- Update relevant documentation files in apps/docs/
|
||||
- Ensure examples are updated if APIs changed
|
||||
- Update any configuration documentation if config options changed
|
||||
- Add new documentation pages if new features were added
|
||||
- Update the changelog or release notes if applicable
|
||||
4. If no documentation updates are needed, skip creating changes
|
||||
|
||||
Guidelines:
|
||||
- Focus only on user-facing changes that need documentation
|
||||
- Keep documentation clear, concise, and helpful
|
||||
- Include code examples where appropriate
|
||||
- Maintain consistent documentation style with existing docs
|
||||
- Don't document internal implementation details unless they affect users
|
||||
- Update navigation/menu files if new pages are added
|
||||
|
||||
Only make changes if the documentation truly needs updating based on the code changes.
|
||||
|
||||
- name: Check if changes were made
|
||||
id: check-changes
|
||||
run: |
|
||||
if git diff --quiet; then
|
||||
echo "has_changes=false" >> $GITHUB_OUTPUT
|
||||
else
|
||||
echo "has_changes=true" >> $GITHUB_OUTPUT
|
||||
git add -A
|
||||
git config --local user.email "github-actions[bot]@users.noreply.github.com"
|
||||
git config --local user.name "github-actions[bot]"
|
||||
git commit -m "docs: auto-update documentation based on changes in next branch
|
||||
|
||||
This PR was automatically generated to update documentation based on recent changes.
|
||||
|
||||
Original commit: ${{ inputs.commit_message }}
|
||||
|
||||
Co-authored-by: Claude <claude-assistant@anthropic.com>"
|
||||
fi
|
||||
|
||||
- name: Push changes and create PR
|
||||
if: steps.check-changes.outputs.has_changes == 'true'
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
git push origin ${{ steps.create-branch.outputs.branch_name }}
|
||||
|
||||
# Create PR using GitHub CLI
|
||||
gh pr create \
|
||||
--title "docs: update documentation for recent changes" \
|
||||
--body "## 📚 Documentation Update
|
||||
|
||||
This PR automatically updates documentation based on recent changes merged to the \`next\` branch.
|
||||
|
||||
### Original Changes
|
||||
**Commit:** ${{ inputs.commit_sha }}
|
||||
**Message:** ${{ inputs.commit_message }}
|
||||
|
||||
### Changed Files in Original Commit
|
||||
\`\`\`
|
||||
${{ inputs.changed_files }}
|
||||
\`\`\`
|
||||
|
||||
### Documentation Updates
|
||||
This PR includes documentation updates to reflect the changes above. Please review to ensure:
|
||||
- [ ] Documentation accurately reflects the changes
|
||||
- [ ] Examples are correct and working
|
||||
- [ ] No important details are missing
|
||||
- [ ] Style is consistent with existing documentation
|
||||
|
||||
---
|
||||
*This PR was automatically generated by Claude Code GitHub Action*" \
|
||||
--base next \
|
||||
--head ${{ steps.create-branch.outputs.branch_name }} \
|
||||
--label "documentation" \
|
||||
--label "automated"
|
||||
107
.github/workflows/claude-issue-triage.yml
vendored
@@ -1,107 +0,0 @@
|
||||
name: Claude Issue Triage
|
||||
# description: Automatically triage GitHub issues using Claude Code
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
|
||||
jobs:
|
||||
triage-issue:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 10
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Create triage prompt
|
||||
run: |
|
||||
mkdir -p /tmp/claude-prompts
|
||||
cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
|
||||
You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
|
||||
|
||||
IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
|
||||
|
||||
Issue Information:
|
||||
- REPO: ${{ github.repository }}
|
||||
- ISSUE_NUMBER: ${{ github.event.issue.number }}
|
||||
|
||||
TASK OVERVIEW:
|
||||
|
||||
1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
|
||||
|
||||
2. Next, use the GitHub tools to get context about the issue:
|
||||
- You have access to these tools:
|
||||
- mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
|
||||
- mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
|
||||
- mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
|
||||
- mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
|
||||
- mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
|
||||
- Start by using mcp__github__get_issue to get the issue details
|
||||
|
||||
3. Analyze the issue content, considering:
|
||||
- The issue title and description
|
||||
- The type of issue (bug report, feature request, question, etc.)
|
||||
- Technical areas mentioned
|
||||
- Severity or priority indicators
|
||||
- User impact
|
||||
- Components affected
|
||||
|
||||
4. Select appropriate labels from the available labels list provided above:
|
||||
- Choose labels that accurately reflect the issue's nature
|
||||
- Be specific but comprehensive
|
||||
- Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
|
||||
- Consider platform labels (android, ios) if applicable
|
||||
- If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
|
||||
|
||||
5. Apply the selected labels:
|
||||
- Use mcp__github__update_issue to apply your selected labels
|
||||
- DO NOT post any comments explaining your decision
|
||||
- DO NOT communicate directly with users
|
||||
- If no labels are clearly applicable, do not apply any labels
|
||||
|
||||
IMPORTANT GUIDELINES:
|
||||
- Be thorough in your analysis
|
||||
- Only select labels from the provided list above
|
||||
- DO NOT post any comments to the issue
|
||||
- Your ONLY action should be to apply labels using mcp__github__update_issue
|
||||
- It's okay to not add any labels if none are clearly applicable
|
||||
EOF
|
||||
|
||||
- name: Setup GitHub MCP Server
|
||||
run: |
|
||||
mkdir -p /tmp/mcp-config
|
||||
cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
|
||||
{
|
||||
"mcpServers": {
|
||||
"github": {
|
||||
"command": "docker",
|
||||
"args": [
|
||||
"run",
|
||||
"-i",
|
||||
"--rm",
|
||||
"-e",
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN",
|
||||
"ghcr.io/github/github-mcp-server:sha-7aced2b"
|
||||
],
|
||||
"env": {
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
- name: Run Claude Code for Issue Triage
|
||||
uses: anthropics/claude-code-base-action@beta
|
||||
with:
|
||||
prompt_file: /tmp/claude-prompts/triage-prompt.txt
|
||||
allowed_tools: "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
|
||||
timeout_minutes: "5"
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
mcp_config: /tmp/mcp-config/mcp-servers.json
|
||||
claude_env: |
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
36
.github/workflows/claude.yml
vendored
@@ -1,36 +0,0 @@
|
||||
name: Claude Code
|
||||
|
||||
on:
|
||||
issue_comment:
|
||||
types: [created]
|
||||
pull_request_review_comment:
|
||||
types: [created]
|
||||
issues:
|
||||
types: [opened, assigned]
|
||||
pull_request_review:
|
||||
types: [submitted]
|
||||
|
||||
jobs:
|
||||
claude:
|
||||
if: |
|
||||
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
|
||||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
|
||||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
|
||||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
pull-requests: read
|
||||
issues: read
|
||||
id-token: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
- name: Run Claude Code
|
||||
id: claude
|
||||
uses: anthropics/claude-code-action@beta
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
5
.github/workflows/extension-ci.yml
vendored
@@ -41,7 +41,8 @@ jobs:
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Monorepo Dependencies
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
@@ -67,6 +68,7 @@ jobs:
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install if cache miss
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 3
|
||||
|
||||
@@ -98,6 +100,7 @@ jobs:
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install if cache miss
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 3
|
||||
|
||||
|
||||
110
.github/workflows/extension-pre-release.yml
vendored
Normal file
@@ -0,0 +1,110 @@
|
||||
name: Extension Pre-Release
|
||||
|
||||
on:
|
||||
push:
|
||||
tags:
|
||||
- "extension-rc@*"
|
||||
|
||||
permissions:
|
||||
contents: write
|
||||
|
||||
concurrency: extension-pre-release-${{ github.ref }}
|
||||
|
||||
jobs:
|
||||
publish-extension-rc:
|
||||
runs-on: ubuntu-latest
|
||||
environment: extension-release
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
with:
|
||||
path: |
|
||||
node_modules
|
||||
*/*/node_modules
|
||||
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
- name: Type Check Extension
|
||||
working-directory: apps/extension
|
||||
run: npm run check-types
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Build Extension
|
||||
working-directory: apps/extension
|
||||
run: npm run build
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Package Extension
|
||||
working-directory: apps/extension
|
||||
run: npm run package
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Create VSIX Package (Pre-Release)
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: npx vsce package --no-dependencies --pre-release
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Get VSIX filename
|
||||
id: vsix-info
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: |
|
||||
VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
|
||||
if [ -z "$VSIX_FILE" ]; then
|
||||
echo "Error: No VSIX file found"
|
||||
exit 1
|
||||
fi
|
||||
echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
|
||||
echo "Found VSIX: $VSIX_FILE"
|
||||
|
||||
- name: Publish to VS Code Marketplace (Pre-Release)
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
|
||||
env:
|
||||
VSCE_PAT: ${{ secrets.VSCE_PAT }}
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Install Open VSX CLI
|
||||
run: npm install -g ovsx
|
||||
|
||||
- name: Publish to Open VSX Registry (Pre-Release)
|
||||
working-directory: apps/extension/vsix-build
|
||||
run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
|
||||
env:
|
||||
OVSX_PAT: ${{ secrets.OVSX_PAT }}
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Upload Build Artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: extension-pre-release-${{ github.ref_name }}
|
||||
path: |
|
||||
apps/extension/vsix-build/*.vsix
|
||||
apps/extension/dist/
|
||||
retention-days: 30
|
||||
|
||||
notify-success:
|
||||
needs: publish-extension-rc
|
||||
if: success()
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Success Notification
|
||||
run: |
|
||||
echo "🚀 Extension ${{ github.ref_name }} successfully published as pre-release!"
|
||||
echo "📦 Available on VS Code Marketplace (Pre-Release)"
|
||||
echo "🌍 Available on Open VSX Registry (Pre-Release)"
|
||||
3
.github/workflows/extension-release.yml
vendored
@@ -31,7 +31,8 @@ jobs:
|
||||
restore-keys: |
|
||||
${{ runner.os }}-node-
|
||||
|
||||
- name: Install Monorepo Dependencies
|
||||
- name: Install Extension Dependencies
|
||||
working-directory: apps/extension
|
||||
run: npm ci
|
||||
timeout-minutes: 5
|
||||
|
||||
|
||||
176
.github/workflows/log-issue-events.yml
vendored
@@ -1,176 +0,0 @@
|
||||
name: Log GitHub Issue Events
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened, closed]
|
||||
|
||||
jobs:
|
||||
log-issue-created:
|
||||
if: github.event.action == 'opened'
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 5
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
|
||||
steps:
|
||||
- name: Log issue creation to Statsig
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number }}
|
||||
REPO=${{ github.repository }}
|
||||
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
|
||||
AUTHOR="${{ github.event.issue.user.login }}"
|
||||
CREATED_AT="${{ github.event.issue.created_at }}"
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg title "$ISSUE_TITLE" \
|
||||
--arg author "$AUTHOR" \
|
||||
--arg created_at "$CREATED_AT" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_issue_created",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
issue_title: $title,
|
||||
issue_author: $author,
|
||||
created_at: $created_at
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging issue creation to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged issue creation for issue #${ISSUE_NUMBER}"
|
||||
else
|
||||
echo "Failed to log issue creation for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
|
||||
log-issue-closed:
|
||||
if: github.event.action == 'closed'
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 5
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
|
||||
steps:
|
||||
- name: Log issue closure to Statsig
|
||||
env:
|
||||
STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
run: |
|
||||
ISSUE_NUMBER=${{ github.event.issue.number }}
|
||||
REPO=${{ github.repository }}
|
||||
ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
|
||||
CLOSED_BY="${{ github.event.issue.closed_by.login }}"
|
||||
CLOSED_AT="${{ github.event.issue.closed_at }}"
|
||||
STATE_REASON="${{ github.event.issue.state_reason }}"
|
||||
|
||||
if [ -z "$STATSIG_API_KEY" ]; then
|
||||
echo "STATSIG_API_KEY not found, skipping Statsig logging"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Get additional issue data via GitHub API
|
||||
echo "Fetching additional issue data for #${ISSUE_NUMBER}"
|
||||
ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
|
||||
-H "Accept: application/vnd.github.v3+json" \
|
||||
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")
|
||||
|
||||
COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')
|
||||
|
||||
# Get reactions data
|
||||
REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
|
||||
-H "Accept: application/vnd.github.v3+json" \
|
||||
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")
|
||||
|
||||
REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')
|
||||
|
||||
# Check if issue was closed automatically (by checking if closed_by is a bot)
|
||||
CLOSED_AUTOMATICALLY="false"
|
||||
if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
|
||||
CLOSED_AUTOMATICALLY="true"
|
||||
fi
|
||||
|
||||
# Check if closed as duplicate by state_reason
|
||||
CLOSED_AS_DUPLICATE="false"
|
||||
if [ "$STATE_REASON" = "duplicate" ]; then
|
||||
CLOSED_AS_DUPLICATE="true"
|
||||
fi
|
||||
|
||||
# Prepare the event payload
|
||||
EVENT_PAYLOAD=$(jq -n \
|
||||
--arg issue_number "$ISSUE_NUMBER" \
|
||||
--arg repo "$REPO" \
|
||||
--arg title "$ISSUE_TITLE" \
|
||||
--arg closed_by "$CLOSED_BY" \
|
||||
--arg closed_at "$CLOSED_AT" \
|
||||
--arg state_reason "$STATE_REASON" \
|
||||
--arg comments_count "$COMMENTS_COUNT" \
|
||||
--arg reactions_count "$REACTIONS_COUNT" \
|
||||
--arg closed_automatically "$CLOSED_AUTOMATICALLY" \
|
||||
--arg closed_as_duplicate "$CLOSED_AS_DUPLICATE" \
|
||||
'{
|
||||
events: [{
|
||||
eventName: "github_issue_closed",
|
||||
value: 1,
|
||||
metadata: {
|
||||
repository: $repo,
|
||||
issue_number: ($issue_number | tonumber),
|
||||
issue_title: $title,
|
||||
closed_by: $closed_by,
|
||||
closed_at: $closed_at,
|
||||
state_reason: $state_reason,
|
||||
comments_count: ($comments_count | tonumber),
|
||||
reactions_count: ($reactions_count | tonumber),
|
||||
closed_automatically: ($closed_automatically | test("true")),
|
||||
closed_as_duplicate: ($closed_as_duplicate | test("true"))
|
||||
},
|
||||
time: (now | floor | tostring)
|
||||
}]
|
||||
}')
|
||||
|
||||
# Send to Statsig API
|
||||
echo "Logging issue closure to Statsig for issue #${ISSUE_NUMBER}"
|
||||
|
||||
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST https://events.statsigapi.net/v1/log_event \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "STATSIG-API-KEY: ${STATSIG_API_KEY}" \
|
||||
-d "$EVENT_PAYLOAD")
|
||||
|
||||
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
|
||||
BODY=$(echo "$RESPONSE" | head -n-1)
|
||||
|
||||
if [ "$HTTP_CODE" -eq 200 ] || [ "$HTTP_CODE" -eq 202 ]; then
|
||||
echo "Successfully logged issue closure for issue #${ISSUE_NUMBER}"
|
||||
echo "Closed by: $CLOSED_BY"
|
||||
echo "Comments: $COMMENTS_COUNT"
|
||||
echo "Reactions: $REACTIONS_COUNT"
|
||||
echo "Closed automatically: $CLOSED_AUTOMATICALLY"
|
||||
echo "Closed as duplicate: $CLOSED_AS_DUPLICATE"
|
||||
else
|
||||
echo "Failed to log issue closure for issue #${ISSUE_NUMBER}. HTTP ${HTTP_CODE}: ${BODY}"
|
||||
fi
|
||||
18
.github/workflows/pre-release.yml
vendored
@@ -65,27 +65,15 @@ jobs:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
|
||||
- name: Run format
|
||||
run: npm run format
|
||||
env:
|
||||
FORCE_COLOR: 1
|
||||
|
||||
- name: Build packages
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Create Release Candidate Pull Request or Publish Release Candidate to npm
|
||||
uses: changesets/action@v1
|
||||
with:
|
||||
publish: npx changeset publish
|
||||
publish: node ./.github/scripts/pre-release.mjs
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
VSCE_PAT: ${{ secrets.VSCE_PAT }}
|
||||
OVSX_PAT: ${{ secrets.OVSX_PAT }}
|
||||
|
||||
- name: Commit & Push changes
|
||||
uses: actions-js/push@master
|
||||
|
||||
11
.github/workflows/release.yml
vendored
@@ -22,7 +22,7 @@ jobs:
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 20
|
||||
cache: "npm"
|
||||
cache: 'npm'
|
||||
|
||||
- name: Cache node_modules
|
||||
uses: actions/cache@v4
|
||||
@@ -41,15 +41,6 @@ jobs:
|
||||
- name: Check pre-release mode
|
||||
run: node ./.github/scripts/check-pre-release-mode.mjs "main"
|
||||
|
||||
- name: Build packages
|
||||
run: npm run turbo:build
|
||||
env:
|
||||
NODE_ENV: production
|
||||
FORCE_COLOR: 1
|
||||
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
|
||||
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
|
||||
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
|
||||
|
||||
- name: Create Release Pull Request or Publish to npm
|
||||
uses: changesets/action@v1
|
||||
with:
|
||||
|
||||
108
.github/workflows/weekly-metrics-discord.yml
vendored
@@ -1,108 +0,0 @@
|
||||
name: Weekly Metrics to Discord
|
||||
# description: Sends weekly metrics summary to Discord channel
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 9 * * 1" # Every Monday at 9 AM
|
||||
workflow_dispatch:
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
issues: read
|
||||
pull-requests: read
|
||||
|
||||
jobs:
|
||||
weekly-metrics:
|
||||
runs-on: ubuntu-latest
|
||||
env:
|
||||
DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: '20'
|
||||
|
||||
- name: Get dates for last 14 days
|
||||
run: |
|
||||
set -Eeuo pipefail
|
||||
# Last 14 days
|
||||
first_day=$(date -d "14 days ago" +%Y-%m-%d)
|
||||
last_day=$(date +%Y-%m-%d)
|
||||
|
||||
echo "first_day=$first_day" >> $GITHUB_ENV
|
||||
echo "last_day=$last_day" >> $GITHUB_ENV
|
||||
echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
|
||||
echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
|
||||
|
||||
- name: Generate issue metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
|
||||
HIDE_TIME_TO_ANSWER: true
|
||||
HIDE_LABEL_METRICS: false
|
||||
OUTPUT_FILE: issue_metrics.md
|
||||
|
||||
- name: Generate PR created metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
|
||||
OUTPUT_FILE: pr_created_metrics.md
|
||||
|
||||
- name: Generate PR merged metrics
|
||||
uses: github/issue-metrics@v3
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
|
||||
OUTPUT_FILE: pr_merged_metrics.md
|
||||
|
||||
- name: Debug generated metrics
|
||||
run: |
|
||||
set -Eeuo pipefail
|
||||
echo "Listing markdown files in workspace:"
|
||||
ls -la *.md || true
|
||||
for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
|
||||
if [ -f "$f" ]; then
|
||||
echo "== $f (first 10 lines) =="
|
||||
head -n 10 "$f"
|
||||
else
|
||||
echo "Missing $f"
|
||||
fi
|
||||
done
|
||||
|
||||
- name: Parse metrics
|
||||
id: metrics
|
||||
run: node .github/scripts/parse-metrics.mjs
|
||||
|
||||
- name: Send to Discord
|
||||
uses: sarisia/actions-status-discord@v1
|
||||
if: env.DISCORD_WEBHOOK != ''
|
||||
with:
|
||||
webhook: ${{ env.DISCORD_WEBHOOK }}
|
||||
status: Success
|
||||
title: "📊 Weekly Metrics Report"
|
||||
description: |
|
||||
**${{ env.week_of }}**
|
||||
*${{ env.date_range }}*
|
||||
|
||||
**🎯 Issues**
|
||||
• Created: ${{ steps.metrics.outputs.issues_created }}
|
||||
• Closed: ${{ steps.metrics.outputs.issues_closed }}
|
||||
• Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
|
||||
• Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
|
||||
|
||||
**🔀 Pull Requests**
|
||||
• Created: ${{ steps.metrics.outputs.prs_created }}
|
||||
• Merged: ${{ steps.metrics.outputs.prs_merged }}
|
||||
• Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
|
||||
• Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
|
||||
|
||||
**📈 Visual Analytics**
|
||||
https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
|
||||
color: 0x58AFFF
|
||||
username: Task Master Metrics Bot
|
||||
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png
|
||||
3
.gitignore
vendored
@@ -94,6 +94,3 @@ apps/extension/.vscode-test/

# apps/extension
apps/extension/vsix-build/

# turbo
.turbo
@@ -2,7 +2,7 @@
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",

@@ -1,6 +0,0 @@
{
"$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
"defaultBranch": "main",
"ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
"ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
}
@@ -85,7 +85,7 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",

@@ -2,8 +2,8 @@
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 64000,
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
},
"research": {
@@ -14,8 +14,8 @@
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 8192,
"temperature": 0.2
}
},
@@ -29,15 +29,9 @@
"ollamaBaseURL": "http://localhost:11434/api",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"responseLanguage": "English",
"enableCodebaseAnalysis": true,
"userId": "1234567890",
"azureBaseURL": "https://your-endpoint.azure.com/",
"defaultTag": "master"
},
"claudeCode": {},
"grokCli": {
"timeout": 120000,
"workingDirectory": null,
"defaultModel": "grok-4-latest"
}
"claudeCode": {}
}

@@ -1,188 +0,0 @@
# Task Master Migration Roadmap

## Overview
Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.

## Architecture Vision

```
┌─────────────────────────────────────────────────┐
│                 User Interfaces                 │
├──────────┬──────────┬──────────┬────────────────┤
│ @tm/cli  │ @tm/mcp  │ @tm/ext  │    @tm/web     │
│  (CLI)   │  (MCP)   │ (VSCode) │    (Future)    │
└──────────┴──────────┴──────────┴────────────────┘
                         │
                         ▼
             ┌──────────────────────┐
             │       @tm/core       │
             │   (Business Logic)   │
             └──────────────────────┘
```

## Migration Phases

### Phase 1: Core Extraction ✅ (In Progress)
**Goal**: Move all business logic to @tm/core

- [x] Create @tm/core package structure
- [x] Move types and interfaces
- [x] Implement TaskMasterCore facade
- [x] Move storage adapters
- [x] Move task services
- [ ] Move AI providers
- [ ] Move parser logic
- [ ] Complete test coverage

### Phase 2: CLI Package Creation 🚧 (Started)
**Goal**: Create @tm/cli as a thin presentation layer

- [x] Create @tm/cli package structure
- [x] Implement Command interface pattern
- [x] Create CommandRegistry
- [x] Build legacy bridge/adapter
- [x] Migrate list-tasks command
- [ ] Migrate remaining commands one by one
- [ ] Remove UI logic from core

### Phase 3: Transitional Integration
**Goal**: Use new packages in existing scripts without breaking changes

```javascript
// scripts/modules/commands.js gradually adopts new commands
import { ListTasksCommand } from '@tm/cli';
const listCommand = new ListTasksCommand();

// Old interface remains the same
programInstance
  .command('list')
  .action(async (options) => {
    // Use new command internally
    const result = await listCommand.execute(convertOptions(options));
  });
```

### Phase 4: MCP Package
**Goal**: Separate MCP server as its own package

- [ ] Create @tm/mcp package
- [ ] Move MCP server code
- [ ] Use @tm/core for all logic
- [ ] MCP becomes a thin RPC layer

### Phase 5: Complete Migration
**Goal**: Remove old scripts, pure monorepo

- [ ] All commands migrated to @tm/cli
- [ ] Remove scripts/modules/task-manager/*
- [ ] Remove scripts/modules/commands.js
- [ ] Update bin/task-master.js to use @tm/cli
- [ ] Clean up dependencies

## Current Transitional Strategy

### 1. Adapter Pattern (commands-adapter.js)
```javascript
// Checks if new CLI is available and uses it
// Falls back to legacy implementation if not
export async function listTasksAdapter(...args) {
  if (cliAvailable) {
    return useNewImplementation(...args);
  }
  return useLegacyImplementation(...args);
}
```

### 2. Command Bridge Pattern
```javascript
// Allows new commands to work in old code
const bridge = new CommandBridge(new ListTasksCommand());
const data = await bridge.run(legacyOptions); // Legacy style
const result = await bridge.execute(newOptions); // New style
```

### 3. Gradual File Migration
Instead of big-bang refactoring:
1. Create new implementation in @tm/cli
2. Add adapter in commands-adapter.js
3. Update commands.js to use adapter
4. Test both paths work
5. Eventually remove adapter when all migrated

## Benefits of This Approach

1. **No Breaking Changes**: Existing CLI continues to work
2. **Incremental PRs**: Each command can be migrated separately
3. **Parallel Development**: New features can use new architecture
4. **Easy Rollback**: Can disable new implementation if issues
5. **Clear Separation**: Business logic (core) vs presentation (cli/mcp/etc)

## Example PR Sequence

### PR 1: Core Package Setup ✅
- Create @tm/core
- Move types and interfaces
- Basic TaskMasterCore implementation

### PR 2: CLI Package Foundation ✅
- Create @tm/cli
- Command interface and registry
- Legacy bridge utilities

### PR 3: First Command Migration
- Migrate list-tasks to new system
- Add adapter in scripts
- Test both implementations

### PR 4-N: Migrate Commands One by One
- Each PR migrates 1-2 related commands
- Small, reviewable changes
- Continuous delivery

### Final PR: Cleanup
- Remove legacy implementations
- Remove adapters
- Update documentation

## Testing Strategy

### Dual Testing During Migration
```javascript
describe('List Tasks', () => {
  it('works with legacy implementation', async () => {
    // Force legacy
    const result = await legacyListTasks(...);
    expect(result).toBeDefined();
  });

  it('works with new implementation', async () => {
    // Force new
    const command = new ListTasksCommand();
    const result = await command.execute(...);
    expect(result.success).toBe(true);
  });

  it('adapter chooses correctly', async () => {
    // Let adapter decide
    const result = await listTasksAdapter(...);
    expect(result).toBeDefined();
  });
});
```

## Success Metrics

- [ ] All commands migrated without breaking changes
- [ ] Test coverage maintained or improved
- [ ] Performance maintained or improved
- [ ] Cleaner, more maintainable codebase
- [ ] Easy to add new interfaces (web, desktop, etc.)

## Notes for Contributors

1. **Keep PRs Small**: Migrate one command at a time
2. **Test Both Paths**: Ensure legacy and new both work
3. **Document Changes**: Update this roadmap as you go
4. **Communicate**: Discuss in PRs if architecture needs adjustment

This is a living document - update as the migration progresses!

@@ -1,91 +0,0 @@
|
||||
<context>
|
||||
# Overview
|
||||
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.
|
||||
|
||||
We follow the Commander class pattern and reuse task retrieval from the `show` command flow. The scope is kept extremely minimal to fit the 1-hour hackathon timeline.
|
||||
|
||||
# Core Features
|
||||
- `start` command (Commander class style)
|
||||
- Hard-coded executor: `claude-code`
|
||||
- Standardized prompt designed for minimal changes following existing patterns
|
||||
- Shows claude-code output (no streaming)
|
||||
- Git status check for success detection
|
||||
- Auto-mark task done if successful
|
||||
|
||||
# User Experience
|
||||
```
|
||||
task-master start 12
|
||||
```
|
||||
1) Fetches Task #12 details
|
||||
2) Builds standardized prompt with task context
|
||||
3) Runs claude-code with the prompt
|
||||
4) Shows output
|
||||
5) Checks git status for changes
|
||||
6) Auto-marks task done if changes detected
|
||||
</context>
|
||||
|
||||
<PRD>
|
||||
# Technical Architecture
|
||||
|
||||
- Command pattern:
|
||||
- Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) and task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)
|
||||
|
||||
- Task retrieval:
|
||||
- Use `@tm/core` via `createTaskMasterCore` to get task by ID
|
||||
- Extract: id, title, description, details
|
||||
|
||||
- Executor (ultra-simple approach):
|
||||
- Execute `claude "full prompt here"` command directly
|
||||
- The prompt tells Claude to first run `tm show <task_id>` to get task details
|
||||
- Then tells Claude to implement the code changes
|
||||
- This opens Claude CLI interface naturally in the current terminal
|
||||
- No subprocess management needed - just execute the command
|
||||
|
||||
- Execution flow:
|
||||
1) Validate `<task_id>` exists; exit with error if not
|
||||
2) Build standardized prompt that includes instructions to run `tm show <task_id>`
|
||||
3) Execute `claude "prompt"` command directly in terminal
|
||||
4) Claude CLI opens, runs `tm show`, then implements changes
|
||||
5) After Claude session ends, run `git status --porcelain` to detect changes
|
||||
6) If changes detected, auto-run `task-master set-status --id=<task_id> --status=done`
|
||||
|
||||
- Success criteria:
|
||||
- Success = exit code 0 AND git shows modified/created files
|
||||
- Print changed file paths; warn if no changes detected
|
||||
|
||||
# Development Roadmap
|
||||
|
||||
MVP (ship in ~1 hour):
|
||||
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
|
||||
2) Validate task exists via tm-core
|
||||
3) Build prompt that tells Claude to run `tm show <task_id>` then implement
|
||||
4) Execute `claude "prompt"` command, then check git status and auto-mark done
|
||||
|
||||
# Risks and Mitigations
|
||||
- Executor availability: Surface a clear error if the `claude-code` executor is unavailable or fails
|
||||
- False success: Git-change heuristic acceptable for hackathon MVP
|
||||
|
||||
# Appendix
|
||||
|
||||
**Standardized Prompt Template:**
|
||||
```
|
||||
You are an AI coding assistant with access to this repository's codebase.
|
||||
|
||||
First, run this command to get the task details:
|
||||
tm show <task_id>
|
||||
|
||||
Then implement the task with these requirements:
|
||||
- Make the SMALLEST number of code changes possible
|
||||
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
|
||||
- Do NOT over-engineer the solution
|
||||
- Use existing files/functions/patterns wherever possible
|
||||
- When complete, print: COMPLETED: <brief summary of changes>
|
||||
|
||||
Begin by running tm show <task_id> to understand what needs to be implemented.
|
||||
```
|
||||
|
||||
**Key References:**
|
||||
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
|
||||
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
|
||||
- Node.js `child_process.exec()` - For executing `claude "prompt"` command
|
||||
</PRD>
|
||||
@@ -1,8 +0,0 @@
|
||||
Simple Todo App PRD
|
||||
|
||||
Create a basic todo list application with the following features:
|
||||
1. Add new todos
|
||||
2. Mark todos as complete
|
||||
3. Delete todos
|
||||
|
||||
That's it. Keep it simple.
|
||||
@@ -1,77 +0,0 @@
|
||||
{
|
||||
"meta": {
|
||||
"generatedAt": "2025-08-06T12:39:03.250Z",
|
||||
"tasksAnalyzed": 8,
|
||||
"totalTasks": 11,
|
||||
"analysisCount": 8,
|
||||
"thresholdScore": 5,
|
||||
"projectName": "Taskmaster",
|
||||
"usedResearch": false
|
||||
},
|
||||
"complexityAnalysis": [
|
||||
{
|
||||
"taskId": 118,
|
||||
"taskTitle": "Create AI Provider Base Architecture",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
|
||||
"reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
|
||||
},
|
||||
{
|
||||
"taskId": 119,
|
||||
"taskTitle": "Implement Provider Factory with Dynamic Imports",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
|
||||
"reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
|
||||
},
|
||||
{
|
||||
"taskId": 120,
|
||||
"taskTitle": "Implement Anthropic Provider",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
|
||||
"reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
|
||||
},
|
||||
{
|
||||
"taskId": 121,
|
||||
"taskTitle": "Create Prompt Builder and Task Parser",
|
||||
"complexityScore": 8,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
|
||||
"reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
|
||||
},
|
||||
{
|
||||
"taskId": 122,
|
||||
"taskTitle": "Implement Configuration Management",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
|
||||
"reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
|
||||
},
|
||||
{
|
||||
"taskId": 123,
|
||||
"taskTitle": "Create Utility Functions and Error Handling",
|
||||
"complexityScore": 4,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
|
||||
"reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
|
||||
},
|
||||
{
|
||||
"taskId": 124,
|
||||
"taskTitle": "Implement TaskMasterCore Facade",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
|
||||
"reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
|
||||
},
|
||||
{
|
||||
"taskId": 125,
|
||||
"taskTitle": "Create Placeholder Providers and Complete Testing",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.",
|
||||
"reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,77 +0,0 @@
|
||||
{
|
||||
"meta": {
|
||||
"generatedAt": "2025-08-06T12:15:01.327Z",
|
||||
"tasksAnalyzed": 8,
|
||||
"totalTasks": 11,
|
||||
"analysisCount": 8,
|
||||
"thresholdScore": 5,
|
||||
"projectName": "Taskmaster",
|
||||
"usedResearch": false
|
||||
},
|
||||
"complexityAnalysis": [
|
||||
{
|
||||
"taskId": 118,
|
||||
"taskTitle": "Create AI Provider Base Architecture",
|
||||
"complexityScore": 4,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the conversion of base-provider.js to TypeScript BaseProvider class: 1) Convert to TypeScript and define IAIProvider interface, 2) Implement abstract class with core properties, 3) Define abstract methods and Template Method pattern, 4) Add retry logic with exponential backoff, 5) Implement validation and logging. Focus on maintaining compatibility with existing provider pattern while adding type safety.",
|
||||
"reasoning": "The codebase already has a well-established BaseAIProvider class in JavaScript. Converting to TypeScript mainly involves adding type definitions and ensuring the existing pattern is preserved. The complexity is moderate because the pattern is already proven in the codebase."
|
||||
},
|
||||
{
|
||||
"taskId": 119,
|
||||
"taskTitle": "Implement Provider Factory with Dynamic Imports",
|
||||
"complexityScore": 3,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ProviderFactory implementation: 1) Set up class structure and types, 2) Implement provider selection switch statement, 3) Add dynamic imports for tree-shaking, 4) Handle provider instantiation with config, 5) Add comprehensive error handling. The existing PROVIDERS registry pattern should guide the implementation.",
|
||||
"reasoning": "The codebase already uses a dual registry pattern (static PROVIDERS and dynamic ProviderRegistry). Creating a factory is straightforward as the provider registration patterns are well-established. Dynamic imports are already used in the codebase."
|
||||
},
|
||||
{
|
||||
"taskId": 120,
|
||||
"taskTitle": "Implement Anthropic Provider",
|
||||
"complexityScore": 3,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement AnthropicProvider following existing patterns: 1) Create class structure with imports, 2) Implement constructor and client initialization, 3) Add generateCompletion with Claude API integration, 4) Implement token calculation and utility methods, 5) Add error handling and exports. Use the existing anthropic.js provider as reference.",
|
||||
"reasoning": "AnthropicProvider already exists in the codebase with full implementation. This task essentially involves adapting the existing implementation to match the new TypeScript architecture, making it relatively straightforward."
|
||||
},
|
||||
{
|
||||
"taskId": 121,
|
||||
"taskTitle": "Create Prompt Builder and Task Parser",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Build prompt system and parser: 1) Create PromptBuilder with template methods, 2) Implement TaskParser with dependency injection, 3) Add parsePRD core logic with file reading, 4) Implement task enrichment and metadata, 5) Add comprehensive error handling. Leverage the existing prompt management system in src/prompts/.",
|
||||
"reasoning": "While the codebase has a sophisticated prompt management system, creating a new PromptBuilder and TaskParser requires understanding the existing prompt templates, JSON schema validation, and integration with the AI provider system. The task involves significant new code."
|
||||
},
|
||||
{
|
||||
"taskId": 122,
|
||||
"taskTitle": "Implement Configuration Management",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create ConfigManager with validation: 1) Define Zod schema for IConfiguration, 2) Implement constructor with defaults, 3) Add validate method with error handling, 4) Create type-safe get method with generics, 5) Implement getAll and finalize exports. Reference existing config-manager.js for patterns.",
|
||||
"reasoning": "The codebase has an existing config-manager.js with sophisticated configuration handling. Adding Zod validation and TypeScript generics adds complexity, but the existing patterns provide a solid foundation."
|
||||
},
|
||||
{
|
||||
"taskId": 123,
|
||||
"taskTitle": "Create Utility Functions and Error Handling",
|
||||
"complexityScore": 2,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement utilities and error handling: 1) Create ID generation module with unique formats, 2) Build TaskMasterError base class, 3) Add error sanitization for security, 4) Implement development-only logging, 5) Create specialized error subclasses. Keep implementation simple and focused.",
|
||||
"reasoning": "This is a straightforward utility implementation task. The codebase already has error handling patterns, and ID generation is a simple algorithmic task. The main work is creating clean, reusable utilities."
|
||||
},
|
||||
{
|
||||
"taskId": 124,
|
||||
"taskTitle": "Implement TaskMasterCore Facade",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Create main facade class: 1) Set up TaskMasterCore structure with imports, 2) Implement lazy initialization logic, 3) Add parsePRD coordination method, 4) Implement getTasks and other facade methods, 5) Create factory function and exports. This ties together all other components into a cohesive API.",
|
||||
"reasoning": "This is the most complex task as it requires understanding and integrating all other components. The facade must coordinate between configuration, providers, storage, and parsing while maintaining a clean API. It's the architectural keystone of the system."
|
||||
},
|
||||
{
|
||||
"taskId": 125,
|
||||
"taskTitle": "Create Placeholder Providers and Complete Testing",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Implement testing infrastructure: 1) Create OpenAIProvider placeholder, 2) Create GoogleProvider placeholder, 3) Build MockProvider for testing, 4) Write TaskParser unit tests, 5) Create integration tests for parse-prd flow. Follow the existing test patterns in tests/ directory.",
|
||||
"reasoning": "While creating placeholder providers is simple, the testing infrastructure requires understanding Jest with ES modules, mocking patterns, and comprehensive test coverage. The existing test structure provides good examples to follow."
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"currentTag": "master",
|
||||
"lastSwitched": "2025-09-12T22:25:27.535Z",
|
||||
"lastSwitched": "2025-08-01T14:09:25.838Z",
|
||||
"branchTagMapping": {
|
||||
"v017-adds": "v017-adds",
|
||||
"next": "next"
|
||||
|
||||
@@ -1,34 +0,0 @@
|
||||
# Task ID: 1
|
||||
# Title: Create start command class structure
|
||||
# Status: pending
|
||||
# Dependencies: None
|
||||
# Priority: high
|
||||
# Description: Create the basic structure for the start command following the Commander class pattern
|
||||
# Details:
|
||||
Create a new file `apps/cli/src/commands/start.command.ts` based on the existing list.command.ts pattern. Implement the command class with proper command registration, description, and argument handling for the task_id parameter. The class should extend the base Command class and implement the required methods.
|
||||
|
||||
Example structure:
|
||||
```typescript
|
||||
import { Command } from 'commander';
|
||||
import { BaseCommand } from './base.command';
|
||||
|
||||
export class StartCommand extends BaseCommand {
|
||||
public register(program: Command): void {
|
||||
program
|
||||
.command('start')
|
||||
// `tm start` works through the `tm` binary alias; no Commander alias is needed here
|
||||
.description('Start implementing a task using claude-code')
|
||||
.argument('<task_id>', 'ID of the task to start')
|
||||
.action(async (taskId: string) => {
|
||||
await this.execute(taskId);
|
||||
});
|
||||
}
|
||||
|
||||
public async execute(taskId: string): Promise<void> {
|
||||
// Implementation will be added in subsequent tasks
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
# Test Strategy:
|
||||
Verify the command registers correctly by running the CLI with --help and checking that the start command appears with proper description and arguments. Test the basic structure by ensuring the command can be invoked without errors.
|
||||
@@ -1,26 +0,0 @@
|
||||
# Task ID: 2
|
||||
# Title: Register start command in CLI
|
||||
# Status: pending
|
||||
# Dependencies: 7
|
||||
# Priority: high
|
||||
# Description: Register the start command in the CLI application
|
||||
# Details:
|
||||
Update the CLI application to register the new start command. This involves importing the StartCommand class and adding it to the commands array in the CLI initialization.
|
||||
|
||||
In `apps/cli/src/index.ts` or the appropriate file where commands are registered:
|
||||
|
||||
```typescript
|
||||
import { StartCommand } from './commands/start.command';
|
||||
|
||||
// Add StartCommand to the commands array
|
||||
const commands = [
|
||||
// ... existing commands
|
||||
new StartCommand(),
|
||||
];
|
||||
|
||||
// Register all commands
|
||||
commands.forEach(command => command.register(program));
|
||||
```
|
||||
|
||||
# Test Strategy:
|
||||
Verify the command is correctly registered by running the CLI with --help and checking that the start command appears in the list of available commands.
|
||||
@@ -1,32 +0,0 @@
|
||||
# Task ID: 3
|
||||
# Title: Create standardized prompt builder
|
||||
# Status: pending
|
||||
# Dependencies: 1
|
||||
# Priority: medium
|
||||
# Description: Implement a function to build the standardized prompt for claude-code based on the task details
|
||||
# Details:
|
||||
Create a function in the StartCommand class that builds the standardized prompt according to the template provided in the PRD. The prompt should include instructions for Claude to first run `tm show <task_id>` to get task details, and then implement the required changes.
|
||||
|
||||
```typescript
|
||||
private buildPrompt(taskId: string): string {
|
||||
return `You are an AI coding assistant with access to this repository's codebase.
|
||||
|
||||
First, run this command to get the task details:
|
||||
tm show ${taskId}
|
||||
|
||||
Then implement the task with these requirements:
|
||||
- Make the SMALLEST number of code changes possible
|
||||
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
|
||||
- Do NOT over-engineer the solution
|
||||
- Use existing files/functions/patterns wherever possible
|
||||
- When complete, print: COMPLETED: <brief summary of changes>
|
||||
|
||||
Begin by running tm show ${taskId} to understand what needs to be implemented.`;
|
||||
}
|
||||
```
|
||||
<info added on 2025-09-12T02:40:01.812Z>
|
||||
The prompt builder function will handle task context retrieval by instructing Claude to use the task-master show command. This approach ensures Claude has access to all necessary task details before implementation begins. The command syntax "tm show ${taskId}" embedded in the prompt will direct Claude to first gather the complete task context, including description, requirements, and any existing implementation details, before proceeding with code changes.
|
||||
</info added on 2025-09-12T02:40:01.812Z>
|
||||
|
||||
# Test Strategy:
|
||||
Verify the prompt is correctly formatted by calling the function with a sample task ID and checking that the output matches the expected template with the task ID properly inserted.
|
||||
@@ -1,36 +0,0 @@
|
||||
# Task ID: 4
|
||||
# Title: Implement claude-code executor
|
||||
# Status: pending
|
||||
# Dependencies: 3
|
||||
# Priority: high
|
||||
# Description: Add functionality to execute the claude-code command with the built prompt
|
||||
# Details:
|
||||
Implement the functionality to execute the claude command with the built prompt. This should use Node.js child_process.exec() to run the command directly in the terminal.
|
||||
|
||||
```typescript
|
||||
import { execSync } from 'child_process';
|
||||
|
||||
// Inside execute method, after task validation
|
||||
private async executeClaude(prompt: string): Promise<void> {
|
||||
console.log('Starting claude-code to implement the task...');
|
||||
|
||||
try {
|
||||
// Execute claude with the prompt
|
||||
const claudeCommand = `claude "${prompt.replace(/"/g, '\\"')}"`;
|
||||
|
||||
// execSync blocks until the interactive Claude session ends
|
||||
execSync(claudeCommand, { stdio: 'inherit' });
|
||||
|
||||
console.log('Claude session completed.');
|
||||
} catch (error) {
|
||||
console.error('Error executing claude-code:', error instanceof Error ? error.message : error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Then call this method from the execute method after building the prompt.
|
||||
|
||||
# Test Strategy:
|
||||
Test by running the command with a valid task ID and verifying that the claude command is executed with the correct prompt. Check that the command handles errors appropriately if claude-code is not available.
|
||||
@@ -1,49 +0,0 @@
|
||||
# Task ID: 7
|
||||
# Title: Integrate execution flow in start command
|
||||
# Status: pending
|
||||
# Dependencies: 3, 4
|
||||
# Priority: high
|
||||
# Description: Connect all the components to implement the complete execution flow for the start command
|
||||
# Details:
|
||||
Update the execute method in the StartCommand class to integrate all the components and implement the complete execution flow as described in the PRD:
|
||||
1. Validate task exists
|
||||
2. Build standardized prompt
|
||||
3. Execute claude-code
|
||||
4. Check git status for changes
|
||||
5. Auto-mark task as done if changes detected
|
||||
|
||||
```typescript
|
||||
public async execute(taskId: string): Promise<void> {
|
||||
// Validate task exists
|
||||
const core = await createTaskMasterCore();
|
||||
const task = await core.tasks.getById(parseInt(taskId, 10));
|
||||
|
||||
if (!task) {
|
||||
console.error(`Task with ID ${taskId} not found`);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Build prompt
|
||||
const prompt = this.buildPrompt(taskId);
|
||||
|
||||
// Execute claude-code
|
||||
await this.executeClaude(prompt);
|
||||
|
||||
// Check git status
|
||||
const changedFiles = await this.checkGitChanges();
|
||||
|
||||
if (changedFiles.length > 0) {
|
||||
console.log('\nChanges detected in the following files:');
|
||||
changedFiles.forEach(file => console.log(`- ${file}`));
|
||||
|
||||
// Auto-mark task as done
|
||||
await this.markTaskAsDone(taskId);
|
||||
console.log(`\nTask ${taskId} completed successfully and marked as done.`);
|
||||
} else {
|
||||
console.warn('\nNo changes detected after claude-code execution. Task not marked as done.');
|
||||
}
|
||||
}
|
||||
```
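
`checkGitChanges` and `markTaskAsDone` are defined in subtasks not shown in this diff. A minimal sketch of what they could look like, assuming `git` and the `task-master` CLI are on PATH (the actual subtasks may implement this differently):

```typescript
import { execSync } from 'child_process';

// Methods to add to StartCommand (shown without the surrounding class, like the snippets above).

// Parse `git status --porcelain`: each line is "XY <path>", so strip the 3-character prefix.
private async checkGitChanges(): Promise<string[]> {
  const output = execSync('git status --porcelain', { encoding: 'utf8' });
  return output
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => line.slice(3));
}

// Delegate to the existing CLI rather than re-implementing status updates.
private async markTaskAsDone(taskId: string): Promise<void> {
  execSync(`task-master set-status --id=${taskId} --status=done`, { stdio: 'inherit' });
}
```

Using `git status --porcelain` keeps the success heuristic identical to the one described in the PRD.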
|
||||
|
||||
# Test Strategy:
|
||||
Test the complete execution flow by running the start command with a valid task ID and verifying that all steps are executed correctly. Test with both scenarios: when changes are detected and when no changes are detected.
|
||||
File diff suppressed because one or more lines are too long
@@ -1,511 +0,0 @@
|
||||
<rpg-method>
|
||||
# Repository Planning Graph (RPG) Method - PRD Template
|
||||
|
||||
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
|
||||
2. **Explicit Dependencies**: Never assume - always state what depends on what
|
||||
3. **Topological Order**: Build foundation first, then layers on top
|
||||
4. **Progressive Refinement**: Start broad, refine iteratively
|
||||
|
||||
## How to Use This Template
|
||||
|
||||
- Follow the instructions in each `<instruction>` block
|
||||
- Look at `<example>` blocks to see good vs bad patterns
|
||||
- Fill in the content sections with your project details
|
||||
- The AI reading this will learn the RPG method by following along
|
||||
- Task Master will parse the resulting PRD into dependency-aware tasks
|
||||
|
||||
## Recommended Tools for Creating PRDs
|
||||
|
||||
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
|
||||
|
||||
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
|
||||
|
||||
**Recommended tools:**
|
||||
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
|
||||
- **Cursor/Windsurf** - IDE integration with full codebase context
|
||||
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
|
||||
- **Codex/Grok CLI** - Strong code generation with context awareness
|
||||
|
||||
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
|
||||
</rpg-method>
|
||||
|
||||
---
|
||||
|
||||
<overview>
|
||||
<instruction>
|
||||
Start with the problem, not the solution. Be specific about:
|
||||
- What pain point exists?
|
||||
- Who experiences it?
|
||||
- Why existing solutions don't work?
|
||||
- What success looks like (measurable outcomes)?
|
||||
|
||||
Keep this section focused - don't jump into implementation details yet.
|
||||
</instruction>
|
||||
|
||||
## Problem Statement
|
||||
[Describe the core problem. Be concrete about user pain points.]
|
||||
|
||||
## Target Users
|
||||
[Define personas, their workflows, and what they're trying to achieve.]
|
||||
|
||||
## Success Metrics
|
||||
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
|
||||
|
||||
</overview>
|
||||
|
||||
---
|
||||
|
||||
<functional-decomposition>
|
||||
<instruction>
|
||||
Now think about CAPABILITIES (what the system DOES), not code structure yet.
|
||||
|
||||
Step 1: Identify high-level capability domains
|
||||
- Think: "What major things does this system do?"
|
||||
- Examples: Data Management, Core Processing, Presentation Layer
|
||||
|
||||
Step 2: For each capability, enumerate specific features
|
||||
- Use explore-exploit strategy:
|
||||
* Exploit: What features are REQUIRED for core value?
|
||||
* Explore: What features make this domain COMPLETE?
|
||||
|
||||
Step 3: For each feature, define:
|
||||
- Description: What it does in one sentence
|
||||
- Inputs: What data/context it needs
|
||||
- Outputs: What it produces/returns
|
||||
- Behavior: Key logic or transformations
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
Feature: Schema validation
|
||||
- Description: Validate JSON payloads against defined schemas
|
||||
- Inputs: JSON object, schema definition
|
||||
- Outputs: Validation result (pass/fail) + error details
|
||||
- Behavior: Iterate fields, check types, enforce constraints
|
||||
|
||||
Feature: Business rule validation
|
||||
- Description: Apply domain-specific validation rules
|
||||
- Inputs: Validated data object, rule set
|
||||
- Outputs: Boolean + list of violated rules
|
||||
- Behavior: Execute rules sequentially, short-circuit on failure
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: validation.js
|
||||
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
|
||||
|
||||
Capability: Validation
|
||||
Feature: Make sure data is good
|
||||
(Problem: Too vague. No inputs/outputs. Not actionable.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Capability Tree
|
||||
|
||||
### Capability: [Name]
|
||||
[Brief description of what this capability domain covers]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**: [One sentence]
|
||||
- **Inputs**: [What it needs]
|
||||
- **Outputs**: [What it produces]
|
||||
- **Behavior**: [Key logic]
|
||||
|
||||
#### Feature: [Name]
|
||||
- **Description**:
|
||||
- **Inputs**:
|
||||
- **Outputs**:
|
||||
- **Behavior**:
|
||||
|
||||
### Capability: [Name]
|
||||
...
|
||||
|
||||
</functional-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<structural-decomposition>
|
||||
<instruction>
|
||||
NOW think about code organization. Map capabilities to actual file/folder structure.
|
||||
|
||||
Rules:
|
||||
1. Each capability maps to a module (folder or file)
|
||||
2. Features within a capability map to functions/classes
|
||||
3. Use clear module boundaries - each module has ONE responsibility
|
||||
4. Define what each module exports (public interface)
|
||||
|
||||
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
|
||||
|
||||
<example type="good">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/
|
||||
├── schema-validator.js (Schema validation feature)
|
||||
├── rule-validator.js (Business rule validation feature)
|
||||
└── index.js (Public exports)
|
||||
|
||||
Exports:
|
||||
- validateSchema(data, schema)
|
||||
- validateRules(data, rules)
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/utils.js
|
||||
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
|
||||
|
||||
Capability: Data Validation
|
||||
→ Maps to: src/validation/everything.js
|
||||
(Problem: One giant file. Features should map to separate files for maintainability.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Repository Structure
|
||||
|
||||
```
|
||||
project-root/
|
||||
├── src/
|
||||
│ ├── [module-name]/ # Maps to: [Capability Name]
|
||||
│ │ ├── [file].js # Maps to: [Feature Name]
|
||||
│ │ └── index.js # Public exports
|
||||
│ └── [module-name]/
|
||||
├── tests/
|
||||
└── docs/
|
||||
```
|
||||
|
||||
## Module Definitions
|
||||
|
||||
### Module: [Name]
|
||||
- **Maps to capability**: [Capability from functional decomposition]
|
||||
- **Responsibility**: [Single clear purpose]
|
||||
- **File structure**:
|
||||
```
|
||||
module-name/
|
||||
├── feature1.js
|
||||
├── feature2.js
|
||||
└── index.js
|
||||
```
|
||||
- **Exports**:
|
||||
- `functionName()` - [what it does]
|
||||
- `ClassName` - [what it does]
|
||||
|
||||
</structural-decomposition>
|
||||
|
||||
---
|
||||
|
||||
<dependency-graph>
|
||||
<instruction>
|
||||
This is THE CRITICAL SECTION for Task Master parsing.
|
||||
|
||||
Define explicit dependencies between modules. This creates the topological order for task execution.
|
||||
|
||||
Rules:
|
||||
1. List modules in dependency order (foundation first)
|
||||
2. For each module, state what it depends on
|
||||
3. Foundation modules should have NO dependencies
|
||||
4. Every non-foundation module should depend on at least one other module
|
||||
5. Think: "What must EXIST before I can build this module?"
|
||||
|
||||
<example type="good">
|
||||
Foundation Layer (no dependencies):
|
||||
- error-handling: No dependencies
|
||||
- config-manager: No dependencies
|
||||
- base-types: No dependencies
|
||||
|
||||
Data Layer:
|
||||
- schema-validator: Depends on [base-types, error-handling]
|
||||
- data-ingestion: Depends on [schema-validator, config-manager]
|
||||
|
||||
Core Layer:
|
||||
- algorithm-engine: Depends on [base-types, error-handling]
|
||||
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
- validation: Depends on API
|
||||
- API: Depends on validation
|
||||
(Problem: Circular dependency. This will cause build/runtime issues.)
|
||||
|
||||
- user-auth: Depends on everything
|
||||
(Problem: Too many dependencies. Should be more focused.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Dependency Chain
|
||||
|
||||
### Foundation Layer (Phase 0)
|
||||
No dependencies - these are built first.
|
||||
|
||||
- **[Module Name]**: [What it provides]
|
||||
- **[Module Name]**: [What it provides]
|
||||
|
||||
### [Layer Name] (Phase 1)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
|
||||
- **[Module Name]**: Depends on [[module-from-phase-0]]
|
||||
|
||||
### [Layer Name] (Phase 2)
|
||||
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
|
||||
|
||||
[Continue building up layers...]
|
||||
|
||||
</dependency-graph>
|
||||
|
||||
---
|
||||
|
||||
<implementation-roadmap>
|
||||
<instruction>
|
||||
Turn the dependency graph into concrete development phases.
|
||||
|
||||
Each phase should:
|
||||
1. Have clear entry criteria (what must exist before starting)
|
||||
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
|
||||
3. Have clear exit criteria (how do we know phase is complete?)
|
||||
4. Build toward something USABLE (not just infrastructure)
|
||||
|
||||
Phase ordering follows topological sort of dependency graph.
|
||||
|
||||
<example type="good">
|
||||
Phase 0: Foundation
|
||||
Entry: Clean repository
|
||||
Tasks:
|
||||
- Implement error handling utilities
|
||||
- Create base type definitions
|
||||
- Setup configuration system
|
||||
Exit: Other modules can import foundation without errors
|
||||
|
||||
Phase 1: Data Layer
|
||||
Entry: Phase 0 complete
|
||||
Tasks:
|
||||
- Implement schema validator (uses: base types, error handling)
|
||||
- Build data ingestion pipeline (uses: validator, config)
|
||||
Exit: End-to-end data flow from input to validated output
|
||||
</example>
|
||||
|
||||
<example type="bad">
|
||||
Phase 1: Build Everything
|
||||
Tasks:
|
||||
- API
|
||||
- Database
|
||||
- UI
|
||||
- Tests
|
||||
(Problem: No clear focus. Too broad. Dependencies not considered.)
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Development Phases
|
||||
|
||||
### Phase 0: [Foundation Name]
|
||||
**Goal**: [What foundational capability this establishes]
|
||||
|
||||
**Entry Criteria**: [What must be true before starting]
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
- Acceptance criteria: [How we know it's done]
|
||||
- Test strategy: [What tests prove it works]
|
||||
|
||||
- [ ] [Task name] (depends on: [none or list])
|
||||
|
||||
**Exit Criteria**: [Observable outcome that proves phase complete]
|
||||
|
||||
**Delivers**: [What can users/developers do after this phase?]
|
||||
|
||||
---
|
||||
|
||||
### Phase 1: [Layer Name]
|
||||
**Goal**:
|
||||
|
||||
**Entry Criteria**: Phase 0 complete
|
||||
|
||||
**Tasks**:
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
|
||||
|
||||
**Exit Criteria**:
|
||||
|
||||
**Delivers**:
|
||||
|
||||
---
|
||||
|
||||
[Continue with more phases...]
|
||||
|
||||
</implementation-roadmap>
|
||||
|
||||
---
|
||||
|
||||
<test-strategy>
|
||||
<instruction>
|
||||
Define how testing will be integrated throughout development (TDD approach).
|
||||
|
||||
Specify:
|
||||
1. Test pyramid ratios (unit vs integration vs e2e)
|
||||
2. Coverage requirements
|
||||
3. Critical test scenarios
|
||||
4. Test generation guidelines for Surgical Test Generator
|
||||
|
||||
This section guides the AI when generating tests during the RED phase of TDD.
|
||||
|
||||
<example type="good">
|
||||
Critical Test Scenarios for Data Validation module:
|
||||
- Happy path: Valid data passes all checks
|
||||
- Edge cases: Empty strings, null values, boundary numbers
|
||||
- Error cases: Invalid types, missing required fields
|
||||
- Integration: Validator works with ingestion pipeline
|
||||
</example>
|
||||
</instruction>
|
||||
|
||||
## Test Pyramid
|
||||
|
||||
```
|
||||
/\
|
||||
/E2E\ ← [X]% (End-to-end, slow, comprehensive)
|
||||
/------\
|
||||
/Integration\ ← [Y]% (Module interactions)
|
||||
/------------\
|
||||
/ Unit Tests \ ← [Z]% (Fast, isolated, deterministic)
|
||||
/----------------\
|
||||
```
|
||||
|
||||
## Coverage Requirements
|
||||
- Line coverage: [X]% minimum
|
||||
- Branch coverage: [X]% minimum
|
||||
- Function coverage: [X]% minimum
|
||||
- Statement coverage: [X]% minimum
|
||||
|
||||
## Critical Test Scenarios
|
||||
|
||||
### [Module/Feature Name]
|
||||
**Happy path**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Edge cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [What should happen]
|
||||
|
||||
**Error cases**:
|
||||
- [Scenario description]
|
||||
- Expected: [How system handles failure]
|
||||
|
||||
**Integration points**:
|
||||
- [What interactions to test]
|
||||
- Expected: [End-to-end behavior]
|
||||
|
||||
## Test Generation Guidelines
|
||||
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
|
||||
|
||||
</test-strategy>
|
||||
|
||||
---
|
||||
|
||||
<architecture>
|
||||
<instruction>
|
||||
Describe technical architecture, data models, and key design decisions.
|
||||
|
||||
Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
|
||||
</instruction>
|
||||
|
||||
## System Components
|
||||
[Major architectural pieces and their responsibilities]
|
||||
|
||||
## Data Models
|
||||
[Core data structures, schemas, database design]
|
||||
|
||||
## Technology Stack
|
||||
[Languages, frameworks, key libraries]
|
||||
|
||||
**Decision: [Technology/Pattern]**
|
||||
- **Rationale**: [Why chosen]
|
||||
- **Trade-offs**: [What we're giving up]
|
||||
- **Alternatives considered**: [What else we looked at]
|
||||
|
||||
</architecture>
|
||||
|
||||
---
|
||||
|
||||
<risks>
|
||||
<instruction>
|
||||
Identify risks that could derail development and how to mitigate them.
|
||||
|
||||
Categories:
|
||||
- Technical risks (complexity, unknowns)
|
||||
- Dependency risks (blocking issues)
|
||||
- Scope risks (creep, underestimation)
|
||||
</instruction>
|
||||
|
||||
## Technical Risks
|
||||
**Risk**: [Description]
|
||||
- **Impact**: [High/Medium/Low - effect on project]
|
||||
- **Likelihood**: [High/Medium/Low]
|
||||
- **Mitigation**: [How to address]
|
||||
- **Fallback**: [Plan B if mitigation fails]
|
||||
|
||||
## Dependency Risks
|
||||
[External dependencies, blocking issues]
|
||||
|
||||
## Scope Risks
|
||||
[Scope creep, underestimation, unclear requirements]
|
||||
|
||||
</risks>
|
||||
|
||||
---
|
||||
|
||||
<appendix>
|
||||
## References
|
||||
[Papers, documentation, similar systems]
|
||||
|
||||
## Glossary
|
||||
[Domain-specific terms]
|
||||
|
||||
## Open Questions
|
||||
[Things to resolve during development]
|
||||
</appendix>
|
||||
|
||||
---
|
||||
|
||||
<task-master-integration>
|
||||
# How Task Master Uses This PRD
|
||||
|
||||
When you run `task-master parse-prd <file>.txt`, the parser:
|
||||
|
||||
1. **Extracts capabilities** → Main tasks
|
||||
- Each `### Capability:` becomes a top-level task
|
||||
|
||||
2. **Extracts features** → Subtasks
|
||||
- Each `#### Feature:` becomes a subtask under its capability
|
||||
|
||||
3. **Parses dependencies** → Task dependencies
|
||||
- `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
|
||||
|
||||
4. **Orders by phases** → Task priorities
|
||||
- Phase 0 tasks = highest priority
|
||||
- Phase N tasks = lower priority, properly sequenced
|
||||
|
||||
5. **Uses test strategy** → Test generation context
|
||||
- Feeds test scenarios to Surgical Test Generator during implementation
|
||||
|
||||
**Result**: A dependency-aware task graph that can be executed in topological order.
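
As a rough illustration of what "topological order" means for the parsed tasks (this is not Task Master's actual implementation):

```typescript
// Illustrative only: Kahn's algorithm over parsed tasks and their dependency ids.
interface ParsedTask {
  id: string;
  dependencies: string[];
}

function topologicalOrder(tasks: ParsedTask[]): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>(); // dependency id -> ids that wait on it

  for (const task of tasks) {
    indegree.set(task.id, task.dependencies.length);
    for (const dep of task.dependencies) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), task.id]);
    }
  }

  // Start with every task whose dependencies are already satisfied.
  const queue = tasks.filter((t) => t.dependencies.length === 0).map((t) => t.id);
  const order: string[] = [];

  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const next of dependents.get(id) ?? []) {
      const remaining = (indegree.get(next) ?? 0) - 1;
      indegree.set(next, remaining);
      if (remaining === 0) queue.push(next);
    }
  }

  // Anything missing from `order` is caught in a dependency cycle.
  return order;
}
```

Tasks whose dependencies are all satisfied surface first, which is the ordering the phased roadmap in this template is meant to produce.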
|
||||
|
||||
## Why RPG Structure Matters
|
||||
|
||||
Traditional flat PRDs lead to:
|
||||
- ❌ Unclear task dependencies
|
||||
- ❌ Arbitrary task ordering
|
||||
- ❌ Circular dependencies discovered late
|
||||
- ❌ Poorly scoped tasks
|
||||
|
||||
RPG-structured PRDs provide:
|
||||
- ✅ Explicit dependency chains
|
||||
- ✅ Topological execution order
|
||||
- ✅ Clear module boundaries
|
||||
- ✅ Validated task graph before implementation
|
||||
|
||||
## Tips for Best Results
|
||||
|
||||
1. **Spend time on dependency graph** - This is the most valuable section for Task Master
|
||||
2. **Keep features atomic** - Each feature should be independently testable
|
||||
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
|
||||
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
|
||||
</task-master-integration>
|
||||
15
.vscode/settings.json
vendored
@@ -10,18 +10,5 @@
|
||||
},
|
||||
|
||||
"json.format.enable": true,
|
||||
"json.validate.enable": true,
|
||||
"typescript.tsdk": "node_modules/typescript/lib",
|
||||
"[typescript]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
},
|
||||
"[typescriptreact]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
},
|
||||
"[javascript]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
},
|
||||
"[json]": {
|
||||
"editor.defaultFormatter": "biomejs.biome"
|
||||
}
|
||||
"json.validate.enable": true
|
||||
}
|
||||
|
||||
569
CHANGELOG.md
@@ -1,574 +1,5 @@
|
||||
# task-master-ai
|
||||
|
||||
## 0.29.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
|
||||
|
||||
When the CLI auto-updates to a new version, it now displays a "What's New" section.
|
||||
|
||||
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
|
||||
|
||||
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
|
||||
|
||||
## 🎉 New: Claude Code Plugin
|
||||
|
||||
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
|
||||
- **49 slash commands** with clean naming (`/task-master-ai:command-name`)
|
||||
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
|
||||
- **MCP server integration** for deep Claude Code integration
|
||||
|
||||
**Installation:**
|
||||
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
### The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now
|
||||
- Shows plugin installation instructions
|
||||
- Only manages CLAUDE.md imports for agent instructions
|
||||
- Directs users to install the official plugin
|
||||
|
||||
**Migration for Existing Users:**
|
||||
|
||||
If you previously used `rules add claude`:
|
||||
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
|
||||
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
|
||||
3. Remove old `.claude/commands/` and `.claude/agents/` directories
|
||||
|
||||
**Why This Change?**
|
||||
|
||||
Claude Code plugins provide:
|
||||
- ✅ Automatic updates when we release new features
|
||||
- ✅ Better command organization and naming
|
||||
- ✅ Seamless integration with Claude Code
|
||||
- ✅ No manual file copying or management
|
||||
|
||||
The plugin system is the future of Task Master AI integration with Claude Code!
|
||||
|
||||
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
|
||||
|
||||
Key features:
|
||||
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
|
||||
- Inline instructions at decision points guide AI through each section
|
||||
- Good/bad examples for immediate pattern matching
|
||||
- Flexible plain-text format with XML-style tags for parseability
|
||||
- Critical dependency-graph section ensures correct task ordering
|
||||
- Automatic inclusion during `task-master init`
|
||||
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
|
||||
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
|
||||
|
||||
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
|
||||
|
||||
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
|
||||
|
||||
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
|
||||
|
||||
Key improvements:
|
||||
- Automatic integration with complexity analysis reports
|
||||
- Tag-aware complexity report path resolution
|
||||
- Intelligent subtask count determination based on task complexity
|
||||
- Falls back to defaults when complexity analysis is unavailable
|
||||
- Enhanced logging for better visibility into expansion decisions
|
||||
|
||||
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
|
||||
|
||||
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
|
||||
|
||||
## 0.28.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1273](https://github.com/eyaltoledano/claude-task-master/pull/1273) [`b43b7ce`](https://github.com/eyaltoledano/claude-task-master/commit/b43b7ce201625eee956fb2f8cd332f238bb78c21) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add Codex CLI provider with OAuth authentication
|
||||
- Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
|
||||
- OAuth-first authentication via `codex login` - no API key required
|
||||
- Optional OPENAI_CODEX_API_KEY support
|
||||
- Codebase analysis capabilities automatically enabled
|
||||
- Command-specific settings and approval/sandbox modes
|
||||
|
||||
- [#1215](https://github.com/eyaltoledano/claude-task-master/pull/1215) [`0079b7d`](https://github.com/eyaltoledano/claude-task-master/commit/0079b7defdad550811f704c470fdd01955d91d4d) Thanks [@joedanz](https://github.com/joedanz)! - Add Cursor IDE custom slash command support
|
||||
|
||||
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Added api keys page on docs website: docs.task-master.dev/getting-started/api-keys
|
||||
|
||||
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Move to AI SDK v5:
|
||||
- Works better with claude-code and gemini-cli as ai providers
|
||||
- Improved openai model family compatibility
|
||||
- Migrate ollama provider to v2
|
||||
- Closes #1223, #1013, #1161, #1174
|
||||
|
||||
- [#1262](https://github.com/eyaltoledano/claude-task-master/pull/1262) [`738ec51`](https://github.com/eyaltoledano/claude-task-master/commit/738ec51c049a295a12839b2dfddaf05e23b8fede) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Migrate AI services to use generateObject for structured data generation

This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.

### Key Changes:
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats

### Technical Improvements:
- Centralized provider configuration in `ai-providers-unified.js`
- Added `generateObject` support detection for each provider
- Implemented proper error handling for schema validation failures
- Maintained backward compatibility with existing prompt structures

### Bug Fixes:
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
- Enhanced prompt instructions to enforce proper ID generation patterns
- Ensured subtasks display correctly as X.1, X.2, X.3 format

This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
- [#1112](https://github.com/eyaltoledano/claude-task-master/pull/1112) [`d67b81d`](https://github.com/eyaltoledano/claude-task-master/commit/d67b81d25ddd927fabb6f5deb368e8993519c541) Thanks [@olssonsten](https://github.com/olssonsten)! - Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.

**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles

**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.

- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`986ac11`](https://github.com/eyaltoledano/claude-task-master/commit/986ac117aee00bcd3e6830a0f76e1ad6d10e0bca) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Upgrade grok-cli AI provider to AI SDK v5
### Patch Changes

- [#1235](https://github.com/eyaltoledano/claude-task-master/pull/1235) [`aaacc3d`](https://github.com/eyaltoledano/claude-task-master/commit/aaacc3dae36247b4de72b2d2697f49e5df6d01e3) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve `analyze-complexity` CLI docs and `--research` flag documentation

- [#1251](https://github.com/eyaltoledano/claude-task-master/pull/1251) [`0b2c696`](https://github.com/eyaltoledano/claude-task-master/commit/0b2c6967c4605c33a100cff16f6ce8ff09ad06f0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Change parent task back to "pending" when all subtasks are in "pending" state

- [#1274](https://github.com/eyaltoledano/claude-task-master/pull/1274) [`4f984f8`](https://github.com/eyaltoledano/claude-task-master/commit/4f984f8a6965da9f9c7edd60ddfd6560ac022917) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Do a quick fix on build

- [#1277](https://github.com/eyaltoledano/claude-task-master/pull/1277) [`7b5a7c4`](https://github.com/eyaltoledano/claude-task-master/commit/7b5a7c4495a68b782f7407fc5d0e0d3ae81f42f5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.

- [#1276](https://github.com/eyaltoledano/claude-task-master/pull/1276) [`caee040`](https://github.com/eyaltoledano/claude-task-master/commit/caee040907f856d31a660171c9e6d966f23c632e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.

- [#1172](https://github.com/eyaltoledano/claude-task-master/pull/1172) [`b5fe723`](https://github.com/eyaltoledano/claude-task-master/commit/b5fe723f8ead928e9f2dbde13b833ee70ac3382d) Thanks [@jujax](https://github.com/jujax)! - Fix Claude Code settings validation for pathToClaudeCodeExecutable

- [#1192](https://github.com/eyaltoledano/claude-task-master/pull/1192) [`2b69936`](https://github.com/eyaltoledano/claude-task-master/commit/2b69936ee7b34346d6de5175af20e077359e2e2a) Thanks [@nukunga](https://github.com/nukunga)! - Fix failing Sonar deep research model; the model ID should be `sonar-deep-research`

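
A quick sketch of selecting the corrected model ID, using the `--set-research` flag shown elsewhere in this changelog:

```bash
# Use Perplexity's deep research model for research operations
task-master models --set-research sonar-deep-research
```
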
- [#1270](https://github.com/eyaltoledano/claude-task-master/pull/1270) [`20004a3`](https://github.com/eyaltoledano/claude-task-master/commit/20004a39ea848f747e1ff48981bfe176554e4055) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix complexity score not showing for `task-master show` and `task-master list`
- Added complexity score to the "next task" display when running `task-master list`
- Added color coding to the complexity score to reflect difficulty (easy, medium, hard)

## 0.28.0-rc.2

### Minor Changes

- [#1273](https://github.com/eyaltoledano/claude-task-master/pull/1273) [`b43b7ce`](https://github.com/eyaltoledano/claude-task-master/commit/b43b7ce201625eee956fb2f8cd332f238bb78c21) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add Codex CLI provider with OAuth authentication
- Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
- OAuth-first authentication via `codex login` - no API key required
- Optional OPENAI_CODEX_API_KEY support
- Codebase analysis capabilities automatically enabled
- Command-specific settings and approval/sandbox modes

### Patch Changes

- [#1277](https://github.com/eyaltoledano/claude-task-master/pull/1277) [`7b5a7c4`](https://github.com/eyaltoledano/claude-task-master/commit/7b5a7c4495a68b782f7407fc5d0e0d3ae81f42f5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.

- [#1276](https://github.com/eyaltoledano/claude-task-master/pull/1276) [`caee040`](https://github.com/eyaltoledano/claude-task-master/commit/caee040907f856d31a660171c9e6d966f23c632e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.

## 0.28.0-rc.1

### Patch Changes

- [#1274](https://github.com/eyaltoledano/claude-task-master/pull/1274) [`4f984f8`](https://github.com/eyaltoledano/claude-task-master/commit/4f984f8a6965da9f9c7edd60ddfd6560ac022917) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Do a quick fix on build

## 0.28.0-rc.0
### Minor Changes

- [#1215](https://github.com/eyaltoledano/claude-task-master/pull/1215) [`0079b7d`](https://github.com/eyaltoledano/claude-task-master/commit/0079b7defdad550811f704c470fdd01955d91d4d) Thanks [@joedanz](https://github.com/joedanz)! - Add Cursor IDE custom slash command support

Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.

- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Added API keys page to the docs website: docs.task-master.dev/getting-started/api-keys

- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Move to AI SDK v5:
- Works better with claude-code and gemini-cli as AI providers
- Improved OpenAI model family compatibility
- Migrate ollama provider to v2
- Closes #1223, #1013, #1161, #1174

- [#1262](https://github.com/eyaltoledano/claude-task-master/pull/1262) [`738ec51`](https://github.com/eyaltoledano/claude-task-master/commit/738ec51c049a295a12839b2dfddaf05e23b8fede) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Migrate AI services to use generateObject for structured data generation

This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.

### Key Changes:
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats

### Technical Improvements:
- Centralized provider configuration in `ai-providers-unified.js`
- Added `generateObject` support detection for each provider
- Implemented proper error handling for schema validation failures
- Maintained backward compatibility with existing prompt structures

### Bug Fixes:
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
- Enhanced prompt instructions to enforce proper ID generation patterns
- Ensured subtasks display correctly as X.1, X.2, X.3 format

This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.

- [#1112](https://github.com/eyaltoledano/claude-task-master/pull/1112) [`d67b81d`](https://github.com/eyaltoledano/claude-task-master/commit/d67b81d25ddd927fabb6f5deb368e8993519c541) Thanks [@olssonsten](https://github.com/olssonsten)! - Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.

**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles

**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.

- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`986ac11`](https://github.com/eyaltoledano/claude-task-master/commit/986ac117aee00bcd3e6830a0f76e1ad6d10e0bca) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Upgrade grok-cli AI provider to AI SDK v5

### Patch Changes

- [#1235](https://github.com/eyaltoledano/claude-task-master/pull/1235) [`aaacc3d`](https://github.com/eyaltoledano/claude-task-master/commit/aaacc3dae36247b4de72b2d2697f49e5df6d01e3) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve `analyze-complexity` CLI docs and `--research` flag documentation

- [#1251](https://github.com/eyaltoledano/claude-task-master/pull/1251) [`0b2c696`](https://github.com/eyaltoledano/claude-task-master/commit/0b2c6967c4605c33a100cff16f6ce8ff09ad06f0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Change parent task back to "pending" when all subtasks are in "pending" state

- [#1172](https://github.com/eyaltoledano/claude-task-master/pull/1172) [`b5fe723`](https://github.com/eyaltoledano/claude-task-master/commit/b5fe723f8ead928e9f2dbde13b833ee70ac3382d) Thanks [@jujax](https://github.com/jujax)! - Fix Claude Code settings validation for pathToClaudeCodeExecutable

- [#1192](https://github.com/eyaltoledano/claude-task-master/pull/1192) [`2b69936`](https://github.com/eyaltoledano/claude-task-master/commit/2b69936ee7b34346d6de5175af20e077359e2e2a) Thanks [@nukunga](https://github.com/nukunga)! - Fix failing Sonar deep research model; the model ID should be `sonar-deep-research`

- [#1270](https://github.com/eyaltoledano/claude-task-master/pull/1270) [`20004a3`](https://github.com/eyaltoledano/claude-task-master/commit/20004a39ea848f747e1ff48981bfe176554e4055) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix complexity score not showing for `task-master show` and `task-master list`
- Added complexity score to the "next task" display when running `task-master list`
- Added color coding to the complexity score to reflect difficulty (easy, medium, hard)

## 0.27.3

### Patch Changes

- [#1254](https://github.com/eyaltoledano/claude-task-master/pull/1254) [`af53525`](https://github.com/eyaltoledano/claude-task-master/commit/af53525cbc660a595b67d4bb90d906911c71f45d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed issue where `tm show` command could not find subtasks using dotted notation IDs (e.g., '8.1').
- The command now properly searches within parent task subtasks and returns the correct subtask information.

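
For instance, the lookup from the entry above now resolves; this sketch assumes the ID can be passed positionally:

```bash
# Show a subtask by its dotted ID
tm show 8.1
```
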
## 0.27.2

### Patch Changes

- [#1248](https://github.com/eyaltoledano/claude-task-master/pull/1248) [`044a7bf`](https://github.com/eyaltoledano/claude-task-master/commit/044a7bfc98049298177bc655cf341d7a8b6a0011) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix set-status for subtasks:
- Parent tasks are now set as `done` when subtasks are all `done`
- Parent tasks are now set as `in-progress` when at least one subtask is `in-progress` or `done`

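
A sketch of the new rollup behaviour; the task IDs are illustrative:

```bash
# Mark both subtasks of task 5 as done; the parent rolls up to "done"
task-master set-status --id=5.1 --status=done
task-master set-status --id=5.2 --status=done
```
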
## 0.27.1

### Patch Changes

- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release

- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`c911608`](https://github.com/eyaltoledano/claude-task-master/commit/c911608f60454253f4e024b57ca84e5a5a53f65c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix Zed MCP configuration by adding required "source" property
- Add "source": "custom" property to task-master-ai server in Zed settings.json

## 0.27.1-rc.1

### Patch Changes

- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - One last testing final final

## 0.27.1-rc.0

### Patch Changes

- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release

## 0.27.0

### Minor Changes

- [#1220](https://github.com/eyaltoledano/claude-task-master/pull/1220) [`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - No longer need `--package=task-master-ai` in the MCP server config
- Many users were having issues with Taskmaster, and the usual fix was to remove `--package` from `mcp.json`
- We now bundle the whole package, so the `--package` flag is no longer needed (see the sketch below)

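
A hedged before/after sketch of the MCP server command; the exact `mcp.json` entry varies by editor, and the surrounding `npx` arguments here are assumptions:

```bash
# Before: launching the MCP server with an explicit package flag
npx -y --package=task-master-ai task-master-ai

# After: the bundled package can be launched directly
npx -y task-master-ai
```
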
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `task-master start` command for automated task execution with Claude Code
- You can now start working on tasks directly by running `task-master start <task-id>`, which automatically launches Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
- `task-master start` will automatically detect the next task when no ID is provided (see the example below).

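
A quick sketch of both invocation styles (the task ID is illustrative):

```bash
# Start a specific task in Claude Code
task-master start 12

# Or let Task Master pick the next available task automatically
task-master start
```
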
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Move from JavaScript to TypeScript. This is not a full refactor, but we now have a TypeScript environment and are gradually migrating our JavaScript commands to TypeScript.

- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.

## Setup Instructions
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
2. **Set the environment variable**:
```bash
export GROK_CLI_API_KEY="your-api-key-here"
```
3. **Configure Task Master to use Grok**:
```bash
task-master models --set-main grok-beta
# or
task-master models --set-research grok-beta
# or
task-master models --set-fallback grok-beta
```

## Key Features
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure

## Available Models
- `grok-beta` - Latest Grok model with codebase context
- `grok-vision-beta` - Grok with vision capabilities and codebase context

The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.

## Credits

Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.

- [#1225](https://github.com/eyaltoledano/claude-task-master/pull/1225) [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve Taskmaster AI provider defaults
- Main model moves from Anthropic Claude 3.7 to Claude Sonnet 4
- Fallback model moves from Claude 3.5 to Claude 3.7

- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command

- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.

## 0.27.0-rc.2

### Minor Changes

- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command

## 0.27.0-rc.1

### Minor Changes

- [`255b9f0`](https://github.com/eyaltoledano/claude-task-master/commit/255b9f0334555b0063280abde701445cd62fa11b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Testing one more pre-release iteration

## 0.27.0-rc.0

### Minor Changes

- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Test out the RC

### Patch Changes

- Updated dependencies [[`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee)]:
- @tm/cli@0.27.0-rc.0

## 0.26.0

### Minor Changes

- [#1133](https://github.com/eyaltoledano/claude-task-master/pull/1133) [`df26c65`](https://github.com/eyaltoledano/claude-task-master/commit/df26c65632000874a73504963b08f18c46283144) Thanks [@neonwatty](https://github.com/neonwatty)! - Restore Taskmaster claude-code commands and move clear commands under /remove to avoid collision with the claude-code /clear command.

- [#1163](https://github.com/eyaltoledano/claude-task-master/pull/1163) [`37af0f1`](https://github.com/eyaltoledano/claude-task-master/commit/37af0f191227a68d119b7f89a377bf932ee3ac66) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Gemini CLI provider with codebase-aware task generation

Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands.
When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks.
Tasks and subtasks generated through the Gemini CLI provider are now informed by actual codebase analysis, resulting in more accurate and contextual outputs.

- [#1165](https://github.com/eyaltoledano/claude-task-master/pull/1165) [`c4f92f6`](https://github.com/eyaltoledano/claude-task-master/commit/c4f92f6a0aee3435c56eb8d27d9aa9204284833e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add configurable codebase analysis feature flag with multiple configuration sources

Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.

Priority order: .env > MCP session env > .taskmaster/config.json.

Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.

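
For example, to opt out of codebase analysis as described above, add the flag to your project's `.env`:

```bash
# .env (highest priority in the order above)
TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false
```
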
- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - feat(move): improve cross-tag move UX and safety
- CLI: print "Next Steps" tips after cross-tag moves that used --ignore-dependencies (validate/fix guidance)
- CLI: show dedicated help block on ID collisions (destination tag already has the ID)
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions

***

- [#1162](https://github.com/eyaltoledano/claude-task-master/pull/1162) [`4dad2fd`](https://github.com/eyaltoledano/claude-task-master/commit/4dad2fd613ceac56a65ae9d3c1c03092b8860ac9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations

When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.

Commands contextualised:
- add-task
- update-subtask
- update-task
- update
### Patch Changes

- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples

## 0.26.0-rc.1

### Minor Changes

- [#1165](https://github.com/eyaltoledano/claude-task-master/pull/1165) [`c4f92f6`](https://github.com/eyaltoledano/claude-task-master/commit/c4f92f6a0aee3435c56eb8d27d9aa9204284833e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add configurable codebase analysis feature flag with multiple configuration sources

Users can now control whether codebase analysis features (Claude Code and Gemini CLI integration) are enabled through environment variables, MCP configuration, or project config files.

Priority order: .env > MCP session env > .taskmaster/config.json.

Set `TASKMASTER_ENABLE_CODEBASE_ANALYSIS=false` in `.env` to disable codebase analysis prompts and tool integration.

## 0.26.0-rc.0

### Minor Changes

- [#1163](https://github.com/eyaltoledano/claude-task-master/pull/1163) [`37af0f1`](https://github.com/eyaltoledano/claude-task-master/commit/37af0f191227a68d119b7f89a377bf932ee3ac66) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Gemini CLI provider with codebase-aware task generation

Added automatic codebase analysis for the Gemini CLI provider in the parse-prd, analyze-complexity, add-task, update-task, update, and update-subtask commands.
When using Gemini CLI as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks.
Tasks and subtasks generated through the Gemini CLI provider are now informed by actual codebase analysis, resulting in more accurate and contextual outputs.

- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - feat(move): improve cross-tag move UX and safety
- CLI: print "Next Steps" tips after cross-tag moves that used --ignore-dependencies (validate/fix guidance)
- CLI: show dedicated help block on ID collisions (destination tag already has the ID)
- Core: add structured suggestions to TASK_ALREADY_EXISTS errors
- MCP: map ID collision errors to TASK_ALREADY_EXISTS and include suggestions
- Tests: cover MCP options, error suggestions, CLI tips printing, and integration error payload suggestions

***

- [#1162](https://github.com/eyaltoledano/claude-task-master/pull/1162) [`4dad2fd`](https://github.com/eyaltoledano/claude-task-master/commit/4dad2fd613ceac56a65ae9d3c1c03092b8860ac9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code and Google CLI integration with automatic codebase analysis for task operations

When using Claude Code as the AI provider, task management commands now automatically analyze your codebase before generating or updating tasks. This provides more accurate, context-aware implementation details that align with your project's existing architecture and patterns.

Commands contextualised:
- add-task
- update-subtask
- update-task
- update

### Patch Changes

- [#1135](https://github.com/eyaltoledano/claude-task-master/pull/1135) [`8783708`](https://github.com/eyaltoledano/claude-task-master/commit/8783708e5e3389890a78fcf685d3da0580e73b3f) Thanks [@mm-parthy](https://github.com/mm-parthy)! - docs(move): clarify cross-tag move docs; deprecate "force"; add explicit --with-dependencies/--ignore-dependencies examples

## 0.25.1

### Patch Changes

- [#1152](https://github.com/eyaltoledano/claude-task-master/pull/1152) [`8933557`](https://github.com/eyaltoledano/claude-task-master/commit/89335578ffffc65504b2055c0c85aa7521e5e79b) Thanks [@ben-vargas](https://github.com/ben-vargas)! - fix(claude-code): prevent crash/hang when the optional `@anthropic-ai/claude-code` SDK is missing by guarding `AbortError instanceof` checks and adding explicit SDK presence checks in `doGenerate`/`doStream`. Also bump the optional dependency to `^1.0.88` for improved export consistency.

Related to JSON truncation handling in #920; this change addresses a separate error-path crash reported in #1142.

- [#1151](https://github.com/eyaltoledano/claude-task-master/pull/1151) [`db720a9`](https://github.com/eyaltoledano/claude-task-master/commit/db720a954d390bb44838cd021b8813dde8f3d8de) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Temporarily disable streaming for improved model compatibility - will be re-enabled in upcoming release

## 0.25.0

### Minor Changes

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.

This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.

## CLI Usage Examples

Move a single task from one tag to another:

```bash
# Move task 5 from the backlog tag to the feature-1 tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1

# Move task with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

# Move task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```

Move multiple tasks at once:

```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```

- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - Add Kilo Code profile integration with custom modes and MCP configuration

- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
- Outputs tasks in a minimal, git-style one-line format, as shown in the example below. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options

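
A minimal illustration of the new flag; both spellings come from the entry above:

```bash
# Compact, one-line-per-task output
tm list --compact

# Short form
tm list -c
```
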
- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.

- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`

- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command

### Patch Changes

- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced

The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.

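
A small sketch of the command this fix unblocks; it assumes you run it inside a git repository so the tag can be derived from the current branch:

```bash
# Create a tag based on the current git branch
task-master add-tag --from-branch
```
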
## 0.25.0-rc.0

### Minor Changes

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Add cross-tag task movement functionality for organizing tasks across different contexts.

This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.

## CLI Usage Examples

Move a single task from one tag to another:

```bash
# Move task 5 from the backlog tag to the feature-1 tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1

# Move task with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies

# Move task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```

Move multiple tasks at once:

```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```
- [#1040](https://github.com/eyaltoledano/claude-task-master/pull/1040) [`fc47714`](https://github.com/eyaltoledano/claude-task-master/commit/fc477143400fd11d953727bf1b4277af5ad308d1) Thanks [@DomVidja](https://github.com/DomVidja)! - Add Kilo Code profile integration with custom modes and MCP configuration

- [#1054](https://github.com/eyaltoledano/claude-task-master/pull/1054) [`782728f`](https://github.com/eyaltoledano/claude-task-master/commit/782728ff95aa2e3b766d48273b57f6c6753e8573) Thanks [@martincik](https://github.com/martincik)! - Add compact mode --compact / -c flag to the `tm list` CLI command
- Outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options

- [#1048](https://github.com/eyaltoledano/claude-task-master/pull/1048) [`e3ed4d7`](https://github.com/eyaltoledano/claude-task-master/commit/e3ed4d7c14b56894d7da675eb2b757423bea8f9d) Thanks [@joedanz](https://github.com/joedanz)! - Add CLI & MCP progress tracking for parse-prd command.

- [#1124](https://github.com/eyaltoledano/claude-task-master/pull/1124) [`95640dc`](https://github.com/eyaltoledano/claude-task-master/commit/95640dcde87ce7879858c0a951399fb49f3b6397) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`

- [#1123](https://github.com/eyaltoledano/claude-task-master/pull/1123) [`311b243`](https://github.com/eyaltoledano/claude-task-master/commit/311b2433e23c771c8d3a4d3f5ac577302b8321e5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command

### Patch Changes

- [#1131](https://github.com/eyaltoledano/claude-task-master/pull/1131) [`3dee60d`](https://github.com/eyaltoledano/claude-task-master/commit/3dee60dc3d566e3cff650accb30f994b8bb3a15e) Thanks [@joedanz](https://github.com/joedanz)! - Update Cursor one-click install link to new URL format

- [#1088](https://github.com/eyaltoledano/claude-task-master/pull/1088) [`04e11b5`](https://github.com/eyaltoledano/claude-task-master/commit/04e11b5e828597c0ba5b82ca7d5fb6f933e4f1e8) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced

The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.

## 0.24.0

### Minor Changes