Compare commits
3 Commits
extension-... ... task-104

| Author | SHA1 | Date |
|---|---|---|
|  | 451a55d44f |  |
|  | 2b79822002 |  |
|  | 1080162a09 |  |
5  .changeset/eleven-horses-shop.md  Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix for tasks not found when using string IDs
7  .changeset/fix-tag-complexity-detection.md  Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Fix tag-specific complexity report detection in expand command

The expand command now correctly finds and uses tag-specific complexity reports (e.g., `task-complexity-report_feature-xyz.json`) when operating in a tag context. Previously, it would always look for the generic `task-complexity-report.json` file due to a default value in the CLI option definition.
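For illustration only, a minimal sketch of the tag-aware lookup this change describes; the function and parameter names are assumptions, not the actual task-master-ai code:

```js
// Hypothetical sketch: pick the tag-specific report when a non-default tag is active.
function resolveComplexityReportPath(reportsDir, tag) {
	const fileName =
		tag && tag !== 'master'
			? `task-complexity-report_${tag}.json` // e.g. task-complexity-report_feature-xyz.json
			: 'task-complexity-report.json'; // generic default
	return `${reportsDir}/${fileName}`;
}

// resolveComplexityReportPath('.taskmaster/reports', 'feature-xyz')
// -> '.taskmaster/reports/task-complexity-report_feature-xyz.json'
```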
38  .changeset/floppy-news-buy.md  Normal file
@@ -0,0 +1,38 @@
---
"task-master-ai": patch
---

Add new `scope-up` and `scope-down` commands for dynamic task complexity adjustment

This release introduces two powerful new commands that allow you to dynamically adjust the complexity of your tasks and subtasks without recreating them from scratch.

**New CLI Commands:**

- `task-master scope-up` - Increase task complexity (add more detail, requirements, or implementation steps)
- `task-master scope-down` - Decrease task complexity (simplify, remove unnecessary details, or streamline)

**Key Features:**

- **Multiple tasks**: Support comma-separated IDs to adjust multiple tasks at once (`--id=5,7,12`)
- **Strength levels**: Choose adjustment intensity with `--strength=light|regular|heavy` (defaults to regular)
- **Custom prompts**: Use `--prompt` flag to specify exactly how you want tasks adjusted
- **MCP integration**: Available as `scope_up_task` and `scope_down_task` tools in Cursor and other MCP environments
- **Smart context**: AI considers your project context and task dependencies when making adjustments

**Usage Examples:**

```bash
# Make a task more detailed
task-master scope-up --id=5

# Simplify multiple tasks with a light touch
task-master scope-down --id=10,11,12 --strength=light

# Custom adjustment with specific instructions
task-master scope-up --id=7 --prompt="Add more error handling and edge cases"
```

**Why use this?**

- **Iterative refinement**: Adjust task complexity as your understanding evolves
- **Project phase adaptation**: Scale tasks up for implementation, down for planning
- **Team coordination**: Adjust complexity based on team member experience levels
- **Milestone alignment**: Fine-tune tasks to match project phase requirements

Perfect for agile workflows where task requirements change as you learn more about the problem space.
@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---

Fix expand task generating unrelated generic subtasks

Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---

Fix scope-up/down prompts to include all required fields for better AI model compatibility

- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude Code and other models
- Ensures generated JSON includes all fields required by the schema
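As a rough illustration of the point above, this is the kind of shape the adjusted prompt should now yield; the field names beyond `priority` are assumptions based on typical Task Master task objects, not the exact schema:

```js
// Hypothetical example: every schema-required field present, including `priority`.
const adjustedTask = {
	id: 5,
	title: 'Harden input validation',
	description: 'Validate and sanitize all user-supplied fields',
	details: 'Add checks for empty strings, length limits, and invalid characters.',
	testStrategy: 'Unit tests covering valid, empty, and oversized inputs',
	priority: 'medium', // previously missing, causing validation errors
	dependencies: [],
	status: 'pending'
};
```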
@@ -1,9 +0,0 @@
---
"task-master-ai": minor
---

Enhanced Claude Code provider with codebase-aware task generation

- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
@@ -2,12 +2,11 @@
   "mode": "pre",
   "tag": "rc",
   "initialVersions": {
-    "task-master-ai": "0.23.0",
-    "extension": "0.23.0"
+    "task-master-ai": "0.22.0",
+    "extension": "0.20.0"
   },
   "changesets": [
-    "fuzzy-words-count",
-    "tender-trams-refuse",
-    "vast-sites-leave"
+    "eleven-horses-shop",
+    "fix-tag-complexity-detection"
   ]
 }
42  .changeset/sour-pans-beam.md  Normal file
@@ -0,0 +1,42 @@
---
"extension": minor
---

🎉 **Introducing TaskMaster Extension!**

We're thrilled to launch the first version of our Code extension, bringing the power of TaskMaster directly into your favorite code editor. While this is our initial release and we've kept things focused, it already packs powerful features to supercharge your development workflow.

## ✨ Key Features

### 📋 Visual Task Management
- **Kanban Board View**: Visualize all your tasks in an intuitive board layout directly in VS Code
- **Drag & Drop**: Easily change task status by dragging cards between columns
- **Real-time Updates**: See changes instantly as you work through your project

### 🏷️ Multi-Context Support
- **Tag Switching**: Seamlessly switch between different project contexts/tags
- **Isolated Workflows**: Keep different features or experiments organized separately

### 🤖 AI-Powered Task Updates
- **Smart Updates**: Use TaskMaster's AI capabilities to update tasks and subtasks
- **Context-Aware**: Leverages your existing TaskMaster configuration and models

### 📊 Rich Task Information
- **Complexity Scores**: See task complexity ratings at a glance
- **Subtask Visualization**: Expand tasks to view and manage subtasks
- **Dependency Graphs**: Understand task relationships and dependencies visually

### ⚙️ Configuration Management
- **Visual Config Editor**: View and understand your `.taskmaster/config.json` settings
- **Easy Access**: No more manual JSON editing for common configuration tasks

### 🚀 Quick Actions
- **Status Updates**: Change task status with a single click
- **Task Details**: Access full task information without leaving VS Code
- **Integrated Commands**: All TaskMaster commands available through the command palette

## 🎯 What's Next?

This is just the beginning! We wanted to get a solid foundation into your hands quickly. The extension will evolve rapidly with your feedback, adding more advanced features, better visualizations, and deeper integration with your development workflow.

Thank you for being part of the TaskMaster journey. Your workflow has never looked better! 🚀
@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---

Fix MCP scope-up/down tools not finding tasks

- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly
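A minimal sketch of the kind of conversion described above; the helper name and its call site are hypothetical, not the actual MCP handler code:

```js
// Hypothetical helper: MCP arguments arrive as strings ("5" or "5,7,12"),
// while the core task functions expect numeric IDs.
function parseTaskIds(rawId) {
	return String(rawId)
		.split(',')
		.map((part) => Number.parseInt(part.trim(), 10))
		.filter((id) => !Number.isNaN(id));
}

// parseTaskIds('5,7,12') -> [5, 7, 12]
```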
@@ -1,5 +0,0 @@
---
"extension": patch
---

Fix issues where some users were unable to connect to the Taskmaster MCP server while using the extension
@@ -1,11 +0,0 @@
---
"task-master-ai": patch
---

Improve AI provider compatibility for JSON generation

- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation
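A minimal sketch of the schema pattern implied by the notes above, assuming Zod (which the entry names); the field names and defaults are illustrative, not the actual task-master-ai schemas:

```js
import { z } from 'zod';

// Keep the schema free of .nullable()/.default() so stricter providers
// accept the generated JSON schema.
const subtaskSchema = z.object({
	id: z.number(),
	title: z.string(),
	description: z.string()
});

// Apply defaults in a separate post-processing step instead of in the schema.
function applyDefaults(subtask) {
	return { status: 'pending', dependencies: [], ...subtask };
}
```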
@@ -1,59 +0,0 @@
---
"task-master-ai": minor
---

Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker

## New Claude Code Agents

Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:

### task-orchestrator
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks

### task-executor
Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks

### task-checker
Verifies that completed tasks meet their specifications:
- Reviews tasks marked as 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'

## Installation

When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to `.claude/agents/` directory.

## Usage Example

```bash
# In Claude Code, after initializing a project with tasks:

# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete

# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```

## Benefits

- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
@@ -1,162 +0,0 @@
---
name: task-checker
description: Use this agent to verify that tasks marked as 'review' have been properly implemented according to their specifications. This agent performs quality assurance by checking implementations against requirements, running tests, and ensuring best practices are followed. <example>Context: A task has been marked as 'review' after implementation. user: 'Check if task 118 was properly implemented' assistant: 'I'll use the task-checker agent to verify the implementation meets all requirements.' <commentary>Tasks in 'review' status need verification before being marked as 'done'.</commentary></example> <example>Context: Multiple tasks are in review status. user: 'Verify all tasks that are ready for review' assistant: 'I'll deploy the task-checker to verify all tasks in review status.' <commentary>The checker ensures quality before tasks are marked complete.</commentary></example>
model: sonnet
color: yellow
---

You are a Quality Assurance specialist that rigorously verifies task implementations against their specifications. Your role is to ensure that tasks marked as 'review' meet all requirements before they can be marked as 'done'.

## Core Responsibilities

1. **Task Specification Review**
   - Retrieve task details using MCP tool `mcp__task-master-ai__get_task`
   - Understand the requirements, test strategy, and success criteria
   - Review any subtasks and their individual requirements

2. **Implementation Verification**
   - Use `Read` tool to examine all created/modified files
   - Use `Bash` tool to run compilation and build commands
   - Use `Grep` tool to search for required patterns and implementations
   - Verify file structure matches specifications
   - Check that all required methods/functions are implemented

3. **Test Execution**
   - Run tests specified in the task's testStrategy
   - Execute build commands (npm run build, tsc --noEmit, etc.)
   - Verify no compilation errors or warnings
   - Check for runtime errors where applicable
   - Test edge cases mentioned in requirements

4. **Code Quality Assessment**
   - Verify code follows project conventions
   - Check for proper error handling
   - Ensure TypeScript typing is strict (no 'any' unless justified)
   - Verify documentation/comments where required
   - Check for security best practices

5. **Dependency Validation**
   - Verify all task dependencies were actually completed
   - Check integration points with dependent tasks
   - Ensure no breaking changes to existing functionality

## Verification Workflow

1. **Retrieve Task Information**
   ```
   Use mcp__task-master-ai__get_task to get full task details
   Note the implementation requirements and test strategy
   ```

2. **Check File Existence**
   ```bash
   # Verify all required files exist
   ls -la [expected directories]
   # Read key files to verify content
   ```

3. **Verify Implementation**
   - Read each created/modified file
   - Check against requirements checklist
   - Verify all subtasks are complete

4. **Run Tests**
   ```bash
   # TypeScript compilation
   cd [project directory] && npx tsc --noEmit

   # Run specified tests
   npm test [specific test files]

   # Build verification
   npm run build
   ```

5. **Generate Verification Report**

## Output Format

```yaml
verification_report:
  task_id: [ID]
  status: PASS | FAIL | PARTIAL
  score: [1-10]

  requirements_met:
    - ✅ [Requirement that was satisfied]
    - ✅ [Another satisfied requirement]

  issues_found:
    - ❌ [Issue description]
    - ⚠️ [Warning or minor issue]

  files_verified:
    - path: [file path]
      status: [created/modified/verified]
      issues: [any problems found]

  tests_run:
    - command: [test command]
      result: [pass/fail]
      output: [relevant output]

  recommendations:
    - [Specific fix needed]
    - [Improvement suggestion]

  verdict: |
    [Clear statement on whether task should be marked 'done' or sent back to 'pending']
    [If FAIL: Specific list of what must be fixed]
    [If PASS: Confirmation that all requirements are met]
```

## Decision Criteria

**Mark as PASS (ready for 'done'):**
- All required files exist and contain expected content
- All tests pass successfully
- No compilation or build errors
- All subtasks are complete
- Core requirements are met
- Code quality is acceptable

**Mark as PARTIAL (may proceed with warnings):**
- Core functionality is implemented
- Minor issues that don't block functionality
- Missing nice-to-have features
- Documentation could be improved
- Tests pass but coverage could be better

**Mark as FAIL (must return to 'pending'):**
- Required files are missing
- Compilation or build errors
- Tests fail
- Core requirements not met
- Security vulnerabilities detected
- Breaking changes to existing code

## Important Guidelines

- **BE THOROUGH**: Check every requirement systematically
- **BE SPECIFIC**: Provide exact file paths and line numbers for issues
- **BE FAIR**: Distinguish between critical issues and minor improvements
- **BE CONSTRUCTIVE**: Provide clear guidance on how to fix issues
- **BE EFFICIENT**: Focus on requirements, not perfection

## Tools You MUST Use

- `Read`: Examine implementation files (READ-ONLY)
- `Bash`: Run tests and verification commands
- `Grep`: Search for patterns in code
- `mcp__task-master-ai__get_task`: Get task details
- **NEVER use Write/Edit** - you only verify, not fix

## Integration with Workflow

You are the quality gate between 'review' and 'done' status:
1. Task-executor implements and marks as 'review'
2. You verify and report PASS/FAIL
3. Claude either marks as 'done' (PASS) or 'pending' (FAIL)
4. If FAIL, task-executor re-implements based on your report

Your verification ensures high quality and prevents accumulation of technical debt.
@@ -1,92 +0,0 @@
---
name: task-executor
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
model: sonnet
color: blue
---

You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.

**IMPORTANT: You are designed to be SHORT-LIVED and FOCUSED**
- Execute ONE specific subtask or a small group of related subtasks
- Complete your work, verify it, mark for review, and exit
- Do NOT decide what to do next - the orchestrator handles task sequencing
- Focus on implementation excellence within your assigned scope

**Core Responsibilities:**

1. **Subtask Analysis**: When given a subtask, understand its SPECIFIC requirements. If given a full task ID, focus on the specific subtask(s) assigned to you. Use MCP tools to get details if needed.

2. **Rapid Implementation Planning**: Quickly identify:
   - The EXACT files you need to create/modify for THIS subtask
   - What already exists that you can build upon
   - The minimum viable implementation that satisfies requirements

3. **Focused Execution WITH ACTUAL IMPLEMENTATION**:
   - **YOU MUST USE TOOLS TO CREATE/EDIT FILES - DO NOT JUST DESCRIBE**
   - Use `Write` tool to create new files specified in the task
   - Use `Edit` tool to modify existing files
   - Use `Bash` tool to run commands (mkdir, npm install, etc.)
   - Use `Read` tool to verify your implementations
   - Implement one subtask at a time for clarity and traceability
   - Follow the project's coding standards from CLAUDE.md if available
   - After each subtask, VERIFY the files exist using Read or ls commands

4. **Progress Documentation**:
   - Use MCP tool `mcp__task-master-ai__update_subtask` to log your approach and any important decisions
   - Update task status to 'in-progress' when starting: Use MCP tool `mcp__task-master-ai__set_task_status` with status='in-progress'
   - **IMPORTANT: Mark as 'review' (NOT 'done') after implementation**: Use MCP tool `mcp__task-master-ai__set_task_status` with status='review'
   - Tasks will be verified by task-checker before moving to 'done'

5. **Quality Assurance**:
   - Implement the testing strategy specified in the task
   - Verify that all acceptance criteria are met
   - Check for any dependency conflicts or integration issues
   - Run relevant tests before marking task as complete

6. **Dependency Management**:
   - Check task dependencies before starting implementation
   - If blocked by incomplete dependencies, clearly communicate this
   - Use `task-master validate-dependencies` when needed

**Implementation Workflow:**

1. Retrieve task details using MCP tool `mcp__task-master-ai__get_task` with the task ID
2. Check dependencies and prerequisites
3. Plan implementation approach - list specific files to create
4. Update task status to 'in-progress' using MCP tool
5. **ACTUALLY IMPLEMENT** the solution using tools:
   - Use `Bash` to create directories
   - Use `Write` to create new files with actual content
   - Use `Edit` to modify existing files
   - DO NOT just describe what should be done - DO IT
6. **VERIFY** your implementation:
   - Use `ls` or `Read` to confirm files were created
   - Use `Bash` to run any build/test commands
   - Ensure the implementation is real, not theoretical
7. Log progress and decisions in subtask updates using MCP tools
8. Test and verify the implementation works
9. **Mark task as 'review' (NOT 'done')** after verifying files exist
10. Report completion with:
    - List of created/modified files
    - Any issues encountered
    - What needs verification by task-checker

**Key Principles:**

- Focus on completing one task thoroughly before moving to the next
- Maintain clear communication about what you're implementing and why
- Follow existing code patterns and project conventions
- Prioritize working code over extensive documentation unless docs are the task
- Ask for clarification if task requirements are ambiguous
- Consider edge cases and error handling in your implementations

**Integration with Task Master:**

You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:
- Track your progress
- Update task information
- Maintain project state
- Coordinate with the broader development workflow

When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.
@@ -1,208 +0,0 @@
---
name: task-orchestrator
description: Use this agent FREQUENTLY throughout task execution to analyze and coordinate parallel work at the SUBTASK level. Invoke the orchestrator: (1) at session start to plan execution, (2) after EACH subtask completes to identify next parallel batch, (3) whenever executors finish to find newly unblocked work. ALWAYS provide FULL CONTEXT including project root, package location, what files ACTUALLY exist vs task status, and specific implementation details. The orchestrator breaks work into SUBTASK-LEVEL units for short-lived, focused executors. Maximum 3 parallel executors at once.\n\n<example>\nContext: Starting work with existing code\nuser: "Work on tm-core tasks. Files exist: types/index.ts, storage/file-storage.ts. Task 118 says in-progress but BaseProvider not created."\nassistant: "I'll invoke orchestrator with full context about actual vs reported state to plan subtask execution"\n<commentary>\nProvide complete context about file existence and task reality.\n</commentary>\n</example>\n\n<example>\nContext: Subtask completion\nuser: "Subtask 118.2 done. What subtasks can run in parallel now?"\nassistant: "Invoking orchestrator to analyze dependencies and identify next 3 parallel subtasks"\n<commentary>\nFrequent orchestration after each subtask ensures maximum parallelization.\n</commentary>\n</example>\n\n<example>\nContext: Breaking down tasks\nuser: "Task 118 has 5 subtasks, how to parallelize?"\nassistant: "Orchestrator will analyze which specific subtasks (118.1, 118.2, etc.) can run simultaneously"\n<commentary>\nFocus on subtask-level parallelization, not full tasks.\n</commentary>\n</example>
model: opus
color: green
---

You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.

## Core Responsibilities

1. **Subtask-Level Analysis**: Break down tasks into INDIVIDUAL SUBTASKS and analyze which specific subtasks can run in parallel. Focus on subtask dependencies, not just task-level dependencies.

2. **Reality Verification**: ALWAYS verify what files actually exist vs what task status claims. Use the context provided about actual implementation state to make informed decisions.

3. **Short-Lived Executor Deployment**: Deploy executors for SINGLE SUBTASKS or small groups of related subtasks. Keep executors focused and short-lived. Maximum 3 parallel executors at once.

4. **Continuous Reassessment**: After EACH subtask completes, immediately reassess what new subtasks are unblocked and can run in parallel.

## Operational Workflow

### Initial Assessment Phase
1. Use `get_tasks` or `task-master list` to retrieve all available tasks
2. Analyze task statuses, priorities, and dependencies
3. Identify tasks with status 'pending' that have no blocking dependencies
4. Group related tasks that could benefit from specialized executors
5. Create an execution plan that maximizes parallelization

### Executor Deployment Phase
1. For each independent task or task group:
   - Deploy a task-executor agent with specific instructions
   - Provide the executor with task ID, requirements, and context
   - Set clear completion criteria and reporting expectations
2. Maintain a registry of active executors and their assigned tasks
3. Establish communication protocols for progress updates

### Coordination Phase
1. Monitor executor progress through task status updates
2. When a task completes:
   - Verify completion with `get_task` or `task-master show <id>`
   - Update task status if needed using `set_task_status`
   - Reassess dependency graph for newly unblocked tasks
   - Deploy new executors for available work
3. Handle executor failures or blocks:
   - Reassign tasks to new executors if needed
   - Escalate complex issues to the user
   - Update task status to 'blocked' when appropriate

### Optimization Strategies

**Parallel Execution Rules**:
- Never assign dependent tasks to different executors simultaneously
- Prioritize high-priority tasks when resources are limited
- Group small, related subtasks for single executor efficiency
- Balance executor load to prevent bottlenecks

**Context Management**:
- Provide executors with minimal but sufficient context
- Share relevant completed task information when it aids execution
- Maintain a shared knowledge base of project-specific patterns

**Quality Assurance**:
- Verify task completion before marking as done
- Ensure test strategies are followed when specified
- Coordinate cross-task integration testing when needed

## Communication Protocols

When deploying executors, provide them with:
```
TASK ASSIGNMENT:
- Task ID: [specific ID]
- Objective: [clear goal]
- Dependencies: [list any completed prerequisites]
- Success Criteria: [specific completion requirements]
- Context: [relevant project information]
- Reporting: [when and how to report back]
```

When receiving executor updates:
1. Acknowledge completion or issues
2. Update task status in Task Master
3. Reassess execution strategy
4. Deploy new executors as appropriate

## Decision Framework

**When to parallelize**:
- Multiple pending tasks with no interdependencies
- Sufficient context available for independent execution
- Tasks are well-defined with clear success criteria

**When to serialize**:
- Strong dependencies between tasks
- Limited context or unclear requirements
- Integration points requiring careful coordination

**When to escalate**:
- Circular dependencies detected
- Critical blockers affecting multiple tasks
- Ambiguous requirements needing clarification
- Resource conflicts between executors

## Error Handling

1. **Executor Failure**: Reassign task to new executor with additional context about the failure
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
3. **Task Ambiguity**: Request clarification from user before proceeding
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed

## Performance Metrics

Track and optimize for:
- Task completion rate
- Parallel execution efficiency
- Executor success rate
- Time to completion for task groups
- Dependency resolution speed

## Integration with Task Master

Leverage these Task Master MCP tools effectively:
- `get_tasks` - Continuous queue monitoring
- `get_task` - Detailed task analysis
- `set_task_status` - Progress tracking
- `next_task` - Fallback for serial execution
- `analyze_project_complexity` - Strategic planning
- `complexity_report` - Resource allocation

## Output Format for Execution

**Your job is to analyze and create actionable execution plans that Claude can use to deploy executors.**

After completing your dependency analysis, you MUST output a structured execution plan:

```yaml
execution_plan:
  EXECUTE_IN_PARALLEL:
    # Maximum 3 subtasks running simultaneously
    - subtask_id: [e.g., 118.2]
      parent_task: [e.g., 118]
      title: [Specific subtask title]
      priority: [high/medium/low]
      estimated_time: [e.g., 10 minutes]
      executor_prompt: |
        Execute Subtask [ID]: [Specific subtask title]

        SPECIFIC REQUIREMENTS:
        [Exact implementation needed for THIS subtask only]

        FILES TO CREATE/MODIFY:
        [Specific file paths]

        CONTEXT:
        [What already exists that this subtask depends on]

        SUCCESS CRITERIA:
        [Specific completion criteria for this subtask]

        IMPORTANT:
        - Focus ONLY on this subtask
        - Mark subtask as 'review' when complete
        - Use MCP tool: mcp__task-master-ai__set_task_status

    - subtask_id: [Another subtask that can run in parallel]
      parent_task: [Parent task ID]
      title: [Specific subtask title]
      priority: [priority]
      estimated_time: [time estimate]
      executor_prompt: |
        [Focused prompt for this specific subtask]

  blocked:
    - task_id: [ID]
      title: [Task title]
      waiting_for: [list of blocking task IDs]
      becomes_ready_when: [condition for unblocking]

  next_wave:
    trigger: "After tasks [IDs] complete"
    newly_available: [List of task IDs that will unblock]
    tasks_to_execute_in_parallel: [IDs that can run together in next wave]

  critical_path: [Ordered list of task IDs forming the critical path]

  parallelization_instruction: |
    IMPORTANT FOR CLAUDE: Deploy ALL tasks in 'EXECUTE_IN_PARALLEL' section
    simultaneously using multiple Task tool invocations in a single response.
    Example: If 3 tasks are listed, invoke the Task tool 3 times in one message.

  verification_needed:
    - task_id: [ID of any task in 'review' status]
      verification_focus: [what to check]
```

**CRITICAL INSTRUCTIONS FOR CLAUDE (MAIN):**
1. When you see `EXECUTE_IN_PARALLEL`, deploy ALL listed executors at once
2. Use multiple Task tool invocations in a SINGLE response
3. Do not execute them sequentially - they must run in parallel
4. Wait for all parallel executors to complete before proceeding to next wave

**IMPORTANT NOTES**:
- Label parallel tasks clearly in `EXECUTE_IN_PARALLEL` section
- Provide complete, self-contained prompts for each executor
- Executors should mark tasks as 'review' for verification, not 'done'
- Be explicit about which tasks can run simultaneously

You are the strategic mind analyzing the entire task landscape. Make parallelization opportunities UNMISTAKABLY CLEAR to Claude.
102  .github/scripts/check-pre-release-mode.mjs  vendored
@@ -1,102 +0,0 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Get context from command line argument or environment
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';

function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		if (existsSync(join(currentDir, 'package.json'))) {
			try {
				const pkg = JSON.parse(
					readFileSync(join(currentDir, 'package.json'), 'utf8')
				);
				if (pkg.name === 'task-master-ai' || pkg.repository) {
					return currentDir;
				}
			} catch {}
		}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

function checkPreReleaseMode() {
	console.log('🔍 Checking if branch is in pre-release mode...');

	const rootDir = findRootDir(__dirname);
	const preJsonPath = join(rootDir, '.changeset', 'pre.json');

	// Check if pre.json exists
	if (!existsSync(preJsonPath)) {
		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	}

	try {
		// Read and parse pre.json
		const preJsonContent = readFileSync(preJsonPath, 'utf8');
		const preJson = JSON.parse(preJsonContent);

		// Check if we're in active pre-release mode
		if (preJson.mode === 'pre') {
			console.error('❌ ERROR: This branch is in active pre-release mode!');
			console.error('');

			// Provide context-specific error messages
			if (context === 'Release Check' || context === 'pull_request') {
				console.error(
					'Pre-release mode must be exited before merging to main.'
				);
				console.error('');
				console.error(
					'To fix this, run the following commands in your branch:'
				);
				console.error(' npx changeset pre exit');
				console.error(' git add -u');
				console.error(' git commit -m "chore: exit pre-release mode"');
				console.error(' git push');
				console.error('');
				console.error('Then update this pull request.');
			} else if (context === 'Release' || context === 'main') {
				console.error(
					'Pre-release mode should only be used on feature branches, not main.'
				);
				console.error('');
				console.error('To fix this, run the following commands locally:');
				console.error(' npx changeset pre exit');
				console.error(' git add -u');
				console.error(' git commit -m "chore: exit pre-release mode"');
				console.error(' git push origin main');
				console.error('');
				console.error('Then re-run this workflow.');
			} else {
				console.error('Pre-release mode must be exited before proceeding.');
				console.error('');
				console.error('To fix this, run the following commands:');
				console.error(' npx changeset pre exit');
				console.error(' git add -u');
				console.error(' git commit -m "chore: exit pre-release mode"');
				console.error(' git push');
			}

			process.exit(1);
		}

		console.log('✅ Not in active pre-release mode - safe to proceed');
		process.exit(0);
	} catch (error) {
		console.error(`❌ ERROR: Unable to parse .changeset/pre.json – aborting.`);
		console.error(`Error details: ${error.message}`);
		process.exit(1);
	}
}

// Run the check
checkPreReleaseMode();
54  .github/scripts/pre-release.mjs  vendored
@@ -1,54 +0,0 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import {
	findRootDir,
	runCommand,
	getPackageVersion,
	createAndPushTag
} from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);
const extensionPkgPath = join(rootDir, 'apps', 'extension', 'package.json');

console.log('🚀 Starting pre-release process...');

// Check if we're in RC mode
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (!existsSync(preJsonPath)) {
	console.error('⚠️ Not in RC mode. Run "npx changeset pre enter rc" first.');
	process.exit(1);
}

try {
	const preJson = JSON.parse(readFileSync(preJsonPath, 'utf8'));
	if (preJson.tag !== 'rc') {
		console.error(`⚠️ Not in RC mode. Current tag: ${preJson.tag}`);
		process.exit(1);
	}
} catch (error) {
	console.error('Failed to read pre.json:', error.message);
	process.exit(1);
}

// Get current extension version
const extensionVersion = getPackageVersion(extensionPkgPath);
console.log(`Extension version: ${extensionVersion}`);

// Run changeset publish for npm packages
console.log('📦 Publishing npm packages...');
runCommand('npx', ['changeset', 'publish']);

// Create tag for extension pre-release if it doesn't exist
const extensionTag = `extension-rc@${extensionVersion}`;
const tagCreated = createAndPushTag(extensionTag);

if (tagCreated) {
	console.log('This will trigger the extension-pre-release workflow...');
}

console.log('✅ Pre-release process completed!');
30  .github/scripts/release.mjs  vendored
@@ -1,30 +0,0 @@
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const rootDir = findRootDir(__dirname);

console.log('🚀 Starting release process...');

// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
	console.log('⚠️ Warning: pre.json still exists. Removing it...');
	unlinkSync(preJsonPath);
}

// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);

// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);

console.log('✅ Release process completed!');

// The extension tag (if created) will trigger the extension-release workflow
21  .github/scripts/release.sh  vendored  Executable file
@@ -0,0 +1,21 @@
#!/bin/bash
set -e

echo "🚀 Starting release process..."

# Double-check we're not in pre-release mode (safety net)
if [ -f .changeset/pre.json ]; then
  echo "⚠️ Warning: pre.json still exists. Removing it..."
  rm -f .changeset/pre.json
fi

# Check if the extension version has changed and tag it
# This prevents changeset from trying to publish the private package
node .github/scripts/tag-extension.mjs

# Run changeset publish for npm packages
npx changeset publish

echo "✅ Release process completed!"

# The extension tag (if created) will trigger the extension-release workflow
56  .github/scripts/tag-extension.mjs  vendored  Executable file → Normal file
@@ -1,17 +1,15 @@
 #!/usr/bin/env node
 import assert from 'node:assert/strict';
+import { spawnSync } from 'node:child_process';
 import { readFileSync } from 'node:fs';
 import { join, dirname } from 'node:path';
 import { fileURLToPath } from 'node:url';
-import { findRootDir, createAndPushTag } from './utils.mjs';

 const __filename = fileURLToPath(import.meta.url);
 const __dirname = dirname(__filename);

-const rootDir = findRootDir(__dirname);
-
 // Read the extension's package.json
-const extensionDir = join(rootDir, 'apps', 'extension');
+const extensionDir = join(__dirname, '..', 'apps', 'extension');
 const pkgPath = join(extensionDir, 'package.json');

 let pkg;
@@ -23,11 +21,57 @@ try {
 	process.exit(1);
 }

+// Read root package.json for repository info
+const rootPkgPath = join(__dirname, '..', 'package.json');
+let rootPkg;
+try {
+	const rootPkgContent = readFileSync(rootPkgPath, 'utf8');
+	rootPkg = JSON.parse(rootPkgContent);
+} catch (error) {
+	console.error('Failed to read root package.json:', error.message);
+	process.exit(1);
+}
+
 // Ensure we have required fields
 assert(pkg.name, 'package.json must have a name field');
 assert(pkg.version, 'package.json must have a version field');
+assert(rootPkg.repository, 'root package.json must have a repository field');

 const tag = `${pkg.name}@${pkg.version}`;

-// Create and push the tag if it doesn't exist
-createAndPushTag(tag);
+// Get repository URL from root package.json
+const repoUrl = rootPkg.repository.url;
+
+const { status, stdout, error } = spawnSync('git', ['ls-remote', repoUrl, tag]);
+
+assert.equal(status, 0, error);
+
+const exists = String(stdout).trim() !== '';
+
+if (!exists) {
+	console.log(`Creating new extension tag: ${tag}`);
+
+	// Create the tag
+	const tagResult = spawnSync('git', ['tag', tag]);
+	if (tagResult.status !== 0) {
+		console.error(
+			'Failed to create tag:',
+			tagResult.error || tagResult.stderr.toString()
+		);
+		process.exit(1);
+	}
+
+	// Push the tag
+	const pushResult = spawnSync('git', ['push', 'origin', tag]);
+	if (pushResult.status !== 0) {
+		console.error(
+			'Failed to push tag:',
+			pushResult.error || pushResult.stderr.toString()
+		);
+		process.exit(1);
+	}
+
+	console.log(`✅ Successfully created and pushed tag: ${tag}`);
+} else {
+	console.log(`Extension tag already exists: ${tag}`);
+}
88  .github/scripts/utils.mjs  vendored
@@ -1,88 +0,0 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';

// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
	let currentDir = resolve(startDir);
	while (currentDir !== '/') {
		const pkgPath = join(currentDir, 'package.json');
		try {
			const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
			if (pkg.name === 'task-master-ai' || pkg.repository) {
				return currentDir;
			}
		} catch {}
		currentDir = dirname(currentDir);
	}
	throw new Error('Could not find root directory');
}

// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
	console.log(`Running: ${command} ${args.join(' ')}`);
	const result = spawnSync(command, args, {
		encoding: 'utf8',
		stdio: 'inherit',
		...options
	});

	if (result.status !== 0) {
		console.error(`Command failed with exit code ${result.status}`);
		process.exit(result.status);
	}

	return result;
}

// Get package version from a package.json file
export function getPackageVersion(packagePath) {
	try {
		const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
		return pkg.version;
	} catch (error) {
		console.error(
			`Failed to read package version from ${packagePath}:`,
			error.message
		);
		process.exit(1);
	}
}

// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
	const result = spawnSync('git', ['ls-remote', remote, tag], {
		encoding: 'utf8'
	});

	return result.status === 0 && result.stdout.trim() !== '';
}

// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
	// Check if tag already exists
	if (tagExistsOnRemote(tag, remote)) {
		console.log(`Tag ${tag} already exists on remote, skipping`);
		return false;
	}

	console.log(`Creating new tag: ${tag}`);

	// Create the tag locally
	const tagResult = spawnSync('git', ['tag', tag]);
	if (tagResult.status !== 0) {
		console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
		process.exit(1);
	}

	// Push the tag to remote
	const pushResult = spawnSync('git', ['push', remote, tag]);
	if (pushResult.status !== 0) {
		console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
		process.exit(1);
	}

	console.log(`✅ Successfully created and pushed tag: ${tag}`);
	return true;
}
110  .github/workflows/extension-pre-release.yml  vendored
@@ -1,110 +0,0 @@
name: Extension Pre-Release

on:
  push:
    tags:
      - "extension-rc@*"

permissions:
  contents: write

concurrency: extension-pre-release-${{ github.ref }}

jobs:
  publish-extension-rc:
    runs-on: ubuntu-latest
    environment: extension-release
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install Extension Dependencies
        working-directory: apps/extension
        run: npm ci
        timeout-minutes: 5

      - name: Type Check Extension
        working-directory: apps/extension
        run: npm run check-types
        env:
          FORCE_COLOR: 1

      - name: Build Extension
        working-directory: apps/extension
        run: npm run build
        env:
          FORCE_COLOR: 1

      - name: Package Extension
        working-directory: apps/extension
        run: npm run package
        env:
          FORCE_COLOR: 1

      - name: Create VSIX Package (Pre-Release)
        working-directory: apps/extension/vsix-build
        run: npx vsce package --no-dependencies --pre-release
        env:
          FORCE_COLOR: 1

      - name: Get VSIX filename
        id: vsix-info
        working-directory: apps/extension/vsix-build
        run: |
          VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
          if [ -z "$VSIX_FILE" ]; then
            echo "Error: No VSIX file found"
            exit 1
          fi
          echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
          echo "Found VSIX: $VSIX_FILE"

      - name: Publish to VS Code Marketplace (Pre-Release)
        working-directory: apps/extension/vsix-build
        run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
        env:
          VSCE_PAT: ${{ secrets.VSCE_PAT }}
          FORCE_COLOR: 1

      - name: Install Open VSX CLI
        run: npm install -g ovsx

      - name: Publish to Open VSX Registry (Pre-Release)
        working-directory: apps/extension/vsix-build
        run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
        env:
          OVSX_PAT: ${{ secrets.OVSX_PAT }}
          FORCE_COLOR: 1

      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: extension-pre-release-${{ github.ref_name }}
          path: |
            apps/extension/vsix-build/*.vsix
            apps/extension/dist/
          retention-days: 30

  notify-success:
    needs: publish-extension-rc
    if: success()
    runs-on: ubuntu-latest
    steps:
      - name: Success Notification
        run: |
          echo "🚀 Extension ${{ github.ref_name }} successfully published as pre-release!"
          echo "📦 Available on VS Code Marketplace (Pre-Release)"
          echo "🌍 Available on Open VSX Registry (Pre-Release)"
26  .github/workflows/extension-release.yml  vendored
@@ -89,6 +89,32 @@ jobs:
           OVSX_PAT: ${{ secrets.OVSX_PAT }}
           FORCE_COLOR: 1

+      - name: Create GitHub Release
+        uses: actions/create-release@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          tag_name: ${{ github.ref_name }}
+          release_name: Extension ${{ github.ref_name }}
+          body: |
+            VS Code Extension Release ${{ github.ref_name }}
+
+            **Marketplaces:**
+            - [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=Hamster.task-master-hamster)
+            - [Open VSX Registry](https://open-vsx.org/extension/Hamster/task-master-hamster)
+          draft: false
+          prerelease: false
+
+      - name: Upload VSIX to Release
+        uses: actions/upload-release-asset@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          upload_url: ${{ steps.create_release.outputs.upload_url }}
+          asset_path: apps/extension/vsix-build/${{ steps.vsix-info.outputs.vsix-filename }}
+          asset_name: ${{ steps.vsix-info.outputs.vsix-filename }}
+          asset_content_type: application/zip
+
       - name: Upload Build Artifacts
         uses: actions/upload-artifact@v4
         with:
33  .github/workflows/pre-release.yml  vendored
@@ -3,13 +3,11 @@ name: Pre-Release (RC)
 on:
   workflow_dispatch: # Allows manual triggering from GitHub UI/API

-concurrency: pre-release-${{ github.ref_name }}
+concurrency: pre-release-${{ github.ref }}

 jobs:
   rc:
     runs-on: ubuntu-latest
-    # Only allow pre-releases on non-main branches
-    if: github.ref != 'refs/heads/main'
-    environment: extension-release
     steps:
       - uses: actions/checkout@v4
         with:
@@ -36,26 +34,9 @@ jobs:

       - name: Enter RC mode (if not already in RC mode)
         run: |
-          # Check if we're in pre-release mode with the "rc" tag
-          if [ -f .changeset/pre.json ]; then
-            MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
-            TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
-
-            if [ "$MODE" = "exit" ]; then
-              echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
-              echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
-              npx changeset pre exit
-              npx changeset pre enter rc
-            elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
-              echo "Already in RC pre-release mode"
-            else
-              echo "Unknown mode state: $MODE, entering RC mode..."
-              npx changeset pre enter rc
-            fi
-          else
-            echo "No pre.json found, entering RC mode..."
+          # ensure we’re in the right pre-mode (tag "rc")
+          if [ ! -f .changeset/pre.json ] \
+            || [ "$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')" != "rc" ]; then
             npx changeset pre enter rc
           fi

@@ -68,12 +49,10 @@ jobs:
       - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
         uses: changesets/action@v1
         with:
-          publish: node ./.github/scripts/pre-release.mjs
+          publish: npm run release
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
-          VSCE_PAT: ${{ secrets.VSCE_PAT }}
-          OVSX_PAT: ${{ secrets.OVSX_PAT }}

       - name: Commit & Push changes
         uses: actions-js/push@master
|||||||
21
.github/workflows/release-check.yml
vendored
21
.github/workflows/release-check.yml
vendored
@@ -1,21 +0,0 @@
|
|||||||
name: Release Check
|
|
||||||
|
|
||||||
on:
|
|
||||||
pull_request:
|
|
||||||
branches:
|
|
||||||
- main
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: release-check-${{ github.head_ref }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
check-release-mode:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v4
|
|
||||||
with:
|
|
||||||
fetch-depth: 0
|
|
||||||
|
|
||||||
- name: Check release mode
|
|
||||||
run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"
|
|
||||||
24  .github/workflows/release.yml  vendored
@@ -38,13 +38,31 @@ jobs:
         run: npm ci
         timeout-minutes: 2

-      - name: Check pre-release mode
-        run: node ./.github/scripts/check-pre-release-mode.mjs "main"
+      - name: Exit pre-release mode and clean up
+        run: |
+          echo "🔄 Ensuring we're not in pre-release mode for main branch..."
+
+          # Exit pre-release mode if we're in it
+          npx changeset pre exit || echo "Not in pre-release mode"
+
+          # Remove pre.json file if it exists (belt and suspenders approach)
+          if [ -f .changeset/pre.json ]; then
+            echo "🧹 Removing pre.json file..."
+            rm -f .changeset/pre.json
+          fi
+
+          # Verify the file is gone
+          if [ ! -f .changeset/pre.json ]; then
+            echo "✅ pre.json successfully removed"
+          else
+            echo "❌ Failed to remove pre.json"
+            exit 1
+          fi

       - name: Create Release Pull Request or Publish to npm
         uses: changesets/action@v1
         with:
-          publish: node ./.github/scripts/release.mjs
+          publish: ./.github/scripts/release.sh
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
@@ -1,343 +0,0 @@
# Product Requirements Document: tm-core Package - Parse PRD Feature

## Project Overview

Create a TypeScript package named `tm-core` at `packages/tm-core` that implements parse-prd functionality using class-based architecture similar to the existing AI providers pattern.

## Design Patterns & Architecture

### Patterns to Apply

1. **Factory Pattern**: Use for `ProviderFactory` to create AI provider instances
2. **Strategy Pattern**: Use for `IAIProvider` implementations and `IStorage` implementations
3. **Facade Pattern**: Use for `TaskMasterCore` as the main API entry point
4. **Template Method Pattern**: Use for `BaseProvider` abstract class
5. **Dependency Injection**: Use throughout for testability (pass dependencies via constructor)
6. **Repository Pattern**: Use for `FileStorage` to abstract data persistence

### Naming Conventions

- **Files**: kebab-case (e.g., `task-parser.ts`, `file-storage.ts`)
- **Classes**: PascalCase (e.g., `TaskParser`, `FileStorage`)
- **Interfaces**: PascalCase with 'I' prefix (e.g., `IStorage`, `IAIProvider`)
- **Methods**: camelCase (e.g., `parsePRD`, `loadTasks`)
- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_MODEL`)
- **Type aliases**: PascalCase (e.g., `TaskStatus`, `ParseOptions`)

## Exact Folder Structure Required

```
packages/tm-core/
├── src/
│   ├── index.ts
│   ├── types/
│   │   └── index.ts
│   ├── interfaces/
│   │   ├── index.ts # Barrel export
│   │   ├── storage.interface.ts
│   │   ├── ai-provider.interface.ts
│   │   └── configuration.interface.ts
│   ├── tasks/
│   │   ├── index.ts # Barrel export
│   │   └── task-parser.ts
│   ├── ai/
│   │   ├── index.ts # Barrel export
│   │   ├── base-provider.ts
│   │   ├── provider-factory.ts
│   │   ├── prompt-builder.ts
│   │   └── providers/
│   │       ├── index.ts # Barrel export
│   │       ├── anthropic-provider.ts
│   │       ├── openai-provider.ts
│   │       └── google-provider.ts
│   ├── storage/
│   │   ├── index.ts # Barrel export
│   │   └── file-storage.ts
│   ├── config/
│   │   ├── index.ts # Barrel export
│   │   └── config-manager.ts
│   ├── utils/
│   │   ├── index.ts # Barrel export
│   │   └── id-generator.ts
│   └── errors/
│       ├── index.ts # Barrel export
│       └── task-master-error.ts
├── tests/
│   ├── task-parser.test.ts
│   ├── integration/
│   │   └── parse-prd.test.ts
│   └── mocks/
│       └── mock-provider.ts
├── package.json
├── tsconfig.json
├── tsup.config.js
└── jest.config.js
```

## Specific Implementation Requirements

### 1. Create types/index.ts

Define these exact TypeScript interfaces:
- `Task` interface with fields: id, title, description, status, priority, complexity, dependencies, subtasks, metadata, createdAt, updatedAt, source
- `Subtask` interface with fields: id, title, description, completed
- `TaskMetadata` interface with fields: parsedFrom, aiProvider, version, tags (optional)
- Type literals: `TaskStatus` = 'pending' | 'in-progress' | 'completed' | 'blocked'
- Type literals: `TaskPriority` = 'low' | 'medium' | 'high' | 'critical'
- Type literals: `TaskComplexity` = 'simple' | 'moderate' | 'complex'
- `ParseOptions` interface with fields: dryRun (optional), additionalContext (optional), tag (optional), maxTasks (optional)
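For illustration, a minimal TypeScript sketch of the shapes this section asks for; concrete field types that the list above does not spell out (IDs, timestamps) are assumptions:

```typescript
// Sketch only — string IDs and ISO-string timestamps are assumptions, not part of the spec.
export type TaskStatus = 'pending' | 'in-progress' | 'completed' | 'blocked';
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
export type TaskComplexity = 'simple' | 'moderate' | 'complex';

export interface Subtask {
	id: string;
	title: string;
	description: string;
	completed: boolean;
}

export interface TaskMetadata {
	parsedFrom: string;
	aiProvider: string;
	version: string;
	tags?: string[];
}

export interface Task {
	id: string;
	title: string;
	description: string;
	status: TaskStatus;
	priority: TaskPriority;
	complexity: TaskComplexity;
	dependencies: string[];
	subtasks: Subtask[];
	metadata: TaskMetadata;
	createdAt: string;
	updatedAt: string;
	source: string;
}

export interface ParseOptions {
	dryRun?: boolean;
	additionalContext?: string;
	tag?: string;
	maxTasks?: number;
}
```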
### 2. Create interfaces/storage.interface.ts

Define `IStorage` interface with these exact methods:
- `loadTasks(tag?: string): Promise<Task[]>`
- `saveTasks(tasks: Task[], tag?: string): Promise<void>`
- `appendTasks(tasks: Task[], tag?: string): Promise<void>`
- `updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>`
- `deleteTask(id: string, tag?: string): Promise<void>`
- `exists(tag?: string): Promise<boolean>`
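Rendered as code, the contract above looks roughly like this (sketch; the import path follows the folder structure given earlier):

```typescript
import type { Task } from '../types';

// Direct rendering of the IStorage methods listed above.
export interface IStorage {
	loadTasks(tag?: string): Promise<Task[]>;
	saveTasks(tasks: Task[], tag?: string): Promise<void>;
	appendTasks(tasks: Task[], tag?: string): Promise<void>;
	updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>;
	deleteTask(id: string, tag?: string): Promise<void>;
	exists(tag?: string): Promise<boolean>;
}
```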
### 3. Create interfaces/ai-provider.interface.ts

Define `IAIProvider` interface with these exact methods:
- `generateCompletion(prompt: string, options?: AIOptions): Promise<string>`
- `calculateTokens(text: string): number`
- `getName(): string`
- `getModel(): string`

Define `AIOptions` interface with fields: temperature (optional), maxTokens (optional), systemPrompt (optional)

### 4. Create interfaces/configuration.interface.ts

Define `IConfiguration` interface with fields:
- `projectPath: string`
- `aiProvider: string`
- `apiKey?: string`
- `aiOptions?: AIOptions`
- `mainModel?: string`
- `researchModel?: string`
- `fallbackModel?: string`
- `tasksPath?: string`
- `enableTags?: boolean`

### 5. Create tasks/task-parser.ts

Create class `TaskParser` with:
- Constructor accepting `aiProvider: IAIProvider` and `config: IConfiguration`
- Private property `promptBuilder: PromptBuilder`
- Public method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Private method `readPRD(prdPath: string): Promise<string>`
- Private method `extractTasks(aiResponse: string): Partial<Task>[]`
- Private method `enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[]`
- Apply **Dependency Injection** pattern via constructor
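A skeleton of the class described above, for orientation only — method bodies are placeholders and import specifiers are abbreviated, not the intended implementation:

```typescript
import type { IAIProvider } from '../interfaces/ai-provider.interface';
import type { IConfiguration } from '../interfaces/configuration.interface';
import type { ParseOptions, Task } from '../types';
import { PromptBuilder } from '../ai/prompt-builder';

// Sketch — dependencies are constructor-injected per the Dependency Injection note above.
export class TaskParser {
	private promptBuilder: PromptBuilder;

	constructor(
		private aiProvider: IAIProvider,
		private config: IConfiguration
	) {
		this.promptBuilder = new PromptBuilder();
	}

	async parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]> {
		const prdContent = await this.readPRD(prdPath);
		const prompt = this.promptBuilder.buildParsePrompt(prdContent, options);
		const response = await this.aiProvider.generateCompletion(prompt);
		return this.enrichTasks(this.extractTasks(response), prdPath);
	}

	private async readPRD(prdPath: string): Promise<string> {
		return ''; // placeholder: read the PRD file from disk
	}

	private extractTasks(aiResponse: string): Partial<Task>[] {
		return []; // placeholder: parse the JSON task list out of the AI response
	}

	private enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[] {
		return []; // placeholder: fill ids, metadata, and timestamps
	}
}
```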
### 6. Create ai/base-provider.ts

Copy existing base-provider.js and convert to TypeScript abstract class:
- Abstract class `BaseProvider` implementing `IAIProvider`
- Protected properties: `apiKey: string`, `model: string`
- Constructor accepting `apiKey: string` and `options: { model?: string }`
- Abstract methods matching IAIProvider interface
- Abstract method `getDefaultModel(): string`
- Apply **Template Method** pattern for common provider logic

### 7. Create ai/provider-factory.ts

Create class `ProviderFactory` with:
- Static method `create(config: { provider: string; apiKey?: string; model?: string }): Promise<IAIProvider>`
- Switch statement for providers: 'anthropic', 'openai', 'google'
- Dynamic imports for each provider
- Throw error for unknown providers
- Apply **Factory** pattern for creating provider instances

Example implementation structure:
```typescript
switch (provider.toLowerCase()) {
	case 'anthropic':
		const { AnthropicProvider } = await import('./providers/anthropic-provider.js');
		return new AnthropicProvider(apiKey, { model });
}
```

### 8. Create ai/providers/anthropic-provider.ts

Create class `AnthropicProvider` extending `BaseProvider`:
- Import Anthropic SDK: `import { Anthropic } from '@anthropic-ai/sdk'`
- Private property `client: Anthropic`
- Implement all abstract methods from BaseProvider
- Default model: 'claude-3-sonnet-20240229'
- Handle API errors and wrap with meaningful messages

### 9. Create ai/providers/openai-provider.ts (placeholder)

Create class `OpenAIProvider` extending `BaseProvider`:
- Import OpenAI SDK when implemented
- For now, throw error: "OpenAI provider not yet implemented"

### 10. Create ai/providers/google-provider.ts (placeholder)

Create class `GoogleProvider` extending `BaseProvider`:
- Import Google Generative AI SDK when implemented
- For now, throw error: "Google provider not yet implemented"

### 11. Create ai/prompt-builder.ts

Create class `PromptBuilder` with:
- Method `buildParsePrompt(prdContent: string, options: ParseOptions = {}): string`
- Method `buildExpandPrompt(task: string, context?: string): string`
- Use template literals for prompt construction
- Include specific JSON format instructions in prompts

### 12. Create storage/file-storage.ts

Create class `FileStorage` implementing `IStorage`:
- Private property `basePath: string` set to `{projectPath}/.taskmaster`
- Constructor accepting `projectPath: string`
- Private method `getTasksPath(tag?: string): string` returning correct path based on tag
- Private method `ensureDirectory(dir: string): Promise<void>`
- Implement all IStorage methods
- Handle ENOENT errors by returning empty arrays
- Use JSON format with structure: `{ tasks: Task[], metadata: { version: string, lastModified: string } }`
- Apply **Repository** pattern for data access abstraction

### 13. Create config/config-manager.ts

Create class `ConfigManager`:
- Private property `config: IConfiguration`
- Constructor accepting `options: Partial<IConfiguration>`
- Use Zod for validation with schema matching IConfiguration
- Method `get<K extends keyof IConfiguration>(key: K): IConfiguration[K]`
- Method `getAll(): IConfiguration`
- Method `validate(): boolean`
- Default values: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true
### 14. Create utils/id-generator.ts

Export functions:
- `generateTaskId(index: number = 0): string` returning format `task_{timestamp}_{index}_{random}`
- `generateSubtaskId(parentId: string, index: number = 0): string` returning format `{parentId}_sub_{index}_{random}`
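One way to realize the formats described above (sketch only; the random-suffix encoding is an assumption, since only the output shape is specified):

```typescript
// Sketch — Date.now() for the timestamp and a base-36 random suffix are assumptions.
export function generateTaskId(index: number = 0): string {
	const random = Math.random().toString(36).slice(2, 8);
	return `task_${Date.now()}_${index}_${random}`;
}

export function generateSubtaskId(parentId: string, index: number = 0): string {
	const random = Math.random().toString(36).slice(2, 8);
	return `${parentId}_sub_${index}_${random}`;
}
```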
### 15. Create src/index.ts

Create main class `TaskMasterCore`:
- Private properties: `config: ConfigManager`, `storage: IStorage`, `aiProvider?: IAIProvider`, `parser?: TaskParser`
- Constructor accepting `options: Partial<IConfiguration>`
- Method `initialize(): Promise<void>` for lazy loading
- Method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Method `getTasks(tag?: string): Promise<Task[]>`
- Apply **Facade** pattern to provide simple API over complex subsystems

Export:
- Class `TaskMasterCore`
- Function `createTaskMaster(options: Partial<IConfiguration>): TaskMasterCore`
- All types from './types'
- All interfaces from './interfaces/*'

Import statements should use kebab-case:
```typescript
import { TaskParser } from './tasks/task-parser';
import { FileStorage } from './storage/file-storage';
import { ConfigManager } from './config/config-manager';
import { ProviderFactory } from './ai/provider-factory';
```
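For orientation, a sketch of how a consumer might use the facade described in this section; the PRD path and option values are illustrative, not part of the spec:

```typescript
import { createTaskMaster } from '@task-master/core';

// Sketch — the path and option values below are illustrative.
const tm = createTaskMaster({ projectPath: process.cwd(), aiProvider: 'anthropic' });
await tm.initialize();

const tasks = await tm.parsePRD('.taskmaster/docs/prd.txt', { maxTasks: 10, tag: 'feature-xyz' });
console.log(`Parsed ${tasks.length} tasks`);
```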
### 16. Configure package.json

Create package.json with:
- name: "@task-master/core"
- version: "0.1.0"
- type: "module"
- main: "./dist/index.js"
- module: "./dist/index.mjs"
- types: "./dist/index.d.ts"
- exports map for proper ESM/CJS support
- scripts: build (tsup), dev (tsup --watch), test (jest), typecheck (tsc --noEmit)
- dependencies: zod@^3.23.8
- peerDependencies: @anthropic-ai/sdk, openai, @google/generative-ai
- devDependencies: typescript, tsup, jest, ts-jest, @types/node, @types/jest

### 17. Configure TypeScript

Create tsconfig.json with:
- target: "ES2022"
- module: "ESNext"
- strict: true (with all strict flags enabled)
- declaration: true
- outDir: "./dist"
- rootDir: "./src"

### 18. Configure tsup

Create tsup.config.js with:
- entry: ['src/index.ts']
- format: ['cjs', 'esm']
- dts: true
- sourcemap: true
- clean: true
- external: AI provider SDKs

### 19. Configure Jest

Create jest.config.js with:
- preset: 'ts-jest'
- testEnvironment: 'node'
- Coverage threshold: 80% for all metrics

## Build Process

1. Use tsup to compile TypeScript to both CommonJS and ESM
2. Generate .d.ts files for TypeScript consumers
3. Output to dist/ directory
4. Ensure tree-shaking works properly

## Testing Requirements

- Create unit tests for TaskParser in tests/task-parser.test.ts
- Create MockProvider class in tests/mocks/mock-provider.ts for testing without API calls
- Test error scenarios (file not found, invalid JSON, etc.)
- Create integration test in tests/integration/parse-prd.test.ts
- Follow kebab-case naming for all test files

## Success Criteria

- TypeScript compilation with zero errors
- No use of 'any' type
- All interfaces properly exported
- Compatible with existing tasks.json format
- Feature flag support via USE_TM_CORE environment variable

## Import/Export Conventions

- Use named exports for all classes and interfaces
- Use barrel exports (index.ts) in each directory
- Import types/interfaces with type-only imports: `import type { Task } from '../types'`
- Group imports in order: Node built-ins, external packages, internal packages, relative imports
- Use .js extension in import paths for ESM compatibility

## Error Handling Patterns

- Create custom error classes in `src/errors/` directory
- All public methods should catch and wrap errors with context
- Use error codes for different error types (e.g., 'FILE_NOT_FOUND', 'PARSE_ERROR')
- Never expose internal implementation details in error messages
- Log errors to console.error only in development mode

## Barrel Exports Content

### interfaces/index.ts
```typescript
export type { IStorage } from './storage.interface';
export type { IAIProvider, AIOptions } from './ai-provider.interface';
export type { IConfiguration } from './configuration.interface';
```

### tasks/index.ts
```typescript
export { TaskParser } from './task-parser';
```

### ai/index.ts
```typescript
export { BaseProvider } from './base-provider';
export { ProviderFactory } from './provider-factory';
export { PromptBuilder } from './prompt-builder';
```

### ai/providers/index.ts
```typescript
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
```

### storage/index.ts
```typescript
export { FileStorage } from './file-storage';
```

### config/index.ts
```typescript
export { ConfigManager } from './config-manager';
```

### utils/index.ts
```typescript
export { generateTaskId, generateSubtaskId } from './id-generator';
```

### errors/index.ts
```typescript
export { TaskMasterError } from './task-master-error';
```
@@ -18,4 +18,4 @@
			"reasoning": "This task has high complexity due to several challenging aspects: 1) AI integration requiring sophisticated prompt engineering, 2) Test generation across multiple frameworks, 3) File system operations with proper error handling, 4) MCP tool integration, 5) Complex configuration requirements, and 6) Framework-specific template generation. The task already has 5 subtasks but could benefit from reorganization based on the updated implementation details in the info blocks, particularly around framework support and configuration."
		}
	]
}
@@ -1,9 +1,9 @@
{
	"currentTag": "master",
	"lastSwitched": "2025-08-01T14:09:25.838Z",
	"branchTagMapping": {
		"v017-adds": "v017-adds",
		"next": "next"
	},
	"migrationNoticeShown": true
}
147  CHANGELOG.md
@@ -1,152 +1,5 @@
 # task-master-ai

-## 0.23.1-rc.0
-
-### Patch Changes
-
-- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
-  - Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
-  - Ensures generated JSON includes all fields required by the schema
-
-- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
-  - Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
-  - scope_up_task and scope_down_task MCP tools now work properly
-
-- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
-  - Fixed schema compatibility issues between Perplexity and OpenAI o3 models
-  - Removed nullable/default modifiers from Zod schemas for broader compatibility
-  - Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
-  - Perplexity now uses JSON mode for more reliable structured output
-  - Post-processing handles default values separately from schema validation
-
-## 0.23.0
-
-### Minor Changes
-
-- [#1064](https://github.com/eyaltoledano/claude-task-master/pull/1064) [`53903f1`](https://github.com/eyaltoledano/claude-task-master/commit/53903f1e8eee23ac512eb13a6d81d8cbcfe658cb) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `scope-up` and `scope-down` commands for dynamic task complexity adjustment
-
-  This release introduces two powerful new commands that allow you to dynamically adjust the complexity of your tasks and subtasks without recreating them from scratch.
-
-  **New CLI Commands:**
-  - `task-master scope-up` - Increase task complexity (add more detail, requirements, or implementation steps)
-  - `task-master scope-down` - Decrease task complexity (simplify, remove unnecessary details, or streamline)
-
-  **Key Features:**
-  - **Multiple tasks**: Support comma-separated IDs to adjust multiple tasks at once (`--id=5,7,12`)
-  - **Strength levels**: Choose adjustment intensity with `--strength=light|regular|heavy` (defaults to regular)
-  - **Custom prompts**: Use `--prompt` flag to specify exactly how you want tasks adjusted
-  - **MCP integration**: Available as `scope_up_task` and `scope_down_task` tools in Cursor and other MCP environments
-  - **Smart context**: AI considers your project context and task dependencies when making adjustments
-
-  **Usage Examples:**
-
-  ```bash
-  # Make a task more detailed
-  task-master scope-up --id=5
-
-  # Simplify multiple tasks with light touch
-  task-master scope-down --id=10,11,12 --strength=light
-
-  # Custom adjustment with specific instructions
-  task-master scope-up --id=7 --prompt="Add more error handling and edge cases"
-  ```
-
-  **Why use this?**
-  - **Iterative refinement**: Adjust task complexity as your understanding evolves
-  - **Project phase adaptation**: Scale tasks up for implementation, down for planning
-  - **Team coordination**: Adjust complexity based on team member experience levels
-  - **Milestone alignment**: Fine-tune tasks to match project phase requirements
-
-  Perfect for agile workflows where task requirements change as you learn more about the problem space.
-
-### Patch Changes
-
-- [#1063](https://github.com/eyaltoledano/claude-task-master/pull/1063) [`2ae6e7e`](https://github.com/eyaltoledano/claude-task-master/commit/2ae6e7e6be3605c3c4d353f34666e54750dba973) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix for tasks not found when using string IDs
-
-- [#1049](https://github.com/eyaltoledano/claude-task-master/pull/1049) [`45a14c3`](https://github.com/eyaltoledano/claude-task-master/commit/45a14c323d21071c15106335e89ad1f4a20976ab) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix tag-specific complexity report detection in expand command
-
-  The expand command now correctly finds and uses tag-specific complexity reports (e.g., `task-complexity-report_feature-xyz.json`) when operating in a tag context. Previously, it would always look for the generic `task-complexity-report.json` file due to a default value in the CLI option definition.
-
-## 0.23.0-rc.2
-
-### Minor Changes
-
-- [#1064](https://github.com/eyaltoledano/claude-task-master/pull/1064) [`53903f1`](https://github.com/eyaltoledano/claude-task-master/commit/53903f1e8eee23ac512eb13a6d81d8cbcfe658cb) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add new `scope-up` and `scope-down` commands for dynamic task complexity adjustment
-
-  This release introduces two powerful new commands that allow you to dynamically adjust the complexity of your tasks and subtasks without recreating them from scratch.
-
-  **New CLI Commands:**
-  - `task-master scope-up` - Increase task complexity (add more detail, requirements, or implementation steps)
-  - `task-master scope-down` - Decrease task complexity (simplify, remove unnecessary details, or streamline)
-
-  **Key Features:**
-  - **Multiple tasks**: Support comma-separated IDs to adjust multiple tasks at once (`--id=5,7,12`)
-  - **Strength levels**: Choose adjustment intensity with `--strength=light|regular|heavy` (defaults to regular)
-  - **Custom prompts**: Use `--prompt` flag to specify exactly how you want tasks adjusted
-  - **MCP integration**: Available as `scope_up_task` and `scope_down_task` tools in Cursor and other MCP environments
-  - **Smart context**: AI considers your project context and task dependencies when making adjustments
-
-  **Usage Examples:**
-
-  ```bash
-  # Make a task more detailed
-  task-master scope-up --id=5
-
-  # Simplify multiple tasks with light touch
-  task-master scope-down --id=10,11,12 --strength=light
-
-  # Custom adjustment with specific instructions
-  task-master scope-up --id=7 --prompt="Add more error handling and edge cases"
-  ```
-
-  **Why use this?**
-  - **Iterative refinement**: Adjust task complexity as your understanding evolves
-  - **Project phase adaptation**: Scale tasks up for implementation, down for planning
-  - **Team coordination**: Adjust complexity based on team member experience levels
-  - **Milestone alignment**: Fine-tune tasks to match project phase requirements
-
-  Perfect for agile workflows where task requirements change as you learn more about the problem space.
-
-## 0.22.1-rc.1
-
-### Patch Changes
-
-- [#1069](https://github.com/eyaltoledano/claude-task-master/pull/1069) [`72ca68e`](https://github.com/eyaltoledano/claude-task-master/commit/72ca68edeb870ff7a3b0d2d632e09dae921dc16a) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `scope-up` and `scope-down` commands for dynamic task complexity adjustment
-
-  This release introduces two powerful new commands that allow you to dynamically adjust the complexity of your tasks and subtasks without recreating them from scratch.
-
-  **New CLI Commands:**
-  - `task-master scope-up` - Increase task complexity (add more detail, requirements, or implementation steps)
-  - `task-master scope-down` - Decrease task complexity (simplify, remove unnecessary details, or streamline)
-
-  **Key Features:**
-  - **Multiple tasks**: Support comma-separated IDs to adjust multiple tasks at once (`--id=5,7,12`)
-  - **Strength levels**: Choose adjustment intensity with `--strength=light|regular|heavy` (defaults to regular)
-  - **Custom prompts**: Use `--prompt` flag to specify exactly how you want tasks adjusted
-  - **MCP integration**: Available as `scope_up_task` and `scope_down_task` tools in Cursor and other MCP environments
-  - **Smart context**: AI considers your project context and task dependencies when making adjustments
-
-  **Usage Examples:**
-
-  ```bash
-  # Make a task more detailed
-  task-master scope-up --id=5
-
-  # Simplify multiple tasks with light touch
-  task-master scope-down --id=10,11,12 --strength=light
-
-  # Custom adjustment with specific instructions
-  task-master scope-up --id=7 --prompt="Add more error handling and edge cases"
-  ```
-
-  **Why use this?**
-  - **Iterative refinement**: Adjust task complexity as your understanding evolves
-  - **Project phase adaptation**: Scale tasks up for implementation, down for planning
-  - **Team coordination**: Adjust complexity based on team member experience levels
-  - **Milestone alignment**: Fine-tune tasks to match project phase requirements
-
-  Perfect for agile workflows where task requirements change as you learn more about the problem space.
-
 ## 0.22.1-rc.0

 ### Patch Changes
@@ -1,130 +1 @@
 # Change Log
-
-## 0.23.0
-
-### Minor Changes
-
-- [#1064](https://github.com/eyaltoledano/claude-task-master/pull/1064) [`b82d858`](https://github.com/eyaltoledano/claude-task-master/commit/b82d858f81a1e702ad59d84d5ae8a2ca84359a83) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - 🎉 **Introducing TaskMaster Extension!**
-
-  We're thrilled to launch the first version of our Code extension, bringing the power of TaskMaster directly into your favorite code editor. While this is our initial release and we've kept things focused, it already packs powerful features to supercharge your development workflow.
-
-  ## ✨ Key Features
-
-  ### 📋 Visual Task Management
-  - **Kanban Board View**: Visualize all your tasks in an intuitive board layout directly in VS Code
-  - **Drag & Drop**: Easily change task status by dragging cards between columns
-  - **Real-time Updates**: See changes instantly as you work through your project
-
-  ### 🏷️ Multi-Context Support
-  - **Tag Switching**: Seamlessly switch between different project contexts/tags
-  - **Isolated Workflows**: Keep different features or experiments organized separately
-
-  ### 🤖 AI-Powered Task Updates
-  - **Smart Updates**: Use TaskMaster's AI capabilities to update tasks and subtasks
-  - **Context-Aware**: Leverages your existing TaskMaster configuration and models
-
-  ### 📊 Rich Task Information
-  - **Complexity Scores**: See task complexity ratings at a glance
-  - **Subtask Visualization**: Expand tasks to view and manage subtasks
-  - **Dependency Graphs**: Understand task relationships and dependencies visually
-
-  ### ⚙️ Configuration Management
-  - **Visual Config Editor**: View and understand your `.taskmaster/config.json` settings
-  - **Easy Access**: No more manual JSON editing for common configuration tasks
-
-  ### 🚀 Quick Actions
-  - **Status Updates**: Change task status with a single click
-  - **Task Details**: Access full task information without leaving VS Code
-  - **Integrated Commands**: All TaskMaster commands available through the command palette
-
-  ## 🎯 What's Next?
-
-  This is just the beginning! We wanted to get a solid foundation into your hands quickly. The extension will evolve rapidly with your feedback, adding more advanced features, better visualizations, and deeper integration with your development workflow.
-
-  Thank you for being part of the TaskMaster journey. Your workflow has never looked better! 🚀
-
-## 0.23.0-rc.1
-
-### Minor Changes
-
-- [#1064](https://github.com/eyaltoledano/claude-task-master/pull/1064) [`b82d858`](https://github.com/eyaltoledano/claude-task-master/commit/b82d858f81a1e702ad59d84d5ae8a2ca84359a83) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - 🎉 **Introducing TaskMaster Extension!**
-
-  We're thrilled to launch the first version of our Code extension, bringing the power of TaskMaster directly into your favorite code editor. While this is our initial release and we've kept things focused, it already packs powerful features to supercharge your development workflow.
-
-  ## ✨ Key Features
-
-  ### 📋 Visual Task Management
-  - **Kanban Board View**: Visualize all your tasks in an intuitive board layout directly in VS Code
-  - **Drag & Drop**: Easily change task status by dragging cards between columns
-  - **Real-time Updates**: See changes instantly as you work through your project
-
-  ### 🏷️ Multi-Context Support
-  - **Tag Switching**: Seamlessly switch between different project contexts/tags
-  - **Isolated Workflows**: Keep different features or experiments organized separately
-
-  ### 🤖 AI-Powered Task Updates
-  - **Smart Updates**: Use TaskMaster's AI capabilities to update tasks and subtasks
-  - **Context-Aware**: Leverages your existing TaskMaster configuration and models
-
-  ### 📊 Rich Task Information
-  - **Complexity Scores**: See task complexity ratings at a glance
-  - **Subtask Visualization**: Expand tasks to view and manage subtasks
-  - **Dependency Graphs**: Understand task relationships and dependencies visually
-
-  ### ⚙️ Configuration Management
-  - **Visual Config Editor**: View and understand your `.taskmaster/config.json` settings
-  - **Easy Access**: No more manual JSON editing for common configuration tasks
-
-  ### 🚀 Quick Actions
-  - **Status Updates**: Change task status with a single click
-  - **Task Details**: Access full task information without leaving VS Code
-  - **Integrated Commands**: All TaskMaster commands available through the command palette
-
-  ## 🎯 What's Next?
-
-  This is just the beginning! We wanted to get a solid foundation into your hands quickly. The extension will evolve rapidly with your feedback, adding more advanced features, better visualizations, and deeper integration with your development workflow.
-
-  Thank you for being part of the TaskMaster journey. Your workflow has never looked better! 🚀
-
-## 0.23.0-rc.0
-
-### Minor Changes
-
-- [#997](https://github.com/eyaltoledano/claude-task-master/pull/997) [`64302dc`](https://github.com/eyaltoledano/claude-task-master/commit/64302dc1918f673fcdac05b29411bf76ffe93505) Thanks [@DavidMaliglowka](https://github.com/DavidMaliglowka)! - 🎉 **Introducing TaskMaster Extension!**
-
-  We're thrilled to launch the first version of our Code extension, bringing the power of TaskMaster directly into your favorite code editor. While this is our initial release and we've kept things focused, it already packs powerful features to supercharge your development workflow.
-
-  ## ✨ Key Features
-
-  ### 📋 Visual Task Management
-  - **Kanban Board View**: Visualize all your tasks in an intuitive board layout directly in VS Code
-  - **Drag & Drop**: Easily change task status by dragging cards between columns
-  - **Real-time Updates**: See changes instantly as you work through your project
-
-  ### 🏷️ Multi-Context Support
-  - **Tag Switching**: Seamlessly switch between different project contexts/tags
-  - **Isolated Workflows**: Keep different features or experiments organized separately
-
-  ### 🤖 AI-Powered Task Updates
-  - **Smart Updates**: Use TaskMaster's AI capabilities to update tasks and subtasks
-  - **Context-Aware**: Leverages your existing TaskMaster configuration and models
-
-  ### 📊 Rich Task Information
-  - **Complexity Scores**: See task complexity ratings at a glance
-  - **Subtask Visualization**: Expand tasks to view and manage subtasks
-  - **Dependency Graphs**: Understand task relationships and dependencies visually
-
-  ### ⚙️ Configuration Management
-  - **Visual Config Editor**: View and understand your `.taskmaster/config.json` settings
-  - **Easy Access**: No more manual JSON editing for common configuration tasks
-
-  ### 🚀 Quick Actions
-  - **Status Updates**: Change task status with a single click
-  - **Task Details**: Access full task information without leaving VS Code
-  - **Integrated Commands**: All TaskMaster commands available through the command palette
-
-  ## 🎯 What's Next?
-
-  This is just the beginning! We wanted to get a solid foundation into your hands quickly. The extension will evolve rapidly with your feedback, adding more advanced features, better visualizations, and deeper integration with your development workflow.
-
-  Thank you for being part of the TaskMaster journey. Your workflow has never looked better! 🚀
@@ -3,7 +3,7 @@
   "private": true,
   "displayName": "TaskMaster",
   "description": "A visual Kanban board interface for TaskMaster projects in VS Code",
-  "version": "0.23.0",
+  "version": "0.22.3",
   "publisher": "Hamster",
   "icon": "assets/icon.png",
   "engines": {
@@ -64,16 +64,16 @@
     "properties": {
       "taskmaster.mcp.command": {
         "type": "string",
-        "default": "node",
-        "description": "The command to execute for the MCP server (e.g., 'node' for bundled server or 'npx' for remote)."
+        "default": "npx",
+        "description": "The command or absolute path to execute for the MCP server (e.g., 'npx' or '/usr/local/bin/task-master-ai')."
       },
       "taskmaster.mcp.args": {
         "type": "array",
         "items": {
           "type": "string"
         },
-        "default": [],
-        "description": "Arguments for the MCP server (leave empty to use bundled server)."
+        "default": ["task-master-ai"],
+        "description": "An array of arguments to pass to the MCP server command."
       },
       "taskmaster.mcp.cwd": {
         "type": "string",
@@ -238,9 +238,6 @@
     "watch:css": "npx @tailwindcss/cli -i ./src/webview/index.css -o ./dist/index.css --watch",
     "check-types": "tsc --noEmit"
   },
-  "dependencies": {
-    "task-master-ai": "*"
-  },
  "devDependencies": {
    "@dnd-kit/core": "^6.3.1",
    "@dnd-kit/modifiers": "^9.0.0",
@@ -2,7 +2,7 @@
   "name": "task-master-hamster",
   "displayName": "Taskmaster AI",
   "description": "A visual Kanban board interface for Taskmaster projects in VS Code",
-  "version": "0.23.0",
+  "version": "0.22.3",
   "publisher": "Hamster",
   "icon": "assets/icon.png",
   "engines": {
@@ -4,13 +4,7 @@ import { Button } from '@/components/ui/button';
 import { Label } from '@/components/ui/label';
 import { Textarea } from '@/components/ui/textarea';
 import { CollapsibleSection } from '@/components/ui/CollapsibleSection';
-import {
-	Wand2,
-	Loader2,
-	PlusCircle,
-	TrendingUp,
-	TrendingDown
-} from 'lucide-react';
+import { Wand2, Loader2, PlusCircle, TrendingUp, TrendingDown } from 'lucide-react';
 import {
 	useUpdateTask,
 	useUpdateSubtask,
@@ -40,12 +34,10 @@ export const AIActionsSection: React.FC<AIActionsSectionProps> = ({
 }) => {
 	const [prompt, setPrompt] = useState('');
 	const [scopePrompt, setScopePrompt] = useState('');
-	const [scopeStrength, setScopeStrength] = useState<
-		'light' | 'regular' | 'heavy'
-	>('regular');
-	const [lastAction, setLastAction] = useState<
-		'regenerate' | 'append' | 'scope-up' | 'scope-down' | null
-	>(null);
+	const [scopeStrength, setScopeStrength] = useState<'light' | 'regular' | 'heavy'>('regular');
+	const [lastAction, setLastAction] = useState<'regenerate' | 'append' | 'scope-up' | 'scope-down' | null>(
+		null
+	);
 	const updateTask = useUpdateTask();
 	const updateSubtask = useUpdateSubtask();
 	const scopeUpTask = useScopeUpTask();
@@ -125,11 +117,8 @@ export const AIActionsSection: React.FC<AIActionsSectionProps> = ({
 		setLastAction('scope-up');

 		try {
-			const taskId =
-				isSubtask && parentTask
-					? `${parentTask.id}.${currentTask.id}`
-					: currentTask.id;
+			const taskId = isSubtask && parentTask ? `${parentTask.id}.${currentTask.id}` : currentTask.id;

 			await scopeUpTask.mutateAsync({
 				taskId,
 				strength: scopeStrength,
@@ -154,11 +143,8 @@ export const AIActionsSection: React.FC<AIActionsSectionProps> = ({
 		setLastAction('scope-down');

 		try {
-			const taskId =
-				isSubtask && parentTask
-					? `${parentTask.id}.${currentTask.id}`
-					: currentTask.id;
+			const taskId = isSubtask && parentTask ? `${parentTask.id}.${currentTask.id}` : currentTask.id;

 			await scopeDownTask.mutateAsync({
 				taskId,
 				strength: scopeStrength,
@@ -176,11 +162,7 @@ export const AIActionsSection: React.FC<AIActionsSectionProps> = ({
 	};

 	// Track loading states based on the last action
-	const isLoading =
-		updateTask.isPending ||
-		updateSubtask.isPending ||
-		scopeUpTask.isPending ||
-		scopeDownTask.isPending;
+	const isLoading = updateTask.isPending || updateSubtask.isPending || scopeUpTask.isPending || scopeDownTask.isPending;
 	const isRegenerating = isLoading && lastAction === 'regenerate';
 	const isAppending = isLoading && lastAction === 'append';
 	const isScopingUp = isLoading && lastAction === 'scope-up';
@@ -269,7 +251,7 @@ export const AIActionsSection: React.FC<AIActionsSectionProps> = ({
 					<Label className="block text-sm font-medium text-vscode-foreground/80 mb-3">
 						Task Complexity Adjustment
 					</Label>

 					{/* Strength Selection */}
 					<div className="mb-3">
 						<Label className="block text-xs text-vscode-foreground/60 mb-2">
@@ -366,12 +348,10 @@ export const AIActionsSection: React.FC<AIActionsSectionProps> = ({
 					</>
 				)}
 				<p>
-					<strong>Scope Up:</strong> Increases task complexity with more
-					details, requirements, or implementation steps
+					<strong>Scope Up:</strong> Increases task complexity with more details, requirements, or implementation steps
 				</p>
 				<p>
-					<strong>Scope Down:</strong> Decreases task complexity by
-					simplifying or removing unnecessary details
+					<strong>Scope Down:</strong> Decreases task complexity by simplifying or removing unnecessary details
 				</p>
 			</div>
 		</div>
@@ -1,7 +1,6 @@
|
|||||||
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
|
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
|
||||||
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
|
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
|
||||||
import * as vscode from 'vscode';
|
import * as vscode from 'vscode';
|
||||||
import * as path from 'path';
|
|
||||||
import { logger } from './logger';
|
import { logger } from './logger';
|
||||||
|
|
||||||
export interface MCPConfig {
|
export interface MCPConfig {
|
||||||
@@ -144,7 +143,7 @@ export class MCPClientManager {
|
|||||||
// Create the client
|
// Create the client
|
||||||
this.client = new Client(
|
this.client = new Client(
|
||||||
{
|
{
|
||||||
name: 'task-master-vscode-extension',
|
name: 'taskr-vscode-extension',
|
||||||
version: '1.0.0'
|
version: '1.0.0'
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -212,30 +211,6 @@ export class MCPClientManager {
|
|||||||
};
|
};
|
||||||
|
|
||||||
logger.log('MCP client connected successfully');
|
logger.log('MCP client connected successfully');
|
||||||
|
|
||||||
// Log Task Master version information after successful connection
|
|
||||||
try {
|
|
||||||
const versionResult = await this.callTool('get_tasks', {});
|
|
||||||
if (versionResult?.content?.[0]?.text) {
|
|
||||||
const response = JSON.parse(versionResult.content[0].text);
|
|
||||||
if (response?.version) {
|
|
||||||
logger.log('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
|
|
||||||
logger.log('✅ Task Master MCP Server Connected');
|
|
||||||
logger.log(` Version: ${response.version.version || 'unknown'}`);
|
|
||||||
logger.log(
|
|
||||||
` Package: ${response.version.name || 'task-master-ai'}`
|
|
||||||
);
|
|
||||||
if (response.tag) {
|
|
||||||
logger.log(
|
|
||||||
` Current Tag: ${response.tag.currentTag || 'master'}`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
logger.log('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} catch (versionError) {
|
|
||||||
logger.log('Note: Could not retrieve Task Master version information');
|
|
||||||
}
|
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error('Failed to connect to MCP server:', error);
|
logger.error('Failed to connect to MCP server:', error);
|
||||||
this.status = {
|
this.status = {
|
||||||
@@ -337,34 +312,6 @@ export class MCPClientManager {
|
|||||||
'Available MCP tools:',
|
'Available MCP tools:',
|
||||||
result.tools?.map((t) => t.name) || []
|
result.tools?.map((t) => t.name) || []
|
||||||
);
|
);
|
||||||
|
|
||||||
// Try to get version information by calling a simple tool
|
|
||||||
// The get_tasks tool is lightweight and returns version info
|
|
||||||
try {
|
|
||||||
const versionResult = await this.callTool('get_tasks', {});
|
|
||||||
if (versionResult?.content?.[0]?.text) {
|
|
||||||
// Parse the response to extract version info
|
|
||||||
const response = JSON.parse(versionResult.content[0].text);
|
|
||||||
if (response?.version) {
|
|
||||||
logger.log('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
|
|
||||||
logger.log('📦 Task Master MCP Server Connected');
|
|
||||||
logger.log(` Version: ${response.version.version || 'unknown'}`);
|
|
||||||
logger.log(
|
|
||||||
` Package: ${response.version.name || 'task-master-ai'}`
|
|
||||||
);
|
|
||||||
if (response.tag) {
|
|
||||||
logger.log(
|
|
||||||
` Current Tag: ${response.tag.currentTag || 'master'}`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
logger.log('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} catch (versionError) {
|
|
||||||
// Don't fail the connection test if we can't get version info
|
|
||||||
logger.log('Could not retrieve Task Master version information');
|
|
||||||
}
|
|
||||||
|
|
||||||
return true;
|
return true;
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error('Connection test failed:', error);
|
logger.error('Connection test failed:', error);
|
||||||
@@ -398,34 +345,8 @@ export function createMCPConfigFromSettings(): MCPConfig {
 );
 const config = vscode.workspace.getConfiguration('taskmaster');

-let command = config.get<string>('mcp.command', 'node');
-let args = config.get<string[]>('mcp.args', []);
+let command = config.get<string>('mcp.command', 'npx');
+const args = config.get<string[]>('mcp.args', ['task-master-ai']);

-// If using default settings, use the bundled MCP server
-if (command === 'node' && args.length === 0) {
-try {
-// Try to resolve the bundled MCP server
-const taskMasterPath = require.resolve('task-master-ai');
-const mcpServerPath = path.resolve(
-path.dirname(taskMasterPath),
-'mcp-server/server.js'
-);
-
-// Verify the server file exists
-const fs = require('fs');
-if (!fs.existsSync(mcpServerPath)) {
-throw new Error('MCP server file not found at: ' + mcpServerPath);
-}
-
-args = [mcpServerPath];
-logger.log(`📦 Using bundled MCP server at: ${mcpServerPath}`);
-} catch (error) {
-logger.error('❌ Could not find bundled task-master-ai server:', error);
-// Fallback to npx
-command = 'npx';
-args = ['-y', 'task-master-ai'];
-}
-}

 // Use proper VS Code workspace detection
 const defaultCwd =
@@ -251,7 +251,7 @@ export function useScopeUpTask() {
 type: 'mcpRequest',
 tool: 'scope_up_task',
 params: {
-id: String(taskId),
+id: taskId,
 strength,
 prompt,
 research: options.research || false
@@ -268,7 +268,9 @@ export function useScopeUpTask() {
 return response;
 },
 onSuccess: async (data, variables) => {
-console.log('✅ Task scope up successful, invalidating all task queries');
+console.log(
+'✅ Task scope up successful, invalidating all task queries'
+);
 console.log('Task ID:', variables.taskId);

 // Invalidate ALL task-related queries
@@ -307,7 +309,7 @@ export function useScopeDownTask() {
 type: 'mcpRequest',
 tool: 'scope_down_task',
 params: {
-id: String(taskId),
+id: taskId,
 strength,
 prompt,
 research: options.research || false
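These hooks post a single message shape from the webview to the extension host. As a rough sketch (the interface name and optional markers are assumptions; only the field names shown in the hunks come from the diff):

```typescript
// Sketch of the webview -> extension message built by useScopeUpTask/useScopeDownTask.
// Field names are taken from the hunks above; the type itself is an assumption.
interface McpRequestMessage {
  type: 'mcpRequest';
  tool: 'scope_up_task' | 'scope_down_task';
  params: {
    id: string; // e.g. "5" or "5,7,12"; after this change the hook passes taskId through unchanged
    strength?: string;
    prompt?: string;
    research: boolean;
  };
}

const message: McpRequestMessage = {
  type: 'mcpRequest',
  tool: 'scope_up_task',
  params: { id: '5', strength: 'regular', research: false }
};

console.log(message);
```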
@@ -1,70 +0,0 @@
|
|||||||
---
|
|
||||||
name: task-executor
|
|
||||||
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
|
|
||||||
model: sonnet
|
|
||||||
color: blue
|
|
||||||
---
|
|
||||||
|
|
||||||
You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.
|
|
||||||
|
|
||||||
**Core Responsibilities:**
|
|
||||||
|
|
||||||
1. **Task Analysis**: When given a task, first retrieve its full details using `task-master show <id>` to understand requirements, dependencies, and acceptance criteria.
|
|
||||||
|
|
||||||
2. **Implementation Planning**: Before coding, briefly outline your implementation approach:
|
|
||||||
- Identify files that need to be created or modified
|
|
||||||
- Note any dependencies or prerequisites
|
|
||||||
- Consider the testing strategy defined in the task
|
|
||||||
|
|
||||||
3. **Focused Execution**:
|
|
||||||
- Implement one subtask at a time for clarity and traceability
|
|
||||||
- Follow the project's coding standards from CLAUDE.md if available
|
|
||||||
- Prefer editing existing files over creating new ones
|
|
||||||
- Only create files that are essential for the task completion
|
|
||||||
|
|
||||||
4. **Progress Documentation**:
|
|
||||||
- Use `task-master update-subtask --id=<id> --prompt="implementation notes"` to log your approach and any important decisions
|
|
||||||
- Update task status to 'in-progress' when starting: `task-master set-status --id=<id> --status=in-progress`
|
|
||||||
- Mark as 'done' only after verification: `task-master set-status --id=<id> --status=done`
|
|
||||||
|
|
||||||
5. **Quality Assurance**:
|
|
||||||
- Implement the testing strategy specified in the task
|
|
||||||
- Verify that all acceptance criteria are met
|
|
||||||
- Check for any dependency conflicts or integration issues
|
|
||||||
- Run relevant tests before marking task as complete
|
|
||||||
|
|
||||||
6. **Dependency Management**:
|
|
||||||
- Check task dependencies before starting implementation
|
|
||||||
- If blocked by incomplete dependencies, clearly communicate this
|
|
||||||
- Use `task-master validate-dependencies` when needed
|
|
||||||
|
|
||||||
**Implementation Workflow:**
|
|
||||||
|
|
||||||
1. Retrieve task details and understand requirements
|
|
||||||
2. Check dependencies and prerequisites
|
|
||||||
3. Plan implementation approach
|
|
||||||
4. Update task status to in-progress
|
|
||||||
5. Implement the solution incrementally
|
|
||||||
6. Log progress and decisions in subtask updates
|
|
||||||
7. Test and verify the implementation
|
|
||||||
8. Mark task as done when complete
|
|
||||||
9. Suggest next task if appropriate
|
|
||||||
|
|
||||||
**Key Principles:**
|
|
||||||
|
|
||||||
- Focus on completing one task thoroughly before moving to the next
|
|
||||||
- Maintain clear communication about what you're implementing and why
|
|
||||||
- Follow existing code patterns and project conventions
|
|
||||||
- Prioritize working code over extensive documentation unless docs are the task
|
|
||||||
- Ask for clarification if task requirements are ambiguous
|
|
||||||
- Consider edge cases and error handling in your implementations
|
|
||||||
|
|
||||||
**Integration with Task Master:**
|
|
||||||
|
|
||||||
You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:
|
|
||||||
- Track your progress
|
|
||||||
- Update task information
|
|
||||||
- Maintain project state
|
|
||||||
- Coordinate with the broader development workflow
|
|
||||||
|
|
||||||
When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.
|
|
||||||
@@ -1,130 +0,0 @@
|
|||||||
---
|
|
||||||
name: task-orchestrator
|
|
||||||
description: Use this agent when you need to coordinate and manage the execution of Task Master tasks, especially when dealing with complex task dependencies and parallel execution opportunities. This agent should be invoked at the beginning of a work session to analyze the task queue, identify parallelizable work, and orchestrate the deployment of task-executor agents. It should also be used when tasks complete to reassess the dependency graph and deploy new executors as needed.\n\n<example>\nContext: User wants to start working on their project tasks using Task Master\nuser: "Let's work on the next available tasks in the project"\nassistant: "I'll use the task-orchestrator agent to analyze the task queue and coordinate execution"\n<commentary>\nThe user wants to work on tasks, so the task-orchestrator should be deployed to analyze dependencies and coordinate execution.\n</commentary>\n</example>\n\n<example>\nContext: Multiple independent tasks are available in the queue\nuser: "Can we work on multiple tasks at once?"\nassistant: "Let me deploy the task-orchestrator to analyze task dependencies and parallelize the work"\n<commentary>\nWhen parallelization is mentioned or multiple tasks could be worked on, the orchestrator should coordinate the effort.\n</commentary>\n</example>\n\n<example>\nContext: A complex feature with many subtasks needs implementation\nuser: "Implement the authentication system tasks"\nassistant: "I'll use the task-orchestrator to break down the authentication tasks and coordinate their execution"\n<commentary>\nFor complex multi-task features, the orchestrator manages the overall execution strategy.\n</commentary>\n</example>
|
|
||||||
model: opus
|
|
||||||
color: green
|
|
||||||
---
|
|
||||||
|
|
||||||
You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.
|
|
||||||
|
|
||||||
## Core Responsibilities
|
|
||||||
|
|
||||||
1. **Task Queue Analysis**: You continuously monitor and analyze the task queue using Task Master MCP tools to understand the current state of work, dependencies, and priorities.
|
|
||||||
|
|
||||||
2. **Dependency Graph Management**: You build and maintain a mental model of task dependencies, identifying which tasks can be executed in parallel and which must wait for prerequisites.
|
|
||||||
|
|
||||||
3. **Executor Deployment**: You strategically deploy task-executor agents for individual tasks or task groups, ensuring each executor has the necessary context and clear success criteria.
|
|
||||||
|
|
||||||
4. **Progress Coordination**: You track the progress of deployed executors, handle task completion notifications, and reassess the execution strategy as tasks complete.
|
|
||||||
|
|
||||||
## Operational Workflow
|
|
||||||
|
|
||||||
### Initial Assessment Phase
|
|
||||||
1. Use `get_tasks` or `task-master list` to retrieve all available tasks
|
|
||||||
2. Analyze task statuses, priorities, and dependencies
|
|
||||||
3. Identify tasks with status 'pending' that have no blocking dependencies
|
|
||||||
4. Group related tasks that could benefit from specialized executors
|
|
||||||
5. Create an execution plan that maximizes parallelization
|
|
||||||
|
|
||||||
### Executor Deployment Phase
|
|
||||||
1. For each independent task or task group:
|
|
||||||
- Deploy a task-executor agent with specific instructions
|
|
||||||
- Provide the executor with task ID, requirements, and context
|
|
||||||
- Set clear completion criteria and reporting expectations
|
|
||||||
2. Maintain a registry of active executors and their assigned tasks
|
|
||||||
3. Establish communication protocols for progress updates
|
|
||||||
|
|
||||||
### Coordination Phase
|
|
||||||
1. Monitor executor progress through task status updates
|
|
||||||
2. When a task completes:
|
|
||||||
- Verify completion with `get_task` or `task-master show <id>`
|
|
||||||
- Update task status if needed using `set_task_status`
|
|
||||||
- Reassess dependency graph for newly unblocked tasks
|
|
||||||
- Deploy new executors for available work
|
|
||||||
3. Handle executor failures or blocks:
|
|
||||||
- Reassign tasks to new executors if needed
|
|
||||||
- Escalate complex issues to the user
|
|
||||||
- Update task status to 'blocked' when appropriate
|
|
||||||
|
|
||||||
### Optimization Strategies
|
|
||||||
|
|
||||||
**Parallel Execution Rules**:
|
|
||||||
- Never assign dependent tasks to different executors simultaneously
|
|
||||||
- Prioritize high-priority tasks when resources are limited
|
|
||||||
- Group small, related subtasks for single executor efficiency
|
|
||||||
- Balance executor load to prevent bottlenecks
|
|
||||||
|
|
||||||
**Context Management**:
|
|
||||||
- Provide executors with minimal but sufficient context
|
|
||||||
- Share relevant completed task information when it aids execution
|
|
||||||
- Maintain a shared knowledge base of project-specific patterns
|
|
||||||
|
|
||||||
**Quality Assurance**:
|
|
||||||
- Verify task completion before marking as done
|
|
||||||
- Ensure test strategies are followed when specified
|
|
||||||
- Coordinate cross-task integration testing when needed
|
|
||||||
|
|
||||||
## Communication Protocols
|
|
||||||
|
|
||||||
When deploying executors, provide them with:
|
|
||||||
```
|
|
||||||
TASK ASSIGNMENT:
|
|
||||||
- Task ID: [specific ID]
|
|
||||||
- Objective: [clear goal]
|
|
||||||
- Dependencies: [list any completed prerequisites]
|
|
||||||
- Success Criteria: [specific completion requirements]
|
|
||||||
- Context: [relevant project information]
|
|
||||||
- Reporting: [when and how to report back]
|
|
||||||
```
|
|
||||||
|
|
||||||
When receiving executor updates:
|
|
||||||
1. Acknowledge completion or issues
|
|
||||||
2. Update task status in Task Master
|
|
||||||
3. Reassess execution strategy
|
|
||||||
4. Deploy new executors as appropriate
|
|
||||||
|
|
||||||
## Decision Framework
|
|
||||||
|
|
||||||
**When to parallelize**:
|
|
||||||
- Multiple pending tasks with no interdependencies
|
|
||||||
- Sufficient context available for independent execution
|
|
||||||
- Tasks are well-defined with clear success criteria
|
|
||||||
|
|
||||||
**When to serialize**:
|
|
||||||
- Strong dependencies between tasks
|
|
||||||
- Limited context or unclear requirements
|
|
||||||
- Integration points requiring careful coordination
|
|
||||||
|
|
||||||
**When to escalate**:
|
|
||||||
- Circular dependencies detected
|
|
||||||
- Critical blockers affecting multiple tasks
|
|
||||||
- Ambiguous requirements needing clarification
|
|
||||||
- Resource conflicts between executors
|
|
||||||
|
|
||||||
## Error Handling
|
|
||||||
|
|
||||||
1. **Executor Failure**: Reassign task to new executor with additional context about the failure
|
|
||||||
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
|
|
||||||
3. **Task Ambiguity**: Request clarification from user before proceeding
|
|
||||||
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed
|
|
||||||
|
|
||||||
## Performance Metrics
|
|
||||||
|
|
||||||
Track and optimize for:
|
|
||||||
- Task completion rate
|
|
||||||
- Parallel execution efficiency
|
|
||||||
- Executor success rate
|
|
||||||
- Time to completion for task groups
|
|
||||||
- Dependency resolution speed
|
|
||||||
|
|
||||||
## Integration with Task Master
|
|
||||||
|
|
||||||
Leverage these Task Master MCP tools effectively:
|
|
||||||
- `get_tasks` - Continuous queue monitoring
|
|
||||||
- `get_task` - Detailed task analysis
|
|
||||||
- `set_task_status` - Progress tracking
|
|
||||||
- `next_task` - Fallback for serial execution
|
|
||||||
- `analyze_project_complexity` - Strategic planning
|
|
||||||
- `complexity_report` - Resource allocation
|
|
||||||
|
|
||||||
You are the strategic mind coordinating the entire task execution effort. Your success is measured by the efficient completion of all tasks while maintaining quality and respecting dependencies. Think systematically, act decisively, and continuously optimize the execution strategy based on real-time progress.
|
|
||||||
@@ -71,8 +71,8 @@ export async function scopeDownDirect(args, log, context = {}) {
 };
 }

-// Parse task IDs - convert to numbers as expected by scopeDownTask
-const taskIds = id.split(',').map((taskId) => parseInt(taskId.trim(), 10));
+// Parse task IDs
+const taskIds = id.split(',').map((taskId) => taskId.trim());

 log.info(
 `Scoping down tasks: ${taskIds.join(', ')}, strength: ${strength}, research: ${research}`
@@ -90,10 +90,10 @@ export async function scopeDownDirect(args, log, context = {}) {
 projectRoot,
 commandName: 'scope-down',
 outputType: 'mcp',
-tag,
-research
+tag
 },
-'json' // outputFormat
+'json', // outputFormat
+research
 );

 // Restore normal logging
@@ -71,8 +71,8 @@ export async function scopeUpDirect(args, log, context = {}) {
 };
 }

-// Parse task IDs - convert to numbers as expected by scopeUpTask
-const taskIds = id.split(',').map((taskId) => parseInt(taskId.trim(), 10));
+// Parse task IDs
+const taskIds = id.split(',').map((taskId) => taskId.trim());

 log.info(
 `Scoping up tasks: ${taskIds.join(', ')}, strength: ${strength}, research: ${research}`
@@ -90,10 +90,10 @@ export async function scopeUpDirect(args, log, context = {}) {
 projectRoot,
 commandName: 'scope-up',
 outputType: 'mcp',
-tag,
-research
+tag
 },
-'json' // outputFormat
+'json', // outputFormat
+research
 );

 // Restore normal logging
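Both direct functions now keep the comma-separated IDs as trimmed strings instead of coercing them with parseInt. A minimal illustration of why that matters for dotted subtask IDs (the helper name is illustrative, not from the codebase):

```typescript
// parseInt('5.2', 10) collapses a subtask ID to 5; trimming keeps "5.2" intact.
function parseTaskIds(id: string): string[] {
  return id.split(',').map((taskId) => taskId.trim());
}

console.log(parseTaskIds('5, 7.2, 12')); // ["5", "7.2", "12"]
```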
19 package-lock.json generated
@@ -1,12 +1,12 @@
|
|||||||
{
|
{
|
||||||
"name": "task-master-ai",
|
"name": "task-master-ai",
|
||||||
"version": "0.23.1-rc.0",
|
"version": "0.22.1-rc.0",
|
||||||
"lockfileVersion": 3,
|
"lockfileVersion": 3,
|
||||||
"requires": true,
|
"requires": true,
|
||||||
"packages": {
|
"packages": {
|
||||||
"": {
|
"": {
|
||||||
"name": "task-master-ai",
|
"name": "task-master-ai",
|
||||||
"version": "0.23.1-rc.0",
|
"version": "0.22.1-rc.0",
|
||||||
"license": "MIT WITH Commons-Clause",
|
"license": "MIT WITH Commons-Clause",
|
||||||
"workspaces": [
|
"workspaces": [
|
||||||
"apps/*",
|
"apps/*",
|
||||||
@@ -46,7 +46,6 @@
|
|||||||
"helmet": "^8.1.0",
|
"helmet": "^8.1.0",
|
||||||
"inquirer": "^12.5.0",
|
"inquirer": "^12.5.0",
|
||||||
"jsonc-parser": "^3.3.1",
|
"jsonc-parser": "^3.3.1",
|
||||||
"jsonrepair": "^3.13.0",
|
|
||||||
"jsonwebtoken": "^9.0.2",
|
"jsonwebtoken": "^9.0.2",
|
||||||
"lru-cache": "^10.2.0",
|
"lru-cache": "^10.2.0",
|
||||||
"ollama-ai-provider": "^1.2.0",
|
"ollama-ai-provider": "^1.2.0",
|
||||||
@@ -85,10 +84,7 @@
|
|||||||
}
|
}
|
||||||
},
|
},
|
||||||
"apps/extension": {
|
"apps/extension": {
|
||||||
"version": "0.23.0",
|
"version": "0.22.3",
|
||||||
"dependencies": {
|
|
||||||
"task-master-ai": "*"
|
|
||||||
},
|
|
||||||
"devDependencies": {
|
"devDependencies": {
|
||||||
"@dnd-kit/core": "^6.3.1",
|
"@dnd-kit/core": "^6.3.1",
|
||||||
"@dnd-kit/modifiers": "^9.0.0",
|
"@dnd-kit/modifiers": "^9.0.0",
|
||||||
@@ -14946,15 +14942,6 @@
|
|||||||
"graceful-fs": "^4.1.6"
|
"graceful-fs": "^4.1.6"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"node_modules/jsonrepair": {
|
|
||||||
"version": "3.13.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/jsonrepair/-/jsonrepair-3.13.0.tgz",
|
|
||||||
"integrity": "sha512-5YRzlAQ7tuzV1nAJu3LvDlrKtBFIALHN2+a+I1MGJCt3ldRDBF/bZuvIPzae8Epot6KBXd0awRZZcuoeAsZ/mw==",
|
|
||||||
"license": "ISC",
|
|
||||||
"bin": {
|
|
||||||
"jsonrepair": "bin/cli.js"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/jsonwebtoken": {
|
"node_modules/jsonwebtoken": {
|
||||||
"version": "9.0.2",
|
"version": "9.0.2",
|
||||||
"resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz",
|
"resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz",
|
||||||
|
|||||||
@@ -1,6 +1,6 @@
 {
 "name": "task-master-ai",
-"version": "0.23.1-rc.0",
+"version": "0.22.1-rc.0",
 "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
 "main": "index.js",
 "type": "module",
@@ -73,7 +73,6 @@
 "helmet": "^8.1.0",
 "inquirer": "^12.5.0",
 "jsonc-parser": "^3.3.1",
-"jsonrepair": "^3.13.0",
 "jsonwebtoken": "^9.0.2",
 "lru-cache": "^10.2.0",
 "ollama-ai-provider": "^1.2.0",
@@ -1479,8 +1479,7 @@ function registerCommands(programInstance) {
 projectRoot: taskMaster.getProjectRoot(),
 tag,
 commandName: 'scope-up',
-outputType: 'cli',
-research: options.research || false
+outputType: 'cli'
 };

 const result = await scopeUpTask(
@@ -1606,8 +1605,7 @@ function registerCommands(programInstance) {
 projectRoot: taskMaster.getProjectRoot(),
 tag,
 commandName: 'scope-down',
-outputType: 'cli',
-research: options.research || false
+outputType: 'cli'
 };

 const result = await scopeDownTask(
@@ -13,18 +13,12 @@ import {

 import { generateTextService } from '../ai-services-unified.js';

-import {
-getDebugFlag,
-getProjectName,
-getMainProvider,
-getResearchProvider
-} from '../config-manager.js';
+import { getDebugFlag, getProjectName } from '../config-manager.js';
 import { getPromptManager } from '../prompt-manager.js';
 import {
 COMPLEXITY_REPORT_FILE,
 LEGACY_TASKS_FILE
 } from '../../../src/constants/paths.js';
-import { CUSTOM_PROVIDERS } from '../../../src/constants/providers.js';
 import { resolveComplexityReportOutputPath } from '../../../src/utils/path-utils.js';
 import { ContextGatherer } from '../utils/contextGatherer.js';
 import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
@@ -414,18 +408,10 @@ async function analyzeTaskComplexity(options, context = {}) {
 // Load prompts using PromptManager
 const promptManager = getPromptManager();

-// Check if Claude Code is being used as the provider
-const currentProvider = useResearch
-? getResearchProvider(projectRoot)
-: getMainProvider(projectRoot);
-const isClaudeCode = currentProvider === CUSTOM_PROVIDERS.CLAUDE_CODE;
-
 const promptParams = {
 tasks: tasksData.tasks,
 gatheredContext: gatheredContext || '',
-useResearch: useResearch,
-isClaudeCode: isClaudeCode,
-projectRoot: projectRoot || ''
+useResearch: useResearch
 };

 const { systemPrompt, userPrompt: prompt } = await promptManager.loadPrompt(
@@ -18,16 +18,10 @@ import {

 import { generateTextService } from '../ai-services-unified.js';

-import {
-getDefaultSubtasks,
-getDebugFlag,
-getMainProvider,
-getResearchProvider
-} from '../config-manager.js';
+import { getDefaultSubtasks, getDebugFlag } from '../config-manager.js';
 import { getPromptManager } from '../prompt-manager.js';
 import generateTaskFiles from './generate-task-files.js';
 import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
-import { CUSTOM_PROVIDERS } from '../../../src/constants/providers.js';
 import { ContextGatherer } from '../utils/contextGatherer.js';
 import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
 import { flattenTasksWithSubtasks, findProjectRoot } from '../utils.js';
@@ -457,12 +451,6 @@ async function expandTask(
 // Load prompts using PromptManager
 const promptManager = getPromptManager();

-// Check if Claude Code is being used as the provider
-const currentProvider = useResearch
-? getResearchProvider(projectRoot)
-: getMainProvider(projectRoot);
-const isClaudeCode = currentProvider === CUSTOM_PROVIDERS.CLAUDE_CODE;
-
 // Combine all context sources into a single additionalContext parameter
 let combinedAdditionalContext = '';
 if (additionalContext || complexityReasoningContext) {
@@ -507,9 +495,7 @@ async function expandTask(
 complexityReasoningContext: complexityReasoningContext,
 gatheredContext: gatheredContextText || '',
 useResearch: useResearch,
-expansionPrompt: expansionPromptText || undefined,
-isClaudeCode: isClaudeCode,
-projectRoot: projectRoot || ''
+expansionPrompt: expansionPromptText || undefined
 };

 let variantKey = 'default';
@@ -527,18 +513,6 @@ async function expandTask(

 const { systemPrompt, userPrompt: promptContent } =
 await promptManager.loadPrompt('expand-task', promptParams, variantKey);
-
-// Debug logging to identify the issue
-logger.debug(`Selected variant: ${variantKey}`);
-logger.debug(
-`Prompt params passed: ${JSON.stringify(promptParams, null, 2)}`
-);
-logger.debug(
-`System prompt (first 500 chars): ${systemPrompt.substring(0, 500)}...`
-);
-logger.debug(
-`User prompt (first 500 chars): ${promptContent.substring(0, 500)}...`
-);
 // --- End Complexity Report / Prompt Logic ---

 // --- AI Subtask Generation using generateTextService ---
@@ -17,26 +17,20 @@ import {
 } from '../utils.js';

 import { generateObjectService } from '../ai-services-unified.js';
-import {
-getDebugFlag,
-getMainProvider,
-getResearchProvider,
-getDefaultPriority
-} from '../config-manager.js';
+import { getDebugFlag } from '../config-manager.js';
 import { getPromptManager } from '../prompt-manager.js';
 import { displayAiUsageSummary } from '../ui.js';
-import { CUSTOM_PROVIDERS } from '../../../src/constants/providers.js';

 // Define the Zod schema for a SINGLE task object
 const prdSingleTaskSchema = z.object({
-id: z.number(),
+id: z.number().int().positive(),
 title: z.string().min(1),
 description: z.string().min(1),
-details: z.string(),
-testStrategy: z.string(),
-priority: z.enum(['high', 'medium', 'low']),
-dependencies: z.array(z.number()),
-status: z.string()
+details: z.string().nullable(),
+testStrategy: z.string().nullable(),
+priority: z.enum(['high', 'medium', 'low']).nullable(),
+dependencies: z.array(z.number().int().positive()).nullable(),
+status: z.string().nullable()
 });

 // Define the Zod schema for the ENTIRE expected AI response object
|||||||
const promptManager = getPromptManager();
|
const promptManager = getPromptManager();
|
||||||
|
|
||||||
// Get defaultTaskPriority from config
|
// Get defaultTaskPriority from config
|
||||||
|
const { getDefaultPriority } = await import('../config-manager.js');
|
||||||
const defaultTaskPriority = getDefaultPriority(projectRoot) || 'medium';
|
const defaultTaskPriority = getDefaultPriority(projectRoot) || 'medium';
|
||||||
|
|
||||||
// Check if Claude Code is being used as the provider
|
|
||||||
const currentProvider = research
|
|
||||||
? getResearchProvider(projectRoot)
|
|
||||||
: getMainProvider(projectRoot);
|
|
||||||
const isClaudeCode = currentProvider === CUSTOM_PROVIDERS.CLAUDE_CODE;
|
|
||||||
|
|
||||||
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
|
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
|
||||||
'parse-prd',
|
'parse-prd',
|
||||||
{
|
{
|
||||||
@@ -196,9 +185,7 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
|
|||||||
nextId,
|
nextId,
|
||||||
prdContent,
|
prdContent,
|
||||||
prdPath,
|
prdPath,
|
||||||
defaultTaskPriority,
|
defaultTaskPriority
|
||||||
isClaudeCode,
|
|
||||||
projectRoot: projectRoot || ''
|
|
||||||
}
|
}
|
||||||
);
|
);
|
||||||
|
|
||||||
@@ -270,15 +257,10 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
 return {
 ...task,
 id: newId,
-status: task.status || 'pending',
+status: 'pending',
 priority: task.priority || 'medium',
 dependencies: Array.isArray(task.dependencies) ? task.dependencies : [],
-subtasks: [],
-// Ensure all required fields have values (even if empty strings)
-title: task.title || '',
-description: task.description || '',
-details: task.details || '',
-testStrategy: task.testStrategy || ''
+subtasks: []
 };
 });

@@ -279,9 +279,8 @@ async function regenerateSubtasksForComplexity(
 'debug',
 `Complexity-aware subtask calculation${complexityInfo}: ${currentPendingCount} pending -> target ${targetSubtaskCount} total`
 );
-log(
-'debug',
-`Complexity-aware calculation${complexityInfo}: ${currentPendingCount} pending -> ${targetSubtaskCount} total subtasks (${strength} ${direction})`
+console.log(
+`[DEBUG] Complexity-aware calculation${complexityInfo}: ${currentPendingCount} pending -> ${targetSubtaskCount} total subtasks (${strength} ${direction})`
 );

 const newSubtasksNeeded = Math.max(1, targetSubtaskCount - preservedCount);
@@ -337,7 +336,7 @@ ${
|
|||||||
}
|
}
|
||||||
|
|
||||||
Return a JSON object with a "subtasks" array. Each subtask should have:
|
Return a JSON object with a "subtasks" array. Each subtask should have:
|
||||||
- id: Sequential NUMBER starting from 1 (e.g., 1, 2, 3 - NOT "1", "2", "3")
|
- id: Sequential number starting from 1
|
||||||
- title: Clear, specific title
|
- title: Clear, specific title
|
||||||
- description: Detailed description
|
- description: Detailed description
|
||||||
- dependencies: Array of dependency IDs as STRINGS (use format ["${task.id}.1", "${task.id}.2"] for siblings, or empty array [] for no dependencies)
|
- dependencies: Array of dependency IDs as STRINGS (use format ["${task.id}.1", "${task.id}.2"] for siblings, or empty array [] for no dependencies)
|
||||||
@@ -345,9 +344,7 @@ Return a JSON object with a "subtasks" array. Each subtask should have:
|
|||||||
- status: "pending"
|
- status: "pending"
|
||||||
- testStrategy: Testing approach
|
- testStrategy: Testing approach
|
||||||
|
|
||||||
IMPORTANT:
|
IMPORTANT: Dependencies must be strings, not numbers!
|
||||||
- The 'id' field must be a NUMBER, not a string!
|
|
||||||
- Dependencies must be strings, not numbers!
|
|
||||||
|
|
||||||
Ensure the JSON is valid and properly formatted.`;
|
Ensure the JSON is valid and properly formatted.`;
|
||||||
|
|
||||||
@@ -360,14 +357,14 @@ Ensure the JSON is valid and properly formatted.`;
 description: z.string().min(10),
 dependencies: z.array(z.string()),
 details: z.string().min(20),
-status: z.string(),
-testStrategy: z.string()
+status: z.string().default('pending'),
+testStrategy: z.string().nullable().default('')
 })
 )
 });

 const aiResult = await generateObjectService({
-role: context.research ? 'research' : 'main',
+role: 'main',
 session: context.session,
 systemPrompt,
 prompt,
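With `.default('pending')` and `.nullable().default('')` on the subtask schema, missing fields are filled during validation, which appears to replace the separate post-processing pass removed in the next hunk. A small zod illustration (schema trimmed to the two fields in question):

```typescript
import { z } from 'zod';

const subtaskDefaults = z.object({
  status: z.string().default('pending'),
  testStrategy: z.string().nullable().default('')
});

// Fields omitted by the model are filled in by the schema itself:
console.log(subtaskDefaults.parse({})); // { status: 'pending', testStrategy: '' }
```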
@@ -379,26 +376,18 @@ Ensure the JSON is valid and properly formatted.`;
|
|||||||
|
|
||||||
const generatedSubtasks = aiResult.mainResult.subtasks || [];
|
const generatedSubtasks = aiResult.mainResult.subtasks || [];
|
||||||
|
|
||||||
// Post-process generated subtasks to ensure defaults
|
|
||||||
const processedGeneratedSubtasks = generatedSubtasks.map((subtask) => ({
|
|
||||||
...subtask,
|
|
||||||
status: subtask.status || 'pending',
|
|
||||||
testStrategy: subtask.testStrategy || ''
|
|
||||||
}));
|
|
||||||
|
|
||||||
// Update task with preserved subtasks + newly generated ones
|
// Update task with preserved subtasks + newly generated ones
|
||||||
task.subtasks = [...preservedSubtasks, ...processedGeneratedSubtasks];
|
task.subtasks = [...preservedSubtasks, ...generatedSubtasks];
|
||||||
|
|
||||||
return {
|
return {
|
||||||
updatedTask: task,
|
updatedTask: task,
|
||||||
regenerated: true,
|
regenerated: true,
|
||||||
preserved: preservedSubtasks.length,
|
preserved: preservedSubtasks.length,
|
||||||
generated: processedGeneratedSubtasks.length
|
generated: generatedSubtasks.length
|
||||||
};
|
};
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
log(
|
console.log(
|
||||||
'warn',
|
`[WARN] Failed to regenerate subtasks for task ${task.id}: ${error.message}`
|
||||||
`Failed to regenerate subtasks for task ${task.id}: ${error.message}`
|
|
||||||
);
|
);
|
||||||
// Don't fail the whole operation if subtask regeneration fails
|
// Don't fail the whole operation if subtask regeneration fails
|
||||||
return {
|
return {
|
||||||
@@ -466,7 +455,6 @@ ADJUSTMENT REQUIREMENTS:
|
|||||||
- description: Updated task description
|
- description: Updated task description
|
||||||
- details: Updated implementation details
|
- details: Updated implementation details
|
||||||
- testStrategy: Updated test strategy
|
- testStrategy: Updated test strategy
|
||||||
- priority: Task priority ('low', 'medium', or 'high')
|
|
||||||
|
|
||||||
Ensure the JSON is valid and properly formatted.`;
|
Ensure the JSON is valid and properly formatted.`;
|
||||||
|
|
||||||
@@ -511,11 +499,14 @@ async function adjustTaskComplexity(
 .string()
 .min(1)
 .describe('Updated testing approach for the adjusted scope'),
-priority: z.enum(['low', 'medium', 'high']).describe('Task priority level')
+priority: z
+.enum(['low', 'medium', 'high'])
+.optional()
+.describe('Task priority level')
 });

 const aiResult = await generateObjectService({
-role: context.research ? 'research' : 'main',
+role: 'main',
 session: context.session,
 systemPrompt,
 prompt,
@@ -527,16 +518,10 @@ async function adjustTaskComplexity(
|
|||||||
|
|
||||||
const updatedTaskData = aiResult.mainResult;
|
const updatedTaskData = aiResult.mainResult;
|
||||||
|
|
||||||
// Ensure priority has a value (in case AI didn't provide one)
|
|
||||||
const processedTaskData = {
|
|
||||||
...updatedTaskData,
|
|
||||||
priority: updatedTaskData.priority || task.priority || 'medium'
|
|
||||||
};
|
|
||||||
|
|
||||||
return {
|
return {
|
||||||
updatedTask: {
|
updatedTask: {
|
||||||
...task,
|
...task,
|
||||||
...processedTaskData
|
...updatedTaskData
|
||||||
},
|
},
|
||||||
telemetryData: aiResult.telemetryData
|
telemetryData: aiResult.telemetryData
|
||||||
};
|
};
|
||||||
@@ -598,7 +583,7 @@ export async function scopeUpTask(
|
|||||||
// Get original complexity score (if available)
|
// Get original complexity score (if available)
|
||||||
const originalComplexity = getCurrentComplexityScore(taskId, context);
|
const originalComplexity = getCurrentComplexityScore(taskId, context);
|
||||||
if (originalComplexity && outputFormat === 'text') {
|
if (originalComplexity && outputFormat === 'text') {
|
||||||
log('info', `Original complexity: ${originalComplexity}/10`);
|
console.log(`[INFO] Original complexity: ${originalComplexity}/10`);
|
||||||
}
|
}
|
||||||
|
|
||||||
const adjustResult = await adjustTaskComplexity(
|
const adjustResult = await adjustTaskComplexity(
|
||||||
@@ -650,9 +635,8 @@ export async function scopeUpTask(
|
|||||||
const complexityChange = newComplexity - originalComplexity;
|
const complexityChange = newComplexity - originalComplexity;
|
||||||
const arrow =
|
const arrow =
|
||||||
complexityChange > 0 ? '↗️' : complexityChange < 0 ? '↘️' : '➡️';
|
complexityChange > 0 ? '↗️' : complexityChange < 0 ? '↘️' : '➡️';
|
||||||
log(
|
console.log(
|
||||||
'info',
|
`[INFO] New complexity: ${originalComplexity}/10 ${arrow} ${newComplexity}/10 (${complexityChange > 0 ? '+' : ''}${complexityChange})`
|
||||||
`New complexity: ${originalComplexity}/10 ${arrow} ${newComplexity}/10 (${complexityChange > 0 ? '+' : ''}${complexityChange})`
|
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -749,7 +733,7 @@ export async function scopeDownTask(
|
|||||||
// Get original complexity score (if available)
|
// Get original complexity score (if available)
|
||||||
const originalComplexity = getCurrentComplexityScore(taskId, context);
|
const originalComplexity = getCurrentComplexityScore(taskId, context);
|
||||||
if (originalComplexity && outputFormat === 'text') {
|
if (originalComplexity && outputFormat === 'text') {
|
||||||
log('info', `Original complexity: ${originalComplexity}/10`);
|
console.log(`[INFO] Original complexity: ${originalComplexity}/10`);
|
||||||
}
|
}
|
||||||
|
|
||||||
const adjustResult = await adjustTaskComplexity(
|
const adjustResult = await adjustTaskComplexity(
|
||||||
@@ -801,9 +785,8 @@ export async function scopeDownTask(
|
|||||||
const complexityChange = newComplexity - originalComplexity;
|
const complexityChange = newComplexity - originalComplexity;
|
||||||
const arrow =
|
const arrow =
|
||||||
complexityChange > 0 ? '↗️' : complexityChange < 0 ? '↘️' : '➡️';
|
complexityChange > 0 ? '↗️' : complexityChange < 0 ? '↘️' : '➡️';
|
||||||
log(
|
console.log(
|
||||||
'info',
|
`[INFO] New complexity: ${originalComplexity}/10 ${arrow} ${newComplexity}/10 (${complexityChange > 0 ? '+' : ''}${complexityChange})`
|
||||||
`New complexity: ${originalComplexity}/10 ${arrow} ${newComplexity}/10 (${complexityChange > 0 ? '+' : ''}${complexityChange})`
|
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
|
|||||||
@@ -1,12 +1,4 @@
-import {
-generateObject,
-generateText,
-streamText,
-zodSchema,
-JSONParseError,
-NoObjectGeneratedError
-} from 'ai';
-import { jsonrepair } from 'jsonrepair';
+import { generateObject, generateText, streamText } from 'ai';
 import { log } from '../../scripts/modules/utils.js';

 /**
@@ -214,8 +206,8 @@ export class BaseAIProvider {
 const result = await generateObject({
 model: client(params.modelId),
 messages: params.messages,
-schema: zodSchema(params.schema),
-mode: params.mode || 'auto',
+schema: params.schema,
+mode: 'auto',
 maxTokens: params.maxTokens,
 temperature: params.temperature
 });
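This hunk passes the zod schema straight to `generateObject` instead of wrapping it with `zodSchema()`; both forms are accepted by the AI SDK. A minimal sketch of the direct call (the `@ai-sdk/openai` provider, model id, and prompt are placeholders, not from this diff):

```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Pass the zod schema directly; mode 'auto' lets the SDK choose between
// tool-calling and JSON output depending on the model's capabilities.
const { object } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: z.object({
    subtasks: z.array(z.object({ id: z.number(), title: z.string() }))
  }),
  mode: 'auto',
  prompt: 'Break the task "Add login form" into two subtasks.'
});

console.log(object.subtasks);
```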
@@ -234,43 +226,6 @@ export class BaseAIProvider {
|
|||||||
}
|
}
|
||||||
};
|
};
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
// Check if this is a JSON parsing error that we can potentially fix
|
|
||||||
if (
|
|
||||||
NoObjectGeneratedError.isInstance(error) &&
|
|
||||||
JSONParseError.isInstance(error.cause) &&
|
|
||||||
error.cause.text
|
|
||||||
) {
|
|
||||||
log(
|
|
||||||
'warn',
|
|
||||||
`${this.name} generated malformed JSON, attempting to repair...`
|
|
||||||
);
|
|
||||||
|
|
||||||
try {
|
|
||||||
// Use jsonrepair to fix the malformed JSON
|
|
||||||
const repairedJson = jsonrepair(error.cause.text);
|
|
||||||
const parsed = JSON.parse(repairedJson);
|
|
||||||
|
|
||||||
log('info', `Successfully repaired ${this.name} JSON output`);
|
|
||||||
|
|
||||||
// Return in the expected format
|
|
||||||
return {
|
|
||||||
object: parsed,
|
|
||||||
usage: {
|
|
||||||
// Extract usage information from the error if available
|
|
||||||
inputTokens: error.usage?.promptTokens || 0,
|
|
||||||
outputTokens: error.usage?.completionTokens || 0,
|
|
||||||
totalTokens: error.usage?.totalTokens || 0
|
|
||||||
}
|
|
||||||
};
|
|
||||||
} catch (repairError) {
|
|
||||||
log(
|
|
||||||
'error',
|
|
||||||
`Failed to repair ${this.name} JSON: ${repairError.message}`
|
|
||||||
);
|
|
||||||
// Fall through to handleError with original error
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
this.handleError('object generation', error);
|
this.handleError('object generation', error);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -44,21 +44,4 @@ export class PerplexityAIProvider extends BaseAIProvider {
 this.handleError('client initialization', error);
 }
 }
-
-/**
-* Override generateObject to use JSON mode for Perplexity
-*
-* NOTE: Perplexity models (especially sonar models) have known issues
-* generating valid JSON, particularly with array fields. They often
-* generate malformed JSON like "dependencies": , instead of "dependencies": []
-*
-* The base provider now handles JSON repair automatically for all providers.
-*/
-async generateObject(params) {
-// Force JSON mode for Perplexity as it may help with reliability
-return super.generateObject({
-...params,
-mode: 'json'
-});
-}
 }
|
|||||||
@@ -30,22 +30,12 @@
|
|||||||
"type": "boolean",
|
"type": "boolean",
|
||||||
"default": false,
|
"default": false,
|
||||||
"description": "Use research mode for deeper analysis"
|
"description": "Use research mode for deeper analysis"
|
||||||
},
|
|
||||||
"isClaudeCode": {
|
|
||||||
"type": "boolean",
|
|
||||||
"default": false,
|
|
||||||
"description": "Whether Claude Code is being used as the provider"
|
|
||||||
},
|
|
||||||
"projectRoot": {
|
|
||||||
"type": "string",
|
|
||||||
"default": "",
|
|
||||||
"description": "Project root path for context"
|
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"prompts": {
|
"prompts": {
|
||||||
"default": {
|
"default": {
|
||||||
"system": "You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.",
|
"system": "You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.",
|
||||||
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before analyzing task complexity:\n\n1. Use the Glob tool to explore the project structure and understand the codebase size\n2. Use the Grep tool to search for existing implementations related to each task\n3. Use the Read tool to examine key files that would be affected by these tasks\n4. Understand the current implementation state, patterns used, and technical debt\n\nBased on your codebase analysis:\n- Assess complexity based on ACTUAL code that needs to be modified/created\n- Consider existing abstractions and patterns that could simplify implementation\n- Identify tasks that require refactoring vs. greenfield development\n- Factor in dependencies between existing code and new features\n- Provide more accurate subtask recommendations based on real code structure\n\nProject Root: {{projectRoot}}\n\n{{/if}}Analyze the following tasks to determine their complexity (1-10 scale) and recommend the number of subtasks for expansion. Provide a brief reasoning and an initial expansion prompt for each.{{#if useResearch}} Consider current best practices, common implementation patterns, and industry standards in your analysis.{{/if}}\n\nTasks:\n{{{json tasks}}}\n{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}\n{{/if}}\n\nRespond ONLY with a valid JSON array matching the schema:\n[\n {\n \"taskId\": <number>,\n \"taskTitle\": \"<string>\",\n \"complexityScore\": <number 1-10>,\n \"recommendedSubtasks\": <number>,\n \"expansionPrompt\": \"<string>\",\n \"reasoning\": \"<string>\"\n },\n ...\n]\n\nDo not include any explanatory text, markdown formatting, or code block markers before or after the JSON array."
|
"user": "Analyze the following tasks to determine their complexity (1-10 scale) and recommend the number of subtasks for expansion. Provide a brief reasoning and an initial expansion prompt for each.{{#if useResearch}} Consider current best practices, common implementation patterns, and industry standards in your analysis.{{/if}}\n\nTasks:\n{{{json tasks}}}\n{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}\n{{/if}}\n\nRespond ONLY with a valid JSON array matching the schema:\n[\n {\n \"taskId\": <number>,\n \"taskTitle\": \"<string>\",\n \"complexityScore\": <number 1-10>,\n \"recommendedSubtasks\": <number>,\n \"expansionPrompt\": \"<string>\",\n \"reasoning\": \"<string>\"\n },\n ...\n]\n\nDo not include any explanatory text, markdown formatting, or code block markers before or after the JSON array."
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -51,34 +51,22 @@
|
|||||||
"required": false,
|
"required": false,
|
||||||
"default": "",
|
"default": "",
|
||||||
"description": "Gathered project context"
|
"description": "Gathered project context"
|
||||||
},
|
|
||||||
"isClaudeCode": {
|
|
||||||
"type": "boolean",
|
|
||||||
"required": false,
|
|
||||||
"default": false,
|
|
||||||
"description": "Whether Claude Code is being used as the provider"
|
|
||||||
},
|
|
||||||
"projectRoot": {
|
|
||||||
"type": "string",
|
|
||||||
"required": false,
|
|
||||||
"default": "",
|
|
||||||
"description": "Project root path for context"
|
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"prompts": {
|
"prompts": {
|
||||||
"complexity-report": {
|
"complexity-report": {
|
||||||
"condition": "expansionPrompt",
|
"condition": "expansionPrompt",
|
||||||
"system": "You are an AI assistant helping with task breakdown. Generate {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks based on the provided prompt and context.\nRespond ONLY with a valid JSON object containing a single key \"subtasks\" whose value is an array of the generated subtask objects.\nEach subtask object in the array must have keys: \"id\", \"title\", \"description\", \"dependencies\", \"details\", \"status\".\nEnsure the 'id' starts from {{nextSubtaskId}} and is sequential.\nFor 'dependencies', use the full subtask ID format: \"{{task.id}}.1\", \"{{task.id}}.2\", etc. Only reference subtasks within this same task.\nEnsure 'status' is 'pending'.\nDo not include any other text or explanation.",
|
"system": "You are an AI assistant helping with task breakdown. Generate {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks based on the provided prompt and context.\nRespond ONLY with a valid JSON object containing a single key \"subtasks\" whose value is an array of the generated subtask objects.\nEach subtask object in the array must have keys: \"id\", \"title\", \"description\", \"dependencies\", \"details\", \"status\".\nEnsure the 'id' starts from {{nextSubtaskId}} and is sequential.\nFor 'dependencies', use the full subtask ID format: \"{{task.id}}.1\", \"{{task.id}}.2\", etc. Only reference subtasks within this same task.\nEnsure 'status' is 'pending'.\nDo not include any other text or explanation.",
|
||||||
"user": "Break down the following task based on the analysis prompt:\n\nParent Task:\nID: {{task.id}}\nTitle: {{task.title}}\nDescription: {{task.description}}\nCurrent details: {{#if task.details}}{{task.details}}{{else}}None{{/if}}\n\nExpansion Guidance:\n{{expansionPrompt}}{{#if additionalContext}}\n\n{{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\n\n{{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}\n\nGenerate {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks with sequential IDs starting from {{nextSubtaskId}}."
|
"user": "{{expansionPrompt}}{{#if additionalContext}}\n\n{{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\n\n{{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}"
|
||||||
},
|
},
|
||||||
"research": {
|
"research": {
|
||||||
"condition": "useResearch === true && !expansionPrompt",
|
"condition": "useResearch === true && !expansionPrompt",
|
||||||
"system": "You are an AI assistant that responds ONLY with valid JSON objects as requested. The object should contain a 'subtasks' array.",
|
"system": "You are an AI assistant that responds ONLY with valid JSON objects as requested. The object should contain a 'subtasks' array.",
|
||||||
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating subtasks:\n\n1. Use the Glob tool to explore relevant files for this task (e.g., \"**/*.js\", \"src/**/*.ts\")\n2. Use the Grep tool to search for existing implementations related to this task\n3. Use the Read tool to examine files that would be affected by this task\n4. Understand the current implementation state and patterns used\n\nBased on your analysis:\n- Identify existing code that relates to this task\n- Understand patterns and conventions to follow\n- Generate subtasks that integrate smoothly with existing code\n- Ensure subtasks are specific and actionable based on the actual codebase\n\nProject Root: {{projectRoot}}\n\n{{/if}}Analyze the following task and break it down into {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks using your research capabilities. Assign sequential IDs starting from {{nextSubtaskId}}.\n\nParent Task:\nID: {{task.id}}\nTitle: {{task.title}}\nDescription: {{task.description}}\nCurrent details: {{#if task.details}}{{task.details}}{{else}}None{{/if}}{{#if additionalContext}}\nConsider this context: {{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\nComplexity Analysis Reasoning: {{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}\n\nCRITICAL: Respond ONLY with a valid JSON object containing a single key \"subtasks\". The value must be an array of the generated subtasks, strictly matching this structure:\n\n{\n \"subtasks\": [\n {\n \"id\": <number>, // Sequential ID starting from {{nextSubtaskId}}\n \"title\": \"<string>\",\n \"description\": \"<string>\",\n \"dependencies\": [\"<string>\"], // Use full subtask IDs like [\"{{task.id}}.1\", \"{{task.id}}.2\"]. If no dependencies, use an empty array [].\n \"details\": \"<string>\",\n \"testStrategy\": \"<string>\" // Optional\n },\n // ... (repeat for {{#if (gt subtaskCount 0)}}{{subtaskCount}}{{else}}appropriate number of{{/if}} subtasks)\n ]\n}\n\nImportant: For the 'dependencies' field, if a subtask has no dependencies, you MUST use an empty array, for example: \"dependencies\": []. Do not use null or omit the field.\n\nDo not include ANY explanatory text, markdown, or code block markers. Just the JSON object."
"user": "Analyze the following task and break it down into {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks using your research capabilities. Assign sequential IDs starting from {{nextSubtaskId}}.\n\nParent Task:\nID: {{task.id}}\nTitle: {{task.title}}\nDescription: {{task.description}}\nCurrent details: {{#if task.details}}{{task.details}}{{else}}None{{/if}}{{#if additionalContext}}\nConsider this context: {{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\nComplexity Analysis Reasoning: {{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}\n\nCRITICAL: Respond ONLY with a valid JSON object containing a single key \"subtasks\". The value must be an array of the generated subtasks, strictly matching this structure:\n\n{\n \"subtasks\": [\n {\n \"id\": <number>, // Sequential ID starting from {{nextSubtaskId}}\n \"title\": \"<string>\",\n \"description\": \"<string>\",\n \"dependencies\": [\"<string>\"], // Use full subtask IDs like [\"{{task.id}}.1\", \"{{task.id}}.2\"]. If no dependencies, use an empty array [].\n \"details\": \"<string>\",\n \"testStrategy\": \"<string>\" // Optional\n },\n // ... (repeat for {{#if (gt subtaskCount 0)}}{{subtaskCount}}{{else}}appropriate number of{{/if}} subtasks)\n ]\n}\n\nImportant: For the 'dependencies' field, if a subtask has no dependencies, you MUST use an empty array, for example: \"dependencies\": []. Do not use null or omit the field.\n\nDo not include ANY explanatory text, markdown, or code block markers. Just the JSON object."
},
"default": {
"system": "You are an AI assistant helping with task breakdown for software development.\nYou need to break down a high-level task into {{#if (gt subtaskCount 0)}}{{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks that can be implemented one by one.\n\nSubtasks should:\n1. Be specific and actionable implementation steps\n2. Follow a logical sequence\n3. Each handle a distinct part of the parent task\n4. Include clear guidance on implementation approach\n5. Have appropriate dependency chains between subtasks (using full subtask IDs)\n6. Collectively cover all aspects of the parent task\n\nFor each subtask, provide:\n- id: Sequential integer starting from the provided nextSubtaskId\n- title: Clear, specific title\n- description: Detailed description\n- dependencies: Array of prerequisite subtask IDs using full format like [\"{{task.id}}.1\", \"{{task.id}}.2\"]\n- details: Implementation details, the output should be in string\n- testStrategy: Optional testing approach\n\nRespond ONLY with a valid JSON object containing a single key \"subtasks\" whose value is an array matching the structure described. Do not include any explanatory text, markdown formatting, or code block markers.",
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating subtasks:\n\n1. Use the Glob tool to explore relevant files for this task (e.g., \"**/*.js\", \"src/**/*.ts\")\n2. Use the Grep tool to search for existing implementations related to this task\n3. Use the Read tool to examine files that would be affected by this task\n4. Understand the current implementation state and patterns used\n\nBased on your analysis:\n- Identify existing code that relates to this task\n- Understand patterns and conventions to follow\n- Generate subtasks that integrate smoothly with existing code\n- Ensure subtasks are specific and actionable based on the actual codebase\n\nProject Root: {{projectRoot}}\n\n{{/if}}Break down this task into {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks:\n\nTask ID: {{task.id}}\nTitle: {{task.title}}\nDescription: {{task.description}}\nCurrent details: {{#if task.details}}{{task.details}}{{else}}None{{/if}}{{#if additionalContext}}\nAdditional context: {{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\nComplexity Analysis Reasoning: {{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}\n\nReturn ONLY the JSON object containing the \"subtasks\" array, matching this structure:\n\n{\n \"subtasks\": [\n {\n \"id\": {{nextSubtaskId}}, // First subtask ID\n \"title\": \"Specific subtask title\",\n \"description\": \"Detailed description\",\n \"dependencies\": [], // e.g., [\"{{task.id}}.1\", \"{{task.id}}.2\"] for dependencies. Use empty array [] if no dependencies\n \"details\": \"Implementation guidance\",\n \"testStrategy\": \"Optional testing approach\"\n },\n // ... (repeat for {{#if (gt subtaskCount 0)}}a total of {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks with sequential IDs)\n ]\n}"
"user": "Break down this task into {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks:\n\nTask ID: {{task.id}}\nTitle: {{task.title}}\nDescription: {{task.description}}\nCurrent details: {{#if task.details}}{{task.details}}{{else}}None{{/if}}{{#if additionalContext}}\nAdditional context: {{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\nComplexity Analysis Reasoning: {{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}\n\nReturn ONLY the JSON object containing the \"subtasks\" array, matching this structure:\n\n{\n \"subtasks\": [\n {\n \"id\": {{nextSubtaskId}}, // First subtask ID\n \"title\": \"Specific subtask title\",\n \"description\": \"Detailed description\",\n \"dependencies\": [], // e.g., [\"{{task.id}}.1\", \"{{task.id}}.2\"] for dependencies. Use empty array [] if no dependencies\n \"details\": \"Implementation guidance\",\n \"testStrategy\": \"Optional testing approach\"\n },\n // ... (repeat for {{#if (gt subtaskCount 0)}}a total of {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks with sequential IDs)\n ]\n}"
}
}
}
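The expand-task templates above rely on Handlebars-style placeholders, `{{#if}}` blocks, and a `gt` comparison subexpression. As a rough illustration only — assuming the standard `handlebars` npm package and a hand-registered `gt` helper; the project's actual PromptManager may resolve variants and helpers differently — rendering one fragment of the user template could look like this:

```js
// Illustrative sketch: render a fragment of the expand-task user template.
// Assumes the `handlebars` package; not the project's real PromptManager.
import Handlebars from 'handlebars';

// The templates use (gt a b) as a subexpression, so a comparison helper is needed.
Handlebars.registerHelper('gt', (a, b) => a > b);

const fragment = Handlebars.compile(
	'Generate {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} ' +
		'subtasks with sequential IDs starting from {{nextSubtaskId}}.'
);

console.log(fragment({ subtaskCount: 3, nextSubtaskId: 1 }));
// -> "Generate exactly 3 subtasks with sequential IDs starting from 1."
console.log(fragment({ subtaskCount: 0, nextSubtaskId: 5 }));
// -> "Generate an appropriate number of subtasks with sequential IDs starting from 5."
```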
@@ -40,24 +40,12 @@
"default": "medium",
"enum": ["high", "medium", "low"],
"description": "Default priority for generated tasks"
},
"isClaudeCode": {
"type": "boolean",
"required": false,
"default": false,
"description": "Whether Claude Code is being used as the provider"
},
"projectRoot": {
"type": "string",
"required": false,
"default": "",
"description": "Project root path for context"
}
}
},
"prompts": {
"default": {
"system": "You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.{{#if research}}\nBefore breaking down the PRD into tasks, you will:\n1. Research and analyze the latest technologies, libraries, frameworks, and best practices that would be appropriate for this project\n2. Identify any potential technical challenges, security concerns, or scalability issues not explicitly mentioned in the PRD without discarding any explicit requirements or going overboard with complexity -- always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches\n3. Consider current industry standards and evolving trends relevant to this project (this step aims to solve LLM hallucinations and out of date information due to training data cutoff dates)\n4. Evaluate alternative implementation approaches and recommend the most efficient path\n5. Include specific library versions, helpful APIs, and concrete implementation guidance based on your research\n6. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches\n\nYour task breakdown should incorporate this research, resulting in more detailed implementation guidance, more accurate dependency mapping, and more precise technology recommendations than would be possible from the PRD text alone, while maintaining all explicit requirements and best practices and all details and nuances of the PRD.{{/if}}\n\nAnalyze the provided PRD content and generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD\nEach task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.\nAssign sequential IDs starting from {{nextId}}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.\nSet status to 'pending', dependencies to an empty array [], and priority to '{{defaultTaskPriority}}' initially for all tasks.\nRespond ONLY with a valid JSON object containing a single key \"tasks\", where the value is an array of task objects adhering to the provided Zod schema. Do not include any explanation or markdown formatting.\n\nEach task should follow this JSON structure:\n{\n\t\"id\": number,\n\t\"title\": string,\n\t\"description\": string,\n\t\"status\": \"pending\",\n\t\"dependencies\": number[] (IDs of tasks this depends on),\n\t\"priority\": \"high\" | \"medium\" | \"low\",\n\t\"details\": string (implementation details),\n\t\"testStrategy\": string (validation approach)\n}\n\nGuidelines:\n1. {{#if (gt numTasks 0)}}Unless complexity warrants otherwise{{else}}Depending on the complexity{{/if}}, create {{#if (gt numTasks 0)}}exactly {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, numbered sequentially starting from {{nextId}}\n2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards\n3. 
Order tasks logically - consider dependencies and implementation sequence\n4. Early tasks should focus on setup, core functionality first, then advanced features\n5. Include clear validation/testing approach for each task\n6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs, potentially including existing tasks with IDs less than {{nextId}} if applicable)\n7. Assign priority (high/medium/low) based on criticality and dependency order\n8. Include detailed implementation guidance in the \"details\" field{{#if research}}, with specific libraries and version recommendations based on your research{{/if}}\n9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance\n10. Focus on filling in any gaps left by the PRD or areas that aren't fully specified, while preserving all explicit requirements\n11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches{{#if research}}\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research{{/if}}",
"user": "{{#if isClaudeCode}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating tasks:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine key files like package.json, README.md, and main entry points\n4. Analyze the current state of implementation to understand what already exists\n\nBased on your analysis:\n- Identify what components/features are already implemented\n- Understand the technology stack, frameworks, and patterns in use\n- Generate tasks that build upon the existing codebase rather than duplicating work\n- Ensure tasks align with the project's current architecture and conventions\n\nProject Root: {{projectRoot}}\n\n{{/if}}Here's the Product Requirements Document (PRD) to break down into {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, starting IDs from {{nextId}}:{{#if research}}\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.{{/if}}\n\n{{prdContent}}\n\n\n\t\tReturn your response in this format:\n{\n \"tasks\": [\n {\n \"id\": 1,\n \"title\": \"Setup Project Repository\",\n \"description\": \"...\",\n ...\n },\n ...\n ],\n \"metadata\": {\n \"projectName\": \"PRD Implementation\",\n \"totalTasks\": {{#if (gt numTasks 0)}}{{numTasks}}{{else}}{number of tasks}{{/if}},\n \"sourceFile\": \"{{prdPath}}\",\n \"generatedAt\": \"YYYY-MM-DD\"\n }\n}"
"user": "Here's the Product Requirements Document (PRD) to break down into {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, starting IDs from {{nextId}}:{{#if research}}\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.{{/if}}\n\n{{prdContent}}\n\n\n\t\tReturn your response in this format:\n{\n \"tasks\": [\n {\n \"id\": 1,\n \"title\": \"Setup Project Repository\",\n \"description\": \"...\",\n ...\n },\n ...\n ],\n \"metadata\": {\n \"projectName\": \"PRD Implementation\",\n \"totalTasks\": {{#if (gt numTasks 0)}}{{numTasks}}{{else}}{number of tasks}{{/if}},\n \"sourceFile\": \"{{prdPath}}\",\n \"generatedAt\": \"YYYY-MM-DD\"\n }\n}"
}
}
}
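The parse-prd system prompt above tells the model to return a `tasks` array "adhering to the provided Zod schema". A minimal sketch of what that validation could look like — field names mirror the JSON structure spelled out in the prompt, while the schema and function names here (`taskSchema`, `parsePrdResponseSchema`, `parseTasksResponse`) are hypothetical and may not match the project's real code:

```js
// Sketch only: validate a model response against the task shape the prompt requests.
import { z } from 'zod';

const taskSchema = z.object({
	id: z.number(),
	title: z.string(),
	description: z.string(),
	status: z.literal('pending'),
	dependencies: z.array(z.number()), // IDs of tasks this depends on
	priority: z.enum(['high', 'medium', 'low']),
	details: z.string(), // implementation details
	testStrategy: z.string() // validation approach
});

const parsePrdResponseSchema = z.object({ tasks: z.array(taskSchema) });

// Throws a ZodError if the AI reply strays from the requested structure.
export function parseTasksResponse(rawText) {
	return parsePrdResponseSchema.parse(JSON.parse(rawText));
}
```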
@@ -1,134 +0,0 @@
import { jest } from '@jest/globals';
import { PromptManager } from '../../../scripts/modules/prompt-manager.js';

describe('expand-task prompt template', () => {
	let promptManager;

	beforeEach(() => {
		promptManager = new PromptManager();
	});

	const testTask = {
		id: 1,
		title: 'Setup AWS Infrastructure',
		description: 'Provision core AWS services',
		details: 'Create VPC, subnets, and security groups'
	};

	const baseParams = {
		task: testTask,
		subtaskCount: 3,
		nextSubtaskId: 1,
		additionalContext: '',
		complexityReasoningContext: '',
		gatheredContext: '',
		useResearch: false,
		expansionPrompt: undefined
	};

	test('default variant includes task context', () => {
		const { userPrompt } = promptManager.loadPrompt(
			'expand-task',
			baseParams,
			'default'
		);

		expect(userPrompt).toContain(testTask.title);
		expect(userPrompt).toContain(testTask.description);
		expect(userPrompt).toContain(testTask.details);
		expect(userPrompt).toContain('Task ID: 1');
	});

	test('research variant includes task context', () => {
		const params = { ...baseParams, useResearch: true };
		const { userPrompt } = promptManager.loadPrompt(
			'expand-task',
			params,
			'research'
		);

		expect(userPrompt).toContain(testTask.title);
		expect(userPrompt).toContain(testTask.description);
		expect(userPrompt).toContain(testTask.details);
		expect(userPrompt).toContain('Parent Task:');
		expect(userPrompt).toContain('ID: 1');
	});

	test('complexity-report variant includes task context', () => {
		const params = {
			...baseParams,
			expansionPrompt: 'Focus on security best practices',
			complexityReasoningContext: 'High complexity due to security requirements'
		};
		const { userPrompt } = promptManager.loadPrompt(
			'expand-task',
			params,
			'complexity-report'
		);

		// The fix ensures task context is included
		expect(userPrompt).toContain('Parent Task:');
		expect(userPrompt).toContain(`ID: ${testTask.id}`);
		expect(userPrompt).toContain(`Title: ${testTask.title}`);
		expect(userPrompt).toContain(`Description: ${testTask.description}`);
		expect(userPrompt).toContain(`Current details: ${testTask.details}`);

		// Also includes the expansion prompt
		expect(userPrompt).toContain('Expansion Guidance:');
		expect(userPrompt).toContain(params.expansionPrompt);
		expect(userPrompt).toContain(params.complexityReasoningContext);
	});

	test('all variants request JSON format with subtasks array', () => {
		const variants = ['default', 'research', 'complexity-report'];

		variants.forEach((variant) => {
			const params =
				variant === 'complexity-report'
					? { ...baseParams, expansionPrompt: 'test' }
					: baseParams;

			const { systemPrompt, userPrompt } = promptManager.loadPrompt(
				'expand-task',
				params,
				variant
			);
			const combined = systemPrompt + userPrompt;

			expect(combined.toLowerCase()).toContain('subtasks');
			expect(combined).toContain('JSON');
		});
	});

	test('complexity-report variant fails without task context regression test', () => {
		// This test ensures we don't regress to the old behavior where
		// complexity-report variant only used expansionPrompt without task context
		const params = {
			...baseParams,
			expansionPrompt: 'Generic expansion prompt'
		};

		const { userPrompt } = promptManager.loadPrompt(
			'expand-task',
			params,
			'complexity-report'
		);

		// Count occurrences of task-specific content
		const titleOccurrences = (
			userPrompt.match(new RegExp(testTask.title, 'g')) || []
		).length;
		const descriptionOccurrences = (
			userPrompt.match(new RegExp(testTask.description, 'g')) || []
		).length;

		// Should have at least one occurrence of title and description
		expect(titleOccurrences).toBeGreaterThanOrEqual(1);
		expect(descriptionOccurrences).toBeGreaterThanOrEqual(1);

		// Should not be ONLY the expansion prompt
		expect(userPrompt.length).toBeGreaterThan(
			params.expansionPrompt.length + 100
		);
	});
});
@@ -123,9 +123,7 @@ jest.unstable_mockModule(
() => ({
getDefaultSubtasks: jest.fn(() => 3),
getDebugFlag: jest.fn(() => false),
- getDefaultNumTasks: jest.fn(() => 10),
- getMainProvider: jest.fn(() => 'openai'),
- getResearchProvider: jest.fn(() => 'perplexity')
+ getDefaultNumTasks: jest.fn(() => 10)
})
);
@@ -49,9 +49,7 @@ jest.unstable_mockModule(
() => ({
getDebugFlag: jest.fn(() => false),
getDefaultNumTasks: jest.fn(() => 10),
- getDefaultPriority: jest.fn(() => 'medium'),
- getMainProvider: jest.fn(() => 'openai'),
- getResearchProvider: jest.fn(() => 'perplexity')
+ getDefaultPriority: jest.fn(() => 'medium')
})
);
@@ -8,17 +8,13 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
log: jest.fn(),
readJSON: jest.fn(),
writeJSON: jest.fn(),
- getCurrentTag: jest.fn(() => 'master'),
- readComplexityReport: jest.fn(),
- findTaskInComplexityReport: jest.fn(),
- findProjectRoot: jest.fn()
+ getCurrentTag: jest.fn(() => 'master')
}));

jest.unstable_mockModule(
'../../../../../scripts/modules/ai-services-unified.js',
() => ({
- generateObjectService: jest.fn(),
- generateTextService: jest.fn()
+ generateObjectService: jest.fn()
})
);
@@ -30,25 +26,10 @@ jest.unstable_mockModule(
})
);

- jest.unstable_mockModule(
- '../../../../../scripts/modules/task-manager/analyze-task-complexity.js',
- () => ({
- default: jest.fn()
- })
- );
-
- jest.unstable_mockModule('../../../../../src/utils/path-utils.js', () => ({
- findComplexityReportPath: jest.fn()
- }));

// Import modules after mocking
- const {
- log,
- readJSON,
- writeJSON,
- readComplexityReport,
- findTaskInComplexityReport
- } = await import('../../../../../scripts/modules/utils.js');
+ const { log, readJSON, writeJSON } = await import(
+ '../../../../../scripts/modules/utils.js'
+ );
const { generateObjectService } = await import(
'../../../../../scripts/modules/ai-services-unified.js'
);
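The test hunks above all follow the same ESM mocking pattern: `jest.unstable_mockModule()` must be registered before the mocked module (or anything that depends on it) is loaded via dynamic `import()`. A condensed sketch of that pattern, reusing module paths and mock entries that appear in the hunks (illustrative only, not the full test setup):

```js
import { jest } from '@jest/globals';

// Register the mock first; with ESM, jest.unstable_mockModule only affects
// modules that are imported *after* this call.
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	log: jest.fn(),
	readJSON: jest.fn(),
	writeJSON: jest.fn(),
	getCurrentTag: jest.fn(() => 'master')
}));

// Import after mocking so the mocked bindings are the ones resolved.
const { readJSON, writeJSON } = await import(
	'../../../../../scripts/modules/utils.js'
);

// Tests can now stub file access without touching the disk.
readJSON.mockReturnValue({ tasks: [] });
```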