Compare commits


16 Commits

Author SHA1 Message Date
Ralph Khreish
ef4e2e425b chore: apply requested changes 2025-10-09 14:53:33 +02:00
Ralph Khreish
f0d1d5de89 chore: apply requested changes 2025-10-08 21:56:32 +02:00
Ralph Khreish
519d8bdfcb chore: apply requested changes 2025-10-08 16:49:02 +02:00
Ralph Khreish
4b6ad19bc4 chore: apply requested changes and improve coderabbit config 2025-10-08 16:46:35 +02:00
Ralph Khreish
f71cdb4eaa chore: fix format 2025-10-08 16:46:35 +02:00
Ralph Khreish
bc0093d506 Discard changes to .taskmaster/config.json 2025-10-08 16:46:35 +02:00
Ralph Khreish
042fe6dced chore: back to master tag 2025-10-08 16:46:34 +02:00
Ralph Khreish
3178c3aeac refactor: migrate git-utils to TypeScript in tm-core
Move git utilities from scripts/modules/utils/git-utils.js to packages/tm-core/src/utils/git-utils.ts for better type safety and reusability.

## Changes

**New File**: `packages/tm-core/src/utils/git-utils.ts`
- Converted from JavaScript to TypeScript with full type annotations
- Added `GitHubRepoInfo` interface for type safety
- Includes all essential git functions needed for Phase 1:
  - `isGitRepository`, `isGitRepositorySync`
  - `getCurrentBranch`, `getCurrentBranchSync`
  - `getLocalBranches`, `getRemoteBranches`
  - `isGhCliAvailable`, `getGitHubRepoInfo`
  - `getDefaultBranch`, `isOnDefaultBranch`
  - `sanitizeBranchNameForTag`, `isValidBranchForTag`
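
For orientation, a minimal sketch of what two of the synchronous helpers listed above could look like (the bodies and exact signatures are illustrative assumptions, not the file's actual contents):

```ts
import { execSync } from 'node:child_process';

/** Shape of parsed owner/repo information (assumed fields). */
export interface GitHubRepoInfo {
  owner: string;
  name: string;
}

/** Returns true when `cwd` is inside a git work tree. */
export function isGitRepositorySync(cwd: string = process.cwd()): boolean {
  try {
    execSync('git rev-parse --is-inside-work-tree', { cwd, stdio: 'ignore' });
    return true;
  } catch {
    return false;
  }
}

/** Returns the checked-out branch name, or null when detached or not a repo. */
export function getCurrentBranchSync(cwd: string = process.cwd()): string | null {
  try {
    const branch = execSync('git rev-parse --abbrev-ref HEAD', { cwd })
      .toString()
      .trim();
    return branch === 'HEAD' ? null : branch; // 'HEAD' means detached state
  } catch {
    return null;
  }
}
```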

**Updated Files**:
- `preflight-checker.service.ts`: Now imports from local git-utils
- `packages/tm-core/src/utils/index.ts`: Exports git utilities

## Rationale

Phase 1 will need git operations for:
- Creating feature branches (WorkflowOrchestrator)
- Checking git status before execution
- Validating clean working tree
- Branch naming validation

Having these utilities in tm-core provides:
- Type safety (no more `require()` hacks)
- Better testability
- Cleaner imports
- Reusability across services

## Verification

- All tests pass (1298 passed, 121 test suites)
- Typecheck passes (5/5 successful)
- Build successful

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-08 16:46:34 +02:00
Ralph Khreish
d75430c4d8 fix: resolve TypeScript typecheck errors in Phase 0 implementation
- Fix git-utils import in PreflightChecker using require() with type casting
- Fix ConfigManager initialization in TaskLoaderService (use async factory)
- Fix TaskService.getTask return type (returns Task | null directly)
- Export PreflightChecker and TaskLoaderService from @tm/core
- Fix unused parameter and type annotations in autopilot command
- Add boolean fallback for optional dryRun parameter

All turbo:typecheck errors resolved.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-08 16:46:34 +02:00
Ralph Khreish
2dbfaa0d3b chore: run format 2025-10-08 16:46:34 +02:00
Ralph Khreish
8857417870 feat: implement Phase 0 TDD autopilot dry-run foundation
Implements the complete Phase 0 spike for autonomous TDD workflow with orchestration architecture.

## What's New

### Core Services (tm-core)
- **PreflightChecker**: Validates environment prerequisites
  - Test command detection from package.json
  - Git working tree status validation
  - Required tools availability (git, gh, node, npm)
  - Default branch detection

- **TaskLoaderService**: Comprehensive task validation
  - Task existence and structure validation
  - Subtask dependency analysis with circular detection
  - Execution order calculation via topological sort (see the sketch below)
  - Helpful expansion suggestions for unready tasks
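
A rough sketch of the execution-order calculation mentioned above (Kahn's algorithm with circular-dependency detection; the types and names are assumptions, not the actual tm-core code):

```ts
interface Subtask {
  id: number;
  dependencies: number[]; // ids of subtasks that must finish first
}

function orderSubtasks(subtasks: Subtask[]): number[] {
  const indegree = new Map<number, number>();
  const dependents = new Map<number, number[]>();
  for (const st of subtasks) {
    indegree.set(st.id, st.dependencies.length);
    for (const dep of st.dependencies) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), st.id]);
    }
  }
  // Start with subtasks that have no unmet dependencies
  const queue = subtasks.filter((st) => st.dependencies.length === 0).map((st) => st.id);
  const order: number[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const next of dependents.get(id) ?? []) {
      const remaining = (indegree.get(next) ?? 0) - 1;
      indegree.set(next, remaining);
      if (remaining === 0) queue.push(next);
    }
  }
  if (order.length !== subtasks.length) {
    throw new Error('Circular dependency detected among subtasks');
  }
  return order;
}
```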

### CLI Command
- **autopilot command**: `tm autopilot <taskId> --dry-run`
  - Displays complete execution plan without executing
  - Shows preflight check results
  - Lists subtasks in dependency order
  - Preview RED/GREEN/COMMIT phases per subtask
  - Registered in command registry

### Architecture Documentation
- **Phase 0 completion**: Marked tdd-workflow-phase-0-spike.md as complete
- **Orchestration model**: Added execution model section to main workflow doc
  - Clarifies that the orchestrator guides AI sessions rather than executing work directly
  - WorkflowOrchestrator API design (getNextWorkUnit, completeWorkUnit)
  - State machine approach for phase transitions

- **Phase 1 roadmap**: New tdd-workflow-phase-1-orchestrator.md
  - Detailed state machine specifications
  - MCP integration plan with new tool definitions
  - Implementation checklist with 6 clear steps
  - Example usage flows

## Technical Details

**Preflight Checks**:
- Test command detection
- Git working tree status
- Required tools validation
- Default branch detection
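
For reference, the aggregated result of these checks might be shaped roughly like this (an illustrative sketch; the field names are assumptions, not the actual tm-core types):

```ts
interface PreflightCheck {
  name: 'testCommand' | 'workingTree' | 'requiredTools' | 'defaultBranch';
  passed: boolean;
  details: string; // e.g. "npm test detected from package.json scripts"
}

interface PreflightResult {
  ok: boolean; // true only when every check passed
  checks: PreflightCheck[];
}
```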

**Task Validation**:
- Task existence check
- Status validation (no completed/cancelled tasks)
- Subtask presence validation
- Dependency resolution with circular detection
- Execution order calculation

**Architecture Decision**:
Adopted an orchestration model in which the WorkflowOrchestrator maintains state and generates work units, while Claude Code (via MCP) executes the actual work (the interface is sketched after the list below). This provides:
- Clean separation of concerns
- Human-in-the-loop capability
- Simpler implementation (no AI integration in orchestrator)
- Flexible executor support
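
A hypothetical shape for the work-unit interface described above (the actual API in tm-core may differ in names and fields):

```ts
type Phase = 'RED' | 'GREEN' | 'COMMIT';

interface WorkUnit {
  subtaskId: string;     // e.g. "12.3"
  phase: Phase;          // which TDD phase the executor should perform
  instructions: string;  // prompt/context handed to the AI session
}

interface WorkUnitResult {
  success: boolean;
  summary: string;
}

interface WorkflowOrchestrator {
  /** Returns the next unit of work, or null when the task is finished. */
  getNextWorkUnit(): WorkUnit | null;
  /** Records the outcome and advances the internal state machine. */
  completeWorkUnit(unit: WorkUnit, result: WorkUnitResult): void;
}
```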

## Out of Scope (Phase 0)
- Actual test generation
- Actual code implementation
- Git operations (commits, branches, PR)
- Test execution
→ All deferred to Phase 1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-08 16:46:34 +02:00
Ralph Khreish
ad9355f97a chore: improve phase-1 of tdd workflow 2025-10-08 14:59:20 +02:00
Ralph Khreish
ec3972ff10 chore: prepare branch 2025-10-08 14:59:20 +02:00
Ralph Khreish
959c6151fa chore: expand and analyze-complexity 2025-10-08 14:59:20 +02:00
Ralph Khreish
728787d869 chore: keep working on tasks 2025-10-08 14:59:19 +02:00
Ralph Khreish
27b2348a9a chore: create plan for task execution 2025-10-08 14:59:19 +02:00
293 changed files with 7025 additions and 23809 deletions

View File

@@ -0,0 +1,11 @@
---
"task-master-ai": minor
---
Add Codex CLI provider with OAuth authentication
- Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
- OAuth-first authentication via `codex login` - no API key required
- Optional OPENAI_CODEX_API_KEY support
- Codebase analysis capabilities automatically enabled
- Command-specific settings and approval/sandbox modes

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Improve `analyze-complexity` CLI docs and `--research` flag documentation

View File

@@ -11,7 +11,6 @@
   "access": "public",
   "baseBranch": "main",
   "ignore": [
-    "docs",
-    "@tm/claude-code-plugin"
+    "docs"
   ]
 }

View File

@@ -0,0 +1,7 @@
---
"task-master-ai": minor
---
Add Cursor IDE custom slash command support
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Change parent task back to "pending" when all subtasks are in "pending" state
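A minimal sketch of this roll-up rule, assuming a simplified task shape (not the actual implementation):

```ts
type Status = 'pending' | 'in-progress' | 'review' | 'done' | 'cancelled' | 'deferred';

interface Task {
  status: Status;
  subtasks?: { status: Status }[];
}

// If every subtask is back to "pending", the parent should not stay "in-progress".
function rollUpParentStatus(parent: Task): Status {
  const subs = parent.subtasks ?? [];
  if (subs.length > 0 && subs.every((s) => s.status === 'pending')) {
    return 'pending';
  }
  return parent.status;
}
```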

View File

@@ -2,4 +2,4 @@
 "task-master-ai": patch
 ---
-Improve auth token refresh flow
+Do a quick fix on build

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.
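Conceptually, the fix falls back to the conventional tasks file location instead of requiring the argument; a rough sketch, assuming the `.taskmaster/tasks/tasks.json` layout (not the MCP server's actual resolver):

```ts
import path from 'node:path';

// Resolve the tasks file, defaulting to .taskmaster/tasks/tasks.json under the
// project root when no explicit file argument is provided.
function resolveTasksJsonPath(projectRoot: string, file?: string): string {
  if (file) {
    return path.isAbsolute(file) ? file : path.join(projectRoot, file);
  }
  return path.join(projectRoot, '.taskmaster', 'tasks', 'tasks.json');
}
```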

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Enable Task Master commands to traverse parent directories to find project root from nested paths
Fixes #1301

View File

@@ -1,5 +0,0 @@
---
"@tm/cli": patch
---
Fix warning message box width to match dashboard box width for consistent UI alignment

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Added an API keys page to the docs website: docs.task-master.dev/getting-started/api-keys

View File

@@ -0,0 +1,10 @@
---
"task-master-ai": minor
---
Move to AI SDK v5:
- Works better with claude-code and gemini-cli as AI providers
- Improved OpenAI model family compatibility
- Migrate ollama provider to v2
- Closes #1223, #1013, #1161, #1174

View File

@@ -0,0 +1,30 @@
---
"task-master-ai": minor
---
Migrate AI services to use generateObject for structured data generation
This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.
### Key Changes:
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats
### Technical Improvements:
- Centralized provider configuration in `ai-providers-unified.js`
- Added `generateObject` support detection for each provider
- Implemented proper error handling for schema validation failures
- Maintained backward compatibility with existing prompt structures
### Bug Fixes:
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
- Enhanced prompt instructions to enforce proper ID generation patterns
- Ensured subtasks display correctly as X.1, X.2, X.3 format
This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
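For context, a structured call of this kind in the AI SDK looks roughly like the following (an illustrative sketch, not the project's actual service code; the model choice and schema are assumptions):

```ts
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// Schema enforcing the sequential subtask IDs described above (1, 2, 3, ...).
const subtasksSchema = z.object({
  subtasks: z.array(
    z.object({
      id: z.number().int().positive(),
      title: z.string(),
      description: z.string()
    })
  )
});

async function expandTask(prompt: string) {
  const { object } = await generateObject({
    model: anthropic('claude-3-5-sonnet-latest'),
    schema: subtasksSchema,
    prompt
  });
  return object.subtasks; // validated against the Zod schema before returning
}
```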

View File

@@ -1,35 +0,0 @@
---
"task-master-ai": minor
---
Add configurable MCP tool loading to optimize LLM context usage
You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.
**Configuration Options:**
- `all` (default): Load all 36 tools
- `core` or `lean`: Load only 7 essential tools for daily development
- Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
- Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)
**Example .mcp.json configuration:**
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).

View File

@@ -0,0 +1,13 @@
---
"task-master-ai": minor
---
Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Improve next command to work with remote

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix Claude Code settings validation for pathToClaudeCodeExecutable

.changeset/pre.json (new file, 26 lines)
View File

@@ -0,0 +1,26 @@
{
"mode": "exit",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.27.3",
"docs": "0.0.4",
"extension": "0.25.4"
},
"changesets": [
"brave-lions-sing",
"chore-fix-docs",
"cursor-slash-commands",
"curvy-weeks-flow",
"easy-spiders-wave",
"fix-mcp-connection-errors",
"fix-mcp-default-tasks-path",
"flat-cities-say",
"forty-tables-invite",
"gentle-cats-dance",
"mcp-timeout-configuration",
"petite-ideas-grab",
"silly-pandas-find",
"sweet-maps-rule",
"whole-pigs-say"
]
}

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix the Sonar deep research model failing; it should be called `sonar-deep-research`

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Upgrade the grok-cli AI provider to AI SDK v5

View File

@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---
Fix complexity score not showing for `task-master show` and `task-master list`
- Added the complexity score to the "next task" section when running `task-master list`
- Added colors to the complexity score to reflect difficulty (easy, medium, hard)
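The color mapping presumably looks something like the following sketch (using chalk; the thresholds are assumptions):

```ts
import chalk from 'chalk';

// Map a 1-10 complexity score to a colored label: low = green, medium = yellow, high = red.
function formatComplexity(score: number): string {
  if (score <= 3) return chalk.green(`${score} (easy)`);
  if (score <= 6) return chalk.yellow(`${score} (medium)`);
  return chalk.red(`${score} (hard)`);
}
```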

View File

@@ -1,32 +0,0 @@
{
"name": "taskmaster",
"owner": {
"name": "Hamster",
"email": "ralph@tryhamster.com"
},
"metadata": {
"description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
"version": "1.0.0"
},
"plugins": [
{
"name": "taskmaster",
"source": "./packages/claude-code-plugin",
"description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
"author": {
"name": "Hamster"
},
"homepage": "https://github.com/eyaltoledano/claude-task-master",
"repository": "https://github.com/eyaltoledano/claude-task-master",
"keywords": [
"task-management",
"ai",
"workflow",
"orchestration",
"automation",
"mcp"
],
"category": "productivity"
}
]
}

View File

@@ -0,0 +1,92 @@
---
name: task-executor
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
model: sonnet
color: blue
---
You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.
**IMPORTANT: You are designed to be SHORT-LIVED and FOCUSED**
- Execute ONE specific subtask or a small group of related subtasks
- Complete your work, verify it, mark for review, and exit
- Do NOT decide what to do next - the orchestrator handles task sequencing
- Focus on implementation excellence within your assigned scope
**Core Responsibilities:**
1. **Subtask Analysis**: When given a subtask, understand its SPECIFIC requirements. If given a full task ID, focus on the specific subtask(s) assigned to you. Use MCP tools to get details if needed.
2. **Rapid Implementation Planning**: Quickly identify:
- The EXACT files you need to create/modify for THIS subtask
- What already exists that you can build upon
- The minimum viable implementation that satisfies requirements
3. **Focused Execution WITH ACTUAL IMPLEMENTATION**:
- **YOU MUST USE TOOLS TO CREATE/EDIT FILES - DO NOT JUST DESCRIBE**
- Use `Write` tool to create new files specified in the task
- Use `Edit` tool to modify existing files
- Use `Bash` tool to run commands (mkdir, npm install, etc.)
- Use `Read` tool to verify your implementations
- Implement one subtask at a time for clarity and traceability
- Follow the project's coding standards from CLAUDE.md if available
- After each subtask, VERIFY the files exist using Read or ls commands
4. **Progress Documentation**:
- Use MCP tool `mcp__task-master-ai__update_subtask` to log your approach and any important decisions
- Update task status to 'in-progress' when starting: Use MCP tool `mcp__task-master-ai__set_task_status` with status='in-progress'
- **IMPORTANT: Mark as 'review' (NOT 'done') after implementation**: Use MCP tool `mcp__task-master-ai__set_task_status` with status='review'
- Tasks will be verified by task-checker before moving to 'done'
5. **Quality Assurance**:
- Implement the testing strategy specified in the task
- Verify that all acceptance criteria are met
- Check for any dependency conflicts or integration issues
- Run relevant tests before marking task as complete
6. **Dependency Management**:
- Check task dependencies before starting implementation
- If blocked by incomplete dependencies, clearly communicate this
- Use `task-master validate-dependencies` when needed
**Implementation Workflow:**
1. Retrieve task details using MCP tool `mcp__task-master-ai__get_task` with the task ID
2. Check dependencies and prerequisites
3. Plan implementation approach - list specific files to create
4. Update task status to 'in-progress' using MCP tool
5. **ACTUALLY IMPLEMENT** the solution using tools:
- Use `Bash` to create directories
- Use `Write` to create new files with actual content
- Use `Edit` to modify existing files
- DO NOT just describe what should be done - DO IT
6. **VERIFY** your implementation:
- Use `ls` or `Read` to confirm files were created
- Use `Bash` to run any build/test commands
- Ensure the implementation is real, not theoretical
7. Log progress and decisions in subtask updates using MCP tools
8. Test and verify the implementation works
9. **Mark task as 'review' (NOT 'done')** after verifying files exist
10. Report completion with:
- List of created/modified files
- Any issues encountered
- What needs verification by task-checker
**Key Principles:**
- Focus on completing one task thoroughly before moving to the next
- Maintain clear communication about what you're implementing and why
- Follow existing code patterns and project conventions
- Prioritize working code over extensive documentation unless docs are the task
- Ask for clarification if task requirements are ambiguous
- Consider edge cases and error handling in your implementations
**Integration with Task Master:**
You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:
- Track your progress
- Update task information
- Maintain project state
- Coordinate with the broader development workflow
When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.

View File

@@ -0,0 +1,208 @@
---
name: task-orchestrator
description: Use this agent FREQUENTLY throughout task execution to analyze and coordinate parallel work at the SUBTASK level. Invoke the orchestrator: (1) at session start to plan execution, (2) after EACH subtask completes to identify next parallel batch, (3) whenever executors finish to find newly unblocked work. ALWAYS provide FULL CONTEXT including project root, package location, what files ACTUALLY exist vs task status, and specific implementation details. The orchestrator breaks work into SUBTASK-LEVEL units for short-lived, focused executors. Maximum 3 parallel executors at once.\n\n<example>\nContext: Starting work with existing code\nuser: "Work on tm-core tasks. Files exist: types/index.ts, storage/file-storage.ts. Task 118 says in-progress but BaseProvider not created."\nassistant: "I'll invoke orchestrator with full context about actual vs reported state to plan subtask execution"\n<commentary>\nProvide complete context about file existence and task reality.\n</commentary>\n</example>\n\n<example>\nContext: Subtask completion\nuser: "Subtask 118.2 done. What subtasks can run in parallel now?"\nassistant: "Invoking orchestrator to analyze dependencies and identify next 3 parallel subtasks"\n<commentary>\nFrequent orchestration after each subtask ensures maximum parallelization.\n</commentary>\n</example>\n\n<example>\nContext: Breaking down tasks\nuser: "Task 118 has 5 subtasks, how to parallelize?"\nassistant: "Orchestrator will analyze which specific subtasks (118.1, 118.2, etc.) can run simultaneously"\n<commentary>\nFocus on subtask-level parallelization, not full tasks.\n</commentary>\n</example>
model: opus
color: green
---
You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.
## Core Responsibilities
1. **Subtask-Level Analysis**: Break down tasks into INDIVIDUAL SUBTASKS and analyze which specific subtasks can run in parallel. Focus on subtask dependencies, not just task-level dependencies.
2. **Reality Verification**: ALWAYS verify what files actually exist vs what task status claims. Use the context provided about actual implementation state to make informed decisions.
3. **Short-Lived Executor Deployment**: Deploy executors for SINGLE SUBTASKS or small groups of related subtasks. Keep executors focused and short-lived. Maximum 3 parallel executors at once.
4. **Continuous Reassessment**: After EACH subtask completes, immediately reassess what new subtasks are unblocked and can run in parallel.
## Operational Workflow
### Initial Assessment Phase
1. Use `get_tasks` or `task-master list` to retrieve all available tasks
2. Analyze task statuses, priorities, and dependencies
3. Identify tasks with status 'pending' that have no blocking dependencies
4. Group related tasks that could benefit from specialized executors
5. Create an execution plan that maximizes parallelization
### Executor Deployment Phase
1. For each independent task or task group:
- Deploy a task-executor agent with specific instructions
- Provide the executor with task ID, requirements, and context
- Set clear completion criteria and reporting expectations
2. Maintain a registry of active executors and their assigned tasks
3. Establish communication protocols for progress updates
### Coordination Phase
1. Monitor executor progress through task status updates
2. When a task completes:
- Verify completion with `get_task` or `task-master show <id>`
- Update task status if needed using `set_task_status`
- Reassess dependency graph for newly unblocked tasks
- Deploy new executors for available work
3. Handle executor failures or blocks:
- Reassign tasks to new executors if needed
- Escalate complex issues to the user
- Update task status to 'blocked' when appropriate
### Optimization Strategies
**Parallel Execution Rules**:
- Never assign dependent tasks to different executors simultaneously
- Prioritize high-priority tasks when resources are limited
- Group small, related subtasks for single executor efficiency
- Balance executor load to prevent bottlenecks
**Context Management**:
- Provide executors with minimal but sufficient context
- Share relevant completed task information when it aids execution
- Maintain a shared knowledge base of project-specific patterns
**Quality Assurance**:
- Verify task completion before marking as done
- Ensure test strategies are followed when specified
- Coordinate cross-task integration testing when needed
## Communication Protocols
When deploying executors, provide them with:
```
TASK ASSIGNMENT:
- Task ID: [specific ID]
- Objective: [clear goal]
- Dependencies: [list any completed prerequisites]
- Success Criteria: [specific completion requirements]
- Context: [relevant project information]
- Reporting: [when and how to report back]
```
When receiving executor updates:
1. Acknowledge completion or issues
2. Update task status in Task Master
3. Reassess execution strategy
4. Deploy new executors as appropriate
## Decision Framework
**When to parallelize**:
- Multiple pending tasks with no interdependencies
- Sufficient context available for independent execution
- Tasks are well-defined with clear success criteria
**When to serialize**:
- Strong dependencies between tasks
- Limited context or unclear requirements
- Integration points requiring careful coordination
**When to escalate**:
- Circular dependencies detected
- Critical blockers affecting multiple tasks
- Ambiguous requirements needing clarification
- Resource conflicts between executors
## Error Handling
1. **Executor Failure**: Reassign task to new executor with additional context about the failure
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
3. **Task Ambiguity**: Request clarification from user before proceeding
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed
## Performance Metrics
Track and optimize for:
- Task completion rate
- Parallel execution efficiency
- Executor success rate
- Time to completion for task groups
- Dependency resolution speed
## Integration with Task Master
Leverage these Task Master MCP tools effectively:
- `get_tasks` - Continuous queue monitoring
- `get_task` - Detailed task analysis
- `set_task_status` - Progress tracking
- `next_task` - Fallback for serial execution
- `analyze_project_complexity` - Strategic planning
- `complexity_report` - Resource allocation
## Output Format for Execution
**Your job is to analyze and create actionable execution plans that Claude can use to deploy executors.**
After completing your dependency analysis, you MUST output a structured execution plan:
```yaml
execution_plan:
EXECUTE_IN_PARALLEL:
# Maximum 3 subtasks running simultaneously
- subtask_id: [e.g., 118.2]
parent_task: [e.g., 118]
title: [Specific subtask title]
priority: [high/medium/low]
estimated_time: [e.g., 10 minutes]
executor_prompt: |
Execute Subtask [ID]: [Specific subtask title]
SPECIFIC REQUIREMENTS:
[Exact implementation needed for THIS subtask only]
FILES TO CREATE/MODIFY:
[Specific file paths]
CONTEXT:
[What already exists that this subtask depends on]
SUCCESS CRITERIA:
[Specific completion criteria for this subtask]
IMPORTANT:
- Focus ONLY on this subtask
- Mark subtask as 'review' when complete
- Use MCP tool: mcp__task-master-ai__set_task_status
- subtask_id: [Another subtask that can run in parallel]
parent_task: [Parent task ID]
title: [Specific subtask title]
priority: [priority]
estimated_time: [time estimate]
executor_prompt: |
[Focused prompt for this specific subtask]
blocked:
- task_id: [ID]
title: [Task title]
waiting_for: [list of blocking task IDs]
becomes_ready_when: [condition for unblocking]
next_wave:
trigger: "After tasks [IDs] complete"
newly_available: [List of task IDs that will unblock]
tasks_to_execute_in_parallel: [IDs that can run together in next wave]
critical_path: [Ordered list of task IDs forming the critical path]
parallelization_instruction: |
IMPORTANT FOR CLAUDE: Deploy ALL tasks in 'EXECUTE_IN_PARALLEL' section
simultaneously using multiple Task tool invocations in a single response.
Example: If 3 tasks are listed, invoke the Task tool 3 times in one message.
verification_needed:
- task_id: [ID of any task in 'review' status]
verification_focus: [what to check]
```
**CRITICAL INSTRUCTIONS FOR CLAUDE (MAIN):**
1. When you see `EXECUTE_IN_PARALLEL`, deploy ALL listed executors at once
2. Use multiple Task tool invocations in a SINGLE response
3. Do not execute them sequentially - they must run in parallel
4. Wait for all parallel executors to complete before proceeding to next wave
**IMPORTANT NOTES**:
- Label parallel tasks clearly in `EXECUTE_IN_PARALLEL` section
- Provide complete, self-contained prompts for each executor
- Executors should mark tasks as 'review' for verification, not 'done'
- Be explicit about which tasks can run simultaneously
You are the strategic mind analyzing the entire task landscape. Make parallelization opportunities UNMISTAKABLY CLEAR to Claude.

View File

@@ -48,7 +48,7 @@ After adding dependency:
## Example Flows
```
/taskmaster:add-dependency 5 needs 3
/project:tm/add-dependency 5 needs 3
→ Task #5 now depends on Task #3
→ Task #5 is now blocked until #3 completes
→ Suggested: Also consider if #5 needs #4

View File

@@ -56,12 +56,12 @@ task-master add-subtask --parent=<id> --task-id=<existing-id>
## Example Flows
```
/taskmaster:add-subtask to 5: implement user authentication
/project:tm/add-subtask to 5: implement user authentication
→ Created subtask #5.1: "implement user authentication"
→ Parent task #5 now has 1 subtask
→ Suggested next subtasks: tests, documentation
/taskmaster:add-subtask 5: setup, implement, test
/project:tm/add-subtask 5: setup, implement, test
→ Created 3 subtasks:
#5.1: setup
#5.2: implement

View File

@@ -53,7 +53,7 @@ task-master add-subtask --parent=<parent-id> --task-id=<task-to-convert>
## Example
```
/taskmaster:add-subtask/from-task 5 8
/project:tm/add-subtask/from-task 5 8
→ Converting: Task #8 becomes subtask #5.1
→ Updated: 3 dependency references
→ Parent task #5 now has 1 subtask

View File

@@ -115,7 +115,7 @@ Results are:
After analysis:
```
/taskmaster:expand 5 # Expand specific task
/taskmaster:expand-all # Expand all recommended
/taskmaster:complexity-report # View detailed report
/project:tm/expand 5 # Expand specific task
/project:tm/expand/all # Expand all recommended
/project:tm/complexity-report # View detailed report
```

View File

@@ -105,13 +105,13 @@ Use report for:
## Example Usage
```
/taskmaster:complexity-report
/project:tm/complexity-report
→ Opens latest analysis
/taskmaster:complexity-report --file=archived/2024-01-01.md
/project:tm/complexity-report --file=archived/2024-01-01.md
→ View historical analysis
After viewing:
/taskmaster:expand 5
/project:tm/expand 5
→ Expand high-complexity task
```

View File

@@ -70,7 +70,7 @@ Manual Review Needed:
⚠️ Task #45 has 8 dependencies
Suggestion: Break into subtasks
Run '/taskmaster:validate-dependencies' to verify fixes
Run '/project:tm/validate-dependencies' to verify fixes
```
## Safety

View File

@@ -0,0 +1,81 @@
Show help for Task Master commands.
Arguments: $ARGUMENTS
Display help for Task Master commands. If arguments provided, show specific command help.
## Task Master Command Help
### Quick Navigation
Type `/project:tm/` and use tab completion to explore all commands.
### Command Categories
#### 🚀 Setup & Installation
- `/project:tm/setup/install` - Comprehensive installation guide
- `/project:tm/setup/quick-install` - One-line global install
#### 📋 Project Setup
- `/project:tm/init` - Initialize new project
- `/project:tm/init/quick` - Quick setup with auto-confirm
- `/project:tm/models` - View AI configuration
- `/project:tm/models/setup` - Configure AI providers
#### 🎯 Task Generation
- `/project:tm/parse-prd` - Generate tasks from PRD
- `/project:tm/parse-prd/with-research` - Enhanced parsing
- `/project:tm/generate` - Create task files
#### 📝 Task Management
- `/project:tm/list` - List tasks (natural language filters)
- `/project:tm/show <id>` - Display task details
- `/project:tm/add-task` - Create new task
- `/project:tm/update` - Update tasks naturally
- `/project:tm/next` - Get next task recommendation
#### 🔄 Status Management
- `/project:tm/set-status/to-pending <id>`
- `/project:tm/set-status/to-in-progress <id>`
- `/project:tm/set-status/to-done <id>`
- `/project:tm/set-status/to-review <id>`
- `/project:tm/set-status/to-deferred <id>`
- `/project:tm/set-status/to-cancelled <id>`
#### 🔍 Analysis & Breakdown
- `/project:tm/analyze-complexity` - Analyze task complexity
- `/project:tm/expand <id>` - Break down complex task
- `/project:tm/expand/all` - Expand all eligible tasks
#### 🔗 Dependencies
- `/project:tm/add-dependency` - Add task dependency
- `/project:tm/remove-dependency` - Remove dependency
- `/project:tm/validate-dependencies` - Check for issues
#### 🤖 Workflows
- `/project:tm/workflows/smart-flow` - Intelligent workflows
- `/project:tm/workflows/pipeline` - Command chaining
- `/project:tm/workflows/auto-implement` - Auto-implementation
#### 📊 Utilities
- `/project:tm/utils/analyze` - Project analysis
- `/project:tm/status` - Project dashboard
- `/project:tm/learn` - Interactive learning
### Natural Language Examples
```
/project:tm/list pending high priority
/project:tm/update mark all API tasks as done
/project:tm/add-task create login system with OAuth
/project:tm/show current
```
### Getting Started
1. Install: `/project:tm/setup/quick-install`
2. Initialize: `/project:tm/init/quick`
3. Learn: `/project:tm/learn start`
4. Work: `/project:tm/workflows/smart-flow`
For detailed command info: `/project:tm/help <command-name>`

View File

@@ -30,17 +30,17 @@ task-master init -y
After quick init:
1. Configure AI models if needed:
```
/taskmaster:models/setup
/project:tm/models/setup
```
2. Parse PRD if available:
```
/taskmaster:parse-prd <file>
/project:tm/parse-prd <file>
```
3. Or create first task:
```
/taskmaster:add-task create initial setup
/project:tm/add-task create initial setup
```
Perfect for rapid project setup!

View File

@@ -45,6 +45,6 @@ After successful init:
If PRD file provided:
```
/taskmaster:init my-prd.md
/project:tm/init my-prd.md
→ Automatically runs parse-prd after init
```

View File

@@ -55,7 +55,7 @@ After removing:
## Example
```
/taskmaster:remove-dependency 5 from 3
/project:tm/remove-dependency 5 from 3
→ Removed: Task #5 no longer depends on #3
→ Task #5 is now UNBLOCKED and ready to start
→ Warning: Consider if #5 still needs #2 completed first

View File

@@ -63,13 +63,13 @@ task-master remove-subtask --id=<parentId.subtaskId> --convert
## Example Flows
```
/taskmaster:remove-subtask 5.1
/project:tm/remove-subtask 5.1
→ Warning: Subtask #5.1 is in-progress
→ This will delete all subtask data
→ Parent task #5 will be updated
Confirm deletion? (y/n)
/taskmaster:remove-subtask 5.1 convert
/project:tm/remove-subtask 5.1 convert
→ Converting subtask #5.1 to standalone task #89
→ Preserved: All task data and history
→ Updated: 2 dependency references

View File

@@ -77,7 +77,7 @@ Suggest alternatives:
## Example
```
/taskmaster:clear-subtasks 5
/project:tm/clear-subtasks 5
→ Found 4 subtasks to remove
→ Warning: Subtask #5.2 is in-progress
→ Cleared all subtasks from task #5

View File

@@ -85,17 +85,17 @@ Suggest before deletion:
## Example Flows
```
/taskmaster:remove-task 5
/project:tm/remove-task 5
→ Task #5 is in-progress with 8 hours logged
→ 3 other tasks depend on this
→ Suggestion: Mark as cancelled instead?
Remove anyway? (y/n)
/taskmaster:remove-task 5 -y
/project:tm/remove-task 5 -y
→ Removed: Task #5 and 4 subtasks
→ Updated: 3 task dependencies
→ Warning: Tasks #7, #8, #9 now have missing dependency
→ Run /taskmaster:fix-dependencies to resolve
→ Run /project:tm/fix-dependencies to resolve
```
## Safety Features

View File

@@ -8,11 +8,11 @@ Commands are organized hierarchically to match Task Master's CLI structure while
## Project Setup & Configuration
### `/taskmaster:init`
### `/project:tm/init`
- `init-project` - Initialize new project (handles PRD files intelligently)
- `init-project-quick` - Quick setup with auto-confirmation (-y flag)
### `/taskmaster:models`
### `/project:tm/models`
- `view-models` - View current AI model configuration
- `setup-models` - Interactive model configuration
- `set-main` - Set primary generation model
@@ -21,21 +21,21 @@ Commands are organized hierarchically to match Task Master's CLI structure while
## Task Generation
### `/taskmaster:parse-prd`
### `/project:tm/parse-prd`
- `parse-prd` - Generate tasks from PRD document
- `parse-prd-with-research` - Enhanced parsing with research mode
### `/taskmaster:generate`
### `/project:tm/generate`
- `generate-tasks` - Create individual task files from tasks.json
## Task Management
### `/taskmaster:list`
### `/project:tm/list`
- `list-tasks` - Smart listing with natural language filters
- `list-tasks-with-subtasks` - Include subtasks in hierarchical view
- `list-tasks-by-status` - Filter by specific status
### `/taskmaster:set-status`
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
@@ -43,84 +43,84 @@ Commands are organized hierarchically to match Task Master's CLI structure while
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task
### `/taskmaster:sync-readme`
### `/project:tm/sync-readme`
- `sync-readme` - Export tasks to README.md with formatting
### `/taskmaster:update`
### `/project:tm/update`
- `update-task` - Update tasks with natural language
- `update-tasks-from-id` - Update multiple tasks from a starting point
- `update-single-task` - Update specific task
### `/taskmaster:add-task`
### `/project:tm/add-task`
- `add-task` - Add new task with AI assistance
### `/taskmaster:remove-task`
### `/project:tm/remove-task`
- `remove-task` - Remove task with confirmation
## Subtask Management
### `/taskmaster:add-subtask`
### `/project:tm/add-subtask`
- `add-subtask` - Add new subtask to parent
- `convert-task-to-subtask` - Convert existing task to subtask
### `/taskmaster:remove-subtask`
### `/project:tm/remove-subtask`
- `remove-subtask` - Remove subtask (with optional conversion)
### `/taskmaster:clear-subtasks`
### `/project:tm/clear-subtasks`
- `clear-subtasks` - Clear subtasks from specific task
- `clear-all-subtasks` - Clear all subtasks globally
## Task Analysis & Breakdown
### `/taskmaster:analyze-complexity`
### `/project:tm/analyze-complexity`
- `analyze-complexity` - Analyze and generate expansion recommendations
### `/taskmaster:complexity-report`
### `/project:tm/complexity-report`
- `complexity-report` - Display complexity analysis report
### `/taskmaster:expand`
### `/project:tm/expand`
- `expand-task` - Break down specific task
- `expand-all-tasks` - Expand all eligible tasks
- `with-research` - Enhanced expansion
## Task Navigation
### `/taskmaster:next`
### `/project:tm/next`
- `next-task` - Intelligent next task recommendation
### `/taskmaster:show`
### `/project:tm/show`
- `show-task` - Display detailed task information
### `/taskmaster:status`
### `/project:tm/status`
- `project-status` - Comprehensive project dashboard
## Dependency Management
### `/taskmaster:add-dependency`
### `/project:tm/add-dependency`
- `add-dependency` - Add task dependency
### `/taskmaster:remove-dependency`
### `/project:tm/remove-dependency`
- `remove-dependency` - Remove task dependency
### `/taskmaster:validate-dependencies`
### `/project:tm/validate-dependencies`
- `validate-dependencies` - Check for dependency issues
### `/taskmaster:fix-dependencies`
### `/project:tm/fix-dependencies`
- `fix-dependencies` - Automatically fix dependency problems
## Workflows & Automation
### `/taskmaster:workflows`
### `/project:tm/workflows`
- `smart-workflow` - Context-aware intelligent workflow execution
- `command-pipeline` - Chain multiple commands together
- `auto-implement-tasks` - Advanced auto-implementation with code generation
## Utilities
### `/taskmaster:utils`
### `/project:tm/utils`
- `analyze-project` - Deep project analysis and insights
### `/taskmaster:setup`
### `/project:tm/setup`
- `install-taskmaster` - Comprehensive installation guide
- `quick-install-taskmaster` - One-line global installation
@@ -129,17 +129,17 @@ Commands are organized hierarchically to match Task Master's CLI structure while
### Natural Language
Most commands accept natural language arguments:
```
/taskmaster:add-task create user authentication system
/taskmaster:update mark all API tasks as high priority
/taskmaster:list show blocked tasks
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```
### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/taskmaster:show 45
/taskmaster:expand 23
/taskmaster:set-status/to-done 67
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```
### Smart Defaults

View File

@@ -66,7 +66,7 @@ The AI:
## Example Updates
```
/taskmaster:update/single 5: add rate limiting
/project:tm/update/single 5: add rate limiting
→ Updating Task #5: "Implement API endpoints"
Current: Basic CRUD endpoints

View File

@@ -77,7 +77,7 @@ AI analyzes the update context and:
## Example Updates
```
/taskmaster:update/from-id 5: change database to PostgreSQL
/project:tm/update/from-id 5: change database to PostgreSQL
→ Analyzing impact starting from task #5
→ Found 6 related tasks to update
→ Updates will maintain consistency

View File

@@ -66,6 +66,6 @@ For each issue found:
## Next Steps
After validation:
- Run `/taskmaster:fix-dependencies` to auto-fix
- Run `/project:tm/fix-dependencies` to auto-fix
- Manually adjust problematic dependencies
- Rerun to verify fixes

View File

@@ -1,3 +1,3 @@
 reviews:
-  profile: chill
+  profile: assertive
   poem: false

View File

@@ -14,4 +14,4 @@ OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE
VERTEX_PROJECT_ID=your-gcp-project-id
VERTEX_LOCATION=us-central1
# Optional: Path to service account credentials JSON file (alternative to API key)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json

.gitignore (5 lines changed)
View File

@@ -96,7 +96,4 @@ apps/extension/.vscode-test/
apps/extension/vsix-build/
# turbo
.turbo
# TaskMaster Workflow State (now stored in ~/.taskmaster/sessions/)
# No longer needed in .gitignore as state is stored globally
.turbo

View File

@@ -1,93 +0,0 @@
{
"meta": {
"generatedAt": "2025-10-09T12:47:27.960Z",
"tasksAnalyzed": 10,
"totalTasks": 10,
"analysisCount": 10,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 1,
"taskTitle": "Design and Implement Global Storage System",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the global storage system implementation into: 1) Path normalization utilities with cross-platform support, 2) Run ID generation and validation, 3) Manifest.json structure and management, 4) Activity.jsonl append-only logging, 5) State.json mutable checkpoint handling, and 6) Directory structure creation and cleanup. Focus on robust error handling, atomic operations, and isolation between different runs.",
"reasoning": "Complex system requiring cross-platform path handling, multiple file formats (JSON/JSONL), atomic operations, and state management. The existing codebase shows sophisticated file operations infrastructure but this extends beyond current patterns. Implementation involves filesystem operations, concurrency concerns, and data integrity."
},
{
"taskId": 2,
"taskTitle": "Build GitAdapter with Safety Checks",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "Decompose GitAdapter into: 1) Git repository detection and validation, 2) Working tree status checking with detailed reporting, 3) Branch operations (create, checkout, list) with safety guards, 4) Commit operations with metadata embedding, 5) Default branch detection and protection logic, 6) Push operations with conflict handling, and 7) Branch name generation from patterns. Emphasize safety checks, confirmation gates, and comprehensive error messages.",
"reasoning": "High complexity due to git operations safety requirements, multiple git commands integration, error handling for various git states, and safety mechanisms. The PRD emphasizes never allowing commits on default branch and requiring clean working tree - critical safety features that need robust implementation."
},
{
"taskId": 3,
"taskTitle": "Implement Test Result Validator",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "Split test validation into: 1) Input validation and schema definition for test results, 2) RED phase validation logic (ensuring failures exist), 3) GREEN phase validation logic (ensuring all tests pass), and 4) Coverage threshold validation with configurable limits. Include comprehensive validation messages and suggestions for common failure scenarios.",
"reasoning": "Moderate complexity focused on business logic validation. The validator is framework-agnostic (only validates reported numbers), has clear validation rules, and well-defined input/output. The existing codebase shows validation patterns that can be leveraged."
},
{
"taskId": 4,
"taskTitle": "Develop WorkflowOrchestrator State Machine",
"complexityScore": 9,
"recommendedSubtasks": 8,
"expansionPrompt": "Structure the orchestrator into: 1) State machine definition and transitions (Preflight → BranchSetup → SubtaskLoop → Finalize), 2) Event emission system with comprehensive event types, 3) State persistence and recovery mechanisms, 4) Phase coordination and validation, 5) Subtask iteration and progress tracking, 6) Error handling and recovery strategies, 7) Resume functionality from checkpoints, and 8) Integration points for Git, Test, and other adapters.",
"reasoning": "Very high complexity as the central coordination component. Must orchestrate multiple adapters, handle state transitions, event emission, persistence, and recovery. The state machine needs to be robust, resumable, and coordinate all other components. Critical for the entire workflow's reliability."
},
{
"taskId": 5,
"taskTitle": "Create Enhanced Commit Message Generator",
"complexityScore": 4,
"recommendedSubtasks": 3,
"expansionPrompt": "Organize commit message generation into: 1) Template parsing and variable substitution with configurable templates, 2) Scope detection from changed files with intelligent categorization, and 3) Metadata embedding (task context, test results, coverage) with conventional commits compliance. Ensure messages are parseable and contain all required task metadata.",
"reasoning": "Relatively straightforward text processing and template system. The conventional commits format is well-defined, and the metadata requirements are clear. The existing package.json shows commander dependency for CLI patterns that can be leveraged."
},
{
"taskId": 6,
"taskTitle": "Implement Subtask TDD Loop",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the TDD loop into: 1) RED phase orchestration with test generation coordination, 2) GREEN phase orchestration with implementation guidance, 3) COMMIT phase with file staging and commit creation, 4) Attempt tracking and maximum retry logic, 5) Phase transition validation and state updates, and 6) Activity logging for all phase transitions. Focus on robust state management and clear error recovery paths.",
"reasoning": "High complexity due to coordinating multiple phases, state transitions, retry logic, and integration with multiple adapters (Git, Test, State). This is the core workflow execution engine requiring careful orchestration and error handling."
},
{
"taskId": 7,
"taskTitle": "Build CLI Commands for AI Agent Orchestration",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Structure CLI commands into: 1) Command registration and argument parsing setup, 2) `start` and `resume` commands with initialization logic, 3) `next` and `status` commands with JSON output formatting, 4) `complete` command with result validation integration, and 5) `commit` and `abort` commands with git operation coordination. Ensure consistent JSON output for machine parsing and comprehensive error handling.",
"reasoning": "Moderate complexity leveraging existing CLI infrastructure. The codebase shows commander usage patterns and CLI structure. Main complexity is in JSON output formatting, argument validation, and integration with the orchestrator component."
},
{
"taskId": 8,
"taskTitle": "Develop MCP Tools for AI Agent Integration",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Organize MCP tools into: 1) Tool schema definition and parameter validation, 2) `autopilot_start` and `autopilot_resume` tool implementation, 3) `autopilot_next` and `autopilot_status` tools with context provision, 4) `autopilot_complete_phase` tool with validation integration, and 5) `autopilot_commit` tool with git operations. Ensure parity with CLI functionality and proper error handling.",
"reasoning": "Moderate complexity building on existing MCP infrastructure. The codebase shows extensive MCP tooling patterns. Main work is adapting CLI functionality to MCP interface patterns and ensuring consistent behavior between CLI and MCP interfaces."
},
{
"taskId": 9,
"taskTitle": "Write AI Agent Integration Documentation and Templates",
"complexityScore": 2,
"recommendedSubtasks": 2,
"expansionPrompt": "Structure documentation into: 1) Comprehensive workflow documentation with step-by-step examples, command usage, and integration patterns, and 2) Template creation for CLAUDE.md integration, example prompts, and troubleshooting guides. Focus on clear examples and practical integration guidance.",
"reasoning": "Low complexity documentation task. Requires understanding of the implemented system but primarily involves writing clear instructions and examples. The existing codebase shows good documentation patterns that can be followed."
},
{
"taskId": 10,
"taskTitle": "Implement Configuration System and Project Hygiene",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "Structure configuration into: 1) Configuration schema definition with comprehensive validation using ajv, 2) Default configuration setup and loading mechanisms, 3) Gitignore management and project directory hygiene rules, and 4) Configuration validation and error reporting. Ensure configurations are validated on startup and provide clear error messages for invalid settings.",
"reasoning": "Moderate complexity involving schema validation, file operations, and configuration management. The package.json shows ajv dependency is available. Configuration systems require careful validation and user-friendly error reporting, but follow established patterns."
}
]
}

View File

@@ -1,6 +1,6 @@
 {
-  "currentTag": "tdd-phase-1-core-rails",
-  "lastSwitched": "2025-10-09T12:41:40.367Z",
+  "currentTag": "master",
+  "lastSwitched": "2025-10-07T17:17:58.049Z",
   "branchTagMapping": {
     "v017-adds": "v017-adds",
     "next": "next"

File diff suppressed because one or more lines are too long

View File

@@ -1,511 +0,0 @@
<rpg-method>
# Repository Planning Graph (RPG) Method - PRD Template
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
## Core Principles
1. **Dual-Semantics**: Think functional (capabilities) AND structural (code organization) separately, then map them
2. **Explicit Dependencies**: Never assume - always state what depends on what
3. **Topological Order**: Build foundation first, then layers on top
4. **Progressive Refinement**: Start broad, refine iteratively
## How to Use This Template
- Follow the instructions in each `<instruction>` block
- Look at `<example>` blocks to see good vs bad patterns
- Fill in the content sections with your project details
- The AI reading this will learn the RPG method by following along
- Task Master will parse the resulting PRD into dependency-aware tasks
## Recommended Tools for Creating PRDs
When using this template to **create** a PRD (not parse it), use **code-context-aware AI assistants** for best results:
**Why?** The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
**Recommended tools:**
- **Claude Code** (claude-code CLI) - Best for structured reasoning and large contexts
- **Cursor/Windsurf** - IDE integration with full codebase context
- **Gemini CLI** (gemini-cli) - Massive context window for large codebases
- **Codex/Grok CLI** - Strong code generation with context awareness
**Note:** Once your PRD is created, `task-master parse-prd` works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
</rpg-method>
---
<overview>
<instruction>
Start with the problem, not the solution. Be specific about:
- What pain point exists?
- Who experiences it?
- Why existing solutions don't work?
- What success looks like (measurable outcomes)?
Keep this section focused - don't jump into implementation details yet.
</instruction>
## Problem Statement
[Describe the core problem. Be concrete about user pain points.]
## Target Users
[Define personas, their workflows, and what they're trying to achieve.]
## Success Metrics
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
</overview>
---
<functional-decomposition>
<instruction>
Now think about CAPABILITIES (what the system DOES), not code structure yet.
Step 1: Identify high-level capability domains
- Think: "What major things does this system do?"
- Examples: Data Management, Core Processing, Presentation Layer
Step 2: For each capability, enumerate specific features
- Use explore-exploit strategy:
* Exploit: What features are REQUIRED for core value?
* Explore: What features make this domain COMPLETE?
Step 3: For each feature, define:
- Description: What it does in one sentence
- Inputs: What data/context it needs
- Outputs: What it produces/returns
- Behavior: Key logic or transformations
<example type="good">
Capability: Data Validation
Feature: Schema validation
- Description: Validate JSON payloads against defined schemas
- Inputs: JSON object, schema definition
- Outputs: Validation result (pass/fail) + error details
- Behavior: Iterate fields, check types, enforce constraints
Feature: Business rule validation
- Description: Apply domain-specific validation rules
- Inputs: Validated data object, rule set
- Outputs: Boolean + list of violated rules
- Behavior: Execute rules sequentially, short-circuit on failure
</example>
<example type="bad">
Capability: validation.js
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)
Capability: Validation
Feature: Make sure data is good
(Problem: Too vague. No inputs/outputs. Not actionable.)
</example>
</instruction>
## Capability Tree
### Capability: [Name]
[Brief description of what this capability domain covers]
#### Feature: [Name]
- **Description**: [One sentence]
- **Inputs**: [What it needs]
- **Outputs**: [What it produces]
- **Behavior**: [Key logic]
#### Feature: [Name]
- **Description**:
- **Inputs**:
- **Outputs**:
- **Behavior**:
### Capability: [Name]
...
</functional-decomposition>
---
<structural-decomposition>
<instruction>
NOW think about code organization. Map capabilities to actual file/folder structure.
Rules:
1. Each capability maps to a module (folder or file)
2. Features within a capability map to functions/classes
3. Use clear module boundaries - each module has ONE responsibility
4. Define what each module exports (public interface)
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
<example type="good">
Capability: Data Validation
→ Maps to: src/validation/
├── schema-validator.js (Schema validation feature)
├── rule-validator.js (Business rule validation feature)
└── index.js (Public exports)
Exports:
- validateSchema(data, schema)
- validateRules(data, rules)
</example>
<example type="bad">
Capability: Data Validation
→ Maps to: src/utils.js
(Problem: "utils" is not a clear module boundary. Where do I find validation logic?)
Capability: Data Validation
→ Maps to: src/validation/everything.js
(Problem: One giant file. Features should map to separate files for maintainability.)
</example>
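To make the "exports" idea concrete, a minimal, hypothetical index file for the module in the good example might look like this (file and function names are assumptions carried over from that example):
```typescript
// src/validation/index.ts — illustrative public interface for the Data Validation module.
// Each feature lives in its own file; the index re-exports only what other modules may use.
export { validateSchema } from './schema-validator.js';
export { validateRules } from './rule-validator.js';
// Internal helpers stay unexported, keeping the module boundary narrow.
```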
</instruction>
## Repository Structure
```
project-root/
├── src/
│   ├── [module-name]/        # Maps to: [Capability Name]
│   │   ├── [file].js         # Maps to: [Feature Name]
│   │   └── index.js          # Public exports
│   └── [module-name]/
├── tests/
└── docs/
```
## Module Definitions
### Module: [Name]
- **Maps to capability**: [Capability from functional decomposition]
- **Responsibility**: [Single clear purpose]
- **File structure**:
```
module-name/
├── feature1.js
├── feature2.js
└── index.js
```
- **Exports**:
- `functionName()` - [what it does]
- `ClassName` - [what it does]
</structural-decomposition>
---
<dependency-graph>
<instruction>
This is THE CRITICAL SECTION for Task Master parsing.
Define explicit dependencies between modules. This creates the topological order for task execution.
Rules:
1. List modules in dependency order (foundation first)
2. For each module, state what it depends on
3. Foundation modules should have NO dependencies
4. Every non-foundation module should depend on at least one other module
5. Think: "What must EXIST before I can build this module?"
<example type="good">
Foundation Layer (no dependencies):
- error-handling: No dependencies
- config-manager: No dependencies
- base-types: No dependencies
Data Layer:
- schema-validator: Depends on [base-types, error-handling]
- data-ingestion: Depends on [schema-validator, config-manager]
Core Layer:
- algorithm-engine: Depends on [base-types, error-handling]
- pipeline-orchestrator: Depends on [algorithm-engine, data-ingestion]
</example>
<example type="bad">
- validation: Depends on API
- API: Depends on validation
(Problem: Circular dependency. This will cause build/runtime issues.)
- user-auth: Depends on everything
(Problem: Too many dependencies. Should be more focused.)
</example>
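To see why explicit dependencies matter, here is a minimal TypeScript sketch (an illustration, not Task Master's parser) that turns the good example above into a build order via Kahn's algorithm; any cycle, like the validation/API pair in the bad example, is detected instead of silently breaking the order.
```typescript
// Illustrative: derive a topological build order from declared module dependencies.
const deps: Record<string, string[]> = {
	'error-handling': [],
	'config-manager': [],
	'base-types': [],
	'schema-validator': ['base-types', 'error-handling'],
	'data-ingestion': ['schema-validator', 'config-manager'],
	'algorithm-engine': ['base-types', 'error-handling'],
	'pipeline-orchestrator': ['algorithm-engine', 'data-ingestion']
};

function topologicalOrder(graph: Record<string, string[]>): string[] {
	// in-degree = number of not-yet-built dependencies for each module
	const inDegree = new Map<string, number>(
		Object.entries(graph).map(([node, requires]) => [node, requires.length])
	);
	const ready = [...inDegree].filter(([, d]) => d === 0).map(([node]) => node);
	const order: string[] = [];
	while (ready.length > 0) {
		const node = ready.shift()!;
		order.push(node);
		// "Building" this module unblocks every module that depends on it
		for (const [other, requires] of Object.entries(graph)) {
			if (requires.includes(node)) {
				const remaining = inDegree.get(other)! - 1;
				inDegree.set(other, remaining);
				if (remaining === 0) ready.push(other);
			}
		}
	}
	if (order.length !== Object.keys(graph).length) {
		throw new Error('Circular dependency detected');
	}
	return order; // foundation modules first, orchestrators last
}

console.log(topologicalOrder(deps));
```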
</instruction>
## Dependency Chain
### Foundation Layer (Phase 0)
No dependencies - these are built first.
- **[Module Name]**: [What it provides]
- **[Module Name]**: [What it provides]
### [Layer Name] (Phase 1)
- **[Module Name]**: Depends on [[module-from-phase-0], [module-from-phase-0]]
- **[Module Name]**: Depends on [[module-from-phase-0]]
### [Layer Name] (Phase 2)
- **[Module Name]**: Depends on [[module-from-phase-1], [module-from-foundation]]
[Continue building up layers...]
</dependency-graph>
---
<implementation-roadmap>
<instruction>
Turn the dependency graph into concrete development phases.
Each phase should:
1. Have clear entry criteria (what must exist before starting)
2. Contain tasks that can be parallelized (no inter-dependencies within phase)
3. Have clear exit criteria (how do we know phase is complete?)
4. Build toward something USABLE (not just infrastructure)
Phase ordering follows topological sort of dependency graph.
<example type="good">
Phase 0: Foundation
Entry: Clean repository
Tasks:
- Implement error handling utilities
- Create base type definitions
- Setup configuration system
Exit: Other modules can import foundation without errors
Phase 1: Data Layer
Entry: Phase 0 complete
Tasks:
- Implement schema validator (uses: base types, error handling)
- Build data ingestion pipeline (uses: validator, config)
Exit: End-to-end data flow from input to validated output
</example>
<example type="bad">
Phase 1: Build Everything
Tasks:
- API
- Database
- UI
- Tests
(Problem: No clear focus. Too broad. Dependencies not considered.)
</example>
</instruction>
## Development Phases
### Phase 0: [Foundation Name]
**Goal**: [What foundational capability this establishes]
**Entry Criteria**: [What must be true before starting]
**Tasks**:
- [ ] [Task name] (depends on: [none or list])
- Acceptance criteria: [How we know it's done]
- Test strategy: [What tests prove it works]
- [ ] [Task name] (depends on: [none or list])
**Exit Criteria**: [Observable outcome that proves phase complete]
**Delivers**: [What can users/developers do after this phase?]
---
### Phase 1: [Layer Name]
**Goal**:
**Entry Criteria**: Phase 0 complete
**Tasks**:
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
- [ ] [Task name] (depends on: [[tasks-from-phase-0]])
**Exit Criteria**:
**Delivers**:
---
[Continue with more phases...]
</implementation-roadmap>
---
<test-strategy>
<instruction>
Define how testing will be integrated throughout development (TDD approach).
Specify:
1. Test pyramid ratios (unit vs integration vs e2e)
2. Coverage requirements
3. Critical test scenarios
4. Test generation guidelines for Surgical Test Generator
This section guides the AI when generating tests during the RED phase of TDD.
<example type="good">
Critical Test Scenarios for Data Validation module:
- Happy path: Valid data passes all checks
- Edge cases: Empty strings, null values, boundary numbers
- Error cases: Invalid types, missing required fields
- Integration: Validator works with ingestion pipeline
</example>
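As a hedged illustration of how those scenario categories read as code, the vitest-style sketch below exercises a hypothetical `validateSchema` helper (the import path, helper behavior, and error message are assumptions, not project conventions):
```typescript
// schema-validator.spec.ts — illustrative scenarios only.
import { describe, it, expect } from 'vitest';
import { validateSchema } from '../src/validation/schema-validator.js'; // hypothetical module

const schema = { name: { type: 'string', required: true } };

describe('schema validation', () => {
	it('accepts valid data (happy path)', () => {
		expect(validateSchema({ name: 'Task Master' }, schema).valid).toBe(true);
	});

	it('handles edge-case values', () => {
		expect(validateSchema({ name: '' }, schema).valid).toBe(true); // empty string still satisfies a plain type check
		expect(validateSchema({ name: null }, schema).valid).toBe(false); // null is not a string
	});

	it('reports missing required fields (error case)', () => {
		const result = validateSchema({}, schema);
		expect(result.valid).toBe(false);
		expect(result.errors).toContain('Missing required field: name'); // assumed error wording
	});
});
```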
</instruction>
## Test Pyramid
```
        /\
       /E2E\            ← [X]% (End-to-end, slow, comprehensive)
      /------\
    /Integration\       ← [Y]% (Module interactions)
   /--------------\
  /   Unit Tests   \    ← [Z]% (Fast, isolated, deterministic)
 /------------------\
```
## Coverage Requirements
- Line coverage: [X]% minimum
- Branch coverage: [X]% minimum
- Function coverage: [X]% minimum
- Statement coverage: [X]% minimum
## Critical Test Scenarios
### [Module/Feature Name]
**Happy path**:
- [Scenario description]
- Expected: [What should happen]
**Edge cases**:
- [Scenario description]
- Expected: [What should happen]
**Error cases**:
- [Scenario description]
- Expected: [How system handles failure]
**Integration points**:
- [What interactions to test]
- Expected: [End-to-end behavior]
## Test Generation Guidelines
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
</test-strategy>
---
<architecture>
<instruction>
Describe technical architecture, data models, and key design decisions.
Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure.
</instruction>
## System Components
[Major architectural pieces and their responsibilities]
## Data Models
[Core data structures, schemas, database design]
## Technology Stack
[Languages, frameworks, key libraries]
**Decision: [Technology/Pattern]**
- **Rationale**: [Why chosen]
- **Trade-offs**: [What we're giving up]
- **Alternatives considered**: [What else we looked at]
</architecture>
---
<risks>
<instruction>
Identify risks that could derail development and how to mitigate them.
Categories:
- Technical risks (complexity, unknowns)
- Dependency risks (blocking issues)
- Scope risks (creep, underestimation)
</instruction>
## Technical Risks
**Risk**: [Description]
- **Impact**: [High/Medium/Low - effect on project]
- **Likelihood**: [High/Medium/Low]
- **Mitigation**: [How to address]
- **Fallback**: [Plan B if mitigation fails]
## Dependency Risks
[External dependencies, blocking issues]
## Scope Risks
[Scope creep, underestimation, unclear requirements]
</risks>
---
<appendix>
## References
[Papers, documentation, similar systems]
## Glossary
[Domain-specific terms]
## Open Questions
[Things to resolve during development]
</appendix>
---
<task-master-integration>
# How Task Master Uses This PRD
When you run `task-master parse-prd <file>.txt`, the parser:
1. **Extracts capabilities** → Main tasks
- Each `### Capability:` becomes a top-level task
2. **Extracts features** → Subtasks
- Each `#### Feature:` becomes a subtask under its capability
3. **Parses dependencies** → Task dependencies
- `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
4. **Orders by phases** → Task priorities
- Phase 0 tasks = highest priority
- Phase N tasks = lower priority, properly sequenced
5. **Uses test strategy** → Test generation context
- Feeds test scenarios to Surgical Test Generator during implementation
**Result**: A dependency-aware task graph that can be executed in topological order.
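For illustration only, the parsed result might look roughly like the object below; this is a simplified, hypothetical shape meant to show the mapping, not the exact schema of `tasks.json`.
```typescript
// Hypothetical, simplified shape of the task graph parsed from an RPG-structured PRD.
const taskGraph = {
	tasks: [
		{
			id: 1,
			title: 'Data Validation', // from "### Capability: Data Validation"
			priority: 'high', // Phase 0 → highest priority
			dependencies: [], // foundation capability
			subtasks: [
				{ id: 1, title: 'Schema validation' }, // from "#### Feature: Schema validation"
				{ id: 2, title: 'Business rule validation' }
			]
		},
		{
			id: 2,
			title: 'Data Ingestion',
			priority: 'medium', // Phase 1 → sequenced after its dependencies
			dependencies: [1], // from "Depends on: [...]" in the dependency graph
			subtasks: []
		}
	]
};

// Executing in topological order means task 1 finishes before task 2 starts.
console.log(
	taskGraph.tasks.map((t) => `${t.id}: ${t.title} (after ${t.dependencies.join(', ') || 'nothing'})`)
);
```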
## Why RPG Structure Matters
Traditional flat PRDs lead to:
- ❌ Unclear task dependencies
- ❌ Arbitrary task ordering
- ❌ Circular dependencies discovered late
- ❌ Poorly scoped tasks
RPG-structured PRDs provide:
- ✅ Explicit dependency chains
- ✅ Topological execution order
- ✅ Clear module boundaries
- ✅ Validated task graph before implementation
## Tips for Best Results
1. **Spend time on dependency graph** - This is the most valuable section for Task Master
2. **Keep features atomic** - Each feature should be independently testable
3. **Progressive refinement** - Start broad, use `task-master expand` to break down complex tasks
4. **Use research mode** - `task-master parse-prd --research` leverages AI for better task generation
</task-master-integration>

View File

@@ -1,254 +1,5 @@
# task-master-ai
## 0.29.0
### Minor Changes
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
When the CLI auto-updates to a new version, it now displays a "What's New" section.
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
## 🎉 New: Claude Code Plugin
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
- **49 slash commands** with clean naming (`/taskmaster:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
**Installation:**
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
**Behavior change:** The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
**Migration for Existing Users:**
If you previously used `rules add claude`:
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories
**Why This Change?**
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
### Patch Changes
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`4c1ef2c`](https://github.com/eyaltoledano/claude-task-master/commit/4c1ef2ca94411c53bcd2a78ec710b06c500236dd) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
## 0.29.0-rc.1
### Patch Changes
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`a6c5152`](https://github.com/eyaltoledano/claude-task-master/commit/a6c5152f20edd8717cf1aea34e7c178b1261aa99) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
## 0.29.0-rc.0
### Minor Changes
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
When the CLI auto-updates to a new version, it now displays a "What's New" section.
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
## 🎉 New: Claude Code Plugin
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
- **49 slash commands** with clean naming (`/task-master-ai:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
**Installation:**
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
**Behavior change:** The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
**Migration for Existing Users:**
If you previously used `rules add claude`:
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories
**Why This Change?**
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
### Patch Changes
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
## 0.28.0
### Minor Changes
- [#1273](https://github.com/eyaltoledano/claude-task-master/pull/1273) [`b43b7ce`](https://github.com/eyaltoledano/claude-task-master/commit/b43b7ce201625eee956fb2f8cd332f238bb78c21) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add Codex CLI provider with OAuth authentication
- Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
- OAuth-first authentication via `codex login` - no API key required
- Optional OPENAI_CODEX_API_KEY support
- Codebase analysis capabilities automatically enabled
- Command-specific settings and approval/sandbox modes
- [#1215](https://github.com/eyaltoledano/claude-task-master/pull/1215) [`0079b7d`](https://github.com/eyaltoledano/claude-task-master/commit/0079b7defdad550811f704c470fdd01955d91d4d) Thanks [@joedanz](https://github.com/joedanz)! - Add Cursor IDE custom slash command support
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Added api keys page on docs website: docs.task-master.dev/getting-started/api-keys
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Move to AI SDK v5:
- Works better with claude-code and gemini-cli as ai providers
- Improved openai model family compatibility
- Migrate ollama provider to v2
- Closes #1223, #1013, #1161, #1174
- [#1262](https://github.com/eyaltoledano/claude-task-master/pull/1262) [`738ec51`](https://github.com/eyaltoledano/claude-task-master/commit/738ec51c049a295a12839b2dfddaf05e23b8fede) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Migrate AI services to use generateObject for structured data generation
This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.
### Key Changes:
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats
### Technical Improvements:
- Centralized provider configuration in `ai-providers-unified.js`
- Added `generateObject` support detection for each provider
- Implemented proper error handling for schema validation failures
- Maintained backward compatibility with existing prompt structures
### Bug Fixes:
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
- Enhanced prompt instructions to enforce proper ID generation patterns
- Ensured subtasks display correctly as X.1, X.2, X.3 format
This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
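For context, the generateText → generateObject pattern looks roughly like the sketch below: a minimal illustration of the AI SDK plus Zod approach, where the model id, schema fields, and prompt are assumptions rather than Task Master's actual service code.
```typescript
// Minimal sketch of structured generation with the AI SDK and a Zod schema (illustrative only).
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const subtaskSchema = z.object({
	subtasks: z.array(
		z.object({
			id: z.number(), // sequential numbering: 1, 2, 3... (displayed as X.1, X.2, X.3)
			title: z.string(),
			description: z.string()
		})
	)
});

const { object } = await generateObject({
	model: anthropic('claude-3-5-sonnet-latest'), // model id is an assumption
	schema: subtaskSchema, // validated automatically; schema failures surface as errors
	prompt: 'Break task 5 into 3 sequential subtasks with ids 1, 2, 3.'
});

console.log(object.subtasks);
```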
- [#1112](https://github.com/eyaltoledano/claude-task-master/pull/1112) [`d67b81d`](https://github.com/eyaltoledano/claude-task-master/commit/d67b81d25ddd927fabb6f5deb368e8993519c541) Thanks [@olssonsten](https://github.com/olssonsten)! - Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`986ac11`](https://github.com/eyaltoledano/claude-task-master/commit/986ac117aee00bcd3e6830a0f76e1ad6d10e0bca) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Upgrade grok-cli ai provider to ai sdk v5
### Patch Changes
- [#1235](https://github.com/eyaltoledano/claude-task-master/pull/1235) [`aaacc3d`](https://github.com/eyaltoledano/claude-task-master/commit/aaacc3dae36247b4de72b2d2697f49e5df6d01e3) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve `analyze-complexity` cli docs and `--research` flag documentation
- [#1251](https://github.com/eyaltoledano/claude-task-master/pull/1251) [`0b2c696`](https://github.com/eyaltoledano/claude-task-master/commit/0b2c6967c4605c33a100cff16f6ce8ff09ad06f0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Change parent task back to "pending" when all subtasks are in "pending" state
- [#1274](https://github.com/eyaltoledano/claude-task-master/pull/1274) [`4f984f8`](https://github.com/eyaltoledano/claude-task-master/commit/4f984f8a6965da9f9c7edd60ddfd6560ac022917) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Do a quick fix on build
- [#1277](https://github.com/eyaltoledano/claude-task-master/pull/1277) [`7b5a7c4`](https://github.com/eyaltoledano/claude-task-master/commit/7b5a7c4495a68b782f7407fc5d0e0d3ae81f42f5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.
- [#1276](https://github.com/eyaltoledano/claude-task-master/pull/1276) [`caee040`](https://github.com/eyaltoledano/claude-task-master/commit/caee040907f856d31a660171c9e6d966f23c632e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.
- [#1172](https://github.com/eyaltoledano/claude-task-master/pull/1172) [`b5fe723`](https://github.com/eyaltoledano/claude-task-master/commit/b5fe723f8ead928e9f2dbde13b833ee70ac3382d) Thanks [@jujax](https://github.com/jujax)! - Fix Claude Code settings validation for pathToClaudeCodeExecutable
- [#1192](https://github.com/eyaltoledano/claude-task-master/pull/1192) [`2b69936`](https://github.com/eyaltoledano/claude-task-master/commit/2b69936ee7b34346d6de5175af20e077359e2e2a) Thanks [@nukunga](https://github.com/nukunga)! - Fix the sonar deep research model failing; it should be called `sonar-deep-research`
- [#1270](https://github.com/eyaltoledano/claude-task-master/pull/1270) [`20004a3`](https://github.com/eyaltoledano/claude-task-master/commit/20004a39ea848f747e1ff48981bfe176554e4055) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix complexity score not showing for `task-master show` and `task-master list`
- Added the complexity score to the "next task" output when running `task-master list`
- Added colors to the complexity score to reflect difficulty (easy, medium, hard)
## 0.28.0-rc.2
### Minor Changes

View File

@@ -6,20 +6,13 @@
## Test Guidelines
### Test File Placement
- **Package & tests**: Place in `packages/<package-name>/src/<module>/<file>.spec.ts` or `apps/<app-name>/src/<module>/<file>.spec.ts` alongside source
- **Package integration tests**: Place in `packages/<package-name>/tests/integration/<module>/<file>.test.ts` or `apps/<app-name>/tests/integration/<module>/<file>.test.ts` alongside source
- **Isolated unit tests**: Use `tests/unit/packages/<package-name>/` only when parallel placement isn't possible
- **Test extension**: Always use `.ts` for TypeScript tests, never `.js`
### Synchronous Tests
- **NEVER use async/await in test functions** unless testing actual asynchronous operations
- Use synchronous top-level imports instead of dynamic `await import()`
- Test bodies should be synchronous whenever possible
- Example:
```typescript
// ✅ CORRECT - Synchronous imports with .ts extension
import { MyClass } from '../src/my-class.js';
it('should verify behavior', () => {
@@ -33,11 +26,6 @@
});
```
## Documentation Guidelines
- **Documentation location**: Write docs in `apps/docs/` (Mintlify site source), not `docs/`
- **Documentation URL**: Reference docs at https://docs.task-master.dev, not local file paths
## Changeset Guidelines
- When creating changesets, remember that they are user-facing: rather than detailing code specifics, describe what the end user gains or what gets fixed by the change.

View File

@@ -1,140 +0,0 @@
# Taskmaster AI - Claude Code Marketplace
This repository includes a Claude Code plugin marketplace in `.claude-plugin/marketplace.json`.
## Installation
### From GitHub (Public Repository)
Once this repository is pushed to GitHub, users can install with:
```bash
# Add the marketplace
/plugin marketplace add eyaltoledano/claude-task-master
# Install the plugin
/plugin install taskmaster@taskmaster
```
### Local Development/Testing
```bash
# From the project root directory
cd /path/to/claude-task-master
# Build the plugin first
cd packages/claude-code-plugin
npm run build
cd ../..
# In Claude Code
/plugin marketplace add .
/plugin install taskmaster@taskmaster
```
## Marketplace Structure
```
claude-task-master/
├── .claude-plugin/
│   └── marketplace.json          # Marketplace manifest (at repo root)
├── packages/claude-code-plugin/
│   ├── src/build.ts              # Build tooling
│   └── [generated plugin files]
└── assets/claude/                # Plugin source files
    ├── commands/
    └── agents/
```
## Available Plugins
### taskmaster
AI-powered task management system for ambitious development workflows.
**Features:**
- 49 slash commands for comprehensive task management
- 3 specialized AI agents (orchestrator, executor, checker)
- MCP server integration
- Complexity analysis and auto-expansion
- Dependency management and validation
- Automated workflow capabilities
**Quick Start:**
```bash
/tm:init
/tm:parse-prd
/tm:next
```
## For Contributors
### Adding New Plugins
To add more plugins to this marketplace:
1. **Update marketplace.json**:
```json
{
"plugins": [
{
"name": "new-plugin",
"source": "./path/to/plugin",
"description": "Plugin description",
"version": "1.0.0"
}
]
}
```
2. **Commit and push** the changes
3. **Users update** with: `/plugin marketplace update taskmaster`
### Marketplace Versioning
The marketplace version is tracked in `.claude-plugin/marketplace.json`:
```json
{
"metadata": {
"version": "1.0.0"
}
}
```
Increment the version when adding or updating plugins.
## Team Configuration
Organizations can auto-install this marketplace for all team members by adding to `.claude/settings.json`:
```json
{
"extraKnownMarketplaces": {
"task-master": {
"source": {
"source": "github",
"repo": "eyaltoledano/claude-task-master"
}
}
},
"enabledPlugins": {
"taskmaster": {
"marketplace": "taskmaster"
}
}
}
```
Team members who trust the repository folder will automatically get the marketplace and plugins installed.
## Documentation
- [Claude Code Plugin Docs](https://docs.claude.com/en/docs/claude-code/plugins)
- [Marketplace Documentation](https://docs.claude.com/en/docs/claude-code/plugin-marketplaces)

View File

@@ -119,7 +119,6 @@ MCP (Model Context Protocol) lets you run Task Master directly from your editor.
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -149,7 +148,6 @@ MCP (Model Context Protocol) lets you run Task Master directly from your editor.
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -198,7 +196,7 @@ Initialize taskmaster-ai in my project
#### 5. Make sure you have a PRD (Recommended)
For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`.
For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`
An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.
@@ -284,76 +282,6 @@ task-master generate
task-master rules add windsurf,roo,vscode
```
## Tool Loading Configuration
### Optimizing MCP Tool Loading
Task Master's MCP server supports selective tool loading to reduce context window usage. By default, all 36 tools are loaded (~21,000 tokens) to maintain backward compatibility with existing installations.
You can optimize performance by configuring the `TASK_MASTER_TOOLS` environment variable:
### Available Modes
| Mode | Tools | Context Usage | Use Case |
|------|-------|--------------|----------|
| `all` (default) | 36 | ~21,000 tokens | Complete feature set - all tools available |
| `standard` | 15 | ~10,000 tokens | Common task management operations |
| `core` (or `lean`) | 7 | ~5,000 tokens | Essential daily development workflow |
| `custom` | Variable | Variable | Comma-separated list of specific tools |
### Configuration Methods
#### Method 1: Environment Variable in MCP Configuration
Add `TASK_MASTER_TOOLS` to your MCP configuration file's `env` section:
```jsonc
{
"mcpServers": { // or "servers" for VS Code
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard", // Options: "all", "standard", "core", "lean", or comma-separated list
"ANTHROPIC_API_KEY": "your-key-here",
// ... other API keys
}
}
}
}
```
#### Method 2: Claude Code CLI (One-Time Setup)
For Claude Code users, you can set the mode during installation:
```bash
# Core mode example (~70% token reduction)
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="core" \
-- npx -y task-master-ai@latest
# Custom tools example
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" \
-- npx -y task-master-ai@latest
```
### Tool Sets Details
**Core Tools (7):** `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
**Standard Tools (15):** All core tools plus `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
**All Tools (36):** Complete set including project setup, task management, analysis, dependencies, tags, research, and more
### Recommendations
- **New users**: Start with `"standard"` mode for a good balance
- **Large projects**: Use `"core"` mode to minimize token usage
- **Complex workflows**: Use `"all"` mode or custom selection
- **Backward compatibility**: If not specified, defaults to `"all"` mode
## Claude Code Support
Task Master now supports Claude models through the Claude Code CLI, which requires no API key:
@@ -382,12 +310,6 @@ cd claude-task-master
node scripts/init.js
```
## Join Our Team
<a href="https://tryhamster.com" target="_blank">
<img src="./images/hamster-hiring.png" alt="Join Hamster's founding team" />
</a>
## Contributors
<a href="https://github.com/eyaltoledano/claude-task-master/graphs/contributors">

View File

@@ -4,20 +4,6 @@
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null

View File

@@ -22,7 +22,6 @@
"test:ci": "vitest run --coverage --reporter=dot"
},
"dependencies": {
"@inquirer/search": "^3.2.0",
"@tm/core": "*",
"boxen": "^8.0.1",
"chalk": "5.6.2",
@@ -49,6 +48,5 @@
"*": {
"*": ["src/*"]
}
},
"version": ""
}
}

View File

@@ -8,13 +8,12 @@ import { Command } from 'commander';
// Import all commands
import { ListTasksCommand } from './commands/list.command.js';
import { ShowCommand } from './commands/show.command.js';
import { NextCommand } from './commands/next.command.js';
import { AuthCommand } from './commands/auth.command.js';
import { ContextCommand } from './commands/context.command.js';
import { StartCommand } from './commands/start.command.js';
import { SetStatusCommand } from './commands/set-status.command.js';
import { ExportCommand } from './commands/export.command.js';
import { AutopilotCommand } from './commands/autopilot/index.js';
import { AutopilotCommand } from './commands/autopilot.command.js';
/**
* Command metadata for registration
@@ -47,12 +46,6 @@ export class CommandRegistry {
commandClass: ShowCommand as any,
category: 'task'
},
{
name: 'next',
description: 'Find the next available task to work on',
commandClass: NextCommand as any,
category: 'task'
},
{
name: 'start',
description: 'Start working on a task with claude-code',
@@ -73,8 +66,7 @@ export class CommandRegistry {
},
{
name: 'autopilot',
description:
'AI agent orchestration for TDD workflow (start, resume, next, complete, commit, status, abort)',
description: 'Execute a task autonomously using TDD workflow',
commandClass: AutopilotCommand as any,
category: 'development'
},

View File

@@ -14,8 +14,6 @@ import {
type AuthCredentials
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';
import { ContextCommand } from './context.command.js';
import { displayError } from '../utils/error-handler.js';
/**
* Result type from auth command
@@ -118,7 +116,8 @@ export class AuthCommand extends Command {
process.exit(0);
}, 100);
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -134,7 +133,8 @@ export class AuthCommand extends Command {
process.exit(1);
}
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -146,7 +146,8 @@ export class AuthCommand extends Command {
const result = this.displayStatus();
this.setLastResult(result);
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -162,7 +163,8 @@ export class AuthCommand extends Command {
process.exit(1);
}
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -185,29 +187,19 @@ export class AuthCommand extends Command {
if (credentials.expiresAt) {
const expiresAt = new Date(credentials.expiresAt);
const now = new Date();
const timeRemaining = expiresAt.getTime() - now.getTime();
const hoursRemaining = Math.floor(timeRemaining / (1000 * 60 * 60));
const minutesRemaining = Math.floor(timeRemaining / (1000 * 60));
const hoursRemaining = Math.floor(
(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
);
if (timeRemaining > 0) {
// Token is still valid
if (hoursRemaining > 0) {
console.log(
chalk.gray(
` Expires at: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
)
);
} else {
console.log(
chalk.gray(
` Expires at: ${expiresAt.toLocaleString()} (${minutesRemaining} minutes remaining)`
)
);
}
} else {
// Token has expired
if (hoursRemaining > 0) {
console.log(
chalk.yellow(` Expired at: ${expiresAt.toLocaleString()}`)
chalk.gray(
` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
)
);
} else {
console.log(
chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
);
}
} else {
@@ -349,37 +341,6 @@ export class AuthCommand extends Command {
chalk.gray(` Logged in as: ${credentials.email || credentials.userId}`)
);
// Post-auth: Set up workspace context
console.log(); // Add spacing
try {
const contextCommand = new ContextCommand();
const contextResult = await contextCommand.setupContextInteractive();
if (contextResult.success) {
if (contextResult.orgSelected && contextResult.briefSelected) {
console.log(
chalk.green('✓ Workspace context configured successfully')
);
} else if (contextResult.orgSelected) {
console.log(chalk.green('✓ Organization selected'));
}
} else {
console.log(
chalk.yellow('⚠ Context setup was skipped or encountered issues')
);
console.log(
chalk.gray(' You can set up context later with "tm context"')
);
}
} catch (contextError) {
console.log(chalk.yellow('⚠ Context setup encountered an error'));
console.log(
chalk.gray(' You can set up context later with "tm context"')
);
if (process.env.DEBUG) {
console.error(chalk.gray((contextError as Error).message));
}
}
return {
success: true,
action: 'login',
@@ -387,7 +348,7 @@ export class AuthCommand extends Command {
message: 'Authentication successful'
};
} catch (error) {
displayError(error, { skipExit: true });
this.handleAuthError(error as AuthenticationError);
return {
success: false,
@@ -450,6 +411,51 @@ export class AuthCommand extends Command {
}
}
/**
* Handle authentication errors
*/
private handleAuthError(error: AuthenticationError): void {
console.error(chalk.red(`\n✗ ${error.message}`));
switch (error.code) {
case 'NETWORK_ERROR':
ui.displayWarning(
'Please check your internet connection and try again.'
);
break;
case 'INVALID_CREDENTIALS':
ui.displayWarning('Please check your credentials and try again.');
break;
case 'AUTH_EXPIRED':
ui.displayWarning(
'Your session has expired. Please authenticate again.'
);
break;
default:
if (process.env.DEBUG) {
console.error(chalk.gray(error.stack || ''));
}
}
}
/**
* Handle general errors
*/
private handleError(error: any): void {
if (error instanceof AuthenticationError) {
this.handleAuthError(error);
} else {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
}
}
/**
* Set the last result for programmatic access
*/

View File

@@ -1,119 +0,0 @@
/**
* @fileoverview Abort Command - Safely terminate workflow
*/
import { Command } from 'commander';
import { WorkflowOrchestrator } from '@tm/core';
import {
AutopilotBaseOptions,
hasWorkflowState,
loadWorkflowState,
deleteWorkflowState,
OutputFormatter
} from './shared.js';
import inquirer from 'inquirer';
interface AbortOptions extends AutopilotBaseOptions {
force?: boolean;
}
/**
* Abort Command - Safely terminate workflow and clean up state
*/
export class AbortCommand extends Command {
constructor() {
super('abort');
this.description('Abort the current TDD workflow and clean up state')
.option('-f, --force', 'Force abort without confirmation')
.action(async (options: AbortOptions) => {
await this.execute(options);
});
}
private async execute(options: AbortOptions): Promise<void> {
// Inherit parent options
const parentOpts = this.parent?.opts() as AutopilotBaseOptions;
const mergedOptions: AbortOptions = {
...parentOpts,
...options,
projectRoot:
options.projectRoot || parentOpts?.projectRoot || process.cwd()
};
const formatter = new OutputFormatter(mergedOptions.json || false);
try {
// Check for workflow state
const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
if (!hasState) {
formatter.warning('No active workflow to abort');
return;
}
// Load state
const state = await loadWorkflowState(mergedOptions.projectRoot!);
if (!state) {
formatter.error('Failed to load workflow state');
process.exit(1);
}
// Restore orchestrator
const orchestrator = new WorkflowOrchestrator(state.context);
orchestrator.restoreState(state);
// Get progress before abort
const progress = orchestrator.getProgress();
const currentSubtask = orchestrator.getCurrentSubtask();
// Confirm abort if not forced or in JSON mode
if (!mergedOptions.force && !mergedOptions.json) {
const { confirmed } = await inquirer.prompt([
{
type: 'confirm',
name: 'confirmed',
message:
`This will abort the workflow for task ${state.context.taskId}. ` +
`Progress: ${progress.completed}/${progress.total} subtasks completed. ` +
`Continue?`,
default: false
}
]);
if (!confirmed) {
formatter.info('Abort cancelled');
return;
}
}
// Trigger abort in orchestrator
orchestrator.transition({ type: 'ABORT' });
// Delete workflow state
await deleteWorkflowState(mergedOptions.projectRoot!);
// Output result
formatter.success('Workflow aborted', {
taskId: state.context.taskId,
branchName: state.context.branchName,
progress: {
completed: progress.completed,
total: progress.total
},
lastSubtask: currentSubtask
? {
id: currentSubtask.id,
title: currentSubtask.title
}
: null,
note: 'Branch and commits remain. Clean up manually if needed.'
});
} catch (error) {
formatter.error((error as Error).message);
if (mergedOptions.verbose) {
console.error((error as Error).stack);
}
process.exit(1);
}
}
}

View File

@@ -1,169 +0,0 @@
/**
* @fileoverview Commit Command - Create commit with enhanced message generation
*/
import { Command } from 'commander';
import { WorkflowOrchestrator } from '@tm/core';
import {
AutopilotBaseOptions,
hasWorkflowState,
loadWorkflowState,
createGitAdapter,
createCommitMessageGenerator,
OutputFormatter,
saveWorkflowState
} from './shared.js';
type CommitOptions = AutopilotBaseOptions;
/**
* Commit Command - Create commit using enhanced message generator
*/
export class CommitCommand extends Command {
constructor() {
super('commit');
this.description('Create a commit for the completed GREEN phase').action(
async (options: CommitOptions) => {
await this.execute(options);
}
);
}
private async execute(options: CommitOptions): Promise<void> {
// Inherit parent options
const parentOpts = this.parent?.opts() as AutopilotBaseOptions;
const mergedOptions: CommitOptions = {
...parentOpts,
...options,
projectRoot:
options.projectRoot || parentOpts?.projectRoot || process.cwd()
};
const formatter = new OutputFormatter(mergedOptions.json || false);
try {
// Check for workflow state
const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
if (!hasState) {
formatter.error('No active workflow', {
suggestion: 'Start a workflow with: autopilot start <taskId>'
});
process.exit(1);
}
// Load state
const state = await loadWorkflowState(mergedOptions.projectRoot!);
if (!state) {
formatter.error('Failed to load workflow state');
process.exit(1);
}
const orchestrator = new WorkflowOrchestrator(state.context);
orchestrator.restoreState(state);
orchestrator.enableAutoPersist(async (newState) => {
await saveWorkflowState(mergedOptions.projectRoot!, newState);
});
// Verify in COMMIT phase
const tddPhase = orchestrator.getCurrentTDDPhase();
if (tddPhase !== 'COMMIT') {
formatter.error('Not in COMMIT phase', {
currentPhase: tddPhase || orchestrator.getCurrentPhase(),
suggestion: 'Complete RED and GREEN phases first'
});
process.exit(1);
}
// Get current subtask
const currentSubtask = orchestrator.getCurrentSubtask();
if (!currentSubtask) {
formatter.error('No current subtask');
process.exit(1);
}
// Initialize git adapter
const gitAdapter = createGitAdapter(mergedOptions.projectRoot!);
await gitAdapter.ensureGitRepository();
// Check for staged changes
const hasStagedChanges = await gitAdapter.hasStagedChanges();
if (!hasStagedChanges) {
// Stage all changes
formatter.info('No staged changes, staging all changes...');
await gitAdapter.stageFiles(['.']);
}
// Get changed files for scope detection
const status = await gitAdapter.getStatus();
const changedFiles = [...status.staged, ...status.modified];
// Generate commit message
const messageGenerator = createCommitMessageGenerator();
const testResults = state.context.lastTestResults;
const commitMessage = messageGenerator.generateMessage({
type: 'feat',
description: currentSubtask.title,
changedFiles,
taskId: state.context.taskId,
phase: 'TDD',
tag: (state.context.metadata.tag as string) || undefined,
testsPassing: testResults?.passed,
testsFailing: testResults?.failed,
coveragePercent: undefined // Could be added if available
});
// Create commit with metadata
await gitAdapter.createCommit(commitMessage, {
metadata: {
taskId: state.context.taskId,
subtaskId: currentSubtask.id,
phase: 'COMMIT',
tddCycle: 'complete'
}
});
// Get commit info
const lastCommit = await gitAdapter.getLastCommit();
// Complete COMMIT phase (this marks subtask as completed)
orchestrator.transition({ type: 'COMMIT_COMPLETE' });
// Check if should advance to next subtask
const progress = orchestrator.getProgress();
if (progress.current < progress.total) {
orchestrator.transition({ type: 'SUBTASK_COMPLETE' });
} else {
// All subtasks complete
orchestrator.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
}
// Output success
formatter.success('Commit created', {
commitHash: lastCommit.hash.substring(0, 7),
message: commitMessage.split('\n')[0], // First line only
subtask: {
id: currentSubtask.id,
title: currentSubtask.title,
status: currentSubtask.status
},
progress: {
completed: progress.completed,
total: progress.total,
percentage: progress.percentage
},
nextAction:
progress.completed < progress.total
? 'Start next subtask with RED phase'
: 'All subtasks complete. Run: autopilot status'
});
} catch (error) {
formatter.error((error as Error).message);
if (mergedOptions.verbose) {
console.error((error as Error).stack);
}
process.exit(1);
}
}
}

View File

@@ -1,172 +0,0 @@
/**
* @fileoverview Complete Command - Complete current TDD phase with validation
*/
import { Command } from 'commander';
import { WorkflowOrchestrator, TestResult } from '@tm/core';
import {
AutopilotBaseOptions,
hasWorkflowState,
loadWorkflowState,
OutputFormatter
} from './shared.js';
interface CompleteOptions extends AutopilotBaseOptions {
results?: string;
coverage?: string;
}
/**
* Complete Command - Mark current phase as complete with validation
*/
export class CompleteCommand extends Command {
constructor() {
super('complete');
this.description('Complete the current TDD phase with result validation')
.option(
'-r, --results <json>',
'Test results JSON (with total, passed, failed, skipped)'
)
.option('-c, --coverage <percent>', 'Coverage percentage')
.action(async (options: CompleteOptions) => {
await this.execute(options);
});
}
private async execute(options: CompleteOptions): Promise<void> {
// Inherit parent options
const parentOpts = this.parent?.opts() as AutopilotBaseOptions;
const mergedOptions: CompleteOptions = {
...parentOpts,
...options,
projectRoot:
options.projectRoot || parentOpts?.projectRoot || process.cwd()
};
const formatter = new OutputFormatter(mergedOptions.json || false);
try {
// Check for workflow state
const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
if (!hasState) {
formatter.error('No active workflow', {
suggestion: 'Start a workflow with: autopilot start <taskId>'
});
process.exit(1);
}
// Load state
const state = await loadWorkflowState(mergedOptions.projectRoot!);
if (!state) {
formatter.error('Failed to load workflow state');
process.exit(1);
}
// Restore orchestrator with persistence
const { saveWorkflowState } = await import('./shared.js');
const orchestrator = new WorkflowOrchestrator(state.context);
orchestrator.restoreState(state);
orchestrator.enableAutoPersist(async (newState) => {
await saveWorkflowState(mergedOptions.projectRoot!, newState);
});
// Get current phase
const tddPhase = orchestrator.getCurrentTDDPhase();
const currentSubtask = orchestrator.getCurrentSubtask();
if (!tddPhase) {
formatter.error('Not in a TDD phase', {
phase: orchestrator.getCurrentPhase()
});
process.exit(1);
}
// Validate based on phase
if (tddPhase === 'RED' || tddPhase === 'GREEN') {
if (!mergedOptions.results) {
formatter.error('Test results required for RED/GREEN phase', {
usage:
'--results \'{"total":10,"passed":9,"failed":1,"skipped":0}\''
});
process.exit(1);
}
// Parse test results
let testResults: TestResult;
try {
const parsed = JSON.parse(mergedOptions.results);
testResults = {
total: parsed.total || 0,
passed: parsed.passed || 0,
failed: parsed.failed || 0,
skipped: parsed.skipped || 0,
phase: tddPhase
};
} catch (error) {
formatter.error('Invalid test results JSON', {
error: (error as Error).message
});
process.exit(1);
}
// Validate RED phase requirements
if (tddPhase === 'RED' && testResults.failed === 0) {
formatter.error('RED phase validation failed', {
reason: 'At least one test must be failing',
actual: {
passed: testResults.passed,
failed: testResults.failed
}
});
process.exit(1);
}
// Validate GREEN phase requirements
if (tddPhase === 'GREEN' && testResults.failed !== 0) {
formatter.error('GREEN phase validation failed', {
reason: 'All tests must pass',
actual: {
passed: testResults.passed,
failed: testResults.failed
}
});
process.exit(1);
}
// Complete phase with test results
if (tddPhase === 'RED') {
orchestrator.transition({
type: 'RED_PHASE_COMPLETE',
testResults
});
formatter.success('RED phase completed', {
nextPhase: 'GREEN',
testResults,
subtask: currentSubtask?.title
});
} else {
orchestrator.transition({
type: 'GREEN_PHASE_COMPLETE',
testResults
});
formatter.success('GREEN phase completed', {
nextPhase: 'COMMIT',
testResults,
subtask: currentSubtask?.title,
suggestion: 'Run: autopilot commit'
});
}
} else if (tddPhase === 'COMMIT') {
formatter.error('Use "autopilot commit" to complete COMMIT phase');
process.exit(1);
}
} catch (error) {
formatter.error((error as Error).message);
if (mergedOptions.verbose) {
console.error((error as Error).stack);
}
process.exit(1);
}
}
}

View File

@@ -1,82 +0,0 @@
/**
* @fileoverview Autopilot CLI Commands for AI Agent Orchestration
* Provides subcommands for starting, resuming, and advancing the TDD workflow
* with JSON output for machine parsing.
*/
import { Command } from 'commander';
import { StartCommand } from './start.command.js';
import { ResumeCommand } from './resume.command.js';
import { NextCommand } from './next.command.js';
import { CompleteCommand } from './complete.command.js';
import { CommitCommand } from './commit.command.js';
import { StatusCommand } from './status.command.js';
import { AbortCommand } from './abort.command.js';
/**
* Shared command options for all autopilot commands
*/
export interface AutopilotBaseOptions {
json?: boolean;
verbose?: boolean;
projectRoot?: string;
}
/**
* AutopilotCommand with subcommands for TDD workflow orchestration
*/
export class AutopilotCommand extends Command {
constructor() {
super('autopilot');
// Configure main command
this.description('AI agent orchestration for TDD workflow execution')
.alias('ap')
// Global options for all subcommands
.option('--json', 'Output in JSON format for machine parsing')
.option('-v, --verbose', 'Enable verbose output')
.option(
'-p, --project-root <path>',
'Project root directory',
process.cwd()
);
// Register subcommands
this.registerSubcommands();
}
/**
* Register all autopilot subcommands
*/
private registerSubcommands(): void {
// Start new TDD workflow
this.addCommand(new StartCommand());
// Resume existing workflow
this.addCommand(new ResumeCommand());
// Get next action
this.addCommand(new NextCommand());
// Complete current phase
this.addCommand(new CompleteCommand());
// Create commit
this.addCommand(new CommitCommand());
// Show status
this.addCommand(new StatusCommand());
// Abort workflow
this.addCommand(new AbortCommand());
}
/**
* Register this command on an existing program
*/
static register(program: Command): AutopilotCommand {
const autopilotCommand = new AutopilotCommand();
program.addCommand(autopilotCommand);
return autopilotCommand;
}
}
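A minimal wiring sketch for the static register helper above; the program name and import path are assumptions, since the real CLI entry point lives elsewhere in the repo.

import { Command } from 'commander';
import { AutopilotCommand } from './commands/autopilot/index.js'; // path is an assumption

const program = new Command('task-master');
AutopilotCommand.register(program);

// `autopilot` (alias `ap`) now exposes start/resume/next/complete/commit/status/abort,
// all inheriting the global --json, --verbose and --project-root options
await program.parseAsync(process.argv);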

View File

@@ -1,164 +0,0 @@
/**
* @fileoverview Next Command - Get next action in TDD workflow
*/
import { Command } from 'commander';
import { WorkflowOrchestrator } from '@tm/core';
import {
AutopilotBaseOptions,
hasWorkflowState,
loadWorkflowState,
OutputFormatter
} from './shared.js';
type NextOptions = AutopilotBaseOptions;
/**
* Next Command - Get next action details
*/
export class NextCommand extends Command {
constructor() {
super('next');
this.description(
'Get the next action to perform in the TDD workflow'
).action(async (options: NextOptions) => {
await this.execute(options);
});
}
private async execute(options: NextOptions): Promise<void> {
// Inherit parent options
const parentOpts = this.parent?.opts() as AutopilotBaseOptions;
const mergedOptions: NextOptions = {
...parentOpts,
...options,
projectRoot:
options.projectRoot || parentOpts?.projectRoot || process.cwd()
};
const formatter = new OutputFormatter(mergedOptions.json || false);
try {
// Check for workflow state
const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
if (!hasState) {
formatter.error('No active workflow', {
suggestion: 'Start a workflow with: autopilot start <taskId>'
});
process.exit(1);
}
// Load state
const state = await loadWorkflowState(mergedOptions.projectRoot!);
if (!state) {
formatter.error('Failed to load workflow state');
process.exit(1);
}
// Restore orchestrator
const orchestrator = new WorkflowOrchestrator(state.context);
orchestrator.restoreState(state);
// Get current phase and subtask
const phase = orchestrator.getCurrentPhase();
const tddPhase = orchestrator.getCurrentTDDPhase();
const currentSubtask = orchestrator.getCurrentSubtask();
// Determine next action based on phase
let actionType: string;
let actionDescription: string;
let actionDetails: Record<string, unknown> = {};
if (phase === 'COMPLETE') {
formatter.success('Workflow complete', {
message: 'All subtasks have been completed',
taskId: state.context.taskId
});
return;
}
if (phase === 'SUBTASK_LOOP' && tddPhase) {
switch (tddPhase) {
case 'RED':
actionType = 'generate_test';
actionDescription = 'Write failing test for current subtask';
actionDetails = {
subtask: currentSubtask
? {
id: currentSubtask.id,
title: currentSubtask.title,
attempts: currentSubtask.attempts
}
: null,
testCommand: 'npm test', // Could be customized based on config
expectedOutcome: 'Test should fail'
};
break;
case 'GREEN':
actionType = 'implement_code';
actionDescription = 'Implement code to pass the failing test';
actionDetails = {
subtask: currentSubtask
? {
id: currentSubtask.id,
title: currentSubtask.title,
attempts: currentSubtask.attempts
}
: null,
testCommand: 'npm test',
expectedOutcome: 'All tests should pass',
lastTestResults: state.context.lastTestResults
};
break;
case 'COMMIT':
actionType = 'commit_changes';
actionDescription = 'Commit the changes';
actionDetails = {
subtask: currentSubtask
? {
id: currentSubtask.id,
title: currentSubtask.title,
attempts: currentSubtask.attempts
}
: null,
suggestion: 'Use: autopilot commit'
};
break;
default:
actionType = 'unknown';
actionDescription = 'Unknown TDD phase';
}
} else {
actionType = 'workflow_phase';
actionDescription = `Currently in ${phase} phase`;
}
// Output next action
const output = {
action: actionType,
description: actionDescription,
phase,
tddPhase,
taskId: state.context.taskId,
branchName: state.context.branchName,
...actionDetails
};
if (mergedOptions.json) {
formatter.output(output);
} else {
formatter.success('Next action', output);
}
} catch (error) {
formatter.error((error as Error).message);
if (mergedOptions.verbose) {
console.error((error as Error).stack);
}
process.exit(1);
}
}
}
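For agents consuming the machine-readable mode, a sketch of the payload `autopilot next --json` would print during the RED phase; the keys mirror the output object assembled above, while the values are illustrative.

// Illustrative RED-phase payload; shape follows the `output` object built in execute()
const exampleRedPhaseOutput = {
	action: 'generate_test',
	description: 'Write failing test for current subtask',
	phase: 'SUBTASK_LOOP',
	tddPhase: 'RED',
	taskId: '12',
	branchName: 'task-12-add-login-form',
	subtask: { id: '12.1', title: 'Add failing login test', attempts: 0 },
	testCommand: 'npm test',
	expectedOutcome: 'Test should fail'
};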

View File

@@ -1,111 +0,0 @@
/**
* @fileoverview Resume Command - Restore and resume TDD workflow
*/
import { Command } from 'commander';
import { WorkflowOrchestrator } from '@tm/core';
import {
AutopilotBaseOptions,
hasWorkflowState,
loadWorkflowState,
OutputFormatter
} from './shared.js';
type ResumeOptions = AutopilotBaseOptions;
/**
* Resume Command - Restore workflow from saved state
*/
export class ResumeCommand extends Command {
constructor() {
super('resume');
this.description('Resume a previously started TDD workflow').action(
async (options: ResumeOptions) => {
await this.execute(options);
}
);
}
private async execute(options: ResumeOptions): Promise<void> {
// Inherit parent options (autopilot command)
const parentOpts = this.parent?.opts() as AutopilotBaseOptions;
const mergedOptions: ResumeOptions = {
...parentOpts,
...options,
projectRoot:
options.projectRoot || parentOpts?.projectRoot || process.cwd()
};
const formatter = new OutputFormatter(mergedOptions.json || false);
try {
// Check for workflow state
const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
if (!hasState) {
formatter.error('No workflow state found', {
suggestion: 'Start a new workflow with: autopilot start <taskId>'
});
process.exit(1);
}
// Load state
formatter.info('Loading workflow state...');
const state = await loadWorkflowState(mergedOptions.projectRoot!);
if (!state) {
formatter.error('Failed to load workflow state');
process.exit(1);
}
// Validate state can be resumed
const orchestrator = new WorkflowOrchestrator(state.context);
if (!orchestrator.canResumeFromState(state)) {
formatter.error('Invalid workflow state', {
suggestion:
'State file may be corrupted. Consider starting a new workflow.'
});
process.exit(1);
}
// Restore state
orchestrator.restoreState(state);
// Re-enable auto-persistence
const { saveWorkflowState } = await import('./shared.js');
orchestrator.enableAutoPersist(async (newState) => {
await saveWorkflowState(mergedOptions.projectRoot!, newState);
});
// Get progress
const progress = orchestrator.getProgress();
const currentSubtask = orchestrator.getCurrentSubtask();
// Output success
formatter.success('Workflow resumed', {
taskId: state.context.taskId,
phase: orchestrator.getCurrentPhase(),
tddPhase: orchestrator.getCurrentTDDPhase(),
branchName: state.context.branchName,
progress: {
completed: progress.completed,
total: progress.total,
percentage: progress.percentage
},
currentSubtask: currentSubtask
? {
id: currentSubtask.id,
title: currentSubtask.title,
attempts: currentSubtask.attempts
}
: null
});
} catch (error) {
formatter.error((error as Error).message);
if (mergedOptions.verbose) {
console.error((error as Error).stack);
}
process.exit(1);
}
}
}

View File

@@ -1,262 +0,0 @@
/**
* @fileoverview Shared utilities for autopilot commands
*/
import {
WorkflowOrchestrator,
WorkflowStateManager,
GitAdapter,
CommitMessageGenerator
} from '@tm/core';
import type { WorkflowState, WorkflowContext, SubtaskInfo } from '@tm/core';
import chalk from 'chalk';
/**
* Base options interface for all autopilot commands
*/
export interface AutopilotBaseOptions {
projectRoot?: string;
json?: boolean;
verbose?: boolean;
}
/**
* Load workflow state from disk using WorkflowStateManager
*/
export async function loadWorkflowState(
projectRoot: string
): Promise<WorkflowState | null> {
const stateManager = new WorkflowStateManager(projectRoot);
if (!(await stateManager.exists())) {
return null;
}
try {
return await stateManager.load();
} catch (error) {
throw new Error(
`Failed to load workflow state: ${(error as Error).message}`
);
}
}
/**
* Save workflow state to disk using WorkflowStateManager
*/
export async function saveWorkflowState(
projectRoot: string,
state: WorkflowState
): Promise<void> {
const stateManager = new WorkflowStateManager(projectRoot);
try {
await stateManager.save(state);
} catch (error) {
throw new Error(
`Failed to save workflow state: ${(error as Error).message}`
);
}
}
/**
* Delete workflow state from disk using WorkflowStateManager
*/
export async function deleteWorkflowState(projectRoot: string): Promise<void> {
const stateManager = new WorkflowStateManager(projectRoot);
await stateManager.delete();
}
/**
* Check if workflow state exists using WorkflowStateManager
*/
export async function hasWorkflowState(projectRoot: string): Promise<boolean> {
const stateManager = new WorkflowStateManager(projectRoot);
return await stateManager.exists();
}
/**
* Initialize WorkflowOrchestrator with persistence
*/
export function createOrchestrator(
context: WorkflowContext,
projectRoot: string
): WorkflowOrchestrator {
const orchestrator = new WorkflowOrchestrator(context);
const stateManager = new WorkflowStateManager(projectRoot);
// Enable auto-persistence
orchestrator.enableAutoPersist(async (state: WorkflowState) => {
await stateManager.save(state);
});
return orchestrator;
}
/**
* Initialize GitAdapter for project
*/
export function createGitAdapter(projectRoot: string): GitAdapter {
return new GitAdapter(projectRoot);
}
/**
* Initialize CommitMessageGenerator
*/
export function createCommitMessageGenerator(): CommitMessageGenerator {
return new CommitMessageGenerator();
}
/**
* Output formatter for JSON and text modes
*/
export class OutputFormatter {
constructor(private useJson: boolean) {}
/**
* Output data in appropriate format
*/
output(data: Record<string, unknown>): void {
if (this.useJson) {
console.log(JSON.stringify(data, null, 2));
} else {
this.outputText(data);
}
}
/**
* Output data in human-readable text format
*/
private outputText(data: Record<string, unknown>): void {
for (const [key, value] of Object.entries(data)) {
if (typeof value === 'object' && value !== null) {
console.log(chalk.cyan(`${key}:`));
this.outputObject(value as Record<string, unknown>, ' ');
} else {
console.log(chalk.white(`${key}: ${value}`));
}
}
}
/**
* Output nested object with indentation
*/
private outputObject(obj: Record<string, unknown>, indent: string): void {
for (const [key, value] of Object.entries(obj)) {
if (typeof value === 'object' && value !== null) {
console.log(chalk.cyan(`${indent}${key}:`));
this.outputObject(value as Record<string, unknown>, indent + ' ');
} else {
console.log(chalk.gray(`${indent}${key}: ${value}`));
}
}
}
/**
* Output error message
*/
error(message: string, details?: Record<string, unknown>): void {
if (this.useJson) {
console.error(
JSON.stringify(
{
error: message,
...details
},
null,
2
)
);
} else {
console.error(chalk.red(`Error: ${message}`));
if (details) {
for (const [key, value] of Object.entries(details)) {
console.error(chalk.gray(` ${key}: ${value}`));
}
}
}
}
/**
* Output success message
*/
success(message: string, data?: Record<string, unknown>): void {
if (this.useJson) {
console.log(
JSON.stringify(
{
success: true,
message,
...data
},
null,
2
)
);
} else {
console.log(chalk.green(`${message}`));
if (data) {
this.output(data);
}
}
}
/**
* Output warning message
*/
warning(message: string): void {
if (this.useJson) {
console.warn(
JSON.stringify(
{
warning: message
},
null,
2
)
);
} else {
console.warn(chalk.yellow(`${message}`));
}
}
/**
* Output info message
*/
info(message: string): void {
if (this.useJson) {
// Don't output info messages in JSON mode
return;
}
console.log(chalk.blue(` ${message}`));
}
}
/**
* Validate task ID format
*/
export function validateTaskId(taskId: string): boolean {
// Task ID must be dot-separated numbers, e.g. "1", "1.2", or "1.2.3"
const pattern = /^\d+(\.\d+)*$/;
return pattern.test(taskId);
}
/**
* Parse subtasks from task data
*/
export function parseSubtasks(
task: any,
maxAttempts: number = 3
): SubtaskInfo[] {
if (!task.subtasks || !Array.isArray(task.subtasks)) {
return [];
}
return task.subtasks.map((subtask: any) => ({
id: subtask.id,
title: subtask.title,
status: subtask.status === 'done' ? 'completed' : 'pending',
attempts: 0,
maxAttempts
}));
}
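A quick usage sketch for the two helpers above; the task object passed to parseSubtasks is illustrative.

import { validateTaskId, parseSubtasks } from './shared.js';

validateTaskId('1'); // true
validateTaskId('1.2'); // true
validateTaskId('1.2.3'); // true – the regex accepts any depth of dot-separated numbers
validateTaskId('1-2'); // false

const subtasks = parseSubtasks(
	{ subtasks: [{ id: '1.1', title: 'Write schema', status: 'done' }] },
	3
);
// → [{ id: '1.1', title: 'Write schema', status: 'completed', attempts: 0, maxAttempts: 3 }]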

View File

@@ -1,168 +0,0 @@
/**
* @fileoverview Start Command - Initialize and start TDD workflow
*/
import { Command } from 'commander';
import { createTaskMasterCore, type WorkflowContext } from '@tm/core';
import {
AutopilotBaseOptions,
hasWorkflowState,
createOrchestrator,
createGitAdapter,
OutputFormatter,
validateTaskId,
parseSubtasks
} from './shared.js';
interface StartOptions extends AutopilotBaseOptions {
force?: boolean;
maxAttempts?: string;
}
/**
* Start Command - Initialize new TDD workflow
*/
export class StartCommand extends Command {
constructor() {
super('start');
this.description('Initialize and start a new TDD workflow for a task')
.argument('<taskId>', 'Task ID to start workflow for')
.option('-f, --force', 'Force start even if workflow state exists')
.option('--max-attempts <number>', 'Maximum attempts per subtask', '3')
.action(async (taskId: string, options: StartOptions) => {
await this.execute(taskId, options);
});
}
private async execute(taskId: string, options: StartOptions): Promise<void> {
// Inherit parent options
const parentOpts = this.parent?.opts() as AutopilotBaseOptions;
const mergedOptions: StartOptions = {
...parentOpts,
...options,
projectRoot:
options.projectRoot || parentOpts?.projectRoot || process.cwd()
};
const formatter = new OutputFormatter(mergedOptions.json || false);
try {
// Validate task ID
if (!validateTaskId(taskId)) {
formatter.error('Invalid task ID format', {
taskId,
expected: 'Format: number or number.number (e.g., "1" or "1.2")'
});
process.exit(1);
}
// Check for existing workflow state
const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
if (hasState && !mergedOptions.force) {
formatter.error(
'Workflow state already exists. Use --force to overwrite or resume with "autopilot resume"'
);
process.exit(1);
}
// Initialize Task Master Core
const tmCore = await createTaskMasterCore({
projectPath: mergedOptions.projectRoot!
});
// Get current tag from ConfigManager
const currentTag = tmCore.getActiveTag();
// Load task
formatter.info(`Loading task ${taskId}...`);
const { task } = await tmCore.getTaskWithSubtask(taskId);
if (!task) {
formatter.error('Task not found', { taskId });
await tmCore.close();
process.exit(1);
}
// Validate task has subtasks
if (!task.subtasks || task.subtasks.length === 0) {
formatter.error('Task has no subtasks. Expand task first.', {
taskId,
suggestion: `Run: task-master expand --id=${taskId}`
});
await tmCore.close();
process.exit(1);
}
// Initialize Git adapter
const gitAdapter = createGitAdapter(mergedOptions.projectRoot!);
await gitAdapter.ensureGitRepository();
await gitAdapter.ensureCleanWorkingTree();
// Parse subtasks
const maxAttempts = parseInt(mergedOptions.maxAttempts || '3', 10);
const subtasks = parseSubtasks(task, maxAttempts);
// Create workflow context
const context: WorkflowContext = {
taskId: task.id,
subtasks,
currentSubtaskIndex: 0,
errors: [],
metadata: {
startedAt: new Date().toISOString(),
tags: task.tags || []
}
};
// Create orchestrator with persistence
const orchestrator = createOrchestrator(
context,
mergedOptions.projectRoot!
);
// Complete PREFLIGHT phase
orchestrator.transition({ type: 'PREFLIGHT_COMPLETE' });
// Generate descriptive branch name
const sanitizedTitle = task.title
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.replace(/^-+|-+$/g, '')
.substring(0, 50);
const formattedTaskId = taskId.replace(/\./g, '-');
const tagPrefix = currentTag ? `${currentTag}/` : '';
const branchName = `${tagPrefix}task-${formattedTaskId}-${sanitizedTitle}`;
// Create and checkout branch
formatter.info(`Creating branch: ${branchName}`);
await gitAdapter.createAndCheckoutBranch(branchName);
// Transition to SUBTASK_LOOP
orchestrator.transition({
type: 'BRANCH_CREATED',
branchName
});
// Output success
formatter.success('TDD workflow started', {
taskId: task.id,
title: task.title,
phase: orchestrator.getCurrentPhase(),
tddPhase: orchestrator.getCurrentTDDPhase(),
branchName,
subtasks: subtasks.length,
currentSubtask: subtasks[0]?.title
});
// Clean up
await tmCore.close();
} catch (error) {
formatter.error((error as Error).message);
if (mergedOptions.verbose) {
console.error((error as Error).stack);
}
process.exit(1);
}
}
}
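A worked example of the branch-name derivation above; the task title, ID, and tag are illustrative inputs.

const taskId = '4.2';
const currentTag = 'feature-auth';
const title = 'Add OAuth2 Login & Session Handling';

// Same sanitization as StartCommand: lowercase, collapse non-alphanumerics, trim dashes, cap at 50 chars
const sanitizedTitle = title
	.toLowerCase()
	.replace(/[^a-z0-9]+/g, '-')
	.replace(/^-+|-+$/g, '')
	.substring(0, 50); // 'add-oauth2-login-session-handling'
const formattedTaskId = taskId.replace(/\./g, '-'); // '4-2'
const tagPrefix = currentTag ? `${currentTag}/` : '';
const branchName = `${tagPrefix}task-${formattedTaskId}-${sanitizedTitle}`;
// → 'feature-auth/task-4-2-add-oauth2-login-session-handling'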

View File

@@ -1,114 +0,0 @@
/**
* @fileoverview Status Command - Show workflow progress
*/
import { Command } from 'commander';
import { WorkflowOrchestrator } from '@tm/core';
import {
AutopilotBaseOptions,
hasWorkflowState,
loadWorkflowState,
OutputFormatter
} from './shared.js';
type StatusOptions = AutopilotBaseOptions;
/**
* Status Command - Show current workflow status
*/
export class StatusCommand extends Command {
constructor() {
super('status');
this.description('Show current TDD workflow status and progress').action(
async (options: StatusOptions) => {
await this.execute(options);
}
);
}
private async execute(options: StatusOptions): Promise<void> {
// Inherit parent options
const parentOpts = this.parent?.opts() as AutopilotBaseOptions;
const mergedOptions: StatusOptions = {
...parentOpts,
...options,
projectRoot:
options.projectRoot || parentOpts?.projectRoot || process.cwd()
};
const formatter = new OutputFormatter(mergedOptions.json || false);
try {
// Check for workflow state
const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
if (!hasState) {
formatter.error('No active workflow', {
suggestion: 'Start a workflow with: autopilot start <taskId>'
});
process.exit(1);
}
// Load state
const state = await loadWorkflowState(mergedOptions.projectRoot!);
if (!state) {
formatter.error('Failed to load workflow state');
process.exit(1);
}
// Restore orchestrator
const orchestrator = new WorkflowOrchestrator(state.context);
orchestrator.restoreState(state);
// Get status information
const phase = orchestrator.getCurrentPhase();
const tddPhase = orchestrator.getCurrentTDDPhase();
const progress = orchestrator.getProgress();
const currentSubtask = orchestrator.getCurrentSubtask();
const errors = state.context.errors ?? [];
// Build status output
const status = {
taskId: state.context.taskId,
phase,
tddPhase,
branchName: state.context.branchName,
progress: {
completed: progress.completed,
total: progress.total,
current: progress.current,
percentage: progress.percentage
},
currentSubtask: currentSubtask
? {
id: currentSubtask.id,
title: currentSubtask.title,
status: currentSubtask.status,
attempts: currentSubtask.attempts,
maxAttempts: currentSubtask.maxAttempts
}
: null,
subtasks: state.context.subtasks.map((st) => ({
id: st.id,
title: st.title,
status: st.status,
attempts: st.attempts
})),
errors: errors.length > 0 ? errors : undefined,
metadata: state.context.metadata
};
if (mergedOptions.json) {
formatter.output(status);
} else {
formatter.success('Workflow status', status);
}
} catch (error) {
formatter.error((error as Error).message);
if (mergedOptions.verbose) {
console.error((error as Error).stack);
}
process.exit(1);
}
}
}

View File

@@ -6,11 +6,13 @@
import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import search from '@inquirer/search';
import ora, { Ora } from 'ora';
import { AuthManager, type UserContext } from '@tm/core/auth';
import {
AuthManager,
AuthenticationError,
type UserContext
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';
import { displayError } from '../utils/error-handler.js';
/**
* Result type from context command
@@ -116,7 +118,8 @@ export class ContextCommand extends Command {
const result = this.displayContext();
this.setLastResult(result);
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -153,14 +156,10 @@ export class ContextCommand extends Command {
if (context.briefName || context.briefId) {
console.log(chalk.green('\n✓ Brief'));
if (context.briefName && context.briefId) {
const shortId = context.briefId.slice(0, 8);
console.log(
chalk.white(` ${context.briefName} `) + chalk.gray(`(${shortId})`)
);
} else if (context.briefName) {
if (context.briefName) {
console.log(chalk.white(` ${context.briefName}`));
} else if (context.briefId) {
}
if (context.briefId) {
console.log(chalk.gray(` ID: ${context.briefId}`));
}
}
@@ -212,7 +211,8 @@ export class ContextCommand extends Command {
process.exit(1);
}
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -250,10 +250,9 @@ export class ContextCommand extends Command {
]);
// Update context
this.authManager.updateContext({
await this.authManager.updateContext({
orgId: selectedOrg.id,
orgName: selectedOrg.name,
orgSlug: selectedOrg.slug,
// Clear brief when changing org
briefId: undefined,
briefName: undefined
@@ -300,7 +299,8 @@ export class ContextCommand extends Command {
process.exit(1);
}
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -324,54 +324,26 @@ export class ContextCommand extends Command {
};
}
// Prompt for selection with search
const selectedBrief = await search<(typeof briefs)[0] | null>({
message: 'Search for a brief:',
source: async (input) => {
const searchTerm = input?.toLowerCase() || '';
// Static option for no brief
const noBriefOption = {
name: '(No brief - organization level)',
value: null as any,
description: 'Clear brief selection'
};
// Filter and map brief options
const briefOptions = briefs
.filter((brief) => {
if (!searchTerm) return true;
const title = brief.document?.title || '';
const shortId = brief.id.slice(0, 8);
// Search by title first, then by UUID
return (
title.toLowerCase().includes(searchTerm) ||
brief.id.toLowerCase().includes(searchTerm) ||
shortId.toLowerCase().includes(searchTerm)
);
})
.map((brief) => {
const title =
brief.document?.title || `Brief ${brief.id.slice(0, 8)}`;
const shortId = brief.id.slice(0, 8);
return {
name: `${title} ${chalk.gray(`(${shortId})`)}`,
value: brief
};
});
return [noBriefOption, ...briefOptions];
// Prompt for selection
const { selectedBrief } = await inquirer.prompt([
{
type: 'list',
name: 'selectedBrief',
message: 'Select a brief:',
choices: [
{ name: '(No brief - organization level)', value: null },
...briefs.map((brief) => ({
name: `Brief ${brief.id} (${new Date(brief.createdAt).toLocaleDateString()})`,
value: brief
}))
]
}
});
]);
if (selectedBrief) {
// Update context with brief
const briefName =
selectedBrief.document?.title ||
`Brief ${selectedBrief.id.slice(0, 8)}`;
this.authManager.updateContext({
const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
await this.authManager.updateContext({
briefId: selectedBrief.id,
briefName: briefName
});
@@ -382,11 +354,11 @@ export class ContextCommand extends Command {
success: true,
action: 'select-brief',
context: this.authManager.getContext() || undefined,
message: `Selected brief: ${selectedBrief.document?.title}`
message: `Selected brief: ${selectedBrief.name}`
};
} else {
// Clear brief selection
this.authManager.updateContext({
await this.authManager.updateContext({
briefId: undefined,
briefName: undefined
});
@@ -424,7 +396,8 @@ export class ContextCommand extends Command {
process.exit(1);
}
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -470,7 +443,8 @@ export class ContextCommand extends Command {
process.exit(1);
}
} catch (error: any) {
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -494,7 +468,7 @@ export class ContextCommand extends Command {
if (!briefId) {
spinner.fail('Could not extract a brief ID from the provided input');
ui.displayError(
`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_BASE_DOMAIN || process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
);
process.exit(1);
}
@@ -506,24 +480,20 @@ export class ContextCommand extends Command {
process.exit(1);
}
// Fetch org to get a friendly name and slug (optional)
// Fetch org to get a friendly name (optional)
let orgName: string | undefined;
let orgSlug: string | undefined;
try {
const org = await this.authManager.getOrganization(brief.accountId);
orgName = org?.name;
orgSlug = org?.slug;
} catch {
// Non-fatal if org lookup fails
}
// Update context: set org and brief
const briefName =
brief.document?.title || `Brief ${brief.id.slice(0, 8)}`;
this.authManager.updateContext({
const briefName = `Brief ${brief.id.slice(0, 8)}`;
await this.authManager.updateContext({
orgId: brief.accountId,
orgName,
orgSlug,
briefId: brief.id,
briefName
});
@@ -545,7 +515,8 @@ export class ContextCommand extends Command {
try {
if (spinner?.isSpinning) spinner.stop();
} catch {}
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -642,7 +613,7 @@ export class ContextCommand extends Command {
};
}
this.authManager.updateContext(context);
await this.authManager.updateContext(context);
ui.displaySuccess('Context updated');
// Display what was set
@@ -674,6 +645,26 @@ export class ContextCommand extends Command {
}
}
/**
* Handle errors
*/
private handleError(error: any): void {
if (error instanceof AuthenticationError) {
console.error(chalk.red(`\n✗ ${error.message}`));
if (error.code === 'NOT_AUTHENTICATED') {
ui.displayWarning('Please authenticate first: tm auth login');
}
} else {
const msg = error?.message ?? String(error);
console.error(chalk.red(`Error: ${msg}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
}
}
/**
* Set the last result for programmatic access
*/
@@ -695,53 +686,6 @@ export class ContextCommand extends Command {
return this.authManager.getContext();
}
/**
* Interactive context setup (for post-auth flow)
* Prompts user to select org and brief
*/
async setupContextInteractive(): Promise<{
success: boolean;
orgSelected: boolean;
briefSelected: boolean;
}> {
try {
// Ask if user wants to set up workspace context
const { setupContext } = await inquirer.prompt([
{
type: 'confirm',
name: 'setupContext',
message: 'Would you like to set up your workspace context now?',
default: true
}
]);
if (!setupContext) {
return { success: true, orgSelected: false, briefSelected: false };
}
// Select organization
const orgResult = await this.selectOrganization();
if (!orgResult.success || !orgResult.context?.orgId) {
return { success: false, orgSelected: false, briefSelected: false };
}
// Select brief
const briefResult = await this.selectBrief(orgResult.context.orgId);
return {
success: true,
orgSelected: true,
briefSelected: briefResult.success
};
} catch (error) {
console.error(
chalk.yellow(
'\nContext setup skipped due to error. You can set it up later with "tm context"'
)
);
return { success: false, orgSelected: false, briefSelected: false };
}
}
/**
* Clean up resources
*/

View File

@@ -7,10 +7,13 @@ import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { Ora } from 'ora';
import { AuthManager, type UserContext } from '@tm/core/auth';
import {
AuthManager,
AuthenticationError,
type UserContext
} from '@tm/core/auth';
import { TaskMasterCore, type ExportResult } from '@tm/core';
import * as ui from '../utils/ui.js';
import { displayError } from '../utils/error-handler.js';
/**
* Result type from export command
@@ -100,7 +103,7 @@ export class ExportCommand extends Command {
await this.initializeServices();
// Get current context
const context = await this.authManager.getContext();
const context = this.authManager.getContext();
// Determine org and brief IDs
let orgId = options?.org || context?.orgId;
@@ -194,7 +197,8 @@ export class ExportCommand extends Command {
};
} catch (error: any) {
if (spinner?.isSpinning) spinner.fail('Export failed');
displayError(error);
this.handleError(error);
process.exit(1);
}
}
@@ -330,6 +334,26 @@ export class ExportCommand extends Command {
return confirmed;
}
/**
* Handle errors
*/
private handleError(error: any): void {
if (error instanceof AuthenticationError) {
console.error(chalk.red(`\n✗ ${error.message}`));
if (error.code === 'NOT_AUTHENTICATED') {
ui.displayWarning('Please authenticate first: tm auth login');
}
} else {
const msg = error?.message ?? String(error);
console.error(chalk.red(`Error: ${msg}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
}
}
/**
* Get the last export result (useful for testing)
*/

View File

@@ -17,9 +17,8 @@ import {
} from '@tm/core';
import type { StorageType } from '@tm/core/types';
import * as ui from '../utils/ui.js';
import { displayError } from '../utils/error-handler.js';
import { displayCommandHeader } from '../utils/display-helpers.js';
import {
displayHeader,
displayDashboards,
calculateTaskStatistics,
calculateSubtaskStatistics,
@@ -107,7 +106,14 @@ export class ListTasksCommand extends Command {
this.displayResults(result, options);
}
} catch (error: any) {
displayError(error);
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
process.exit(1);
}
}
@@ -251,12 +257,15 @@ export class ListTasksCommand extends Command {
* Display in text format with tables
*/
private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
const { tasks, tag, storageType } = data;
const { tasks, tag } = data;
// Display header using utility function
displayCommandHeader(this.tmCore, {
// Get file path for display
const filePath = this.tmCore ? `.taskmaster/tasks/tasks.json` : undefined;
// Display header without banner (banner already shown by main CLI)
displayHeader({
tag: tag || 'master',
storageType
filePath: filePath
});
// No tasks message

View File

@@ -1,248 +0,0 @@
/**
* @fileoverview NextCommand using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import path from 'node:path';
import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
import type { StorageType } from '@tm/core/types';
import { displayError } from '../utils/error-handler.js';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
import { displayCommandHeader } from '../utils/display-helpers.js';
/**
* Options interface for the next command
*/
export interface NextCommandOptions {
tag?: string;
format?: 'text' | 'json';
silent?: boolean;
project?: string;
}
/**
* Result type from next command
*/
export interface NextTaskResult {
task: Task | null;
found: boolean;
tag: string;
storageType: Exclude<StorageType, 'auto'>;
}
/**
* NextCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core
*/
export class NextCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: NextTaskResult;
constructor(name?: string) {
super(name || 'next');
// Configure the command
this.description('Find the next available task to work on')
.option('-t, --tag <tag>', 'Filter by tag')
.option('-f, --format <format>', 'Output format (text, json)', 'text')
.option('--silent', 'Suppress output (useful for programmatic usage)')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.action(async (options: NextCommandOptions) => {
await this.executeCommand(options);
});
}
/**
* Execute the next command
*/
private async executeCommand(options: NextCommandOptions): Promise<void> {
let hasError = false;
try {
// Validate options (throws on invalid options)
this.validateOptions(options);
// Initialize tm-core
await this.initializeCore(options.project || process.cwd());
// Get next task from core
const result = await this.getNextTask(options);
// Store result for programmatic access
this.setLastResult(result);
// Display results
if (!options.silent) {
this.displayResults(result, options);
}
} catch (error: any) {
hasError = true;
displayError(error, { skipExit: true });
} finally {
// Always clean up resources, even on error
await this.cleanup();
}
// Exit after cleanup completes
if (hasError) {
process.exit(1);
}
}
/**
* Validate command options
*/
private validateOptions(options: NextCommandOptions): void {
// Validate format
if (options.format && !['text', 'json'].includes(options.format)) {
throw new Error(
`Invalid format: ${options.format}. Valid formats are: text, json`
);
}
}
/**
* Initialize TaskMasterCore
*/
private async initializeCore(projectRoot: string): Promise<void> {
if (!this.tmCore) {
const resolved = path.resolve(projectRoot);
this.tmCore = await createTaskMasterCore({ projectPath: resolved });
}
}
/**
* Get next task from tm-core
*/
private async getNextTask(
options: NextCommandOptions
): Promise<NextTaskResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
// Call tm-core to get next task
const task = await this.tmCore.getNextTask(options.tag);
// Get storage type and active tag
const storageType = this.tmCore.getStorageType();
if (storageType === 'auto') {
throw new Error('Storage type must be resolved before use');
}
const activeTag = options.tag || this.tmCore.getActiveTag();
return {
task,
found: task !== null,
tag: activeTag,
storageType
};
}
/**
* Display results based on format
*/
private displayResults(
result: NextTaskResult,
options: NextCommandOptions
): void {
const format = options.format || 'text';
switch (format) {
case 'json':
this.displayJson(result);
break;
case 'text':
default:
this.displayText(result);
break;
}
}
/**
* Display in JSON format
*/
private displayJson(result: NextTaskResult): void {
console.log(JSON.stringify(result, null, 2));
}
/**
* Display in text format
*/
private displayText(result: NextTaskResult): void {
// Display header with storage info
displayCommandHeader(this.tmCore, {
tag: result.tag || 'master',
storageType: result.storageType
});
if (!result.found || !result.task) {
// No next task available
console.log(
boxen(
chalk.yellow(
'No tasks available to work on. All tasks are either completed, blocked by dependencies, or in progress.'
),
{
padding: 1,
borderStyle: 'round',
borderColor: 'yellow',
title: '⚠ NO TASKS AVAILABLE ⚠',
titleAlignment: 'center'
}
)
);
console.log(
`\n${chalk.dim('Tip: Try')} ${chalk.cyan('task-master list --status pending')} ${chalk.dim('to see all pending tasks')}`
);
return;
}
const task = result.task;
// Display the task details using the same component as 'show' command
// with a custom header indicating this is the next task
const customHeader = `Next Task: #${task.id} - ${task.title}`;
displayTaskDetails(task, {
customHeader,
headerColor: 'green',
showSuggestedActions: true
});
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: NextTaskResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): NextTaskResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
if (this.tmCore) {
await this.tmCore.close();
this.tmCore = undefined;
}
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): NextCommand {
const nextCommand = new NextCommand(name);
program.addCommand(nextCommand);
return nextCommand;
}
}
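A sketch of programmatic usage via --silent and getLastResult(); the import path and argv values are assumptions for illustration only.

import { Command } from 'commander';
import { NextCommand } from './commands/next.command.js'; // path is an assumption

const program = new Command('task-master');
const nextCommand = NextCommand.register(program);

// --silent suppresses console output; the result remains available afterwards
await program.parseAsync(['node', 'task-master', 'next', '--silent']);

const result = nextCommand.getLastResult();
console.log(
	result?.found ? `Next task: #${result.task?.id}` : 'No task available'
);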

Some files were not shown because too many files have changed in this diff.