Compare commits
3 Commits
feat/exper
...
fix/update

| Author | SHA1 | Date |
|---|---|---|
|  | 4f4b91900e |  |
|  | 2718c7ad5f |  |
|  | 5713bb17cf |  |

@@ -1,9 +0,0 @@
---
"task-master-ai": minor
---

Add Kiro editor rule profile support

- Add support for Kiro IDE with custom rule files and MCP configuration
- Generate rule files in `.kiro/steering/` directory with markdown format
- Include MCP server configuration with enhanced file inclusion patterns

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Implement Boundary-First Tag Resolution to ensure consistent and deterministic tag handling across CLI and MCP, resolving potential race conditions.

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Update VS Code profile with MCP config transformation

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Fix MCP server error when retrieving tools and resources

@@ -1,424 +0,0 @@
---
description: Guide for using Taskmaster to manage task-driven development workflows
globs: **/*
alwaysApply: true
---

# Taskmaster Development Workflow

This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.

- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.

## The Basic Loop
The fundamental development cycle you will facilitate is:
1. **`list`**: Show the user what needs to be done.
2. **`next`**: Help the user decide what to work on.
3. **`show <id>`**: Provide details for a specific task.
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
5. **Implement**: The user writes the code and tests.
6. **`update-subtask`**: Log progress and findings on behalf of the user.
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
8. **Repeat**.

All your standard command executions should operate on the user's current task context, which defaults to `master`.
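
A minimal sketch of one pass through this loop in CLI form (task IDs and prompts are illustrative placeholders):

```
task-master list                                   # 1. what needs to be done
task-master next                                   # 2. decide what to work on
task-master show 3                                 # 3. details for task 3
task-master expand --id=3 --research               # 4. break it into subtasks
# ...implement subtask 3.1, writing code and tests...
task-master update-subtask --id=3.1 --prompt='Implemented the parser; notes on edge cases...'   # 6. log progress
task-master set-status --id=3.1 --status=done      # 7. mark work complete
```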

---

## Standard Development Workflow Process

### Simple Workflow (Default Starting Point)

For new projects or when users are getting started, operate within the `master` tag context:

- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see @`taskmaster.mdc`) to generate initial tasks.json with tagged structure
- Configure rule sets during initialization with `--rules` flag (e.g., `task-master init --rules cursor,windsurf`) or manage them later with `task-master rules add/remove` commands
- Begin coding sessions with `get_tasks` / `task-master list` (see @`taskmaster.mdc`) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see @`taskmaster.mdc`)
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.mdc`) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see @`taskmaster.mdc`)
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- View specific task details using `get_task` / `task-master show <id>` (see @`taskmaster.mdc`) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see @`taskmaster.mdc`) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
- Implement code following task details, dependencies, and project standards
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see @`taskmaster.mdc`)
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see @`taskmaster.mdc`)
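
As a concrete starting point, a first session on a new project might look like this (the PRD path is a placeholder):

```
task-master init --rules cursor,windsurf                    # set up Taskmaster with selected rule profiles
task-master parse-prd --input='.taskmaster/docs/prd.txt'    # generate initial tasks from a PRD
task-master analyze-complexity --research                   # score task complexity before breaking tasks down
task-master complexity-report                               # review the formatted complexity report
task-master list                                            # confirm the generated tasks and statuses
```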

---

## Leveling Up: Agent-Led Multi-Context Workflows

While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session.

**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.

### When to Introduce Tags: Your Decision Patterns

Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user.

#### Pattern 1: Simple Git Feature Branching
This is the most common and direct use case for tags.

- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
- **Tool to Use**: `task-master add-tag --from-branch`
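
A minimal sketch of this hand-off (the branch name is illustrative; the tag name is assumed to mirror the branch, and the explicit switch may be unnecessary if the new tag is already active):

```
git checkout -b feature/user-auth       # the user creates the feature branch
task-master add-tag --from-branch       # create a matching 'feature-user-auth' tag for its tasks
task-master use-tag feature-user-auth   # switch the active context to the new tag
```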

#### Pattern 2: Team Collaboration
- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with shared master context.
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`

#### Pattern 3: Experiments or Risky Refactors
- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`

#### Pattern 4: Large Feature Initiatives (PRD-Driven)
This is a more structured approach for significant new features or epics.

- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
- **Your Implementation Flow**:
    1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
    2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
    3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
    4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag.
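
A condensed sketch of that flow, assuming the tag is named `feature-xyz`, the PRD lives at the path shown, and `--tag` is accepted by these commands (most Taskmaster commands support it):

```
task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"
# ...draft .taskmaster/docs/feature-xyz-prd.txt together with the user...
task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz
task-master analyze-complexity --research --tag feature-xyz
task-master expand --all --research --tag feature-xyz
```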

#### Pattern 5: Version-Based Development
Tailor your approach based on the project maturity indicated by tag names.

- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
  - **Your Approach**: Focus on speed and functionality over perfection
  - **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
  - **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
  - **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
  - **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*

- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
  - **Your Approach**: Emphasize robustness, testing, and maintainability
  - **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
  - **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
  - **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
  - **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*

### Advanced Workflow (Tag-Based & PRD-Driven)

**When to Transition**: Recognize when the project has evolved (or was initialized on an existing codebase) beyond simple task management. Look for these indicators:
- User mentions teammates or collaboration needs
- Project has grown to 15+ tasks with mixed priorities
- User creates feature branches or mentions major initiatives
- User initializes Taskmaster on an existing, complex codebase
- User describes large features that would benefit from dedicated planning

**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.

#### Master List Strategy (High-Value Focus)
Once you transition to tag-based workflows, the `master` tag should ideally contain only:
- **High-level deliverables** that provide significant business value
- **Major milestones** and epic-level features
- **Critical infrastructure** work that affects the entire project
- **Release-blocking** items

**What NOT to put in master**:
- Detailed implementation subtasks (these go in feature-specific tags' parent tasks)
- Refactoring work (create dedicated tags like `refactor-auth`)
- Experimental features (use `experiment-*` tags)
- Team member-specific tasks (use person-specific tags)

#### PRD-Driven Feature Development

**For New Major Features**:
1. **Identify the Initiative**: When user describes a significant feature
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
4. **Parse & Prepare**:
   - `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
   - `analyze_project_complexity --tag=feature-[name] --research`
   - `expand_all --tag=feature-[name] --research`
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag

**For Existing Codebase Analysis**:
When users initialize Taskmaster on existing projects:
1. **Codebase Discovery**: Use your native tools to build deep context about the codebase. You may use the `research` tool with `--tree` and `--files` to collect up-to-date information using the existing architecture as context.
2. **Collaborative Assessment**: Work with user to identify improvement areas, technical debt, or new features
3. **Strategic PRD Creation**: Co-author PRDs that include:
   - Current state analysis (based on your codebase research)
   - Proposed improvements or new features
   - Implementation strategy considering existing code
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master

The `parse-prd` command's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused, and the number of tasks they are parsed into should be chosen strategically relative to each PRD's complexity and level of detail.

### Workflow Transition Examples

**Example 1: Simple → Team-Based**
```
User: "Alice is going to help with the API work"
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
```

**Example 2: Simple → PRD-Driven**
```
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
Actions:
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
2. Collaborate on PRD creation
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
4. Add high-level "User Dashboard" task to master
```

**Example 3: Existing Project → Strategic Planning**
```
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
Actions:
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
2. Collaborate on improvement PRD based on findings
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
4. Keep only major improvement initiatives in master
```

---

## Primary Interaction: MCP Server vs. CLI

Taskmaster offers two primary ways to interact:

1. **MCP Server (Recommended for Integrated Tools)**:
    - For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**.
    - The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
    - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
    - Refer to @`mcp.mdc` for details on the MCP architecture and available tools.
    - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in @`taskmaster.mdc`.
    - **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change.
    - **Note**: MCP tools fully support tagged task lists with complete tag management capabilities.

2. **`task-master` CLI (For Users & Fallback)**:
    - The global `task-master` command provides a user-friendly interface for direct terminal interaction.
    - It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP.
    - Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
    - The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`).
    - Refer to @`taskmaster.mdc` for a detailed command reference.
    - **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration.

## How the Tag System Works (For Your Reference)

- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0".
- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption.
- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag.
- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`.
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to @`taskmaster.mdc` for a full command list.

---

## Task Complexity Analysis

- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.mdc`) for comprehensive analysis
- Review complexity report via `complexity_report` / `task-master complexity-report` (see @`taskmaster.mdc`) for a formatted, readable version.
- Focus on tasks with highest complexity scores (8-10) for detailed breakdown
- Use analysis results to determine appropriate subtask allocation
- Note that reports are automatically used by the `expand_task` tool/command

## Task Breakdown Process

- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates a default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add the `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add the `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use the `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.
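
For instance, a single expansion call combining these flags might look like this (the ID and prompt are placeholders):

```
task-master expand --id=5 --num=4 --research --force --prompt="Focus on the migration path away from the legacy API"
```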

## Implementation Drift Handling

- When implementation differs significantly from planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.

## Task Status Management

- Use 'pending' for tasks ready to be worked on
- Use 'done' for completed and verified tasks
- Use 'deferred' for postponed tasks
- Add custom status values as needed for project-specific workflows

## Task Structure Fields

- **id**: Unique identifier for the task (Example: `1`, `1.1`)
- **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`)
  - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
  - This helps quickly identify which prerequisite tasks are blocking work
- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`)
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to task structure details (previously linked to `tasks.mdc`).
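
Taken together, a single task entry in `tasks.json` looks roughly like this (values are illustrative, not taken from a real project):

```
{
  "id": 2,
  "title": "Implement GitHub OAuth login",
  "description": "Add OAuth-based sign-in using GitHub.",
  "status": "pending",
  "dependencies": [1],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback, set session token.",
  "testStrategy": "Deploy and call endpoint to confirm 'Hello World' response.",
  "subtasks": [
    { "id": 1, "title": "Configure OAuth", "status": "pending" }
  ]
}
```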

## Configuration Management (Updated)

Taskmaster configuration is managed through the following mechanisms:

1. **`.taskmaster/config.json` File (Primary):**
    * Located in the project root directory.
    * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
    * **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and a `tags` section for tag management configuration.
    * **Managed via the `task-master models --setup` command.** Do not edit manually unless you know what you are doing.
    * **View/Set specific models via the `task-master models` command or the `models` MCP tool.**
    * Created automatically when you run `task-master models --setup` for the first time or during tagged system migration.
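
A rough sketch of what this file can hold; exact key names vary between Taskmaster versions, so treat this as illustrative rather than authoritative:

```
{
  "models": {
    "main": "<model_id>",
    "research": "<model_id>",
    "fallback": "<model_id>"
  },
  "global": {
    "logLevel": "info",
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "defaultTag": "master",
    "projectName": "my-project"
  }
}
```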

2. **Environment Variables (`.env` / `mcp.json`):**
    * Used **only** for sensitive API keys and specific endpoint URLs.
    * Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
    * For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`.
    * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`).
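
For MCP usage, the relevant fragment of `.cursor/mcp.json` might look like the following; the server entry, command, and key names are assumptions to adapt to your providers, not a canonical snippet:

```
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-...",
        "PERPLEXITY_API_KEY": "pplx-..."
      }
    }
  }
}
```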

3. **`.taskmaster/state.json` File (Tagged System State):**
    * Tracks current tag context and migration status.
    * Automatically created during tagged system migration.
    * Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`.
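
So a freshly migrated project's `state.json` would contain something like this (the timestamp format is a guess):

```
{
  "currentTag": "master",
  "lastSwitched": "2025-01-01T00:00:00.000Z",
  "migrationNoticeShown": true
}
```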

**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.

**If AI commands FAIL in MCP**, verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.

**If AI commands FAIL in CLI**, verify that the API key for the selected provider is present in the `.env` file in the root of the project.

## Rules Management

Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:

- **Available Profiles**: Claude Code, Cline, Codex, Cursor, Roo Code, Trae, Windsurf (claude, cline, codex, cursor, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules cursor,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.cursor/rules`, `.roo/rules`) with appropriate configuration files

## Determining the Next Task

- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied
- Tasks are prioritized by priority level, dependency count, and ID
- The command shows comprehensive task information including:
  - Basic task details and description
  - Implementation details
  - Subtasks (if they exist)
  - Contextual suggested actions
- Recommended before starting any new development work
- Respects your project's dependency structure
- Ensures tasks are completed in the appropriate sequence
- Provides ready-to-use commands for common task actions

## Viewing Specific Task Details

- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1)
- Displays comprehensive information similar to the next command, but for a specific task
- For parent tasks, shows all subtasks and their current status
- For subtasks, shows parent task information and relationship
- Provides contextual suggested actions appropriate for the specific task
- Useful for examining task details before implementation or checking status

## Managing Task Dependencies

- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries
- Dependencies are checked for existence before being added or removed
- Task files are automatically regenerated after dependency changes
- Dependencies are visualized with status indicators in task listings and files

## Task Reorganization

- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy
- This command supports several use cases:
  - Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`)
  - Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`)
  - Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`)
  - Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`)
  - Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`)
  - Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`)
- The system includes validation to prevent data loss:
  - Allows moving to non-existent IDs by creating placeholder tasks
  - Prevents moving to existing task IDs that have content (to avoid overwriting)
  - Validates source tasks exist before attempting to move them
- The system maintains proper parent-child relationships and dependency integrity
- Task files are automatically regenerated after the move operation
- This provides greater flexibility in organizing and refining your task structure as project understanding evolves
- This is especially useful when dealing with potential merge conflicts arising from teams creating tasks on separate branches: such conflicts can usually be resolved simply by moving your tasks to new IDs and keeping theirs.

## Iterative Subtask Implementation

Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation:

1. **Understand the Goal (Preparation):**
    * Use `get_task` / `task-master show <subtaskId>` (see @`taskmaster.mdc`) to thoroughly understand the specific goals and requirements of the subtask.

2. **Initial Exploration & Planning (Iteration 1):**
    * This is the first attempt at creating a concrete implementation plan.
    * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification.
    * Determine the intended code changes (diffs) and their locations.
    * Gather *all* relevant details from this exploration phase.

3. **Log the Plan:**
    * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
    * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`.

4. **Verify the Plan:**
    * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.

5. **Begin Implementation:**
    * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
    * Start coding based on the logged plan.

6. **Refine and Log Progress (Iteration 2+):**
    * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches.
    * **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy.
    * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What didn't work...'` to append new findings.
    * **Crucially, log:**
        * What worked ("fundamental truths" discovered).
        * What didn't work and why (to avoid repeating mistakes).
        * Specific code snippets or configurations that were successful.
        * Decisions made, especially if confirmed with user input.
        * Any deviations from the initial plan and the reasoning.
    * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors.

7. **Review & Update Rules (Post-Implementation):**
    * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
    * Identify any new or modified code patterns, conventions, or best practices established during the implementation.
    * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`).

8. **Mark Task Complete:**
    * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.

9. **Commit Changes (If using Git):**
    * Stage the relevant code changes and any updated/new rule files (`git add .`).
    * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
    * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`).
    * Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.

10. **Proceed to Next Subtask:**
    * Identify the next subtask (e.g., using `next_task` / `task-master next`).
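
A condensed sketch of one pass through this loop for a hypothetical subtask `3.1` (prompts abbreviated):

```
task-master show 3.1                                                           # 1. understand the goal
task-master update-subtask --id=3.1 --prompt='Plan: touch src/auth.ts ...'     # 3. log the detailed plan
task-master show 3.1                                                           # 4. verify the plan was appended
task-master set-status --id=3.1 --status=in-progress                           # 5. begin implementation
task-master update-subtask --id=3.1 --prompt='What worked / what did not ...'  # 6. log findings as you go
task-master set-status --id=3.1 --status=done                                  # 8. mark the subtask complete
git add . && git commit -m 'feat(auth): implement subtask 3.1'                 # 9. commit the work
task-master next                                                               # 10. move on to the next subtask
```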

## Code Analysis & Refactoring Techniques

- **Top-Level Function Search**:
  - Useful for understanding module structure or planning refactors.
  - Use grep/ripgrep to find exported functions/constants:
    `rg "export (async function|function|const) \w+"` or similar patterns.
  - Can help compare functions between files during migrations or identify potential naming conflicts.

---
*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.*

@@ -1,558 +0,0 @@
---
description: Comprehensive reference for Taskmaster MCP tools and CLI commands.
globs: **/*
alwaysApply: true
---

# Taskmaster Tool & Command Reference

This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Cursor, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback.

**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback.

**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.

**🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag.

---

## Initialization & Setup

### 1. Initialize Project (`init`)

* **MCP Tool:** `initialize_project`
* **CLI Command:** `task-master init [options]`
* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.`
* **Key CLI Options:**
    * `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
    * `--description <text>`: `Provide a brief description for your project.`
    * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
    * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
* **Usage:** Run this once at the beginning of a new project.
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
* **Key MCP Parameters/Options:**
    * `projectName`: `Set the name for your project.` (CLI: `--name <name>`)
    * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`)
    * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`)
    * `authorName`: `Author name.` (CLI: `--author <author>`)
    * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
    * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
    * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in `.taskmaster/templates/example_prd.txt`.
* **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`.

### 2. Parse PRD (`parse_prd`)

* **MCP Tool:** `parse_prd`
* **CLI Command:** `task-master parse-prd [file] [options]`
* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Key Parameters/Options:**
    * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
    * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`)
    * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
    * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
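
Example CLI invocation (the file path and task count are placeholders):

```
task-master parse-prd .taskmaster/docs/prd.txt --num-tasks=15 --force
```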

---

## AI Model Configuration

### 2. Manage Models (`models`)
* **MCP Tool:** `models`
* **CLI Command:** `task-master models [options]`
* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.`
* **Key MCP Parameters/Options:**
    * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`)
    * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`)
    * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`)
    * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`)
    * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`)
    * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically)
    * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically)
* **Key CLI Options:**
    * `--set-main <model_id>`: `Set the primary model.`
    * `--set-research <model_id>`: `Set the research model.`
    * `--set-fallback <model_id>`: `Set the fallback model.`
    * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
    * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.`
    * `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).`
    * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **API note:** API keys for selected AI providers (based on their model) need to exist in the `mcp.json` file to be accessible in MCP context. The API keys must be present in the local `.env` file for the CLI to be able to read them.
* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80.
* **Warning:** DO NOT MANUALLY EDIT THE `.taskmaster/config.json` FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
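
Typical CLI usage (model IDs are placeholders; use IDs reported by `task-master models` or your provider):

```
task-master models                                          # view current configuration and available models
task-master models --set-main <model_id>                    # set the primary model
task-master models --set-research <model_id> --openrouter   # set a custom OpenRouter model for the research role
task-master models --setup                                  # guided interactive configuration
```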

---

## Task Listing & Viewing

### 3. Get Tasks (`get_tasks`)

* **MCP Tool:** `get_tasks`
* **CLI Command:** `task-master list [options]`
* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.`
* **Key Parameters/Options:**
    * `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`)
    * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`)
    * `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Get an overview of the project status, often used at the start of a work session.

### 4. Get Next Task (`next_task`)

* **MCP Tool:** `next_task`
* **CLI Command:** `task-master next [options]`
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
* **Key Parameters/Options:**
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
    * `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`)
* **Usage:** Identify what to work on next according to the plan.

### 5. Get Task Details (`get_task`)

* **MCP Tool:** `get_task`
* **CLI Command:** `task-master show [id] [options]`
* **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
    * `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown.
* **CRITICAL INFORMATION:** If you need to collect information from multiple tasks, use comma-separated IDs (i.e. 1,2,3) to receive an array of tasks. Do not needlessly get tasks one at a time if you need to get many as that is wasteful.
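
For example, fetching several tasks and subtasks in one call (IDs are placeholders):

```
task-master show 1,3,5.2
```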

---

## Task Creation & Modification

### 6. Add Task (`add_task`)

* **MCP Tool:** `add_task`
* **CLI Command:** `task-master add-task [options]`
* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.`
* **Key Parameters/Options:**
    * `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`)
    * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
    * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
    * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Quickly add newly identified tasks during development.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 7. Add Subtask (`add_subtask`)

* **MCP Tool:** `add_subtask`
* **CLI Command:** `task-master add-subtask [options]`
* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.`
* **Key Parameters/Options:**
    * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`)
    * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`)
    * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`)
    * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`)
    * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
    * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
    * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
    * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Break down tasks manually or reorganize existing tasks.

### 8. Update Tasks (`update`)

* **MCP Tool:** `update`
* **CLI Command:** `task-master update [options]`
* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.`
* **Key Parameters/Options:**
    * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
    * `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
    * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 9. Update Task (`update_task`)

* **MCP Tool:** `update_task`
* **CLI Command:** `task-master update-task [options]`
* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.`
* **Key Parameters/Options:**
    * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`)
    * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
    * `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`)
    * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 10. Update Subtask (`update_subtask`)

* **MCP Tool:** `update_subtask`
* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
    * `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
    * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 11. Set Task Status (`set_task_status`)

* **MCP Tool:** `set_task_status`
* **CLI Command:** `task-master set-status [options]`
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
    * `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Mark progress as tasks move through the development cycle.

### 12. Remove Task (`remove_task`)

* **MCP Tool:** `remove_task`
* **CLI Command:** `task-master remove-task [options]`
* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
    * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.

---
## Task Structure & Breakdown
|
||||
|
||||
### 13. Expand Task (`expand_task`)
|
||||
|
||||
* **MCP Tool:** `expand_task`
|
||||
* **CLI Command:** `task-master expand [options]`
|
||||
* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
|
||||
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
|
||||
* `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
|
||||
* `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
|
||||
* `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
|
||||
* `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
|
||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
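For example, to break down a task with research and extra guidance (the ID, count, and prompt are illustrative):

```bash
# Break task 15 into roughly five subtasks, replacing any existing ones
task-master expand --id=15 --num=5 --force --research \
  --prompt="Focus on the API layer first"
```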
|
||||
|
||||
### 14. Expand All Tasks (`expand_all`)
|
||||
|
||||
* **MCP Tool:** `expand_all`
|
||||
* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
|
||||
* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
|
||||
* **Key Parameters/Options:**
|
||||
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
|
||||
* `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
|
||||
* `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
|
||||
* `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
|
||||
* `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
|
||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
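For example:

```bash
# Expand every eligible pending/in-progress task, appending subtasks
task-master expand --all --research

# Start fresh by clearing existing subtasks before regenerating
task-master expand --all --force
```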
|
||||
|
||||
### 15. Clear Subtasks (`clear_subtasks`)
|
||||
|
||||
* **MCP Tool:** `clear_subtasks`
|
||||
* **CLI Command:** `task-master clear-subtasks [options]`
|
||||
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
|
||||
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
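For example (the IDs are illustrative):

```bash
# Clear subtasks from tasks 16 and 18
task-master clear-subtasks --id=16,18

# Or clear subtasks from every parent task
task-master clear-subtasks --all
```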
|
||||
|
||||
### 16. Remove Subtask (`remove_subtask`)
|
||||
|
||||
* **MCP Tool:** `remove_subtask`
|
||||
* **CLI Command:** `task-master remove-subtask [options]`
|
||||
* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
|
||||
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
|
||||
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
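For example (the IDs are illustrative):

```bash
# Delete subtask 15.2 outright
task-master remove-subtask --id=15.2

# Promote subtask 16.1 to a standalone top-level task instead
task-master remove-subtask --id=16.1 --convert
```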
|
||||
|
||||
### 17. Move Task (`move_task`)
|
||||
|
||||
* **MCP Tool:** `move_task`
|
||||
* **CLI Command:** `task-master move [options]`
|
||||
* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
|
||||
* **Key Parameters/Options:**
|
||||
* `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
|
||||
* `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
|
||||
* Moving a task to become a subtask
|
||||
* Moving a subtask to become a standalone task
|
||||
* Moving a subtask to a different parent
|
||||
* Reordering subtasks within the same parent
|
||||
* Moving a task to a new, non-existent ID (automatically creates placeholders)
|
||||
* Moving multiple tasks at once with comma-separated IDs
|
||||
* **Validation Features:**
|
||||
* Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
|
||||
* Prevents moving to existing task IDs that already have content (to avoid overwriting)
|
||||
* Validates that source tasks exist before attempting to move them
|
||||
* Maintains proper parent-child relationships
|
||||
* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
|
||||
* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
|
||||
* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.
|
||||
|
||||
---
|
||||
|
||||
## Dependency Management
|
||||
|
||||
### 18. Add Dependency (`add_dependency`)
|
||||
|
||||
* **MCP Tool:** `add_dependency`
|
||||
* **CLI Command:** `task-master add-dependency [options]`
|
||||
* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
|
||||
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
|
||||
* **Usage:** Establish the correct order of execution between tasks.
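For example (the IDs are illustrative):

```bash
# Make task 22 depend on task 21, so 21 must be completed first
task-master add-dependency --id=22 --depends-on=21
```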
|
||||
|
||||
### 19. Remove Dependency (`remove_dependency`)
|
||||
|
||||
* **MCP Tool:** `remove_dependency`
|
||||
* **CLI Command:** `task-master remove-dependency [options]`
|
||||
* **Description:** `Remove a dependency relationship between two Taskmaster tasks.`
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
|
||||
* `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Update task relationships when the order of execution changes.
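For example (the IDs are illustrative):

```bash
# Remove task 21 as a prerequisite of task 22
task-master remove-dependency --id=22 --depends-on=21
```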
|
||||
|
||||
### 20. Validate Dependencies (`validate_dependencies`)
|
||||
|
||||
* **MCP Tool:** `validate_dependencies`
|
||||
* **CLI Command:** `task-master validate-dependencies [options]`
|
||||
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Audit the integrity of your task dependencies.
|
||||
|
||||
### 21. Fix Dependencies (`fix_dependencies`)
|
||||
|
||||
* **MCP Tool:** `fix_dependencies`
|
||||
* **CLI Command:** `task-master fix-dependencies [options]`
|
||||
* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Clean up dependency errors automatically.
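A typical audit-then-repair sequence:

```bash
# Report dependency problems without changing anything
task-master validate-dependencies

# Then automatically repair whatever was flagged
task-master fix-dependencies
```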
|
||||
|
||||
---
|
||||
|
||||
## Analysis & Reporting
|
||||
|
||||
### 22. Analyze Project Complexity (`analyze_project_complexity`)
|
||||
|
||||
* **MCP Tool:** `analyze_project_complexity`
|
||||
* **CLI Command:** `task-master analyze-complexity [options]`
|
||||
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
|
||||
* **Key Parameters/Options:**
|
||||
* `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`)
|
||||
* `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
|
||||
* `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
|
||||
* `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
|
||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
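For example (the threshold value is illustrative):

```bash
# Analyze all tasks, flagging anything scoring 6 or higher for expansion
task-master analyze-complexity --research --threshold=6

# Review the generated report afterwards
task-master complexity-report
```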
|
||||
|
||||
### 23. View Complexity Report (`complexity_report`)
|
||||
|
||||
* **MCP Tool:** `complexity_report`
|
||||
* **CLI Command:** `task-master complexity-report [options]`
|
||||
* **Description:** `Display the task complexity analysis report in a readable format.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
|
||||
|
||||
---
|
||||
|
||||
## File Management
|
||||
|
||||
### 24. Generate Task Files (`generate`)
|
||||
|
||||
* **MCP Tool:** `generate`
|
||||
* **CLI Command:** `task-master generate [options]`
|
||||
* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.`
|
||||
* **Key Parameters/Options:**
|
||||
* `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`)
|
||||
* `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically.
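For example (the custom output directory is illustrative):

```bash
# Regenerate the per-task markdown files after editing tasks.json
task-master generate

# Write them to a custom directory instead of the default 'tasks' folder
task-master generate --output=./docs/tasks
```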
|
||||
|
||||
---
|
||||
|
||||
## AI-Powered Research
|
||||
|
||||
### 25. Research (`research`)
|
||||
|
||||
* **MCP Tool:** `research`
|
||||
* **CLI Command:** `task-master research [options]`
|
||||
* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.`
|
||||
* **Key Parameters/Options:**
|
||||
* `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`)
|
||||
* `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`)
|
||||
* `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`)
|
||||
* `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`)
|
||||
* `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`)
|
||||
* `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`)
|
||||
* `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`)
|
||||
* `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`)
|
||||
* `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`)
|
||||
* `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `projectRoot`: `The directory of the project. Must be an absolute path.` (CLI: Determined automatically)
|
||||
* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to:
|
||||
* Get fresh information beyond knowledge cutoff dates
|
||||
* Research latest best practices, library updates, security patches
|
||||
* Find implementation examples for specific technologies
|
||||
* Validate approaches against current industry standards
|
||||
* Get contextual advice based on project files and tasks
|
||||
* **When to Consider Using Research:**
|
||||
* **Before implementing any task** - Research current best practices
|
||||
* **When encountering new technologies** - Get up-to-date implementation guidance (libraries, APIs, etc.)
|
||||
* **For security-related tasks** - Find latest security recommendations
|
||||
* **When updating dependencies** - Research breaking changes and migration guides
|
||||
* **For performance optimization** - Get current performance best practices
|
||||
* **When debugging complex issues** - Research known solutions and workarounds
|
||||
* **Research + Action Pattern:**
|
||||
* Use `research` to gather fresh information
|
||||
* Use `update_subtask` to commit findings with timestamps
|
||||
* Use `update_task` to incorporate research into task details
|
||||
* Use `add_task` with research flag for informed task creation
|
||||
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments.
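A sketch of a context-rich query (the query text, IDs, and file paths are illustrative):

```bash
# Research with task and file context, then save the findings to subtask 15.2
task-master research "What are the latest best practices for React Query v5?" \
  --id=15,16.2 --files=src/api.js --detail=high --save-to=15.2
```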
|
||||
|
||||
---
|
||||
|
||||
## Tag Management
|
||||
|
||||
This new suite of commands allows you to manage different task contexts (tags).
|
||||
|
||||
### 26. List Tags (`tags`)
|
||||
|
||||
* **MCP Tool:** `list_tags`
|
||||
* **CLI Command:** `task-master tags [options]`
|
||||
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
|
||||
* **Key Parameters/Options:**
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
* `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)
|
||||
|
||||
### 27. Add Tag (`add_tag`)
|
||||
|
||||
* **MCP Tool:** `add_tag`
|
||||
* **CLI Command:** `task-master add-tag <tagName> [options]`
|
||||
* **Description:** `Create a new, empty tag context, or copy tasks from another tag.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional)
|
||||
* `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`)
|
||||
* `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
|
||||
* `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
|
||||
* `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
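A sketch, assuming a hypothetical `feature-auth` tag name:

```bash
# Create a tag for a feature branch and copy the current tasks into it
task-master add-tag feature-auth --copy-from-current -d "Auth feature work"

# Or derive the tag name from the current git branch
task-master add-tag --from-branch
```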
|
||||
|
||||
### 28. Delete Tag (`delete_tag`)
|
||||
|
||||
* **MCP Tool:** `delete_tag`
|
||||
* **CLI Command:** `task-master delete-tag <tagName> [options]`
|
||||
* **Description:** `Permanently delete a tag and all of its associated tasks.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
|
||||
* `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
|
||||
### 29. Use Tag (`use_tag`)
|
||||
|
||||
* **MCP Tool:** `use_tag`
|
||||
* **CLI Command:** `task-master use-tag <tagName>`
|
||||
* **Description:** `Switch your active task context to a different tag.`
|
||||
* **Key Parameters/Options:**
|
||||
* `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
|
||||
### 30. Rename Tag (`rename_tag`)
|
||||
|
||||
* **MCP Tool:** `rename_tag`
|
||||
* **CLI Command:** `task-master rename-tag <oldName> <newName>`
|
||||
* **Description:** `Rename an existing tag.`
|
||||
* **Key Parameters/Options:**
|
||||
* `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
|
||||
* `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
|
||||
|
||||
### 31. Copy Tag (`copy_tag`)
|
||||
|
||||
* **MCP Tool:** `copy_tag`
|
||||
* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]`
|
||||
* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.`
|
||||
* **Key Parameters/Options:**
|
||||
* `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional)
|
||||
* `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional)
|
||||
* `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`)
|
||||
|
||||
---
|
||||
|
||||
## Miscellaneous
|
||||
|
||||
### 32. Sync Readme (`sync-readme`) -- experimental
|
||||
|
||||
* **MCP Tool:** N/A
|
||||
* **CLI Command:** `task-master sync-readme [options]`
|
||||
* **Description:** `Exports your task list to your project's README.md file, useful for showcasing progress.`
|
||||
* **Key Parameters/Options:**
|
||||
* `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`)
|
||||
* `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`)
|
||||
* `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`)
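For example:

```bash
# Export completed tasks (including subtasks) to README.md
task-master sync-readme --status=done --with-subtasks
```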
|
||||
|
||||
---
|
||||
|
||||
## Environment Variables Configuration (Updated)
|
||||
|
||||
Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
|
||||
|
||||
Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
|
||||
|
||||
* **API Keys (required for the corresponding provider):**
  * `ANTHROPIC_API_KEY`
  * `PERPLEXITY_API_KEY`
  * `OPENAI_API_KEY`
  * `GOOGLE_API_KEY`
  * `MISTRAL_API_KEY`
  * `AZURE_OPENAI_API_KEY` (requires `AZURE_OPENAI_ENDPOINT` as well)
  * `OPENROUTER_API_KEY`
  * `XAI_API_KEY`
  * `OLLAMA_API_KEY` (requires `OLLAMA_BASE_URL` as well)
* **Endpoints (optional, provider-specific; can also be set in `.taskmaster/config.json`):**
  * `AZURE_OPENAI_ENDPOINT`
  * `OLLAMA_BASE_URL` (default: `http://localhost:11434/api`)

**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via the `task-master models` command or the `models` MCP tool.
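A minimal `.env` sketch (the key values are placeholders; include only the providers you actually use):

```bash
# .env in the project root (used by the CLI)
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
PERPLEXITY_API_KEY=pplx-xxxxxxxx

# Only needed for the corresponding providers
AZURE_OPENAI_API_KEY=xxxxxxxx
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
OLLAMA_API_KEY=xxxxxxxx
OLLAMA_BASE_URL=http://localhost:11434/api
```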
|
||||
|
||||
---
|
||||
|
||||
For details on how these commands fit into the development process, see the [dev_workflow.mdc](mdc:.cursor/rules/taskmaster/dev_workflow.mdc).
|
||||
.gitignore (vendored, 12 changes)
@@ -22,17 +22,11 @@ lerna-debug.log*
|
||||
|
||||
# Coverage directory used by tools like istanbul
|
||||
coverage/
|
||||
coverage-e2e/
|
||||
*.lcov
|
||||
|
||||
# Jest cache
|
||||
.jest/
|
||||
|
||||
# Test results and reports
|
||||
test-results/
|
||||
jest-results.json
|
||||
junit.xml
|
||||
|
||||
# Test temporary files and directories
|
||||
tests/temp/
|
||||
tests/e2e/_runs/
|
||||
@@ -93,9 +87,3 @@ dev-debug.log
|
||||
*.njsproj
|
||||
*.sln
|
||||
*.sw?
|
||||
|
||||
# OS specific
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/
|
||||
|
||||
@@ -48,8 +48,5 @@ export default {
|
||||
verbose: true,
|
||||
|
||||
// Setup file
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/setup.js'],
|
||||
|
||||
// Ignore e2e tests from default Jest runs
|
||||
testPathIgnorePatterns: ['<rootDir>/tests/e2e/']
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/setup.js']
|
||||
};
|
||||
|
||||
@@ -1,82 +0,0 @@
|
||||
/**
|
||||
* Jest configuration for E2E tests
|
||||
* Separate from unit tests to allow different settings
|
||||
*/
|
||||
|
||||
export default {
|
||||
displayName: 'E2E Tests',
|
||||
testMatch: ['<rootDir>/tests/e2e/**/*.test.js'],
|
||||
testPathIgnorePatterns: [
|
||||
'/node_modules/',
|
||||
'/tests/e2e/utils/',
|
||||
'/tests/e2e/config/',
|
||||
'/tests/e2e/runners/',
|
||||
'/tests/e2e/e2e_helpers.sh',
|
||||
'/tests/e2e/test_llm_analysis.sh',
|
||||
'/tests/e2e/run_e2e.sh',
|
||||
'/tests/e2e/run_fallback_verification.sh'
|
||||
],
|
||||
testEnvironment: 'node',
|
||||
testTimeout: 600000, // 10 minutes default (AI operations can be slow)
|
||||
maxWorkers: 10, // Run tests in parallel workers to avoid rate limits
|
||||
maxConcurrency: 10, // Limit concurrent test execution
|
||||
testSequencer: '<rootDir>/tests/e2e/setup/rate-limit-sequencer.cjs', // Custom sequencer for rate limiting
|
||||
verbose: true,
|
||||
// Suppress console output for cleaner test results
|
||||
silent: false,
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/e2e/setup/jest-setup.js'],
|
||||
globalSetup: '<rootDir>/tests/e2e/setup/global-setup.js',
|
||||
globalTeardown: '<rootDir>/tests/e2e/setup/global-teardown.js',
|
||||
collectCoverageFrom: [
|
||||
'src/**/*.js',
|
||||
'!src/**/*.test.js',
|
||||
'!src/**/__tests__/**'
|
||||
],
|
||||
coverageDirectory: '<rootDir>/coverage-e2e',
|
||||
// Custom reporters for better E2E test output
|
||||
// Transform configuration to match unit tests
|
||||
transform: {},
|
||||
transformIgnorePatterns: ['/node_modules/'],
|
||||
// Module configuration
|
||||
moduleNameMapper: {
|
||||
'^@/(.*)$': '<rootDir>/$1'
|
||||
},
|
||||
moduleDirectories: ['node_modules', '<rootDir>'],
|
||||
// Reporters configuration
|
||||
reporters: [
|
||||
'default',
|
||||
'jest-junit',
|
||||
[
|
||||
'jest-html-reporters',
|
||||
{
|
||||
publicPath: './test-results',
|
||||
filename: 'index.html',
|
||||
pageTitle: 'Task Master E2E Test Report',
|
||||
expand: true,
|
||||
openReport: false,
|
||||
hideIcon: false,
|
||||
includeFailureMsg: true,
|
||||
enableMergeData: true,
|
||||
dataMergeLevel: 1,
|
||||
inlineSource: false,
|
||||
customInfos: [
|
||||
{
|
||||
title: 'Environment',
|
||||
value: 'E2E Testing'
|
||||
},
|
||||
{
|
||||
title: 'Test Type',
|
||||
value: 'CLI Commands'
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
],
|
||||
// Environment variables for E2E tests
|
||||
testEnvironmentOptions: {
|
||||
env: {
|
||||
NODE_ENV: 'test',
|
||||
E2E_TEST: 'true'
|
||||
}
|
||||
}
|
||||
};
|
||||
@@ -1,116 +0,0 @@
|
||||
/**
|
||||
* Jest configuration using projects feature to separate AI and non-AI tests
|
||||
* This allows different concurrency settings for each type
|
||||
*/
|
||||
|
||||
const baseConfig = {
|
||||
testEnvironment: 'node',
|
||||
testTimeout: 600000,
|
||||
verbose: true,
|
||||
silent: false,
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/e2e/setup/jest-setup.js'],
|
||||
globalSetup: '<rootDir>/tests/e2e/setup/global-setup.js',
|
||||
globalTeardown: '<rootDir>/tests/e2e/setup/global-teardown.js',
|
||||
transform: {},
|
||||
transformIgnorePatterns: ['/node_modules/'],
|
||||
moduleNameMapper: {
|
||||
'^@/(.*)$': '<rootDir>/$1'
|
||||
},
|
||||
moduleDirectories: ['node_modules', '<rootDir>'],
|
||||
reporters: [
|
||||
'default',
|
||||
'jest-junit',
|
||||
[
|
||||
'jest-html-reporters',
|
||||
{
|
||||
publicPath: './test-results',
|
||||
filename: 'index.html',
|
||||
pageTitle: 'Task Master E2E Test Report',
|
||||
expand: true,
|
||||
openReport: false,
|
||||
hideIcon: false,
|
||||
includeFailureMsg: true,
|
||||
enableMergeData: true,
|
||||
dataMergeLevel: 1,
|
||||
inlineSource: false
|
||||
}
|
||||
]
|
||||
]
|
||||
};
|
||||
|
||||
export default {
|
||||
projects: [
|
||||
{
|
||||
...baseConfig,
|
||||
displayName: 'Non-AI E2E Tests',
|
||||
testMatch: [
|
||||
'<rootDir>/tests/e2e/**/add-dependency.test.js',
|
||||
'<rootDir>/tests/e2e/**/remove-dependency.test.js',
|
||||
'<rootDir>/tests/e2e/**/validate-dependencies.test.js',
|
||||
'<rootDir>/tests/e2e/**/fix-dependencies.test.js',
|
||||
'<rootDir>/tests/e2e/**/add-subtask.test.js',
|
||||
'<rootDir>/tests/e2e/**/remove-subtask.test.js',
|
||||
'<rootDir>/tests/e2e/**/clear-subtasks.test.js',
|
||||
'<rootDir>/tests/e2e/**/set-status.test.js',
|
||||
'<rootDir>/tests/e2e/**/show.test.js',
|
||||
'<rootDir>/tests/e2e/**/list.test.js',
|
||||
'<rootDir>/tests/e2e/**/next.test.js',
|
||||
'<rootDir>/tests/e2e/**/tags.test.js',
|
||||
'<rootDir>/tests/e2e/**/add-tag.test.js',
|
||||
'<rootDir>/tests/e2e/**/delete-tag.test.js',
|
||||
'<rootDir>/tests/e2e/**/rename-tag.test.js',
|
||||
'<rootDir>/tests/e2e/**/copy-tag.test.js',
|
||||
'<rootDir>/tests/e2e/**/use-tag.test.js',
|
||||
'<rootDir>/tests/e2e/**/init.test.js',
|
||||
'<rootDir>/tests/e2e/**/models.test.js',
|
||||
'<rootDir>/tests/e2e/**/move.test.js',
|
||||
'<rootDir>/tests/e2e/**/remove-task.test.js',
|
||||
'<rootDir>/tests/e2e/**/sync-readme.test.js',
|
||||
'<rootDir>/tests/e2e/**/rules.test.js',
|
||||
'<rootDir>/tests/e2e/**/lang.test.js',
|
||||
'<rootDir>/tests/e2e/**/migrate.test.js'
|
||||
],
|
||||
// Non-AI tests can run with more parallelism
|
||||
maxWorkers: 4,
|
||||
maxConcurrency: 5
|
||||
},
|
||||
{
|
||||
...baseConfig,
|
||||
displayName: 'Light AI E2E Tests',
|
||||
testMatch: [
|
||||
'<rootDir>/tests/e2e/**/add-task.test.js',
|
||||
'<rootDir>/tests/e2e/**/update-subtask.test.js',
|
||||
'<rootDir>/tests/e2e/**/complexity-report.test.js'
|
||||
],
|
||||
// Light AI tests with moderate parallelism
|
||||
maxWorkers: 3,
|
||||
maxConcurrency: 3
|
||||
},
|
||||
{
|
||||
...baseConfig,
|
||||
displayName: 'Heavy AI E2E Tests',
|
||||
testMatch: [
|
||||
'<rootDir>/tests/e2e/**/update-task.test.js',
|
||||
'<rootDir>/tests/e2e/**/expand-task.test.js',
|
||||
'<rootDir>/tests/e2e/**/research.test.js',
|
||||
'<rootDir>/tests/e2e/**/research-save.test.js',
|
||||
'<rootDir>/tests/e2e/**/parse-prd.test.js',
|
||||
'<rootDir>/tests/e2e/**/generate.test.js',
|
||||
'<rootDir>/tests/e2e/**/analyze-complexity.test.js',
|
||||
'<rootDir>/tests/e2e/**/update.test.js'
|
||||
],
|
||||
// Heavy AI tests run sequentially to avoid rate limits
|
||||
maxWorkers: 1,
|
||||
maxConcurrency: 1,
|
||||
// Even longer timeout for AI operations
|
||||
testTimeout: 900000 // 15 minutes
|
||||
}
|
||||
],
|
||||
// Global settings
|
||||
coverageDirectory: '<rootDir>/coverage-e2e',
|
||||
collectCoverageFrom: [
|
||||
'src/**/*.js',
|
||||
'!src/**/*.test.js',
|
||||
'!src/**/__tests__/**'
|
||||
]
|
||||
};
|
||||
@@ -16,14 +16,12 @@ import {
|
||||
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
|
||||
* @param {string|number} args.id - Task ID to add dependency to
|
||||
* @param {string|number} args.dependsOn - Task ID that will become a dependency
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information
|
||||
*/
|
||||
export async function addDependencyDirect(args, log) {
|
||||
// Destructure expected args
|
||||
const { tasksJsonPath, id, dependsOn, tag, projectRoot } = args;
|
||||
const { tasksJsonPath, id, dependsOn } = args;
|
||||
try {
|
||||
log.info(`Adding dependency with args: ${JSON.stringify(args)}`);
|
||||
|
||||
@@ -78,11 +76,8 @@ export async function addDependencyDirect(args, log) {
|
||||
// Enable silent mode to prevent console logs from interfering with JSON response
|
||||
enableSilentMode();
|
||||
|
||||
// Create context object
|
||||
const context = { projectRoot, tag };
|
||||
|
||||
// Call the core function using the provided path
|
||||
await addDependency(tasksPath, taskId, dependencyId, context);
|
||||
await addDependency(tasksPath, taskId, dependencyId);
|
||||
|
||||
// Restore normal logging
|
||||
disableSilentMode();
|
||||
|
||||
@@ -24,7 +24,6 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
* @param {string} [args.tasksJsonPath] - Path to the tasks.json file (resolved by tool)
|
||||
* @param {boolean} [args.research=false] - Whether to use research capabilities for task creation
|
||||
* @param {string} [args.projectRoot] - Project root path
|
||||
* @param {string} [args.tag] - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object
|
||||
* @param {Object} context - Additional context (session)
|
||||
* @returns {Promise<Object>} - Result object { success: boolean, data?: any, error?: { code: string, message: string } }
|
||||
@@ -37,8 +36,7 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
dependencies,
|
||||
priority,
|
||||
research,
|
||||
projectRoot,
|
||||
tag
|
||||
projectRoot
|
||||
} = args;
|
||||
const { session } = context; // Destructure session from context
|
||||
|
||||
@@ -123,8 +121,7 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
mcpLog,
|
||||
projectRoot,
|
||||
commandName: 'add-task',
|
||||
outputType: 'mcp',
|
||||
tag
|
||||
outputType: 'mcp'
|
||||
},
|
||||
'json', // outputFormat
|
||||
manualTaskData, // Pass the manual task data
|
||||
@@ -150,8 +147,7 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
mcpLog,
|
||||
projectRoot,
|
||||
commandName: 'add-task',
|
||||
outputType: 'mcp',
|
||||
tag
|
||||
outputType: 'mcp'
|
||||
},
|
||||
'json', // outputFormat
|
||||
null, // manualTaskData is null for AI creation
|
||||
|
||||
@@ -22,7 +22,6 @@ import { createLogWrapper } from '../../tools/utils.js'; // Import the new utili
|
||||
* @param {number} [args.from] - Starting task ID in a range to analyze
|
||||
* @param {number} [args.to] - Ending task ID in a range to analyze
|
||||
* @param {string} [args.projectRoot] - Project root path.
|
||||
* @param {string} [args.tag] - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object
|
||||
* @param {Object} [context={}] - Context object containing session data
|
||||
* @param {Object} [context.session] - MCP session object
|
||||
@@ -38,8 +37,7 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
|
||||
projectRoot,
|
||||
ids,
|
||||
from,
|
||||
to,
|
||||
tag
|
||||
to
|
||||
} = args;
|
||||
|
||||
const logWrapper = createLogWrapper(log);
|
||||
@@ -93,8 +91,7 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
|
||||
projectRoot: projectRoot, // Pass projectRoot here
|
||||
id: ids, // Pass the ids parameter to the core function as 'id'
|
||||
from: from, // Pass from parameter
|
||||
to: to, // Pass to parameter
|
||||
tag // forward tag
|
||||
to: to // Pass to parameter
|
||||
};
|
||||
// --- End Initial Checks ---
|
||||
|
||||
@@ -115,9 +112,7 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
|
||||
session,
|
||||
mcpLog: logWrapper,
|
||||
commandName: 'analyze-complexity',
|
||||
outputType: 'mcp',
|
||||
projectRoot,
|
||||
tag
|
||||
outputType: 'mcp'
|
||||
});
|
||||
report = coreResult.report;
|
||||
} catch (error) {
|
||||
|
||||
@@ -18,7 +18,6 @@ import path from 'path';
|
||||
* @param {string} [args.id] - Task IDs (comma-separated) to clear subtasks from
|
||||
* @param {boolean} [args.all] - Clear subtasks from all tasks
|
||||
* @param {string} [args.tag] - Tag context to operate on (defaults to current active tag)
|
||||
* @param {string} [args.projectRoot] - Project root path (for MCP/env fallback)
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
|
||||
*/
|
||||
@@ -81,7 +80,7 @@ export async function clearSubtasksDirect(args, log) {
|
||||
};
|
||||
}
|
||||
|
||||
const currentTag = data.tag || tag;
|
||||
const currentTag = data.tag || 'master';
|
||||
const tasks = data.tasks;
|
||||
|
||||
// If all is specified, get all task IDs
|
||||
|
||||
@@ -18,7 +18,6 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
* @param {string} [args.prompt] - Additional context to guide subtask generation
|
||||
* @param {boolean} [args.force] - Force regeneration of subtasks for tasks that already have them
|
||||
* @param {string} [args.projectRoot] - Project root path.
|
||||
* @param {string} [args.tag] - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object from FastMCP
|
||||
* @param {Object} context - Context object containing session
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
|
||||
@@ -26,8 +25,7 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
export async function expandAllTasksDirect(args, log, context = {}) {
|
||||
const { session } = context; // Extract session
|
||||
// Destructure expected args, including projectRoot
|
||||
const { tasksJsonPath, num, research, prompt, force, projectRoot, tag } =
|
||||
args;
|
||||
const { tasksJsonPath, num, research, prompt, force, projectRoot } = args;
|
||||
|
||||
// Create logger wrapper using the utility
|
||||
const mcpLog = createLogWrapper(log);
|
||||
@@ -46,7 +44,7 @@ export async function expandAllTasksDirect(args, log, context = {}) {
|
||||
enableSilentMode(); // Enable silent mode for the core function call
|
||||
try {
|
||||
log.info(
|
||||
`Calling core expandAllTasks with args: ${JSON.stringify({ num, research, prompt, force, projectRoot, tag })}`
|
||||
`Calling core expandAllTasks with args: ${JSON.stringify({ num, research, prompt, force, projectRoot })}`
|
||||
);
|
||||
|
||||
// Parse parameters (ensure correct types)
|
||||
@@ -62,7 +60,7 @@ export async function expandAllTasksDirect(args, log, context = {}) {
|
||||
useResearch,
|
||||
additionalContext,
|
||||
forceFlag,
|
||||
{ session, mcpLog, projectRoot, tag },
|
||||
{ session, mcpLog, projectRoot },
|
||||
'json'
|
||||
);
|
||||
|
||||
|
||||
@@ -35,17 +35,8 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
export async function expandTaskDirect(args, log, context = {}) {
|
||||
const { session } = context; // Extract session
|
||||
// Destructure expected args, including projectRoot
|
||||
const {
|
||||
tasksJsonPath,
|
||||
id,
|
||||
num,
|
||||
research,
|
||||
prompt,
|
||||
force,
|
||||
projectRoot,
|
||||
tag,
|
||||
complexityReportPath
|
||||
} = args;
|
||||
const { tasksJsonPath, id, num, research, prompt, force, projectRoot, tag } =
|
||||
args;
|
||||
|
||||
// Log session root data for debugging
|
||||
log.info(
|
||||
@@ -201,7 +192,6 @@ export async function expandTaskDirect(args, log, context = {}) {
|
||||
useResearch,
|
||||
additionalContext,
|
||||
{
|
||||
complexityReportPath,
|
||||
mcpLog,
|
||||
session,
|
||||
projectRoot,
|
||||
|
||||
@@ -53,9 +53,10 @@ export async function fixDependenciesDirect(args, log) {
|
||||
// Enable silent mode to prevent console logs from interfering with JSON response
|
||||
enableSilentMode();
|
||||
|
||||
const options = { projectRoot, tag };
|
||||
// Call the original command function using the provided path and proper context
|
||||
await fixDependenciesCommand(tasksPath, options);
|
||||
await fixDependenciesCommand(tasksPath, {
|
||||
context: { projectRoot, tag }
|
||||
});
|
||||
|
||||
// Restore normal logging
|
||||
disableSilentMode();
|
||||
|
||||
@@ -13,16 +13,12 @@ import {
|
||||
* Direct function wrapper for generateTaskFiles with error handling.
|
||||
*
|
||||
* @param {Object} args - Command arguments containing tasksJsonPath and outputDir.
|
||||
* @param {string} args.tasksJsonPath - Path to the tasks.json file.
|
||||
* @param {string} args.outputDir - Path to the output directory.
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object.
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information.
|
||||
*/
|
||||
export async function generateTaskFilesDirect(args, log) {
|
||||
// Destructure expected args
|
||||
const { tasksJsonPath, outputDir, projectRoot, tag } = args;
|
||||
const { tasksJsonPath, outputDir } = args;
|
||||
try {
|
||||
log.info(`Generating task files with args: ${JSON.stringify(args)}`);
|
||||
|
||||
@@ -55,12 +51,8 @@ export async function generateTaskFilesDirect(args, log) {
|
||||
// Enable silent mode to prevent logs from being written to stdout
|
||||
enableSilentMode();
|
||||
|
||||
// Pass projectRoot and tag so the core respects context
|
||||
generateTaskFiles(tasksPath, resolvedOutputDir, {
|
||||
projectRoot,
|
||||
tag,
|
||||
mcpLog: log
|
||||
});
|
||||
// The function is synchronous despite being awaited elsewhere
|
||||
generateTaskFiles(tasksPath, resolvedOutputDir);
|
||||
|
||||
// Restore normal logging after task generation
|
||||
disableSilentMode();
|
||||
|
||||
@@ -13,19 +13,12 @@ import {
|
||||
* Direct function wrapper for listTasks with error handling and caching.
|
||||
*
|
||||
* @param {Object} args - Command arguments (now expecting tasksJsonPath explicitly).
|
||||
* @param {string} args.tasksJsonPath - Path to the tasks.json file.
|
||||
* @param {string} args.reportPath - Path to the report file.
|
||||
* @param {string} args.status - Status of the task.
|
||||
* @param {boolean} args.withSubtasks - Whether to include subtasks.
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object.
|
||||
* @returns {Promise<Object>} - Task list result { success: boolean, data?: any, error?: { code: string, message: string } }.
|
||||
*/
|
||||
export async function listTasksDirect(args, log, context = {}) {
|
||||
// Destructure the explicit tasksJsonPath from args
|
||||
const { tasksJsonPath, reportPath, status, withSubtasks, projectRoot, tag } =
|
||||
args;
|
||||
const { tasksJsonPath, reportPath, status, withSubtasks, projectRoot } = args;
|
||||
const { session } = context;
|
||||
|
||||
if (!tasksJsonPath) {
|
||||
@@ -59,7 +52,8 @@ export async function listTasksDirect(args, log, context = {}) {
|
||||
reportPath,
|
||||
withSubtasksFilter,
|
||||
'json',
|
||||
{ projectRoot, session, tag }
|
||||
null, // tag
|
||||
{ projectRoot, session } // context
|
||||
);
|
||||
|
||||
if (!resultData || !resultData.tasks) {
|
||||
|
||||
@@ -17,14 +17,12 @@ import {
|
||||
* @param {string} args.destinationId - ID of the destination (e.g., '7' or '7.3' or '7,8,9')
|
||||
* @param {string} args.file - Alternative path to the tasks.json file
|
||||
* @param {string} args.projectRoot - Project root directory
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {boolean} args.generateFiles - Whether to regenerate task files after moving (default: true)
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: Object}>}
|
||||
*/
|
||||
export async function moveTaskDirect(args, log, context = {}) {
|
||||
const { session } = context;
|
||||
const { projectRoot, tag } = args;
|
||||
|
||||
// Validate required parameters
|
||||
if (!args.sourceId) {
|
||||
@@ -75,8 +73,8 @@ export async function moveTaskDirect(args, log, context = {}) {
|
||||
args.destinationId,
|
||||
generateFiles,
|
||||
{
|
||||
projectRoot,
|
||||
tag
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
}
|
||||
);
|
||||
|
||||
|
||||
@@ -18,15 +18,12 @@ import {
|
||||
*
|
||||
* @param {Object} args - Command arguments
|
||||
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
|
||||
* @param {string} args.reportPath - Path to the report file.
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<Object>} - Next task result { success: boolean, data?: any, error?: { code: string, message: string } }
|
||||
*/
|
||||
export async function nextTaskDirect(args, log, context = {}) {
|
||||
// Destructure expected args
|
||||
const { tasksJsonPath, reportPath, projectRoot, tag } = args;
|
||||
const { tasksJsonPath, reportPath, projectRoot } = args;
|
||||
const { session } = context;
|
||||
|
||||
if (!tasksJsonPath) {
|
||||
@@ -49,7 +46,7 @@ export async function nextTaskDirect(args, log, context = {}) {
|
||||
log.info(`Finding next task from ${tasksJsonPath}`);
|
||||
|
||||
// Read tasks data using the provided path
|
||||
const data = readJSON(tasksJsonPath, projectRoot, tag);
|
||||
const data = readJSON(tasksJsonPath, projectRoot);
|
||||
if (!data || !data.tasks) {
|
||||
disableSilentMode(); // Disable before return
|
||||
return {
|
||||
|
||||
@@ -20,13 +20,6 @@ import { TASKMASTER_TASKS_FILE } from '../../../../src/constants/paths.js';
|
||||
* Direct function wrapper for parsing PRD documents and generating tasks.
|
||||
*
|
||||
* @param {Object} args - Command arguments containing projectRoot, input, output, numTasks options.
|
||||
* @param {string} args.input - Path to the input PRD file.
|
||||
* @param {string} args.output - Path to the output directory.
|
||||
* @param {string} args.numTasks - Number of tasks to generate.
|
||||
* @param {boolean} args.force - Whether to force parsing.
|
||||
* @param {boolean} args.append - Whether to append to the output file.
|
||||
* @param {boolean} args.research - Whether to use research mode.
|
||||
* @param {string} args.tag - Tag context for organizing tasks into separate task lists.
|
||||
* @param {Object} log - Logger object.
|
||||
* @param {Object} context - Context object containing session data.
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information.
|
||||
@@ -41,8 +34,7 @@ export async function parsePRDDirect(args, log, context = {}) {
|
||||
force,
|
||||
append,
|
||||
research,
|
||||
projectRoot,
|
||||
tag
|
||||
projectRoot
|
||||
} = args;
|
||||
|
||||
// Create the standard logger wrapper
|
||||
@@ -160,7 +152,6 @@ export async function parsePRDDirect(args, log, context = {}) {
|
||||
session,
|
||||
mcpLog: logWrapper,
|
||||
projectRoot,
|
||||
tag,
|
||||
force,
|
||||
append,
|
||||
research,
|
||||
|
||||
@@ -14,14 +14,12 @@ import {
|
||||
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
|
||||
* @param {string|number} args.id - Task ID to remove dependency from
|
||||
* @param {string|number} args.dependsOn - Task ID to remove as a dependency
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
|
||||
*/
|
||||
export async function removeDependencyDirect(args, log) {
|
||||
// Destructure expected args
|
||||
const { tasksJsonPath, id, dependsOn, projectRoot, tag } = args;
|
||||
const { tasksJsonPath, id, dependsOn } = args;
|
||||
try {
|
||||
log.info(`Removing dependency with args: ${JSON.stringify(args)}`);
|
||||
|
||||
@@ -77,10 +75,7 @@ export async function removeDependencyDirect(args, log) {
|
||||
enableSilentMode();
|
||||
|
||||
// Call the core function using the provided tasksPath
|
||||
await removeDependency(tasksPath, taskId, dependencyId, {
|
||||
projectRoot,
|
||||
tag
|
||||
});
|
||||
await removeDependency(tasksPath, taskId, dependencyId);
|
||||
|
||||
// Restore normal logging
|
||||
disableSilentMode();
|
||||
|
||||
@@ -15,14 +15,12 @@ import {
|
||||
* @param {string} args.id - Subtask ID in format "parentId.subtaskId" (required)
|
||||
* @param {boolean} [args.convert] - Whether to convert the subtask to a standalone task
|
||||
* @param {boolean} [args.skipGenerate] - Skip regenerating task files
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
|
||||
*/
|
||||
export async function removeSubtaskDirect(args, log) {
|
||||
// Destructure expected args
|
||||
const { tasksJsonPath, id, convert, skipGenerate, projectRoot, tag } = args;
|
||||
const { tasksJsonPath, id, convert, skipGenerate } = args;
|
||||
try {
|
||||
// Enable silent mode to prevent console logs from interfering with JSON response
|
||||
enableSilentMode();
|
||||
@@ -84,11 +82,7 @@ export async function removeSubtaskDirect(args, log) {
|
||||
tasksPath,
|
||||
id,
|
||||
convertToTask,
|
||||
generateFiles,
|
||||
{
|
||||
projectRoot,
|
||||
tag
|
||||
}
|
||||
generateFiles
|
||||
);
|
||||
|
||||
// Restore normal logging
|
||||
|
||||
@@ -20,8 +20,7 @@ import {
|
||||
* @param {Object} args - Command arguments
|
||||
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
|
||||
* @param {string} args.id - The ID(s) of the task(s) or subtask(s) to remove (comma-separated for multiple).
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {string} [args.tag] - Tag context to operate on (defaults to current active tag).
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string } }
|
||||
*/
|
||||
@@ -118,7 +117,7 @@ export async function removeTaskDirect(args, log, context = {}) {
|
||||
removedTasks: result.removedTasks,
|
||||
message: result.message,
|
||||
tasksPath: tasksJsonPath,
|
||||
tag
|
||||
tag: data.tag || tag || 'master'
|
||||
}
|
||||
};
|
||||
} finally {
|
||||
|
||||
@@ -24,7 +24,6 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
* @param {string} [args.saveTo] - Automatically save to task/subtask ID (e.g., "15" or "15.2")
|
||||
* @param {boolean} [args.saveToFile=false] - Save research results to .taskmaster/docs/research/ directory
|
||||
* @param {string} [args.projectRoot] - Project root path
|
||||
* @param {string} [args.tag] - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object
|
||||
* @param {Object} context - Additional context (session)
|
||||
* @returns {Promise<Object>} - Result object { success: boolean, data?: any, error?: { code: string, message: string } }
|
||||
@@ -40,8 +39,7 @@ export async function researchDirect(args, log, context = {}) {
|
||||
detailLevel = 'medium',
|
||||
saveTo,
|
||||
saveToFile = false,
|
||||
projectRoot,
|
||||
tag
|
||||
projectRoot
|
||||
} = args;
|
||||
const { session } = context; // Destructure session from context
|
||||
|
||||
@@ -113,7 +111,6 @@ export async function researchDirect(args, log, context = {}) {
|
||||
includeProjectTree,
|
||||
detailLevel,
|
||||
projectRoot,
|
||||
tag,
|
||||
saveToFile
|
||||
};
|
||||
|
||||
@@ -172,8 +169,7 @@ ${result.result}`;
|
||||
mcpLog,
|
||||
commandName: 'research-save',
|
||||
outputType: 'mcp',
|
||||
projectRoot,
|
||||
tag
|
||||
projectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
@@ -204,8 +200,7 @@ ${result.result}`;
|
||||
mcpLog,
|
||||
commandName: 'research-save',
|
||||
outputType: 'mcp',
|
||||
projectRoot,
|
||||
tag
|
||||
projectRoot
|
||||
},
|
||||
'json',
|
||||
true // appendMode = true
|
||||
|
||||
@@ -14,11 +14,6 @@ import { nextTaskDirect } from './next-task.js';
|
||||
* Direct function wrapper for setTaskStatus with error handling.
|
||||
*
|
||||
* @param {Object} args - Command arguments containing id, status, tasksJsonPath, and projectRoot.
|
||||
* @param {string} args.id - The ID of the task to update.
|
||||
* @param {string} args.status - The new status to set for the task.
|
||||
* @param {string} args.tasksJsonPath - Path to the tasks.json file.
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object.
|
||||
* @param {Object} context - Additional context (session)
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information.
|
||||
@@ -75,12 +70,17 @@ export async function setTaskStatusDirect(args, log, context = {}) {
|
||||
enableSilentMode(); // Enable silent mode before calling core function
|
||||
try {
|
||||
// Call the core function
|
||||
await setTaskStatus(tasksPath, taskId, newStatus, {
|
||||
mcpLog: log,
|
||||
projectRoot,
|
||||
session,
|
||||
await setTaskStatus(
|
||||
tasksPath,
|
||||
taskId,
|
||||
newStatus,
|
||||
{
|
||||
mcpLog: log,
|
||||
projectRoot,
|
||||
session
|
||||
},
|
||||
tag
|
||||
});
|
||||
);
|
||||
|
||||
log.info(`Successfully set task ${taskId} status to ${newStatus}`);
|
||||
|
||||
@@ -103,8 +103,7 @@ export async function setTaskStatusDirect(args, log, context = {}) {
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
reportPath: complexityReportPath,
|
||||
projectRoot: projectRoot,
|
||||
tag
|
||||
projectRoot: projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -19,7 +19,6 @@ import { findTasksPath } from '../utils/path-utils.js';
|
||||
* @param {string} args.reportPath - Explicit path to the complexity report file.
|
||||
* @param {string} [args.status] - Optional status to filter subtasks by.
|
||||
* @param {string} args.projectRoot - Absolute path to the project root directory (already normalized by tool).
|
||||
* @param {string} [args.tag] - Tag for the task
|
||||
* @param {Object} log - Logger object.
|
||||
* @param {Object} context - Context object containing session data.
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information.
|
||||
@@ -27,7 +26,7 @@ import { findTasksPath } from '../utils/path-utils.js';
|
||||
export async function showTaskDirect(args, log) {
|
||||
// This function doesn't need session context since it only reads data
|
||||
// Destructure projectRoot and other args. projectRoot is assumed normalized.
|
||||
const { id, file, reportPath, status, projectRoot, tag } = args;
|
||||
const { id, file, reportPath, status, projectRoot } = args;
|
||||
|
||||
log.info(
|
||||
`Showing task direct function. ID: ${id}, File: ${file}, Status Filter: ${status}, ProjectRoot: ${projectRoot}`
|
||||
@@ -56,7 +55,7 @@ export async function showTaskDirect(args, log) {
|
||||
|
||||
// --- Rest of the function remains the same, using tasksJsonPath ---
|
||||
try {
|
||||
const tasksData = readJSON(tasksJsonPath, projectRoot, tag);
|
||||
const tasksData = readJSON(tasksJsonPath, projectRoot);
|
||||
if (!tasksData || !tasksData.tasks) {
|
||||
return {
|
||||
success: false,
|
||||
|
||||
@@ -20,7 +20,6 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
* @param {string} args.prompt - Information to append to the subtask.
|
||||
* @param {boolean} [args.research] - Whether to use research role.
|
||||
* @param {string} [args.projectRoot] - Project root path.
|
||||
* @param {string} [args.tag] - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object.
|
||||
* @param {Object} context - Context object containing session data.
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information.
|
||||
@@ -28,7 +27,7 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
export async function updateSubtaskByIdDirect(args, log, context = {}) {
|
||||
const { session } = context;
|
||||
// Destructure expected args, including projectRoot
|
||||
const { tasksJsonPath, id, prompt, research, projectRoot, tag } = args;
|
||||
const { tasksJsonPath, id, prompt, research, projectRoot } = args;
|
||||
|
||||
const logWrapper = createLogWrapper(log);
|
||||
|
||||
@@ -113,7 +112,6 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
|
||||
mcpLog: logWrapper,
|
||||
session,
|
||||
projectRoot,
|
||||
tag,
|
||||
commandName: 'update-subtask',
|
||||
outputType: 'mcp'
|
||||
},
|
||||
|
||||
@@ -21,7 +21,6 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
* @param {boolean} [args.research] - Whether to use research role.
|
||||
* @param {boolean} [args.append] - Whether to append timestamped information instead of full update.
|
||||
* @param {string} [args.projectRoot] - Project root path.
|
||||
* @param {string} [args.tag] - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object.
|
||||
* @param {Object} context - Context object containing session data.
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information.
|
||||
@@ -29,8 +28,7 @@ import { createLogWrapper } from '../../tools/utils.js';
|
||||
export async function updateTaskByIdDirect(args, log, context = {}) {
|
||||
const { session } = context;
|
||||
// Destructure expected args, including projectRoot
|
||||
const { tasksJsonPath, id, prompt, research, append, projectRoot, tag } =
|
||||
args;
|
||||
const { tasksJsonPath, id, prompt, research, append, projectRoot } = args;
|
||||
|
||||
const logWrapper = createLogWrapper(log);
|
||||
|
||||
@@ -118,7 +116,6 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
|
||||
mcpLog: logWrapper,
|
||||
session,
|
||||
projectRoot,
|
||||
tag,
|
||||
commandName: 'update-task',
|
||||
outputType: 'mcp'
|
||||
},
|
||||
|
||||
@@ -15,12 +15,6 @@ import {
|
||||
* Direct function wrapper for updating tasks based on new context.
|
||||
*
|
||||
* @param {Object} args - Command arguments containing projectRoot, from, prompt, research options.
|
||||
* @param {string} args.from - The ID of the task to update.
|
||||
* @param {string} args.prompt - The prompt to update the task with.
|
||||
* @param {boolean} args.research - Whether to use research mode.
|
||||
* @param {string} args.tasksJsonPath - Path to the tasks.json file.
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object.
|
||||
* @param {Object} context - Context object containing session data.
|
||||
* @returns {Promise<Object>} - Result object with success status and data/error information.
|
||||
|
||||
@@ -13,14 +13,12 @@ import fs from 'fs';
|
||||
* Validate dependencies in tasks.json
|
||||
* @param {Object} args - Function arguments
|
||||
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
|
||||
* @param {string} args.projectRoot - Project root path (for MCP/env fallback)
|
||||
* @param {string} args.tag - Tag for the task (optional)
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
|
||||
*/
|
||||
export async function validateDependenciesDirect(args, log) {
|
||||
// Destructure the explicit tasksJsonPath
|
||||
const { tasksJsonPath, projectRoot, tag } = args;
|
||||
const { tasksJsonPath } = args;
|
||||
|
||||
if (!tasksJsonPath) {
|
||||
log.error('validateDependenciesDirect called without tasksJsonPath');
|
||||
@@ -53,9 +51,8 @@ export async function validateDependenciesDirect(args, log) {
|
||||
// Enable silent mode to prevent console logs from interfering with JSON response
|
||||
enableSilentMode();
|
||||
|
||||
const options = { projectRoot, tag };
|
||||
// Call the original command function using the provided tasksPath
|
||||
await validateDependenciesCommand(tasksPath, options);
|
||||
await validateDependenciesCommand(tasksPath);
|
||||
|
||||
// Restore normal logging
|
||||
disableSilentMode();
|
||||
|
||||
@@ -121,7 +121,6 @@ export function resolveComplexityReportPath(args, log = silentLogger) {
|
||||
// Get explicit path from args.complexityReport if provided
|
||||
const explicitPath = args?.complexityReport;
|
||||
const rawProjectRoot = args?.projectRoot;
|
||||
const tag = args?.tag;
|
||||
|
||||
// If explicit path is provided and absolute, use it directly
|
||||
if (explicitPath && path.isAbsolute(explicitPath)) {
|
||||
@@ -140,11 +139,7 @@ export function resolveComplexityReportPath(args, log = silentLogger) {
|
||||
|
||||
// Use core findComplexityReportPath with explicit path and normalized projectRoot context
|
||||
if (projectRoot) {
|
||||
return coreFindComplexityReportPath(
|
||||
explicitPath,
|
||||
{ projectRoot, tag },
|
||||
log
|
||||
);
|
||||
return coreFindComplexityReportPath(explicitPath, { projectRoot }, log);
|
||||
}
|
||||
|
||||
// Fallback to core function without projectRoot context
|
||||
|
||||
@@ -32,6 +32,10 @@ class TaskMasterMCPServer {
|
||||
this.server = new FastMCP(this.options);
|
||||
this.initialized = false;
|
||||
|
||||
this.server.addResource({});
|
||||
|
||||
this.server.addResourceTemplate({});
|
||||
|
||||
// Bind methods
|
||||
this.init = this.init.bind(this);
|
||||
this.start = this.start.bind(this);
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { addDependencyDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the addDependency tool with the MCP server
|
||||
@@ -34,18 +33,14 @@ export function registerAddDependencyTool(server) {
|
||||
),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
log.info(
|
||||
`Adding dependency for task ${args.id} to depend on ${args.dependsOn}`
|
||||
);
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
tasksJsonPath = findTasksPath(
|
||||
@@ -66,9 +61,7 @@ export function registerAddDependencyTool(server) {
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
// Pass other relevant args
|
||||
id: args.id,
|
||||
dependsOn: args.dependsOn,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
dependsOn: args.dependsOn
|
||||
},
|
||||
log
|
||||
// Remove context object
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { addSubtaskDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the addSubtask tool with the MCP server
|
||||
@@ -53,21 +52,17 @@ export function registerAddSubtaskTool(server) {
|
||||
.describe(
|
||||
'Absolute path to the tasks file (default: tasks/tasks.json)'
|
||||
),
|
||||
tag: z.string().optional().describe('Tag context to operate on'),
|
||||
skipGenerate: z
|
||||
.boolean()
|
||||
.optional()
|
||||
.describe('Skip regenerating task files'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(`Adding subtask with args: ${JSON.stringify(args)}`);
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
@@ -96,7 +91,7 @@ export function registerAddSubtaskTool(server) {
|
||||
dependencies: args.dependencies,
|
||||
skipGenerate: args.skipGenerate,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
tag: args.tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { addTaskDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the addTask tool with the MCP server
|
||||
@@ -59,7 +58,6 @@ export function registerAddTaskTool(server) {
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on'),
|
||||
research: z
|
||||
.boolean()
|
||||
.optional()
|
||||
@@ -69,11 +67,6 @@ export function registerAddTaskTool(server) {
|
||||
try {
|
||||
log.info(`Starting add-task with args: ${JSON.stringify(args)}`);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -100,8 +93,7 @@ export function registerAddTaskTool(server) {
|
||||
dependencies: args.dependencies,
|
||||
priority: args.priority,
|
||||
research: args.research,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -13,9 +13,7 @@ import {
|
||||
} from './utils.js';
|
||||
import { analyzeTaskComplexityDirect } from '../core/task-master-core.js'; // Assuming core functions are exported via task-master-core.js
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
|
||||
import { resolveComplexityReportOutputPath } from '../../../src/utils/path-utils.js';
|
||||
|
||||
/**
|
||||
* Register the analyze_project_complexity tool
|
||||
@@ -72,22 +70,15 @@ export function registerAnalyzeProjectComplexityTool(server) {
|
||||
.describe('Ending task ID in a range to analyze.'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
const toolName = 'analyze_project_complexity'; // Define tool name for logging
|
||||
|
||||
try {
|
||||
log.info(
|
||||
`Executing ${toolName} tool with args: ${JSON.stringify(args)}`
|
||||
);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
tasksJsonPath = findTasksPath(
|
||||
@@ -102,14 +93,9 @@ export function registerAnalyzeProjectComplexityTool(server) {
|
||||
);
|
||||
}
|
||||
|
||||
const outputPath = resolveComplexityReportOutputPath(
|
||||
args.output,
|
||||
{
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
},
|
||||
log
|
||||
);
|
||||
const outputPath = args.output
|
||||
? path.resolve(args.projectRoot, args.output)
|
||||
: path.resolve(args.projectRoot, COMPLEXITY_REPORT_FILE);
|
||||
|
||||
log.info(`${toolName}: Report output path: ${outputPath}`);
|
||||
|
||||
@@ -137,7 +123,6 @@ export function registerAnalyzeProjectComplexityTool(server) {
|
||||
threshold: args.threshold,
|
||||
research: args.research,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag,
|
||||
ids: args.ids,
|
||||
from: args.from,
|
||||
to: args.to
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { clearSubtasksDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the clearSubtasks tool with the MCP server
|
||||
@@ -47,11 +46,6 @@ export function registerClearSubtasksTool(server) {
|
||||
try {
|
||||
log.info(`Clearing subtasks with args: ${JSON.stringify(args)}`);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -71,9 +65,8 @@ export function registerClearSubtasksTool(server) {
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
id: args.id,
|
||||
all: args.all,
|
||||
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
tag: args.tag || 'master'
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -12,7 +12,6 @@ import {
|
||||
import { complexityReportDirect } from '../core/task-master-core.js';
|
||||
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
|
||||
import { findComplexityReportPath } from '../core/utils/path-utils.js';
|
||||
import { getCurrentTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the complexityReport tool with the MCP server
|
||||
@@ -39,16 +38,12 @@ export function registerComplexityReportTool(server) {
|
||||
`Getting complexity report with args: ${JSON.stringify(args)}`
|
||||
);
|
||||
|
||||
const resolvedTag = getCurrentTag(args.projectRoot);
|
||||
|
||||
const pathArgs = {
|
||||
projectRoot: args.projectRoot,
|
||||
complexityReport: args.file,
|
||||
tag: resolvedTag
|
||||
complexityReport: args.file
|
||||
};
|
||||
|
||||
const reportPath = findComplexityReportPath(pathArgs, log);
|
||||
log.info('Reading complexity report from path: ', reportPath);
|
||||
|
||||
if (!reportPath) {
|
||||
return createErrorResponse(
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { expandAllTasksDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the expandAll tool with the MCP server
|
||||
@@ -58,8 +57,7 @@ export function registerExpandAllTool(server) {
|
||||
.optional()
|
||||
.describe(
|
||||
'Absolute path to the project root directory (derived from session if possible)'
|
||||
),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
)
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
@@ -67,10 +65,6 @@ export function registerExpandAllTool(server) {
|
||||
`Tool expand_all execution started with args: ${JSON.stringify(args)}`
|
||||
);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
tasksJsonPath = findTasksPath(
|
||||
@@ -92,8 +86,7 @@ export function registerExpandAllTool(server) {
|
||||
research: args.research,
|
||||
prompt: args.prompt,
|
||||
force: args.force,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -10,11 +10,7 @@ import {
|
||||
withNormalizedProjectRoot
|
||||
} from './utils.js';
|
||||
import { expandTaskDirect } from '../core/task-master-core.js';
|
||||
import {
|
||||
findTasksPath,
|
||||
findComplexityReportPath
|
||||
} from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
|
||||
/**
|
||||
* Register the expand-task tool with the MCP server
|
||||
@@ -55,10 +51,7 @@ export function registerExpandTaskTool(server) {
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
log.info(`Starting expand-task with args: ${JSON.stringify(args)}`);
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -73,11 +66,6 @@ export function registerExpandTaskTool(server) {
|
||||
);
|
||||
}
|
||||
|
||||
const complexityReportPath = findComplexityReportPath(
|
||||
{ ...args, tag: resolvedTag },
|
||||
log
|
||||
);
|
||||
|
||||
const result = await expandTaskDirect(
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
@@ -86,9 +74,8 @@ export function registerExpandTaskTool(server) {
|
||||
research: args.research,
|
||||
prompt: args.prompt,
|
||||
force: args.force,
|
||||
complexityReportPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
tag: args.tag || 'master'
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,7 @@ import {
|
||||
} from './utils.js';
|
||||
import { fixDependenciesDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the fixDependencies tool with the MCP server
|
||||
* @param {Object} server - FastMCP server instance
|
||||
@@ -31,11 +31,6 @@ export function registerFixDependenciesTool(server) {
|
||||
try {
|
||||
log.info(`Fixing dependencies with args: ${JSON.stringify(args)}`);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -54,7 +49,7 @@ export function registerFixDependenciesTool(server) {
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
tag: args.tag
|
||||
},
|
||||
log
|
||||
);
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { generateTaskFilesDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
import path from 'path';
|
||||
|
||||
/**
|
||||
@@ -31,17 +30,12 @@ export function registerGenerateTool(server) {
|
||||
.describe('Output directory (default: same directory as tasks file)'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
log.info(`Generating task files with args: ${JSON.stringify(args)}`);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -64,8 +58,7 @@ export function registerGenerateTool(server) {
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
outputDir: outputDir,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -14,7 +14,6 @@ import {
|
||||
findTasksPath,
|
||||
findComplexityReportPath
|
||||
} from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Custom processor function that removes allTasks from the response
|
||||
@@ -68,8 +67,7 @@ export function registerShowTaskTool(server) {
|
||||
.string()
|
||||
.describe(
|
||||
'Absolute path to the project root directory (Optional, usually from session)'
|
||||
),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
)
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
const { id, file, status, projectRoot } = args;
|
||||
@@ -78,10 +76,6 @@ export function registerShowTaskTool(server) {
|
||||
log.info(
|
||||
`Getting task details for ID: ${id}${status ? ` (filtering subtasks by status: ${status})` : ''} in root: ${projectRoot}`
|
||||
);
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
// Resolve the path to tasks.json using the NORMALIZED projectRoot from args
|
||||
let tasksJsonPath;
|
||||
@@ -105,8 +99,7 @@ export function registerShowTaskTool(server) {
|
||||
complexityReportPath = findComplexityReportPath(
|
||||
{
|
||||
projectRoot: projectRoot,
|
||||
complexityReport: args.complexityReport,
|
||||
tag: resolvedTag
|
||||
complexityReport: args.complexityReport
|
||||
},
|
||||
log
|
||||
);
|
||||
@@ -120,8 +113,7 @@ export function registerShowTaskTool(server) {
|
||||
// Pass other relevant args
|
||||
id: id,
|
||||
status: status,
|
||||
projectRoot: projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -15,8 +15,6 @@ import {
|
||||
resolveComplexityReportPath
|
||||
} from '../core/utils/path-utils.js';
|
||||
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the getTasks tool with the MCP server
|
||||
* @param {Object} server - FastMCP server instance
|
||||
@@ -53,17 +51,12 @@ export function registerListTasksTool(server) {
|
||||
),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
log.info(`Getting tasks with filters: ${JSON.stringify(args)}`);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
// Resolve the path to tasks.json using new path utilities
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -78,10 +71,7 @@ export function registerListTasksTool(server) {
|
||||
// Resolve the path to complexity report
|
||||
let complexityReportPath;
|
||||
try {
|
||||
complexityReportPath = resolveComplexityReportPath(
|
||||
{ ...args, tag: resolvedTag },
|
||||
session
|
||||
);
|
||||
complexityReportPath = resolveComplexityReportPath(args, session);
|
||||
} catch (error) {
|
||||
log.error(`Error finding complexity report: ${error.message}`);
|
||||
// This is optional, so we don't fail the operation
|
||||
@@ -94,8 +84,7 @@ export function registerListTasksTool(server) {
|
||||
status: args.status,
|
||||
withSubtasks: args.withSubtasks,
|
||||
reportPath: complexityReportPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { moveTaskDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the moveTask tool with the MCP server
|
||||
@@ -37,15 +36,10 @@ export function registerMoveTaskTool(server) {
|
||||
.string()
|
||||
.describe(
|
||||
'Root directory of the project (typically derived from session)'
|
||||
),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
)
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
// Find tasks.json path if not provided
|
||||
let tasksJsonPath = args.file;
|
||||
|
||||
@@ -85,8 +79,7 @@ export function registerMoveTaskTool(server) {
|
||||
sourceId: fromId,
|
||||
destinationId: toId,
|
||||
tasksJsonPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
@@ -122,8 +115,7 @@ export function registerMoveTaskTool(server) {
|
||||
sourceId: args.from,
|
||||
destinationId: args.to,
|
||||
tasksJsonPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -14,7 +14,6 @@ import {
|
||||
resolveTasksPath,
|
||||
resolveComplexityReportPath
|
||||
} from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the nextTask tool with the MCP server
|
||||
@@ -35,16 +34,11 @@ export function registerNextTaskTool(server) {
|
||||
),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
log.info(`Finding next task with args: ${JSON.stringify(args)}`);
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
// Resolve the path to tasks.json using new path utilities
|
||||
let tasksJsonPath;
|
||||
@@ -60,10 +54,7 @@ export function registerNextTaskTool(server) {
|
||||
// Resolve the path to complexity report (optional)
|
||||
let complexityReportPath;
|
||||
try {
|
||||
complexityReportPath = resolveComplexityReportPath(
|
||||
{ ...args, tag: resolvedTag },
|
||||
session
|
||||
);
|
||||
complexityReportPath = resolveComplexityReportPath(args, session);
|
||||
} catch (error) {
|
||||
log.error(`Error finding complexity report: ${error.message}`);
|
||||
// This is optional, so we don't fail the operation
|
||||
@@ -74,8 +65,7 @@ export function registerNextTaskTool(server) {
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
reportPath: complexityReportPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -15,7 +15,6 @@ import {
|
||||
TASKMASTER_DOCS_DIR,
|
||||
TASKMASTER_TASKS_FILE
|
||||
} from '../../../src/constants/paths.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the parse_prd tool
|
||||
@@ -25,7 +24,6 @@ export function registerParsePRDTool(server) {
|
||||
server.addTool({
|
||||
name: 'parse_prd',
|
||||
description: `Parse a Product Requirements Document (PRD) text file to automatically generate initial tasks. Reinitializing the project is not necessary to run this tool. It is recommended to run parse-prd after initializing the project and creating/importing a prd.txt file in the project root's ${TASKMASTER_DOCS_DIR} directory.`,
|
||||
|
||||
parameters: z.object({
|
||||
input: z
|
||||
.string()
|
||||
@@ -35,7 +33,6 @@ export function registerParsePRDTool(server) {
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on'),
|
||||
output: z
|
||||
.string()
|
||||
.optional()
|
||||
@@ -66,18 +63,7 @@ export function registerParsePRDTool(server) {
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
const result = await parsePRDDirect(
|
||||
{
|
||||
...args,
|
||||
tag: resolvedTag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
);
|
||||
const result = await parsePRDDirect(args, log, { session });
|
||||
return handleApiResult(
|
||||
result,
|
||||
log,
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { removeDependencyDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the removeDependency tool with the MCP server
|
||||
@@ -32,15 +31,10 @@ export function registerRemoveDependencyTool(server) {
|
||||
),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(
|
||||
`Removing dependency for task ${args.id} from ${args.dependsOn} with args: ${JSON.stringify(args)}`
|
||||
);
|
||||
@@ -63,9 +57,7 @@ export function registerRemoveDependencyTool(server) {
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
id: args.id,
|
||||
dependsOn: args.dependsOn,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
dependsOn: args.dependsOn
|
||||
},
|
||||
log
|
||||
);
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { removeSubtaskDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the removeSubtask tool with the MCP server
|
||||
@@ -45,15 +44,10 @@ export function registerRemoveSubtaskTool(server) {
|
||||
.describe('Skip regenerating task files'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(`Removing subtask with args: ${JSON.stringify(args)}`);
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
@@ -76,8 +70,7 @@ export function registerRemoveSubtaskTool(server) {
|
||||
id: args.id,
|
||||
convert: args.convert,
|
||||
skipGenerate: args.skipGenerate,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { removeTaskDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the remove-task tool with the MCP server
|
||||
@@ -46,11 +45,6 @@ export function registerRemoveTaskTool(server) {
|
||||
try {
|
||||
log.info(`Removing task(s) with ID(s): ${args.id}`);
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -72,7 +66,7 @@ export function registerRemoveTaskTool(server) {
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
id: args.id,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
tag: args.tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -10,7 +10,6 @@ import {
|
||||
withNormalizedProjectRoot
|
||||
} from './utils.js';
|
||||
import { researchDirect } from '../core/task-master-core.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the research tool with the MCP server
|
||||
@@ -20,7 +19,6 @@ export function registerResearchTool(server) {
|
||||
server.addTool({
|
||||
name: 'research',
|
||||
description: 'Perform AI-powered research queries with project context',
|
||||
|
||||
parameters: z.object({
|
||||
query: z.string().describe('Research query/prompt (required)'),
|
||||
taskIds: z
|
||||
@@ -63,15 +61,10 @@ export function registerResearchTool(server) {
|
||||
),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(
|
||||
`Starting research with query: "${args.query.substring(0, 100)}${args.query.length > 100 ? '...' : ''}"`
|
||||
);
|
||||
@@ -87,8 +80,7 @@ export function registerResearchTool(server) {
|
||||
detailLevel: args.detailLevel || 'medium',
|
||||
saveTo: args.saveTo,
|
||||
saveToFile: args.saveToFile || false,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -18,7 +18,6 @@ import {
|
||||
findComplexityReportPath
|
||||
} from '../core/utils/path-utils.js';
|
||||
import { TASK_STATUS_OPTIONS } from '../../../src/constants/task-status.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the setTaskStatus tool with the MCP server
|
||||
@@ -53,15 +52,8 @@ export function registerSetTaskStatusTool(server) {
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
log.info(
|
||||
`Setting status of task(s) ${args.id} to: ${args.status} ${
|
||||
args.tag ? `in tag: ${args.tag}` : 'in current tag'
|
||||
}`
|
||||
);
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(`Setting status of task(s) ${args.id} to: ${args.status}`);
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
@@ -81,8 +73,7 @@ export function registerSetTaskStatusTool(server) {
|
||||
complexityReportPath = findComplexityReportPath(
|
||||
{
|
||||
projectRoot: args.projectRoot,
|
||||
complexityReport: args.complexityReport,
|
||||
tag: resolvedTag
|
||||
complexityReport: args.complexityReport
|
||||
},
|
||||
log
|
||||
);
|
||||
@@ -97,7 +88,7 @@ export function registerSetTaskStatusTool(server) {
|
||||
status: args.status,
|
||||
complexityReportPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
tag: args.tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { updateSubtaskByIdDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the update-subtask tool with the MCP server
|
||||
@@ -36,17 +35,11 @@ export function registerUpdateSubtaskTool(server) {
|
||||
file: z.string().optional().describe('Absolute path to the tasks file'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
const toolName = 'update_subtask';
|
||||
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(`Updating subtask with args: ${JSON.stringify(args)}`);
|
||||
|
||||
let tasksJsonPath;
|
||||
@@ -68,8 +61,7 @@ export function registerUpdateSubtaskTool(server) {
|
||||
id: args.id,
|
||||
prompt: args.prompt,
|
||||
research: args.research,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { updateTaskByIdDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the update-task tool with the MCP server
|
||||
@@ -44,16 +43,11 @@ export function registerUpdateTaskTool(server) {
|
||||
file: z.string().optional().describe('Absolute path to the tasks file'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
const toolName = 'update_task';
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(
|
||||
`Executing ${toolName} tool with args: ${JSON.stringify(args)}`
|
||||
);
|
||||
@@ -80,8 +74,7 @@ export function registerUpdateTaskTool(server) {
|
||||
prompt: args.prompt,
|
||||
research: args.research,
|
||||
append: args.append,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { updateTasksDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the update tool with the MCP server
|
||||
@@ -51,11 +50,6 @@ export function registerUpdateTool(server) {
|
||||
const toolName = 'update';
|
||||
const { from, prompt, research, file, projectRoot, tag } = args;
|
||||
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
|
||||
try {
|
||||
log.info(
|
||||
`Executing ${toolName} tool with normalized root: ${projectRoot}`
|
||||
@@ -79,7 +73,7 @@ export function registerUpdateTool(server) {
|
||||
prompt: prompt,
|
||||
research: research,
|
||||
projectRoot: projectRoot,
|
||||
tag: resolvedTag
|
||||
tag: tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -11,7 +11,6 @@ import {
|
||||
} from './utils.js';
|
||||
import { validateDependenciesDirect } from '../core/task-master-core.js';
|
||||
import { findTasksPath } from '../core/utils/path-utils.js';
|
||||
import { resolveTag } from '../../../scripts/modules/utils.js';
|
||||
|
||||
/**
|
||||
* Register the validateDependencies tool with the MCP server
|
||||
@@ -26,15 +25,10 @@ export function registerValidateDependenciesTool(server) {
|
||||
file: z.string().optional().describe('Absolute path to the tasks file'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resolvedTag = resolveTag({
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
});
|
||||
log.info(`Validating dependencies with args: ${JSON.stringify(args)}`);
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
@@ -53,9 +47,7 @@ export function registerValidateDependenciesTool(server) {
|
||||
|
||||
const result = await validateDependenciesDirect(
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: resolvedTag
|
||||
tasksJsonPath: tasksJsonPath
|
||||
},
|
||||
log
|
||||
);
|
||||
|
||||
182
package-lock.json
generated
182
package-lock.json
generated
@@ -69,9 +69,6 @@
|
||||
"ink": "^5.0.1",
|
||||
"jest": "^29.7.0",
|
||||
"jest-environment-node": "^29.7.0",
|
||||
"jest-html-reporters": "^3.1.7",
|
||||
"jest-junit": "^16.0.0",
|
||||
"mcp-jest": "^1.0.10",
|
||||
"mock-fs": "^5.5.0",
|
||||
"prettier": "^3.5.3",
|
||||
"react": "^18.3.1",
|
||||
@@ -3460,9 +3457,9 @@
|
||||
}
|
||||
},
|
||||
"node_modules/@modelcontextprotocol/sdk": {
|
||||
"version": "1.15.1",
|
||||
"resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.15.1.tgz",
|
||||
"integrity": "sha512-W/XlN9c528yYn+9MQkVjxiTPgPxoxt+oczfjHBDsJx0+59+O7B75Zhsp0B16Xbwbz8ANISDajh6+V7nIcPMc5w==",
|
||||
"version": "1.15.0",
|
||||
"resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.15.0.tgz",
|
||||
"integrity": "sha512-67hnl/ROKdb03Vuu0YOr+baKTvf1/5YBHBm9KnZdjdAh8hjt4FRCPD5ucwxGB237sBpzlqQsLy1PFu7z/ekZ9Q==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"ajv": "^6.12.6",
|
||||
@@ -9700,138 +9697,6 @@
|
||||
"fsevents": "^2.3.2"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters": {
|
||||
"version": "3.1.7",
|
||||
"resolved": "https://registry.npmjs.org/jest-html-reporters/-/jest-html-reporters-3.1.7.tgz",
|
||||
"integrity": "sha512-GTmjqK6muQ0S0Mnksf9QkL9X9z2FGIpNSxC52E0PHDzjPQ1XDu2+XTI3B3FS43ZiUzD1f354/5FfwbNIBzT7ew==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"fs-extra": "^10.0.0",
|
||||
"open": "^8.0.3"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters/node_modules/define-lazy-prop": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-2.0.0.tgz",
|
||||
"integrity": "sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters/node_modules/fs-extra": {
|
||||
"version": "10.1.0",
|
||||
"resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-10.1.0.tgz",
|
||||
"integrity": "sha512-oRXApq54ETRj4eMiFzGnHWGy+zo5raudjuxN0b8H7s/RU2oW0Wvsx9O0ACRN/kRq9E8Vu/ReskGB5o3ji+FzHQ==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"graceful-fs": "^4.2.0",
|
||||
"jsonfile": "^6.0.1",
|
||||
"universalify": "^2.0.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters/node_modules/is-docker": {
|
||||
"version": "2.2.1",
|
||||
"resolved": "https://registry.npmjs.org/is-docker/-/is-docker-2.2.1.tgz",
|
||||
"integrity": "sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"bin": {
|
||||
"is-docker": "cli.js"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters/node_modules/is-wsl": {
|
||||
"version": "2.2.0",
|
||||
"resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-2.2.0.tgz",
|
||||
"integrity": "sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"is-docker": "^2.0.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters/node_modules/jsonfile": {
|
||||
"version": "6.1.0",
|
||||
"resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.1.0.tgz",
|
||||
"integrity": "sha512-5dgndWOriYSm5cnYaJNhalLNDKOqFwyDB/rr1E9ZsGciGvKPs8R2xYGCacuf3z6K1YKDz182fd+fY3cn3pMqXQ==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"universalify": "^2.0.0"
|
||||
},
|
||||
"optionalDependencies": {
|
||||
"graceful-fs": "^4.1.6"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters/node_modules/open": {
|
||||
"version": "8.4.2",
|
||||
"resolved": "https://registry.npmjs.org/open/-/open-8.4.2.tgz",
|
||||
"integrity": "sha512-7x81NCL719oNbsq/3mh+hVrAWmFuEYUqrq/Iw3kUzH8ReypT9QQ0BLoJS7/G9k6N81XjW4qHWtjWwe/9eLy1EQ==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"define-lazy-prop": "^2.0.0",
|
||||
"is-docker": "^2.1.1",
|
||||
"is-wsl": "^2.2.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-html-reporters/node_modules/universalify": {
|
||||
"version": "2.0.1",
|
||||
"resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz",
|
||||
"integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">= 10.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-junit": {
|
||||
"version": "16.0.0",
|
||||
"resolved": "https://registry.npmjs.org/jest-junit/-/jest-junit-16.0.0.tgz",
|
||||
"integrity": "sha512-A94mmw6NfJab4Fg/BlvVOUXzXgF0XIH6EmTgJ5NDPp4xoKq0Kr7sErb+4Xs9nZvu58pJojz5RFGpqnZYJTrRfQ==",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"mkdirp": "^1.0.4",
|
||||
"strip-ansi": "^6.0.1",
|
||||
"uuid": "^8.3.2",
|
||||
"xml": "^1.0.1"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=10.12.0"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-junit/node_modules/uuid": {
|
||||
"version": "8.3.2",
|
||||
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
|
||||
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"bin": {
|
||||
"uuid": "dist/bin/uuid"
|
||||
}
|
||||
},
|
||||
"node_modules/jest-leak-detector": {
|
||||
"version": "29.7.0",
|
||||
"resolved": "https://registry.npmjs.org/jest-leak-detector/-/jest-leak-detector-29.7.0.tgz",
|
||||
@@ -10861,27 +10726,6 @@
|
||||
"node": ">= 0.4"
|
||||
}
|
||||
},
|
||||
"node_modules/mcp-jest": {
|
||||
"version": "1.0.10",
|
||||
"resolved": "https://registry.npmjs.org/mcp-jest/-/mcp-jest-1.0.10.tgz",
|
||||
"integrity": "sha512-gmvWzgj+p789Hofeuej60qBDfHTFn98aNfpgb+Q7a69vLSLvXBXDv2pcjYOLEuBvss/AGe26xq0WHbbX01X5AA==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@modelcontextprotocol/sdk": "^1.12.1",
|
||||
"zod": "^3.22.0"
|
||||
},
|
||||
"bin": {
|
||||
"mcp-jest": "dist/cli.js"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
},
|
||||
"funding": {
|
||||
"type": "github",
|
||||
"url": "https://github.com/sponsors/josharsh"
|
||||
}
|
||||
},
|
||||
"node_modules/mcp-proxy": {
|
||||
"version": "5.3.0",
|
||||
"resolved": "https://registry.npmjs.org/mcp-proxy/-/mcp-proxy-5.3.0.tgz",
|
||||
@@ -11131,19 +10975,6 @@
|
||||
"node": ">=16 || 14 >=14.17"
|
||||
}
|
||||
},
|
||||
"node_modules/mkdirp": {
|
||||
"version": "1.0.4",
|
||||
"resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz",
|
||||
"integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"bin": {
|
||||
"mkdirp": "bin/cmd.js"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=10"
|
||||
}
|
||||
},
|
||||
"node_modules/mock-fs": {
|
||||
"version": "5.5.0",
|
||||
"resolved": "https://registry.npmjs.org/mock-fs/-/mock-fs-5.5.0.tgz",
|
||||
@@ -13791,13 +13622,6 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/xml": {
|
||||
"version": "1.0.1",
|
||||
"resolved": "https://registry.npmjs.org/xml/-/xml-1.0.1.tgz",
|
||||
"integrity": "sha512-huCv9IH9Tcf95zuYCsQraZtWnJvBtLVE0QHMOs8bWyZAFZNDcYjsPq1nEx8jKA9y+Beo9v+7OBPRisQTjinQMw==",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/xsschema": {
|
||||
"version": "0.3.0-beta.8",
|
||||
"resolved": "https://registry.npmjs.org/xsschema/-/xsschema-0.3.0-beta.8.tgz",
|
||||
|
||||
28
package.json
28
package.json
@@ -9,28 +9,21 @@
|
||||
"task-master-mcp": "mcp-server/server.js",
|
||||
"task-master-ai": "mcp-server/server.js"
|
||||
},
|
||||
"workspaces": [
|
||||
"apps/*",
|
||||
"."
|
||||
],
|
||||
"workspaces": ["apps/*", "."],
|
||||
"scripts": {
|
||||
"test": "node --experimental-vm-modules node_modules/.bin/jest",
|
||||
"test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
|
||||
"test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
|
||||
"test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
|
||||
"test:e2e:bash": "./tests/e2e/run_e2e.sh",
|
||||
"test:e2e:bash:analyze": "./tests/e2e/run_e2e.sh --analyze-log",
|
||||
"e2e": "node --experimental-vm-modules node_modules/.bin/jest --config jest.e2e.config.js",
|
||||
"e2e:watch": "node --experimental-vm-modules node_modules/.bin/jest --config jest.e2e.config.js --watch",
|
||||
"e2e:ai": "node --experimental-vm-modules node_modules/.bin/jest --config jest.e2e.projects.config.js --selectProjects='Heavy AI E2E Tests'",
|
||||
"e2e:non-ai": "node --experimental-vm-modules node_modules/.bin/jest --config jest.e2e.projects.config.js --selectProjects='Non-AI E2E Tests'",
|
||||
"e2e:report": "open test-results/index.html",
|
||||
"test:e2e": "./tests/e2e/run_e2e.sh",
|
||||
"test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
|
||||
"prepare": "chmod +x bin/task-master.js mcp-server/server.js",
|
||||
"changeset": "changeset",
|
||||
"release": "changeset publish",
|
||||
"inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
|
||||
"mcp-server": "node mcp-server/server.js",
|
||||
"format": "biome format . --write",
|
||||
"format:check": "biome format ."
|
||||
"format-check": "biome format .",
|
||||
"format": "biome format . --write"
|
||||
},
|
||||
"keywords": [
|
||||
"claude",
|
||||
@@ -91,8 +84,8 @@
|
||||
},
|
||||
"optionalDependencies": {
|
||||
"@anthropic-ai/claude-code": "^1.0.25",
|
||||
"@biomejs/cli-linux-x64": "^1.9.4",
|
||||
"ai-sdk-provider-gemini-cli": "^0.0.4"
|
||||
"ai-sdk-provider-gemini-cli": "^0.0.4",
|
||||
"@biomejs/cli-linux-x64": "^1.9.4"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18.0.0"
|
||||
@@ -128,13 +121,10 @@
|
||||
"ink": "^5.0.1",
|
||||
"jest": "^29.7.0",
|
||||
"jest-environment-node": "^29.7.0",
|
||||
"jest-html-reporters": "^3.1.7",
|
||||
"jest-junit": "^16.0.0",
|
||||
"mcp-jest": "^1.0.10",
|
||||
"mock-fs": "^5.5.0",
|
||||
"prettier": "^3.5.3",
|
||||
"react": "^18.3.1",
|
||||
"supertest": "^7.1.0",
|
||||
"tsx": "^4.16.2"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -826,8 +826,7 @@ function registerCommands(programInstance) {
|
||||
let taskMaster;
|
||||
try {
|
||||
const initOptions = {
|
||||
prdPath: file || options.input || true,
|
||||
tag: options.tag
|
||||
prdPath: file || options.input || true
|
||||
};
|
||||
// Only include tasksPath if output is explicitly specified
|
||||
if (options.output) {
|
||||
@@ -853,7 +852,8 @@ function registerCommands(programInstance) {
|
||||
const useAppend = append;
|
||||
|
||||
// Resolve tag using standard pattern
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
const tag =
|
||||
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';
|
||||
|
||||
// Show current tag context
|
||||
displayCurrentTagIndicator(tag);
|
||||
@@ -966,8 +966,7 @@ function registerCommands(programInstance) {
|
||||
.action(async (options) => {
|
||||
// Initialize TaskMaster
|
||||
const taskMaster = initTaskMaster({
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag
|
||||
tasksPath: options.file || true
|
||||
});
|
||||
|
||||
const fromId = parseInt(options.from, 10); // Validation happens here
|
||||
@@ -977,7 +976,8 @@ function registerCommands(programInstance) {
|
||||
const tasksPath = taskMaster.getTasksPath();
|
||||
|
||||
// Resolve tag using standard pattern
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
const tag =
|
||||
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';
|
||||
|
||||
// Show current tag context
|
||||
displayCurrentTagIndicator(tag);
|
||||
@@ -1066,13 +1066,13 @@ function registerCommands(programInstance) {
|
||||
try {
|
||||
// Initialize TaskMaster
|
||||
const taskMaster = initTaskMaster({
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag
|
||||
tasksPath: options.file || true
|
||||
});
|
||||
const tasksPath = taskMaster.getTasksPath();
|
||||
|
||||
// Resolve tag using standard pattern
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
const tag =
|
||||
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';
|
||||
|
||||
// Show current tag context
|
||||
displayCurrentTagIndicator(tag);
|
||||
@@ -1238,13 +1238,13 @@ function registerCommands(programInstance) {
|
||||
try {
|
||||
// Initialize TaskMaster
|
||||
const taskMaster = initTaskMaster({
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag
|
||||
tasksPath: options.file || true
|
||||
});
|
||||
const tasksPath = taskMaster.getTasksPath();
|
||||
|
||||
// Resolve tag using standard pattern
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
const tag =
|
||||
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';
|
||||
|
||||
// Show current tag context
|
||||
displayCurrentTagIndicator(tag);
|
||||
@@ -1404,12 +1404,11 @@ function registerCommands(programInstance) {
|
||||
.action(async (options) => {
|
||||
// Initialize TaskMaster
|
||||
const taskMaster = initTaskMaster({
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag
|
||||
tasksPath: options.file || true
|
||||
});
|
||||
|
||||
const outputDir = options.output;
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
const tag = options.tag;
|
||||
|
||||
console.log(
|
||||
chalk.blue(`Generating task files from: ${taskMaster.getTasksPath()}`)
|
||||
@@ -1445,12 +1444,12 @@ function registerCommands(programInstance) {
|
||||
.action(async (options) => {
|
||||
// Initialize TaskMaster
|
||||
const taskMaster = initTaskMaster({
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag
|
||||
tasksPath: options.file || true
|
||||
});
|
||||
|
||||
const taskId = options.id;
|
||||
const status = options.status;
|
||||
const tag = options.tag;
|
||||
|
||||
if (!taskId || !status) {
|
||||
console.error(chalk.red('Error: Both --id and --status are required'));
|
||||
@@ -1466,9 +1465,11 @@ function registerCommands(programInstance) {
|
||||
|
||||
process.exit(1);
|
||||
}
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
|
||||
displayCurrentTagIndicator(tag);
|
||||
// Resolve tag using standard pattern and show current tag context
|
||||
const resolvedTag =
|
||||
tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';
|
||||
displayCurrentTagIndicator(resolvedTag);
|
||||
|
||||
console.log(
|
||||
chalk.blue(`Setting status of task(s) ${taskId} to: ${status}`)
|
||||
@@ -1500,8 +1501,7 @@ function registerCommands(programInstance) {
|
||||
.action(async (options) => {
|
||||
// Initialize TaskMaster
|
||||
const initOptions = {
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag
|
||||
tasksPath: options.file || true
|
||||
};
|
||||
|
||||
// Only pass complexityReportPath if user provided a custom path
|
||||
@@ -1513,7 +1513,9 @@ function registerCommands(programInstance) {
|
||||
|
||||
const statusFilter = options.status;
|
||||
const withSubtasks = options.withSubtasks || false;
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
const tag =
|
||||
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';
|
||||
|
||||
// Show current tag context
|
||||
displayCurrentTagIndicator(tag);
|
||||
|
||||
@@ -1533,7 +1535,8 @@ function registerCommands(programInstance) {
|
||||
taskMaster.getComplexityReportPath(),
|
||||
withSubtasks,
|
||||
'text',
|
||||
{ projectRoot: taskMaster.getProjectRoot(), tag }
|
||||
tag,
|
||||
{ projectRoot: taskMaster.getProjectRoot() }
|
||||
);
|
||||
});
|
||||
|
||||
@@ -1562,29 +1565,18 @@ function registerCommands(programInstance) {
'Path to the tasks file (relative to project root)',
TASKMASTER_TASKS_FILE // Allow file override
) // Allow file override
.option(
'-cr, --complexity-report <file>',
'Path to the report file',
COMPLEXITY_REPORT_FILE
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
// Initialize TaskMaster
const initOptions = {
tasksPath: options.file || true,
tag: options.tag
};

if (options.complexityReport) {
initOptions.complexityReportPath = options.complexityReport;
}

const taskMaster = initTaskMaster(initOptions);

const tag = taskMaster.getCurrentTag();
const taskMaster = initTaskMaster({
tasksPath: options.file || true
});
const tag = options.tag;

// Show current tag context
displayCurrentTagIndicator(tag);
displayCurrentTagIndicator(
tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master'
);

if (options.all) {
// --- Handle expand --all ---
@@ -1597,11 +1589,7 @@ function registerCommands(programInstance) {
options.research, // Pass research flag
options.prompt, // Pass additional context
options.force, // Pass force flag
{
projectRoot: taskMaster.getProjectRoot(),
tag,
complexityReportPath: taskMaster.getComplexityReportPath()
} // Pass context with projectRoot and tag
{ projectRoot: taskMaster.getProjectRoot(), tag } // Pass context with projectRoot and tag
// outputFormat defaults to 'text' in expandAllTasks for CLI
);
} catch (error) {
@@ -1628,11 +1616,7 @@ function registerCommands(programInstance) {
options.num,
options.research,
options.prompt,
{
projectRoot: taskMaster.getProjectRoot(),
tag,
complexityReportPath: taskMaster.getComplexityReportPath()
}, // Pass context with projectRoot and tag
{ projectRoot: taskMaster.getProjectRoot(), tag }, // Pass context with projectRoot and tag
options.force // Pass the force flag down
);
// expandTask logs its own success/failure for single task
@@ -1685,28 +1669,34 @@ function registerCommands(programInstance) {
.action(async (options) => {
// Initialize TaskMaster
const initOptions = {
tasksPath: options.file || true, // Tasks file is required to analyze
tag: options.tag
tasksPath: options.file || true // Tasks file is required to analyze
};
// Only include complexityReportPath if output is explicitly specified
if (options.output) {
initOptions.complexityReportPath = options.output;
}

const taskMaster = initTaskMaster(initOptions);

const tag = options.tag;
const modelOverride = options.model;
const thresholdScore = parseFloat(options.threshold);
const useResearch = options.research || false;

// Use the provided tag, or the current active tag, or default to 'master'
const targetTag = taskMaster.getCurrentTag();
const targetTag =
tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(targetTag);

// Use user's explicit output path if provided, otherwise use tag-aware default
const outputPath = taskMaster.getComplexityReportPath();
// Tag-aware output file naming: master -> task-complexity-report.json, other tags -> task-complexity-report_tagname.json
const baseOutputPath =
taskMaster.getComplexityReportPath() ||
path.join(taskMaster.getProjectRoot(), COMPLEXITY_REPORT_FILE);
const outputPath =
options.output === COMPLEXITY_REPORT_FILE && targetTag !== 'master'
? baseOutputPath.replace('.json', `_${targetTag}.json`)
: options.output || baseOutputPath;

console.log(
chalk.blue(
@@ -1787,12 +1777,9 @@ function registerCommands(programInstance) {
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (prompt, options) => {
// Initialize TaskMaster
const initOptions = {
tasksPath: options.file || true,
tag: options.tag
};

const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true
});

// Parameter validation
if (!prompt || typeof prompt !== 'string' || prompt.trim().length === 0) {
@@ -1892,7 +1879,8 @@ function registerCommands(programInstance) {
}
}

const tag = taskMaster.getCurrentTag();
const tag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(tag);
@@ -2125,17 +2113,17 @@ ${result.result}
.action(async (options) => {
const taskIds = options.id;
const all = options.all;
const tag = options.tag;

// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
tasksPath: options.file || true
});

const tag = taskMaster.getCurrentTag();

// Show current tag context
displayCurrentTagIndicator(tag);
displayCurrentTagIndicator(
tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master'
);

if (!taskIds && !all) {
console.error(
@@ -2231,16 +2219,15 @@ ${result.result}
// Correctly determine projectRoot
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
tasksPath: options.file || true
});

const projectRoot = taskMaster.getProjectRoot();

const tag = taskMaster.getCurrentTag();

// Show current tag context
displayCurrentTagIndicator(tag);
displayCurrentTagIndicator(
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master'
);

let manualTaskData = null;
if (isManualCreation) {
@@ -2276,7 +2263,7 @@ ${result.result}

const context = {
projectRoot,
tag,
tag: options.tag,
commandName: 'add-task',
outputType: 'cli'
};
@@ -2322,36 +2309,22 @@ ${result.result}
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
const initOptions = {
tasksPath: options.file || true,
tag: options.tag
};

if (options.report && options.report !== COMPLEXITY_REPORT_FILE) {
initOptions.complexityReportPath = options.report;
}
const tag = options.tag;

// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag,
complexityReportPath: options.report || false
tasksPath: options.file || true
});

const tag = taskMaster.getCurrentTag();

const context = {
projectRoot: taskMaster.getProjectRoot(),
tag
};

// Show current tag context
displayCurrentTagIndicator(tag);
displayCurrentTagIndicator(
tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master'
);

await displayNextTask(
taskMaster.getTasksPath(),
taskMaster.getComplexityReportPath(),
context
{ projectRoot: taskMaster.getProjectRoot(), tag }
);
});
@@ -2391,10 +2364,12 @@ ${result.result}

const idArg = taskId || options.id;
const statusFilter = options.status;
const tag = taskMaster.getCurrentTag();
const tag = options.tag;

// Show current tag context
displayCurrentTagIndicator(tag);
displayCurrentTagIndicator(
tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master'
);

if (!idArg) {
console.error(chalk.red('Error: Please provide a task ID'));
@@ -2423,7 +2398,8 @@ ${result.result}
taskIds[0],
taskMaster.getComplexityReportPath(),
statusFilter,
{ projectRoot: taskMaster.getProjectRoot(), tag }
tag,
{ projectRoot: taskMaster.getProjectRoot() }
);
}
});
@@ -2441,19 +2417,17 @@ ${result.result}
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
const initOptions = {
tasksPath: options.file || true,
tag: options.tag
};

// Initialize TaskMaster
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true
});

const taskId = options.id;
const dependencyId = options.dependsOn;

// Resolve tag using standard pattern
const tag = taskMaster.getCurrentTag();
const tag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(tag);
@@ -2498,19 +2472,17 @@ ${result.result}
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
const initOptions = {
tasksPath: options.file || true,
tag: options.tag
};

// Initialize TaskMaster
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true
});

const taskId = options.id;
const dependencyId = options.dependsOn;

// Resolve tag using standard pattern
const tag = taskMaster.getCurrentTag();
const tag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(tag);
@@ -2555,16 +2527,14 @@ ${result.result}
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
const initOptions = {
tasksPath: options.file || true,
tag: options.tag
};

// Initialize TaskMaster
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true
});

// Resolve tag using standard pattern
const tag = taskMaster.getCurrentTag();
const tag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(tag);
@@ -2585,16 +2555,14 @@ ${result.result}
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
const initOptions = {
tasksPath: options.file || true,
tag: options.tag
};

// Initialize TaskMaster
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true
});

// Resolve tag using standard pattern
const tag = taskMaster.getCurrentTag();
const tag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(tag);
@@ -2615,21 +2583,26 @@ ${result.result}
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
const initOptions = {
tag: options.tag
};

if (options.file && options.file !== COMPLEXITY_REPORT_FILE) {
initOptions.complexityReportPath = options.file;
}

// Initialize TaskMaster
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
complexityReportPath: options.file || true
});

// Use the provided tag, or the current active tag, or default to 'master'
const targetTag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(taskMaster.getCurrentTag());
displayCurrentTagIndicator(targetTag);

await displayComplexityReport(taskMaster.getComplexityReportPath());
// Tag-aware report file naming: master -> task-complexity-report.json, other tags -> task-complexity-report_tagname.json
const baseReportPath = taskMaster.getComplexityReportPath();
const reportPath =
options.file === COMPLEXITY_REPORT_FILE && targetTag !== 'master'
? baseReportPath.replace('.json', `_${targetTag}.json`)
: baseReportPath;

await displayComplexityReport(reportPath);
});

// add-subtask command
@@ -2659,8 +2632,7 @@ ${result.result}
.action(async (options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
tasksPath: options.file || true
});

const parentId = options.parent;
@@ -2668,7 +2640,8 @@ ${result.result}
const generateFiles = !options.skipGenerate;

// Resolve tag using standard pattern
const tag = taskMaster.getCurrentTag();
const tag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(tag);
@@ -2843,14 +2816,13 @@ ${result.result}
.action(async (options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
tasksPath: options.file || true
});

const subtaskIds = options.id;
const convertToTask = options.convert || false;
const generateFiles = !options.skipGenerate;
const tag = taskMaster.getCurrentTag();
const tag = options.tag;

if (!subtaskIds) {
console.error(
@@ -3145,14 +3117,14 @@ ${result.result}
.action(async (options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
tasksPath: options.file || true
});

const taskIdsString = options.id;

// Resolve tag using standard pattern
const tag = taskMaster.getCurrentTag();
const tag =
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master';

// Show current tag context
displayCurrentTagIndicator(tag);
@@ -3796,13 +3768,12 @@ Examples:
.action(async (options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
tasksPath: options.file || true
});

const sourceId = options.from;
const destinationId = options.to;
const tag = taskMaster.getCurrentTag();
const tag = options.tag;

if (!sourceId || !destinationId) {
console.error(
@@ -4230,19 +4201,15 @@ Examples:
'-s, --status <status>',
'Show only tasks matching this status (e.g., pending, done)'
)
.option('-t, --tag <tag>', 'Tag to use for the task list (default: master)')
.action(async (options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
tasksPath: options.file || true
});

const withSubtasks = options.withSubtasks || false;
const status = options.status || null;

const tag = taskMaster.getCurrentTag();

console.log(
chalk.blue(
`📝 Syncing tasks to README.md${withSubtasks ? ' (with subtasks)' : ''}${status ? ` (status: ${status})` : ''}...`
@@ -4252,8 +4219,7 @@ Examples:
const success = await syncTasksToReadme(taskMaster.getProjectRoot(), {
withSubtasks,
status,
tasksPath: taskMaster.getTasksPath(),
tag
tasksPath: taskMaster.getTasksPath()
});

if (!success) {
@@ -4975,33 +4941,6 @@ async function runCLI(argv = process.argv) {
}
}

/**
* Resolve the final complexity-report path.
* Rules:
* 1. If caller passes --output, always respect it.
* 2. If no explicit output AND tag === 'master' → default report file
* 3. If no explicit output AND tag !== 'master' → append _<tag>.json
*
* @param {string|undefined} outputOpt --output value from CLI (may be undefined)
* @param {string} targetTag resolved tag (defaults to 'master')
* @param {string} projectRoot absolute project root
* @returns {string} absolute path for the report
*/
export function resolveComplexityReportPath({
projectRoot,
tag = 'master',
output // may be undefined
}) {
// 1. user knows best
if (output) {
return path.isAbsolute(output) ? output : path.join(projectRoot, output);
}

// 2. default naming
const base = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
return tag !== 'master' ? base.replace('.json', `_${tag}.json`) : base;
}

export {
registerCommands,
setupCLI,

@@ -27,8 +27,6 @@ import { generateTaskFiles } from './task-manager.js';
* @param {number|string} taskId - ID of the task to add dependency to
* @param {number|string} dependencyId - ID of the task to add as dependency
* @param {Object} context - Context object containing projectRoot and tag information
* @param {string} [context.projectRoot] - Project root path
* @param {string} [context.tag] - Tag for the task
*/
async function addDependency(tasksPath, taskId, dependencyId, context = {}) {
log('info', `Adding dependency ${dependencyId} to task ${taskId}...`);
@@ -216,8 +214,6 @@ async function addDependency(tasksPath, taskId, dependencyId, context = {}) {
* @param {number|string} taskId - ID of the task to remove dependency from
* @param {number|string} dependencyId - ID of the task to remove as dependency
* @param {Object} context - Context object containing projectRoot and tag information
* @param {string} [context.projectRoot] - Project root path
* @param {string} [context.tag] - Tag for the task
*/
async function removeDependency(tasksPath, taskId, dependencyId, context = {}) {
log('info', `Removing dependency ${dependencyId} from task ${taskId}...`);
@@ -91,12 +91,11 @@ function createEndMarker() {
* @param {string} options.status - Filter by status (e.g., 'pending', 'done')
* @param {string} options.tasksPath - Custom path to tasks.json
* @returns {boolean} - True if sync was successful, false otherwise
* TODO: Add tag support - this is not currently supported how we want to handle this - Parthy
*/
export async function syncTasksToReadme(projectRoot = null, options = {}) {
try {
const actualProjectRoot = projectRoot || findProjectRoot() || '.';
const { withSubtasks = false, status, tasksPath, tag } = options;
const { withSubtasks = false, status, tasksPath } = options;

// Get current tasks using the list-tasks functionality with markdown-readme format
const tasksOutput = await listTasks(
@@ -105,8 +104,7 @@ export async function syncTasksToReadme(projectRoot = null, options = {}) {
status,
null,
withSubtasks,
'markdown-readme',
{ projectRoot, tag }
'markdown-readme'
);

if (!tasksOutput) {

@@ -12,8 +12,6 @@ import generateTaskFiles from './generate-task-files.js';
* @param {Object} newSubtaskData - Data for creating a new subtask (used if existingTaskId is null)
* @param {boolean} generateFiles - Whether to regenerate task files after adding the subtask
* @param {Object} context - Context object containing projectRoot and tag information
* @param {string} context.projectRoot - Project root path
* @param {string} context.tag - Tag for the task
* @returns {Object} The newly created or converted subtask
*/
async function addSubtask(
@@ -24,12 +22,13 @@ async function addSubtask(
generateFiles = true,
context = {}
) {
const { projectRoot, tag } = context;
try {
log('info', `Adding subtask to parent task ${parentId}...`);

const currentTag =
context.tag || getCurrentTag(context.projectRoot) || 'master';
// Read the existing tasks with proper context
const data = readJSON(tasksPath, projectRoot, tag);
const data = readJSON(tasksPath, context.projectRoot, currentTag);
if (!data || !data.tasks) {
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
}
@@ -140,7 +139,7 @@ async function addSubtask(
}

// Write the updated tasks back to the file with proper context
writeJSON(tasksPath, data, projectRoot, tag);
writeJSON(tasksPath, data, context.projectRoot, currentTag);

// Generate task files if requested
if (generateFiles) {
@@ -22,7 +22,8 @@ import {
truncate,
ensureTagMetadata,
performCompleteTagMigration,
markMigrationForNotice
markMigrationForNotice,
getCurrentTag
} from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
@@ -92,7 +93,7 @@ function getAllTasks(rawData) {
* @param {string} [context.projectRoot] - Project root path (for MCP/env fallback)
* @param {string} [context.commandName] - The name of the command being executed (for telemetry)
* @param {string} [context.outputType] - The output type ('cli' or 'mcp', for telemetry)
* @param {string} [context.tag] - Tag for the task (optional)
* @param {string} [tag] - Tag for the task (optional)
* @returns {Promise<object>} An object containing newTaskId and telemetryData
*/
async function addTask(
@@ -103,10 +104,10 @@ async function addTask(
context = {},
outputFormat = 'text', // Default to text for CLI
manualTaskData = null,
useResearch = false
useResearch = false,
tag = null
) {
const { session, mcpLog, projectRoot, commandName, outputType, tag } =
context;
const { session, mcpLog, projectRoot, commandName, outputType } = context;
const isMCP = !!mcpLog;

// Create a consistent logFn object regardless of context
@@ -223,7 +224,7 @@ async function addTask(

try {
// Read the existing tasks - IMPORTANT: Read the raw data without tag resolution
let rawData = readJSON(tasksPath, projectRoot, tag); // No tag parameter
let rawData = readJSON(tasksPath, projectRoot); // No tag parameter

// Handle the case where readJSON returns resolved data with _rawTaggedData
if (rawData && rawData._rawTaggedData) {
@@ -278,7 +279,8 @@ async function addTask(
}

// Use the provided tag, or the current active tag, or default to 'master'
const targetTag = tag;
const targetTag =
tag || context.tag || getCurrentTag(projectRoot) || 'master';

// Ensure the target tag exists
if (!rawData[targetTag]) {
@@ -387,7 +389,7 @@ async function addTask(
report(`Generating task data with AI with prompt:\n${prompt}`, 'info');

// --- Use the new ContextGatherer ---
const contextGatherer = new ContextGatherer(projectRoot, tag);
const contextGatherer = new ContextGatherer(projectRoot);
const gatherResult = await contextGatherer.gather({
semanticQuery: prompt,
dependencyTasks: numericDependencies,
@@ -19,7 +19,6 @@ import {
COMPLEXITY_REPORT_FILE,
LEGACY_TASKS_FILE
} from '../../../src/constants/paths.js';
import { resolveComplexityReportOutputPath } from '../../../src/utils/path-utils.js';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
import { flattenTasksWithSubtasks } from '../utils.js';
@@ -72,7 +71,6 @@ Do not include any explanatory text, markdown formatting, or code block markers
* @param {string|number} [options.threshold] - Complexity threshold
* @param {boolean} [options.research] - Use research role
* @param {string} [options.projectRoot] - Project root path (for MCP/env fallback).
* @param {string} [options.tag] - Tag for the task
* @param {string} [options.id] - Comma-separated list of task IDs to analyze specifically
* @param {number} [options.from] - Starting task ID in a range to analyze
* @param {number} [options.to] - Ending task ID in a range to analyze
@@ -86,6 +84,7 @@ Do not include any explanatory text, markdown formatting, or code block markers
async function analyzeTaskComplexity(options, context = {}) {
const { session, mcpLog } = context;
const tasksPath = options.file || LEGACY_TASKS_FILE;
const outputPath = options.output || COMPLEXITY_REPORT_FILE;
const thresholdScore = parseFloat(options.threshold || '5');
const useResearch = options.research || false;
const projectRoot = options.projectRoot;
@@ -110,13 +109,6 @@ async function analyzeTaskComplexity(options, context = {}) {
}
};

// Resolve output path using tag-aware resolution
const outputPath = resolveComplexityReportOutputPath(
options.output,
{ projectRoot, tag },
reportLog
);

if (outputFormat === 'text') {
console.log(
chalk.blue(
@@ -228,7 +220,7 @@ async function analyzeTaskComplexity(options, context = {}) {
let gatheredContext = '';
if (originalData && originalData.tasks.length > 0) {
try {
const contextGatherer = new ContextGatherer(projectRoot, tag);
const contextGatherer = new ContextGatherer(projectRoot);
const allTasksFlat = flattenTasksWithSubtasks(originalData.tasks);
const fuzzySearch = new FuzzyTaskSearch(
allTasksFlat,
@@ -543,7 +535,7 @@ async function analyzeTaskComplexity(options, context = {}) {
}
}

// Merge with existing report - only keep entries from the current tag
// Merge with existing report
let finalComplexityAnalysis = [];

if (existingReport && Array.isArray(existingReport.complexityAnalysis)) {
@@ -552,14 +544,10 @@ async function analyzeTaskComplexity(options, context = {}) {
complexityAnalysis.map((item) => item.taskId)
);

// Keep existing entries that weren't in this analysis run AND belong to the current tag
// We determine tag membership by checking if the task ID exists in the current tag's tasks
const currentTagTaskIds = new Set(tasksData.tasks.map((t) => t.id));
// Keep existing entries that weren't in this analysis run
const existingEntriesNotAnalyzed =
existingReport.complexityAnalysis.filter(
(item) =>
!analyzedTaskIds.has(item.taskId) &&
currentTagTaskIds.has(item.taskId) // Only keep entries for tasks in current tag
(item) => !analyzedTaskIds.has(item.taskId)
);

// Combine with new analysis
@@ -569,7 +557,7 @@ async function analyzeTaskComplexity(options, context = {}) {
];

reportLog(
`Merged ${complexityAnalysis.length} new analyses with ${existingEntriesNotAnalyzed.length} existing entries from current tag`,
`Merged ${complexityAnalysis.length} new analyses with ${existingEntriesNotAnalyzed.length} existing entries`,
'info'
);
} else {
@@ -11,8 +11,6 @@ import { displayBanner } from '../ui.js';
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} taskIds - Task IDs to clear subtasks from
* @param {Object} context - Context object containing projectRoot and tag
* @param {string} [context.projectRoot] - Project root path
* @param {string} [context.tag] - Tag for the task
*/
function clearSubtasks(tasksPath, taskIds, context = {}) {
const { projectRoot, tag } = context;

@@ -20,8 +20,6 @@ import boxen from 'boxen';
* @param {Object} context - Context object containing session and mcpLog.
* @param {Object} [context.session] - Session object from MCP.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [context.projectRoot] - Project root path
* @param {string} [context.tag] - Tag for the task
* @param {string} [outputFormat='text'] - Output format ('text' or 'json'). MCP calls should use 'json'.
* @returns {Promise<{success: boolean, expandedCount: number, failedCount: number, skippedCount: number, tasksToExpand: number, telemetryData: Array<Object>}>} - Result summary.
*/
@@ -34,7 +32,12 @@ async function expandAllTasks(
context = {},
outputFormat = 'text' // Assume text default for CLI
) {
const { session, mcpLog, projectRoot: providedProjectRoot, tag } = context;
const {
session,
mcpLog,
projectRoot: providedProjectRoot,
tag: contextTag
} = context;
const isMCPCall = !!mcpLog; // Determine if called from MCP

const projectRoot = providedProjectRoot || findProjectRoot();
@@ -76,7 +79,7 @@ async function expandAllTasks(

try {
logger.info(`Reading tasks from ${tasksPath}`);
const data = readJSON(tasksPath, projectRoot, tag);
const data = readJSON(tasksPath, projectRoot, contextTag);
if (!data || !data.tasks) {
throw new Error(`Invalid tasks data in ${tasksPath}`);
}
@@ -126,7 +129,7 @@ async function expandAllTasks(
numSubtasks,
useResearch,
additionalContext,
{ ...context, projectRoot, tag: data.tag || tag }, // Pass the whole context object with projectRoot and resolved tag
{ ...context, projectRoot, tag: data.tag || contextTag }, // Pass the whole context object with projectRoot and resolved tag
force
);
expandedCount++;

@@ -290,8 +290,6 @@ function parseSubtasksFromText(
* @param {Object} context - Context object containing session and mcpLog.
* @param {Object} [context.session] - Session object from MCP.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [context.projectRoot] - Project root path
* @param {string} [context.tag] - Tag for the task
* @param {boolean} [force=false] - If true, replace existing subtasks; otherwise, append.
* @returns {Promise<Object>} The updated parent task object with new subtasks.
* @throws {Error} If task not found, AI service fails, or parsing fails.
@@ -305,13 +303,7 @@ async function expandTask(
context = {},
force = false
) {
const {
session,
mcpLog,
projectRoot: contextProjectRoot,
tag,
complexityReportPath
} = context;
const { session, mcpLog, projectRoot: contextProjectRoot, tag } = context;
const outputFormat = mcpLog ? 'json' : 'text';

// Determine projectRoot: Use from context if available, otherwise derive from tasksPath
@@ -358,7 +350,7 @@ async function expandTask(
// --- Context Gathering ---
let gatheredContext = '';
try {
const contextGatherer = new ContextGatherer(projectRoot, tag);
const contextGatherer = new ContextGatherer(projectRoot);
const allTasksFlat = flattenTasksWithSubtasks(data.tasks);
const fuzzySearch = new FuzzyTaskSearch(allTasksFlat, 'expand-task');
const searchQuery = `${task.title} ${task.description}`;
@@ -387,10 +379,17 @@ async function expandTask(
// --- Complexity Report Integration ---
let finalSubtaskCount;
let complexityReasoningContext = '';

// Use tag-aware complexity report path
const complexityReportPath = getTagAwareFilePath(
COMPLEXITY_REPORT_FILE,
tag,
projectRoot
);
let taskAnalysis = null;

logger.info(
`Looking for complexity report at: ${complexityReportPath}${tag !== 'master' ? ` (tag-specific for '${tag}')` : ''}`
`Looking for complexity report at: ${complexityReportPath}${tag && tag !== 'master' ? ` (tag-specific for '${tag}')` : ''}`
);

try {
@@ -12,20 +12,16 @@ import { getDebugFlag } from '../config-manager.js';
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} outputDir - Output directory for task files
* @param {Object} options - Additional options (mcpLog for MCP mode, projectRoot, tag)
* @param {string} [options.projectRoot] - Project root path
* @param {string} [options.tag] - Tag for the task
* @param {Object} [options.mcpLog] - MCP logger object
* @returns {Object|undefined} Result object in MCP mode, undefined in CLI mode
*/
function generateTaskFiles(tasksPath, outputDir, options = {}) {
try {
const isMcpMode = !!options?.mcpLog;
const { projectRoot, tag } = options;

// 1. Read the raw data structure, ensuring we have all tags.
// We call readJSON without a specific tag to get the resolved default view,
// which correctly contains the full structure in `_rawTaggedData`.
const resolvedData = readJSON(tasksPath, projectRoot, tag);
const resolvedData = readJSON(tasksPath, options.projectRoot);
if (!resolvedData) {
throw new Error(`Could not read or parse tasks file: ${tasksPath}`);
}
@@ -33,10 +29,13 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {
const rawData = resolvedData._rawTaggedData || resolvedData;

// 2. Determine the target tag we need to generate files for.
const tagData = rawData[tag];
const targetTag = options.tag || resolvedData.tag || 'master';
const tagData = rawData[targetTag];

if (!tagData || !tagData.tasks) {
throw new Error(`Tag '${tag}' not found or has no tasks in the data.`);
throw new Error(
`Tag '${targetTag}' not found or has no tasks in the data.`
);
}
const tasksForGeneration = tagData.tasks;

@@ -47,15 +46,15 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {

log(
'info',
`Preparing to regenerate ${tasksForGeneration.length} task files for tag '${tag}'`
`Preparing to regenerate ${tasksForGeneration.length} task files for tag '${targetTag}'`
);

// 3. Validate dependencies using the FULL, raw data structure to prevent data loss.
validateAndFixDependencies(
rawData, // Pass the entire object with all tags
tasksPath,
projectRoot,
tag // Provide the current tag context for the operation
options.projectRoot,
targetTag // Provide the current tag context for the operation
);

const allTasksInTag = tagData.tasks;
@@ -67,14 +66,14 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {
const files = fs.readdirSync(outputDir);
// Tag-aware file patterns: master -> task_001.txt, other tags -> task_001_tagname.txt
const masterFilePattern = /^task_(\d+)\.txt$/;
const taggedFilePattern = new RegExp(`^task_(\\d+)_${tag}\\.txt$`);
const taggedFilePattern = new RegExp(`^task_(\\d+)_${targetTag}\\.txt$`);

const orphanedFiles = files.filter((file) => {
let match = null;
let fileTaskId = null;

// Check if file belongs to current tag
if (tag === 'master') {
if (targetTag === 'master') {
match = file.match(masterFilePattern);
if (match) {
fileTaskId = parseInt(match[1], 10);
@@ -95,7 +94,7 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {
if (orphanedFiles.length > 0) {
log(
'info',
`Found ${orphanedFiles.length} orphaned task files to remove for tag '${tag}'`
`Found ${orphanedFiles.length} orphaned task files to remove for tag '${targetTag}'`
);
orphanedFiles.forEach((file) => {
const filePath = path.join(outputDir, file);
@@ -109,13 +108,13 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {
}

// Generate task files for the target tag
log('info', `Generating individual task files for tag '${tag}'...`);
log('info', `Generating individual task files for tag '${targetTag}'...`);
tasksForGeneration.forEach((task) => {
// Tag-aware file naming: master -> task_001.txt, other tags -> task_001_tagname.txt
const taskFileName =
tag === 'master'
targetTag === 'master'
? `task_${task.id.toString().padStart(3, '0')}.txt`
: `task_${task.id.toString().padStart(3, '0')}_${tag}.txt`;
: `task_${task.id.toString().padStart(3, '0')}_${targetTag}.txt`;

const taskPath = path.join(outputDir, taskFileName);

@@ -175,7 +174,7 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {

log(
'success',
`All ${tasksForGeneration.length} tasks for tag '${tag}' have been generated into '${outputDir}'.`
`All ${tasksForGeneration.length} tasks for tag '${targetTag}' have been generated into '${outputDir}'.`
);

if (isMcpMode) {
@@ -26,9 +26,8 @@ import {
* @param {string} reportPath - Path to the complexity report
* @param {boolean} withSubtasks - Whether to show subtasks
* @param {string} outputFormat - Output format (text or json)
* @param {Object} context - Context object (required)
* @param {string} context.projectRoot - Project root path
* @param {string} context.tag - Tag for the task
* @param {string} tag - Optional tag to override current tag resolution
* @param {Object} context - Optional context object containing projectRoot and other options
* @returns {Object} - Task list result for json format
*/
function listTasks(
@@ -37,18 +36,18 @@ function listTasks(
reportPath = null,
withSubtasks = false,
outputFormat = 'text',
tag = null,
context = {}
) {
const { projectRoot, tag } = context;
try {
// Extract projectRoot from context if provided
const projectRoot = context.projectRoot || null;
const data = readJSON(tasksPath, projectRoot, tag); // Pass projectRoot to readJSON
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
}

// Add complexity scores to tasks if report exists
// `reportPath` is already tag-aware (resolved at the CLI boundary).
const complexityReport = readComplexityReport(reportPath);
// Apply complexity scores to tasks
if (complexityReport && complexityReport.complexityAnalysis) {
@@ -1,5 +1,11 @@
import path from 'path';
import { log, readJSON, writeJSON, setTasksForTag } from '../utils.js';
import {
log,
readJSON,
writeJSON,
getCurrentTag,
setTasksForTag
} from '../utils.js';
import { isTaskDependentOn } from '../task-manager.js';
import generateTaskFiles from './generate-task-files.js';

@@ -21,7 +27,6 @@ async function moveTask(
generateFiles = false,
options = {}
) {
const { projectRoot, tag } = options;
// Check if we have comma-separated IDs (batch move)
const sourceIds = sourceId.split(',').map((id) => id.trim());
const destinationIds = destinationId.split(',').map((id) => id.trim());
@@ -48,10 +53,7 @@ async function moveTask(

// Generate files once at the end if requested
if (generateFiles) {
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
tag: tag,
projectRoot: projectRoot
});
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}

return {
@@ -62,7 +64,7 @@ async function moveTask(

// Single move logic
// Read the raw data without tag resolution to preserve tagged structure
let rawData = readJSON(tasksPath, projectRoot, tag);
let rawData = readJSON(tasksPath, options.projectRoot); // No tag parameter

// Handle the case where readJSON returns resolved data with _rawTaggedData
if (rawData && rawData._rawTaggedData) {
@@ -70,19 +72,27 @@ async function moveTask(
rawData = rawData._rawTaggedData;
}

// Determine the current tag
const currentTag =
options.tag || getCurrentTag(options.projectRoot) || 'master';

// Ensure the tag exists in the raw data
if (!rawData || !rawData[tag] || !Array.isArray(rawData[tag].tasks)) {
if (
!rawData ||
!rawData[currentTag] ||
!Array.isArray(rawData[currentTag].tasks)
) {
throw new Error(
`Invalid tasks file or tag "${tag}" not found at ${tasksPath}`
`Invalid tasks file or tag "${currentTag}" not found at ${tasksPath}`
);
}

// Get the tasks for the current tag
const tasks = rawData[tag].tasks;
const tasks = rawData[currentTag].tasks;

log(
'info',
`Moving task/subtask ${sourceId} to ${destinationId} (tag: ${tag})`
`Moving task/subtask ${sourceId} to ${destinationId} (tag: ${currentTag})`
);

// Parse source and destination IDs
@@ -106,17 +116,14 @@ async function moveTask(
}

// Update the data structure with the modified tasks
rawData[tag].tasks = tasks;
rawData[currentTag].tasks = tasks;

// Always write the data object, never the _rawTaggedData directly
// The writeJSON function will filter out _rawTaggedData automatically
writeJSON(tasksPath, rawData, options.projectRoot, tag);
writeJSON(tasksPath, rawData, options.projectRoot, currentTag);

if (generateFiles) {
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
tag: tag,
projectRoot: projectRoot
});
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}

return result;

@@ -76,7 +76,7 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
const outputFormat = isMCP ? 'json' : 'text';

// Use the provided tag, or the current active tag, or default to 'master'
const targetTag = tag;
const targetTag = tag || getCurrentTag(projectRoot) || 'master';

const logFn = mcpLog
? mcpLog
@@ -9,8 +9,6 @@ import generateTaskFiles from './generate-task-files.js';
* @param {boolean} convertToTask - Whether to convert the subtask to a standalone task
* @param {boolean} generateFiles - Whether to regenerate task files after removing the subtask
* @param {Object} context - Context object containing projectRoot and tag information
* @param {string} [context.projectRoot] - Project root path
* @param {string} [context.tag] - Tag for the task
* @returns {Object|null} The removed subtask if convertToTask is true, otherwise null
*/
async function removeSubtask(
@@ -20,12 +18,11 @@ async function removeSubtask(
generateFiles = true,
context = {}
) {
const { projectRoot, tag } = context;
try {
log('info', `Removing subtask ${subtaskId}...`);

// Read the existing tasks with proper context
const data = readJSON(tasksPath, projectRoot, tag);
const data = readJSON(tasksPath, context.projectRoot, context.tag);
if (!data || !data.tasks) {
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
}
@@ -106,7 +103,7 @@ async function removeSubtask(
}

// Write the updated tasks back to the file with proper context
writeJSON(tasksPath, data, projectRoot, tag);
writeJSON(tasksPath, data, context.projectRoot, context.tag);

// Generate task files if requested
if (generateFiles) {
@@ -9,8 +9,6 @@ import taskExists from './task-exists.js';
* @param {string} tasksPath - Path to the tasks file
* @param {string} taskIds - Comma-separated string of task/subtask IDs to remove (e.g., '5,6.1,7')
* @param {Object} context - Context object containing projectRoot and tag information
* @param {string} [context.projectRoot] - Project root path
* @param {string} [context.tag] - Tag for the task
* @returns {Object} Result object with success status, messages, and removed task info
*/
async function removeTask(tasksPath, taskIds, context = {}) {
@@ -34,7 +32,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {

try {
// Read the tasks file ONCE before the loop, preserving the full tagged structure
const rawData = readJSON(tasksPath, projectRoot, tag); // Read raw data
const rawData = readJSON(tasksPath, projectRoot); // Read raw data
if (!rawData) {
throw new Error(`Could not read tasks file at ${tasksPath}`);
}
@@ -42,18 +40,19 @@ async function removeTask(tasksPath, taskIds, context = {}) {
// Use the full tagged data if available, otherwise use the data as is
const fullTaggedData = rawData._rawTaggedData || rawData;

if (!fullTaggedData[tag] || !fullTaggedData[tag].tasks) {
throw new Error(`Tag '${tag}' not found or has no tasks.`);
const currentTag = tag || rawData.tag || 'master';
if (!fullTaggedData[currentTag] || !fullTaggedData[currentTag].tasks) {
throw new Error(`Tag '${currentTag}' not found or has no tasks.`);
}

const tasks = fullTaggedData[tag].tasks; // Work with tasks from the correct tag
const tasks = fullTaggedData[currentTag].tasks; // Work with tasks from the correct tag

const tasksToDeleteFiles = []; // Collect IDs of main tasks whose files should be deleted

for (const taskId of taskIdsToRemove) {
// Check if the task ID exists *before* attempting removal
if (!taskExists(tasks, taskId)) {
const errorMsg = `Task with ID ${taskId} in tag '${tag}' not found or already removed.`;
const errorMsg = `Task with ID ${taskId} in tag '${currentTag}' not found or already removed.`;
results.errors.push(errorMsg);
results.success = false; // Mark overall success as false if any error occurs
continue; // Skip to the next ID
@@ -95,7 +94,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
parentTask.subtasks.splice(subtaskIndex, 1);

results.messages.push(
`Successfully removed subtask ${taskId} from tag '${tag}'`
`Successfully removed subtask ${taskId} from tag '${currentTag}'`
);
}
// Handle main task removal
@@ -103,7 +102,9 @@ async function removeTask(tasksPath, taskIds, context = {}) {
const taskIdNum = parseInt(taskId, 10);
const taskIndex = tasks.findIndex((t) => t.id === taskIdNum);
if (taskIndex === -1) {
throw new Error(`Task with ID ${taskId} not found in tag '${tag}'`);
throw new Error(
`Task with ID ${taskId} not found in tag '${currentTag}'`
);
}

// Store the task info before removal
@@ -115,7 +116,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
tasks.splice(taskIndex, 1);

results.messages.push(
`Successfully removed task ${taskId} from tag '${tag}'`
`Successfully removed task ${taskId} from tag '${currentTag}'`
);
}
} catch (innerError) {
@@ -138,7 +139,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
);

// Update the tasks in the current tag of the full data structure
fullTaggedData[tag].tasks = tasks;
fullTaggedData[currentTag].tasks = tasks;

// Remove dependencies from all tags
for (const tagName in fullTaggedData) {
@@ -170,7 +171,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
}

// Save the updated raw data structure
writeJSON(tasksPath, fullTaggedData, projectRoot, tag);
writeJSON(tasksPath, fullTaggedData, projectRoot, currentTag);

// Delete task files AFTER saving tasks.json
for (const taskIdNum of tasksToDeleteFiles) {
@@ -195,7 +196,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
try {
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
projectRoot,
tag
tag: currentTag
});
results.messages.push('Task files regenerated successfully.');
} catch (genError) {
@@ -35,7 +35,6 @@ import {
* @param {boolean} [options.includeProjectTree] - Include project file tree
* @param {string} [options.detailLevel] - Detail level: 'low', 'medium', 'high'
* @param {string} [options.projectRoot] - Project root directory
* @param {string} [options.tag] - Tag for the task
* @param {boolean} [options.saveToFile] - Whether to save results to file (MCP mode)
* @param {Object} [context] - Execution context
* @param {Object} [context.session] - MCP session object
@@ -60,7 +59,6 @@ async function performResearch(
includeProjectTree = false,
detailLevel = 'medium',
projectRoot: providedProjectRoot,
tag,
saveToFile = false
} = options;

@@ -103,7 +101,7 @@ async function performResearch(

try {
// Initialize context gatherer
const contextGatherer = new ContextGatherer(projectRoot, tag);
const contextGatherer = new ContextGatherer(projectRoot);

// Auto-discover relevant tasks using fuzzy search to supplement provided tasks
let finalTaskIds = [...taskIds]; // Start with explicitly provided tasks
@@ -116,7 +114,7 @@ async function performResearch(
'tasks',
'tasks.json'
);
const tasksData = await readJSON(tasksPath, projectRoot, tag);
const tasksData = await readJSON(tasksPath, projectRoot);

if (tasksData && tasksData.tasks && tasksData.tasks.length > 0) {
// Flatten tasks to include subtasks for fuzzy search
@@ -771,7 +769,10 @@ async function handleSaveToTask(
return;
}

const data = readJSON(tasksPath, projectRoot, context.tag);
// Validate ID exists - use tag from context
const { getCurrentTag } = await import('../utils.js');
const tag = context.tag || getCurrentTag(projectRoot) || 'master';
const data = readJSON(tasksPath, projectRoot, tag);
if (!data || !data.tasks) {
console.log(chalk.red('❌ No valid tasks found.'));
return;
@@ -805,7 +806,7 @@ async function handleSaveToTask(
trimmedTaskId,
conversationThread,
false, // useResearch = false for simple append
context,
{ ...context, tag },
'text'
);

@@ -832,7 +833,7 @@ async function handleSaveToTask(
taskIdNum,
conversationThread,
false, // useResearch = false for simple append
context,
{ ...context, tag },
'text',
true // appendMode = true
);
@@ -7,6 +7,7 @@ import {
readJSON,
writeJSON,
findTaskById,
getCurrentTag,
ensureTagMetadata
} from '../utils.js';
import { displayBanner } from '../ui.js';
@@ -25,13 +26,16 @@ import {
* @param {string} taskIdInput - Task ID(s) to update
* @param {string} newStatus - New status
* @param {Object} options - Additional options (mcpLog for MCP mode, projectRoot for tag resolution)
* @param {string} [options.projectRoot] - Project root path
* @param {string} [options.tag] - Optional tag to override current tag resolution
* @param {string} [options.mcpLog] - MCP logger object
* @param {string} tag - Optional tag to override current tag resolution
* @returns {Object|undefined} Result object in MCP mode, undefined in CLI mode
*/
async function setTaskStatus(tasksPath, taskIdInput, newStatus, options = {}) {
const { projectRoot, tag } = options;
async function setTaskStatus(
tasksPath,
taskIdInput,
newStatus,
options = {},
tag = null
) {
try {
if (!isValidTaskStatus(newStatus)) {
throw new Error(
@@ -55,7 +59,7 @@ async function setTaskStatus(tasksPath, taskIdInput, newStatus, options = {}) {
log('info', `Reading tasks from ${tasksPath}...`);

// Read the raw data without tag resolution to preserve tagged structure
let rawData = readJSON(tasksPath, projectRoot, tag); // No tag parameter
let rawData = readJSON(tasksPath, options.projectRoot); // No tag parameter

// Handle the case where readJSON returns resolved data with _rawTaggedData
if (rawData && rawData._rawTaggedData) {
@@ -63,17 +67,24 @@ async function setTaskStatus(tasksPath, taskIdInput, newStatus, options = {}) {
rawData = rawData._rawTaggedData;
}

// Determine the current tag
const currentTag = tag || getCurrentTag(options.projectRoot) || 'master';

// Ensure the tag exists in the raw data
if (!rawData || !rawData[tag] || !Array.isArray(rawData[tag].tasks)) {
if (
!rawData ||
!rawData[currentTag] ||
!Array.isArray(rawData[currentTag].tasks)
) {
throw new Error(
`Invalid tasks file or tag "${tag}" not found at ${tasksPath}`
`Invalid tasks file or tag "${currentTag}" not found at ${tasksPath}`
);
}

// Get the tasks for the current tag
const data = {
tasks: rawData[tag].tasks,
tag,
tasks: rawData[currentTag].tasks,
tag: currentTag,
_rawTaggedData: rawData
};

@@ -112,16 +123,16 @@ async function setTaskStatus(tasksPath, taskIdInput, newStatus, options = {}) {
}

// Update the raw data structure with the modified tasks
rawData[tag].tasks = data.tasks;
rawData[currentTag].tasks = data.tasks;

// Ensure the tag has proper metadata
ensureTagMetadata(rawData[tag], {
description: `Tasks for ${tag} context`
ensureTagMetadata(rawData[currentTag], {
description: `Tasks for ${currentTag} context`
});

// Write the updated raw data back to the file
// The writeJSON function will automatically filter out _rawTaggedData
writeJSON(tasksPath, rawData, projectRoot, tag);
writeJSON(tasksPath, rawData, options.projectRoot, currentTag);

// Validate dependencies after status update
log('info', 'Validating dependencies after status update...');
@@ -17,7 +17,8 @@ import {
truncate,
isSilentMode,
findProjectRoot,
flattenTasksWithSubtasks
flattenTasksWithSubtasks,
getCurrentTag
} from '../utils.js';
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
@@ -36,7 +37,6 @@ import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
* @param {Object} [context.session] - Session object from MCP server.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [context.projectRoot] - Project root path (needed for AI service key resolution).
* @param {string} [context.tag] - Tag for the task
* @param {string} [outputFormat='text'] - Output format ('text' or 'json'). Automatically 'json' if mcpLog is present.
* @returns {Promise<Object|null>} - The updated subtask or null if update failed.
*/
@@ -92,7 +92,10 @@ async function updateSubtaskById(
throw new Error('Could not determine project root directory');
}

const data = readJSON(tasksPath, projectRoot, tag);
// Determine the tag to use
const currentTag = tag || getCurrentTag(projectRoot) || 'master';

const data = readJSON(tasksPath, projectRoot, currentTag);
if (!data || !data.tasks) {
throw new Error(
`No valid tasks found in ${tasksPath}. The file may be corrupted or have an invalid format.`
@@ -139,7 +142,7 @@ async function updateSubtaskById(
// --- Context Gathering ---
let gatheredContext = '';
try {
const contextGatherer = new ContextGatherer(projectRoot, tag);
const contextGatherer = new ContextGatherer(projectRoot);
const allTasksFlat = flattenTasksWithSubtasks(data.tasks);
const fuzzySearch = new FuzzyTaskSearch(allTasksFlat, 'update-subtask');
const searchQuery = `${parentTask.title} ${subtask.title} ${prompt}`;
@@ -328,17 +331,13 @@ async function updateSubtaskById(
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log('>>> DEBUG: About to call writeJSON with updated data...');
}
writeJSON(tasksPath, data, projectRoot, tag);
writeJSON(tasksPath, data, projectRoot, currentTag);
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log('>>> DEBUG: writeJSON call completed.');
}

report('success', `Successfully updated subtask ${subtaskId}`);
// Updated function call to make sure if uncommented it will generate the task files for the updated subtask based on the tag
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
// tag: tag,
// projectRoot: projectRoot
// });
// await generateTaskFiles(tasksPath, path.dirname(tasksPath));

if (outputFormat === 'text') {
if (loadingIndicator) {
@@ -12,7 +12,8 @@ import {
truncate,
isSilentMode,
flattenTasksWithSubtasks,
- findProjectRoot
+ findProjectRoot,
+ getCurrentTag
} from '../utils.js';

import {
@@ -261,7 +262,6 @@ function parseUpdatedTaskFromText(text, expectedTaskId, logFn, isMCP) {
* @param {Object} [context.session] - Session object from MCP server.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [context.projectRoot] - Project root path.
- * @param {string} [context.tag] - Tag for the task
* @param {string} [outputFormat='text'] - Output format ('text' or 'json').
* @param {boolean} [appendMode=false] - If true, append to details instead of full update.
* @returns {Promise<Object|null>} - The updated task or null if update failed.
@@ -320,8 +320,11 @@ async function updateTaskById(
throw new Error('Could not determine project root directory');
}

+ // Determine the tag to use
+ const currentTag = tag || getCurrentTag(projectRoot) || 'master';
+
// --- Task Loading and Status Check (Keep existing) ---
- const data = readJSON(tasksPath, projectRoot, tag);
+ const data = readJSON(tasksPath, projectRoot, currentTag);
if (!data || !data.tasks)
throw new Error(`No valid tasks found in ${tasksPath}.`);
const taskIndex = data.tasks.findIndex((task) => task.id === taskId);
@@ -361,7 +364,7 @@ async function updateTaskById(
// --- Context Gathering ---
let gatheredContext = '';
try {
- const contextGatherer = new ContextGatherer(projectRoot, tag);
+ const contextGatherer = new ContextGatherer(projectRoot);
const allTasksFlat = flattenTasksWithSubtasks(data.tasks);
const fuzzySearch = new FuzzyTaskSearch(allTasksFlat, 'update-task');
const searchQuery = `${taskToUpdate.title} ${taskToUpdate.description} ${prompt}`;
@@ -556,7 +559,7 @@ async function updateTaskById(

// Write the updated task back to file
data.tasks[taskIndex] = taskToUpdate;
- writeJSON(tasksPath, data, projectRoot, tag);
+ writeJSON(tasksPath, data, projectRoot, currentTag);
report('success', `Successfully appended to task ${taskId}`);

// Display success message for CLI
@@ -701,7 +704,7 @@ async function updateTaskById(
// --- End Update Task Data ---

// --- Write File and Generate (Unchanged) ---
- writeJSON(tasksPath, data, projectRoot, tag);
+ writeJSON(tasksPath, data, projectRoot, currentTag);
report('success', `Successfully updated task ${taskId}`);
// await generateTaskFiles(tasksPath, path.dirname(tasksPath));
// --- End Write File ---

@@ -9,7 +9,8 @@ import {
readJSON,
writeJSON,
truncate,
- isSilentMode
+ isSilentMode,
+ getCurrentTag
} from '../utils.js';

import {
@@ -233,8 +234,8 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
* @param {Object} context - Context object containing session and mcpLog.
* @param {Object} [context.session] - Session object from MCP server.
* @param {Object} [context.mcpLog] - MCP logger object.
+ * @param {string} [context.tag] - Tag for the task
* @param {string} [outputFormat='text'] - Output format ('text' or 'json').
- * @param {string} [tag=null] - Tag associated with the tasks.
*/
async function updateTasks(
tasksPath,
@@ -268,8 +269,11 @@ async function updateTasks(
throw new Error('Could not determine project root directory');
}

+ // Determine the current tag - prioritize explicit tag, then context.tag, then current tag
+ const currentTag = tag || getCurrentTag(projectRoot) || 'master';
+
// --- Task Loading/Filtering (Updated to pass projectRoot and tag) ---
- const data = readJSON(tasksPath, projectRoot, tag);
+ const data = readJSON(tasksPath, projectRoot, currentTag);
if (!data || !data.tasks)
throw new Error(`No valid tasks found in ${tasksPath}`);
const tasksToUpdate = data.tasks.filter(
@@ -288,7 +292,7 @@ async function updateTasks(
// --- Context Gathering ---
let gatheredContext = '';
try {
- const contextGatherer = new ContextGatherer(projectRoot, tag);
+ const contextGatherer = new ContextGatherer(projectRoot);
const allTasksFlat = flattenTasksWithSubtasks(data.tasks);
const fuzzySearch = new FuzzyTaskSearch(allTasksFlat, 'update');
const searchResults = fuzzySearch.findRelevantTasks(prompt, {
@@ -474,7 +478,7 @@ async function updateTasks(
);

// Fix: Pass projectRoot and currentTag to writeJSON
- writeJSON(tasksPath, data, projectRoot, tag);
+ writeJSON(tasksPath, data, projectRoot, currentTag);
if (isMCP)
logFn.info(
`Successfully updated ${actualUpdateCount} tasks in ${tasksPath}`

@@ -1197,18 +1197,18 @@ async function displayNextTask(
|
||||
* @param {string|number} taskId - The ID of the task to display
|
||||
* @param {string} complexityReportPath - Path to the complexity report file
|
||||
* @param {string} [statusFilter] - Optional status to filter subtasks by
|
||||
* @param {object} context - Context object containing projectRoot and tag
|
||||
* @param {string} context.projectRoot - Project root path
|
||||
* @param {string} context.tag - Tag for the task
|
||||
* @param {string} tag - Optional tag to override current tag resolution
|
||||
*/
|
||||
async function displayTaskById(
|
||||
tasksPath,
|
||||
taskId,
|
||||
complexityReportPath = null,
|
||||
statusFilter = null,
|
||||
tag = null,
|
||||
context = {}
|
||||
) {
|
||||
const { projectRoot, tag } = context;
|
||||
// Extract projectRoot from context
|
||||
const projectRoot = context.projectRoot || null;
|
||||
|
||||
// Read the tasks file with proper projectRoot for tag resolution
|
||||
const data = readJSON(tasksPath, projectRoot, tag);
|
||||
@@ -2251,9 +2251,7 @@ function displayAiUsageSummary(telemetryData, outputType = 'cli') {
|
||||
* @param {Array<string>} taskIds - Array of task IDs to display
|
||||
* @param {string} complexityReportPath - Path to complexity report
|
||||
* @param {string} statusFilter - Optional status filter for subtasks
|
||||
* @param {Object} context - Context object containing projectRoot and tag
|
||||
* @param {string} [context.projectRoot] - Project root path
|
||||
* @param {string} [context.tag] - Tag for the task
|
||||
* @param {Object} context - Optional context object containing projectRoot and tag
|
||||
*/
|
||||
async function displayMultipleTasksSummary(
|
||||
tasksPath,
|
||||
@@ -2604,6 +2602,7 @@ async function displayMultipleTasksSummary(
|
||||
choice.trim(),
|
||||
complexityReportPath,
|
||||
statusFilter,
|
||||
tag,
|
||||
context
|
||||
);
|
||||
}
|
||||
|
||||
@@ -1190,7 +1190,6 @@ function aggregateTelemetry(telemetryArray, overallCommandName) {
|
||||
}
|
||||
|
||||
/**
|
||||
* @deprecated Use TaskMaster.getCurrentTag() instead
|
||||
* Gets the current tag from state.json or falls back to defaultTag from config
|
||||
* @param {string} projectRoot - The project root directory (required)
|
||||
* @returns {string} The current tag name
|
||||
|
||||
@@ -21,7 +21,7 @@ const { encode } = pkg;
|
||||
* Context Gatherer class for collecting and formatting context from various sources
|
||||
*/
|
||||
export class ContextGatherer {
|
||||
constructor(projectRoot, tag) {
|
||||
constructor(projectRoot) {
|
||||
this.projectRoot = projectRoot;
|
||||
this.tasksPath = path.join(
|
||||
projectRoot,
|
||||
@@ -29,13 +29,12 @@ export class ContextGatherer {
|
||||
'tasks',
|
||||
'tasks.json'
|
||||
);
|
||||
this.tag = tag;
|
||||
this.allTasks = this._loadAllTasks();
|
||||
}
|
||||
|
||||
_loadAllTasks() {
|
||||
try {
|
||||
const data = readJSON(this.tasksPath, this.projectRoot, this.tag);
|
||||
const data = readJSON(this.tasksPath, this.projectRoot);
|
||||
const tasks = data?.tasks || [];
|
||||
return tasks;
|
||||
} catch (error) {
|
||||
@@ -959,15 +958,10 @@ export class ContextGatherer {
|
||||
/**
|
||||
* Factory function to create a context gatherer instance
|
||||
* @param {string} projectRoot - Project root directory
|
||||
* @param {string} tag - Tag for the task
|
||||
* @returns {ContextGatherer} Context gatherer instance
|
||||
* @throws {Error} If tag is not provided
|
||||
*/
|
||||
export function createContextGatherer(projectRoot, tag) {
|
||||
if (!tag) {
|
||||
throw new Error('Tag is required');
|
||||
}
|
||||
return new ContextGatherer(projectRoot, tag);
|
||||
export function createContextGatherer(projectRoot) {
|
||||
return new ContextGatherer(projectRoot);
|
||||
}
|
||||
|
||||
export default ContextGatherer;
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
/**
|
||||
* @typedef {'amp' | 'claude' | 'cline' | 'codex' | 'cursor' | 'gemini' | 'kiro' | 'opencode' | 'roo' | 'trae' | 'windsurf' | 'vscode' | 'zed'} RulesProfile
|
||||
* @typedef {'amp' | 'claude' | 'cline' | 'codex' | 'cursor' | 'gemini' | 'opencode' | 'roo' | 'trae' | 'windsurf' | 'vscode' | 'zed'} RulesProfile
|
||||
*/
|
||||
|
||||
/**
|
||||
@@ -16,7 +16,6 @@
|
||||
* - codex: Codex integration
|
||||
* - cursor: Cursor IDE rules
|
||||
* - gemini: Gemini integration
|
||||
* - kiro: Kiro IDE rules
|
||||
* - opencode: OpenCode integration
|
||||
* - roo: Roo Code IDE rules
|
||||
* - trae: Trae IDE rules
|
||||
@@ -36,7 +35,6 @@ export const RULE_PROFILES = [
|
||||
'codex',
|
||||
'cursor',
|
||||
'gemini',
|
||||
'kiro',
|
||||
'opencode',
|
||||
'roo',
|
||||
'trae',
|
||||
|
||||
@@ -5,7 +5,6 @@ export { clineProfile } from './cline.js';
|
||||
export { codexProfile } from './codex.js';
|
||||
export { cursorProfile } from './cursor.js';
|
||||
export { geminiProfile } from './gemini.js';
|
||||
export { kiroProfile } from './kiro.js';
|
||||
export { opencodeProfile } from './opencode.js';
|
||||
export { rooProfile } from './roo.js';
|
||||
export { traeProfile } from './trae.js';
|
||||
|
||||
@@ -1,42 +0,0 @@
|
||||
// Kiro profile for rule-transformer
|
||||
import { createProfile } from './base-profile.js';
|
||||
|
||||
// Create and export kiro profile using the base factory
|
||||
export const kiroProfile = createProfile({
|
||||
name: 'kiro',
|
||||
displayName: 'Kiro',
|
||||
url: 'kiro.dev',
|
||||
docsUrl: 'kiro.dev/docs',
|
||||
profileDir: '.kiro',
|
||||
rulesDir: '.kiro/steering', // Kiro rules location (full path)
|
||||
mcpConfig: true,
|
||||
mcpConfigName: 'settings/mcp.json', // Create directly in settings subdirectory
|
||||
includeDefaultRules: true, // Include default rules to get all the standard files
|
||||
targetExtension: '.md',
|
||||
fileMap: {
|
||||
// Override specific mappings - the base profile will create:
|
||||
// 'rules/cursor_rules.mdc': 'kiro_rules.md'
|
||||
// 'rules/dev_workflow.mdc': 'dev_workflow.md'
|
||||
// 'rules/self_improve.mdc': 'self_improve.md'
|
||||
// 'rules/taskmaster.mdc': 'taskmaster.md'
|
||||
// We can add additional custom mappings here if needed
|
||||
},
|
||||
customReplacements: [
|
||||
// Core Kiro directory structure changes
|
||||
{ from: /\.cursor\/rules/g, to: '.kiro/steering' },
|
||||
{ from: /\.cursor\/mcp\.json/g, to: '.kiro/settings/mcp.json' },
|
||||
|
||||
// Fix any remaining kiro/rules references that might be created during transformation
|
||||
{ from: /\.kiro\/rules/g, to: '.kiro/steering' },
|
||||
|
||||
// Essential markdown link transformations for Kiro structure
|
||||
{
|
||||
from: /\[(.+?)\]\(mdc:\.cursor\/rules\/(.+?)\.mdc\)/g,
|
||||
to: '[$1](.kiro/steering/$2.md)'
|
||||
},
|
||||
|
||||
// Kiro specific terminology
|
||||
{ from: /rules directory/g, to: 'steering directory' },
|
||||
{ from: /cursor rules/gi, to: 'Kiro steering files' }
|
||||
]
|
||||
});
|
||||
@@ -1,162 +1,6 @@
|
||||
// VS Code conversion profile for rule-transformer
|
||||
import path from 'path';
|
||||
import fs from 'fs';
|
||||
import { log } from '../../scripts/modules/utils.js';
|
||||
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';
|
||||
|
||||
/**
|
||||
* Transform standard MCP config format to VS Code format
|
||||
* @param {Object} mcpConfig - Standard MCP configuration object
|
||||
* @returns {Object} - Transformed VS Code configuration object
|
||||
*/
|
||||
function transformToVSCodeFormat(mcpConfig) {
|
||||
const vscodeConfig = {};
|
||||
|
||||
// Transform mcpServers to servers
|
||||
if (mcpConfig.mcpServers) {
|
||||
vscodeConfig.servers = {};
|
||||
|
||||
for (const [serverName, serverConfig] of Object.entries(
|
||||
mcpConfig.mcpServers
|
||||
)) {
|
||||
// Transform server configuration
|
||||
const transformedServer = {
|
||||
...serverConfig
|
||||
};
|
||||
|
||||
// Add type: "stdio" after the env block
|
||||
if (transformedServer.env) {
|
||||
// Reorder properties: keep command, args, env, then add type
|
||||
const reorderedServer = {};
|
||||
if (transformedServer.command)
|
||||
reorderedServer.command = transformedServer.command;
|
||||
if (transformedServer.args)
|
||||
reorderedServer.args = transformedServer.args;
|
||||
if (transformedServer.env) reorderedServer.env = transformedServer.env;
|
||||
reorderedServer.type = 'stdio';
|
||||
|
||||
// Add any other properties that might exist
|
||||
Object.keys(transformedServer).forEach((key) => {
|
||||
if (!['command', 'args', 'env', 'type'].includes(key)) {
|
||||
reorderedServer[key] = transformedServer[key];
|
||||
}
|
||||
});
|
||||
|
||||
vscodeConfig.servers[serverName] = reorderedServer;
|
||||
} else {
|
||||
// If no env block, just add type at the end
|
||||
transformedServer.type = 'stdio';
|
||||
vscodeConfig.servers[serverName] = transformedServer;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return vscodeConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* Lifecycle function called after MCP config generation to transform to VS Code format
|
||||
* @param {string} targetDir - Target project directory
|
||||
* @param {string} assetsDir - Assets directory (unused for VS Code)
|
||||
*/
|
||||
function onPostConvertRulesProfile(targetDir, assetsDir) {
|
||||
const vscodeConfigPath = path.join(targetDir, '.vscode', 'mcp.json');
|
||||
|
||||
if (!fs.existsSync(vscodeConfigPath)) {
|
||||
log('debug', '[VS Code] No .vscode/mcp.json found to transform');
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// Read the generated standard MCP config
|
||||
const mcpConfigContent = fs.readFileSync(vscodeConfigPath, 'utf8');
|
||||
const mcpConfig = JSON.parse(mcpConfigContent);
|
||||
|
||||
// Check if it's already in VS Code format (has servers instead of mcpServers)
|
||||
if (mcpConfig.servers) {
|
||||
log(
|
||||
'info',
|
||||
'[VS Code] mcp.json already in VS Code format, skipping transformation'
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
// Transform to VS Code format
|
||||
const vscodeConfig = transformToVSCodeFormat(mcpConfig);
|
||||
|
||||
// Write back the transformed config with proper formatting
|
||||
fs.writeFileSync(
|
||||
vscodeConfigPath,
|
||||
JSON.stringify(vscodeConfig, null, 2) + '\n'
|
||||
);
|
||||
|
||||
log('info', '[VS Code] Transformed mcp.json to VS Code format');
|
||||
log('debug', `[VS Code] Renamed mcpServers->servers, added type: "stdio"`);
|
||||
} catch (error) {
|
||||
log('error', `[VS Code] Failed to transform mcp.json: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Lifecycle function called when removing VS Code profile
|
||||
* @param {string} targetDir - Target project directory
|
||||
*/
|
||||
function onRemoveRulesProfile(targetDir) {
|
||||
const vscodeConfigPath = path.join(targetDir, '.vscode', 'mcp.json');
|
||||
|
||||
if (!fs.existsSync(vscodeConfigPath)) {
|
||||
log('debug', '[VS Code] No .vscode/mcp.json found to clean up');
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// Read the current config
|
||||
const configContent = fs.readFileSync(vscodeConfigPath, 'utf8');
|
||||
const config = JSON.parse(configContent);
|
||||
|
||||
// Check if it has the servers section and task-master-ai server
|
||||
if (config.servers && config.servers['task-master-ai']) {
|
||||
// Remove task-master-ai server
|
||||
delete config.servers['task-master-ai'];
|
||||
|
||||
// Check if there are other MCP servers
|
||||
const remainingServers = Object.keys(config.servers);
|
||||
|
||||
if (remainingServers.length === 0) {
|
||||
// No other servers, remove entire file
|
||||
fs.rmSync(vscodeConfigPath, { force: true });
|
||||
log('info', '[VS Code] Removed empty mcp.json file');
|
||||
|
||||
// Also remove .vscode directory if it's empty
|
||||
const vscodeDir = path.dirname(vscodeConfigPath);
|
||||
try {
|
||||
const dirContents = fs.readdirSync(vscodeDir);
|
||||
if (dirContents.length === 0) {
|
||||
fs.rmSync(vscodeDir, { recursive: true, force: true });
|
||||
log('debug', '[VS Code] Removed empty .vscode directory');
|
||||
}
|
||||
} catch (err) {
|
||||
// Directory might not be empty or might not exist, that's fine
|
||||
}
|
||||
} else {
|
||||
// Write back the modified config
|
||||
fs.writeFileSync(
|
||||
vscodeConfigPath,
|
||||
JSON.stringify(config, null, 2) + '\n'
|
||||
);
|
||||
log(
|
||||
'info',
|
||||
'[VS Code] Removed TaskMaster from mcp.json, preserved other configurations'
|
||||
);
|
||||
}
|
||||
} else {
|
||||
log('debug', '[VS Code] TaskMaster not found in mcp.json');
|
||||
}
|
||||
} catch (error) {
|
||||
log('error', `[VS Code] Failed to clean up mcp.json: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Create and export vscode profile using the base factory
|
||||
export const vscodeProfile = createProfile({
|
||||
name: 'vscode',
|
||||
@@ -164,8 +8,6 @@ export const vscodeProfile = createProfile({
|
||||
url: 'code.visualstudio.com',
|
||||
docsUrl: 'code.visualstudio.com/docs',
|
||||
rulesDir: '.github/instructions', // VS Code instructions location
|
||||
profileDir: '.vscode', // VS Code configuration directory
|
||||
mcpConfigName: 'mcp.json', // VS Code uses mcp.json in .vscode directory
|
||||
customReplacements: [
|
||||
// Core VS Code directory structure changes
|
||||
{ from: /\.cursor\/rules/g, to: '.github/instructions' },
|
||||
@@ -186,10 +28,5 @@ export const vscodeProfile = createProfile({
|
||||
// VS Code specific terminology
|
||||
{ from: /rules directory/g, to: 'instructions directory' },
|
||||
{ from: /cursor rules/gi, to: 'VS Code instructions' }
|
||||
],
|
||||
onPostConvert: onPostConvertRulesProfile,
|
||||
onRemove: onRemoveRulesProfile
|
||||
]
|
||||
});
|
||||
|
||||
// Export lifecycle functions separately to avoid naming conflicts
|
||||
export { onPostConvertRulesProfile, onRemoveRulesProfile };
|
||||
|
||||
@@ -14,8 +14,7 @@ import {
|
||||
TASKMASTER_DOCS_DIR,
|
||||
TASKMASTER_REPORTS_DIR,
|
||||
TASKMASTER_CONFIG_FILE,
|
||||
LEGACY_CONFIG_FILE,
|
||||
COMPLEXITY_REPORT_FILE
|
||||
LEGACY_CONFIG_FILE
|
||||
} from './constants/paths.js';
|
||||
|
||||
/**
|
||||
@@ -24,16 +23,13 @@ import {
|
||||
*/
|
||||
export class TaskMaster {
|
||||
#paths;
|
||||
#tag;
|
||||
|
||||
/**
|
||||
* The constructor is intended to be used only by the initTaskMaster factory function.
|
||||
* @param {object} paths - A pre-resolved object of all application paths.
|
||||
* @param {string|undefined} tag - The current tag.
|
||||
*/
|
||||
constructor(paths, tag) {
|
||||
constructor(paths) {
|
||||
this.#paths = Object.freeze({ ...paths });
|
||||
this.#tag = tag;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -68,19 +64,7 @@ export class TaskMaster {
|
||||
* @returns {string|null} The absolute path to the complexity report.
|
||||
*/
|
||||
getComplexityReportPath() {
|
||||
if (this.#paths.complexityReportPath) {
|
||||
return this.#paths.complexityReportPath;
|
||||
}
|
||||
|
||||
const complexityReportFile =
|
||||
this.getCurrentTag() !== 'master'
|
||||
? COMPLEXITY_REPORT_FILE.replace(
|
||||
'.json',
|
||||
`_${this.getCurrentTag()}.json`
|
||||
)
|
||||
: COMPLEXITY_REPORT_FILE;
|
||||
|
||||
return path.join(this.#paths.projectRoot, complexityReportFile);
|
||||
return this.#paths.complexityReportPath;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -103,45 +87,6 @@ export class TaskMaster {
|
||||
getAllPaths() {
|
||||
return this.#paths;
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the current tag from state.json or falls back to defaultTag from config
|
||||
* @returns {string} The current tag name
|
||||
*/
|
||||
getCurrentTag() {
|
||||
if (this.#tag) {
|
||||
return this.#tag;
|
||||
}
|
||||
|
||||
try {
|
||||
// Try to read current tag from state.json using fs directly
|
||||
if (fs.existsSync(this.#paths.statePath)) {
|
||||
const rawState = fs.readFileSync(this.#paths.statePath, 'utf8');
|
||||
const stateData = JSON.parse(rawState);
|
||||
if (stateData && stateData.currentTag) {
|
||||
return stateData.currentTag;
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
// Ignore errors, fall back to default
|
||||
}
|
||||
|
||||
// Fall back to defaultTag from config using fs directly
|
||||
try {
|
||||
if (fs.existsSync(this.#paths.configPath)) {
|
||||
const rawConfig = fs.readFileSync(this.#paths.configPath, 'utf8');
|
||||
const configData = JSON.parse(rawConfig);
|
||||
if (configData && configData.global && configData.global.defaultTag) {
|
||||
return configData.global.defaultTag;
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
// Ignore errors, use hardcoded default
|
||||
}
|
||||
|
||||
// Final fallback
|
||||
return 'master';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -155,7 +100,6 @@ export class TaskMaster {
|
||||
* @param {string} [overrides.complexityReportPath]
|
||||
* @param {string} [overrides.configPath]
|
||||
* @param {string} [overrides.statePath]
|
||||
* @param {string} [overrides.tag]
|
||||
* @returns {TaskMaster} An initialized TaskMaster instance.
|
||||
*/
|
||||
export function initTaskMaster(overrides = {}) {
|
||||
@@ -179,33 +123,17 @@ export function initTaskMaster(overrides = {}) {
|
||||
pathType,
|
||||
override,
|
||||
defaultPaths = [],
|
||||
basePath = null,
|
||||
createParentDirs = false
|
||||
basePath = null
|
||||
) => {
|
||||
if (typeof override === 'string') {
|
||||
const resolvedPath = path.isAbsolute(override)
|
||||
? override
|
||||
: path.resolve(basePath || process.cwd(), override);
|
||||
|
||||
if (createParentDirs) {
|
||||
// For output paths, create parent directory if it doesn't exist
|
||||
const parentDir = path.dirname(resolvedPath);
|
||||
if (!fs.existsSync(parentDir)) {
|
||||
try {
|
||||
fs.mkdirSync(parentDir, { recursive: true });
|
||||
} catch (error) {
|
||||
throw new Error(
|
||||
`Could not create directory for ${pathType}: ${parentDir}. Error: ${error.message}`
|
||||
);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// Original validation logic
|
||||
if (!fs.existsSync(resolvedPath)) {
|
||||
throw new Error(
|
||||
`${pathType} override path does not exist: ${resolvedPath}`
|
||||
);
|
||||
}
|
||||
if (!fs.existsSync(resolvedPath)) {
|
||||
throw new Error(
|
||||
`${pathType} override path does not exist: ${resolvedPath}`
|
||||
);
|
||||
}
|
||||
return resolvedPath;
|
||||
}
|
||||
@@ -361,10 +289,9 @@ export function initTaskMaster(overrides = {}) {
|
||||
'task-complexity-report.json',
|
||||
'complexity-report.json'
|
||||
],
|
||||
paths.projectRoot,
|
||||
true // Enable parent directory creation for output paths
|
||||
paths.projectRoot
|
||||
);
|
||||
}
|
||||
|
||||
return new TaskMaster(paths, overrides.tag);
|
||||
return new TaskMaster(paths);
|
||||
}
|
||||
|
||||
@@ -271,12 +271,7 @@ export function findComplexityReportPath(
|
||||
'' // Project root
|
||||
];
|
||||
|
||||
const fileNames = ['task-complexity', 'complexity-report'].map((fileName) => {
|
||||
if (args?.tag && args?.tag !== 'master') {
|
||||
return `${fileName}_${args.tag}.json`;
|
||||
}
|
||||
return `${fileName}.json`;
|
||||
});
|
||||
const fileNames = ['task-complexity-report.json', 'complexity-report.json'];
|
||||
|
||||
for (const location of locations) {
|
||||
for (const fileName of fileNames) {
|
||||
@@ -358,7 +353,6 @@ export function resolveComplexityReportOutputPath(
|
||||
log = null
|
||||
) {
|
||||
const logger = getLoggerOrDefault(log);
|
||||
const tag = args?.tag;
|
||||
|
||||
// 1. If explicit path is provided, use it
|
||||
if (explicitPath) {
|
||||
@@ -375,19 +369,13 @@ export function resolveComplexityReportOutputPath(
|
||||
// 2. Try to get project root from args (MCP) or find it
|
||||
const rawProjectRoot =
|
||||
args?.projectRoot || findProjectRoot() || process.cwd();
|
||||
|
||||
// 3. Normalize project root to prevent double .taskmaster paths
|
||||
const projectRoot = normalizeProjectRoot(rawProjectRoot);
|
||||
|
||||
// 3. Use tag-aware filename
|
||||
let filename = 'task-complexity-report.json';
|
||||
if (tag && tag !== 'master') {
|
||||
filename = `task-complexity-report_${tag}.json`;
|
||||
}
|
||||
|
||||
// 4. Use new .taskmaster structure by default
|
||||
const defaultPath = path.join(projectRoot, '.taskmaster/reports', filename);
|
||||
logger.info?.(
|
||||
`Using tag-aware complexity report output path: ${defaultPath}`
|
||||
);
|
||||
const defaultPath = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
|
||||
logger.info?.(`Using default complexity report output path: ${defaultPath}`);
|
||||
|
||||
// Ensure the directory exists
|
||||
const outputDir = path.dirname(defaultPath);
|
||||
|
||||
@@ -1,204 +0,0 @@
|
||||
# Task Master E2E Tests
|
||||
|
||||
This directory contains the modern end-to-end test suite for Task Master AI. The JavaScript implementation provides parallel execution, better error handling, and improved maintainability compared to the legacy bash script.
|
||||
|
||||
## Features
|
||||
|
||||
- **Parallel Execution**: Run test groups concurrently for faster test completion
|
||||
- **Modular Architecture**: Tests are organized into logical groups (setup, core, providers, advanced)
|
||||
- **Comprehensive Logging**: Detailed logs with timestamps, cost tracking, and color-coded output
|
||||
- **LLM Analysis**: Automatic analysis of test results using AI
|
||||
- **Error Handling**: Robust error handling with categorization and recommendations
|
||||
- **Flexible Configuration**: Easy to configure test settings and provider configurations
|
||||
|
||||
## Structure
|
||||
|
||||
```
|
||||
tests/e2e/
|
||||
├── config/
|
||||
│ └── test-config.js # Test configuration and settings
|
||||
├── utils/
|
||||
│ ├── logger.js # Test logging utilities
|
||||
│ ├── test-helpers.js # Common test helper functions
|
||||
│ ├── llm-analyzer.js # LLM-based log analysis
|
||||
│ └── error-handler.js # Error handling and reporting
|
||||
├── tests/
|
||||
│ ├── setup.test.js # Setup and initialization tests
|
||||
│ ├── core.test.js # Core task management tests
|
||||
│ ├── providers.test.js # Multi-provider tests
|
||||
│ └── advanced.test.js # Advanced feature tests
|
||||
├── runners/
|
||||
│ ├── parallel-runner.js # Parallel test execution
|
||||
│ └── test-worker.js # Worker thread for parallel execution
|
||||
├── run-e2e-tests.js # Main test runner
|
||||
├── run_e2e.sh # Legacy bash implementation
|
||||
└── e2e_helpers.sh # Legacy bash helpers
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Run All Tests (Recommended)
|
||||
|
||||
```bash
|
||||
# Runs all test groups in the correct order
|
||||
npm run test:e2e
|
||||
```
|
||||
|
||||
### Run Tests Sequentially
|
||||
|
||||
```bash
|
||||
# Runs all test groups sequentially instead of in parallel
|
||||
npm run test:e2e:sequential
|
||||
```
|
||||
|
||||
### Run Individual Test Groups
|
||||
|
||||
Each test command automatically handles setup if needed, creating a fresh test directory:
|
||||
|
||||
```bash
|
||||
# Each command creates its own test environment automatically
|
||||
npm run test:e2e:setup # Setup only (initialize, parse PRD, analyze complexity)
|
||||
npm run test:e2e:core # Auto-runs setup + core tests (task CRUD, dependencies, status)
|
||||
npm run test:e2e:providers # Auto-runs setup + provider tests (multi-provider testing)
|
||||
npm run test:e2e:advanced # Auto-runs setup + advanced tests (tags, subtasks, expand)
|
||||
```
|
||||
|
||||
**Note**: Each command creates a fresh test directory, so running individual tests will not share state. This ensures test isolation but means each run will parse the PRD and set up from scratch.
|
||||
|
||||
### Run Multiple Groups
|
||||
|
||||
```bash
|
||||
# Specify multiple groups to run together
|
||||
node tests/e2e/run-e2e-tests.js --groups core,providers
|
||||
|
||||
# This automatically runs setup first if needed
|
||||
node tests/e2e/run-e2e-tests.js --groups providers,advanced
|
||||
```
|
||||
|
||||
### Run Tests Against Existing Directory
|
||||
|
||||
If you want to reuse a test directory from a previous run:
|
||||
|
||||
```bash
|
||||
# First, find your test directory from a previous run:
|
||||
ls tests/e2e/_runs/
|
||||
|
||||
# Then run specific tests against that directory:
|
||||
node tests/e2e/run-e2e-tests.js --groups core --test-dir tests/e2e/_runs/run_2025-07-03_094800611
|
||||
```
|
||||
|
||||
### Analyze Existing Log
|
||||
```bash
|
||||
npm run test:e2e:analyze
|
||||
|
||||
# Or analyze specific log file
|
||||
node tests/e2e/run-e2e-tests.js --analyze-log path/to/log.log
|
||||
```
|
||||
|
||||
### Skip Verification Tests
|
||||
```bash
|
||||
node tests/e2e/run-e2e-tests.js --skip-verification
|
||||
```
|
||||
|
||||
### Run Legacy Bash Tests
|
||||
```bash
|
||||
npm run test:e2e:bash
|
||||
```
|
||||
|
||||
## Test Groups
|
||||
|
||||
### Setup (`setup`)
|
||||
- NPM global linking
|
||||
- Project initialization
|
||||
- PRD parsing
|
||||
- Complexity analysis
|
||||
|
||||
### Core (`core`)
|
||||
- Task CRUD operations
|
||||
- Dependency management
|
||||
- Status management
|
||||
- Subtask operations
|
||||
|
||||
### Providers (`providers`)
|
||||
- Multi-provider add-task testing
|
||||
- Provider comparison
|
||||
- Model switching
|
||||
- Error handling per provider
|
||||
|
||||
### Advanced (`advanced`)
|
||||
- Tag management
|
||||
- Model configuration
|
||||
- Task expansion
|
||||
- File generation
|
||||
|
||||
## Configuration
|
||||
|
||||
Edit `config/test-config.js` to customize:
|
||||
|
||||
- Test paths and directories
|
||||
- Provider configurations
|
||||
- Test prompts
|
||||
- Parallel execution settings
|
||||
- LLM analysis settings
|
||||
|
||||
## Output
|
||||
|
||||
- **Log Files**: Saved to `tests/e2e/log/` with timestamp
|
||||
- **Test Artifacts**: Created in `tests/e2e/_runs/run_TIMESTAMP/`
|
||||
- **Console Output**: Color-coded with progress indicators
|
||||
- **Cost Tracking**: Automatic tracking of AI API costs
|
||||
|
||||
## Requirements
|
||||
|
||||
- Node.js >= 18.0.0
|
||||
- Dependencies: chalk, boxen, dotenv, node-fetch
|
||||
- System utilities: jq, bc
|
||||
- Valid API keys in `.env` file
|
||||
|
||||
## Comparison with Bash Tests
|
||||
|
||||
| Feature | Bash Script | JavaScript |
|
||||
|---------|------------|------------|
|
||||
| Parallel Execution | ❌ | ✅ |
|
||||
| Error Categorization | Basic | Advanced |
|
||||
| Test Isolation | Limited | Full |
|
||||
| Performance | Slower | Faster |
|
||||
| Debugging | Harder | Easier |
|
||||
| Cross-platform | Limited | Better |
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
1. **Missing Dependencies**: Install system utilities with `brew install jq bc` (macOS) or `apt-get install jq bc` (Linux)
|
||||
2. **API Errors**: Check `.env` file for valid API keys
|
||||
3. **Permission Errors**: Ensure proper file permissions
|
||||
4. **Timeout Issues**: Adjust timeout in config file
|
||||
|
||||
## Development
|
||||
|
||||
To add new tests:
|
||||
|
||||
1. Create a new test file in `tests/` directory
|
||||
2. Export a default async function that accepts (logger, helpers, context)
|
||||
3. Return a results object with status and errors
|
||||
4. Add the test to appropriate group in `test-config.js`
|
||||
|
||||
Example test structure:
|
||||
```javascript
|
||||
export default async function myTest(logger, helpers, context) {
|
||||
const results = {
|
||||
status: 'passed',
|
||||
errors: []
|
||||
};
|
||||
|
||||
try {
|
||||
logger.step('Running my test');
|
||||
// Test implementation
|
||||
logger.success('Test passed');
|
||||
} catch (error) {
|
||||
results.status = 'failed';
|
||||
results.errors.push(error.message);
|
||||
}
|
||||
|
||||
return results;
|
||||
}
|
||||
```
|
||||
@@ -1,81 +0,0 @@
|
||||
# E2E Test Reports
|
||||
|
||||
Task Master's E2E tests now generate comprehensive test reports using Jest Stare, providing an interactive and visually appealing test report similar to Playwright's reporting capabilities.
|
||||
|
||||
## Test Report Formats
|
||||
|
||||
When you run `npm run test:e2e:jest`, the following reports are generated:
|
||||
|
||||
### 1. Jest Stare HTML Report
|
||||
- **Location**: `test-results/index.html`
|
||||
- **Features**:
|
||||
- Interactive dashboard with charts and graphs
|
||||
- Test execution timeline and performance metrics
|
||||
- Detailed failure messages with stack traces
|
||||
- Console output for each test
|
||||
- Search and filter capabilities
|
||||
- Pass/Fail/Skip statistics with visual charts
|
||||
- Test duration analysis
|
||||
- Collapsible test suites
|
||||
- Coverage link integration
|
||||
- Summary statistics
|
||||
|
||||
### 2. JSON Results
|
||||
- **Location**: `test-results/jest-results.json`
|
||||
- **Use Cases**:
|
||||
- Programmatic access to test results
|
||||
- Custom reporting tools
|
||||
- Test result analysis
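For programmatic access, the JSON results file can be read directly. A minimal sketch; the field names (`numPassedTests`, `numTotalTests`, `testResults`) follow Jest's JSON output format and should be verified against the actual file:

```javascript
import { readFileSync } from 'fs';

// Summarize the Jest JSON results produced by the E2E run.
const results = JSON.parse(
	readFileSync('test-results/jest-results.json', 'utf8')
);
console.log(`${results.numPassedTests}/${results.numTotalTests} tests passed`);
for (const suite of results.testResults ?? []) {
	// Each entry describes one test file and its overall status.
	console.log(`${suite.status ?? 'unknown'} ${suite.testFilePath ?? suite.name}`);
}
```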
|
||||
|
||||
### 3. JUnit XML Report
|
||||
- **Location**: `test-results/e2e-junit.xml`
|
||||
- **Use Cases**:
|
||||
- CI/CD integration
|
||||
- Test result parsing
|
||||
- Historical tracking
|
||||
|
||||
### 4. Console Output
|
||||
- Standard Jest terminal output with verbose mode enabled
|
||||
|
||||
## Running Tests with Reports
|
||||
|
||||
```bash
|
||||
# Run all E2E tests and generate reports
|
||||
npm run test:e2e:jest
|
||||
|
||||
# View the HTML report
|
||||
npm run test:e2e:jest:report
|
||||
|
||||
# Run specific tests
|
||||
npm run test:e2e:jest:command "add-task"
|
||||
```
|
||||
|
||||
## Report Configuration
|
||||
|
||||
The report configuration is defined in `jest.e2e.config.js`:
|
||||
|
||||
- **HTML Reporter**: Includes failure messages, console logs, and execution warnings
|
||||
- **JUnit Reporter**: Includes console output and suite errors
|
||||
- **Coverage**: Separate coverage directory at `coverage-e2e/`
|
||||
|
||||
## CI/CD Integration
|
||||
|
||||
The JUnit XML report can be consumed by CI tools like:
|
||||
- Jenkins (JUnit plugin)
|
||||
- GitHub Actions (test-reporter action)
|
||||
- GitLab CI (artifact reports)
|
||||
- CircleCI (test results)
|
||||
|
||||
## Ignored Files
|
||||
|
||||
The following are automatically ignored by git:
|
||||
- `test-results/` directory
|
||||
- `coverage-e2e/` directory
|
||||
- Individual report files
|
||||
|
||||
## Viewing Historical Results
|
||||
|
||||
To keep historical test results:
|
||||
1. Copy the `test-results` directory before running new tests
|
||||
2. Use a timestamp suffix: `test-results-2024-01-15/`
|
||||
3. Compare HTML reports side by side
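Steps 1 and 2 can be automated with a small Node script; this is only a convenience sketch, not part of the test suite:

```javascript
import { cpSync, existsSync } from 'fs';

// Copy the latest report into a timestamped directory before the next run.
const stamp = new Date().toISOString().slice(0, 10); // e.g. 2024-01-15
if (existsSync('test-results')) {
	cpSync('test-results', `test-results-${stamp}`, { recursive: true });
	console.log(`Archived test-results to test-results-${stamp}`);
}
```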
|
||||
@@ -1,72 +0,0 @@
|
||||
import { fileURLToPath } from 'url';
|
||||
import { dirname, join } from 'path';
|
||||
import { config as dotenvConfig } from 'dotenv';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
// Load environment variables
|
||||
const projectRoot = join(__dirname, '../../..');
|
||||
dotenvConfig({ path: join(projectRoot, '.env') });
|
||||
|
||||
export const testConfig = {
|
||||
// Paths
|
||||
paths: {
|
||||
projectRoot,
|
||||
sourceDir: projectRoot,
|
||||
baseTestDir: join(projectRoot, 'tests/e2e/_runs'),
|
||||
logDir: join(projectRoot, 'tests/e2e/log'),
|
||||
samplePrdSource: join(projectRoot, 'tests/fixtures/sample-prd.txt'),
|
||||
mainEnvFile: join(projectRoot, '.env'),
|
||||
supportedModelsFile: join(
|
||||
projectRoot,
|
||||
'scripts/modules/supported-models.json'
|
||||
)
|
||||
},
|
||||
|
||||
// Test settings
|
||||
settings: {
|
||||
runVerificationTest: true,
|
||||
parallelTestGroups: 4, // Number of parallel test groups
|
||||
timeout: 600000, // 10 minutes default timeout
|
||||
retryAttempts: 2
|
||||
},
|
||||
|
||||
// Provider test configuration
|
||||
providers: [
|
||||
{ name: 'anthropic', model: 'claude-3-7-sonnet-20250219', flags: [] },
|
||||
{ name: 'openai', model: 'gpt-4o', flags: [] },
|
||||
{ name: 'google', model: 'gemini-2.5-pro-preview-05-06', flags: [] },
|
||||
{ name: 'perplexity', model: 'sonar-pro', flags: [] },
|
||||
{ name: 'xai', model: 'grok-3', flags: [] },
|
||||
{ name: 'openrouter', model: 'anthropic/claude-3.7-sonnet', flags: [] }
|
||||
],
|
||||
|
||||
// Test prompts
|
||||
prompts: {
|
||||
addTask:
|
||||
'Create a task to implement user authentication using OAuth 2.0 with Google as the provider. Include steps for registering the app, handling the callback, and storing user sessions.',
|
||||
updateTask:
|
||||
'Update backend server setup: Ensure CORS is configured to allow requests from the frontend origin.',
|
||||
updateFromTask:
|
||||
'Refactor the backend storage module to use a simple JSON file (storage.json) instead of an in-memory object for persistence. Update relevant tasks.',
|
||||
updateSubtask:
|
||||
'Implementation note: Remember to handle potential API errors and display a user-friendly message.'
|
||||
},
|
||||
|
||||
// LLM Analysis settings
|
||||
llmAnalysis: {
|
||||
enabled: true,
|
||||
model: 'claude-3-7-sonnet-20250219',
|
||||
provider: 'anthropic',
|
||||
maxTokens: 3072
|
||||
}
|
||||
};
|
||||
|
||||
// Export test groups for parallel execution
|
||||
export const testGroups = {
|
||||
setup: ['setup'],
|
||||
core: ['core'],
|
||||
providers: ['providers'],
|
||||
advanced: ['advanced']
|
||||
};
|
||||
@@ -368,7 +368,7 @@ log_step() {
|
||||
log_success "Formatted complexity report saved to complexity_report_formatted.log"
|
||||
|
||||
log_step "Expanding Task 1 (assuming it exists)"
|
||||
cmd_output_expand1=$(task-master expand --id=1 --cr complexity_results.json 2>&1)
|
||||
cmd_output_expand1=$(task-master expand --id=1 2>&1)
|
||||
exit_status_expand1=$?
|
||||
echo "$cmd_output_expand1"
|
||||
extract_and_sum_cost "$cmd_output_expand1"
|
||||
|
||||
@@ -1,225 +0,0 @@
|
||||
import { Worker } from 'worker_threads';
|
||||
import { fileURLToPath } from 'url';
|
||||
import { dirname, join } from 'path';
|
||||
import { EventEmitter } from 'events';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
export class ParallelTestRunner extends EventEmitter {
|
||||
constructor(logger) {
|
||||
super();
|
||||
this.logger = logger;
|
||||
this.workers = [];
|
||||
this.results = {};
|
||||
}
|
||||
|
||||
/**
|
||||
* Run test groups in parallel
|
||||
* @param {Object} testGroups - Groups of tests to run
|
||||
* @param {Object} sharedContext - Shared context for all tests
|
||||
* @returns {Promise<Object>} Combined results from all test groups
|
||||
*/
|
||||
async runTestGroups(testGroups, sharedContext) {
|
||||
const groupNames = Object.keys(testGroups);
|
||||
const workerPromises = [];
|
||||
|
||||
this.logger.info(
|
||||
`Starting parallel execution of ${groupNames.length} test groups`
|
||||
);
|
||||
|
||||
for (const groupName of groupNames) {
|
||||
const workerPromise = this.runTestGroup(
|
||||
groupName,
|
||||
testGroups[groupName],
|
||||
sharedContext
|
||||
);
|
||||
workerPromises.push(workerPromise);
|
||||
}
|
||||
|
||||
// Wait for all workers to complete
|
||||
const results = await Promise.allSettled(workerPromises);
|
||||
|
||||
// Process results
|
||||
const combinedResults = {
|
||||
overall: 'passed',
|
||||
groups: {},
|
||||
summary: {
|
||||
totalGroups: groupNames.length,
|
||||
passedGroups: 0,
|
||||
failedGroups: 0,
|
||||
errors: []
|
||||
}
|
||||
};
|
||||
|
||||
results.forEach((result, index) => {
|
||||
const groupName = groupNames[index];
|
||||
|
||||
if (result.status === 'fulfilled') {
|
||||
combinedResults.groups[groupName] = result.value;
|
||||
if (result.value.status === 'passed') {
|
||||
combinedResults.summary.passedGroups++;
|
||||
} else {
|
||||
combinedResults.summary.failedGroups++;
|
||||
combinedResults.overall = 'failed';
|
||||
}
|
||||
} else {
|
||||
combinedResults.groups[groupName] = {
|
||||
status: 'failed',
|
||||
error: result.reason.message || 'Unknown error'
|
||||
};
|
||||
combinedResults.summary.failedGroups++;
|
||||
combinedResults.summary.errors.push({
|
||||
group: groupName,
|
||||
error: result.reason.message
|
||||
});
|
||||
combinedResults.overall = 'failed';
|
||||
}
|
||||
});
|
||||
|
||||
return combinedResults;
|
||||
}
|
||||
|
||||
/**
|
||||
* Run a single test group in a worker thread
|
||||
*/
|
||||
async runTestGroup(groupName, testModules, sharedContext) {
|
||||
return new Promise((resolve, reject) => {
|
||||
const workerPath = join(__dirname, 'test-worker.js');
|
||||
|
||||
const worker = new Worker(workerPath, {
|
||||
workerData: {
|
||||
groupName,
|
||||
testModules,
|
||||
sharedContext,
|
||||
logDir: this.logger.logDir,
|
||||
testRunId: this.logger.testRunId
|
||||
}
|
||||
});
|
||||
|
||||
this.workers.push(worker);
|
||||
|
||||
// Handle messages from worker
|
||||
worker.on('message', (message) => {
|
||||
if (message.type === 'log') {
|
||||
const level = message.level.toLowerCase();
|
||||
if (typeof this.logger[level] === 'function') {
|
||||
this.logger[level](message.message);
|
||||
} else {
|
||||
// Fallback to info if the level doesn't exist
|
||||
this.logger.info(message.message);
|
||||
}
|
||||
} else if (message.type === 'step') {
|
||||
this.logger.step(message.message);
|
||||
} else if (message.type === 'cost') {
|
||||
this.logger.addCost(message.cost);
|
||||
} else if (message.type === 'results') {
|
||||
this.results[groupName] = message.results;
|
||||
}
|
||||
});
|
||||
|
||||
// Handle worker completion
|
||||
worker.on('exit', (code) => {
|
||||
this.workers = this.workers.filter((w) => w !== worker);
|
||||
|
||||
if (code === 0) {
|
||||
resolve(
|
||||
this.results[groupName] || { status: 'passed', group: groupName }
|
||||
);
|
||||
} else {
|
||||
reject(
|
||||
new Error(`Worker for group ${groupName} exited with code ${code}`)
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
// Handle worker errors
|
||||
worker.on('error', (error) => {
|
||||
this.workers = this.workers.filter((w) => w !== worker);
|
||||
reject(error);
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Terminate all running workers
|
||||
*/
|
||||
async terminate() {
|
||||
const terminationPromises = this.workers.map((worker) =>
|
||||
worker
|
||||
.terminate()
|
||||
.catch((err) =>
|
||||
this.logger.warning(`Failed to terminate worker: ${err.message}`)
|
||||
)
|
||||
);
|
||||
|
||||
await Promise.all(terminationPromises);
|
||||
this.workers = [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Sequential test runner for comparison or fallback
|
||||
*/
|
||||
export class SequentialTestRunner {
|
||||
constructor(logger, helpers) {
|
||||
this.logger = logger;
|
||||
this.helpers = helpers;
|
||||
}
|
||||
|
||||
/**
|
||||
* Run tests sequentially
|
||||
*/
|
||||
async runTests(testModules, context) {
|
||||
const results = {
|
||||
overall: 'passed',
|
||||
tests: {},
|
||||
summary: {
|
||||
totalTests: testModules.length,
|
||||
passedTests: 0,
|
||||
failedTests: 0,
|
||||
errors: []
|
||||
}
|
||||
};
|
||||
|
||||
for (const testModule of testModules) {
|
||||
try {
|
||||
this.logger.step(`Running ${testModule} tests`);
|
||||
|
||||
// Dynamic import of test module
|
||||
const testPath = join(
|
||||
dirname(__dirname),
|
||||
'tests',
|
||||
`${testModule}.test.js`
|
||||
);
|
||||
const { default: testFn } = await import(testPath);
|
||||
|
||||
// Run the test
|
||||
const testResults = await testFn(this.logger, this.helpers, context);
|
||||
|
||||
results.tests[testModule] = testResults;
|
||||
|
||||
if (testResults.status === 'passed') {
|
||||
results.summary.passedTests++;
|
||||
} else {
|
||||
results.summary.failedTests++;
|
||||
results.overall = 'failed';
|
||||
}
|
||||
} catch (error) {
|
||||
this.logger.error(`Failed to run ${testModule}: ${error.message}`);
|
||||
results.tests[testModule] = {
|
||||
status: 'failed',
|
||||
error: error.message
|
||||
};
|
||||
results.summary.failedTests++;
|
||||
results.summary.errors.push({
|
||||
test: testModule,
|
||||
error: error.message
|
||||
});
|
||||
results.overall = 'failed';
|
||||
}
|
||||
}
|
||||
|
||||
return results;
|
||||
}
|
||||
}
|
||||
@@ -1,135 +0,0 @@
|
||||
import { parentPort, workerData } from 'worker_threads';
|
||||
import { fileURLToPath } from 'url';
|
||||
import { dirname, join } from 'path';
|
||||
import { TestLogger } from '../utils/logger.js';
|
||||
import { TestHelpers } from '../utils/test-helpers.js';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
// Worker logger that sends messages to parent
|
||||
class WorkerLogger extends TestLogger {
|
||||
constructor(logDir, testRunId, groupName) {
|
||||
super(logDir, `${testRunId}_${groupName}`);
|
||||
this.groupName = groupName;
|
||||
}
|
||||
|
||||
log(level, message, options = {}) {
|
||||
super.log(level, message, options);
|
||||
|
||||
// Send log to parent
|
||||
parentPort.postMessage({
|
||||
type: 'log',
|
||||
level: level.toLowerCase(),
|
||||
message: `[${this.groupName}] ${message}`
|
||||
});
|
||||
}
|
||||
|
||||
step(message) {
|
||||
super.step(message);
|
||||
|
||||
parentPort.postMessage({
|
||||
type: 'step',
|
||||
message: `[${this.groupName}] ${message}`
|
||||
});
|
||||
}
|
||||
|
||||
addCost(cost) {
|
||||
super.addCost(cost);
|
||||
|
||||
parentPort.postMessage({
|
||||
type: 'cost',
|
||||
cost
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Main worker execution
|
||||
async function runTestGroup() {
|
||||
const { groupName, testModules, sharedContext, logDir, testRunId } =
|
||||
workerData;
|
||||
|
||||
const logger = new WorkerLogger(logDir, testRunId, groupName);
|
||||
const helpers = new TestHelpers(logger);
|
||||
|
||||
logger.info(`Worker started for test group: ${groupName}`);
|
||||
|
||||
const results = {
|
||||
group: groupName,
|
||||
status: 'passed',
|
||||
tests: {},
|
||||
errors: [],
|
||||
startTime: Date.now()
|
||||
};
|
||||
|
||||
try {
|
||||
// Run each test module in the group
|
||||
for (const testModule of testModules) {
|
||||
try {
|
||||
logger.info(`Running test: ${testModule}`);
|
||||
|
||||
// Dynamic import of test module
|
||||
const testPath = join(
|
||||
dirname(__dirname),
|
||||
'tests',
|
||||
`${testModule}.test.js`
|
||||
);
|
||||
const { default: testFn } = await import(testPath);
|
||||
|
||||
// Run the test with shared context
|
||||
const testResults = await testFn(logger, helpers, sharedContext);
|
||||
|
||||
results.tests[testModule] = testResults;
|
||||
|
||||
if (testResults.status !== 'passed') {
|
||||
results.status = 'failed';
|
||||
if (testResults.errors) {
|
||||
results.errors.push(...testResults.errors);
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`Test ${testModule} failed: ${error.message}`);
|
||||
results.tests[testModule] = {
|
||||
status: 'failed',
|
||||
error: error.message,
|
||||
stack: error.stack
|
||||
};
|
||||
results.status = 'failed';
|
||||
results.errors.push({
|
||||
test: testModule,
|
||||
error: error.message
|
||||
});
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`Worker error: ${error.message}`);
|
||||
results.status = 'failed';
|
||||
results.errors.push({
|
||||
group: groupName,
|
||||
error: error.message,
|
||||
stack: error.stack
|
||||
});
|
||||
}
|
||||
|
||||
results.endTime = Date.now();
|
||||
results.duration = results.endTime - results.startTime;
|
||||
|
||||
// Flush logs and get summary
|
||||
logger.flush();
|
||||
const summary = logger.getSummary();
|
||||
results.summary = summary;
|
||||
|
||||
// Send results to parent
|
||||
parentPort.postMessage({
|
||||
type: 'results',
|
||||
results
|
||||
});
|
||||
|
||||
logger.info(`Worker completed for test group: ${groupName}`);
|
||||
}
|
||||
|
||||
// Run the test group
|
||||
runTestGroup().catch((error) => {
|
||||
console.error('Worker fatal error:', error);
|
||||
process.exit(1);
|
||||
});
|
||||
@@ -1,59 +0,0 @@
|
||||
/**
|
||||
* Global setup for E2E tests
|
||||
* Runs once before all test suites
|
||||
*/
|
||||
|
||||
import { execSync } from 'child_process';
|
||||
import { existsSync } from 'fs';
|
||||
import { join } from 'path';
|
||||
import { fileURLToPath } from 'url';
|
||||
import { dirname } from 'path';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
export default async () => {
|
||||
// Silent mode for cleaner output
|
||||
if (!process.env.JEST_SILENT_REPORTER) {
|
||||
console.log('\n🚀 Setting up E2E test environment...\n');
|
||||
}
|
||||
|
||||
try {
|
||||
// Ensure task-master is linked globally
|
||||
const projectRoot = join(__dirname, '../../..');
|
||||
if (!process.env.JEST_SILENT_REPORTER) {
|
||||
console.log('📦 Linking task-master globally...');
|
||||
}
|
||||
execSync('npm link', {
|
||||
cwd: projectRoot,
|
||||
stdio: 'inherit'
|
||||
});
|
||||
|
||||
// Verify .env file exists
|
||||
const envPath = join(projectRoot, '.env');
|
||||
if (!existsSync(envPath)) {
|
||||
console.warn(
|
||||
'⚠️ Warning: .env file not found. Some tests may fail without API keys.'
|
||||
);
|
||||
} else {
|
||||
if (!process.env.JEST_SILENT_REPORTER) {
|
||||
console.log('✅ .env file found');
|
||||
}
|
||||
}
|
||||
|
||||
// Verify task-master command is available
|
||||
try {
|
||||
execSync('task-master --version', { stdio: 'pipe' });
|
||||
if (!process.env.JEST_SILENT_REPORTER) {
|
||||
console.log('✅ task-master command is available\n');
|
||||
}
|
||||
} catch (error) {
|
||||
throw new Error(
|
||||
'task-master command not found. Please ensure npm link succeeded.'
|
||||
);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('❌ Global setup failed:', error.message);
|
||||
throw error;
|
||||
}
|
||||
};
|
||||
@@ -1,14 +0,0 @@
|
||||
/**
|
||||
* Global teardown for E2E tests
|
||||
* Runs once after all test suites
|
||||
*/
|
||||
|
||||
export default async () => {
|
||||
// Silent mode for cleaner output
|
||||
if (!process.env.JEST_SILENT_REPORTER) {
|
||||
console.log('\n🧹 Cleaning up E2E test environment...\n');
|
||||
}
|
||||
|
||||
// Any global cleanup needed
|
||||
// Note: Individual test directories are cleaned up in afterEach hooks
|
||||
};
|
||||
@@ -1,91 +0,0 @@
|
||||
/**
|
||||
* Jest setup file for E2E tests
|
||||
* Runs before each test file
|
||||
*/
|
||||
|
||||
import { jest, expect, afterAll } from '@jest/globals';
|
||||
import { TestHelpers } from '../utils/test-helpers.js';
|
||||
import { TestLogger } from '../utils/logger.js';
|
||||
|
||||
// Increase timeout for all E2E tests (can be overridden per test)
|
||||
jest.setTimeout(600000);
|
||||
|
||||
// Add custom matchers for CLI testing
|
||||
expect.extend({
|
||||
toContainTaskId(received) {
|
||||
const taskIdRegex = /#?\d+/;
|
||||
const pass = taskIdRegex.test(received);
|
||||
|
||||
if (pass) {
|
||||
return {
|
||||
message: () => `expected ${received} not to contain a task ID`,
|
||||
pass: true
|
||||
};
|
||||
} else {
|
||||
return {
|
||||
message: () => `expected ${received} to contain a task ID (e.g., #123)`,
|
||||
pass: false
|
||||
};
|
||||
}
|
||||
},
|
||||
|
||||
toHaveExitCode(received, expected) {
|
||||
const pass = received.exitCode === expected;
|
||||
|
||||
if (pass) {
|
||||
return {
|
||||
message: () => `expected exit code not to be ${expected}`,
|
||||
pass: true
|
||||
};
|
||||
} else {
|
||||
return {
|
||||
message: () =>
|
||||
`expected exit code ${expected} but got ${received.exitCode}\nstderr: ${received.stderr}`,
|
||||
pass: false
|
||||
};
|
||||
}
|
||||
},
|
||||
|
||||
toContainInOutput(received, expected) {
|
||||
const output = (received.stdout || '') + (received.stderr || '');
|
||||
const pass = output.includes(expected);
|
||||
|
||||
if (pass) {
|
||||
return {
|
||||
message: () => `expected output not to contain "${expected}"`,
|
||||
pass: true
|
||||
};
|
||||
} else {
|
||||
return {
|
||||
message: () =>
|
||||
`expected output to contain "${expected}"\nstdout: ${received.stdout}\nstderr: ${received.stderr}`,
|
||||
pass: false
|
||||
};
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// Global test helpers
|
||||
global.TestHelpers = TestHelpers;
|
||||
global.TestLogger = TestLogger;
|
||||
|
||||
// Helper to create test context
|
||||
import { mkdtempSync } from 'fs';
|
||||
import { join } from 'path';
|
||||
import { tmpdir } from 'os';
|
||||
|
||||
global.createTestContext = (testName) => {
|
||||
// Create a proper log directory in temp for tests
|
||||
const testLogDir = mkdtempSync(join(tmpdir(), `task-master-test-logs-${testName}-`));
|
||||
const testRunId = Date.now().toString();
|
||||
|
||||
const logger = new TestLogger(testLogDir, testRunId);
|
||||
const helpers = new TestHelpers(logger);
|
||||
return { logger, helpers };
|
||||
};
|
||||
|
||||
// Clean up any hanging processes
|
||||
afterAll(async () => {
|
||||
// Give time for any async operations to complete
|
||||
await new Promise((resolve) => setTimeout(resolve, 100));
|
||||
});
|
||||
@@ -1,73 +0,0 @@
/**
 * Custom Jest test sequencer to manage parallel execution
 * and avoid hitting AI rate limits
 */

const Sequencer = require('@jest/test-sequencer').default;

class RateLimitSequencer extends Sequencer {
	/**
	 * Sort tests to optimize execution and avoid rate limits
	 */
	sort(tests) {
		// Categorize tests by their AI usage
		const aiHeavyTests = [];
		const aiLightTests = [];
		const nonAiTests = [];

		tests.forEach((test) => {
			const testPath = test.path.toLowerCase();

			// Tests that make heavy use of AI APIs
			if (
				testPath.includes('update-task') ||
				testPath.includes('expand-task') ||
				testPath.includes('research') ||
				testPath.includes('parse-prd') ||
				testPath.includes('generate') ||
				testPath.includes('analyze-complexity')
			) {
				aiHeavyTests.push(test);
			}
			// Tests that make light use of AI APIs
			else if (
				testPath.includes('add-task') ||
				testPath.includes('update-subtask')
			) {
				aiLightTests.push(test);
			}
			// Tests that don't use AI APIs
			else {
				nonAiTests.push(test);
			}
		});

		// Sort each category by duration (fastest first)
		const sortByDuration = (a, b) => {
			const aTime = a.duration || 0;
			const bTime = b.duration || 0;
			return aTime - bTime;
		};

		aiHeavyTests.sort(sortByDuration);
		aiLightTests.sort(sortByDuration);
		nonAiTests.sort(sortByDuration);

		// Return tests in order: non-AI first, then light AI, then heavy AI
		// This allows non-AI tests to run quickly while AI tests are distributed
		return [...nonAiTests, ...aiLightTests, ...aiHeavyTests];
	}

	/**
	 * Shard tests across workers to balance AI load
	 */
	shard(tests, { shardIndex, shardCount }) {
		// Jest's --shard=<index>/<count> passes a 1-based shardIndex,
		// so offset it before slicing to avoid skipping the first shard
		const shardSize = Math.ceil(tests.length / shardCount);
		const start = shardSize * (shardIndex - 1);
		const end = shardSize * shardIndex;

		return tests.slice(start, end);
	}
}

module.exports = RateLimitSequencer;
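For reference, a sequencer like this only takes effect once Jest is pointed at it through the `testSequencer` config option. A minimal sketch, assuming a hypothetical file path (the actual location is not shown in this diff):

```js
// jest.config.js — minimal sketch; the sequencer path below is an assumption for illustration
module.exports = {
	// Route test ordering and sharding decisions through the custom sequencer
	testSequencer: '<rootDir>/tests/e2e/testSequencer.js'
};
```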
@@ -1,501 +0,0 @@
/**
 * E2E tests for add-dependency command
 * Tests dependency management functionality
 */

import { describe, it, expect, beforeEach, afterEach } from '@jest/globals';
import { mkdtempSync, existsSync, readFileSync, rmSync, writeFileSync, mkdirSync } from 'fs';
import { join } from 'path';
import { tmpdir } from 'os';

describe('task-master add-dependency', () => {
	let testDir;
	let helpers;

	beforeEach(async () => {
		// Create test directory
		testDir = mkdtempSync(join(tmpdir(), 'task-master-add-dep-'));

		// Initialize test helpers
		const context = global.createTestContext('add-dependency');
		helpers = context.helpers;

		// Copy .env file if it exists
		const mainEnvPath = join(process.cwd(), '.env');
		const testEnvPath = join(testDir, '.env');
		if (existsSync(mainEnvPath)) {
			const envContent = readFileSync(mainEnvPath, 'utf8');
			writeFileSync(testEnvPath, envContent);
		}

		// Initialize task-master project
		const initResult = await helpers.taskMaster('init', ['-y'], {
			cwd: testDir
		});
		expect(initResult).toHaveExitCode(0);

		// Ensure tasks.json exists (bug workaround)
		const tasksJsonPath = join(testDir, '.taskmaster/tasks/tasks.json');
		if (!existsSync(tasksJsonPath)) {
			mkdirSync(join(testDir, '.taskmaster/tasks'), { recursive: true });
			writeFileSync(tasksJsonPath, JSON.stringify({ master: { tasks: [] } }));
		}
	});

	afterEach(() => {
		// Clean up test directory
		if (testDir && existsSync(testDir)) {
			rmSync(testDir, { recursive: true, force: true });
		}
	});

	describe('Basic dependency creation', () => {
		it('should add a single dependency to a task', async () => {
			// Create tasks
			const dep = await helpers.taskMaster('add-task', ['--title', 'Dependency task', '--description', 'A dependency'], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);

			const task = await helpers.taskMaster('add-task', ['--title', 'Main task', '--description', 'Main task description'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			// Add dependency
			const result = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId], { cwd: testDir });
			expect(result).toHaveExitCode(0);
			expect(result.stdout).toContain('Successfully added dependency');

			// Verify dependency was added
			const showResult = await helpers.taskMaster('show', [taskId], { cwd: testDir });
			expect(showResult.stdout).toContain('Dependencies:');
			expect(showResult.stdout).toContain(depId);
		});

		it('should add multiple dependencies one by one', async () => {
			// Create dependency tasks
			const dep1 = await helpers.taskMaster('add-task', ['--title', 'First dependency', '--description', 'First dep'], { cwd: testDir });
			const depId1 = helpers.extractTaskId(dep1.stdout);

			const dep2 = await helpers.taskMaster('add-task', ['--title', 'Second dependency', '--description', 'Second dep'], { cwd: testDir });
			const depId2 = helpers.extractTaskId(dep2.stdout);

			const dep3 = await helpers.taskMaster('add-task', ['--title', 'Third dependency', '--description', 'Third dep'], { cwd: testDir });
			const depId3 = helpers.extractTaskId(dep3.stdout);

			const task = await helpers.taskMaster('add-task', ['--title', 'Main task', '--description', 'Main task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			// Add dependencies one by one
			const result1 = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId1], { cwd: testDir });
			expect(result1).toHaveExitCode(0);

			const result2 = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId2], { cwd: testDir });
			expect(result2).toHaveExitCode(0);

			const result3 = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId3], { cwd: testDir });
			expect(result3).toHaveExitCode(0);

			// Verify all dependencies were added
			const showResult = await helpers.taskMaster('show', [taskId], { cwd: testDir });
			expect(showResult.stdout).toContain(depId1);
			expect(showResult.stdout).toContain(depId2);
			expect(showResult.stdout).toContain(depId3);
		});
	});

	describe('Dependency validation', () => {
		it('should prevent circular dependencies', async () => {
			// Create circular dependency chain
			const task1 = await helpers.taskMaster('add-task', ['--title', 'Task 1', '--description', 'First task'], { cwd: testDir });
			const id1 = helpers.extractTaskId(task1.stdout);

			const task2 = await helpers.taskMaster('add-task', ['--title', 'Task 2', '--description', 'Second task'], { cwd: testDir });
			const id2 = helpers.extractTaskId(task2.stdout);

			// Add first dependency
			await helpers.taskMaster('add-dependency', ['--id', id2, '--depends-on', id1], { cwd: testDir });

			// Try to create circular dependency
			const result = await helpers.taskMaster('add-dependency', ['--id', id1, '--depends-on', id2], {
				cwd: testDir,
				allowFailure: true
			});
			expect(result.exitCode).not.toBe(0);
			// The command exits with code 1 but doesn't output to stderr
		});

		it('should prevent self-dependencies', async () => {
			const task = await helpers.taskMaster('add-task', ['--title', 'Task', '--description', 'A task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			const result = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', taskId], {
				cwd: testDir,
				allowFailure: true
			});
			expect(result.exitCode).not.toBe(0);
			// The command exits with code 1 but doesn't output to stderr
		});

		it('should detect transitive circular dependencies', async () => {
			// Create chain: A -> B -> C, then try C -> A
			const taskA = await helpers.taskMaster('add-task', ['--title', 'Task A', '--description', 'Task A'], { cwd: testDir });
			const idA = helpers.extractTaskId(taskA.stdout);

			const taskB = await helpers.taskMaster('add-task', ['--title', 'Task B', '--description', 'Task B'], { cwd: testDir });
			const idB = helpers.extractTaskId(taskB.stdout);

			const taskC = await helpers.taskMaster('add-task', ['--title', 'Task C', '--description', 'Task C'], { cwd: testDir });
			const idC = helpers.extractTaskId(taskC.stdout);

			// Create chain
			await helpers.taskMaster('add-dependency', ['--id', idB, '--depends-on', idA], { cwd: testDir });
			await helpers.taskMaster('add-dependency', ['--id', idC, '--depends-on', idB], { cwd: testDir });

			// Try to create circular dependency
			const result = await helpers.taskMaster('add-dependency', ['--id', idA, '--depends-on', idC], {
				cwd: testDir,
				allowFailure: true
			});
			expect(result.exitCode).not.toBe(0);
			// The command exits with code 1 but doesn't output to stderr
		});

		it('should prevent duplicate dependencies', async () => {
			const dep = await helpers.taskMaster('add-task', ['--title', 'Dependency', '--description', 'A dependency'], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);

			const task = await helpers.taskMaster('add-task', ['--title', 'Task', '--description', 'A task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			// Add dependency first time
			await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId], { cwd: testDir });

			// Try to add same dependency again
			const result = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId], { cwd: testDir });
			expect(result).toHaveExitCode(0);
			expect(result.stdout).toContain('already exists');
		});
	});

	describe('Status updates', () => {
		it('should not automatically change task status when adding an incomplete dependency', async () => {
			const dep = await helpers.taskMaster('add-task', [
				'--title',
				'Incomplete dependency',
				'--description',
				'Not done yet'
			], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);

			const task = await helpers.taskMaster('add-task', ['--title', 'Task', '--description', 'A task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			// Start the task
			await helpers.taskMaster('set-status', ['--id', taskId, '--status', 'in-progress'], { cwd: testDir });

			// Add dependency (does not automatically change status to blocked)
			const result = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId], { cwd: testDir });
			expect(result).toHaveExitCode(0);
			// The add-dependency command doesn't automatically change task status

			// Verify status remains in-progress
			const showResult = await helpers.taskMaster('show', [taskId], { cwd: testDir });
			expect(showResult.stdout).toContain('► in-progress');
		});

		it('should not change status if all dependencies are complete', async () => {
			const dep = await helpers.taskMaster('add-task', ['--title', 'Complete dependency', '--description', 'Done'], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);
			await helpers.taskMaster('set-status', ['--id', depId, '--status', 'done'], { cwd: testDir });

			const task = await helpers.taskMaster('add-task', ['--title', 'Task', '--description', 'A task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);
			await helpers.taskMaster('set-status', ['--id', taskId, '--status', 'in-progress'], { cwd: testDir });

			// Add completed dependency
			const result = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', depId], { cwd: testDir });
			expect(result).toHaveExitCode(0);
			expect(result.stdout).not.toContain('Status changed');

			// Status should remain in-progress
			const showResult = await helpers.taskMaster('show', [taskId], { cwd: testDir });
			expect(showResult.stdout).toContain('► in-progress');
		});
	});

	describe('Subtask dependencies', () => {
		it('should add dependency to a subtask', async () => {
			// Create parent and dependency
			const parent = await helpers.taskMaster('add-task', ['--title', 'Parent task', '--description', 'Parent'], { cwd: testDir });
			const parentId = helpers.extractTaskId(parent.stdout);

			const dep = await helpers.taskMaster('add-task', ['--title', 'Dependency', '--description', 'A dependency'], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);

			// Expand parent
			const expandResult = await helpers.taskMaster('expand', ['--id', parentId, '--num', '2'], {
				cwd: testDir,
				timeout: 60000
			});

			// Verify expand succeeded
			expect(expandResult).toHaveExitCode(0);

			// Add dependency to subtask
			const subtaskId = `${parentId}.1`;
			const result = await helpers.taskMaster('add-dependency', ['--id', subtaskId, '--depends-on', depId], { cwd: testDir, allowFailure: true });

			// Debug output
			if (result.exitCode !== 0) {
				console.log('STDERR:', result.stderr);
				console.log('STDOUT:', result.stdout);
			}

			expect(result).toHaveExitCode(0);
			expect(result.stdout).toContain('Successfully added dependency');
		});

		it('should allow subtask to depend on another subtask', async () => {
			// Create parent task
			const parent = await helpers.taskMaster('add-task', ['--title', 'Parent', '--description', 'Parent task'], { cwd: testDir });
			const parentId = helpers.extractTaskId(parent.stdout);

			// Expand to create subtasks
			const expandResult = await helpers.taskMaster('expand', ['--id', parentId, '--num', '3'], {
				cwd: testDir,
				timeout: 60000
			});
			expect(expandResult).toHaveExitCode(0);

			// Make subtask 2 depend on subtask 1
			const result = await helpers.taskMaster('add-dependency', [
				'--id', `${parentId}.2`,
				'--depends-on', `${parentId}.1`
			], { cwd: testDir, allowFailure: true });
			expect(result).toHaveExitCode(0);
			expect(result.stdout).toContain('Successfully added dependency');
		});

		it('should allow parent to depend on its own subtask', async () => {
			// Note: Current implementation allows parent-subtask dependencies
			const parent = await helpers.taskMaster('add-task', ['--title', 'Parent', '--description', 'Parent task'], { cwd: testDir });
			const parentId = helpers.extractTaskId(parent.stdout);

			const expandResult = await helpers.taskMaster('expand', ['--id', parentId, '--num', '2'], {
				cwd: testDir,
				timeout: 60000
			});
			expect(expandResult).toHaveExitCode(0);

			const result = await helpers.taskMaster(
				'add-dependency',
				['--id', parentId, '--depends-on', `${parentId}.1`],
				{ cwd: testDir, allowFailure: true }
			);
			expect(result).toHaveExitCode(0);
			expect(result.stdout).toContain('Successfully added dependency');
		});
	});

	// Note: The add-dependency command only supports single task/dependency operations
	// Bulk operations are not implemented in the current version

	describe('Complex dependency graphs', () => {
		it('should handle diamond dependency pattern', async () => {
			// Create diamond: A depends on B and C, both B and C depend on D
			const taskD = await helpers.taskMaster('add-task', ['--title', 'Task D - base', '--description', 'Base task'], { cwd: testDir });
			const idD = helpers.extractTaskId(taskD.stdout);

			const taskB = await helpers.taskMaster('add-task', ['--title', 'Task B', '--description', 'Middle task B'], { cwd: testDir });
			const idB = helpers.extractTaskId(taskB.stdout);
			await helpers.taskMaster('add-dependency', ['--id', idB, '--depends-on', idD], { cwd: testDir });

			const taskC = await helpers.taskMaster('add-task', ['--title', 'Task C', '--description', 'Middle task C'], { cwd: testDir });
			const idC = helpers.extractTaskId(taskC.stdout);
			await helpers.taskMaster('add-dependency', ['--id', idC, '--depends-on', idD], { cwd: testDir });

			const taskA = await helpers.taskMaster('add-task', ['--title', 'Task A - top', '--description', 'Top task'], { cwd: testDir });
			const idA = helpers.extractTaskId(taskA.stdout);

			// Add both dependencies to create diamond (one by one)
			const result1 = await helpers.taskMaster('add-dependency', ['--id', idA, '--depends-on', idB], { cwd: testDir });
			expect(result1).toHaveExitCode(0);
			expect(result1.stdout).toContain('Successfully added dependency');

			const result2 = await helpers.taskMaster('add-dependency', ['--id', idA, '--depends-on', idC], { cwd: testDir });
			expect(result2).toHaveExitCode(0);
			expect(result2.stdout).toContain('Successfully added dependency');

			// Verify the structure
			const showResult = await helpers.taskMaster('show', [idA], { cwd: testDir });
			expect(showResult.stdout).toContain(idB);
			expect(showResult.stdout).toContain(idC);
		});

		it('should show transitive dependencies', async () => {
			// Create chain A -> B -> C -> D
			const taskD = await helpers.taskMaster('add-task', ['--title', 'Task D', '--description', 'End task'], { cwd: testDir });
			const idD = helpers.extractTaskId(taskD.stdout);

			const taskC = await helpers.taskMaster('add-task', ['--title', 'Task C', '--description', 'Middle task'], { cwd: testDir });
			const idC = helpers.extractTaskId(taskC.stdout);
			await helpers.taskMaster('add-dependency', ['--id', idC, '--depends-on', idD], { cwd: testDir });

			const taskB = await helpers.taskMaster('add-task', ['--title', 'Task B', '--description', 'Middle task'], { cwd: testDir });
			const idB = helpers.extractTaskId(taskB.stdout);
			await helpers.taskMaster('add-dependency', ['--id', idB, '--depends-on', idC], { cwd: testDir });

			const taskA = await helpers.taskMaster('add-task', ['--title', 'Task A', '--description', 'Start task'], { cwd: testDir });
			const idA = helpers.extractTaskId(taskA.stdout);
			await helpers.taskMaster('add-dependency', ['--id', idA, '--depends-on', idB], { cwd: testDir });

			// Show should indicate full dependency chain
			const result = await helpers.taskMaster('show', [idA], { cwd: testDir });
			expect(result).toHaveExitCode(0);
			expect(result.stdout).toContain('Dependencies:');
			expect(result.stdout).toContain(idB);
			// May also show transitive dependencies in some views
		});
	});

	describe('Tag context', () => {
		it('should add dependencies within a tag', async () => {
			// Create tag
			await helpers.taskMaster('add-tag', ['feature'], { cwd: testDir });
			await helpers.taskMaster('use-tag', ['feature'], { cwd: testDir });

			// Create tasks in feature tag
			const dep = await helpers.taskMaster('add-task', ['--title', 'Feature dependency', '--description', 'Dep in feature'], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);

			const task = await helpers.taskMaster('add-task', ['--title', 'Feature task', '--description', 'Task in feature'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			// Add dependency with tag context
			const result = await helpers.taskMaster('add-dependency', [
				'--id', taskId,
				'--depends-on', depId,
				'--tag',
				'feature'
			], { cwd: testDir });
			expect(result).toHaveExitCode(0);
			// Tag context is shown in the emoji header
			expect(result.stdout).toContain('🏷️ tag: feature');
		});

		it('should prevent cross-tag dependencies by default', async () => {
			// Create tasks in different tags
			const masterTask = await helpers.taskMaster('add-task', ['--title', 'Master task', '--description', 'In master tag'], { cwd: testDir });
			const masterId = helpers.extractTaskId(masterTask.stdout);

			await helpers.taskMaster('add-tag', ['feature'], { cwd: testDir });
			await helpers.taskMaster('use-tag', ['feature'], { cwd: testDir });
			const featureTask = await helpers.taskMaster('add-task', [
				'--title',
				'Feature task',
				'--description',
				'In feature tag'
			], { cwd: testDir });
			const featureId = helpers.extractTaskId(featureTask.stdout);

			// Try to add cross-tag dependency
			const result = await helpers.taskMaster(
				'add-dependency',
				['--id', featureId, '--depends-on', masterId, '--tag', 'feature'],
				{ cwd: testDir, allowFailure: true }
			);
			// Depending on implementation, this might warn or fail
		});
	});

	describe('Error handling', () => {
		it('should handle non-existent task IDs', async () => {
			const task = await helpers.taskMaster('add-task', ['--title', 'Task', '--description', 'A task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			const result = await helpers.taskMaster('add-dependency', ['--id', taskId, '--depends-on', '999'], {
				cwd: testDir,
				allowFailure: true
			});
			expect(result.exitCode).not.toBe(0);
			// The command exits with code 1 but doesn't output to stderr
		});

		it('should handle invalid task ID format', async () => {
			const result = await helpers.taskMaster('add-dependency', ['--id', 'invalid-id', '--depends-on', '1'], {
				cwd: testDir,
				allowFailure: true
			});
			expect(result.exitCode).not.toBe(0);
			// The command exits with code 1 but doesn't output to stderr
		});

		it('should require both task and dependency IDs', async () => {
			const result = await helpers.taskMaster('add-dependency', ['--id', '1'], {
				cwd: testDir,
				allowFailure: true
			});
			expect(result.exitCode).not.toBe(0);
			expect(result.stderr).toContain('required');
		});
	});

	describe('Output options', () => {
		it.skip('should support quiet mode (not implemented)', async () => {
			// The -q flag is not supported by add-dependency command
			const dep = await helpers.taskMaster('add-task', ['--title', 'Dep', '--description', 'A dep'], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);

			const task = await helpers.taskMaster('add-task', ['--title', 'Task', '--description', 'A task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			const result = await helpers.taskMaster('add-dependency', [
				'--id', taskId,
				'--depends-on', depId
			], { cwd: testDir });
			expect(result).toHaveExitCode(0);
			expect(result.stdout).toContain('Successfully added dependency');
		});

		it.skip('should support JSON output (not implemented)', async () => {
			// The --json flag is not supported by add-dependency command
			const dep = await helpers.taskMaster('add-task', ['--title', 'Dep', '--description', 'A dep'], { cwd: testDir });
			const depId = helpers.extractTaskId(dep.stdout);

			const task = await helpers.taskMaster('add-task', ['--title', 'Task', '--description', 'A task'], { cwd: testDir });
			const taskId = helpers.extractTaskId(task.stdout);

			const result = await helpers.taskMaster('add-dependency', [
				'--id', taskId,
				'--depends-on', depId,
				'--json'
			], { cwd: testDir });
			expect(result).toHaveExitCode(0);

			const json = JSON.parse(result.stdout);
			expect(json.task.id).toBe(parseInt(taskId));
			expect(json.task.dependencies).toContain(parseInt(depId));
			expect(json.added).toBe(1);
		});
	});

	describe('Visualization', () => {
		it('should show dependency graph after adding', async () => {
			// Create simple dependency chain
			const task1 = await helpers.taskMaster('add-task', ['--title', 'Base task', '--description', 'Base'], { cwd: testDir });
			const id1 = helpers.extractTaskId(task1.stdout);

			const task2 = await helpers.taskMaster('add-task', ['--title', 'Middle task', '--description', 'Middle'], { cwd: testDir });
			const id2 = helpers.extractTaskId(task2.stdout);

			const task3 = await helpers.taskMaster('add-task', ['--title', 'Top task', '--description', 'Top'], { cwd: testDir });
			const id3 = helpers.extractTaskId(task3.stdout);

			// Build chain
			await helpers.taskMaster('add-dependency', ['--id', id2, '--depends-on', id1], { cwd: testDir });
			const result = await helpers.taskMaster('add-dependency', ['--id', id3, '--depends-on', id2], { cwd: testDir });

			expect(result).toHaveExitCode(0);
			// Check for dependency added message
			expect(result.stdout).toContain('Successfully added dependency');
		});
	});
});