Compare commits

..

2 Commits

Author SHA1 Message Date
Ralph Khreish
2b9bb8b94f chore: fix CI issues 2025-05-26 07:40:57 -04:00
Ralph Khreish
6f27399df8 feat(config): Implement TASK_MASTER_PROJECT_ROOT support for project root resolution
- Added support for the TASK_MASTER_PROJECT_ROOT environment variable in MCP configuration, establishing a clear precedence order for project root resolution.
- Updated utility functions to prioritize the environment variable, followed by args.projectRoot and session-based resolution.
- Enhanced error handling and logging for project root determination.
- Introduced new tasks for comprehensive testing and documentation updates related to the new configuration options.
2025-05-26 07:35:50 -04:00
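The precedence described in the commit above might look roughly like the following sketch; the function and the session property are illustrative, not the project's actual code:

```js
// Illustrative sketch of the project-root precedence described in the commit above.
function resolveProjectRoot(args, session) {
	// 1. The TASK_MASTER_PROJECT_ROOT environment variable wins when present.
	if (process.env.TASK_MASTER_PROJECT_ROOT) {
		return process.env.TASK_MASTER_PROJECT_ROOT;
	}
	// 2. Then an explicit projectRoot argument passed to the tool call.
	if (args?.projectRoot) {
		return args.projectRoot;
	}
	// 3. Finally, whatever root the MCP session exposes (property name is illustrative).
	return session?.workspaceRoot ?? null;
}
```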
227 changed files with 7645 additions and 13921 deletions

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
improve findTasks algorithm for resolving tasks path

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix update tool on MCP giving `No valid tasks found`

View File

@@ -1,39 +0,0 @@
---
"task-master-ai": patch
---
Enhanced add-task fuzzy search intelligence and improved user experience
**Smarter Task Discovery:**
- Remove hardcoded category system that always matched "Task management"
- Eliminate arbitrary limits on fuzzy search results (5→25 high relevance, 3→10 medium relevance, 8→20 detailed tasks)
- Improve semantic weighting in Fuse.js search (details=3, description=2, title=1.5) for better relevance (see the sketch after this list)
- Generate context-driven task recommendations based on true semantic similarity
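A minimal sketch of the Fuse.js configuration these weights imply (field names match the task model; the sample tasks, threshold, and query are illustrative, not the project's settings):

```js
import Fuse from 'fuse.js';

// A couple of tasks in the tasks.json shape, trimmed to the searched fields.
const tasks = [
	{ title: 'Add login form', description: 'Build the sign-in UI', details: 'Use the auth API and validate input' },
	{ title: 'Write API docs', description: 'Document endpoints', details: 'Cover request/response examples' }
];

// Weight implementation details highest, then description, then title.
const fuse = new Fuse(tasks, {
	keys: [
		{ name: 'details', weight: 3 },
		{ name: 'description', weight: 2 },
		{ name: 'title', weight: 1.5 }
	],
	includeScore: true, // keep scores so results can be bucketed by relevance
	threshold: 0.4 // illustrative cutoff
});

console.log(fuse.search('login validation'));
```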
**Enhanced Terminal Experience:**
- Fix duplicate banner display issue that was "eating" terminal history (closes #553)
- Remove console.clear() and redundant displayBanner() calls from UI functions
- Preserve command history for better development workflow
- Streamline banner display across all commands (list, next, show, set-status, clear-subtasks, dependency commands)
**Visual Improvements:**
- Replace emoji complexity indicators with clean filled circle characters (●) for professional appearance
- Improve consistency and readability of task complexity display
**AI Provider Compatibility:**
- Change generateObject mode from 'tool' to 'auto' for better cross-provider compatibility (see the sketch after this list)
- Add qwen3-235b-a22b:free model support (closes #687)
- Add smart warnings for free OpenRouter models with limitations (rate limits, restricted context, no tool_use)
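For the generateObject change, a minimal sketch using the Vercel AI SDK (the model, schema, and prompt are placeholders, not the project's configuration):

```js
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// mode: 'auto' lets the SDK choose JSON mode or tool calling per provider,
// rather than forcing tool-based structured output everywhere.
const { object } = await generateObject({
	model: anthropic('claude-3-7-sonnet-20250219'),
	mode: 'auto',
	schema: z.object({
		title: z.string(),
		description: z.string()
	}),
	prompt: 'Draft a task for adding input validation.'
});

console.log(object);
```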
**Technical Improvements:**
- Enhanced context generation in add-task to rely on semantic similarity rather than rigid pattern matching
- Improved dependency analysis and common pattern detection
- Better handling of task relationships and relevance scoring
- More intelligent task suggestion algorithms
The add-task system now provides truly relevant task context based on semantic understanding rather than arbitrary categories and limits, while maintaining a cleaner and more professional terminal experience.

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Fix double .taskmaster directory paths in file resolution utilities
- Closes #636

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Add one-click MCP server installation for Cursor

View File

@@ -1,11 +0,0 @@
{
	"mode": "exit",
	"tag": "rc",
	"initialVersions": {
		"task-master-ai": "0.16.1"
	},
	"changesets": [
		"pink-houses-lay",
		"polite-areas-shave"
	]
}

View File

@@ -0,0 +1,7 @@
---
'task-master-ai': minor
---
Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution
- Some users were having issues where the MCP wasn't able to detect the location of their project root; you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project (see the example below).
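A sketch of what a `.cursor/mcp.json` entry with the new variable might look like (the server name, command, and key values are placeholders, not prescribed configuration); per the commit above, the environment variable takes precedence over `args.projectRoot` and session-based detection:

```json
{
	"mcpServers": {
		"taskmaster-ai": {
			"command": "npx",
			"args": ["-y", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_KEY_HERE",
				"TASK_MASTER_PROJECT_ROOT": "/absolute/path/to/your/project"
			}
		}
	}
}
```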

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Fix add-task MCP command causing an error

View File

@@ -1,22 +0,0 @@
---
"task-master-ai": minor
---
Add sync-readme command to export tasks to the GitHub README
Introduces a new `sync-readme` command that exports your task list to your project's README.md file.
**Features:**
- **Flexible filtering**: Supports `--status` filtering (e.g., pending, done) and `--with-subtasks` flag
- **Smart content management**: Automatically replaces existing exports or appends to new READMEs
- **Metadata display**: Shows export timestamp, subtask inclusion status, and filter settings
**Usage:**
- `task-master sync-readme` - Export tasks without subtasks
- `task-master sync-readme --with-subtasks` - Include subtasks in export
- `task-master sync-readme --status=pending` - Only export pending tasks
- `task-master sync-readme --status=done --with-subtasks` - Export completed tasks with subtasks
Perfect for showcasing project progress on GitHub. Experimental. Open to feedback.

View File

@@ -104,7 +104,7 @@ Task Master offers two primary ways to interact:
Taskmaster configuration is managed through two main mechanisms:
1. **`.taskmaster/config.json` File (Primary):**
1. **`.taskmasterconfig` File (Primary):**
* Located in the project root directory.
* Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
* **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing.

View File

@@ -36,7 +36,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in scripts/example_prd.txt.
### 2. Parse PRD (`parse_prd`)
@@ -45,12 +45,12 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Key Parameters/Options:**
* `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`)
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to 'tasks/tasks.json'.` (CLI: `-o, --output <file>`)
* `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
* `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `scripts/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
---
@@ -77,10 +77,10 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **Notes:** Configuration is stored in `.taskmasterconfig` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them.
* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80.
* **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
* **Warning:** DO NOT MANUALLY EDIT THE .taskmasterconfig FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
---
@@ -348,7 +348,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master analyze-complexity [options]`
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
* **Key Parameters/Options:**
* `output`: `Where to save the complexity analysis report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-o, --output <file>`)
* `output`: `Where to save the complexity analysis report (default: 'scripts/task-complexity-report.json').` (CLI: `-o, --output <file>`)
* `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
* `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
@@ -361,7 +361,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master complexity-report [options]`
* **Description:** `Display the task complexity analysis report in a readable format.`
* **Key Parameters/Options:**
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* `file`: `Path to the complexity report (default: 'scripts/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
---
@@ -382,7 +382,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
## Environment Variables Configuration (Updated)
Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
Taskmaster primarily uses the **`.taskmasterconfig`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
@@ -395,12 +395,12 @@ Environment variables are used **only** for sensitive API keys related to AI pro
* `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
* `OPENROUTER_API_KEY`
* `XAI_API_KEY`
* `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (Optional/Provider Specific inside .taskmaster/config.json):**
* `OLLANA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (Optional/Provider Specific inside .taskmasterconfig):**
* `AZURE_OPENAI_ENDPOINT`
* `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via `task-master models` command or `models` MCP tool.
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmasterconfig` via `task-master models` command or `models` MCP tool.
---

View File

@@ -7,9 +7,3 @@ MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
# Google Vertex AI Configuration
VERTEX_PROJECT_ID=your-gcp-project-id
VERTEX_LOCATION=us-central1
# Optional: Path to service account credentials JSON file (alternative to API key)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json

1
.nvmrc
View File

@@ -1 +0,0 @@
22

7
.prettierignore Normal file
View File

@@ -0,0 +1,7 @@
# Ignore artifacts:
build
coverage
.changeset
tasks
package-lock.json
tests/fixture/*.json

11
.prettierrc Normal file
View File

@@ -0,0 +1,11 @@
{
	"printWidth": 80,
	"tabWidth": 2,
	"useTabs": true,
	"semi": true,
	"singleQuote": true,
	"trailingComma": "none",
	"bracketSpacing": true,
	"arrowParens": "always",
	"endOfLine": "lf"
}

View File

@@ -1,528 +0,0 @@
# Claude Task Master - Product Requirements Document
<PRD>
# Technical Architecture
## System Components
1. **Task Management Core**
- Tasks.json file structure (single source of truth)
- Task model with dependencies, priorities, and metadata
- Task state management system
- Task file generation subsystem
2. **AI Integration Layer**
- Anthropic Claude API integration
- Perplexity API integration (optional)
- Prompt engineering components
- Response parsing and processing
3. **Command Line Interface**
- Command parsing and execution
- Interactive user input handling
- Display and formatting utilities
- Status reporting and feedback system
4. **Cursor AI Integration**
- Cursor rules documentation
- Agent interaction patterns
- Workflow guideline specifications
## Data Models
### Task Model
```json
{
	"id": 1,
	"title": "Task Title",
	"description": "Brief task description",
	"status": "pending|done|deferred",
	"dependencies": [0],
	"priority": "high|medium|low",
	"details": "Detailed implementation instructions",
	"testStrategy": "Verification approach details",
	"subtasks": [
		{
			"id": 1,
			"title": "Subtask Title",
			"description": "Subtask description",
			"status": "pending|done|deferred",
			"dependencies": [],
			"acceptanceCriteria": "Verification criteria"
		}
	]
}
```
### Tasks Collection Model
```json
{
	"meta": {
		"projectName": "Project Name",
		"version": "1.0.0",
		"prdSource": "path/to/prd.txt",
		"createdAt": "ISO-8601 timestamp",
		"updatedAt": "ISO-8601 timestamp"
	},
	"tasks": [
		// Array of Task objects
	]
}
```
### Task File Format
```
# Task ID: <id>
# Title: <title>
# Status: <status>
# Dependencies: <comma-separated list of dependency IDs>
# Priority: <priority>
# Description: <brief description>
# Details:
<detailed implementation notes>
# Test Strategy:
<verification approach>
# Subtasks:
1. <subtask title> - <subtask description>
```
## APIs and Integrations
1. **Anthropic Claude API**
- Authentication via API key
- Prompt construction and streaming
- Response parsing and extraction
- Error handling and retries
2. **Perplexity API (via OpenAI client)**
- Authentication via API key
- Research-oriented prompt construction
- Enhanced contextual response handling
- Fallback mechanisms to Claude
3. **File System API**
- Reading/writing tasks.json
- Managing individual task files
- Command execution logging
- Debug logging system
## Infrastructure Requirements
1. **Node.js Runtime**
- Version 14.0.0 or higher
- ES Module support
- File system access rights
- Command execution capabilities
2. **Configuration Management**
- Environment variable handling
- .env file support
- Configuration validation
- Sensible defaults with overrides
3. **Development Environment**
- Git repository
- NPM package management
- Cursor editor integration
- Command-line terminal access
# Development Roadmap
## Phase 1: Core Task Management System
1. **Task Data Structure**
- Design and implement the tasks.json structure
- Create task model validation
- Implement basic task operations (create, read, update)
- Develop file system interactions
2. **Command Line Interface Foundation**
- Implement command parsing with Commander.js
- Create help documentation
- Implement colorized console output
- Add logging system with configurable levels
3. **Basic Task Operations**
- Implement task listing functionality
- Create task status update capability
- Add dependency tracking
- Implement priority management
4. **Task File Generation**
- Create task file templates
- Implement generation from tasks.json
- Add bi-directional synchronization
- Implement proper file naming and organization
## Phase 2: AI Integration
1. **Claude API Integration**
- Implement API authentication
- Create prompt templates for PRD parsing
- Design response handlers
- Add error management and retries
2. **PRD Parsing System**
- Implement PRD file reading
- Create PRD to task conversion logic
- Add intelligent dependency inference
- Implement priority assignment logic
3. **Task Expansion With Claude**
- Create subtask generation prompts
- Implement subtask creation workflow
- Add context-aware expansion capabilities
- Implement parent-child relationship management
4. **Implementation Drift Handling**
- Add capability to update future tasks
- Implement task rewriting based on new context
- Create dependency chain updates
- Preserve completed work while updating future tasks
## Phase 3: Advanced Features
1. **Perplexity Integration**
- Implement Perplexity API authentication
- Create research-oriented prompts
- Add fallback to Claude when unavailable
- Implement response quality comparison logic
2. **Research-Backed Subtask Generation**
- Create specialized research prompts
- Implement context enrichment
- Add domain-specific knowledge incorporation
- Create more detailed subtask generation
3. **Batch Operations**
- Implement multi-task status updates
- Add bulk subtask generation
- Create task filtering and querying
- Implement advanced dependency management
4. **Project Initialization**
- Create project templating system
- Implement interactive setup
- Add environment configuration
- Create documentation generation
## Phase 4: Cursor AI Integration
1. **Cursor Rules Implementation**
- Create dev_workflow.mdc documentation
- Implement cursor_rules.mdc
- Add self_improve.mdc
- Design rule integration documentation
2. **Agent Workflow Guidelines**
- Document task discovery workflow
- Create task selection guidelines
- Implement implementation guidance
- Add verification procedures
3. **Agent Command Integration**
- Document command syntax for agents
- Create example interactions
- Implement agent response patterns
- Add context management for agents
4. **User Documentation**
- Create detailed README
- Add scripts documentation
- Implement example workflows
- Create troubleshooting guides
# Logical Dependency Chain
## Foundation Layer
1. **Task Data Structure**
- Must be implemented first as all other functionality depends on this
- Defines the core data model for the entire system
- Establishes the single source of truth concept
2. **Command Line Interface**
- Built on top of the task data structure
- Provides the primary user interaction mechanism
- Required for all subsequent operations to be accessible
3. **Basic Task Operations**
- Depends on both task data structure and CLI
- Provides the fundamental operations for task management
- Enables the minimal viable workflow
## Functional Layer
4. **Task File Generation**
- Depends on task data structure and basic operations
- Creates the individual task files for reference
- Enables the file-based workflow complementing tasks.json
5. **Claude API Integration**
- Independent of most previous components but needs the task data structure
- Provides the AI capabilities that enhance the system
- Gateway to advanced task generation features
6. **PRD Parsing System**
- Depends on Claude API integration and task data structure
- Enables the initial task generation workflow
- Creates the starting point for new projects
## Enhancement Layer
7. **Task Expansion With Claude**
- Depends on Claude API integration and basic task operations
- Enhances existing tasks with more detailed subtasks
- Improves the implementation guidance
8. **Implementation Drift Handling**
- Depends on Claude API integration and task operations
- Addresses a key challenge in AI-driven development
- Maintains the relevance of task planning as implementation evolves
9. **Perplexity Integration**
- Can be developed in parallel with other features after Claude integration
- Enhances the quality of generated content
- Provides research-backed improvements
## Advanced Layer
10. **Research-Backed Subtask Generation**
- Depends on Perplexity integration and task expansion
- Provides higher quality, more contextual subtasks
- Enhances the value of the task breakdown
11. **Batch Operations**
- Depends on basic task operations
- Improves efficiency for managing multiple tasks
- Quality-of-life enhancement for larger projects
12. **Project Initialization**
- Depends on most previous components being stable
- Provides a smooth onboarding experience
- Creates a complete project setup in one step
## Integration Layer
13. **Cursor Rules Implementation**
- Can be developed in parallel after basic functionality
- Provides the guidance for Cursor AI agent
- Enhances the AI-driven workflow
14. **Agent Workflow Guidelines**
- Depends on Cursor rules implementation
- Structures how the agent interacts with the system
- Ensures consistent agent behavior
15. **Agent Command Integration**
- Depends on agent workflow guidelines
- Provides specific command patterns for the agent
- Optimizes the agent-user interaction
16. **User Documentation**
- Should be developed alongside all features
- Must be completed before release
- Ensures users can effectively use the system
# Risks and Mitigations
## Technical Challenges
### API Reliability
**Risk**: Anthropic or Perplexity API could have downtime, rate limiting, or breaking changes.
**Mitigation**:
- Implement robust error handling with exponential backoff
- Add fallback mechanisms (Claude fallback for Perplexity)
- Cache important responses to reduce API dependency
- Support offline mode for critical functions
### Model Output Variability
**Risk**: AI models may produce inconsistent or unexpected outputs.
**Mitigation**:
- Design robust prompt templates with strict output formatting requirements
- Implement response validation and error detection
- Add self-correction mechanisms and retries with improved prompts
- Allow manual editing of generated content
### Node.js Version Compatibility
**Risk**: Differences in Node.js versions could cause unexpected behavior.
**Mitigation**:
- Clearly document minimum Node.js version requirements
- Use transpilers if needed for compatibility
- Test across multiple Node.js versions
- Handle version-specific features gracefully
## MVP Definition
### Feature Prioritization
**Risk**: Including too many features in the MVP could delay release and adoption.
**Mitigation**:
- Define MVP as core task management + basic Claude integration
- Ensure each phase delivers a complete, usable product
- Implement feature flags for easy enabling/disabling of features
- Get early user feedback to validate feature importance
### Scope Creep
**Risk**: The project could expand beyond its original intent, becoming too complex.
**Mitigation**:
- Maintain a strict definition of what the tool is and isn't
- Focus on task management for AI-driven development
- Evaluate new features against core value proposition
- Implement extensibility rather than building every feature
### User Expectations
**Risk**: Users might expect a full project management solution rather than a task tracking system.
**Mitigation**:
- Clearly communicate the tool's purpose and limitations
- Provide integration points with existing project management tools
- Focus on the unique value of AI-driven development
- Document specific use cases and example workflows
## Resource Constraints
### Development Capacity
**Risk**: Limited development resources could delay implementation.
**Mitigation**:
- Phase implementation to deliver value incrementally
- Focus on core functionality first
- Leverage open source libraries where possible
- Design for extensibility to allow community contributions
### AI Cost Management
**Risk**: Excessive API usage could lead to high costs.
**Mitigation**:
- Implement token usage tracking and reporting
- Add configurable limits to prevent unexpected costs
- Cache responses where appropriate
- Optimize prompts for token efficiency
- Support local LLM options in the future
### Documentation Overhead
**Risk**: Complexity of the system requires extensive documentation that is time-consuming to maintain.
**Mitigation**:
- Use AI to help generate and maintain documentation
- Create self-documenting commands and features
- Implement progressive documentation (basic to advanced)
- Build help directly into the CLI
# Appendix
## AI Prompt Engineering Specifications
### PRD Parsing Prompt Structure
```
You are assisting with transforming a Product Requirements Document (PRD) into a structured set of development tasks.
Given the following PRD, create a comprehensive list of development tasks that would be needed to implement the described product.
For each task:
1. Assign a short, descriptive title
2. Write a concise description
3. Identify dependencies (which tasks must be completed before this one)
4. Assign a priority (high, medium, low)
5. Include detailed implementation notes
6. Describe a test strategy to verify completion
Structure the tasks in a logical order of implementation.
PRD:
{prd_content}
```
### Task Expansion Prompt Structure
```
You are helping to break down a development task into more manageable subtasks.
Main task:
Title: {task_title}
Description: {task_description}
Details: {task_details}
Please create {num_subtasks} specific subtasks that together would accomplish this main task.
For each subtask, provide:
1. A clear, actionable title
2. A concise description
3. Any dependencies on other subtasks
4. Specific acceptance criteria to verify completion
Additional context:
{additional_context}
```
### Research-Backed Expansion Prompt Structure
```
You are a technical researcher and developer helping to break down a software development task into detailed, well-researched subtasks.
Main task:
Title: {task_title}
Description: {task_description}
Details: {task_details}
Research the latest best practices, technologies, and implementation patterns for this type of task. Then create {num_subtasks} specific, actionable subtasks that together would accomplish the main task.
For each subtask:
1. Provide a clear, specific title
2. Write a detailed description including technical approach
3. Identify dependencies on other subtasks
4. Include specific acceptance criteria
5. Reference any relevant libraries, tools, or resources that should be used
Consider security, performance, maintainability, and user experience in your recommendations.
```
## Task File System Specification
### Directory Structure
```
/
├── .cursor/
│   └── rules/
│       ├── dev_workflow.mdc
│       ├── cursor_rules.mdc
│       └── self_improve.mdc
├── scripts/
│   ├── dev.js
│   └── README.md
├── tasks/
│   ├── task_001.txt
│   ├── task_002.txt
│   └── ...
├── .env
├── .env.example
├── .gitignore
├── package.json
├── README.md
└── tasks.json
```
### Task ID Specification
- Main tasks: Sequential integers (1, 2, 3, ...)
- Subtasks: Parent ID + dot + sequential integer (1.1, 1.2, 2.1, ...)
- ID references: Used in dependencies, command parameters
- ID ordering: Implies suggested implementation order
## Command-Line Interface Specification
### Global Options
- `--help`: Display help information
- `--version`: Display version information
- `--file=<file>`: Specify an alternative tasks.json file
- `--quiet`: Reduce output verbosity
- `--debug`: Increase output verbosity
- `--json`: Output in JSON format (for programmatic use)
### Command Structure
- `node scripts/dev.js <command> [options]`
- All commands operate on tasks.json by default
- Commands follow consistent parameter naming
- Common parameter styles: `--id=<id>`, `--status=<status>`, `--prompt="<text>"`
- Boolean flags: `--all`, `--force`, `--with-subtasks`
## API Integration Specifications
### Anthropic API Configuration
- Authentication: ANTHROPIC_API_KEY environment variable
- Model selection: MODEL environment variable
- Default model: claude-3-7-sonnet-20250219
- Maximum tokens: MAX_TOKENS environment variable (default: 4000)
- Temperature: TEMPERATURE environment variable (default: 0.7)
### Perplexity API Configuration
- Authentication: PERPLEXITY_API_KEY environment variable
- Model selection: PERPLEXITY_MODEL environment variable
- Default model: sonar-medium-online
- Connection: Via OpenAI client
- Fallback: Use Claude if Perplexity unavailable
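A minimal sketch of how these environment-driven settings could be read with the stated defaults (variable handling only; client construction is omitted):

```js
// Read provider settings from the environment, falling back to the defaults listed above.
const anthropicConfig = {
	apiKey: process.env.ANTHROPIC_API_KEY,
	model: process.env.MODEL ?? 'claude-3-7-sonnet-20250219',
	maxTokens: Number(process.env.MAX_TOKENS ?? 4000),
	temperature: Number(process.env.TEMPERATURE ?? 0.7)
};

const perplexityConfig = {
	apiKey: process.env.PERPLEXITY_API_KEY, // when missing, fall back to Claude
	model: process.env.PERPLEXITY_MODEL ?? 'sonar-medium-online'
};
```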
</PRD>

View File

@@ -1,357 +0,0 @@
{
"meta": {
"generatedAt": "2025-05-22T05:48:33.026Z",
"tasksAnalyzed": 6,
"totalTasks": 88,
"analysisCount": 43,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": true
},
"complexityAnalysis": [
{
"taskId": 24,
"taskTitle": "Implement AI-Powered Test Generation Command",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the implementation of the AI-powered test generation command into detailed subtasks covering: command structure setup, AI prompt engineering, test file generation logic, integration with Claude API, and comprehensive error handling.",
"reasoning": "This task involves complex integration with an AI service (Claude), requires sophisticated prompt engineering, and needs to generate structured code files. The existing 3 subtasks are a good start but could be expanded to include more detailed steps for AI integration, error handling, and test file formatting."
},
{
"taskId": 26,
"taskTitle": "Implement Context Foundation for AI Operations",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the context foundation appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or integration with existing systems.",
"reasoning": "This task involves creating a foundation for context integration with several well-defined components. The existing 4 subtasks cover the main implementation areas (context-file flag, cursor rules integration, context extraction utility, and command handler updates). The complexity is moderate as it requires careful integration with existing systems but has clear requirements."
},
{
"taskId": 27,
"taskTitle": "Implement Context Enhancements for AI Operations",
"complexityScore": 7,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing context enhancements appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or performance optimization.",
"reasoning": "This task builds upon the foundation from Task #26 and adds more sophisticated context handling features. The 4 existing subtasks cover the main implementation areas (code context extraction, task history context, PRD context integration, and context formatting). The complexity is higher than the foundation task due to the need for intelligent context selection and optimization."
},
{
"taskId": 28,
"taskTitle": "Implement Advanced ContextManager System",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the advanced ContextManager system appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or backward compatibility with previous context implementations.",
"reasoning": "This task represents the most complex phase of the context implementation, requiring a sophisticated class design, optimization algorithms, and integration with multiple systems. The 5 existing subtasks cover the core implementation areas, but the complexity is high due to the need for intelligent context prioritization, token management, and performance monitoring."
},
{
"taskId": 40,
"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the 'plan' command appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or integration with existing task management workflows.",
"reasoning": "This task involves creating a new command that leverages AI to generate implementation plans. The existing 4 subtasks cover the main implementation areas (retrieving task content, generating plans with AI, formatting in XML, and error handling). The complexity is moderate as it builds on existing patterns for task updates but requires careful AI integration."
},
{
"taskId": 41,
"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
"complexityScore": 8,
"recommendedSubtasks": 10,
"expansionPrompt": "The current 10 subtasks for implementing the visual task dependency graph appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large graphs or additional visualization options.",
"reasoning": "This task involves creating a sophisticated visualization system for terminal display, which is inherently complex due to layout algorithms, ASCII/Unicode rendering, and handling complex dependency relationships. The 10 existing subtasks cover all major aspects of implementation, from CLI interface to accessibility features."
},
{
"taskId": 42,
"taskTitle": "Implement MCP-to-MCP Communication Protocol",
"complexityScore": 9,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the MCP-to-MCP communication protocol appear well-structured. Consider if any additional subtasks are needed for security hardening, performance optimization, or comprehensive documentation.",
"reasoning": "This task involves designing and implementing a complex communication protocol between different MCP tools and servers. It requires sophisticated adapter patterns, client-server architecture, and handling of multiple operational modes. The complexity is very high due to the need for standardization, security, and backward compatibility."
},
{
"taskId": 44,
"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing task automation with webhooks appear comprehensive. Consider if any additional subtasks are needed for security testing, rate limiting implementation, or webhook monitoring tools.",
"reasoning": "This task involves creating a sophisticated event system with webhooks for integration with external services. The complexity is high due to the need for secure authentication, reliable delivery mechanisms, and handling of various webhook formats and protocols. The existing subtasks cover the main implementation areas but security and monitoring could be emphasized more."
},
{
"taskId": 45,
"taskTitle": "Implement GitHub Issue Import Feature",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the GitHub issue import feature appear well-structured. Consider if any additional subtasks are needed for handling GitHub API rate limiting, caching, or supporting additional issue metadata.",
"reasoning": "This task involves integrating with the GitHub API to import issues as tasks. The complexity is moderate as it requires API authentication, data mapping, and error handling. The existing 5 subtasks cover the main implementation areas from design to end-to-end implementation."
},
{
"taskId": 46,
"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the ICE analysis command appear comprehensive. Consider if any additional subtasks are needed for visualization of ICE scores or integration with other prioritization methods.",
"reasoning": "This task involves creating an AI-powered analysis system for task prioritization using the ICE methodology. The complexity is high due to the need for sophisticated scoring algorithms, AI integration, and report generation. The existing subtasks cover the main implementation areas from algorithm design to integration with existing systems."
},
{
"taskId": 47,
"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the task suggestion actions card workflow appear well-structured. Consider if any additional subtasks are needed for user testing, accessibility improvements, or performance optimization.",
"reasoning": "This task involves redesigning the UI workflow for task expansion and management. The complexity is moderate as it requires careful UX design and state management but builds on existing components. The 6 existing subtasks cover the main implementation areas from design to testing."
},
{
"taskId": 48,
"taskTitle": "Refactor Prompts into Centralized Structure",
"complexityScore": 4,
"recommendedSubtasks": 3,
"expansionPrompt": "The current 3 subtasks for refactoring prompts into a centralized structure appear appropriate. Consider if any additional subtasks are needed for prompt versioning, documentation, or testing.",
"reasoning": "This task involves a straightforward refactoring to improve code organization. The complexity is relatively low as it primarily involves moving code rather than creating new functionality. The 3 existing subtasks cover the main implementation areas from directory structure to integration."
},
{
"taskId": 49,
"taskTitle": "Implement Code Quality Analysis Command",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the code quality analysis command appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large codebases or integration with existing code quality tools.",
"reasoning": "This task involves creating a sophisticated code analysis system with pattern recognition, best practice verification, and AI-powered recommendations. The complexity is high due to the need for code parsing, complex analysis algorithms, and integration with AI services. The existing subtasks cover the main implementation areas from algorithm design to user interface."
},
{
"taskId": 50,
"taskTitle": "Implement Test Coverage Tracking System by Task",
"complexityScore": 9,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the test coverage tracking system appear well-structured. Consider if any additional subtasks are needed for integration with CI/CD systems, performance optimization, or visualization tools.",
"reasoning": "This task involves creating a complex system that maps test coverage to specific tasks and subtasks. The complexity is very high due to the need for sophisticated data structures, integration with coverage tools, and AI-powered test generation. The existing subtasks are comprehensive and cover the main implementation areas from data structure design to AI integration."
},
{
"taskId": 51,
"taskTitle": "Implement Perplexity Research Command",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the Perplexity research command appear comprehensive. Consider if any additional subtasks are needed for caching optimization, result formatting, or integration with other research tools.",
"reasoning": "This task involves creating a new command that integrates with the Perplexity AI API for research. The complexity is moderate as it requires API integration, context extraction, and result formatting. The 5 existing subtasks cover the main implementation areas from API client to caching system."
},
{
"taskId": 52,
"taskTitle": "Implement Task Suggestion Command for CLI",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the task suggestion command appear well-structured. Consider if any additional subtasks are needed for suggestion quality evaluation, user feedback collection, or integration with existing task workflows.",
"reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions using AI. The complexity is moderate as it requires AI integration, context collection, and interactive CLI interfaces. The existing subtasks cover the main implementation areas from data collection to user interface."
},
{
"taskId": 53,
"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the subtask suggestion feature appear comprehensive. Consider if any additional subtasks are needed for suggestion quality metrics, user feedback collection, or performance optimization.",
"reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for parent tasks. The complexity is moderate as it builds on existing task management systems but requires sophisticated AI integration and context analysis. The 6 existing subtasks cover the main implementation areas from validation to testing."
},
{
"taskId": 55,
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing positional arguments support appear well-structured. Consider if any additional subtasks are needed for backward compatibility testing, documentation updates, or user experience improvements.",
"reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. The complexity is moderate as it requires careful handling of different argument styles and edge cases. The 5 existing subtasks cover the main implementation areas from analysis to documentation."
},
{
"taskId": 57,
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the CLI user experience appear comprehensive. Consider if any additional subtasks are needed for accessibility testing, internationalization, or performance optimization.",
"reasoning": "This task involves a significant overhaul of the CLI interface to improve user experience. The complexity is high due to the breadth of changes (logging, visual elements, interactive components, etc.) and the need for consistent design across all commands. The 6 existing subtasks cover the main implementation areas from log management to help systems."
},
{
"taskId": 60,
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing the mentor system appear well-structured. Consider if any additional subtasks are needed for mentor personality consistency, discussion quality evaluation, or performance optimization with multiple mentors.",
"reasoning": "This task involves creating a sophisticated mentor simulation system with round-table discussions. The complexity is high due to the need for personality simulation, complex LLM integration, and structured discussion management. The 7 existing subtasks cover the main implementation areas from architecture to testing."
},
{
"taskId": 62,
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
"complexityScore": 4,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the --simple flag appear comprehensive. Consider if any additional subtasks are needed for user experience testing or documentation updates.",
"reasoning": "This task involves adding a simple flag option to bypass AI processing for updates. The complexity is relatively low as it primarily involves modifying existing command handlers and adding a flag. The 8 existing subtasks are very detailed and cover all aspects of implementation from command parsing to testing."
},
{
"taskId": 63,
"taskTitle": "Add pnpm Support for the Taskmaster Package",
"complexityScore": 5,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for adding pnpm support appear comprehensive. Consider if any additional subtasks are needed for CI/CD integration, performance comparison, or documentation updates.",
"reasoning": "This task involves ensuring the package works correctly with pnpm as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 8 existing subtasks cover all major aspects from documentation to binary verification."
},
{
"taskId": 64,
"taskTitle": "Add Yarn Support for Taskmaster Installation",
"complexityScore": 5,
"recommendedSubtasks": 9,
"expansionPrompt": "The current 9 subtasks for adding Yarn support appear comprehensive. Consider if any additional subtasks are needed for performance testing, CI/CD integration, or compatibility with different Yarn versions.",
"reasoning": "This task involves ensuring the package works correctly with Yarn as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 9 existing subtasks are very detailed and cover all aspects from configuration to testing."
},
{
"taskId": 65,
"taskTitle": "Add Bun Support for Taskmaster Installation",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for adding Bun support appear well-structured. Consider if any additional subtasks are needed for handling Bun-specific issues, performance testing, or documentation updates.",
"reasoning": "This task involves adding support for the newer Bun package manager. The complexity is slightly higher than the other package manager tasks due to Bun's differences from Node.js and potential compatibility issues. The 6 existing subtasks cover the main implementation areas from research to documentation."
},
{
"taskId": 67,
"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing JSON output and Cursor keybindings appear well-structured. Consider if any additional subtasks are needed for testing across different operating systems, documentation updates, or user experience improvements.",
"reasoning": "This task involves two distinct features: adding JSON output to CLI commands and creating a keybindings installation command. The complexity is moderate as it requires careful handling of different output formats and OS-specific file paths. The 5 existing subtasks cover the main implementation areas for both features."
},
{
"taskId": 68,
"taskTitle": "Ability to create tasks without parsing PRD",
"complexityScore": 3,
"recommendedSubtasks": 2,
"expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
"reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
},
{
"taskId": 72,
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing PDF generation appear comprehensive. Consider if any additional subtasks are needed for handling large projects, additional visualization options, or integration with existing reporting tools.",
"reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependency visualization. The complexity is high due to the need for PDF generation, data collection, and visualization integration. The 6 existing subtasks cover the main implementation areas from library selection to export options."
},
{
"taskId": 75,
"taskTitle": "Integrate Google Search Grounding for Research Role",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for integrating Google Search Grounding appear well-structured. Consider if any additional subtasks are needed for testing with different query types, error handling, or performance optimization.",
"reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for research roles. The complexity is moderate as it requires careful integration with the existing AI service architecture and conditional logic. The 4 existing subtasks cover the main implementation areas from service layer modification to testing."
},
{
"taskId": 76,
"taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for developing the E2E test framework appear comprehensive. Consider if any additional subtasks are needed for test result reporting, CI/CD integration, or performance benchmarking.",
"reasoning": "This task involves creating a sophisticated end-to-end testing framework for the MCP server. The complexity is high due to the need for subprocess management, protocol handling, and robust test case definition. The 7 existing subtasks cover the main implementation areas from architecture to documentation."
},
{
"taskId": 77,
"taskTitle": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
"complexityScore": 7,
"recommendedSubtasks": 18,
"expansionPrompt": "The current 18 subtasks for implementing AI usage telemetry appear very comprehensive. Consider if any additional subtasks are needed for security hardening, privacy compliance, or user feedback collection.",
"reasoning": "This task involves creating a telemetry system to track AI usage metrics. The complexity is high due to the need for secure data transmission, comprehensive data collection, and integration across multiple commands. The 18 existing subtasks are extremely detailed and cover all aspects of implementation from core utility to provider-specific updates."
},
{
"taskId": 80,
"taskTitle": "Implement Unique User ID Generation and Storage During Installation",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing unique user ID generation appear well-structured. Consider if any additional subtasks are needed for privacy compliance, security auditing, or integration with the telemetry system.",
"reasoning": "This task involves generating and storing a unique user identifier during installation. The complexity is relatively low as it primarily involves UUID generation and configuration file management. The 5 existing subtasks cover the main implementation areas from script structure to documentation."
},
{
"taskId": 81,
"taskTitle": "Task #81: Implement Comprehensive Local Telemetry System with Future Server Integration Capability",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the comprehensive local telemetry system appear well-structured. Consider if any additional subtasks are needed for data migration, storage optimization, or visualization tools.",
"reasoning": "This task involves expanding the telemetry system to capture additional metrics and implement local storage with future server integration capability. The complexity is high due to the breadth of data collection, storage requirements, and privacy considerations. The 6 existing subtasks cover the main implementation areas from data collection to user-facing benefits."
},
{
"taskId": 82,
"taskTitle": "Update supported-models.json with token limit fields",
"complexityScore": 3,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on researching accurate token limit values for each model and ensuring backward compatibility.",
"reasoning": "This task involves a simple update to the supported-models.json file to include new token limit fields. The complexity is low as it primarily involves research and data entry. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 83,
"taskTitle": "Update config-manager.js defaults and getters",
"complexityScore": 4,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on updating the DEFAULTS object and related getter functions while maintaining backward compatibility.",
"reasoning": "This task involves updating the config-manager.js module to replace maxTokens with more specific token limit fields. The complexity is relatively low as it primarily involves modifying existing code rather than creating new functionality. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 84,
"taskTitle": "Implement token counting utility",
"complexityScore": 5,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms.",
"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 69,
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the task 'Enhance Analyze Complexity for Specific Task IDs' into 6 subtasks focusing on: 1) Core logic modification to accept ID parameters, 2) Report merging functionality, 3) CLI interface updates, 4) MCP tool integration, 5) Documentation updates, and 6) Comprehensive testing across all components.",
"reasoning": "This task involves modifying existing functionality across multiple components (core logic, CLI, MCP) with complex logic for filtering tasks and merging reports. The implementation requires careful handling of different parameter combinations and edge cases. The task has interdependent components that need to work together seamlessly, and the report merging functionality adds significant complexity."
},
{
"taskId": 70,
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the 'diagram' command implementation into 5 subtasks: 1) Command interface and parameter handling, 2) Task data extraction and transformation to Mermaid syntax, 3) Diagram rendering with status color coding, 4) Output formatting and file export functionality, and 5) Error handling and edge case management.",
"reasoning": "This task requires implementing a new feature rather than modifying existing code, which reduces complexity from integration challenges. However, it involves working with visualization logic, dependency mapping, and multiple output formats. The color coding based on status and handling of dependency relationships adds moderate complexity. The task is well-defined but requires careful attention to diagram formatting and error handling."
},
{
"taskId": 85,
"taskTitle": "Update ai-services-unified.js for dynamic token limits",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the update of ai-services-unified.js for dynamic token limits into subtasks such as: (1) Import and integrate the token counting utility, (2) Refactor _unifiedServiceRunner to calculate and enforce dynamic token limits, (3) Update error handling for token limit violations, (4) Add and verify logging for token usage, (5) Write and execute tests for various prompt and model scenarios.",
"reasoning": "This task involves significant code changes to a core function, integration of a new utility, dynamic logic for multiple models, and robust error handling. It also requires comprehensive testing for edge cases and integration, making it moderately complex and best managed by splitting into focused subtasks."
},
{
"taskId": 86,
"taskTitle": "Update .taskmasterconfig schema and user guide",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "Expand this task into subtasks: (1) Draft a migration guide for users, (2) Update user documentation to explain new config fields, (3) Modify schema validation logic in config-manager.js, (4) Test and validate backward compatibility and error messaging.",
"reasoning": "The task spans documentation, schema changes, migration guidance, and validation logic. While not algorithmically complex, it requires careful coordination and thorough testing to ensure a smooth user transition and robust validation."
},
{
"taskId": 87,
"taskTitle": "Implement validation and error handling",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "Decompose this task into: (1) Add validation logic for model and config loading, (2) Implement error handling and fallback mechanisms, (3) Enhance logging and reporting for token usage, (4) Develop helper functions for configuration suggestions and improvements.",
"reasoning": "This task is primarily about adding validation, error handling, and logging. While important for robustness, the logic is straightforward and can be modularized into a few clear subtasks."
},
{
"taskId": 89,
"taskTitle": "Introduce Prioritize Command with Enhanced Priority Levels",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement the prioritize command with all required flags and shorthands, (2) Update CLI output and help documentation for new priority levels, (3) Ensure backward compatibility with existing commands, (4) Add error handling for invalid inputs, (5) Write and run tests for all command scenarios.",
"reasoning": "This CLI feature requires command parsing, updating internal logic for new priority levels, documentation, and robust error handling. The complexity is moderate due to the need for backward compatibility and comprehensive testing."
},
{
"taskId": 90,
"taskTitle": "Implement Subtask Progress Analyzer and Reporting System",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the analyzer implementation into: (1) Design and implement progress tracking logic, (2) Develop status validation and issue detection, (3) Build the reporting system with multiple output formats, (4) Integrate analyzer with the existing task management system, (5) Optimize for performance and scalability, (6) Write unit, integration, and performance tests.",
"reasoning": "This is a complex, multi-faceted feature involving data analysis, reporting, integration, and performance optimization. It touches many parts of the system and requires careful design, making it one of the most complex tasks in the list."
},
{
"taskId": 91,
"taskTitle": "Implement Move Command for Tasks and Subtasks",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement move logic for tasks and subtasks, (2) Handle edge cases (invalid ids, non-existent parents, circular dependencies), (3) Update CLI to support move command with flags, (4) Ensure data integrity and update relationships, (5) Write and execute tests for various move scenarios.",
"reasoning": "Moving tasks and subtasks requires careful handling of hierarchical data, edge cases, and data integrity. The command must be robust and user-friendly, necessitating multiple focused subtasks for safe implementation."
}
]
}

View File

@@ -1,55 +0,0 @@
# Task ID: 93
# Title: Implement Google Vertex AI Provider Integration
# Status: pending
# Dependencies: 19, 94
# Priority: medium
# Description: Develop a dedicated Google Vertex AI provider in the codebase, enabling users to leverage Vertex AI models with enterprise-grade configuration and authentication.
# Details:
1. Create a new provider class in `src/ai-providers/google-vertex.js` that extends the existing BaseAIProvider, following the established structure used by other providers (e.g., google.js, openai.js).
2. Integrate the Vercel AI SDK's `@ai-sdk/google-vertex` package. Use the default `vertex` provider for standard usage, and allow for custom configuration via `createVertex` for advanced scenarios (e.g., specifying project ID, location, and credentials).
3. Implement all required interface methods (such as `getClient`, `generateText`, etc.) to ensure compatibility with the provider system. Reference the implementation patterns from other providers for consistency.
4. Handle Vertex AI-specific configuration, including project ID, location, and Google Cloud authentication. Support both environment-based authentication and explicit service account credentials via `googleAuthOptions`.
5. Implement robust error handling for Vertex-specific issues, including authentication failures and API errors, leveraging the system-wide error handling patterns.
6. Update `src/ai-providers/index.js` to export the new provider, and add the 'vertex' entry to the PROVIDERS object in `scripts/modules/ai-services-unified.js`.
7. Update documentation to provide clear setup instructions for Google Vertex AI, including required environment variables, service account setup, and configuration examples.
8. Ensure the implementation is modular and maintainable, supporting future expansion for additional Vertex AI features or models.
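A rough sketch of the intended provider shape is below. It is illustrative only: the `BaseAIProvider` import path, constructor, environment-variable fallbacks, and error conventions are assumptions that may differ from the real contract in this codebase; `createVertex` and `googleAuthOptions` come from the `@ai-sdk/google-vertex` package referenced above.
```javascript
// Hypothetical sketch only – names and paths outside the task description are assumptions.
import { createVertex } from '@ai-sdk/google-vertex';
import { BaseAIProvider } from './base-provider.js'; // assumed location of the base class

export class VertexAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Google Vertex AI';
	}

	// Build a Vertex client, preferring explicit parameters and falling back
	// to environment-based Google Cloud authentication.
	getClient(params = {}) {
		const { projectId, location, credentials } = params;
		try {
			return createVertex({
				project: projectId ?? process.env.GOOGLE_CLOUD_PROJECT,
				location: location ?? process.env.GOOGLE_CLOUD_LOCATION ?? 'us-central1',
				...(credentials ? { googleAuthOptions: { credentials } } : {})
			});
		} catch (error) {
			// Wrap Vertex-specific setup failures with actionable context.
			throw new Error(`Failed to initialize Vertex AI client: ${error.message}`);
		}
	}
}
```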
# Test Strategy:
- Write unit tests for the new provider class, covering all interface methods and configuration scenarios (default, custom, error cases).
- Verify that the provider can successfully authenticate using both environment-based and explicit service account credentials.
- Test integration with the provider system by selecting 'vertex' as the provider and generating text using supported Vertex AI models (e.g., Gemini).
- Simulate authentication and API errors to confirm robust error handling and user feedback.
- Confirm that the provider is correctly exported and available in the PROVIDERS object.
- Review and validate the updated documentation for accuracy and completeness.
# Subtasks:
## 1. Create Google Vertex AI Provider Class [pending]
### Dependencies: None
### Description: Develop a new provider class in `src/ai-providers/google-vertex.js` that extends the BaseAIProvider, following the structure of existing providers.
### Details:
Ensure the new class is consistent with the architecture of other providers such as google.js and openai.js, and is ready to integrate with the AI SDK.
## 2. Integrate Vercel AI SDK Google Vertex Package [pending]
### Dependencies: 93.1
### Description: Integrate the `@ai-sdk/google-vertex` package, supporting both the default provider and custom configuration via `createVertex`.
### Details:
Allow for standard usage with the default `vertex` provider and advanced scenarios using `createVertex` for custom project ID, location, and credentials as per SDK documentation.
## 3. Implement Provider Interface Methods [pending]
### Dependencies: 93.2
### Description: Implement all required interface methods (e.g., `getClient`, `generateText`) to ensure compatibility with the provider system.
### Details:
Reference implementation patterns from other providers to maintain consistency and ensure all required methods are present and functional.
## 4. Handle Vertex AI Configuration and Authentication [pending]
### Dependencies: 93.3
### Description: Implement support for Vertex AI-specific configuration, including project ID, location, and authentication via environment variables or explicit service account credentials.
### Details:
Support both environment-based authentication and explicit credentials using `googleAuthOptions`, following Google Cloud and Vertex AI setup best practices.
## 5. Update Exports, Documentation, and Error Handling [pending]
### Dependencies: 93.4
### Description: Export the new provider, update the PROVIDERS object, and document setup instructions, including robust error handling for Vertex-specific issues.
### Details:
Update `src/ai-providers/index.js` and `scripts/modules/ai-services-unified.js`, and provide clear documentation for setup, configuration, and error handling patterns.

View File

@@ -1,103 +0,0 @@
# Task ID: 94
# Title: Implement Azure OpenAI Provider Integration
# Status: done
# Dependencies: 19, 26
# Priority: medium
# Description: Create a comprehensive Azure OpenAI provider implementation that integrates with the existing AI provider system, enabling users to leverage Azure-hosted OpenAI models through proper authentication and configuration.
# Details:
Implement the Azure OpenAI provider following the established provider pattern:
1. **Create Azure Provider Class** (`src/ai-providers/azure.js`):
- Extend BaseAIProvider class following the same pattern as openai.js and google.js
- Import and use `createAzureOpenAI` from `@ai-sdk/azure` package
- Implement required interface methods: `getClient()`, `validateConfig()`, and any other abstract methods
- Handle Azure-specific configuration: endpoint URL, API key, and deployment name
- Add proper error handling for missing or invalid Azure configuration
2. **Configuration Management**:
- Support environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_DEPLOYMENT
- Validate that both endpoint and API key are provided
- Provide clear error messages for configuration issues
- Follow the same configuration pattern as other providers
3. **Integration Updates**:
- Update `src/ai-providers/index.js` to export the new AzureProvider
- Add 'azure' entry to the PROVIDERS object in `scripts/modules/ai-services-unified.js`
- Ensure the provider is properly registered and accessible through the unified AI services
4. **Error Handling**:
- Implement Azure-specific error handling for authentication failures
- Handle endpoint connectivity issues with helpful error messages
- Validate deployment name and provide guidance for common configuration mistakes
- Follow the established error handling patterns from Task 19
5. **Documentation Updates**:
- Update any provider documentation to include Azure OpenAI setup instructions
- Add configuration examples for Azure OpenAI environment variables
- Include troubleshooting guidance for common Azure-specific issues
The implementation should maintain consistency with existing provider implementations while handling Azure's unique authentication and endpoint requirements.
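As a loose illustration of the configuration rules in point 2, a standalone validation helper could look like the sketch below; the helper name, config shape, and error wording are placeholders rather than the project's actual API, while the environment variable names are the ones listed above.
```javascript
// Hypothetical validation helper – not the project's actual implementation.
export function validateAzureConfig(config = {}) {
	const endpoint = config.endpoint ?? process.env.AZURE_OPENAI_ENDPOINT;
	const apiKey = config.apiKey ?? process.env.AZURE_OPENAI_API_KEY;
	const deployment = config.deployment ?? process.env.AZURE_OPENAI_DEPLOYMENT;

	// Both endpoint and API key are required; fail fast with a clear message.
	if (!endpoint || !apiKey) {
		throw new Error(
			'Azure OpenAI requires both AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY to be configured.'
		);
	}
	// A deployment name is needed to route requests to a specific model deployment.
	if (!deployment) {
		throw new Error(
			'Azure OpenAI requires a deployment name via AZURE_OPENAI_DEPLOYMENT.'
		);
	}
	return { endpoint, apiKey, deployment };
}
```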
# Test Strategy:
Verify the Azure OpenAI provider implementation through comprehensive testing:
1. **Unit Testing**:
- Test provider class instantiation and configuration validation
- Verify getClient() method returns properly configured Azure OpenAI client
- Test error handling for missing/invalid configuration parameters
- Validate that the provider correctly extends BaseAIProvider
2. **Integration Testing**:
- Test provider registration in the unified AI services system
- Verify the provider appears in the PROVIDERS object and is accessible
- Test end-to-end functionality with valid Azure OpenAI credentials
- Validate that the provider works with existing AI operation workflows
3. **Configuration Testing**:
- Test with various environment variable combinations
- Verify proper error messages for missing endpoint or API key
- Test with invalid endpoint URLs and ensure graceful error handling
- Validate deployment name handling and error reporting
4. **Manual Verification**:
- Set up test Azure OpenAI credentials and verify successful connection
- Test actual AI operations (like task expansion) using the Azure provider
- Verify that the provider selection works correctly in the CLI
- Confirm that error messages are helpful and actionable for users
5. **Documentation Verification**:
- Ensure all configuration examples work as documented
- Verify that setup instructions are complete and accurate
- Test troubleshooting guidance with common error scenarios
# Subtasks:
## 1. Create Azure Provider Class [done]
### Dependencies: None
### Description: Implement the AzureProvider class that extends BaseAIProvider to handle Azure OpenAI integration
### Details:
Create the AzureProvider class in src/ai-providers/azure.js that extends BaseAIProvider. Import createAzureOpenAI from @ai-sdk/azure package. Implement required interface methods including getClient() and validateConfig(). Handle Azure-specific configuration parameters: endpoint URL, API key, and deployment name. Follow the established pattern in openai.js and google.js. Ensure proper error handling for missing or invalid configuration.
## 2. Implement Configuration Management [done]
### Dependencies: 94.1
### Description: Add support for Azure OpenAI environment variables and configuration validation
### Details:
Implement configuration management for Azure OpenAI provider that supports environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and AZURE_OPENAI_DEPLOYMENT. Add validation logic to ensure both endpoint and API key are provided. Create clear error messages for configuration issues. Follow the same configuration pattern as implemented in other providers. Ensure the validateConfig() method properly checks all required Azure configuration parameters.
## 3. Update Provider Integration [done]
### Dependencies: 94.1, 94.2
### Description: Integrate the Azure provider into the existing AI provider system
### Details:
Update src/ai-providers/index.js to export the new AzureProvider class. Add 'azure' entry to the PROVIDERS object in scripts/modules/ai-services-unified.js. Ensure the provider is properly registered and accessible through the unified AI services. Test that the provider can be instantiated and used through the provider selection mechanism. Follow the same integration pattern used for existing providers.
## 4. Implement Azure-Specific Error Handling [done]
### Dependencies: 94.1, 94.2
### Description: Add specialized error handling for Azure OpenAI-specific issues
### Details:
Implement Azure-specific error handling for authentication failures, endpoint connectivity issues, and deployment name validation. Provide helpful error messages that guide users to resolve common configuration mistakes. Follow the established error handling patterns from Task 19. Create custom error classes if needed for Azure-specific errors. Ensure errors are properly propagated and formatted for user display.
## 5. Update Documentation [done]
### Dependencies: 94.1, 94.2, 94.3, 94.4
### Description: Create comprehensive documentation for the Azure OpenAI provider integration
### Details:
Update provider documentation to include Azure OpenAI setup instructions. Add configuration examples for Azure OpenAI environment variables. Include troubleshooting guidance for common Azure-specific issues. Document the required Azure resource creation process with references to Microsoft's documentation. Provide examples of valid configuration settings and explain each required parameter. Include information about Azure OpenAI model deployment requirements.

View File

@@ -1,149 +0,0 @@
# Task ID: 95
# Title: Implement .taskmaster Directory Structure
# Status: done
# Dependencies: 1, 3, 4, 17
# Priority: high
# Description: Consolidate all Task Master-managed files in user projects into a clean, centralized .taskmaster/ directory structure to improve organization and keep user project directories clean, based on GitHub issue #275.
# Details:
This task involves restructuring how Task Master organizes files within user projects to improve maintainability and keep project directories clean:
1. Create a new `.taskmaster/` directory structure in user projects:
- Move task files from `tasks/` to `.taskmaster/tasks/`
- Move PRD files from `scripts/` to `.taskmaster/docs/`
- Move analysis reports to `.taskmaster/reports/`
- Move configuration from `.taskmasterconfig` to `.taskmaster/config.json`
- Create `.taskmaster/templates/` for user templates
2. Update all Task Master code that creates/reads user files:
- Modify task file generation to use `.taskmaster/tasks/`
- Update PRD file handling to use `.taskmaster/docs/`
- Adjust report generation to save to `.taskmaster/reports/`
- Update configuration loading to look for `.taskmaster/config.json`
- Modify any path resolution logic in Task Master's codebase
3. Ensure backward compatibility during migration:
- Implement path fallback logic that checks both old and new locations
- Add deprecation warnings when old paths are detected
- Create a migration command to help users transition to the new structure
- Preserve existing user data during migration
4. Update the project initialization process:
- Modify the init command to create the new `.taskmaster/` directory structure
- Update default file creation to use new paths
5. Benefits of the new structure:
- Keeps user project directories clean and organized
- Clearly separates Task Master files from user project files
- Makes it easier to add Task Master to .gitignore if desired
- Provides logical grouping of different file types
6. Test thoroughly to ensure all functionality works with the new structure:
- Verify all Task Master commands work with the new paths
- Ensure backward compatibility functions correctly
- Test migration process preserves all user data
7. Update documentation:
- Update README.md to reflect the new user file structure
- Add migration guide for existing users
- Document the benefits of the cleaner organization
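The backward-compatibility behavior in point 3 can be pictured with a small fallback resolver like the sketch below; the function name, location list, and warning text are illustrative only, assuming the new and legacy paths described above.
```javascript
// Illustrative fallback resolution – the real utilities may differ in naming and scope.
import fs from 'fs';
import path from 'path';

const TASKS_LOCATIONS = [
	'.taskmaster/tasks/tasks.json', // new consolidated location
	'tasks/tasks.json' // legacy location
];

export function resolveTasksPath(projectRoot) {
	for (const relative of TASKS_LOCATIONS) {
		const candidate = path.join(projectRoot, relative);
		if (fs.existsSync(candidate)) {
			// Nudge users toward the new structure when the legacy path is still in use.
			if (relative.startsWith('tasks/')) {
				console.warn(
					'Deprecation: found tasks/tasks.json in the legacy location; run "task-master migrate" to move it into .taskmaster/.'
				);
			}
			return candidate;
		}
	}
	// Default to the new location when nothing exists yet.
	return path.join(projectRoot, TASKS_LOCATIONS[0]);
}
```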
# Test Strategy:
1. Unit Testing:
- Create unit tests for path resolution that verify both new and old paths work
- Test configuration loading with both `.taskmasterconfig` and `.taskmaster/config.json`
- Verify the migration command correctly moves files and preserves content
- Test file creation in all new subdirectories
2. Integration Testing:
- Run all existing integration tests with the new directory structure
- Verify that all Task Master commands function correctly with new paths
- Test backward compatibility by running commands with old file structure
3. Migration Testing:
- Test the migration process on sample projects with existing tasks and files
- Verify all tasks, PRDs, reports, and configurations are correctly moved
- Ensure no data loss occurs during migration
- Test migration with partial existing structures (e.g., only tasks/ exists)
4. User Workflow Testing:
- Test complete workflows: init → create tasks → generate reports → update PRDs
- Verify all generated files go to correct locations in `.taskmaster/`
- Test that user project directories remain clean
5. Manual Testing:
- Perform end-to-end testing with the new structure
- Create, update, and delete tasks using the new structure
- Generate reports and verify they're saved to `.taskmaster/reports/`
6. Documentation Verification:
- Review all documentation to ensure it accurately reflects the new user file structure
- Verify the migration guide provides clear instructions
7. Regression Testing:
- Run the full test suite to ensure no regressions were introduced
- Verify existing user projects continue to work during transition period
# Subtasks:
## 1. Create .taskmaster directory structure [done]
### Dependencies: None
### Description: Create the new .taskmaster directory and move existing files to their new locations
### Details:
Create a new .taskmaster/ directory in the project root. Move the tasks/ directory to .taskmaster/tasks/. Move the scripts/ directory to .taskmaster/scripts/. Move the .taskmasterconfig file to .taskmaster/config.json. Ensure proper file permissions are maintained during the move.
<info added on 2025-05-29T15:03:56.912Z>
Create the new .taskmaster/ directory structure in user projects with subdirectories for tasks/, docs/, reports/, and templates/. Move the existing .taskmasterconfig file to .taskmaster/config.json. Since this project is also a Task Master user, move this project's current user files (tasks.json, PRD files, etc.) to the new .taskmaster/ structure to test the implementation. This subtask focuses on user project directory structure, not Task Master source code relocation.
</info added on 2025-05-29T15:03:56.912Z>
## 2. Update Task Master code for new user file paths [done]
### Dependencies: 95.1
### Description: Modify all Task Master code that creates or reads user project files to use the new .taskmaster structure
### Details:
Update Task Master's file handling code to use the new paths: tasks in .taskmaster/tasks/, PRD files in .taskmaster/docs/, reports in .taskmaster/reports/, and config in .taskmaster/config.json. Modify path resolution logic throughout the Task Master codebase to reference the new user file locations.
## 3. Update task file generation system [done]
### Dependencies: 95.1
### Description: Modify the task file generation system to use the new directory structure
### Details:
Update the task file generation system to create and read task files from .taskmaster/tasks/ instead of tasks/. Ensure all template paths are updated. Modify any path resolution logic specific to task file handling.
## 4. Implement backward compatibility logic [done]
### Dependencies: 95.2, 95.3
### Description: Add fallback mechanisms to support both old and new file locations during transition
### Details:
Implement path fallback logic that checks both old and new locations when files aren't found. Add deprecation warnings when old paths are used, informing users about the new structure. Ensure error messages are clear about the transition.
## 5. Create migration command for users [done]
### Dependencies: 95.1, 95.4
### Description: Develop a Task Master command to help users transition their existing projects to the new structure
### Details:
Create a 'taskmaster migrate' command that automatically moves user files from old locations to the new .taskmaster structure. Move tasks/ to .taskmaster/tasks/, scripts/prd.txt to .taskmaster/docs/, reports to .taskmaster/reports/, and .taskmasterconfig to .taskmaster/config.json. Include backup functionality and validation to ensure migration completed successfully.
## 6. Update project initialization process [done]
### Dependencies: 95.1
### Description: Modify the init command to create the new directory structure for new projects
### Details:
Update the init command to create the .taskmaster directory and its subdirectories (tasks/, docs/, reports/, templates/). Modify default file creation to use the new paths. Ensure new projects are created with the correct structure from the start.
## 7. Update PRD and report file handling [done]
### Dependencies: 95.2, 95.6
### Description: Modify PRD file creation and report generation to use the new directory structure
### Details:
Update PRD file handling to create and read files from .taskmaster/docs/ instead of scripts/. Modify report generation (like task-complexity-report.json) to save to .taskmaster/reports/. Ensure all file operations use the new paths consistently.
## 8. Update documentation and create migration guide [done]
### Dependencies: 95.5, 95.6, 95.7
### Description: Update all documentation to reflect the new directory structure and provide migration guidance
### Details:
Update README.md and other documentation to reflect the new .taskmaster structure for user projects. Create a comprehensive migration guide explaining the benefits of the new structure and how to migrate existing projects. Include examples of the new directory layout and explain how it keeps user project directories clean.
## 9. Add templates directory support [done]
### Dependencies: 95.2, 95.6
### Description: Implement support for user templates in the .taskmaster/templates/ directory
### Details:
Create functionality to support user-defined templates in .taskmaster/templates/. Allow users to store custom task templates, PRD templates, or other reusable files. Update Task Master commands to recognize and use templates from this directory when available.
## 10. Verify clean user project directories [done]
### Dependencies: 95.8, 95.9
### Description: Ensure the new structure keeps user project root directories clean and organized
### Details:
Validate that after implementing the new structure, user project root directories only contain their actual project files plus the single .taskmaster/ directory. Verify that no Task Master files are created outside of .taskmaster/. Test that users can easily add .taskmaster/ to .gitignore if they choose to exclude Task Master files from version control.

View File

@@ -1,37 +0,0 @@
# Task ID: 96
# Title: Create Export Command for On-Demand Task File and PDF Generation
# Status: pending
# Dependencies: 2, 4, 95
# Priority: medium
# Description: Develop an 'export' CLI command that generates task files and comprehensive PDF exports on-demand, replacing automatic file generation and providing users with flexible export options.
# Details:
Implement a new 'export' command in the CLI that supports two primary modes: (1) generating individual task files on-demand (superseding the current automatic generation system), and (2) producing a comprehensive PDF export. The PDF should include: a first page with the output of 'tm list --with-subtasks', followed by individual pages for each task (using 'tm show <task_id>') and each subtask (using 'tm show <subtask_id>'). Integrate PDF generation using a robust library (e.g., pdfkit, Puppeteer, or jsPDF) to ensure high-quality output and proper pagination. Refactor or disable any existing automatic file generation logic to avoid performance overhead. Ensure the command supports flexible output paths and options for exporting only files, only PDF, or both. Update documentation and help output to reflect the new export capabilities. Consider concurrency and error handling for large projects. Ensure the export process is efficient and does not block the main CLI thread unnecessarily.
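A minimal sketch of the PDF assembly step, assuming pdfkit (one of the candidate libraries named above) is chosen, is shown below; `listText` and `taskTexts` are placeholders for whatever ends up capturing the output of `tm list --with-subtasks` and `tm show <id>`.
```javascript
// Minimal sketch only – library choice and data plumbing are still open questions.
import fs from 'fs';
import PDFDocument from 'pdfkit';

export function exportPdf({ outputPath, listText, taskTexts }) {
	const doc = new PDFDocument({ margin: 40 });
	doc.pipe(fs.createWriteStream(outputPath));

	// First page: overview equivalent to `tm list --with-subtasks`.
	doc.font('Courier').fontSize(10).text(listText);

	// One page per task and subtask, equivalent to `tm show <id>`.
	for (const taskText of taskTexts) {
		doc.addPage();
		doc.text(taskText);
	}

	doc.end();
}
```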
# Test Strategy:
1. Run the 'export' command with various options and verify that task files are generated only on-demand, not automatically. 2. Generate a PDF export and confirm that the first page contains the correct 'tm list --with-subtasks' output, and that each subsequent page accurately reflects the output of 'tm show <task_id>' and 'tm show <subtask_id>' for all tasks and subtasks. 3. Test exporting in projects with large numbers of tasks and subtasks to ensure performance and correctness. 4. Attempt exports with invalid paths or missing data to verify robust error handling. 5. Confirm that no automatic file generation occurs during normal task operations. 6. Review CLI help output and documentation for accuracy regarding the new export functionality.
# Subtasks:
## 1. Remove Automatic Task File Generation from Task Operations [pending]
### Dependencies: None
### Description: Eliminate all calls to generateTaskFiles() from task operations such as add-task, remove-task, set-status, and similar commands to prevent unnecessary performance overhead.
### Details:
Audit the codebase for any automatic invocations of generateTaskFiles() and remove or refactor them to ensure task files are not generated automatically during task operations.
## 2. Implement Export Command Infrastructure with On-Demand Task File Generation [pending]
### Dependencies: 96.1
### Description: Develop the CLI 'export' command infrastructure, enabling users to generate task files on-demand by invoking the preserved generateTaskFiles function only when requested.
### Details:
Create the export command with options for output paths and modes (files, PDF, or both). Ensure generateTaskFiles is only called within this command and not elsewhere.
## 3. Implement Comprehensive PDF Export Functionality [pending]
### Dependencies: 96.2
### Description: Add PDF export capability to the export command, generating a structured PDF with a first page listing all tasks and subtasks, followed by individual pages for each task and subtask, using a robust PDF library.
### Details:
Integrate a PDF generation library (e.g., pdfkit, Puppeteer, or jsPDF). Ensure the PDF includes the output of 'tm list --with-subtasks' on the first page, and uses 'tm show <task_id>' and 'tm show <subtask_id>' for subsequent pages. Handle pagination, concurrency, and error handling for large projects.
## 4. Update Documentation, Tests, and CLI Help for Export Workflow [pending]
### Dependencies: 96.2, 96.3
### Description: Revise all relevant documentation, automated tests, and CLI help output to reflect the new export-based workflow and available options.
### Details:
Update user guides, README files, and CLI help text. Add or modify tests to cover the new export command and its options. Ensure all documentation accurately describes the new workflow and usage.

View File

@@ -1,47 +0,0 @@
<context>
# Overview
[Provide a high-level overview of your product here. Explain what problem it solves, who it's for, and why it's valuable.]
# Core Features
[List and describe the main features of your product. For each feature, include:
- What it does
- Why it's important
- How it works at a high level]
# User Experience
[Describe the user journey and experience. Include:
- User personas
- Key user flows
- UI/UX considerations]
</context>
<PRD>
# Technical Architecture
[Outline the technical implementation details:
- System components
- Data models
- APIs and integrations
- Infrastructure requirements]
# Development Roadmap
[Break down the development process into phases:
- MVP requirements
- Future enhancements
- Do not think about timelines whatsoever -- all that matters is scope and detailing exactly what needs to be built in each phase so it can later be cut up into tasks]
# Logical Dependency Chain
[Define the logical order of development:
- Which features need to be built first (foundation)
- Getting as quickly as possible to a usable/visible front end that works
- Properly pacing and scoping each feature so it is atomic but can also be built upon and improved as development progresses]
# Risks and Mitigations
[Identify potential risks and how they'll be addressed:
- Technical challenges
- Figuring out the MVP that we can build upon
- Resource constraints]
# Appendix
[Include any additional information:
- Research findings
- Technical specifications]
</PRD>

View File

@@ -20,14 +20,13 @@
}
},
"global": {
"userId": "1234567890",
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseURL": "http://localhost:11434/api",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"azureBaseURL": "https://your-endpoint.azure.com/"
"ollamaBaseUrl": "http://localhost:11434/api",
"userId": "1234567890",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}

View File

@@ -1,149 +1,5 @@
# task-master-ai
## 0.16.2-rc.0
### Patch Changes
- [#655](https://github.com/eyaltoledano/claude-task-master/pull/655) [`edaa5fe`](https://github.com/eyaltoledano/claude-task-master/commit/edaa5fe0d56e0e4e7c4370670a7a388eebd922ac) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix double .taskmaster directory paths in file resolution utilities
- Closes #636
- [#671](https://github.com/eyaltoledano/claude-task-master/pull/671) [`86ea6d1`](https://github.com/eyaltoledano/claude-task-master/commit/86ea6d1dbc03eeb39f524f565b50b7017b1d2c9c) Thanks [@joedanz](https://github.com/joedanz)! - Add one-click MCP server installation for Cursor
## 0.16.1
### Patch Changes
- [#641](https://github.com/eyaltoledano/claude-task-master/pull/641) [`ad61276`](https://github.com/eyaltoledano/claude-task-master/commit/ad612763ffbdd35aa1b593c9613edc1dc27a8856) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bedrock issues
- [#648](https://github.com/eyaltoledano/claude-task-master/pull/648) [`9b4168b`](https://github.com/eyaltoledano/claude-task-master/commit/9b4168bb4e4dfc2f4fb0cf6bd5f81a8565879176) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP tool calls logging errors
- [#641](https://github.com/eyaltoledano/claude-task-master/pull/641) [`ad61276`](https://github.com/eyaltoledano/claude-task-master/commit/ad612763ffbdd35aa1b593c9613edc1dc27a8856) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Update rules for new directory structure
- [#648](https://github.com/eyaltoledano/claude-task-master/pull/648) [`9b4168b`](https://github.com/eyaltoledano/claude-task-master/commit/9b4168bb4e4dfc2f4fb0cf6bd5f81a8565879176) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bug in expand_all mcp tool
- [#641](https://github.com/eyaltoledano/claude-task-master/pull/641) [`ad61276`](https://github.com/eyaltoledano/claude-task-master/commit/ad612763ffbdd35aa1b593c9613edc1dc27a8856) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP crashing after certain commands due to console logs
## 0.16.0
### Minor Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add AWS bedrock support
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - # Add Google Vertex AI Provider Integration
- Implemented `VertexAIProvider` class extending BaseAIProvider
- Added authentication and configuration handling for Vertex AI
- Updated configuration manager with Vertex-specific getters
- Modified AI services unified system to integrate the provider
- Added documentation for Vertex AI setup and configuration
- Updated environment variable examples for Vertex AI support
- Implemented specialized error handling for Vertex-specific issues
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for Azure
- [#612](https://github.com/eyaltoledano/claude-task-master/pull/612) [`669b744`](https://github.com/eyaltoledano/claude-task-master/commit/669b744ced454116a7b29de6c58b4b8da977186a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Increased minimum required node version to > 18 (was > 14)
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Renamed baseUrl to baseURL
- [#604](https://github.com/eyaltoledano/claude-task-master/pull/604) [`80735f9`](https://github.com/eyaltoledano/claude-task-master/commit/80735f9e60c7dda7207e169697f8ac07b6733634) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution
- Some users were having issues where the MCP wasn't able to detect the location of their project root, you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project.
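  A hedged example of supplying the variable through an MCP configuration (the server entry mirrors the examples used elsewhere in this repository; the path is a placeholder for your machine):
  ```json
  {
  	"mcpServers": {
  		"task-master-ai": {
  			"command": "npx",
  			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
  			"env": {
  				"TASK_MASTER_PROJECT_ROOT": "/absolute/path/to/your/project"
  			}
  		}
  	}
  }
  ```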
- [#619](https://github.com/eyaltoledano/claude-task-master/pull/619) [`3f64202`](https://github.com/eyaltoledano/claude-task-master/commit/3f64202c9feef83f2bf383c79e4367d337c37e20) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Consolidate Task Master files into unified .taskmaster directory structure
This release introduces a new consolidated directory structure that organizes all Task Master files under a single `.taskmaster/` directory for better project organization and cleaner workspace management.
**New Directory Structure:**
- `.taskmaster/tasks/` - Task files (previously `tasks/`)
- `.taskmaster/docs/` - Documentation including PRD files (previously `scripts/`)
- `.taskmaster/reports/` - Complexity analysis reports (previously `scripts/`)
- `.taskmaster/templates/` - Template files like example PRD
- `.taskmaster/config.json` - Configuration (previously `.taskmasterconfig`)
**Migration & Backward Compatibility:**
- Existing projects continue to work with legacy file locations
- New projects use the consolidated structure automatically
- Run `task-master migrate` to move existing projects to the new structure
- All CLI commands and MCP tools automatically detect and use appropriate file locations
**Benefits:**
- Cleaner project root with Task Master files organized in one location
- Reduced file scatter across multiple directories
- Improved project navigation and maintenance
- Consistent file organization across all Task Master projects
This change maintains full backward compatibility while providing a migration path to the improved structure.
### Patch Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens error when trying to use claude-sonnet-4 and claude-opus-4
- [#625](https://github.com/eyaltoledano/claude-task-master/pull/625) [`2d520de`](https://github.com/eyaltoledano/claude-task-master/commit/2d520de2694da3efe537b475ca52baf3c869edda) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix add-task MCP command causing an error
## 0.16.0-rc.0
### Minor Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add AWS bedrock support
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - # Add Google Vertex AI Provider Integration
- Implemented `VertexAIProvider` class extending BaseAIProvider
- Added authentication and configuration handling for Vertex AI
- Updated configuration manager with Vertex-specific getters
- Modified AI services unified system to integrate the provider
- Added documentation for Vertex AI setup and configuration
- Updated environment variable examples for Vertex AI support
- Implemented specialized error handling for Vertex-specific issues
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for Azure
- [#612](https://github.com/eyaltoledano/claude-task-master/pull/612) [`669b744`](https://github.com/eyaltoledano/claude-task-master/commit/669b744ced454116a7b29de6c58b4b8da977186a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Increased minimum required node version to > 18 (was > 14)
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Renamed baseUrl to baseURL
- [#604](https://github.com/eyaltoledano/claude-task-master/pull/604) [`80735f9`](https://github.com/eyaltoledano/claude-task-master/commit/80735f9e60c7dda7207e169697f8ac07b6733634) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution
- Some users were having issues where the MCP wasn't able to detect the location of their project root, you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project.
- [#619](https://github.com/eyaltoledano/claude-task-master/pull/619) [`3f64202`](https://github.com/eyaltoledano/claude-task-master/commit/3f64202c9feef83f2bf383c79e4367d337c37e20) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Consolidate Task Master files into unified .taskmaster directory structure
This release introduces a new consolidated directory structure that organizes all Task Master files under a single `.taskmaster/` directory for better project organization and cleaner workspace management.
**New Directory Structure:**
- `.taskmaster/tasks/` - Task files (previously `tasks/`)
- `.taskmaster/docs/` - Documentation including PRD files (previously `scripts/`)
- `.taskmaster/reports/` - Complexity analysis reports (previously `scripts/`)
- `.taskmaster/templates/` - Template files like example PRD
- `.taskmaster/config.json` - Configuration (previously `.taskmasterconfig`)
**Migration & Backward Compatibility:**
- Existing projects continue to work with legacy file locations
- New projects use the consolidated structure automatically
- Run `task-master migrate` to move existing projects to the new structure
- All CLI commands and MCP tools automatically detect and use appropriate file locations
**Benefits:**
- Cleaner project root with Task Master files organized in one location
- Reduced file scatter across multiple directories
- Improved project navigation and maintenance
- Consistent file organization across all Task Master projects
This change maintains full backward compatibility while providing a migration path to the improved structure.
### Patch Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens error when trying to use claude-sonnet-4 and claude-opus-4
- [#597](https://github.com/eyaltoledano/claude-task-master/pull/597) [`2d520de`](https://github.com/eyaltoledano/claude-task-master/commit/2d520de2694da3efe537b475ca52baf3c869edda) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix add-task MCP command causing an error
## 0.15.0
### Minor Changes

View File

@@ -1,335 +0,0 @@
# Contributing to Task Master
Thank you for your interest in contributing to Task Master! We're excited to work with you and appreciate your help in making this project better. 🚀
## 🤝 Our Collaborative Approach
We're a **PR-friendly team** that values collaboration:
- **We review PRs quickly** - Usually within hours, not days
- **We're super reactive** - Expect fast feedback and engagement
- **We sometimes take over PRs** - If your contribution is valuable but needs cleanup, we might jump in to help finish it
- **We're open to all contributions** - From bug fixes to major features
**We don't mind AI-generated code**, but we do expect you to:
- **Review and understand** what the AI generated
- **Test the code thoroughly** before submitting
- **Ensure it's well-written** and follows our patterns
- **Don't submit "AI slop"** - untested, unreviewed AI output
> **Why this matters**: We spend significant time reviewing PRs. Help us help you by submitting quality contributions that save everyone time!
## 🚀 Quick Start for Contributors
### 1. Fork and Clone
```bash
git clone https://github.com/YOUR_USERNAME/claude-task-master.git
cd claude-task-master
npm install
```
### 2. Create a Feature Branch
**Important**: Always target the `next` branch, not `main`:
```bash
git checkout next
git pull origin next
git checkout -b feature/your-feature-name
```
### 3. Make Your Changes
Follow our development guidelines below.
### 4. Test Everything Yourself
**Before submitting your PR**, ensure:
```bash
# Run all tests
npm test
# Check formatting
npm run format-check
# Fix formatting if needed
npm run format
```
### 5. Create a Changeset
**Required for most changes**:
```bash
npm run changeset
```
See the [Changeset Guidelines](#changeset-guidelines) below for details.
### 6. Submit Your PR
- Target the `next` branch
- Write a clear description
- Reference any related issues
## 📋 Development Guidelines
### Branch Strategy
- **`main`**: Production-ready code
- **`next`**: Development branch - **target this for PRs**
- **Feature branches**: `feature/description` or `fix/description`
### Code Quality Standards
1. **Write tests** for new functionality
2. **Follow existing patterns** in the codebase
3. **Add JSDoc comments** for functions
4. **Keep functions focused** and single-purpose
### Testing Requirements
Your PR **must pass all CI checks**:
- **Unit tests**: `npm test`
- **Format check**: `npm run format-check`
**Test your changes locally first** - this saves review time and shows you care about quality.
## 📦 Changeset Guidelines
We use [Changesets](https://github.com/changesets/changesets) to manage versioning and generate changelogs.
### When to Create a Changeset
**Always create a changeset for**:
- ✅ New features
- ✅ Bug fixes
- ✅ Breaking changes
- ✅ Performance improvements
- ✅ User-facing documentation updates
- ✅ Dependency updates that affect functionality
**Skip changesets for**:
- ❌ Internal documentation only
- ❌ Test-only changes
- ❌ Code formatting/linting
- ❌ Development tooling that doesn't affect users
### How to Create a Changeset
1. **After making your changes**:
```bash
npm run changeset
```
2. **Choose the bump type**:
- **Major**: Breaking changes
- **Minor**: New features
- **Patch**: Bug fixes, docs, performance improvements
3. **Write a clear summary**:
```
Add support for custom AI models in MCP configuration
```
4. **Commit the changeset file** with your changes:
```bash
git add .changeset/*.md
git commit -m "feat: add custom AI model support"
```
### Changeset vs Git Commit Messages
- **Changeset summary**: User-facing, goes in CHANGELOG.md
- **Git commit**: Developer-facing, explains the technical change
Example:
```bash
# Changeset summary (user-facing)
"Add support for custom Ollama models"
# Git commit message (developer-facing)
"feat(models): implement custom Ollama model validation
- Add model validation for custom Ollama endpoints
- Update configuration schema to support custom models
- Add tests for new validation logic"
```
## 🔧 Development Setup
### Prerequisites
- Node.js 18+
- npm or yarn
### Environment Setup
1. **Copy environment template**:
```bash
cp .env.example .env
```
2. **Add your API keys** (for testing AI features):
```bash
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
# Add others as needed
```
### Running Tests
```bash
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run with coverage
npm run test:coverage
# Run E2E tests
npm run test:e2e
```
### Code Formatting
We use Prettier for consistent formatting:
```bash
# Check formatting
npm run format-check
# Fix formatting
npm run format
```
## 📝 PR Guidelines
### Before Submitting
- [ ] **Target the `next` branch**
- [ ] **Test everything locally**
- [ ] **Run the full test suite**
- [ ] **Check code formatting**
- [ ] **Create a changeset** (if needed)
- [ ] **Re-read your changes** - ensure they're clean and well-thought-out
### PR Description Template
```markdown
## Description
Brief description of what this PR does.
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [ ] I have tested this locally
- [ ] All existing tests pass
- [ ] I have added tests for new functionality
## Changeset
- [ ] I have created a changeset (or this change doesn't need one)
## Additional Notes
Any additional context or notes for reviewers.
```
### What We Look For
✅ **Good PRs**:
- Clear, focused changes
- Comprehensive testing
- Good commit messages
- Proper changeset (when needed)
- Self-reviewed code
❌ **Avoid**:
- Massive PRs that change everything
- Untested code
- Formatting issues
- Missing changesets for user-facing changes
- AI-generated code that wasn't reviewed
## 🏗️ Project Structure
```
claude-task-master/
├── bin/ # CLI executables
├── mcp-server/ # MCP server implementation
├── scripts/ # Core task management logic
├── src/ # Shared utilities, providers, and well-refactored code (we are slowly moving everything here)
├── tests/ # Test files
├── docs/ # Documentation
├── .cursor/ # Cursor IDE rules and configuration
└── assets/ # Assets like rules and configuration for all IDEs
```
### Key Areas for Contribution
- **CLI Commands**: `scripts/modules/commands.js`
- **MCP Tools**: `mcp-server/src/tools/`
- **Core Logic**: `scripts/modules/task-manager/`
- **AI Providers**: `src/ai-providers/`
- **Tests**: `tests/`
## 🐛 Reporting Issues
### Bug Reports
Include:
- Task Master version
- Node.js version
- Operating system
- Steps to reproduce
- Expected vs actual behavior
- Error messages/logs
### Feature Requests
Include:
- Clear description of the feature
- Use case/motivation
- Proposed implementation (if you have ideas)
- Willingness to contribute
## 💬 Getting Help
- **Discord**: [Join our community](https://discord.gg/taskmasterai)
- **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues)
- **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions)
## 📄 License
By contributing, you agree that your contributions will be licensed under the same license as the project (MIT with Commons Clause).
---
**Thank you for contributing to Task Master!** 🎉
Your contributions help make AI-driven development more accessible and efficient for everyone.

1292
README.md

File diff suppressed because it is too large

View File

@@ -25,8 +25,7 @@
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseURL": "http://localhost:11434/api",
"azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com"
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}

View File

@@ -7,7 +7,7 @@
```bash
# Project Setup
task-master init # Initialize Task Master in current project
task-master parse-prd .taskmaster/docs/prd.txt # Generate tasks from PRD document
task-master parse-prd scripts/prd.txt # Generate tasks from PRD document
task-master models --setup # Configure AI models interactively
# Daily Development Workflow
@@ -39,10 +39,10 @@ task-master generate # Update task markd
### Core Files
- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmasterconfig` - AI model configuration (use `task-master models` to modify)
- `scripts/prd.txt` - Product Requirements Document for parsing
- `tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage
### Claude Code Integration Files
@@ -56,24 +56,20 @@ task-master generate # Update task markd
```
project/
├── .taskmaster/
│ ├── tasks/ # Task files directory
│ │ ├── tasks.json # Main task database
│ │ ├── task-1.md # Individual task files
│ │ └── task-2.md
│ ├── docs/ # Documentation directory
│ │ └── prd.txt # Product requirements
│ ├── reports/ # Analysis reports directory
│ │ └── task-complexity-report.json
│ ├── templates/ # Template files
│ │ └── example_prd.txt # Example PRD template
│ └── config.json # AI models & settings
├── tasks/
│ ├── tasks.json # Main task database
│ ├── task-1.md # Individual task files
│ └── task-2.md
├── scripts/
│ ├── prd.txt # Product requirements
│ └── task-complexity-report.json
├── .claude/
│ ├── settings.json # Claude Code configuration
│ └── commands/ # Custom slash commands
├── .env # API keys
├── .mcp.json # MCP configuration
└── CLAUDE.md # This file - auto-loaded by Claude Code
│ ├── settings.json # Claude Code configuration
│ └── commands/ # Custom slash commands
├── .taskmasterconfig # AI models & settings
├── .env # API keys
├── .mcp.json # MCP configuration
└── CLAUDE.md # This file - auto-loaded by Claude Code
```
## MCP Integration
@@ -82,23 +78,23 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
}
```
@@ -139,7 +135,7 @@ complexity_report; // = task-master complexity-report
task-master init
# Create or obtain PRD, then parse it
task-master parse-prd .taskmaster/docs/prd.txt
task-master parse-prd scripts/prd.txt
# Analyze complexity and expand tasks
task-master analyze-complexity --research
@@ -212,14 +208,14 @@ Add to `.claude/settings.json`:
```json
{
"allowedTools": [
"Edit",
"Bash(task-master *)",
"Bash(git commit:*)",
"Bash(git add:*)",
"Bash(npm run *)",
"mcp__task_master_ai__*"
]
"allowedTools": [
"Edit",
"Bash(task-master *)",
"Bash(git commit:*)",
"Bash(git add:*)",
"Bash(npm run *)",
"mcp__task_master_ai__*"
]
}
```
@@ -272,15 +268,15 @@ task-master models --set-fallback gpt-4o-mini
```json
{
"id": "1.2",
"title": "Implement user authentication",
"description": "Set up JWT-based auth system",
"status": "pending",
"priority": "high",
"dependencies": ["1.1"],
"details": "Use bcrypt for hashing, JWT for tokens...",
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
"subtasks": []
"id": "1.2",
"title": "Implement user authentication",
"description": "Set up JWT-based auth system",
"status": "pending",
"priority": "high",
"dependencies": ["1.1"],
"details": "Use bcrypt for hashing, JWT for tokens...",
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
"subtasks": []
}
```
@@ -388,7 +384,7 @@ These commands make AI calls and may take up to a minute:
### File Management
- Never manually edit `tasks.json` - use commands instead
- Never manually edit `.taskmaster/config.json` - use `task-master models`
- Never manually edit `.taskmasterconfig` - use `task-master models`
- Task markdown files in `tasks/` are auto-generated
- Run `task-master generate` after manual changes to tasks.json
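As a quick illustration of this workflow (a sketch that only uses commands shown elsewhere in this guide):

```bash
# Adjust model settings through the CLI rather than editing config files by hand
task-master models --setup

# If tasks.json was changed manually anyway, regenerate the per-task files
task-master generate
```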
View File
@@ -5,5 +5,5 @@ OPENAI_API_KEY="your_openai_api_key_here" # Optional, for OpenAI/Ope
GOOGLE_API_KEY="your_google_api_key_here" # Optional, for Google Gemini models.
MISTRAL_API_KEY="your_mistral_key_here" # Optional, for Mistral AI models.
XAI_API_KEY="YOUR_XAI_KEY_HERE" # Optional, for xAI AI models.
AZURE_OPENAI_API_KEY="your_azure_key_here" # Optional, for Azure OpenAI models (requires endpoint in .taskmaster/config.json).
AZURE_OPENAI_API_KEY="your_azure_key_here" # Optional, for Azure OpenAI models (requires endpoint in .taskmasterconfig).
OLLAMA_API_KEY="your_ollama_api_key_here" # Optional: For remote Ollama servers that require authentication.
View File
@@ -20,7 +20,7 @@ In an AI-driven development process—particularly with tools like [Cursor](http
Task Master configuration is now managed through two primary methods:
1. **`.taskmaster/config.json` File (Project Root - Primary)**
1. **`.taskmasterconfig` File (Project Root - Primary)**
- Stores AI model selections (`main`, `research`, `fallback`), model parameters (`maxTokens`, `temperature`), `logLevel`, `defaultSubtasks`, `defaultPriority`, `projectName`, etc.
- Managed using the `task-master models --setup` command or the `models` MCP tool.
@@ -192,7 +192,7 @@ Notes:
## AI Integration (Updated)
- The script now uses a unified AI service layer (`ai-services-unified.js`).
- Model selection (e.g., Claude vs. Perplexity for `--research`) is determined by the configuration in `.taskmaster/config.json` based on the requested `role` (`main` or `research`).
- Model selection (e.g., Claude vs. Perplexity for `--research`) is determined by the configuration in `.taskmasterconfig` based on the requested `role` (`main` or `research`).
- API keys are automatically resolved from your `.env` file (for CLI) or MCP session environment.
- To use the research capabilities (e.g., `expand --research`), ensure you have:
1. Configured a model for the `research` role using `task-master models --setup` (Perplexity models are recommended).
@@ -357,25 +357,25 @@ The output report structure is:
```json
{
"meta": {
"generatedAt": "2023-06-15T12:34:56.789Z",
"tasksAnalyzed": 20,
"thresholdScore": 5,
"projectName": "Your Project Name",
"usedResearch": true
},
"complexityAnalysis": [
{
"taskId": 8,
"taskTitle": "Develop Implementation Drift Handling",
"complexityScore": 9.5,
"recommendedSubtasks": 6,
"expansionPrompt": "Create subtasks that handle detecting...",
"reasoning": "This task requires sophisticated logic...",
"expansionCommand": "task-master expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research"
}
// More tasks sorted by complexity score (highest first)
]
"meta": {
"generatedAt": "2023-06-15T12:34:56.789Z",
"tasksAnalyzed": 20,
"thresholdScore": 5,
"projectName": "Your Project Name",
"usedResearch": true
},
"complexityAnalysis": [
{
"taskId": 8,
"taskTitle": "Develop Implementation Drift Handling",
"complexityScore": 9.5,
"recommendedSubtasks": 6,
"expansionPrompt": "Create subtasks that handle detecting...",
"reasoning": "This task requires sophisticated logic...",
"expansionCommand": "task-master expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research"
}
// More tasks sorted by complexity score (highest first)
]
}
```
View File
@@ -1,47 +0,0 @@
{
"files": {
"ignore": [
"build",
"coverage",
".changeset",
"tasks",
"package-lock.json",
"tests/fixture/*.json"
]
},
"formatter": {
"bracketSpacing": true,
"enabled": true,
"indentStyle": "tab",
"lineWidth": 80
},
"javascript": {
"formatter": {
"arrowParentheses": "always",
"quoteStyle": "single",
"trailingCommas": "none"
}
},
"linter": {
"rules": {
"complexity": {
"noForEach": "off",
"useOptionalChain": "off"
},
"correctness": {
"noConstantCondition": "off",
"noUnreachable": "off"
},
"suspicious": {
"noDuplicateTestHooks": "off",
"noPrototypeBuiltins": "off"
},
"style": {
"noUselessElse": "off",
"useNodejsImportProtocol": "off",
"useNumberNamespace": "off",
"noParameterAssign": "off"
}
}
}
}
View File
@@ -2,82 +2,68 @@
Taskmaster uses two primary methods for configuration:
1. **`.taskmaster/config.json` File (Recommended - New Structure)**
1. **`.taskmasterconfig` File (Project Root - Recommended for most settings)**
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
- **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
- **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
- **Location:** This file is created in the root directory of your project when you run the `task-master models --setup` interactive setup. You typically do this during the initialization sequence. Do not manually edit this file beyond adjusting Temperature and Max Tokens depending on your model.
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
- **Example Structure:**
```json
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 64000,
"temperature": 0.2,
"baseURL": "https://api.anthropic.com/v1"
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1,
"baseURL": "https://api.perplexity.ai/v1"
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Your Project Name",
"ollamaBaseURL": "http://localhost:11434/api",
"azureBaseURL": "https://your-endpoint.azure.com/",
"vertexProjectId": "your-gcp-project-id",
"vertexLocation": "us-central1"
}
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 64000,
"temperature": 0.2,
"baseUrl": "https://api.anthropic.com/v1"
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1,
"baseUrl": "https://api.perplexity.ai/v1"
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Your Project Name",
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}
```
2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
2. **Environment Variables (`.env` file or MCP `env` block - For API Keys Only)**
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
- For CLI usage: Create a `.env` file in your project root.
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key.
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
- **Per-role `baseUrl` in `.taskmasterconfig`:** You can add a `baseUrl` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseUrl` for the Azure model role).
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- For projects that haven't migrated to the new structure yet.
- **Location:** Project root directory.
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
- For CLI usage: Create a `.env` file in your project root.
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key (also used for Vertex AI provider).
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmasterconfig`**, not environment variables.
## Example `.env` File (for API Keys)
@@ -92,20 +78,14 @@ PERPLEXITY_API_KEY=pplx-your-key-here
# Optional Endpoint Overrides
# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
# VERTEX_LOCATION=us-central1
# GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
```
## Troubleshooting
### Configuration Errors
- If Task Master reports errors about missing configuration or cannot find the config file, run `task-master models --setup` in your project root to create or repair the file.
- For new projects, config will be created at `.taskmaster/config.json`. For legacy projects, you may want to use `task-master migrate` to move to the new structure.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.
- If Task Master reports errors about missing configuration or cannot find `.taskmasterconfig`, run `task-master models --setup` in your project root to create or repair the file.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in `.taskmasterconfig`.
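For example, a minimal repair sequence (using only the commands referenced above):

```bash
# Recreate or repair the model configuration interactively
task-master models --setup

# Confirm which models and providers are currently configured
task-master models
```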
### If `task-master init` doesn't respond:
@@ -122,45 +102,3 @@ git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```
## Provider-Specific Configuration
### Google Vertex AI Configuration
Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:
1. **Prerequisites**:
- A Google Cloud account with Vertex AI API enabled
- Either a Google API key with Vertex AI permissions OR a service account with appropriate roles
- A Google Cloud project ID
2. **Authentication Options**:
- **API Key**: Set the `GOOGLE_API_KEY` environment variable
- **Service Account**: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account JSON file
3. **Required Configuration**:
- Set `VERTEX_PROJECT_ID` to your Google Cloud project ID
- Set `VERTEX_LOCATION` to your preferred Google Cloud region (default: us-central1)
4. **Example Setup**:
```bash
# In .env file
GOOGLE_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXX
VERTEX_PROJECT_ID=my-gcp-project-123
VERTEX_LOCATION=us-central1
```
Or using service account:
```bash
# In .env file
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
VERTEX_PROJECT_ID=my-gcp-project-123
VERTEX_LOCATION=us-central1
```
5. **In .taskmasterconfig**:
```json
"global": {
"vertexProjectId": "my-gcp-project-123",
"vertexLocation": "us-central1"
}
```
View File
@@ -5,7 +5,7 @@ Here are some common interactions with Cursor AI when using Task Master:
## Starting a new project
```
I've just initialized a new project with Claude Task Master. I have a PRD at .taskmaster/docs/prd.txt.
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
View File
@@ -1,235 +0,0 @@
# Migration Guide: New .taskmaster Directory Structure
## Overview
Task Master v0.16.0 introduces a new `.taskmaster/` directory structure to keep your project directories clean and organized. This guide explains the benefits of the new structure and how to migrate existing projects.
## What's New
### Before (Legacy Structure)
```
your-project/
├── tasks/ # Task files
│ ├── tasks.json
│ ├── task-1.txt
│ └── task-2.txt
├── scripts/ # PRD and reports
│ ├── prd.txt
│ ├── example_prd.txt
│ └── task-complexity-report.json
├── .taskmasterconfig # Configuration
└── ... (your project files)
```
### After (New Structure)
```
your-project/
├── .taskmaster/ # Consolidated Task Master files
│ ├── config.json # Configuration (was .taskmasterconfig)
│ ├── tasks/ # Task files
│ │ ├── tasks.json
│ │ ├── task-1.txt
│ │ └── task-2.txt
│ ├── docs/ # Project documentation
│ │ └── prd.txt
│ ├── reports/ # Generated reports
│ │ └── task-complexity-report.json
│ └── templates/ # Example/template files
│ └── example_prd.txt
└── ... (your project files)
```
## Benefits of the New Structure
**Cleaner Project Root**: No more scattered Task Master files
**Better Organization**: Logical separation of tasks, docs, reports, and templates
**Hidden by Default**: `.taskmaster/` directory is hidden from most file browsers
**Future-Proof**: Centralized location for Task Master extensions
**Backward Compatible**: Existing projects continue to work until migrated
## Migration Options
### Option 1: Automatic Migration (Recommended)
Task Master provides a built-in migration command that handles everything automatically:
#### CLI Migration
```bash
# Dry run to see what would be migrated
task-master migrate --dry-run
# Perform the migration with backup
task-master migrate --backup
# Force migration (overwrites existing files)
task-master migrate --force
# Clean up legacy files after migration
task-master migrate --cleanup
```
#### MCP Migration (Cursor/AI Editors)
Ask your AI assistant:
```
Please migrate my Task Master project to the new .taskmaster directory structure
```
### Option 2: Manual Migration
If you prefer to migrate manually:
1. **Create the new directory structure:**
```bash
mkdir -p .taskmaster/{tasks,docs,reports,templates}
```
2. **Move your files:**
```bash
# Move tasks
mv tasks/* .taskmaster/tasks/
# Move configuration
mv .taskmasterconfig .taskmaster/config.json
# Move PRD and documentation
mv scripts/prd.txt .taskmaster/docs/
mv scripts/example_prd.txt .taskmaster/templates/
# Move reports (if they exist)
mv scripts/task-complexity-report.json .taskmaster/reports/ 2>/dev/null || true
```
3. **Clean up empty directories:**
```bash
rmdir tasks scripts 2>/dev/null || true
```
## What Gets Migrated
The migration process handles these file types:
### Tasks Directory → `.taskmaster/tasks/`
- `tasks.json`
- Individual task text files (`.txt`)
### Scripts Directory → Multiple Destinations
- **PRD files** → `.taskmaster/docs/`
- `prd.txt`, `requirements.txt`, etc.
- **Example/Template files** → `.taskmaster/templates/`
- `example_prd.txt`, template files
- **Reports** → `.taskmaster/reports/`
- `task-complexity-report.json`
### Configuration
- `.taskmasterconfig` → `.taskmaster/config.json`
## After Migration
Once migrated, Task Master will:
✅ **Automatically use** the new directory structure
✅ **Show deprecation warnings** when legacy files are detected
✅ **Create new files** in the proper locations
✅ **Fall back gracefully** to legacy locations if new ones don't exist
### Verification
After migration, verify everything works:
1. **List your tasks:**
```bash
task-master list
```
2. **Check your configuration:**
```bash
task-master models
```
3. **Generate new task files:**
```bash
task-master generate
```
## Troubleshooting
### Migration Issues
**Q: Migration says "no files to migrate"**
A: Your project may already be using the new structure or have no Task Master files to migrate.
**Q: Migration fails with permission errors**
A: Ensure you have write permissions in your project directory.
**Q: Some files weren't migrated**
A: Check the migration output - some files may not match the expected patterns. You can migrate these manually.
### Working with Legacy Projects
If you're working with an older project that hasn't been migrated:
- Task Master will continue to work with the old structure
- You'll see deprecation warnings in the output
- New files will still be created in legacy locations
- Use the migration command when ready to upgrade
### New Project Initialization
New projects automatically use the new structure:
```bash
task-master init # Creates .taskmaster/ structure
```
## Path Changes for Developers
If you're developing tools or scripts that interact with Task Master files:
### Configuration File
- **Old:** `.taskmasterconfig`
- **New:** `.taskmaster/config.json`
- **Fallback:** Task Master checks both locations
### Tasks File
- **Old:** `tasks/tasks.json`
- **New:** `.taskmaster/tasks/tasks.json`
- **Fallback:** Task Master checks both locations
### Reports
- **Old:** `scripts/task-complexity-report.json`
- **New:** `.taskmaster/reports/task-complexity-report.json`
- **Fallback:** Task Master checks both locations
### PRD Files
- **Old:** `scripts/prd.txt`
- **New:** `.taskmaster/docs/prd.txt`
- **Fallback:** Task Master checks both locations
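For tool authors, a minimal sketch of the fallback behavior described above; the helper name and the direct use of Node's `fs` here are assumptions for illustration, not Task Master's actual API:

```js
import fs from 'fs';
import path from 'path';

// Return the first existing candidate, preferring the new .taskmaster layout.
function resolveWithFallback(projectRoot, newRelPath, legacyRelPath) {
	const candidates = [
		path.join(projectRoot, newRelPath),
		path.join(projectRoot, legacyRelPath)
	];
	return candidates.find((p) => fs.existsSync(p)) ?? null;
}

// Example: locate tasks.json, checking the new location first, then the legacy one.
const tasksPath = resolveWithFallback(
	process.cwd(),
	'.taskmaster/tasks/tasks.json',
	'tasks/tasks.json'
);
```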
## Need Help?
If you encounter issues during migration:
1. **Check the logs:** Add `--debug` flag for detailed output
2. **Backup first:** Always use `--backup` option for safety
3. **Test with dry-run:** Use `--dry-run` to preview changes
4. **Ask for help:** Use our Discord community or GitHub issues
---
_This migration guide applies to Task Master v3.x and later. For older versions, please upgrade to the latest version first._
View File
@@ -1,4 +1,4 @@
# Available Models as of June 8, 2025
# Available Models as of May 26, 2025
## Main Models
@@ -24,7 +24,6 @@
| google | gemini-2.5-flash-preview-04-17 | — | — | — |
| google | gemini-2.0-flash | 0.754 | 0.15 | 0.6 |
| google | gemini-2.0-flash-lite | — | — | — |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
@@ -71,8 +70,6 @@
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |
View File
@@ -20,22 +20,22 @@ npm i -g task-master-ai
```json
{
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
}
}
}
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
}
}
}
}
```
@@ -60,12 +60,12 @@ The AI will:
- Set up initial configuration files
- Guide you through the rest of the process
5. Place your PRD document in the `.taskmaster/docs/` directory (e.g., `.taskmaster/docs/prd.txt`)
5. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
6. **Use natural language commands** to interact with Task Master:
```
Can you parse my PRD at .taskmaster/docs/prd.txt?
Can you parse my PRD at scripts/prd.txt?
What's the next task I should work on?
Can you help me implement task 3?
```
@@ -132,7 +132,7 @@ If you're not using MCP, you can still set up Cursor integration:
1. After initializing your project, open it in Cursor
2. The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
3. Place your PRD document in the `.taskmaster/docs/` directory (e.g., `.taskmaster/docs/prd.txt`)
3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
4. Open Cursor's AI chat and switch to Agent mode
### Alternative MCP Setup in Cursor
@@ -155,13 +155,13 @@ Once configured, you can interact with Task Master's task management commands di
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:
```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at .taskmaster/docs/prd.txt.
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
```
The agent will execute:
```bash
task-master parse-prd .taskmaster/docs/prd.txt
task-master parse-prd scripts/prd.txt
```
This will:
@@ -377,7 +377,7 @@ task-master expand --id=5 --research
### Starting a new project
```
I've just initialized a new project with Claude Task Master. I have a PRD at .taskmaster/docs/prd.txt.
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
View File
@@ -1,4 +1,4 @@
# Taskmaster AI Installation Guide
``# Taskmaster AI Installation Guide
This guide helps AI assistants install and configure Taskmaster for users in their development projects.
View File
@@ -28,7 +28,8 @@ export async function complexityReportDirect(args, log) {
log.error('complexityReportDirect called without reportPath');
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: 'reportPath is required' }
error: { code: 'MISSING_ARGUMENT', message: 'reportPath is required' },
fromCache: false
};
}
@@ -110,7 +111,8 @@ export async function complexityReportDirect(args, log) {
error: {
code: 'UNEXPECTED_ERROR',
message: error.message
}
},
fromCache: false
};
}
}
View File
@@ -60,8 +60,7 @@ export async function expandAllTasksDirect(args, log, context = {}) {
useResearch,
additionalContext,
forceFlag,
{ session, mcpLog, projectRoot },
'json'
{ session, mcpLog, projectRoot }
);
// Core function now returns a summary object including the *aggregated* telemetryData
View File
@@ -29,7 +29,7 @@ import { createLogWrapper } from '../../tools/utils.js';
* @param {Object} log - Logger object
* @param {Object} context - Context object containing session
* @param {Object} [context.session] - MCP Session object
* @returns {Promise<Object>} - Task expansion result { success: boolean, data?: any, error?: { code: string, message: string } }
* @returns {Promise<Object>} - Task expansion result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }
*/
export async function expandTaskDirect(args, log, context = {}) {
const { session } = context; // Extract session
@@ -54,7 +54,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}
@@ -72,7 +73,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'INPUT_VALIDATION_ERROR',
message: 'Task ID is required'
}
},
fromCache: false
};
}
@@ -103,7 +105,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'INVALID_TASKS_FILE',
message: `No valid tasks found in ${tasksPath}. readJSON returned: ${JSON.stringify(data)}`
}
},
fromCache: false
};
}
@@ -118,7 +121,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'TASK_NOT_FOUND',
message: `Task with ID ${taskId} not found`
}
},
fromCache: false
};
}
@@ -129,7 +133,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'TASK_COMPLETED',
message: `Task ${taskId} is already marked as ${task.status} and cannot be expanded`
}
},
fromCache: false
};
}
@@ -146,7 +151,8 @@ export async function expandTaskDirect(args, log, context = {}) {
task,
subtasksAdded: 0,
hasExistingSubtasks
}
},
fromCache: false
};
}
@@ -226,7 +232,8 @@ export async function expandTaskDirect(args, log, context = {}) {
subtasksAdded,
hasExistingSubtasks,
telemetryData: coreResult.telemetryData
}
},
fromCache: false
};
} catch (error) {
// Make sure to restore normal logging even if there's an error
@@ -238,7 +245,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'CORE_FUNCTION_ERROR',
message: error.message || 'Failed to expand task'
}
},
fromCache: false
};
}
} catch (error) {
@@ -248,7 +256,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'CORE_FUNCTION_ERROR',
message: error.message || 'Failed to expand task'
}
},
fromCache: false
};
}
}
View File
@@ -28,7 +28,8 @@ export async function generateTaskFilesDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
if (!outputDir) {
@@ -36,7 +37,8 @@ export async function generateTaskFilesDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -63,7 +65,8 @@ export async function generateTaskFilesDirect(args, log) {
log.error(`Error in generateTaskFiles: ${genError.message}`);
return {
success: false,
error: { code: 'GENERATE_FILES_ERROR', message: genError.message }
error: { code: 'GENERATE_FILES_ERROR', message: genError.message },
fromCache: false
};
}
@@ -76,7 +79,8 @@ export async function generateTaskFilesDirect(args, log) {
outputDir: resolvedOutputDir,
taskFiles:
'Individual task files have been generated in the output directory'
}
},
fromCache: false // This operation always modifies state and should never be cached
};
} catch (error) {
// Make sure to restore normal logging if an outer error occurs
@@ -88,7 +92,8 @@ export async function generateTaskFilesDirect(args, log) {
error: {
code: 'GENERATE_TASKS_ERROR',
message: error.message || 'Unknown error generating task files'
}
},
fromCache: false
};
}
}
View File
@@ -41,7 +41,8 @@ export async function initializeProjectDirect(args, log, context = {}) {
code: 'INVALID_TARGET_DIRECTORY',
message: `Cannot initialize project: Invalid target directory '${targetDirectory}' received. Please ensure a valid workspace/folder is open or specified.`,
details: `Received args.projectRoot: ${args.projectRoot}` // Show what was received
}
},
fromCache: false
};
}
@@ -74,7 +75,7 @@ export async function initializeProjectDirect(args, log, context = {}) {
resultData = {
message: 'Project initialized successfully.',
next_step:
'Now that the project is initialized, the next step is to create the tasks by parsing a PRD. This will create the tasks folder and the initial task files (tasks folder will be created when parse-prd is run). The parse-prd tool will require a prd.txt file as input (typically found in .taskmaster/docs/ directory). You can create a prd.txt file by asking the user about their idea, and then using the .taskmaster/templates/example_prd.txt file as a template to generate a prd.txt file in .taskmaster/docs/. You may skip all of this if the user already has a prd.txt file. You can THEN use the parse-prd tool to create the tasks. So: step 1 after initialization is to create a prd.txt file in .taskmaster/docs/prd.txt or confirm the user already has one. Step 2 is to use the parse-prd tool to create the tasks. Do not bother looking for tasks after initialization, just use the parse-prd tool to create the tasks after creating a prd.txt from which to parse the tasks. You do NOT need to reinitialize the project to parse-prd.',
'Now that the project is initialized, the next step is to create the tasks by parsing a PRD. This will create the tasks folder and the initial task files (tasks folder will be created when parse-prd is run). The parse-prd tool will require a prd.txt file as input (typically found in the project root directory, scripts/ directory). You can create a prd.txt file by asking the user about their idea, and then using the scripts/example_prd.txt file as a template to generate a prd.txt file in scripts/. You may skip all of this if the user already has a prd.txt file. You can THEN use the parse-prd tool to create the tasks. So: step 1 after initialization is to create a prd.txt file in scripts/prd.txt or confirm the user already has one. Step 2 is to use the parse-prd tool to create the tasks. Do not bother looking for tasks after initialization, just use the parse-prd tool to create the tasks after creating a prd.txt from which to parse the tasks. You do NOT need to reinitialize the project to parse-prd.',
...result
};
success = true;
@@ -96,8 +97,8 @@ export async function initializeProjectDirect(args, log, context = {}) {
}
if (success) {
return { success: true, data: resultData };
return { success: true, data: resultData, fromCache: false };
} else {
return { success: false, error: errorResult };
return { success: false, error: errorResult, fromCache: false };
}
}
View File
@@ -14,7 +14,7 @@ import {
*
* @param {Object} args - Command arguments (now expecting tasksJsonPath explicitly).
* @param {Object} log - Logger object.
* @returns {Promise<Object>} - Task list result { success: boolean, data?: any, error?: { code: string, message: string } }.
* @returns {Promise<Object>} - Task list result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }.
*/
export async function listTasksDirect(args, log) {
// Destructure the explicit tasksJsonPath from args
@@ -27,7 +27,8 @@ export async function listTasksDirect(args, log) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}
View File
@@ -3,7 +3,7 @@
*/
import { moveTask } from '../../../../scripts/modules/task-manager.js';
import { findTasksPath } from '../utils/path-utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
import {
enableSilentMode,
disableSilentMode
@@ -58,7 +58,7 @@ export async function moveTaskDirect(args, log, context = {}) {
}
};
}
tasksPath = findTasksPath(args, log);
tasksPath = findTasksJsonPath(args, log);
}
// Enable silent mode to prevent console output during MCP operation
View File
@@ -19,7 +19,7 @@ import {
* @param {Object} args - Command arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {Object} log - Logger object
* @returns {Promise<Object>} - Next task result { success: boolean, data?: any, error?: { code: string, message: string } }
* @returns {Promise<Object>} - Next task result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }
*/
export async function nextTaskDirect(args, log) {
// Destructure expected args
@@ -32,7 +32,8 @@ export async function nextTaskDirect(args, log) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}
@@ -120,7 +121,7 @@ export async function nextTaskDirect(args, log) {
// Use the caching utility
try {
const result = await coreNextTaskAction();
log.info('nextTaskDirect completed.');
log.info(`nextTaskDirect completed.`);
return result;
} catch (error) {
log.error(`Unexpected error during nextTask: ${error.message}`);
@@ -129,7 +130,8 @@ export async function nextTaskDirect(args, log) {
error: {
code: 'UNEXPECTED_ERROR',
message: error.message
}
},
fromCache: false
};
}
}
View File
@@ -13,8 +13,6 @@ import {
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
import { getDefaultNumTasks } from '../../../../scripts/modules/config-manager.js';
import { resolvePrdPath, resolveProjectPath } from '../utils/path-utils.js';
import { TASKMASTER_TASKS_FILE } from '../../../../src/constants/paths.js';
/**
* Direct function wrapper for parsing PRD documents and generating tasks.
@@ -51,20 +49,7 @@ export async function parsePRDDirect(args, log, context = {}) {
}
};
}
// Resolve input path using path utilities
let inputPath;
if (inputArg) {
try {
inputPath = resolvePrdPath({ input: inputArg, projectRoot }, session);
} catch (error) {
logWrapper.error(`Error resolving PRD path: ${error.message}`);
return {
success: false,
error: { code: 'FILE_NOT_FOUND', message: error.message }
};
}
} else {
if (!inputArg) {
logWrapper.error('parsePRDDirect called without input path');
return {
success: false,
@@ -72,13 +57,11 @@ export async function parsePRDDirect(args, log, context = {}) {
};
}
// Resolve output path - use new path utilities for default
// Resolve input and output paths relative to projectRoot
const inputPath = path.resolve(projectRoot, inputArg);
const outputPath = outputArg
? path.isAbsolute(outputArg)
? outputArg
: path.resolve(projectRoot, outputArg)
: resolveProjectPath(TASKMASTER_TASKS_FILE, args) ||
path.resolve(projectRoot, TASKMASTER_TASKS_FILE);
? path.resolve(projectRoot, outputArg)
: path.resolve(projectRoot, 'tasks', 'tasks.json'); // Default output path
// Check if input file exists
if (!fs.existsSync(inputPath)) {
@@ -96,12 +79,17 @@ export async function parsePRDDirect(args, log, context = {}) {
logWrapper.info(`Creating output directory: ${outputDir}`);
fs.mkdirSync(outputDir, { recursive: true });
}
} catch (error) {
const errorMsg = `Failed to create output directory ${outputDir}: ${error.message}`;
logWrapper.error(errorMsg);
} catch (dirError) {
logWrapper.error(
`Failed to create output directory ${outputDir}: ${dirError.message}`
);
// Return an error response immediately if dir creation fails
return {
success: false,
error: { code: 'DIRECTORY_CREATE_FAILED', message: errorMsg }
error: {
code: 'DIRECTORY_CREATION_ERROR',
message: `Failed to create output directory: ${dirError.message}`
}
};
}
@@ -109,7 +97,7 @@ export async function parsePRDDirect(args, log, context = {}) {
if (numTasksArg) {
numTasks =
typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
if (Number.isNaN(numTasks) || numTasks <= 0) {
if (isNaN(numTasks) || numTasks <= 0) {
// Ensure positive number
numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
logWrapper.warn(
View File
@@ -21,7 +21,7 @@ import {
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {string} args.id - The ID(s) of the task(s) or subtask(s) to remove (comma-separated for multiple).
* @param {Object} log - Logger object
* @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string } }
* @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: false }
*/
export async function removeTaskDirect(args, log) {
// Destructure expected args
@@ -35,7 +35,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}
@@ -47,7 +48,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'INPUT_VALIDATION_ERROR',
message: 'Task ID is required'
}
},
fromCache: false
};
}
@@ -66,7 +68,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'INVALID_TASKS_FILE',
message: `No valid tasks found in ${tasksJsonPath}`
}
},
fromCache: false
};
}
@@ -80,7 +83,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'INVALID_TASK_ID',
message: `The following tasks were not found: ${invalidTasks.join(', ')}`
}
},
fromCache: false
};
}
@@ -129,7 +133,8 @@ export async function removeTaskDirect(args, log) {
details: failedRemovals
.map((r) => `${r.taskId}: ${r.error}`)
.join('; ')
}
},
fromCache: false
};
}
@@ -142,7 +147,8 @@ export async function removeTaskDirect(args, log) {
failed: failedRemovals.length,
results: results,
tasksPath: tasksJsonPath
}
},
fromCache: false
};
} catch (error) {
// Ensure silent mode is disabled even if an outer error occurs
@@ -155,7 +161,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'UNEXPECTED_ERROR',
message: error.message
}
},
fromCache: false
};
}
}
View File
@@ -29,7 +29,8 @@ export async function setTaskStatusDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -40,7 +41,8 @@ export async function setTaskStatusDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_TASK_ID', message: errorMessage }
error: { code: 'MISSING_TASK_ID', message: errorMessage },
fromCache: false
};
}
@@ -50,7 +52,8 @@ export async function setTaskStatusDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_STATUS', message: errorMessage }
error: { code: 'MISSING_STATUS', message: errorMessage },
fromCache: false
};
}
@@ -79,7 +82,8 @@ export async function setTaskStatusDirect(args, log) {
taskId,
status: newStatus,
tasksPath: tasksPath // Return the path used
}
},
fromCache: false // This operation always modifies state and should never be cached
};
// If the task was completed, attempt to fetch the next task
@@ -122,7 +126,8 @@ export async function setTaskStatusDirect(args, log) {
error: {
code: 'SET_STATUS_ERROR',
message: error.message || 'Unknown error setting task status'
}
},
fromCache: false
};
} finally {
// ALWAYS restore normal logging in finally block
@@ -140,7 +145,8 @@ export async function setTaskStatusDirect(args, log) {
error: {
code: 'SET_STATUS_ERROR',
message: error.message || 'Unknown error setting task status'
}
},
fromCache: false
};
}
}
View File
@@ -8,14 +8,14 @@ import {
readComplexityReport,
readJSON
} from '../../../../scripts/modules/utils.js';
import { findTasksPath } from '../utils/path-utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
/**
* Direct function wrapper for getting task details.
*
* @param {Object} args - Command arguments.
* @param {string} args.id - Task ID to show.
* @param {string} [args.file] - Optional path to the tasks file (passed to findTasksPath).
* @param {string} [args.file] - Optional path to the tasks file (passed to findTasksJsonPath).
* @param {string} args.reportPath - Explicit path to the complexity report file.
* @param {string} [args.status] - Optional status to filter subtasks by.
* @param {string} args.projectRoot - Absolute path to the project root directory (already normalized by tool).
@@ -37,7 +37,7 @@ export async function showTaskDirect(args, log) {
let tasksJsonPath;
try {
// Use the projectRoot passed directly from args
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: projectRoot, file: file },
log
);
View File
@@ -42,7 +42,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -53,7 +54,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_SUBTASK_ID', message: errorMessage }
error: { code: 'INVALID_SUBTASK_ID', message: errorMessage },
fromCache: false
};
}
@@ -63,7 +65,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_PROMPT', message: errorMessage }
error: { code: 'MISSING_PROMPT', message: errorMessage },
fromCache: false
};
}
@@ -74,7 +77,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
log.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_SUBTASK_ID_TYPE', message: errorMessage }
error: { code: 'INVALID_SUBTASK_ID_TYPE', message: errorMessage },
fromCache: false
};
}
@@ -84,7 +88,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
log.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_SUBTASK_ID_FORMAT', message: errorMessage }
error: { code: 'INVALID_SUBTASK_ID_FORMAT', message: errorMessage },
fromCache: false
};
}
@@ -123,7 +128,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(message);
return {
success: false,
error: { code: 'SUBTASK_NOT_FOUND', message: message }
error: { code: 'SUBTASK_NOT_FOUND', message: message },
fromCache: false
};
}
@@ -140,7 +146,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
tasksPath,
useResearch,
telemetryData: coreResult.telemetryData
}
},
fromCache: false
};
} catch (error) {
logWrapper.error(`Error updating subtask by ID: ${error.message}`);
@@ -149,7 +156,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
error: {
code: 'UPDATE_SUBTASK_CORE_ERROR',
message: error.message || 'Unknown error updating subtask'
}
},
fromCache: false
};
} finally {
if (!wasSilent && isSilentMode()) {
@@ -166,7 +174,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
error: {
code: 'DIRECT_FUNCTION_SETUP_ERROR',
message: error.message || 'Unknown setup error'
}
},
fromCache: false
};
}
}
View File
@@ -42,7 +42,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -53,7 +54,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_TASK_ID', message: errorMessage }
error: { code: 'MISSING_TASK_ID', message: errorMessage },
fromCache: false
};
}
@@ -63,7 +65,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_PROMPT', message: errorMessage }
error: { code: 'MISSING_PROMPT', message: errorMessage },
fromCache: false
};
}
@@ -81,7 +84,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_TASK_ID', message: errorMessage }
error: { code: 'INVALID_TASK_ID', message: errorMessage },
fromCache: false
};
}
}
@@ -133,7 +137,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
taskId: taskId,
updated: false,
telemetryData: coreResult?.telemetryData
}
},
fromCache: false
};
}
@@ -150,7 +155,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
updated: true,
updatedTask: coreResult.updatedTask,
telemetryData: coreResult.telemetryData
}
},
fromCache: false
};
} catch (error) {
logWrapper.error(`Error updating task by ID: ${error.message}`);
@@ -159,7 +165,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
error: {
code: 'UPDATE_TASK_CORE_ERROR',
message: error.message || 'Unknown error updating task'
}
},
fromCache: false
};
} finally {
if (!wasSilent && isSilentMode()) {
@@ -174,7 +181,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
error: {
code: 'DIRECT_FUNCTION_SETUP_ERROR',
message: error.message || 'Unknown setup error'
}
},
fromCache: false
};
}
}
View File
@@ -21,7 +21,7 @@ import {
*/
export async function updateTasksDirect(args, log, context = {}) {
const { session } = context;
const { from, prompt, research, tasksJsonPath, projectRoot } = args;
const { from, prompt, research, file: fileArg, projectRoot } = args;
// Create the standard logger wrapper
const logWrapper = createLogWrapper(log);
@@ -60,15 +60,20 @@ export async function updateTasksDirect(args, log, context = {}) {
};
}
// Resolve tasks file path
const tasksFile = fileArg
? path.resolve(projectRoot, fileArg)
: path.resolve(projectRoot, 'tasks', 'tasks.json');
logWrapper.info(
`Updating tasks via direct function. From: ${from}, Research: ${research}, File: ${tasksJsonPath}, ProjectRoot: ${projectRoot}`
`Updating tasks via direct function. From: ${from}, Research: ${research}, File: ${tasksFile}, ProjectRoot: ${projectRoot}`
);
enableSilentMode(); // Enable silent mode
try {
// Call the core updateTasks function
const result = await updateTasks(
tasksJsonPath,
tasksFile,
from,
prompt,
research,
@@ -88,7 +93,7 @@ export async function updateTasksDirect(args, log, context = {}) {
success: true,
data: {
message: `Successfully updated ${result.updatedTasks.length} tasks.`,
tasksPath: tasksJsonPath,
tasksFile,
updatedCount: result.updatedTasks.length,
telemetryData: result.telemetryData
}
View File
@@ -33,7 +33,7 @@ import { modelsDirect } from './direct-functions/models.js';
import { moveTaskDirect } from './direct-functions/move-task.js';
// Re-export utility functions
export { findTasksPath } from './utils/path-utils.js';
export { findTasksJsonPath } from './utils/path-utils.js';
// Use Map for potential future enhancements like introspection or dynamic dispatch
export const directFunctions = new Map([
View File
@@ -1,220 +1,436 @@
/**
* path-utils.js
* Utility functions for file path operations in Task Master
*
* This module provides robust path resolution for both:
* 1. PACKAGE PATH: Where task-master code is installed
* (global node_modules OR local ./node_modules/task-master OR direct from repo)
* 2. PROJECT PATH: Where user's tasks.json resides (typically user's project root)
*/
import path from 'path';
import {
findTasksPath as coreFindTasksPath,
findPRDPath as coreFindPrdPath,
findComplexityReportPath as coreFindComplexityReportPath,
findProjectRoot as coreFindProjectRoot,
normalizeProjectRoot
} from '../../../../src/utils/path-utils.js';
import { PROJECT_MARKERS } from '../../../../src/constants/paths.js';
import fs from 'fs';
import { fileURLToPath } from 'url';
import os from 'os';
// Store last found project root to improve performance on subsequent calls (primarily for CLI)
export let lastFoundProjectRoot = null;
// Project marker files that indicate a potential project root
export const PROJECT_MARKERS = [
// Task Master specific
'tasks.json',
'tasks/tasks.json',
// Common version control
'.git',
'.svn',
// Common package files
'package.json',
'pyproject.toml',
'Gemfile',
'go.mod',
'Cargo.toml',
// Common IDE/editor folders
'.cursor',
'.vscode',
'.idea',
// Common dependency directories (check if directory)
'node_modules',
'venv',
'.venv',
// Common config files
'.env',
'.eslintrc',
'tsconfig.json',
'babel.config.js',
'jest.config.js',
'webpack.config.js',
// Common CI/CD files
'.github/workflows',
'.gitlab-ci.yml',
'.circleci/config.yml'
];
/**
* MCP-specific path utilities that extend core path utilities with session support
* This module handles session-specific path resolution for the MCP server
* Gets the path to the task-master package installation directory
* NOTE: This might become unnecessary if CLI fallback in MCP utils is removed.
* @returns {string} - Absolute path to the package installation directory
*/
export function getPackagePath() {
// When running from source, __dirname is the directory containing this file
// When running from npm, we need to find the package root
const thisFilePath = fileURLToPath(import.meta.url);
const thisFileDir = path.dirname(thisFilePath);
/**
* Silent logger for MCP context to prevent console output
*/
const silentLogger = {
info: () => {},
warn: () => {},
error: () => {},
debug: () => {},
success: () => {}
};
/**
* Cache for last found project root to improve performance
*/
export const lastFoundProjectRoot = null;
/**
* Find PRD file with MCP support
* @param {string} [explicitPath] - Explicit path to PRD file (highest priority)
* @param {Object} [args] - Arguments object for context
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to PRD file or null if not found
*/
export function findPrdPath(explicitPath, args = null, log = silentLogger) {
return coreFindPrdPath(explicitPath, args, log);
// Navigate from core/utils up to the package root
// In dev: /path/to/task-master/mcp-server/src/core/utils -> /path/to/task-master
// In npm: /path/to/node_modules/task-master/mcp-server/src/core/utils -> /path/to/node_modules/task-master
return path.resolve(thisFileDir, '../../../../');
}
/**
* Resolve tasks.json path from arguments
* Prioritizes explicit path parameter, then uses fallback logic
* @param {Object} args - Arguments object containing projectRoot and optional file path
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to tasks.json or null if not found
* Finds the absolute path to the tasks.json file based on project root and arguments.
* @param {Object} args - Command arguments, potentially including 'projectRoot' and 'file'.
* @param {Object} log - Logger object.
* @returns {string} - Absolute path to the tasks.json file.
* @throws {Error} - If tasks.json cannot be found.
*/
export function resolveTasksPath(args, log = silentLogger) {
// Get explicit path from args.file if provided
const explicitPath = args?.file;
const rawProjectRoot = args?.projectRoot;
export function findTasksJsonPath(args, log) {
// PRECEDENCE ORDER for finding tasks.json:
// 1. Explicitly provided `projectRoot` in args (Highest priority, expected in MCP context)
// 2. Previously found/cached `lastFoundProjectRoot` (primarily for CLI performance)
// 3. Search upwards from current working directory (`process.cwd()`) - CLI usage
// If explicit path is provided and absolute, use it directly
if (explicitPath && path.isAbsolute(explicitPath)) {
return explicitPath;
// 1. If project root is explicitly provided (e.g., from MCP session), use it directly
if (args.projectRoot) {
const projectRoot = args.projectRoot;
log.info(`Using explicitly provided project root: ${projectRoot}`);
try {
// This will throw if tasks.json isn't found within this root
return findTasksJsonInDirectory(projectRoot, args.file, log);
} catch (error) {
// Include debug info in error
const debugInfo = {
projectRoot,
currentDir: process.cwd(),
serverDir: path.dirname(process.argv[1]),
possibleProjectRoot: path.resolve(
path.dirname(process.argv[1]),
'../..'
),
lastFoundProjectRoot,
searchedPaths: error.message
};
error.message = `Tasks file not found in any of the expected locations relative to project root "${projectRoot}" (from session).\nDebug Info: ${JSON.stringify(debugInfo, null, 2)}`;
throw error;
}
}
// Normalize project root if provided
const projectRoot = rawProjectRoot
? normalizeProjectRoot(rawProjectRoot)
: null;
// --- Fallback logic primarily for CLI or when projectRoot isn't passed ---
// If explicit path is relative, resolve it relative to normalized projectRoot
if (explicitPath && projectRoot) {
return path.resolve(projectRoot, explicitPath);
// 2. If we have a last known project root that worked, try it first
if (lastFoundProjectRoot) {
log.info(`Trying last known project root: ${lastFoundProjectRoot}`);
try {
// Use the cached root
const tasksPath = findTasksJsonInDirectory(
lastFoundProjectRoot,
args.file,
log
);
return tasksPath; // Return if found in cached root
} catch (error) {
log.info(
`Task file not found in last known project root, continuing search.`
);
// Continue with search if not found in cache
}
}
// Use core findTasksPath with explicit path and normalized projectRoot context
if (projectRoot) {
return coreFindTasksPath(explicitPath, { projectRoot }, log);
}
// 3. Start search from current directory (most common CLI scenario)
const startDir = process.cwd();
log.info(
`Searching for tasks.json starting from current directory: ${startDir}`
);
// Fallback to core function without projectRoot context
return coreFindTasksPath(explicitPath, null, log);
// Try to find tasks.json by walking up the directory tree from cwd
try {
// This will throw if not found in the CWD tree
return findTasksJsonWithParentSearch(startDir, args.file, log);
} catch (error) {
// If all attempts fail, augment and throw the original error from CWD search
error.message = `${error.message}\n\nPossible solutions:\n1. Run the command from your project directory containing tasks.json\n2. Use --project-root=/path/to/project to specify the project location (if using CLI)\n3. Ensure the project root is correctly passed from the client (if using MCP)\n\nCurrent working directory: ${startDir}\nLast known project root: ${lastFoundProjectRoot}\nProject root from args: ${args.projectRoot}`;
throw error;
}
}
/**
* Resolve PRD path from arguments
* @param {Object} args - Arguments object containing projectRoot and optional input path
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to PRD file or null if not found
* Check if a directory contains any project marker files or directories
* @param {string} dirPath - Directory to check
* @returns {boolean} - True if the directory contains any project markers
*/
export function resolvePrdPath(args, log = silentLogger) {
// Get explicit path from args.input if provided
const explicitPath = args?.input;
const rawProjectRoot = args?.projectRoot;
// If explicit path is provided and absolute, use it directly
if (explicitPath && path.isAbsolute(explicitPath)) {
return explicitPath;
}
// Normalize project root if provided
const projectRoot = rawProjectRoot
? normalizeProjectRoot(rawProjectRoot)
: null;
// If explicit path is relative, resolve it relative to normalized projectRoot
if (explicitPath && projectRoot) {
return path.resolve(projectRoot, explicitPath);
}
// Use core findPRDPath with explicit path and normalized projectRoot context
if (projectRoot) {
return coreFindPrdPath(explicitPath, { projectRoot }, log);
}
// Fallback to core function without projectRoot context
return coreFindPrdPath(explicitPath, null, log);
function hasProjectMarkers(dirPath) {
return PROJECT_MARKERS.some((marker) => {
const markerPath = path.join(dirPath, marker);
// Check if the marker exists as either a file or directory
return fs.existsSync(markerPath);
});
}
/**
 * Resolve complexity report path from arguments
 * @param {Object} args - Arguments object containing projectRoot and optional complexityReport path
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to complexity report or null if not found
 */
export function resolveComplexityReportPath(args, log = silentLogger) {
// Get explicit path from args.complexityReport if provided
const explicitPath = args?.complexityReport;
const rawProjectRoot = args?.projectRoot;
// If explicit path is provided and absolute, use it directly
if (explicitPath && path.isAbsolute(explicitPath)) {
return explicitPath;
}
// Normalize project root if provided
const projectRoot = rawProjectRoot
? normalizeProjectRoot(rawProjectRoot)
: null;
// If explicit path is relative, resolve it relative to normalized projectRoot
if (explicitPath && projectRoot) {
return path.resolve(projectRoot, explicitPath);
}
// Use core findComplexityReportPath with explicit path and normalized projectRoot context
if (projectRoot) {
return coreFindComplexityReportPath(explicitPath, { projectRoot }, log);
}
// Fallback to core function without projectRoot context
return coreFindComplexityReportPath(explicitPath, null, log);
}
/**
 * Search for tasks.json in a specific directory
 * @param {string} dirPath - Directory to search in
 * @param {string} explicitFilePath - Optional explicit file path relative to dirPath
 * @param {Object} log - Logger object
 * @returns {string} - Absolute path to tasks.json
 * @throws {Error} - If tasks.json cannot be found
 */
function findTasksJsonInDirectory(dirPath, explicitFilePath, log) {
const possiblePaths = [];
// 1. If a file is explicitly provided relative to dirPath
if (explicitFilePath) {
possiblePaths.push(path.resolve(dirPath, explicitFilePath));
}
// 2. Check the standard locations relative to dirPath
possiblePaths.push(
path.join(dirPath, 'tasks.json'),
path.join(dirPath, 'tasks', 'tasks.json')
);
log.info(`Checking potential task file paths: ${possiblePaths.join(', ')}`);
// Find the first existing path
for (const p of possiblePaths) {
log.info(`Checking if exists: ${p}`);
const exists = fs.existsSync(p);
log.info(`Path ${p} exists: ${exists}`);
if (exists) {
log.info(`Found tasks file at: ${p}`);
// Store the project root for future use
lastFoundProjectRoot = dirPath;
return p;
}
}
// If no file was found, throw an error
const error = new Error(
`Tasks file not found in any of the expected locations relative to ${dirPath}: ${possiblePaths.join(', ')}`
);
error.code = 'TASKS_FILE_NOT_FOUND';
throw error;
}
/**
 * Resolve any project-relative path from arguments
 * @param {string} relativePath - Relative path to resolve
 * @param {Object} args - Arguments object containing projectRoot
 * @returns {string} - Resolved absolute path
 */
export function resolveProjectPath(relativePath, args) {
// Ensure we have a projectRoot from args
if (!args?.projectRoot) {
throw new Error('projectRoot is required in args to resolve project paths');
}
// Normalize the project root to prevent double .taskmaster paths
const projectRoot = normalizeProjectRoot(args.projectRoot);
// If already absolute, return as-is
if (path.isAbsolute(relativePath)) {
return relativePath;
}
// Resolve relative to normalized projectRoot
return path.resolve(projectRoot, relativePath);
}
/**
 * Recursively search for tasks.json in the given directory and parent directories
 * Also looks for project markers to identify potential project roots
 * @param {string} startDir - Directory to start searching from
 * @param {string} explicitFilePath - Optional explicit file path
 * @param {Object} log - Logger object
 * @returns {string} - Absolute path to tasks.json
 * @throws {Error} - If tasks.json cannot be found in any parent directory
 */
function findTasksJsonWithParentSearch(startDir, explicitFilePath, log) {
let currentDir = startDir;
const rootDir = path.parse(currentDir).root;
// Keep traversing up until we hit the root directory
while (currentDir !== rootDir) {
// First check for tasks.json directly
try {
return findTasksJsonInDirectory(currentDir, explicitFilePath, log);
} catch (error) {
// If tasks.json not found but the directory has project markers,
// log it as a potential project root (helpful for debugging)
if (hasProjectMarkers(currentDir)) {
log.info(`Found project markers in ${currentDir}, but no tasks.json`);
}
// Move up to parent directory
const parentDir = path.dirname(currentDir);
// Check if we've reached the root
if (parentDir === currentDir) {
break;
}
log.info(
`Tasks file not found in ${currentDir}, searching in parent directory: ${parentDir}`
);
currentDir = parentDir;
}
}
// If we've searched all the way to the root and found nothing
const error = new Error(
`Tasks file not found in ${startDir} or any parent directory.`
);
error.code = 'TASKS_FILE_NOT_FOUND';
throw error;
}
// Note: findTasksWithNpmConsideration is not used by findTasksJsonPath and might be legacy or used elsewhere.
// If confirmed unused, it could potentially be removed in a separate cleanup.
function findTasksWithNpmConsideration(startDir, log) {
// First try our recursive parent search from cwd
try {
return findTasksJsonWithParentSearch(startDir, null, log);
} catch (error) {
// If that fails, try looking relative to the executable location
const execPath = process.argv[1];
const execDir = path.dirname(execPath);
log.info(`Looking for tasks file relative to executable at: ${execDir}`);
try {
return findTasksJsonWithParentSearch(execDir, null, log);
} catch (secondError) {
// If that also fails, check standard locations in user's home directory
const homeDir = os.homedir();
log.info(`Looking for tasks file in home directory: ${homeDir}`);
try {
// Check standard locations in home dir
return findTasksJsonInDirectory(
path.join(homeDir, '.task-master'),
null,
log
);
} catch (thirdError) {
// If all approaches fail, throw the original error
throw error;
}
}
}
}
/**
 * Finds potential PRD document files based on common naming patterns
 * @param {string} projectRoot - The project root directory
 * @param {string|null} explicitPath - Optional explicit path provided by the user
 * @param {Object} log - Logger object
 * @returns {string|null} - The path to the first found PRD file, or null if none found
 */
export function findPRDDocumentPath(projectRoot, explicitPath, log) {
// If explicit path is provided, check if it exists
if (explicitPath) {
const fullPath = path.isAbsolute(explicitPath)
? explicitPath
: path.resolve(projectRoot, explicitPath);
if (fs.existsSync(fullPath)) {
log.info(`Using provided PRD document path: ${fullPath}`);
return fullPath;
} else {
log.warn(
`Provided PRD document path not found: ${fullPath}, will search for alternatives`
);
}
}
// Common locations and file patterns for PRD documents
const commonLocations = [
'', // Project root
'scripts/'
];
const commonFileNames = ['PRD.md', 'prd.md', 'PRD.txt', 'prd.txt'];
// Check all possible combinations
for (const location of commonLocations) {
for (const fileName of commonFileNames) {
const potentialPath = path.join(projectRoot, location, fileName);
if (fs.existsSync(potentialPath)) {
log.info(`Found PRD document at: ${potentialPath}`);
return potentialPath;
}
}
}
log.warn(`No PRD document found in common locations within ${projectRoot}`);
return null;
}
export function findComplexityReportPath(projectRoot, explicitPath, log) {
// If explicit path is provided, check if it exists
if (explicitPath) {
const fullPath = path.isAbsolute(explicitPath)
? explicitPath
: path.resolve(projectRoot, explicitPath);
if (fs.existsSync(fullPath)) {
log.info(`Using provided complexity report path: ${fullPath}`);
return fullPath;
} else {
log.warn(
`Provided complexity report path not found: ${fullPath}, will search for alternatives`
);
}
}
// Common locations and file patterns for complexity reports
const commonLocations = [
'', // Project root
'scripts/'
];
const commonFileNames = [
'complexity-report.json',
'task-complexity-report.json'
];
// Check all possible combinations
for (const location of commonLocations) {
for (const fileName of commonFileNames) {
const potentialPath = path.join(projectRoot, location, fileName);
if (fs.existsSync(potentialPath)) {
log.info(`Found complexity report at: ${potentialPath}`);
return potentialPath;
}
}
}
log.warn(`No complexity report found in common locations within ${projectRoot}`);
return null;
}
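As a quick illustration of the fallback search that findPRDDocumentPath and findComplexityReportPath perform, here is a sketch (the project root value is hypothetical, not part of the diff) of the candidate lists they walk through; the first path that exists on disk wins:
// Illustrative only: the candidate paths generated for an assumed project root.
const exampleRoot = '/home/dev/my-app'; // hypothetical path for this sketch
const prdCandidates = ['', 'scripts/'].flatMap((location) =>
  ['PRD.md', 'prd.md', 'PRD.txt', 'prd.txt'].map((name) =>
    path.join(exampleRoot, location, name)
  )
);
const reportCandidates = ['', 'scripts/'].flatMap((location) =>
  ['complexity-report.json', 'task-complexity-report.json'].map((name) =>
    path.join(exampleRoot, location, name)
  )
);
// prdCandidates[0] === '/home/dev/my-app/PRD.md'
// reportCandidates[3] === '/home/dev/my-app/scripts/task-complexity-report.json'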
/**
 * Find project root using core utility
 * @param {string} [startDir] - Directory to start searching from
 * @returns {string|null} - Project root path or null if not found
 */
export function findProjectRoot(startDir) {
return coreFindProjectRoot(startDir);
}
/**
 * Resolves the tasks output directory path
 * @param {string} projectRoot - The project root directory
 * @param {string|null} explicitPath - Optional explicit output path provided by the user
 * @param {Object} log - Logger object
 * @returns {string} - The resolved tasks directory path
 */
export function resolveTasksOutputPath(projectRoot, explicitPath, log) {
// If explicit path is provided, use it
if (explicitPath) {
const outputPath = path.isAbsolute(explicitPath)
? explicitPath
: path.resolve(projectRoot, explicitPath);
log.info(`Using provided tasks output path: ${outputPath}`);
return outputPath;
}
// Default output path: tasks/tasks.json in the project root
const defaultPath = path.resolve(projectRoot, 'tasks', 'tasks.json');
log.info(`Using default tasks output path: ${defaultPath}`);
// Ensure the directory exists
const outputDir = path.dirname(defaultPath);
if (!fs.existsSync(outputDir)) {
log.info(`Creating tasks directory: ${outputDir}`);
fs.mkdirSync(outputDir, { recursive: true });
}
return defaultPath;
}
// MAIN EXPORTS FOR MCP TOOLS - these are the functions MCP tools should use
/**
 * Find tasks.json path from arguments - primary MCP function
 * @param {Object} args - Arguments object containing projectRoot and optional file path
 * @param {Object} [log] - Log function to prevent console logging
 * @returns {string|null} - Resolved path to tasks.json or null if not found
 */
export function findTasksPath(args, log = silentLogger) {
return resolveTasksPath(args, log);
}
/**
 * Find complexity report path from arguments - primary MCP function
 * @param {Object} args - Arguments object containing projectRoot and optional complexityReport path
 * @param {Object} [log] - Log function to prevent console logging
 * @returns {string|null} - Resolved path to complexity report or null if not found
 */
export function findComplexityReportPath(args, log = silentLogger) {
return resolveComplexityReportPath(args, log);
}
/**
 * Resolves various file paths needed for MCP operations based on project root
 * @param {string} projectRoot - The project root directory
 * @param {Object} args - Command arguments that may contain explicit paths
 * @param {Object} log - Logger object
 * @returns {Object} - An object containing resolved paths
 */
export function resolveProjectPaths(projectRoot, args, log) {
const prdPath = findPRDDocumentPath(projectRoot, args.input, log);
const tasksJsonPath = resolveTasksOutputPath(projectRoot, args.output, log);
// You can add more path resolutions here as needed
return {
projectRoot,
prdPath,
tasksJsonPath
// Add additional path properties as needed
};
}
/**
 * Find PRD path - primary MCP function
 * @param {string} [explicitPath] - Explicit path to PRD file
 * @param {Object} [args] - Arguments object for context (not used in current implementation)
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to PRD file or null if not found
 */
export function findPRDPath(explicitPath, args = null, log = silentLogger) {
return findPrdPath(explicitPath, args, log);
}
// Legacy aliases for backward compatibility - DEPRECATED
export const findTasksJsonPath = findTasksPath;
export const findComplexityReportJsonPath = findComplexityReportPath;
// Re-export PROJECT_MARKERS for MCP tools that import it from this module
export { PROJECT_MARKERS };
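For orientation, a minimal usage sketch of how an MCP tool might consume the primary exports above; the args shape, log object, and the idea of treating the complexity report as optional are taken from the tool handlers shown later in this compare, while the helper name itself is hypothetical:
// Hypothetical helper showing how a tool resolves its paths (illustrative only).
function resolvePathsForTool(args, log) {
  const tasksJsonPath = findTasksPath(
    { projectRoot: args.projectRoot, file: args.file },
    log
  );
  let complexityReportPath = null;
  try {
    complexityReportPath = findComplexityReportPath(
      { projectRoot: args.projectRoot, complexityReport: args.complexityReport },
      log
    );
  } catch (error) {
    // The complexity report is optional, so tools continue without it.
  }
  return { tasksJsonPath, complexityReportPath };
}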

View File

@@ -7,10 +7,11 @@ import { z } from 'zod';
import {
handleApiResult,
createErrorResponse,
getProjectRootFromSession,
withNormalizedProjectRoot
} from './utils.js';
import { addDependencyDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the addDependency tool with the MCP server
@@ -43,7 +44,7 @@ export function registerAddDependencyTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { addSubtaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the addSubtask tool with the MCP server
@@ -67,7 +67,7 @@ export function registerAddSubtaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { addTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the addTask tool with the MCP server
@@ -70,7 +70,7 @@ export function registerAddTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -12,8 +12,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { analyzeTaskComplexityDirect } from '../core/task-master-core.js'; // Assuming core functions are exported via task-master-core.js
import { findTasksPath } from '../core/utils/path-utils.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the analyze_project_complexity tool
@@ -42,7 +41,7 @@ export function registerAnalyzeProjectComplexityTool(server) {
.string()
.optional()
.describe(
`Output file path relative to project root (default: ${COMPLEXITY_REPORT_FILE}).`
'Output file path relative to project root (default: scripts/task-complexity-report.json).'
),
file: z
.string()
@@ -81,7 +80,7 @@ export function registerAnalyzeProjectComplexityTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
@@ -95,7 +94,11 @@ export function registerAnalyzeProjectComplexityTool(server) {
const outputPath = args.output
? path.resolve(args.projectRoot, args.output)
: path.resolve(args.projectRoot, COMPLEXITY_REPORT_FILE);
: path.resolve(
args.projectRoot,
'scripts',
'task-complexity-report.json'
);
log.info(`${toolName}: Report output path: ${outputPath}`);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { clearSubtasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the clearSubtasks tool with the MCP server
@@ -48,7 +48,7 @@ export function registerClearSubtasksTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,8 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { complexityReportDirect } from '../core/task-master-core.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
import { findComplexityReportPath } from '../core/utils/path-utils.js';
import path from 'path';
/**
* Register the complexityReport tool with the MCP server
@@ -26,7 +25,7 @@ export function registerComplexityReportTool(server) {
.string()
.optional()
.describe(
`Path to the report file (default: ${COMPLEXITY_REPORT_FILE})`
'Path to the report file (default: scripts/task-complexity-report.json)'
),
projectRoot: z
.string()
@@ -38,18 +37,14 @@ export function registerComplexityReportTool(server) {
`Getting complexity report with args: ${JSON.stringify(args)}`
);
const pathArgs = {
projectRoot: args.projectRoot,
complexityReport: args.file
};
const reportPath = findComplexityReportPath(pathArgs, log);
if (!reportPath) {
return createErrorResponse(
'No complexity report found. Run task-master analyze-complexity first.'
);
}
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
const reportPath = args.file
? path.resolve(args.projectRoot, args.file)
: path.resolve(
args.projectRoot,
'scripts',
'task-complexity-report.json'
);
const result = await complexityReportDirect(
{
@@ -59,7 +54,9 @@ export function registerComplexityReportTool(server) {
);
if (result.success) {
log.info('Successfully retrieved complexity report');
log.info(
`Successfully retrieved complexity report${result.fromCache ? ' (from cache)' : ''}`
);
} else {
log.error(
`Failed to retrieve complexity report: ${result.error.message}`

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { expandAllTasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the expandAll tool with the MCP server
@@ -67,7 +67,7 @@ export function registerExpandAllTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { expandTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the expand-task tool with the MCP server
@@ -54,7 +54,7 @@ export function registerExpandTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { fixDependenciesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the fixDependencies tool with the MCP server
@@ -33,7 +33,7 @@ export function registerFixDependenciesTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { generateTaskFilesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import path from 'path';
/**
@@ -39,7 +39,7 @@ export function registerGenerateTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -11,7 +11,7 @@ import {
} from './utils.js';
import { showTaskDirect } from '../core/task-master-core.js';
import {
findTasksPath,
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';
@@ -77,7 +77,7 @@ export function registerShowTaskTool(server) {
// Resolve the path to tasks.json using the NORMALIZED projectRoot from args
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: projectRoot, file: file },
log
);
@@ -94,10 +94,8 @@ export function registerShowTaskTool(server) {
let complexityReportPath;
try {
complexityReportPath = findComplexityReportPath(
{
projectRoot: projectRoot,
complexityReport: args.complexityReport
},
projectRoot,
args.complexityReport,
log
);
} catch (error) {
@@ -116,7 +114,9 @@ export function registerShowTaskTool(server) {
);
if (result.success) {
log.info(`Successfully retrieved task details for ID: ${args.id}`);
log.info(
`Successfully retrieved task details for ID: ${args.id}${result.fromCache ? ' (from cache)' : ''}`
);
} else {
log.error(`Failed to get task: ${result.error.message}`);
}

View File

@@ -11,8 +11,8 @@ import {
} from './utils.js';
import { listTasksDirect } from '../core/task-master-core.js';
import {
resolveTasksPath,
resolveComplexityReportPath
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';
/**
@@ -55,10 +55,13 @@ export function registerListTasksTool(server) {
try {
log.info(`Getting tasks with filters: ${JSON.stringify(args)}`);
// Resolve the path to tasks.json using new path utilities
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = resolveTasksPath(args, log);
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
@@ -69,13 +72,14 @@ export function registerListTasksTool(server) {
// Resolve the path to complexity report
let complexityReportPath;
try {
complexityReportPath = resolveComplexityReportPath(args, session);
complexityReportPath = findComplexityReportPath(
args.projectRoot,
args.complexityReport,
log
);
} catch (error) {
log.error(`Error finding complexity report: ${error.message}`);
// This is optional, so we don't fail the operation
complexityReportPath = null;
}
const result = await listTasksDirect(
{
tasksJsonPath: tasksJsonPath,
@@ -87,7 +91,7 @@ export function registerListTasksTool(server) {
);
log.info(
`Retrieved ${result.success ? result.data?.tasks?.length || 0 : 0} tasks`
`Retrieved ${result.success ? result.data?.tasks?.length || 0 : 0} tasks${result.fromCache ? ' (from cache)' : ''}`
);
return handleApiResult(result, log, 'Error getting tasks');
} catch (error) {

View File

@@ -47,6 +47,7 @@ export function registerModelsTool(server) {
),
projectRoot: z
.string()
.optional()
.describe('The directory of the project. Must be an absolute path.'),
openrouter: z
.boolean()

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { moveTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the moveTask tool with the MCP server
@@ -34,6 +34,7 @@ export function registerMoveTaskTool(server) {
file: z.string().optional().describe('Custom path to tasks.json file'),
projectRoot: z
.string()
.optional()
.describe(
'Root directory of the project (typically derived from session)'
)
@@ -44,7 +45,7 @@ export function registerMoveTaskTool(server) {
let tasksJsonPath = args.file;
if (!tasksJsonPath) {
tasksJsonPath = findTasksPath(args, log);
tasksJsonPath = findTasksJsonPath(args, log);
}
// Parse comma-separated IDs
@@ -94,16 +95,13 @@ export function registerMoveTaskTool(server) {
}
}
return handleApiResult(
{
success: true,
data: {
moves: results,
message: `Successfully moved ${results.length} tasks`
}
},
log
);
return {
success: true,
data: {
moves: results,
message: `Successfully moved ${results.length} tasks`
}
};
} else {
// Moving a single task
return handleApiResult(

View File

@@ -1,22 +1,22 @@
/**
* tools/next-task.js
* Tool to find the next task to work on based on dependencies and status
* Tool to find the next task to work on
*/
import { z } from 'zod';
import {
createErrorResponse,
handleApiResult,
createErrorResponse,
withNormalizedProjectRoot
} from './utils.js';
import { nextTaskDirect } from '../core/task-master-core.js';
import {
resolveTasksPath,
resolveComplexityReportPath
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';
/**
* Register the nextTask tool with the MCP server
* Register the next-task tool with the MCP server
* @param {Object} server - FastMCP server instance
*/
export function registerNextTaskTool(server) {
@@ -40,10 +40,13 @@ export function registerNextTaskTool(server) {
try {
log.info(`Finding next task with args: ${JSON.stringify(args)}`);
// Resolve the path to tasks.json using new path utilities
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = resolveTasksPath(args, session);
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
@@ -51,16 +54,17 @@ export function registerNextTaskTool(server) {
);
}
// Resolve the path to complexity report (optional)
// Resolve the path to complexity report
let complexityReportPath;
try {
complexityReportPath = resolveComplexityReportPath(args, session);
complexityReportPath = findComplexityReportPath(
args.projectRoot,
args.complexityReport,
log
);
} catch (error) {
log.error(`Error finding complexity report: ${error.message}`);
// This is optional, so we don't fail the operation
complexityReportPath = null;
}
const result = await nextTaskDirect(
{
tasksJsonPath: tasksJsonPath,
@@ -69,10 +73,19 @@ export function registerNextTaskTool(server) {
log
);
log.info(`Next task result: ${result.success ? 'found' : 'none'}`);
if (result.success) {
log.info(
`Successfully found next task: ${result.data?.task?.id || 'No available tasks'}`
);
} else {
log.error(
`Failed to find next task: ${result.error?.message || 'Unknown error'}`
);
}
return handleApiResult(result, log, 'Error finding next task');
} catch (error) {
log.error(`Error finding next task: ${error.message}`);
log.error(`Error in nextTask tool: ${error.message}`);
return createErrorResponse(error.message);
}
})

View File

@@ -4,17 +4,13 @@
*/
import { z } from 'zod';
import path from 'path';
import {
handleApiResult,
withNormalizedProjectRoot,
createErrorResponse
createErrorResponse,
withNormalizedProjectRoot
} from './utils.js';
import { parsePRDDirect } from '../core/task-master-core.js';
import {
PRD_FILE,
TASKMASTER_DOCS_DIR,
TASKMASTER_TASKS_FILE
} from '../../../src/constants/paths.js';
/**
* Register the parse_prd tool
@@ -23,51 +19,80 @@ import {
export function registerParsePRDTool(server) {
server.addTool({
name: 'parse_prd',
description: `Parse a Product Requirements Document (PRD) text file to automatically generate initial tasks. Reinitializing the project is not necessary to run this tool. It is recommended to run parse-prd after initializing the project and creating/importing a prd.txt file in the project root's ${TASKMASTER_DOCS_DIR} directory.`,
description:
"Parse a Product Requirements Document (PRD) text file to automatically generate initial tasks. Reinitializing the project is not necessary to run this tool. It is recommended to run parse-prd after initializing the project and creating/importing a prd.txt file in the project root's scripts/ directory.",
parameters: z.object({
input: z
.string()
.optional()
.default(PRD_FILE)
.default('scripts/prd.txt')
.describe('Absolute path to the PRD document file (.txt, .md, etc.)'),
projectRoot: z
.string()
.describe('The directory of the project. Must be an absolute path.'),
output: z
.string()
.optional()
.describe(
`Output path for tasks.json file (default: ${TASKMASTER_TASKS_FILE})`
),
numTasks: z
.string()
.optional()
.describe(
'Approximate number of top-level tasks to generate (default: 10). As the agent, if you have enough information, ensure to enter a number of tasks that would logically scale with project complexity. Avoid entering numbers above 50 due to context window limitations.'
),
output: z
.string()
.optional()
.describe(
'Output path for tasks.json file (default: tasks/tasks.json)'
),
force: z
.boolean()
.optional()
.default(false)
.describe('Overwrite existing output file without prompting.'),
research: z
.boolean()
.optional()
.describe(
'Enable Taskmaster to use the research role for potentially more informed task generation. Requires appropriate API key.'
),
append: z
.boolean()
.optional()
.describe('Append generated tasks to existing file.')
.default(false)
.describe('Append generated tasks to existing file.'),
research: z
.boolean()
.optional()
.default(false)
.describe(
'Use the research model for research-backed task generation, providing more comprehensive, accurate and up-to-date task details.'
),
projectRoot: z
.string()
.describe('The directory of the project. Must be an absolute path.')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
const toolName = 'parse_prd';
try {
const result = await parsePRDDirect(args, log, { session });
return handleApiResult(result, log);
log.info(
`Executing ${toolName} tool with args: ${JSON.stringify(args)}`
);
// Call Direct Function - Pass relevant args including projectRoot
const result = await parsePRDDirect(
{
input: args.input,
output: args.output,
numTasks: args.numTasks,
force: args.force,
append: args.append,
research: args.research,
projectRoot: args.projectRoot
},
log,
{ session }
);
log.info(
`${toolName}: Direct function result: success=${result.success}`
);
return handleApiResult(result, log, 'Error parsing PRD');
} catch (error) {
log.error(`Error in parse_prd: ${error.message}`);
return createErrorResponse(`Failed to parse PRD: ${error.message}`);
log.error(
`Critical error in ${toolName} tool execute: ${error.message}`
);
return createErrorResponse(
`Internal tool error (${toolName}): ${error.message}`
);
}
})
});
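To make the parameter schema above concrete, a hedged sketch of the arguments object an MCP client might send to parse_prd; all values are hypothetical and the defaults simply mirror the schema in this hunk:
// Illustrative parse_prd arguments (not part of the diff).
const parsePrdArgs = {
  input: 'scripts/prd.txt', // schema default shown above
  output: 'tasks/tasks.json',
  numTasks: '12',
  force: false,
  append: false,
  research: false,
  projectRoot: '/abs/path/to/project'
};
// The execute handler then forwards these to parsePRDDirect(parsePrdArgs, log, { session }).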

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { removeDependencyDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the removeDependency tool with the MCP server
@@ -42,7 +42,7 @@ export function registerRemoveDependencyTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { removeSubtaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the removeSubtask tool with the MCP server
@@ -53,7 +53,7 @@ export function registerRemoveSubtaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { removeTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the remove-task tool with the MCP server
@@ -42,7 +42,7 @@ export function registerRemoveTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -14,7 +14,7 @@ import {
nextTaskDirect
} from '../core/task-master-core.js';
import {
findTasksPath,
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';
import { TASK_STATUS_OPTIONS } from '../../../src/constants/task-status.js';
@@ -56,7 +56,7 @@ export function registerSetTaskStatusTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
@@ -70,10 +70,8 @@ export function registerSetTaskStatusTool(server) {
let complexityReportPath;
try {
complexityReportPath = findComplexityReportPath(
{
projectRoot: args.projectRoot,
complexityReport: args.complexityReport
},
args.projectRoot,
args.complexityReport,
log
);
} catch (error) {

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { updateSubtaskByIdDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the update-subtask tool with the MCP server
@@ -20,7 +20,7 @@ export function registerUpdateSubtaskTool(server) {
server.addTool({
name: 'update_subtask',
description:
'Appends timestamped information to a specific subtask without replacing existing content. If you just want to update the subtask status, use set_task_status instead.',
'Appends timestamped information to a specific subtask without replacing existing content',
parameters: z.object({
id: z
.string()
@@ -44,7 +44,7 @@ export function registerUpdateSubtaskTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { updateTaskByIdDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the update-task tool with the MCP server
@@ -48,7 +48,7 @@ export function registerUpdateTaskTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { updateTasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the update tool with the MCP server
@@ -56,7 +56,7 @@ export function registerUpdateTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath({ projectRoot, file }, log);
tasksJsonPath = findTasksJsonPath({ projectRoot, file }, log);
log.info(`${toolName}: Resolved tasks path: ${tasksJsonPath}`);
} catch (error) {
log.error(`${toolName}: Error finding tasks.json: ${error.message}`);

View File

@@ -7,7 +7,6 @@ import { spawnSync } from 'child_process';
import path from 'path';
import fs from 'fs';
import { contextManager } from '../core/context-manager.js'; // Import the singleton
import { fileURLToPath } from 'url';
// Import path utilities to ensure consistent path resolution
import {
@@ -15,50 +14,6 @@ import {
PROJECT_MARKERS
} from '../core/utils/path-utils.js';
const __filename = fileURLToPath(import.meta.url);
// Cache for version info to avoid repeated file reads
let cachedVersionInfo = null;
/**
* Get version information from package.json
* @returns {Object} Version information
*/
function getVersionInfo() {
// Return cached version if available
if (cachedVersionInfo) {
return cachedVersionInfo;
}
try {
// Navigate to the project root from the tools directory
const packageJsonPath = path.join(
path.dirname(__filename),
'../../../package.json'
);
if (fs.existsSync(packageJsonPath)) {
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf-8'));
cachedVersionInfo = {
version: packageJson.version,
name: packageJson.name
};
return cachedVersionInfo;
}
cachedVersionInfo = {
version: 'unknown',
name: 'task-master-ai'
};
return cachedVersionInfo;
} catch (error) {
// Fallback version info if package.json can't be read
cachedVersionInfo = {
version: 'unknown',
name: 'task-master-ai'
};
return cachedVersionInfo;
}
}
/**
* Get normalized project root path
* @param {string|undefined} projectRootRaw - Raw project root from arguments
@@ -244,19 +199,17 @@ function getProjectRootFromSession(session, log) {
* @param {Function} processFunction - Optional function to process successful result data
* @returns {Object} - Standardized MCP response object
*/
async function handleApiResult(
function handleApiResult(
result,
log,
errorPrefix = 'API error',
processFunction = processMCPResponseData
) {
// Get version info for every response
const versionInfo = getVersionInfo();
if (!result.success) {
const errorMsg = result.error?.message || `Unknown ${errorPrefix}`;
log.error(`${errorPrefix}: ${errorMsg}`);
return createErrorResponse(errorMsg, versionInfo);
// Include cache status in error logs
log.error(`${errorPrefix}: ${errorMsg}. From cache: ${result.fromCache}`); // Keep logging cache status on error
return createErrorResponse(errorMsg);
}
// Process the result data if needed
@@ -264,14 +217,16 @@ async function handleApiResult(
? processFunction(result.data)
: result.data;
log.info('Successfully completed operation');
// Log success including cache status
log.info(`Successfully completed operation. From cache: ${result.fromCache}`); // Add success log with cache status
// Create the response payload including version info
// Create the response payload including the fromCache flag
const responsePayload = {
data: processedData,
version: versionInfo
fromCache: result.fromCache, // Get the flag from the original 'result'
data: processedData // Nest the processed data under a 'data' key
};
// Pass this combined payload to createContentResponse
return createContentResponse(responsePayload);
}
@@ -365,8 +320,8 @@ function executeTaskMasterCommand(
* @param {Function} options.actionFn - The async function to execute if the cache misses.
* Should return an object like { success: boolean, data?: any, error?: { code: string, message: string } }.
* @param {Object} options.log - The logger instance.
* @returns {Promise<Object>} - An object containing the result.
* Format: { success: boolean, data?: any, error?: { code: string, message: string } }
* @returns {Promise<Object>} - An object containing the result, indicating if it was from cache.
* Format: { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }
*/
async function getCachedOrExecute({ cacheKey, actionFn, log }) {
// Check cache first
@@ -374,7 +329,11 @@ async function getCachedOrExecute({ cacheKey, actionFn, log }) {
if (cachedResult !== undefined) {
log.info(`Cache hit for key: ${cacheKey}`);
return cachedResult;
// Return the cached data in the same structure as a fresh result
return {
...cachedResult, // Spread the cached result to maintain its structure
fromCache: true // Just add the fromCache flag
};
}
log.info(`Cache miss for key: ${cacheKey}. Executing action function.`);
@@ -382,10 +341,12 @@ async function getCachedOrExecute({ cacheKey, actionFn, log }) {
// Execute the action function if cache missed
const result = await actionFn();
// If the action was successful, cache the result
// If the action was successful, cache the result (but without fromCache flag)
if (result.success && result.data !== undefined) {
log.info(`Action successful. Caching result for key: ${cacheKey}`);
contextManager.setCachedData(cacheKey, result);
// Cache the entire result structure (minus the fromCache flag)
const { fromCache, ...resultToCache } = result;
contextManager.setCachedData(cacheKey, resultToCache);
} else if (!result.success) {
log.warn(
`Action failed for cache key ${cacheKey}. Result not cached. Error: ${result.error?.message}`
@@ -396,7 +357,11 @@ async function getCachedOrExecute({ cacheKey, actionFn, log }) {
);
}
return result;
// Return the fresh result, indicating it wasn't from cache
return {
...result,
fromCache: false
};
}
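A minimal usage sketch of this cache wrapper, assuming the variant that returns a fromCache flag; the cache key format, wrapper function name, and the listTasksDirect call are assumptions for illustration, not code from this compare:
// Hypothetical caller of getCachedOrExecute (illustrative only).
async function getTasksCached(tasksJsonPath, statusFilter, log) {
  const cacheKey = `listTasks:${tasksJsonPath}:${statusFilter ?? 'all'}`; // assumed key format
  const result = await getCachedOrExecute({
    cacheKey,
    actionFn: async () => listTasksDirect({ tasksJsonPath, status: statusFilter }, log),
    log
  });
  log.info(`fromCache: ${result.fromCache}`); // true on a cache hit, false on a fresh execution
  return handleApiResult(result, log, 'Error getting tasks');
}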
/**
@@ -495,22 +460,14 @@ function createContentResponse(content) {
/**
* Creates error response for tools
* @param {string} errorMessage - Error message to include in response
* @param {Object} [versionInfo] - Optional version information object
* @returns {Object} - Error content response object in FastMCP format
*/
function createErrorResponse(errorMessage, versionInfo) {
// Provide fallback version info if not provided
if (!versionInfo) {
versionInfo = getVersionInfo();
}
function createErrorResponse(errorMessage) {
return {
content: [
{
type: 'text',
text: `Error: ${errorMessage}
Version: ${versionInfo.version}
Name: ${versionInfo.name}`
text: `Error: ${errorMessage}`
}
],
isError: true

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { validateDependenciesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the validateDependencies tool with the MCP server
@@ -34,7 +34,7 @@ export function registerValidateDependenciesTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

package-lock.json (generated): 2247 changed lines. File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.16.2-rc.0",
"version": "0.15.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",
@@ -21,8 +21,8 @@
"release": "changeset publish",
"inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
"mcp-server": "node mcp-server/server.js",
"format-check": "biome format .",
"format": "biome format . --write"
"format-check": "prettier --check .",
"format": "prettier --write ."
},
"keywords": [
"claude",
@@ -39,17 +39,14 @@
"author": "Eyal Toledano",
"license": "MIT WITH Commons-Clause",
"dependencies": {
"@ai-sdk/amazon-bedrock": "^2.2.9",
"@ai-sdk/anthropic": "^1.2.10",
"@ai-sdk/azure": "^1.3.17",
"@ai-sdk/google": "^1.2.13",
"@ai-sdk/google-vertex": "^2.2.23",
"@ai-sdk/mistral": "^1.2.7",
"@ai-sdk/openai": "^1.3.20",
"@ai-sdk/perplexity": "^1.1.7",
"@ai-sdk/xai": "^1.2.15",
"@anthropic-ai/sdk": "^0.39.0",
"@aws-sdk/credential-providers": "^3.817.0",
"@openrouter/ai-sdk-provider": "^0.4.5",
"ai": "^4.3.10",
"boxen": "^8.0.1",
@@ -59,7 +56,7 @@
"cors": "^2.8.5",
"dotenv": "^16.3.1",
"express": "^4.21.2",
"fastmcp": "^2.2.2",
"fastmcp": "^1.20.5",
"figlet": "^1.8.0",
"fuse.js": "^7.1.0",
"gradient-string": "^3.0.0",
@@ -74,7 +71,7 @@
"zod": "^3.23.8"
},
"engines": {
"node": ">=18.0.0"
"node": ">=14.0.0"
},
"repository": {
"type": "git",
@@ -95,11 +92,10 @@
"src/**"
],
"overrides": {
"node-fetch": "^2.6.12",
"node-fetch": "^3.3.2",
"whatwg-url": "^11.0.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@changesets/changelog-github": "^0.5.1",
"@changesets/cli": "^2.28.1",
"@types/jest": "^29.5.14",
@@ -108,6 +104,7 @@
"jest": "^29.7.0",
"jest-environment-node": "^29.7.0",
"mock-fs": "^5.5.0",
"node-fetch": "^3.3.2",
"prettier": "^3.5.3",
"react": "^18.3.1",
"supertest": "^7.1.0",

View File

@@ -25,17 +25,6 @@ import gradient from 'gradient-string';
import { isSilentMode } from './modules/utils.js';
import { convertAllCursorRulesToRooRules } from './modules/rule-transformer.js';
import { execSync } from 'child_process';
import {
EXAMPLE_PRD_FILE,
TASKMASTER_CONFIG_FILE,
TASKMASTER_TEMPLATES_DIR,
TASKMASTER_DIR,
TASKMASTER_TASKS_DIR,
TASKMASTER_DOCS_DIR,
TASKMASTER_REPORTS_DIR,
ENV_EXAMPLE_FILE,
GITIGNORE_FILE
} from '../src/constants/paths.js';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
@@ -173,7 +162,8 @@ alias taskmaster='task-master'
log('success', `Added Task Master aliases to ${shellConfigFile}`);
log(
'info',
`To use the aliases in your current terminal, run: source ${shellConfigFile}`
'To use the aliases in your current terminal, run: source ' +
shellConfigFile
);
return true;
@@ -243,7 +233,7 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
case 'boomerang-rules':
case 'code-rules':
case 'debug-rules':
case 'test-rules': {
case 'test-rules':
// Extract the mode name from the template name (e.g., 'architect' from 'architect-rules')
const mode = templateName.split('-')[0];
sourcePath = path.join(
@@ -256,7 +246,6 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
templateName
);
break;
}
default:
// For other files like env.example, gitignore, etc. that don't have direct equivalents
sourcePath = path.join(__dirname, '..', 'assets', templateName);
@@ -297,7 +286,10 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
if (newLines.length > 0) {
// Add a comment to separate the original content from our additions
const updatedContent = `${existingContent.trim()}\n\n# Added by Task Master AI\n${newLines.join('\n')}`;
const updatedContent =
existingContent.trim() +
'\n\n# Added by Claude Task Master\n' +
newLines.join('\n');
fs.writeFileSync(targetPath, updatedContent);
log('success', `Updated ${targetPath} with additional entries`);
} else {
@@ -315,7 +307,10 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
const existingContent = fs.readFileSync(targetPath, 'utf8');
// Add a separator comment before appending our content
const updatedContent = `${existingContent.trim()}\n\n# Added by Task Master - Development Workflow Rules\n\n${content}`;
const updatedContent =
existingContent.trim() +
'\n\n# Added by Task Master - Development Workflow Rules\n\n' +
content;
fs.writeFileSync(targetPath, updatedContent);
log('success', `Updated ${targetPath} with additional rules`);
return;
@@ -395,7 +390,7 @@ async function initializeProject(options = {}) {
};
}
createProjectStructure(addAliases, dryRun, options);
createProjectStructure(addAliases, dryRun);
} else {
// Interactive logic
log('info', 'Required options not provided, proceeding with prompts.');
@@ -451,7 +446,7 @@ async function initializeProject(options = {}) {
}
// Create structure using only necessary values
createProjectStructure(addAliasesPrompted, dryRun, options);
createProjectStructure(addAliasesPrompted, dryRun);
} catch (error) {
rl.close();
log('error', `Error during initialization process: ${error.message}`);
@@ -470,29 +465,29 @@ function promptQuestion(rl, question) {
}
// Function to create the project structure
function createProjectStructure(addAliases, dryRun, options) {
function createProjectStructure(addAliases, dryRun) {
const targetDir = process.cwd();
log('info', `Initializing project in ${targetDir}`);
// Define Roo modes locally (external integration, not part of core Task Master)
const ROO_MODES = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test'];
// Create directories
ensureDirectoryExists(path.join(targetDir, '.cursor/rules'));
ensureDirectoryExists(path.join(targetDir, '.cursor', 'rules'));
// Create Roo directories
ensureDirectoryExists(path.join(targetDir, '.roo'));
ensureDirectoryExists(path.join(targetDir, '.roo/rules'));
for (const mode of ROO_MODES) {
ensureDirectoryExists(path.join(targetDir, '.roo', 'rules'));
for (const mode of [
'architect',
'ask',
'boomerang',
'code',
'debug',
'test'
]) {
ensureDirectoryExists(path.join(targetDir, '.roo', `rules-${mode}`));
}
// Create NEW .taskmaster directory structure (using constants)
ensureDirectoryExists(path.join(targetDir, TASKMASTER_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_TASKS_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_DOCS_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_REPORTS_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_TEMPLATES_DIR));
ensureDirectoryExists(path.join(targetDir, 'scripts'));
ensureDirectoryExists(path.join(targetDir, 'tasks'));
// Setup MCP configuration for integration with Cursor
setupMCPConfiguration(targetDir);
@@ -505,44 +500,44 @@ function createProjectStructure(addAliases, dryRun, options) {
// Copy .env.example
copyTemplateFile(
'env.example',
path.join(targetDir, ENV_EXAMPLE_FILE),
path.join(targetDir, '.env.example'),
replacements
);
// Copy config.json with project name to NEW location
// Copy .taskmasterconfig with project name
copyTemplateFile(
'config.json',
path.join(targetDir, TASKMASTER_CONFIG_FILE),
'.taskmasterconfig',
path.join(targetDir, '.taskmasterconfig'),
{
...replacements
}
);
// Copy .gitignore
copyTemplateFile('gitignore', path.join(targetDir, GITIGNORE_FILE));
copyTemplateFile('gitignore', path.join(targetDir, '.gitignore'));
// Copy dev_workflow.mdc
copyTemplateFile(
'dev_workflow.mdc',
path.join(targetDir, '.cursor/rules/dev_workflow.mdc')
path.join(targetDir, '.cursor', 'rules', 'dev_workflow.mdc')
);
// Copy taskmaster.mdc
copyTemplateFile(
'taskmaster.mdc',
path.join(targetDir, '.cursor/rules/taskmaster.mdc')
path.join(targetDir, '.cursor', 'rules', 'taskmaster.mdc')
);
// Copy cursor_rules.mdc
copyTemplateFile(
'cursor_rules.mdc',
path.join(targetDir, '.cursor/rules/cursor_rules.mdc')
path.join(targetDir, '.cursor', 'rules', 'cursor_rules.mdc')
);
// Copy self_improve.mdc
copyTemplateFile(
'self_improve.mdc',
path.join(targetDir, '.cursor/rules/self_improve.mdc')
path.join(targetDir, '.cursor', 'rules', 'self_improve.mdc')
);
// Generate Roo rules from Cursor rules
@@ -556,15 +551,26 @@ function createProjectStructure(addAliases, dryRun, options) {
copyTemplateFile('.roomodes', path.join(targetDir, '.roomodes'));
// Copy Roo rule files for each mode
for (const mode of ROO_MODES) {
const rooModes = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test'];
for (const mode of rooModes) {
copyTemplateFile(
`${mode}-rules`,
path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`)
);
}
// Copy example_prd.txt to NEW location
copyTemplateFile('example_prd.txt', path.join(targetDir, EXAMPLE_PRD_FILE));
// Copy example_prd.txt
copyTemplateFile(
'example_prd.txt',
path.join(targetDir, 'scripts', 'example_prd.txt')
);
// // Create main README.md
// copyTemplateFile(
// 'README-task-master.md',
// path.join(targetDir, 'README-task-master.md'),
// replacements
// );
// Initialize git repository if git is available
try {
@@ -601,7 +607,7 @@ function createProjectStructure(addAliases, dryRun, options) {
}
// === Add Model Configuration Step ===
if (!isSilentMode() && !dryRun && !options?.yes) {
if (!isSilentMode() && !dryRun) {
console.log(
boxen(chalk.cyan('Configuring AI Models...'), {
padding: 0.5,
@@ -632,12 +638,6 @@ function createProjectStructure(addAliases, dryRun, options) {
);
} else if (dryRun) {
log('info', 'DRY RUN: Skipping interactive model setup.');
} else if (options?.yes) {
log('info', 'Skipping interactive model setup due to --yes flag.');
log(
'info',
'Default AI models will be used. You can configure different models later using "task-master models --setup" or "task-master models --set-..." commands.'
);
}
// ====================================
@@ -645,9 +645,11 @@ function createProjectStructure(addAliases, dryRun, options) {
if (!isSilentMode()) {
console.log(
boxen(
`${warmGradient.multiline(
warmGradient.multiline(
figlet.textSync('Success!', { font: 'Standard' })
)}\n${chalk.green('Project initialized successfully!')}`,
) +
'\n' +
chalk.green('Project initialized successfully!'),
{
padding: 1,
margin: 1,
@@ -662,29 +664,76 @@ function createProjectStructure(addAliases, dryRun, options) {
if (!isSilentMode()) {
console.log(
boxen(
`${chalk.cyan.bold('Things you should do next:')}\n\n${chalk.white('1. ')}${chalk.yellow(
'Configure AI models (if needed) and add API keys to `.env`'
)}\n${chalk.white(' ├─ ')}${chalk.dim('Models: Use `task-master models` commands')}\n${chalk.white(' └─ ')}${chalk.dim(
'Keys: Add provider API keys to .env (or inside the MCP config file i.e. .cursor/mcp.json)'
)}\n${chalk.white('2. ')}${chalk.yellow(
'Discuss your idea with AI and ask for a PRD using example_prd.txt, and save it to scripts/PRD.txt'
)}\n${chalk.white('3. ')}${chalk.yellow(
'Ask Cursor Agent (or run CLI) to parse your PRD and generate initial tasks:'
)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('parse_prd')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master parse-prd scripts/prd.txt')}\n${chalk.white('4. ')}${chalk.yellow(
'Ask Cursor to analyze the complexity of the tasks in your PRD using research'
)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('analyze_project_complexity')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master analyze-complexity')}\n${chalk.white('5. ')}${chalk.yellow(
'Ask Cursor to expand all of your tasks using the complexity analysis'
)}\n${chalk.white('6. ')}${chalk.yellow('Ask Cursor to begin working on the next task')}\n${chalk.white('7. ')}${chalk.yellow(
'Add new tasks anytime using the add-task command or MCP tool'
)}\n${chalk.white('8. ')}${chalk.yellow(
'Ask Cursor to set the status of one or many tasks/subtasks at a time. Use the task id from the task lists.'
)}\n${chalk.white('9. ')}${chalk.yellow(
'Ask Cursor to update all tasks from a specific task id based on new learnings or pivots in your project.'
)}\n${chalk.white('10. ')}${chalk.green.bold('Ship it!')}\n\n${chalk.dim(
'* Review the README.md file to learn how to use other commands via Cursor Agent.'
)}\n${chalk.dim(
'* Use the task-master command without arguments to see all available commands.'
)}`,
chalk.cyan.bold('Things you should do next:') +
'\n\n' +
chalk.white('1. ') +
chalk.yellow(
'Configure AI models (if needed) and add API keys to `.env`'
) +
'\n' +
chalk.white(' ├─ ') +
chalk.dim('Models: Use `task-master models` commands') +
'\n' +
chalk.white(' └─ ') +
chalk.dim(
'Keys: Add provider API keys to .env (or inside the MCP config file i.e. .cursor/mcp.json)'
) +
'\n' +
chalk.white('2. ') +
chalk.yellow(
'Discuss your idea with AI and ask for a PRD using example_prd.txt, and save it to scripts/PRD.txt'
) +
'\n' +
chalk.white('3. ') +
chalk.yellow(
'Ask Cursor Agent (or run CLI) to parse your PRD and generate initial tasks:'
) +
'\n' +
chalk.white(' └─ ') +
chalk.dim('MCP Tool: ') +
chalk.cyan('parse_prd') +
chalk.dim(' | CLI: ') +
chalk.cyan('task-master parse-prd scripts/prd.txt') +
'\n' +
chalk.white('4. ') +
chalk.yellow(
'Ask Cursor to analyze the complexity of the tasks in your PRD using research'
) +
'\n' +
chalk.white(' └─ ') +
chalk.dim('MCP Tool: ') +
chalk.cyan('analyze_project_complexity') +
chalk.dim(' | CLI: ') +
chalk.cyan('task-master analyze-complexity') +
'\n' +
chalk.white('5. ') +
chalk.yellow(
'Ask Cursor to expand all of your tasks using the complexity analysis'
) +
'\n' +
chalk.white('6. ') +
chalk.yellow('Ask Cursor to begin working on the next task') +
'\n' +
chalk.white('7. ') +
chalk.yellow(
'Ask Cursor to set the status of one or many tasks/subtasks at a time. Use the task id from the task lists.'
) +
'\n' +
chalk.white('8. ') +
chalk.yellow(
'Ask Cursor to update all tasks from a specific task id based on new learnings or pivots in your project.'
) +
'\n' +
chalk.white('9. ') +
chalk.green.bold('Ship it!') +
'\n\n' +
chalk.dim(
'* Review the README.md file to learn how to use other commands via Cursor Agent.'
) +
'\n' +
chalk.dim(
'* Use the task-master command without arguments to see all available commands.'
),
{
padding: 1,
margin: 1,

View File

@@ -19,42 +19,18 @@ import {
MODEL_MAP,
getDebugFlag,
getBaseUrlForRole,
isApiKeySet,
getOllamaBaseURL,
getAzureBaseURL,
getBedrockBaseURL,
getVertexProjectId,
getVertexLocation
isApiKeySet
} from './config-manager.js';
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
// Import provider classes
import {
AnthropicAIProvider,
PerplexityAIProvider,
GoogleAIProvider,
OpenAIProvider,
XAIProvider,
OpenRouterAIProvider,
OllamaAIProvider,
BedrockAIProvider,
AzureProvider,
VertexAIProvider
} from '../../src/ai-providers/index.js';
// Create provider instances
const PROVIDERS = {
anthropic: new AnthropicAIProvider(),
perplexity: new PerplexityAIProvider(),
google: new GoogleAIProvider(),
openai: new OpenAIProvider(),
xai: new XAIProvider(),
openrouter: new OpenRouterAIProvider(),
ollama: new OllamaAIProvider(),
bedrock: new BedrockAIProvider(),
azure: new AzureProvider(),
vertex: new VertexAIProvider()
};
import * as anthropic from '../../src/ai-providers/anthropic.js';
import * as perplexity from '../../src/ai-providers/perplexity.js';
import * as google from '../../src/ai-providers/google.js';
import * as openai from '../../src/ai-providers/openai.js';
import * as xai from '../../src/ai-providers/xai.js';
import * as openrouter from '../../src/ai-providers/openrouter.js';
import * as ollama from '../../src/ai-providers/ollama.js';
// TODO: Import other provider modules when implemented
// Helper function to get cost for a specific model
function _getCostForModel(providerName, modelId) {
@@ -86,6 +62,51 @@ function _getCostForModel(providerName, modelId) {
};
}
// --- Provider Function Map ---
// Maps provider names (lowercase) to their respective service functions
const PROVIDER_FUNCTIONS = {
anthropic: {
generateText: anthropic.generateAnthropicText,
streamText: anthropic.streamAnthropicText,
generateObject: anthropic.generateAnthropicObject
},
perplexity: {
generateText: perplexity.generatePerplexityText,
streamText: perplexity.streamPerplexityText,
generateObject: perplexity.generatePerplexityObject
},
google: {
// Add Google entry
generateText: google.generateGoogleText,
streamText: google.streamGoogleText,
generateObject: google.generateGoogleObject
},
openai: {
// ADD: OpenAI entry
generateText: openai.generateOpenAIText,
streamText: openai.streamOpenAIText,
generateObject: openai.generateOpenAIObject
},
xai: {
// ADD: xAI entry
generateText: xai.generateXaiText,
streamText: xai.streamXaiText,
generateObject: xai.generateXaiObject // Note: Object generation might be unsupported
},
openrouter: {
// ADD: OpenRouter entry
generateText: openrouter.generateOpenRouterText,
streamText: openrouter.streamOpenRouterText,
generateObject: openrouter.generateOpenRouterObject
},
ollama: {
generateText: ollama.generateOllamaText,
streamText: ollama.streamOllamaText,
generateObject: ollama.generateOllamaObject
}
	// TODO: Add entries for other providers when implemented
};
// --- Configuration for Retries ---
const MAX_RETRIES = 2;
const INITIAL_RETRY_DELAY_MS = 1000;
@@ -170,9 +191,7 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY',
ollama: 'OLLAMA_API_KEY',
bedrock: 'AWS_ACCESS_KEY_ID',
vertex: 'GOOGLE_API_KEY'
ollama: 'OLLAMA_API_KEY'
};
const envVarName = keyMap[providerName];
@@ -184,11 +203,12 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
const apiKey = resolveEnvVariable(envVarName, session, projectRoot);
// Special handling for providers that can use alternative auth
if (providerName === 'ollama' || providerName === 'bedrock') {
// Special handling for Ollama - API key is optional
if (providerName === 'ollama') {
return apiKey || null;
}
// For all other providers, API key is required
if (!apiKey) {
throw new Error(
`Required API key ${envVarName} for provider '${providerName}' is not set in environment, session, or .env file.`
@@ -209,15 +229,14 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
* @throws {Error} If the call fails after all retries.
*/
async function _attemptProviderCallWithRetries(
provider,
serviceType,
providerApiFn,
callParams,
providerName,
modelId,
attemptRole
) {
let retries = 0;
const fnName = serviceType;
const fnName = providerApiFn.name;
while (retries <= MAX_RETRIES) {
try {
@@ -228,8 +247,8 @@ async function _attemptProviderCallWithRetries(
);
}
// Call the appropriate method on the provider instance
const result = await provider[serviceType](callParams);
// Call the specific provider function directly
const result = await providerApiFn(callParams);
if (getDebugFlag()) {
log(
@@ -331,8 +350,9 @@ async function _unifiedServiceRunner(serviceType, params) {
modelId,
apiKey,
roleParams,
provider,
baseURL,
providerFnSet,
providerApiFn,
baseUrl,
providerResponse,
telemetryData = null;
@@ -371,20 +391,7 @@ async function _unifiedServiceRunner(serviceType, params) {
continue;
}
// Get provider instance
provider = PROVIDERS[providerName?.toLowerCase()];
if (!provider) {
log(
'warn',
`Skipping role '${currentRole}': Provider '${providerName}' not supported.`
);
lastError =
lastError ||
new Error(`Unsupported provider configured: ${providerName}`);
continue;
}
// Check API key if needed
// Check if API key is set for the current provider and role (excluding 'ollama')
if (providerName?.toLowerCase() !== 'ollama') {
if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
log(
@@ -400,74 +407,40 @@ async function _unifiedServiceRunner(serviceType, params) {
}
}
// Get base URL if configured (optional for most providers)
baseURL = getBaseUrlForRole(currentRole, effectiveProjectRoot);
// For Azure, use the global Azure base URL if role-specific URL is not configured
if (providerName?.toLowerCase() === 'azure' && !baseURL) {
baseURL = getAzureBaseURL(effectiveProjectRoot);
log('debug', `Using global Azure base URL: ${baseURL}`);
} else if (providerName?.toLowerCase() === 'ollama' && !baseURL) {
// For Ollama, use the global Ollama base URL if role-specific URL is not configured
baseURL = getOllamaBaseURL(effectiveProjectRoot);
log('debug', `Using global Ollama base URL: ${baseURL}`);
} else if (providerName?.toLowerCase() === 'bedrock' && !baseURL) {
// For Bedrock, use the global Bedrock base URL if role-specific URL is not configured
baseURL = getBedrockBaseURL(effectiveProjectRoot);
log('debug', `Using global Bedrock base URL: ${baseURL}`);
roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
baseUrl = getBaseUrlForRole(currentRole, effectiveProjectRoot);
providerFnSet = PROVIDER_FUNCTIONS[providerName?.toLowerCase()];
if (!providerFnSet) {
log(
'warn',
`Skipping role '${currentRole}': Provider '${providerName}' not supported or map entry missing.`
);
lastError =
lastError ||
new Error(`Unsupported provider configured: ${providerName}`);
continue;
}
providerApiFn = providerFnSet[serviceType];
if (typeof providerApiFn !== 'function') {
log(
'warn',
`Skipping role '${currentRole}': Service type '${serviceType}' not implemented for provider '${providerName}'.`
);
lastError =
lastError ||
new Error(
`Service '${serviceType}' not implemented for provider ${providerName}`
);
continue;
}
// Get AI parameters for the current role
roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
apiKey = _resolveApiKey(
providerName?.toLowerCase(),
session,
effectiveProjectRoot
);
// Prepare provider-specific configuration
let providerSpecificParams = {};
// Handle Vertex AI specific configuration
if (providerName?.toLowerCase() === 'vertex') {
// Get Vertex project ID and location
const projectId =
getVertexProjectId(effectiveProjectRoot) ||
resolveEnvVariable(
'VERTEX_PROJECT_ID',
session,
effectiveProjectRoot
);
const location =
getVertexLocation(effectiveProjectRoot) ||
resolveEnvVariable(
'VERTEX_LOCATION',
session,
effectiveProjectRoot
) ||
'us-central1';
// Get credentials path if available
const credentialsPath = resolveEnvVariable(
'GOOGLE_APPLICATION_CREDENTIALS',
session,
effectiveProjectRoot
);
// Add Vertex-specific parameters
providerSpecificParams = {
projectId,
location,
...(credentialsPath && { credentials: { credentialsFromEnv: true } })
};
log(
'debug',
`Using Vertex AI configuration: Project ID=${projectId}, Location=${location}`
);
}
const messages = [];
if (systemPrompt) {
messages.push({ role: 'system', content: systemPrompt });
@@ -503,15 +476,13 @@ async function _unifiedServiceRunner(serviceType, params) {
maxTokens: roleParams.maxTokens,
temperature: roleParams.temperature,
messages,
...(baseURL && { baseURL }),
baseUrl,
...(serviceType === 'generateObject' && { schema, objectName }),
...providerSpecificParams,
...restApiParams
};
providerResponse = await _attemptProviderCallWithRetries(
provider,
serviceType,
providerApiFn,
callParams,
providerName,
modelId,
@@ -577,8 +548,7 @@ async function _unifiedServiceRunner(serviceType, params) {
lowerCaseMessage.includes('does not support tool_use') ||
lowerCaseMessage.includes('tool use is not supported') ||
lowerCaseMessage.includes('tools are not supported') ||
lowerCaseMessage.includes('function calling is not supported') ||
lowerCaseMessage.includes('tool use is not supported')
lowerCaseMessage.includes('function calling is not supported')
) {
const specificErrorMsg = `Model '${modelId || 'unknown'}' via provider '${providerName || 'unknown'}' does not support the 'tool use' required by generateObjectService. Please configure a model that supports tool/function calling for the '${currentRole}' role, or use generateTextService if structured output is not strictly required.`;
log('error', `[Tool Support Error] ${specificErrorMsg}`);
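
Side note: the hunks above revise a provider-dispatch map wrapped in a bounded retry loop. A minimal, self-contained sketch of that pattern (not the project's actual implementation) is shown below; `fake` and `callWithRetries` are illustrative names only.

// Illustrative only: a provider/service dispatch map with bounded retries.
const MAX_RETRIES = 2;
const INITIAL_RETRY_DELAY_MS = 1000;

const providers = {
	fake: {
		// Stand-in for a real provider module's generateText implementation.
		generateText: async ({ messages }) => ({
			text: `echo: ${messages.at(-1).content}`
		})
	}
};

async function callWithRetries(providerName, serviceType, params) {
	const fnSet = providers[providerName?.toLowerCase()];
	const apiFn = fnSet?.[serviceType];
	if (typeof apiFn !== 'function') {
		throw new Error(
			`Service '${serviceType}' not implemented for provider '${providerName}'`
		);
	}
	let lastError;
	for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
		try {
			return await apiFn(params);
		} catch (err) {
			lastError = err;
			// Exponential backoff between attempts.
			await new Promise((r) =>
				setTimeout(r, INITIAL_RETRY_DELAY_MS * 2 ** attempt)
			);
		}
	}
	throw lastError;
}

// Usage:
// const res = await callWithRetries('fake', 'generateText', { messages: [{ role: 'user', content: 'hi' }] });
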

View File

@@ -32,8 +32,7 @@ import {
removeTask,
findTaskById,
taskExists,
moveTask,
migrateProject
moveTask
} from './task-manager.js';
import {
@@ -54,12 +53,6 @@ import {
getBaseUrlForRole
} from './config-manager.js';
import {
COMPLEXITY_REPORT_FILE,
PRD_FILE,
TASKMASTER_TASKS_FILE
} from '../../src/constants/paths.js';
import {
displayBanner,
displayHelp,
@@ -72,7 +65,8 @@ import {
stopLoadingIndicator,
displayModelConfiguration,
displayAvailableModels,
displayApiKeyStatus
displayApiKeyStatus,
displayAiUsageSummary
} from './ui.js';
import { initializeProject } from '../init.js';
@@ -87,8 +81,6 @@ import {
TASK_STATUS_OPTIONS
} from '../../src/constants/task-status.js';
import { getTaskMasterVersion } from '../../src/utils/getVersion.js';
import { syncTasksToReadme } from './sync-readme.js';
/**
* Runs the interactive setup process for model configuration.
* @param {string|null} projectRoot - The resolved project root directory.
@@ -163,11 +155,11 @@ async function runInteractiveSetup(projectRoot) {
}
// Helper function to fetch Ollama models (duplicated for CLI context)
function fetchOllamaModelsCLI(baseURL = 'http://localhost:11434/api') {
function fetchOllamaModelsCLI(baseUrl = 'http://localhost:11434/api') {
return new Promise((resolve) => {
try {
// Parse the base URL to extract hostname, port, and base path
const url = new URL(baseURL);
const url = new URL(baseUrl);
const isHttps = url.protocol === 'https:';
const port = url.port || (isHttps ? 443 : 80);
const basePath = url.pathname.endsWith('/')
@@ -252,11 +244,6 @@ async function runInteractiveSetup(projectRoot) {
value: '__CUSTOM_OLLAMA__'
};
const customBedrockOption = {
name: '* Custom Bedrock model', // Add Bedrock custom option
value: '__CUSTOM_BEDROCK__'
};
let choices = [];
let defaultIndex = 0; // Default to 'Cancel'
@@ -303,9 +290,8 @@ async function runInteractiveSetup(projectRoot) {
commonPrefix.push(cancelOption);
commonPrefix.push(customOpenRouterOption);
commonPrefix.push(customOllamaOption);
commonPrefix.push(customBedrockOption);
const prefixLength = commonPrefix.length; // Initial prefix length
let prefixLength = commonPrefix.length; // Initial prefix length
if (allowNone) {
choices = [
@@ -450,13 +436,13 @@ async function runInteractiveSetup(projectRoot) {
modelIdToSet = customId;
providerHint = 'ollama';
// Get the Ollama base URL from config for this role
const ollamaBaseURL = getBaseUrlForRole(role, projectRoot);
const ollamaBaseUrl = getBaseUrlForRole(role, projectRoot);
// Validate against live Ollama list
const ollamaModels = await fetchOllamaModelsCLI(ollamaBaseURL);
const ollamaModels = await fetchOllamaModelsCLI(ollamaBaseUrl);
if (ollamaModels === null) {
console.error(
chalk.red(
`Error: Unable to connect to Ollama server at ${ollamaBaseURL}. Please ensure Ollama is running and try again.`
`Error: Unable to connect to Ollama server at ${ollamaBaseUrl}. Please ensure Ollama is running and try again.`
)
);
setupSuccess = false;
@@ -469,47 +455,12 @@ async function runInteractiveSetup(projectRoot) {
);
console.log(
chalk.yellow(
`You can check available models with: curl ${ollamaBaseURL}/tags`
`You can check available models with: curl ${ollamaBaseUrl}/tags`
)
);
setupSuccess = false;
return true; // Continue setup, but mark as failed
}
} else if (selectedValue === '__CUSTOM_BEDROCK__') {
isCustomSelection = true;
const { customId } = await inquirer.prompt([
{
type: 'input',
name: 'customId',
message: `Enter the custom Bedrock Model ID for the ${role} role (e.g., anthropic.claude-3-sonnet-20240229-v1:0):`
}
]);
if (!customId) {
console.log(chalk.yellow('No custom ID entered. Skipping role.'));
return true; // Continue setup, but don't set this role
}
modelIdToSet = customId;
providerHint = 'bedrock';
// Check if AWS environment variables exist
if (
!process.env.AWS_ACCESS_KEY_ID ||
!process.env.AWS_SECRET_ACCESS_KEY
) {
console.error(
chalk.red(
'Error: AWS_ACCESS_KEY_ID and/or AWS_SECRET_ACCESS_KEY environment variables are missing. Please set them before using custom Bedrock models.'
)
);
setupSuccess = false;
return true; // Continue setup, but mark as failed
}
console.log(
chalk.blue(
`Custom Bedrock model "${modelIdToSet}" will be used. No validation performed.`
)
);
} else if (
selectedValue &&
typeof selectedValue === 'object' &&
@@ -656,7 +607,7 @@ function registerCommands(programInstance) {
'-i, --input <file>',
'Path to the PRD file (alternative to positional argument)'
)
.option('-o, --output <file>', 'Output file path', TASKMASTER_TASKS_FILE)
.option('-o, --output <file>', 'Output file path', 'tasks/tasks.json')
.option('-n, --num-tasks <number>', 'Number of tasks to generate', '10')
.option('-f, --force', 'Skip confirmation when overwriting existing tasks')
.option(
@@ -670,14 +621,14 @@ function registerCommands(programInstance) {
.action(async (file, options) => {
// Use input option if file argument not provided
const inputFile = file || options.input;
const defaultPrdPath = PRD_FILE;
const defaultPrdPath = 'scripts/prd.txt';
const numTasks = parseInt(options.numTasks, 10);
const outputPath = options.output;
const force = options.force || false;
const append = options.append || false;
const research = options.research || false;
let useForce = force;
const useAppend = append;
let useAppend = append;
// Helper function to check if tasks.json exists and confirm overwrite
async function confirmOverwriteIfNeeded() {
@@ -717,12 +668,38 @@ function registerCommands(programInstance) {
console.log(
chalk.yellow(
`No PRD file specified and default PRD file not found at ${PRD_FILE}.`
'No PRD file specified and default PRD file not found at scripts/prd.txt.'
)
);
console.log(
boxen(
`${chalk.white.bold('Parse PRD Help')}\n\n${chalk.cyan('Usage:')}\n task-master parse-prd <prd-file.txt> [options]\n\n${chalk.cyan('Options:')}\n -i, --input <file> Path to the PRD file (alternative to positional argument)\n -o, --output <file> Output file path (default: "${TASKMASTER_TASKS_FILE}")\n -n, --num-tasks <number> Number of tasks to generate (default: 10)\n -f, --force Skip confirmation when overwriting existing tasks\n --append Append new tasks to existing tasks.json instead of overwriting\n -r, --research Use Perplexity AI for research-backed task generation\n\n${chalk.cyan('Example:')}\n task-master parse-prd requirements.txt --num-tasks 15\n task-master parse-prd --input=requirements.txt\n task-master parse-prd --force\n task-master parse-prd requirements_v2.txt --append\n task-master parse-prd requirements.txt --research\n\n${chalk.yellow('Note: This command will:')}\n 1. Look for a PRD file at ${PRD_FILE} by default\n 2. Use the file specified by --input or positional argument if provided\n 3. Generate tasks from the PRD and either:\n - Overwrite any existing tasks.json file (default)\n - Append to existing tasks.json if --append is used`,
chalk.white.bold('Parse PRD Help') +
'\n\n' +
chalk.cyan('Usage:') +
'\n' +
` task-master parse-prd <prd-file.txt> [options]\n\n` +
chalk.cyan('Options:') +
'\n' +
' -i, --input <file> Path to the PRD file (alternative to positional argument)\n' +
' -o, --output <file> Output file path (default: "tasks/tasks.json")\n' +
' -n, --num-tasks <number> Number of tasks to generate (default: 10)\n' +
' -f, --force Skip confirmation when overwriting existing tasks\n' +
' --append Append new tasks to existing tasks.json instead of overwriting\n' +
' -r, --research Use Perplexity AI for research-backed task generation\n\n' +
chalk.cyan('Example:') +
'\n' +
' task-master parse-prd requirements.txt --num-tasks 15\n' +
' task-master parse-prd --input=requirements.txt\n' +
' task-master parse-prd --force\n' +
' task-master parse-prd requirements_v2.txt --append\n' +
' task-master parse-prd requirements.txt --research\n\n' +
chalk.yellow('Note: This command will:') +
'\n' +
' 1. Look for a PRD file at scripts/prd.txt by default\n' +
' 2. Use the file specified by --input or positional argument if provided\n' +
' 3. Generate tasks from the PRD and either:\n' +
' - Overwrite any existing tasks.json file (default)\n' +
' - Append to existing tasks.json if --append is used',
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
)
);
@@ -774,11 +751,7 @@ function registerCommands(programInstance) {
.description(
'Update multiple tasks with ID >= "from" based on new information or implementation changes'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'--from <id>',
'Task ID to start updating from (tasks with ID >= this value will be updated)',
@@ -793,7 +766,7 @@ function registerCommands(programInstance) {
'Use Perplexity AI for research-backed task updates'
)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const fromId = parseInt(options.from, 10); // Validation happens here
const prompt = options.prompt;
const useResearch = options.research || false;
@@ -859,11 +832,7 @@ function registerCommands(programInstance) {
.description(
'Update a single specific task by ID with new information (use --id parameter)'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-i, --id <id>', 'Task ID to update (required)')
.option(
'-p, --prompt <text>',
@@ -875,7 +844,7 @@ function registerCommands(programInstance) {
)
.action(async (options) => {
try {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
// Validate required parameters
if (!options.id) {
@@ -890,7 +859,7 @@ function registerCommands(programInstance) {
// Parse the task ID and validate it's a number
const taskId = parseInt(options.id, 10);
if (Number.isNaN(taskId) || taskId <= 0) {
if (isNaN(taskId) || taskId <= 0) {
console.error(
chalk.red(
`Error: Invalid task ID: ${options.id}. Task ID must be a positive integer.`
@@ -926,7 +895,7 @@ function registerCommands(programInstance) {
console.error(
chalk.red(`Error: Tasks file not found at path: ${tasksPath}`)
);
if (tasksPath === TASKMASTER_TASKS_FILE) {
if (tasksPath === 'tasks/tasks.json') {
console.log(
chalk.yellow(
'Hint: Run task-master init or task-master parse-prd to create tasks.json first'
@@ -1016,11 +985,7 @@ function registerCommands(programInstance) {
.description(
'Update a subtask by appending additional timestamped information'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-i, --id <id>',
'Subtask ID to update in format "parentId.subtaskId" (required)'
@@ -1032,7 +997,7 @@ function registerCommands(programInstance) {
.option('-r, --research', 'Use Perplexity AI for research-backed updates')
.action(async (options) => {
try {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
// Validate required parameters
if (!options.id) {
@@ -1083,7 +1048,7 @@ function registerCommands(programInstance) {
console.error(
chalk.red(`Error: Tasks file not found at path: ${tasksPath}`)
);
if (tasksPath === TASKMASTER_TASKS_FILE) {
if (tasksPath === 'tasks/tasks.json') {
console.log(
chalk.yellow(
'Hint: Run task-master init or task-master parse-prd to create tasks.json first'
@@ -1174,14 +1139,10 @@ function registerCommands(programInstance) {
programInstance
.command('generate')
.description('Generate task files from tasks.json')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-o, --output <dir>', 'Output directory', 'tasks')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const outputDir = options.output;
console.log(chalk.blue(`Generating task files from: ${tasksPath}`));
@@ -1204,13 +1165,9 @@ function registerCommands(programInstance) {
'-s, --status <status>',
`New status (one of: ${TASK_STATUS_OPTIONS.join(', ')})`
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskId = options.id;
const status = options.status;
@@ -1240,20 +1197,16 @@ function registerCommands(programInstance) {
programInstance
.command('list')
.description('List all tasks')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --report <report>',
'Path to the complexity report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.option('-s, --status <status>', 'Filter by status')
.option('--with-subtasks', 'Show subtasks for each task')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const reportPath = options.report;
const statusFilter = options.status;
const withSubtasks = options.withSubtasks || false;
@@ -1292,7 +1245,7 @@ function registerCommands(programInstance) {
.option(
'--file <file>',
'Path to the tasks file (relative to project root)',
TASKMASTER_TASKS_FILE // Allow file override
'tasks/tasks.json'
) // Allow file override
.action(async (options) => {
const projectRoot = findProjectRoot();
@@ -1367,7 +1320,7 @@ function registerCommands(programInstance) {
.option(
'-o, --output <file>',
'Output file path for the report',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.option(
'-m, --model <model>',
@@ -1378,11 +1331,7 @@ function registerCommands(programInstance) {
'Minimum complexity score to recommend expansion (1-10)',
'5'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --research',
'Use Perplexity AI for research-backed complexity analysis'
@@ -1394,7 +1343,7 @@ function registerCommands(programInstance) {
.option('--from <id>', 'Starting task ID in a range to analyze')
.option('--to <id>', 'Ending task ID in a range to analyze')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file || 'tasks/tasks.json';
const outputPath = options.output;
const modelOverride = options.model;
const thresholdScore = parseFloat(options.threshold);
@@ -1428,18 +1377,14 @@ function registerCommands(programInstance) {
programInstance
.command('clear-subtasks')
.description('Clear subtasks from specified tasks')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-i, --id <ids>',
'Task IDs (comma-separated) to clear subtasks from'
)
.option('--all', 'Clear subtasks from all tasks')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskIds = options.id;
const all = options.all;
@@ -1470,11 +1415,7 @@ function registerCommands(programInstance) {
programInstance
.command('add-task')
.description('Add a new task using AI, optionally providing manual details')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-p, --prompt <prompt>',
'Description of the task to add (required if not using manual fields)'
@@ -1514,14 +1455,10 @@ function registerCommands(programInstance) {
process.exit(1);
}
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
if (!fs.existsSync(tasksPath)) {
console.error(
`❌ No tasks.json file found. Please run "task-master init" or create a tasks.json file at ${TASKMASTER_TASKS_FILE}`
);
process.exit(1);
}
const tasksPath =
options.file ||
path.join(findProjectRoot() || '.', 'tasks', 'tasks.json') || // Ensure tasksPath is also relative to a found root or current dir
'tasks/tasks.json';
// Correctly determine projectRoot
const projectRoot = findProjectRoot();
@@ -1593,20 +1530,15 @@ function registerCommands(programInstance) {
.description(
`Show the next task to work on based on dependencies and status${chalk.reset('')}`
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --report <report>',
'Path to the complexity report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const reportPath = options.report;
await displayNextTask(tasksPath, reportPath);
});
@@ -1619,15 +1551,11 @@ function registerCommands(programInstance) {
.argument('[id]', 'Task ID to show')
.option('-i, --id <id>', 'Task ID to show')
.option('-s, --status <status>', 'Filter subtasks by status') // ADDED status option
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --report <report>',
'Path to the complexity report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.action(async (taskId, options) => {
const idArg = taskId || options.id;
@@ -1638,7 +1566,7 @@ function registerCommands(programInstance) {
process.exit(1);
}
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const reportPath = options.report;
// PASS statusFilter to the display function
await displayTaskById(tasksPath, idArg, reportPath, statusFilter);
@@ -1650,13 +1578,9 @@ function registerCommands(programInstance) {
.description('Add a dependency to a task')
.option('-i, --id <id>', 'Task ID to add dependency to')
.option('-d, --depends-on <id>', 'Task ID that will become a dependency')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskId = options.id;
const dependencyId = options.dependsOn;
@@ -1685,13 +1609,9 @@ function registerCommands(programInstance) {
.description('Remove a dependency from a task')
.option('-i, --id <id>', 'Task ID to remove dependency from')
.option('-d, --depends-on <id>', 'Task ID to remove as a dependency')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskId = options.id;
const dependencyId = options.dependsOn;
@@ -1720,26 +1640,18 @@ function registerCommands(programInstance) {
.description(
`Identify invalid dependencies without fixing them${chalk.reset('')}`
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
await validateDependenciesCommand(options.file || TASKMASTER_TASKS_FILE);
await validateDependenciesCommand(options.file);
});
// fix-dependencies command
programInstance
.command('fix-dependencies')
.description(`Fix invalid dependencies automatically${chalk.reset('')}`)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
await fixDependenciesCommand(options.file || TASKMASTER_TASKS_FILE);
await fixDependenciesCommand(options.file);
});
// complexity-report command
@@ -1749,21 +1661,17 @@ function registerCommands(programInstance) {
.option(
'-f, --file <file>',
'Path to the report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.action(async (options) => {
await displayComplexityReport(options.file || COMPLEXITY_REPORT_FILE);
await displayComplexityReport(options.file);
});
// add-subtask command
programInstance
.command('add-subtask')
.description('Add a subtask to an existing task')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-p, --parent <id>', 'Parent task ID (required)')
.option('-i, --task-id <id>', 'Existing task ID to convert to subtask')
.option(
@@ -1779,7 +1687,7 @@ function registerCommands(programInstance) {
.option('-s, --status <status>', 'Status for the new subtask', 'pending')
.option('--skip-generate', 'Skip regenerating task files')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const parentId = options.parent;
const existingTaskId = options.taskId;
const generateFiles = !options.skipGenerate;
@@ -1924,7 +1832,26 @@ function registerCommands(programInstance) {
function showAddSubtaskHelp() {
console.log(
boxen(
`${chalk.white.bold('Add Subtask Command Help')}\n\n${chalk.cyan('Usage:')}\n task-master add-subtask --parent=<id> [options]\n\n${chalk.cyan('Options:')}\n -p, --parent <id> Parent task ID (required)\n -i, --task-id <id> Existing task ID to convert to subtask\n -t, --title <title> Title for the new subtask\n -d, --description <text> Description for the new subtask\n --details <text> Implementation details for the new subtask\n --dependencies <ids> Comma-separated list of dependency IDs\n -s, --status <status> Status for the new subtask (default: "pending")\n -f, --file <file> Path to the tasks file (default: "${TASKMASTER_TASKS_FILE}")\n --skip-generate Skip regenerating task files\n\n${chalk.cyan('Examples:')}\n task-master add-subtask --parent=5 --task-id=8\n task-master add-subtask -p 5 -t "Implement login UI" -d "Create the login form"`,
chalk.white.bold('Add Subtask Command Help') +
'\n\n' +
chalk.cyan('Usage:') +
'\n' +
` task-master add-subtask --parent=<id> [options]\n\n` +
chalk.cyan('Options:') +
'\n' +
' -p, --parent <id> Parent task ID (required)\n' +
' -i, --task-id <id> Existing task ID to convert to subtask\n' +
' -t, --title <title> Title for the new subtask\n' +
' -d, --description <text> Description for the new subtask\n' +
' --details <text> Implementation details for the new subtask\n' +
' --dependencies <ids> Comma-separated list of dependency IDs\n' +
' -s, --status <status> Status for the new subtask (default: "pending")\n' +
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
' --skip-generate Skip regenerating task files\n\n' +
chalk.cyan('Examples:') +
'\n' +
' task-master add-subtask --parent=5 --task-id=8\n' +
' task-master add-subtask -p 5 -t "Implement login UI" -d "Create the login form"',
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
)
);
@@ -1934,11 +1861,7 @@ function registerCommands(programInstance) {
programInstance
.command('remove-subtask')
.description('Remove a subtask from its parent task')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-i, --id <id>',
'Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated for multiple subtasks)'
@@ -1949,7 +1872,7 @@ function registerCommands(programInstance) {
)
.option('--skip-generate', 'Skip regenerating task files')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const subtaskIds = options.id;
const convertToTask = options.convert || false;
const generateFiles = !options.skipGenerate;
@@ -2069,9 +1992,7 @@ function registerCommands(programInstance) {
'\n' +
' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' +
' -f, --file <file> Path to the tasks file (default: "' +
TASKMASTER_TASKS_FILE +
'")\n' +
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
' --skip-generate Skip regenerating task files\n\n' +
chalk.cyan('Examples:') +
'\n' +
@@ -2092,14 +2013,10 @@ function registerCommands(programInstance) {
'-i, --id <ids>',
'ID(s) of the task(s) or subtask(s) to remove (e.g., "5", "5.2", or "5,6.1,7")'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-y, --yes', 'Skip confirmation prompt', false)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskIdsString = options.id;
if (!taskIdsString) {
@@ -2376,10 +2293,6 @@ function registerCommands(programInstance) {
'--ollama',
'Allow setting a custom Ollama model ID (use with --set-*) '
)
.option(
'--bedrock',
'Allow setting a custom Bedrock model ID (use with --set-*) '
)
.addHelpText(
'after',
`
@@ -2389,7 +2302,6 @@ Examples:
$ task-master models --set-research sonar-pro # Set research model
$ task-master models --set-fallback claude-3-5-sonnet-20241022 # Set fallback
$ task-master models --set-main my-custom-model --ollama # Set custom Ollama model for main role
$ task-master models --set-main anthropic.claude-3-sonnet-20240229-v1:0 --bedrock # Set custom Bedrock model for main role
$ task-master models --set-main some/other-model --openrouter # Set custom OpenRouter model for main role
$ task-master models --setup # Run interactive setup`
)
@@ -2399,16 +2311,11 @@ Examples:
console.error(chalk.red('Error: Could not find project root.'));
process.exit(1);
}
// Validate flags: cannot use multiple provider flags simultaneously
const providerFlags = [
options.openrouter,
options.ollama,
options.bedrock
].filter(Boolean).length;
if (providerFlags > 1) {
// Validate flags: cannot use both --openrouter and --ollama simultaneously
if (options.openrouter && options.ollama) {
console.error(
chalk.red(
'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock) simultaneously.'
'Error: Cannot use both --openrouter and --ollama flags simultaneously.'
)
);
process.exit(1);
@@ -2448,9 +2355,7 @@ Examples:
? 'openrouter'
: options.ollama
? 'ollama'
: options.bedrock
? 'bedrock'
: undefined
: undefined
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
@@ -2470,9 +2375,7 @@ Examples:
? 'openrouter'
: options.ollama
? 'ollama'
: options.bedrock
? 'bedrock'
: undefined
: undefined
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
@@ -2494,9 +2397,7 @@ Examples:
? 'openrouter'
: options.ollama
? 'ollama'
: options.bedrock
? 'bedrock'
: undefined
: undefined
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
@@ -2596,11 +2497,7 @@ Examples:
programInstance
.command('move')
.description('Move a task or subtask to a new position')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'--from <id>',
'ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated to move multiple tasks (e.g., "5,6,7")'
@@ -2610,7 +2507,7 @@ Examples:
'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const sourceId = options.from;
const destinationId = options.to;
@@ -2725,87 +2622,6 @@ Examples:
}
});
programInstance
.command('migrate')
.description(
'Migrate existing project to use the new .taskmaster directory structure'
)
.option(
'-f, --force',
'Force migration even if .taskmaster directory already exists'
)
.option(
'--backup',
'Create backup of old files before migration (default: false)',
false
)
.option(
'--cleanup',
'Remove old files after successful migration (default: true)',
true
)
.option('-y, --yes', 'Skip confirmation prompts')
.option(
'--dry-run',
'Show what would be migrated without actually moving files'
)
.action(async (options) => {
try {
await migrateProject(options);
} catch (error) {
console.error(chalk.red('Error during migration:'), error.message);
process.exit(1);
}
});
// sync-readme command
programInstance
.command('sync-readme')
.description('Sync the current task list to README.md in the project root')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('--with-subtasks', 'Include subtasks in the README output')
.option(
'-s, --status <status>',
'Show only tasks matching this status (e.g., pending, done)'
)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const withSubtasks = options.withSubtasks || false;
const status = options.status || null;
// Find project root
const projectRoot = findProjectRoot();
if (!projectRoot) {
console.error(
chalk.red(
'Error: Could not find project root. Make sure you are in a Task Master project directory.'
)
);
process.exit(1);
}
console.log(
chalk.blue(
`📝 Syncing tasks to README.md${withSubtasks ? ' (with subtasks)' : ''}${status ? ` (status: ${status})` : ''}...`
)
);
const success = await syncTasksToReadme(projectRoot, {
withSubtasks,
status,
tasksPath
});
if (!success) {
console.error(chalk.red('❌ Failed to sync tasks to README.md'));
process.exit(1);
}
});
return programInstance;
}
@@ -2990,7 +2806,7 @@ async function runCLI(argv = process.argv) {
// Setup and parse
// NOTE: getConfig() might be called during setupCLI->registerCommands if commands need config
// This means the ConfigurationError might be thrown here if configuration file is missing.
// This means the ConfigurationError might be thrown here if .taskmasterconfig is missing.
const programInstance = setupCLI();
await programInstance.parseAsync(argv);
@@ -3009,10 +2825,10 @@ async function runCLI(argv = process.argv) {
boxen(
chalk.red.bold('Configuration Update Required!') +
'\n\n' +
chalk.white('Taskmaster now uses a ') +
chalk.yellow.bold('configuration file') +
chalk.white('Taskmaster now uses the ') +
chalk.yellow.bold('.taskmasterconfig') +
chalk.white(
' in your project for AI model choices and settings.\n\n' +
' file in your project root for AI model choices and settings.\n\n' +
'This file appears to be '
) +
chalk.red.bold('missing') +
@@ -3024,7 +2840,7 @@ async function runCLI(argv = process.argv) {
chalk.white.bold('Key Points:') +
'\n' +
chalk.white('* ') +
chalk.yellow.bold('Configuration file') +
chalk.yellow.bold('.taskmasterconfig') +
chalk.white(
': Stores your AI model settings (do not manually edit)\n'
) +
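
Side note: many of the hunks above rewrite the same Commander.js idiom, registering a command whose `--file` option carries a default tasks path so `options.file` is always populated. A hedged sketch of that idiom follows; the command name and handler are made up for illustration.

import { Command } from 'commander';

const program = new Command();

program
	.command('example-list') // hypothetical command name
	.description('List tasks from a tasks file')
	.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
	.action(async (options) => {
		// Because a default is supplied above, options.file is never undefined.
		console.log(`Reading tasks from: ${options.file}`);
	});

program.parseAsync(process.argv);
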

View File

@@ -3,8 +3,6 @@ import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
import { LEGACY_CONFIG_FILE } from '../../src/constants/paths.js';
import { findConfigPath } from '../../src/utils/path-utils.js';
// Calculate __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
@@ -29,10 +27,12 @@ try {
process.exit(1); // Exit if models can't be loaded
}
const CONFIG_FILE_NAME = '.taskmasterconfig';
// Define valid providers dynamically from the loaded MODEL_MAP
const VALID_PROVIDERS = Object.keys(MODEL_MAP || {});
// Default configuration values (used if config file is missing or incomplete)
// Default configuration values (used if .taskmasterconfig is missing or incomplete)
const DEFAULTS = {
models: {
main: {
@@ -61,8 +61,7 @@ const DEFAULTS = {
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseURL: 'http://localhost:11434/api',
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
ollamaBaseUrl: 'http://localhost:11434/api'
}
};
@@ -97,15 +96,13 @@ function _loadAndValidateConfig(explicitRoot = null) {
}
// ---> End find project root logic <---
// --- Find configuration file using centralized path utility ---
const configPath = findConfigPath(null, { projectRoot: rootToUse });
// --- Proceed with loading from the determined rootToUse ---
const configPath = path.join(rootToUse, CONFIG_FILE_NAME);
let config = { ...defaults }; // Start with a deep copy of defaults
let configExists = false;
if (configPath) {
if (fs.existsSync(configPath)) {
configExists = true;
const isLegacy = configPath.endsWith(LEGACY_CONFIG_FILE);
try {
const rawData = fs.readFileSync(configPath, 'utf-8');
const parsedConfig = JSON.parse(rawData);
@@ -128,15 +125,6 @@ function _loadAndValidateConfig(explicitRoot = null) {
};
configSource = `file (${configPath})`; // Update source info
// Issue deprecation warning if using legacy config file
if (isLegacy) {
console.warn(
chalk.yellow(
`⚠️ DEPRECATION WARNING: Found configuration in legacy location '${configPath}'. Please migrate to .taskmaster/config.json. Run 'task-master migrate' to automatically migrate your project.`
)
);
}
// --- Validation (Warn if file content is invalid) ---
// Use log.warn for consistency
if (!validateProvider(config.models.main.provider)) {
@@ -183,19 +171,19 @@ function _loadAndValidateConfig(explicitRoot = null) {
// Only warn if an explicit root was *expected*.
console.warn(
chalk.yellow(
`Warning: Configuration file not found at provided project root (${explicitRoot}). Using default configuration. Run 'task-master models --setup' to configure.`
`Warning: ${CONFIG_FILE_NAME} not found at provided project root (${explicitRoot}). Using default configuration. Run 'task-master models --setup' to configure.`
)
);
} else {
console.warn(
chalk.yellow(
`Warning: Configuration file not found at derived root (${rootToUse}). Using defaults.`
`Warning: ${CONFIG_FILE_NAME} not found at derived root (${rootToUse}). Using defaults.`
)
);
}
// Keep config as defaults
config = { ...defaults };
configSource = `defaults (no config file found at ${rootToUse})`;
configSource = `defaults (file not found at ${configPath})`;
}
return config;
@@ -354,13 +342,13 @@ function getDefaultSubtasks(explicitRoot = null) {
// Directly return value from config, ensure integer
const val = getGlobalConfig(explicitRoot).defaultSubtasks;
const parsedVal = parseInt(val, 10);
return Number.isNaN(parsedVal) ? DEFAULTS.global.defaultSubtasks : parsedVal;
return isNaN(parsedVal) ? DEFAULTS.global.defaultSubtasks : parsedVal;
}
function getDefaultNumTasks(explicitRoot = null) {
const val = getGlobalConfig(explicitRoot).defaultNumTasks;
const parsedVal = parseInt(val, 10);
return Number.isNaN(parsedVal) ? DEFAULTS.global.defaultNumTasks : parsedVal;
return isNaN(parsedVal) ? DEFAULTS.global.defaultNumTasks : parsedVal;
}
function getDefaultPriority(explicitRoot = null) {
@@ -373,39 +361,9 @@ function getProjectName(explicitRoot = null) {
return getGlobalConfig(explicitRoot).projectName;
}
function getOllamaBaseURL(explicitRoot = null) {
function getOllamaBaseUrl(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).ollamaBaseURL;
}
function getAzureBaseURL(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).azureBaseURL;
}
function getBedrockBaseURL(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).bedrockBaseURL;
}
/**
* Gets the Google Cloud project ID for Vertex AI from configuration
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {string|null} The project ID or null if not configured
*/
function getVertexProjectId(explicitRoot = null) {
// Return value from config
return getGlobalConfig(explicitRoot).vertexProjectId;
}
/**
* Gets the Google Cloud location for Vertex AI from configuration
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {string} The location or default value of "us-central1"
*/
function getVertexLocation(explicitRoot = null) {
// Return value from config or default
return getGlobalConfig(explicitRoot).vertexLocation || 'us-central1';
return getGlobalConfig(explicitRoot).ollamaBaseUrl;
}
/**
@@ -492,8 +450,7 @@ function isApiKeySet(providerName, session = null, projectRoot = null) {
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY',
vertex: 'GOOGLE_API_KEY' // Vertex uses the same key as Google
xai: 'XAI_API_KEY'
// Add other providers as needed
};
@@ -585,10 +542,6 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
apiKeyToCheck = mcpEnv.AZURE_OPENAI_API_KEY;
placeholderValue = 'YOUR_AZURE_OPENAI_API_KEY_HERE';
break;
case 'vertex':
apiKeyToCheck = mcpEnv.GOOGLE_API_KEY; // Vertex uses Google API key
placeholderValue = 'YOUR_GOOGLE_API_KEY_HERE';
break;
default:
return false; // Unknown provider
}
@@ -675,16 +628,12 @@ function writeConfig(config, explicitRoot = null) {
}
// ---> End determine root path logic <---
// Use new config location: .taskmaster/config.json
const taskmasterDir = path.join(rootPath, '.taskmaster');
const configPath = path.join(taskmasterDir, 'config.json');
const configPath =
path.basename(rootPath) === CONFIG_FILE_NAME
? rootPath
: path.join(rootPath, CONFIG_FILE_NAME);
try {
// Ensure .taskmaster directory exists
if (!fs.existsSync(taskmasterDir)) {
fs.mkdirSync(taskmasterDir, { recursive: true });
}
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
loadedConfig = config; // Update the cache after successful write
return true;
@@ -699,12 +648,25 @@ function writeConfig(config, explicitRoot = null) {
}
/**
* Checks if a configuration file exists at the project root (new or legacy location)
* Checks if the .taskmasterconfig file exists at the project root
* @param {string|null} explicitRoot - Optional explicit path to the project root
* @returns {boolean} True if the file exists, false otherwise
*/
function isConfigFilePresent(explicitRoot = null) {
return findConfigPath(null, { projectRoot: explicitRoot }) !== null;
// ---> Determine root path reliably <---
let rootPath = explicitRoot;
if (explicitRoot === null || explicitRoot === undefined) {
// Logic matching _loadAndValidateConfig
const foundRoot = findProjectRoot(); // *** Explicitly call findProjectRoot ***
if (!foundRoot) {
return false; // Cannot check if root doesn't exist
}
rootPath = foundRoot;
}
// ---> End determine root path logic <---
const configPath = path.join(rootPath, CONFIG_FILE_NAME);
return fs.existsSync(configPath);
}
/**
@@ -745,8 +707,8 @@ function getAllProviders() {
function getBaseUrlForRole(role, explicitRoot = null) {
const roleConfig = getModelConfigForRole(role, explicitRoot);
return roleConfig && typeof roleConfig.baseURL === 'string'
? roleConfig.baseURL
return roleConfig && typeof roleConfig.baseUrl === 'string'
? roleConfig.baseUrl
: undefined;
}
@@ -756,12 +718,14 @@ export {
writeConfig,
ConfigurationError,
isConfigFilePresent,
// Validation
validateProvider,
validateProviderModelCombination,
VALID_PROVIDERS,
MODEL_MAP,
getAvailableModels,
// Role-specific getters (No env var overrides)
getMainProvider,
getMainModelId,
@@ -776,6 +740,7 @@ export {
getFallbackMaxTokens,
getFallbackTemperature,
getBaseUrlForRole,
// Global setting getters (No env var overrides)
getLogLevel,
getDebugFlag,
@@ -783,16 +748,13 @@ export {
getDefaultSubtasks,
getDefaultPriority,
getProjectName,
getOllamaBaseURL,
getAzureBaseURL,
getBedrockBaseURL,
getOllamaBaseUrl,
getParametersForRole,
getUserId,
// API Key Checkers (still relevant)
isApiKeySet,
getMcpApiKeyStatus,
// ADD: Function to get all provider names
getAllProviders,
getVertexProjectId,
getVertexLocation
getAllProviders
};
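
Side note: a minimal sketch, under stated assumptions, of the config-loading behaviour shown above: look for `.taskmasterconfig` at a given root, merge it over defaults, and fall back to defaults with a warning when the file is absent. The `loadConfig` helper is hypothetical.

import fs from 'fs';
import path from 'path';

const CONFIG_FILE_NAME = '.taskmasterconfig';
const DEFAULTS = { global: { projectName: 'Task Master', defaultSubtasks: 5 } };

// Hypothetical helper mirroring the merge-over-defaults approach above.
function loadConfig(rootDir) {
	const configPath = path.join(rootDir, CONFIG_FILE_NAME);
	if (!fs.existsSync(configPath)) {
		console.warn(`Warning: ${CONFIG_FILE_NAME} not found at ${rootDir}. Using defaults.`);
		return structuredClone(DEFAULTS);
	}
	const parsed = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
	// Shallow-merge per section so missing keys keep their default values.
	return {
		...DEFAULTS,
		...parsed,
		global: { ...DEFAULTS.global, ...parsed.global }
	};
}

// const config = loadConfig(process.cwd());
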

View File

@@ -563,6 +563,11 @@ function cleanupSubtaskDependencies(tasksData) {
* @param {string} tasksPath - Path to tasks.json
*/
async function validateDependenciesCommand(tasksPath, options = {}) {
// Only display banner if not in silent mode
if (!isSilentMode()) {
displayBanner();
}
log('info', 'Checking for invalid dependencies in task files...');
// Read tasks data
@@ -686,6 +691,11 @@ function countAllDependencies(tasks) {
* @param {Object} options - Options object
*/
async function fixDependenciesCommand(tasksPath, options = {}) {
// Only display banner if not in silent mode
if (!isSilentMode()) {
displayBanner();
}
log('info', 'Checking for and fixing invalid dependencies in tasks.json...');
try {

View File

@@ -5,14 +5,14 @@
"swe_score": 0.727,
"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 64000
"max_tokens": 120000
},
{
"id": "claude-opus-4-20250514",
"swe_score": 0.725,
"cost_per_1m_tokens": { "input": 15.0, "output": 75.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 32000
"max_tokens": 120000
},
{
"id": "claude-3-7-sonnet-20250219",
@@ -153,7 +153,7 @@
"id": "sonar-pro",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "research"],
"allowed_roles": ["research"],
"max_tokens": 8700
},
{
@@ -174,14 +174,14 @@
"id": "sonar-reasoning-pro",
"swe_score": 0.211,
"cost_per_1m_tokens": { "input": 2, "output": 8 },
"allowed_roles": ["main", "research", "fallback"],
"allowed_roles": ["main", "fallback"],
"max_tokens": 8700
},
{
"id": "sonar-reasoning",
"swe_score": 0.211,
"cost_per_1m_tokens": { "input": 1, "output": 5 },
"allowed_roles": ["main", "research", "fallback"],
"allowed_roles": ["main", "fallback"],
"max_tokens": 8700
}
],
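
Side note: a small sketch of how entries shaped like the JSON above can be queried, for example listing the models a provider allows for a given role. The `MODEL_MAP` literal here is a trimmed, hypothetical excerpt.

// Trimmed, hypothetical excerpt of the supported-models data.
const MODEL_MAP = {
	perplexity: [
		{ id: 'sonar-pro', allowed_roles: ['research'], cost_per_1m_tokens: { input: 3, output: 15 } },
		{ id: 'sonar-reasoning', allowed_roles: ['main', 'fallback'], cost_per_1m_tokens: { input: 1, output: 5 } }
	]
};

function modelsForRole(provider, role) {
	return (MODEL_MAP[provider] ?? []).filter((m) => m.allowed_roles.includes(role));
}

console.log(modelsForRole('perplexity', 'research').map((m) => m.id)); // ['sonar-pro']
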

View File

@@ -1,184 +0,0 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { log, findProjectRoot } from './utils.js';
import { getProjectName } from './config-manager.js';
import listTasks from './task-manager/list-tasks.js';
/**
* Creates a basic README structure if one doesn't exist
* @param {string} projectName - Name of the project
* @returns {string} - Basic README content
*/
function createBasicReadme(projectName) {
return `# ${projectName}
This project is managed using Task Master.
`;
}
/**
* Create UTM tracking URL for task-master.dev
* @param {string} projectRoot - The project root path
* @returns {string} - UTM tracked URL
*/
function createTaskMasterUrl(projectRoot) {
// Get the actual folder name from the project root path
const folderName = path.basename(projectRoot);
// Clean folder name for UTM (replace spaces/special chars with hyphens)
const cleanFolderName = folderName
.toLowerCase()
.replace(/[^a-z0-9]/g, '-')
.replace(/-+/g, '-')
.replace(/^-|-$/g, '');
const utmParams = new URLSearchParams({
utm_source: 'github-readme',
utm_medium: 'readme-export',
utm_campaign: cleanFolderName || 'task-sync',
utm_content: 'task-export-link'
});
return `https://task-master.dev?${utmParams.toString()}`;
}
/**
* Create the start marker with metadata
* @param {Object} options - Export options
* @returns {string} - Formatted start marker
*/
function createStartMarker(options) {
const { timestamp, withSubtasks, status, projectRoot } = options;
// Format status filter text
const statusText = status
? `Status filter: ${status}`
: 'Status filter: none';
const subtasksText = withSubtasks ? 'with subtasks' : 'without subtasks';
// Create the export info content
const exportInfo =
`🎯 **Taskmaster Export** - ${timestamp}\n` +
`📋 Export: ${subtasksText}${statusText}\n` +
`🔗 Powered by [Task Master](${createTaskMasterUrl(projectRoot)})`;
// Create a markdown box using code blocks and emojis to mimic our UI style
const boxContent =
`<!-- TASKMASTER_EXPORT_START -->\n` +
`> ${exportInfo.split('\n').join('\n> ')}\n\n`;
return boxContent;
}
/**
* Create the end marker
* @returns {string} - Formatted end marker
*/
function createEndMarker() {
return (
`\n> 📋 **End of Taskmaster Export** - Tasks are synced from your project using the \`sync-readme\` command.\n` +
`<!-- TASKMASTER_EXPORT_END -->\n`
);
}
/**
* Syncs the current task list to README.md at the project root
* @param {string} projectRoot - Path to the project root directory
* @param {Object} options - Options for syncing
* @param {boolean} options.withSubtasks - Include subtasks in the output (default: false)
* @param {string} options.status - Filter by status (e.g., 'pending', 'done')
* @param {string} options.tasksPath - Custom path to tasks.json
* @returns {boolean} - True if sync was successful, false otherwise
*/
export async function syncTasksToReadme(projectRoot = null, options = {}) {
try {
const actualProjectRoot = projectRoot || findProjectRoot() || '.';
const { withSubtasks = false, status, tasksPath } = options;
// Get current tasks using the list-tasks functionality with markdown-readme format
const tasksOutput = await listTasks(
tasksPath ||
path.join(actualProjectRoot, '.taskmaster', 'tasks', 'tasks.json'),
status,
null,
withSubtasks,
'markdown-readme'
);
if (!tasksOutput) {
console.log(chalk.red('❌ Failed to generate task output'));
return false;
}
// Generate timestamp and metadata
const timestamp =
new Date().toISOString().replace('T', ' ').substring(0, 19) + ' UTC';
const projectName = getProjectName(actualProjectRoot);
// Create the export markers with metadata
const startMarker = createStartMarker({
timestamp,
withSubtasks,
status,
projectRoot: actualProjectRoot
});
const endMarker = createEndMarker();
// Create the complete task section
const taskSection = startMarker + tasksOutput + endMarker;
// Read current README content
const readmePath = path.join(actualProjectRoot, 'README.md');
let readmeContent = '';
try {
readmeContent = fs.readFileSync(readmePath, 'utf8');
} catch (err) {
if (err.code === 'ENOENT') {
// Create basic README if it doesn't exist
readmeContent = createBasicReadme(projectName);
} else {
throw err;
}
}
// Check if export markers exist and replace content between them
const startComment = '<!-- TASKMASTER_EXPORT_START -->';
const endComment = '<!-- TASKMASTER_EXPORT_END -->';
let updatedContent;
const startIndex = readmeContent.indexOf(startComment);
const endIndex = readmeContent.indexOf(endComment);
if (startIndex !== -1 && endIndex !== -1) {
// Replace existing task section
const beforeTasks = readmeContent.substring(0, startIndex);
const afterTasks = readmeContent.substring(endIndex + endComment.length);
updatedContent = beforeTasks + taskSection + afterTasks;
} else {
// Append to end of README
updatedContent = readmeContent + '\n' + taskSection;
}
// Write updated content to README
fs.writeFileSync(readmePath, updatedContent, 'utf8');
console.log(chalk.green('✅ Successfully synced tasks to README.md'));
console.log(
chalk.cyan(
`📋 Export details: ${withSubtasks ? 'with' : 'without'} subtasks${status ? `, status: ${status}` : ''}`
)
);
console.log(chalk.gray(`📍 Location: ${readmePath}`));
return true;
} catch (error) {
console.log(chalk.red('❌ Failed to sync tasks to README:'), error.message);
log('error', `README sync error: ${error.message}`);
return false;
}
}
export default syncTasksToReadme;
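
Side note: the removed module above replaces everything between two HTML comment markers in README.md. A hedged sketch of that marker-replacement core, isolated from the rest of the module:

// Replace (or append) a marker-delimited block inside existing README text.
// taskSection is expected to already contain both marker comments.
function upsertExportBlock(readmeContent, taskSection) {
	const startComment = '<!-- TASKMASTER_EXPORT_START -->';
	const endComment = '<!-- TASKMASTER_EXPORT_END -->';
	const startIndex = readmeContent.indexOf(startComment);
	const endIndex = readmeContent.indexOf(endComment);
	if (startIndex !== -1 && endIndex !== -1) {
		// Swap out the previously exported section in place.
		return (
			readmeContent.slice(0, startIndex) +
			taskSection +
			readmeContent.slice(endIndex + endComment.length)
		);
	}
	// First export: append to the end of the README.
	return readmeContent + '\n' + taskSection;
}
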

View File

@@ -24,7 +24,6 @@ import removeTask from './task-manager/remove-task.js';
import taskExists from './task-manager/task-exists.js';
import isTaskDependentOn from './task-manager/is-task-dependent.js';
import moveTask from './task-manager/move-task.js';
import { migrateProject } from './task-manager/migrate.js';
import { readComplexityReport } from './utils.js';
// Export task manager functions
export {
@@ -49,6 +48,5 @@ export {
taskExists,
isTaskDependentOn,
moveTask,
readComplexityReport,
migrateProject
readComplexityReport
};

View File

@@ -10,8 +10,6 @@ import {
getStatusWithColor,
startLoadingIndicator,
stopLoadingIndicator,
succeedLoadingIndicator,
failLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';
import { readJSON, writeJSON, log as consoleLog, truncate } from '../utils.js';
@@ -281,7 +279,7 @@ async function addTask(
// CLI-only feedback for the dependency analysis
if (outputFormat === 'text') {
console.log(
boxen(chalk.cyan.bold('Task Context Analysis'), {
boxen(chalk.cyan.bold('Task Context Analysis') + '\n', {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
margin: { top: 0, bottom: 0 },
borderColor: 'cyan',
@@ -494,9 +492,9 @@ async function addTask(
includeScore: true, // Return match scores
threshold: 0.4, // Lower threshold = stricter matching (range 0-1)
keys: [
{ name: 'title', weight: 1.5 }, // Title is weighted below description and details
{ name: 'description', weight: 2 }, // Description is very important
{ name: 'details', weight: 3 }, // Details is most important
{ name: 'title', weight: 2 }, // Title is most important
{ name: 'description', weight: 1.5 }, // Description is next
{ name: 'details', weight: 0.8 }, // Details is less important
// Search dependencies to find tasks that depend on similar things
{ name: 'dependencyTitles', weight: 0.5 }
],
@@ -504,8 +502,8 @@ async function addTask(
shouldSort: true,
// Allow searching in nested properties
useExtendedSearch: true,
// Return up to 50 matches
limit: 50
// Return up to 15 matches
limit: 15
};
// Prepare task data with dependencies expanded as titles for better semantic search
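For reference, the weighted fuzzy search configured above boils down to a small amount of Fuse.js setup; this sketch uses the title/description/details weights from one side of the hunk and illustrative task data:

```js
import Fuse from 'fuse.js';

// Illustrative task objects; real data comes from tasks.json.
const tasks = [
	{ id: 1, title: 'Add CLI flag parsing', description: 'Support a --status flag', details: '...' },
	{ id: 2, title: 'Detect dependency cycles', description: 'Validate the task graph', details: '...' }
];

const fuse = new Fuse(tasks, {
	includeScore: true,
	threshold: 0.4, // lower threshold = stricter matching
	keys: [
		{ name: 'title', weight: 2 },
		{ name: 'description', weight: 1.5 },
		{ name: 'details', weight: 0.8 }
	]
});

// Lower score = better match; results come back sorted by relevance.
const matches = fuse.search('cli flag').slice(0, 15);
console.log(matches.map((m) => ({ id: m.item.id, score: m.score })));
```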
@@ -598,6 +596,32 @@ async function addTask(
// Get top N results for context
const relatedTasks = allRelevantTasks.slice(0, 8);
// Also look for tasks with similar purposes or categories
const purposeCategories = [
{ pattern: /(command|cli|flag)/i, label: 'CLI commands' },
{ pattern: /(task|subtask|add)/i, label: 'Task management' },
{ pattern: /(dependency|depend)/i, label: 'Dependency handling' },
{ pattern: /(AI|model|prompt)/i, label: 'AI integration' },
{ pattern: /(UI|display|show)/i, label: 'User interface' },
{ pattern: /(schedule|time|cron)/i, label: 'Scheduling' }, // Added scheduling category
{ pattern: /(config|setting|option)/i, label: 'Configuration' } // Added configuration category
];
promptCategory = purposeCategories.find((cat) =>
cat.pattern.test(prompt)
);
const categoryTasks = promptCategory
? data.tasks
.filter(
(t) =>
promptCategory.pattern.test(t.title) ||
promptCategory.pattern.test(t.description) ||
(t.details && promptCategory.pattern.test(t.details))
)
.filter((t) => !relatedTasks.some((rt) => rt.id === t.id))
.slice(0, 3)
: [];
// Format basic task overviews
if (relatedTasks.length > 0) {
contextTasks = `\nRelevant tasks identified by semantic similarity:\n${relatedTasks
@@ -608,6 +632,12 @@ async function addTask(
.join('\n')}`;
}
if (categoryTasks.length > 0) {
contextTasks += `\n\nTasks related to ${promptCategory.label}:\n${categoryTasks
.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
.join('\n')}`;
}
if (
recentTasks.length > 0 &&
!contextTasks.includes('Recently created tasks')
@@ -620,10 +650,13 @@ async function addTask(
}
// Add detailed information about the most relevant tasks
const allDetailedTasks = [...relatedTasks.slice(0, 25)];
const allDetailedTasks = [
...relatedTasks.slice(0, 5),
...categoryTasks.slice(0, 2)
];
uniqueDetailedTasks = Array.from(
new Map(allDetailedTasks.map((t) => [t.id, t])).values()
).slice(0, 20);
).slice(0, 8);
if (uniqueDetailedTasks.length > 0) {
contextTasks += `\n\nDetailed information about relevant tasks:`;
@@ -682,14 +715,18 @@ async function addTask(
}
// Additional analysis of common patterns
const similarPurposeTasks = data.tasks.filter((t) =>
prompt.toLowerCase().includes(t.title.toLowerCase())
);
const similarPurposeTasks = promptCategory
? data.tasks.filter(
(t) =>
promptCategory.pattern.test(t.title) ||
promptCategory.pattern.test(t.description)
)
: [];
let commonDeps = []; // Initialize commonDeps
if (similarPurposeTasks.length > 0) {
contextTasks += `\n\nCommon patterns for similar tasks:`;
contextTasks += `\n\nCommon patterns for ${promptCategory ? promptCategory.label : 'similar'} tasks:`;
// Collect dependencies from similar purpose tasks
const similarDeps = similarPurposeTasks
@@ -706,7 +743,7 @@ async function addTask(
// Get most common dependencies for similar tasks
commonDeps = Object.entries(depCounts)
.sort((a, b) => b[1] - a[1])
.slice(0, 10);
.slice(0, 5);
if (commonDeps.length > 0) {
contextTasks += '\nMost common dependencies for similar tasks:';
@@ -723,7 +760,7 @@ async function addTask(
if (outputFormat === 'text') {
console.log(
chalk.gray(
` Context search across ${data.tasks.length} tasks using full prompt and ${promptWords.length} keywords`
` Fuzzy search across ${data.tasks.length} tasks using full prompt and ${promptWords.length} keywords`
)
);
@@ -731,7 +768,7 @@ async function addTask(
console.log(
chalk.gray(`\n High relevance matches (score < 0.25):`)
);
highRelevance.slice(0, 25).forEach((t) => {
highRelevance.slice(0, 5).forEach((t) => {
console.log(
chalk.yellow(` • ⭐ Task ${t.id}: ${truncate(t.title, 50)}`)
);
@@ -742,13 +779,24 @@ async function addTask(
console.log(
chalk.gray(`\n Medium relevance matches (score < 0.4):`)
);
mediumRelevance.slice(0, 10).forEach((t) => {
mediumRelevance.slice(0, 3).forEach((t) => {
console.log(
chalk.green(` • Task ${t.id}: ${truncate(t.title, 50)}`)
);
});
}
if (promptCategory && categoryTasks.length > 0) {
console.log(
chalk.gray(`\n Tasks related to ${promptCategory.label}:`)
);
categoryTasks.forEach((t) => {
console.log(
chalk.magenta(` • Task ${t.id}: ${truncate(t.title, 50)}`)
);
});
}
// Show dependency patterns
if (commonDeps && commonDeps.length > 0) {
console.log(
@@ -816,7 +864,10 @@ async function addTask(
numericDependencies.length > 0
? dependentTasks.length // Use length of tasks from explicit dependency path
: uniqueDetailedTasks.length // Use length of tasks from fuzzy search path
)}`,
)}` +
(promptCategory
? `\n${chalk.cyan('Category detected: ')}${chalk.yellow(promptCategory.label)}`
: ''),
{
padding: { top: 0, bottom: 1, left: 1, right: 1 },
margin: { top: 1, bottom: 0 },
@@ -880,7 +931,7 @@ async function addTask(
// Start the loading indicator - only for text mode
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI... \n`
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI...\n`
);
}
@@ -925,33 +976,17 @@ async function addTask(
}
report('Successfully generated task data from AI.', 'success');
// Success! Show checkmark
if (loadingIndicator) {
succeedLoadingIndicator(
loadingIndicator,
'Task generated successfully'
);
loadingIndicator = null; // Clear it
}
} catch (error) {
// Failure! Show X
if (loadingIndicator) {
failLoadingIndicator(loadingIndicator, 'AI generation failed');
loadingIndicator = null;
}
report(
`DEBUG: generateObjectService caught error: ${error.message}`,
'debug'
);
report(`Error generating task with AI: ${error.message}`, 'error');
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
throw error; // Re-throw error after logging
} finally {
report('DEBUG: generateObjectService finally block reached.', 'debug');
// Clean up if somehow still running
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
}
if (loadingIndicator) stopLoadingIndicator(loadingIndicator); // Ensure indicator stops
}
// --- End Refactored AI Interaction ---
}
@@ -1022,7 +1057,7 @@ async function addTask(
truncate(newTask.description, 47)
]);
console.log(chalk.green(' New task created successfully:'));
console.log(chalk.green(' New task created successfully:'));
console.log(table.toString());
// Helper to get priority color

View File

@@ -14,10 +14,6 @@ import {
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag, getProjectName } from '../config-manager.js';
import {
COMPLEXITY_REPORT_FILE,
LEGACY_TASKS_FILE
} from '../../../src/constants/paths.js';
/**
* Generates the prompt for complexity analysis.
@@ -68,8 +64,8 @@ Do not include any explanatory text, markdown formatting, or code block markers
*/
async function analyzeTaskComplexity(options, context = {}) {
const { session, mcpLog } = context;
const tasksPath = options.file || LEGACY_TASKS_FILE;
const outputPath = options.output || COMPLEXITY_REPORT_FILE;
const tasksPath = options.file || 'tasks/tasks.json';
const outputPath = options.output || 'scripts/task-complexity-report.json';
const thresholdScore = parseFloat(options.threshold || '5');
const useResearch = options.research || false;
const projectRoot = options.projectRoot;
@@ -78,7 +74,7 @@ async function analyzeTaskComplexity(options, context = {}) {
? options.id
.split(',')
.map((id) => parseInt(id.trim(), 10))
.filter((id) => !Number.isNaN(id))
.filter((id) => !isNaN(id))
: null;
const fromId = options.from !== undefined ? parseInt(options.from, 10) : null;
const toId = options.to !== undefined ? parseInt(options.to, 10) : null;
@@ -96,7 +92,7 @@ async function analyzeTaskComplexity(options, context = {}) {
if (outputFormat === 'text') {
console.log(
chalk.blue(
'Analyzing task complexity and generating expansion recommendations...'
`Analyzing task complexity and generating expansion recommendations...`
)
);
}
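This file's diff also swaps between Number.isNaN and the global isNaN when filtering parsed task IDs. For parseInt output the two behave identically, since parseInt always returns a number; the difference only shows up when the argument still needs coercion:

```js
// Global isNaN coerces its argument; Number.isNaN only reports the actual NaN value.
isNaN('not a number'); // true  (the string coerces to NaN)
Number.isNaN('not a number'); // false (it is a string, not NaN)

// For parsed IDs the result is the same either way:
const ids = '1, 2, x'
	.split(',')
	.map((s) => parseInt(s.trim(), 10))
	.filter((id) => !Number.isNaN(id)); // [1, 2]
```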

View File

@@ -13,6 +13,8 @@ import generateTaskFiles from './generate-task-files.js';
* @param {string} taskIds - Task IDs to clear subtasks from
*/
function clearSubtasks(tasksPath, taskIds) {
displayBanner();
log('info', `Reading tasks from ${tasksPath}...`);
const data = readJSON(tasksPath);
if (!data || !data.tasks) {

View File

@@ -14,7 +14,6 @@ import { generateTextService } from '../ai-services-unified.js';
import { getDefaultSubtasks, getDebugFlag } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
// --- Zod Schemas (Keep from previous step) ---
const subtaskSchema = z
@@ -309,8 +308,7 @@ function parseSubtasksFromText(
logger.error(
`Advanced extraction: Problematic JSON string for parse (first 500 chars): ${jsonToParse.substring(0, 500)}`
);
throw new Error(
// Re-throw a more specific error if advanced also fails
throw new Error( // Re-throw a more specific error if advanced also fails
`Failed to parse JSON response object after both simple and advanced attempts: ${parseError.message}`
);
}
@@ -464,7 +462,10 @@ async function expandTask(
let complexityReasoningContext = '';
let systemPrompt; // Declare systemPrompt here
const complexityReportPath = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
const complexityReportPath = path.join(
projectRoot,
'scripts/task-complexity-report.json'
);
let taskAnalysis = null;
try {

View File

@@ -36,6 +36,11 @@ function listTasks(
outputFormat = 'text'
) {
try {
// Only display banner for text output
if (outputFormat === 'text') {
displayBanner();
}
const data = readJSON(tasksPath); // Reads the whole tasks.json
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
@@ -120,7 +125,86 @@ function listTasks(
const subtaskCompletionPercentage =
totalSubtasks > 0 ? (completedSubtasks / totalSubtasks) * 100 : 0;
// Calculate dependency statistics (moved up to be available for all output formats)
// For JSON output, return structured data
if (outputFormat === 'json') {
// *** Modification: Remove 'details' field for JSON output ***
const tasksWithoutDetails = filteredTasks.map((task) => {
// <-- USES filteredTasks!
// Omit 'details' from the parent task
const { details, ...taskRest } = task;
// If subtasks exist, omit 'details' from them too
if (taskRest.subtasks && Array.isArray(taskRest.subtasks)) {
taskRest.subtasks = taskRest.subtasks.map((subtask) => {
const { details: subtaskDetails, ...subtaskRest } = subtask;
return subtaskRest;
});
}
return taskRest;
});
// *** End of Modification ***
return {
tasks: tasksWithoutDetails, // <--- THIS IS THE ARRAY BEING RETURNED
filter: statusFilter || 'all', // Return the actual filter used
stats: {
total: totalTasks,
completed: doneCount,
inProgress: inProgressCount,
pending: pendingCount,
blocked: blockedCount,
deferred: deferredCount,
cancelled: cancelledCount,
completionPercentage,
subtasks: {
total: totalSubtasks,
completed: completedSubtasks,
inProgress: inProgressSubtasks,
pending: pendingSubtasks,
blocked: blockedSubtasks,
deferred: deferredSubtasks,
cancelled: cancelledSubtasks,
completionPercentage: subtaskCompletionPercentage
}
}
};
}
// ... existing code for text output ...
// Calculate status breakdowns as percentages of total
const taskStatusBreakdown = {
'in-progress': totalTasks > 0 ? (inProgressCount / totalTasks) * 100 : 0,
pending: totalTasks > 0 ? (pendingCount / totalTasks) * 100 : 0,
blocked: totalTasks > 0 ? (blockedCount / totalTasks) * 100 : 0,
deferred: totalTasks > 0 ? (deferredCount / totalTasks) * 100 : 0,
cancelled: totalTasks > 0 ? (cancelledCount / totalTasks) * 100 : 0
};
const subtaskStatusBreakdown = {
'in-progress':
totalSubtasks > 0 ? (inProgressSubtasks / totalSubtasks) * 100 : 0,
pending: totalSubtasks > 0 ? (pendingSubtasks / totalSubtasks) * 100 : 0,
blocked: totalSubtasks > 0 ? (blockedSubtasks / totalSubtasks) * 100 : 0,
deferred:
totalSubtasks > 0 ? (deferredSubtasks / totalSubtasks) * 100 : 0,
cancelled:
totalSubtasks > 0 ? (cancelledSubtasks / totalSubtasks) * 100 : 0
};
// Create progress bars with status breakdowns
const taskProgressBar = createProgressBar(
completionPercentage,
30,
taskStatusBreakdown
);
const subtaskProgressBar = createProgressBar(
subtaskCompletionPercentage,
30,
subtaskStatusBreakdown
);
// Calculate dependency statistics
const completedTaskIds = new Set(
data.tasks
.filter((t) => t.status === 'done' || t.status === 'completed')
@@ -192,118 +276,6 @@ function listTasks(
// Find next task to work on, passing the complexity report
const nextItem = findNextTask(data.tasks, complexityReport);
// For JSON output, return structured data
if (outputFormat === 'json') {
// *** Modification: Remove 'details' field for JSON output ***
const tasksWithoutDetails = filteredTasks.map((task) => {
// <-- USES filteredTasks!
// Omit 'details' from the parent task
const { details, ...taskRest } = task;
// If subtasks exist, omit 'details' from them too
if (taskRest.subtasks && Array.isArray(taskRest.subtasks)) {
taskRest.subtasks = taskRest.subtasks.map((subtask) => {
const { details: subtaskDetails, ...subtaskRest } = subtask;
return subtaskRest;
});
}
return taskRest;
});
// *** End of Modification ***
return {
tasks: tasksWithoutDetails, // <--- THIS IS THE ARRAY BEING RETURNED
filter: statusFilter || 'all', // Return the actual filter used
stats: {
total: totalTasks,
completed: doneCount,
inProgress: inProgressCount,
pending: pendingCount,
blocked: blockedCount,
deferred: deferredCount,
cancelled: cancelledCount,
completionPercentage,
subtasks: {
total: totalSubtasks,
completed: completedSubtasks,
inProgress: inProgressSubtasks,
pending: pendingSubtasks,
blocked: blockedSubtasks,
deferred: deferredSubtasks,
cancelled: cancelledSubtasks,
completionPercentage: subtaskCompletionPercentage
}
}
};
}
// For markdown-readme output, return formatted markdown
if (outputFormat === 'markdown-readme') {
return generateMarkdownOutput(data, filteredTasks, {
totalTasks,
completedTasks,
completionPercentage,
doneCount,
inProgressCount,
pendingCount,
blockedCount,
deferredCount,
cancelledCount,
totalSubtasks,
completedSubtasks,
subtaskCompletionPercentage,
inProgressSubtasks,
pendingSubtasks,
blockedSubtasks,
deferredSubtasks,
cancelledSubtasks,
tasksWithNoDeps,
tasksReadyToWork,
tasksWithUnsatisfiedDeps,
mostDependedOnTask,
mostDependedOnTaskId,
maxDependents,
avgDependenciesPerTask,
complexityReport,
withSubtasks,
nextItem
});
}
// ... existing code for text output ...
// Calculate status breakdowns as percentages of total
const taskStatusBreakdown = {
'in-progress': totalTasks > 0 ? (inProgressCount / totalTasks) * 100 : 0,
pending: totalTasks > 0 ? (pendingCount / totalTasks) * 100 : 0,
blocked: totalTasks > 0 ? (blockedCount / totalTasks) * 100 : 0,
deferred: totalTasks > 0 ? (deferredCount / totalTasks) * 100 : 0,
cancelled: totalTasks > 0 ? (cancelledCount / totalTasks) * 100 : 0
};
const subtaskStatusBreakdown = {
'in-progress':
totalSubtasks > 0 ? (inProgressSubtasks / totalSubtasks) * 100 : 0,
pending: totalSubtasks > 0 ? (pendingSubtasks / totalSubtasks) * 100 : 0,
blocked: totalSubtasks > 0 ? (blockedSubtasks / totalSubtasks) * 100 : 0,
deferred:
totalSubtasks > 0 ? (deferredSubtasks / totalSubtasks) * 100 : 0,
cancelled:
totalSubtasks > 0 ? (cancelledSubtasks / totalSubtasks) * 100 : 0
};
// Create progress bars with status breakdowns
const taskProgressBar = createProgressBar(
completionPercentage,
30,
taskStatusBreakdown
);
const subtaskProgressBar = createProgressBar(
subtaskCompletionPercentage,
30,
subtaskStatusBreakdown
);
// Get terminal width - more reliable method
let terminalWidth;
try {
@@ -792,232 +764,4 @@ function getWorkItemDescription(item, allTasks) {
}
}
/**
* Generate markdown-formatted output for README files
* @param {Object} data - Full tasks data
* @param {Array} filteredTasks - Filtered tasks array
* @param {Object} stats - Statistics object
* @returns {string} - Formatted markdown string
*/
function generateMarkdownOutput(data, filteredTasks, stats) {
const {
totalTasks,
completedTasks,
completionPercentage,
doneCount,
inProgressCount,
pendingCount,
blockedCount,
deferredCount,
cancelledCount,
totalSubtasks,
completedSubtasks,
subtaskCompletionPercentage,
inProgressSubtasks,
pendingSubtasks,
blockedSubtasks,
deferredSubtasks,
cancelledSubtasks,
tasksWithNoDeps,
tasksReadyToWork,
tasksWithUnsatisfiedDeps,
mostDependedOnTask,
mostDependedOnTaskId,
maxDependents,
avgDependenciesPerTask,
complexityReport,
withSubtasks,
nextItem
} = stats;
let markdown = '';
// Create progress bars for markdown (using Unicode block characters)
const createMarkdownProgressBar = (percentage, width = 20) => {
const filled = Math.round((percentage / 100) * width);
const empty = width - filled;
return '█'.repeat(filled) + '░'.repeat(empty);
};
// Dashboard section
markdown += '```\n';
markdown +=
'╭─────────────────────────────────────────────────────────╮╭─────────────────────────────────────────────────────────╮\n';
markdown +=
'│ ││ │\n';
markdown +=
'│ Project Dashboard ││ Dependency Status & Next Task │\n';
markdown += `│ Tasks Progress: ${createMarkdownProgressBar(completionPercentage, 20)} ${Math.round(completionPercentage)}% ││ Dependency Metrics: │\n`;
markdown += `${Math.round(completionPercentage)}% ││ • Tasks with no dependencies: ${tasksWithNoDeps}\n`;
markdown += `│ Done: ${doneCount} In Progress: ${inProgressCount} Pending: ${pendingCount} Blocked: ${blockedCount} ││ • Tasks ready to work on: ${tasksReadyToWork}\n`;
markdown += `│ Deferred: ${deferredCount} Cancelled: ${cancelledCount} ││ • Tasks blocked by dependencies: ${tasksWithUnsatisfiedDeps}\n`;
markdown += `│ ││ • Most depended-on task: #${mostDependedOnTaskId} (${maxDependents} dependents) │\n`;
markdown += `│ Subtasks Progress: ${createMarkdownProgressBar(subtaskCompletionPercentage, 20)} ││ • Avg dependencies per task: ${avgDependenciesPerTask.toFixed(1)}\n`;
markdown += `${Math.round(subtaskCompletionPercentage)}% ${Math.round(subtaskCompletionPercentage)}% ││ │\n`;
markdown += `│ Completed: ${completedSubtasks}/${totalSubtasks} In Progress: ${inProgressSubtasks} Pending: ${pendingSubtasks} ││ Next Task to Work On: │\n`;
const nextTaskTitle = nextItem
? nextItem.title.length > 40
? nextItem.title.substring(0, 37) + '...'
: nextItem.title
: 'No task available';
markdown += `│ Blocked: ${blockedSubtasks} Deferred: ${deferredSubtasks} Cancelled: ${cancelledSubtasks} ││ ID: ${nextItem ? nextItem.id : 'N/A'} - ${nextTaskTitle}\n`;
markdown += `│ ││ Priority: ${nextItem ? nextItem.priority || 'medium' : ''} Dependencies: ${nextItem && nextItem.dependencies && nextItem.dependencies.length > 0 ? 'Some' : 'None'}\n`;
markdown += `│ Priority Breakdown: ││ Complexity: ${nextItem && nextItem.complexityScore ? '● ' + nextItem.complexityScore : 'N/A'}\n`;
markdown += `│ • High priority: ${data.tasks.filter((t) => t.priority === 'high').length} │╰─────────────────────────────────────────────────────────╯\n`;
markdown += `│ • Medium priority: ${data.tasks.filter((t) => t.priority === 'medium').length}\n`;
markdown += `│ • Low priority: ${data.tasks.filter((t) => t.priority === 'low').length}\n`;
markdown += '│ │\n';
markdown += '╰─────────────────────────────────────────────────────────╯\n';
// Tasks table
markdown +=
'┌───────────┬──────────────────────────────────────┬─────────────────┬──────────────┬───────────────────────┬───────────┐\n';
markdown +=
'│ ID │ Title │ Status │ Priority │ Dependencies │ Complexi… │\n';
markdown +=
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n';
// Helper function to format status with symbols
const getStatusSymbol = (status) => {
switch (status) {
case 'done':
case 'completed':
return '✓ done';
case 'in-progress':
return '► in-progress';
case 'pending':
return '○ pending';
case 'blocked':
return '⭕ blocked';
case 'deferred':
return 'x deferred';
case 'cancelled':
return 'x cancelled';
case 'review':
return '? review';
default:
return status || 'pending';
}
};
// Helper function to format dependencies without color codes
const formatDependenciesForMarkdown = (deps, allTasks) => {
if (!deps || deps.length === 0) return 'None';
return deps
.map((depId) => {
const depTask = allTasks.find((t) => t.id === depId);
return depTask ? depId.toString() : depId.toString();
})
.join(', ');
};
// Process all tasks
filteredTasks.forEach((task) => {
const taskTitle = task.title; // No truncation for README
const statusSymbol = getStatusSymbol(task.status);
const priority = task.priority || 'medium';
const deps = formatDependenciesForMarkdown(task.dependencies, data.tasks);
const complexity = task.complexityScore
? `${task.complexityScore}`
: 'N/A';
markdown += `${task.id.toString().padEnd(9)}${taskTitle.substring(0, 36).padEnd(36)}${statusSymbol.padEnd(15)}${priority.padEnd(12)}${deps.substring(0, 21).padEnd(21)}${complexity.padEnd(9)}\n`;
// Add subtasks if requested
if (withSubtasks && task.subtasks && task.subtasks.length > 0) {
task.subtasks.forEach((subtask) => {
const subtaskTitle = `└─ ${subtask.title}`; // No truncation
const subtaskStatus = getStatusSymbol(subtask.status);
const subtaskDeps = formatDependenciesForMarkdown(
subtask.dependencies,
data.tasks
);
const subtaskComplexity = subtask.complexityScore
? subtask.complexityScore.toString()
: 'N/A';
markdown +=
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n';
markdown += `${task.id}.${subtask.id}${' '.padEnd(6)}${subtaskTitle.substring(0, 36).padEnd(36)}${subtaskStatus.padEnd(15)} │ - │ ${subtaskDeps.substring(0, 21).padEnd(21)}${subtaskComplexity.padEnd(9)}\n`;
});
}
markdown +=
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n';
});
// Close the table
markdown = markdown.slice(
0,
-1 *
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n'
.length
);
markdown +=
'└───────────┴──────────────────────────────────────┴─────────────────┴──────────────┴───────────────────────┴───────────┘\n';
markdown += '```\n\n';
// Next task recommendation
if (nextItem) {
markdown +=
'╭────────────────────────────────────────────── ⚡ RECOMMENDED NEXT TASK ⚡ ──────────────────────────────────────────────╮\n';
markdown +=
'│ │\n';
markdown += `│ 🔥 Next Task to Work On: #${nextItem.id} - ${nextItem.title}\n`;
markdown +=
'│ │\n';
markdown += `│ Priority: ${nextItem.priority || 'medium'} Status: ${getStatusSymbol(nextItem.status)}\n`;
markdown += `│ Dependencies: ${nextItem.dependencies && nextItem.dependencies.length > 0 ? formatDependenciesForMarkdown(nextItem.dependencies, data.tasks) : 'None'}\n`;
markdown +=
'│ │\n';
markdown += `│ Description: ${getWorkItemDescription(nextItem, data.tasks)}\n`;
markdown +=
'│ │\n';
// Add subtasks if they exist
const parentTask = data.tasks.find((t) => t.id === nextItem.id);
if (parentTask && parentTask.subtasks && parentTask.subtasks.length > 0) {
markdown +=
'│ Subtasks: │\n';
parentTask.subtasks.forEach((subtask) => {
markdown += `${nextItem.id}.${subtask.id} [${subtask.status || 'pending'}] ${subtask.title}\n`;
});
markdown +=
'│ │\n';
}
markdown += `│ Start working: task-master set-status --id=${nextItem.id} --status=in-progress │\n`;
markdown += `│ View details: task-master show ${nextItem.id}\n`;
markdown +=
'│ │\n';
markdown +=
'╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n\n';
}
// Suggested next steps
markdown += '\n';
markdown +=
'╭──────────────────────────────────────────────────────────────────────────────────────╮\n';
markdown +=
'│ │\n';
markdown +=
'│ Suggested Next Steps: │\n';
markdown +=
'│ │\n';
markdown +=
'│ 1. Run task-master next to see what to work on next │\n';
markdown +=
'│ 2. Run task-master expand --id=<id> to break down a task into subtasks │\n';
markdown +=
'│ 3. Run task-master set-status --id=<id> --status=done to mark a task as complete │\n';
markdown +=
'│ │\n';
markdown +=
'╰──────────────────────────────────────────────────────────────────────────────────────╯\n';
return markdown;
}
export default listTasks;
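The markdown dashboard above is driven by the small createMarkdownProgressBar helper; a worked example of its output:

```js
const createMarkdownProgressBar = (percentage, width = 20) => {
	const filled = Math.round((percentage / 100) * width);
	return '█'.repeat(filled) + '░'.repeat(width - filled);
};

createMarkdownProgressBar(65); // '█████████████░░░░░░░'  (13 filled, 7 empty)
createMarkdownProgressBar(0); // '░░░░░░░░░░░░░░░░░░░░'
```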

View File

@@ -1,283 +0,0 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';
import { createLogWrapper } from '../../../mcp-server/src/tools/utils.js';
import { findProjectRoot } from '../utils.js';
import {
LEGACY_CONFIG_FILE,
TASKMASTER_CONFIG_FILE
} from '../../../src/constants/paths.js';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Create a simple log wrapper for CLI use
const log = createLogWrapper({
info: (msg) => console.log(chalk.blue('ℹ'), msg),
warn: (msg) => console.log(chalk.yellow('⚠'), msg),
error: (msg) => console.error(chalk.red('✗'), msg),
success: (msg) => console.log(chalk.green('✓'), msg)
});
/**
* Main migration function
* @param {Object} options - Migration options
*/
export async function migrateProject(options = {}) {
const projectRoot = findProjectRoot() || process.cwd();
log.info(`Starting migration in: ${projectRoot}`);
// Check if .taskmaster directory already exists
const taskmasterDir = path.join(projectRoot, '.taskmaster');
if (fs.existsSync(taskmasterDir) && !options.force) {
log.warn(
'.taskmaster directory already exists. Use --force to overwrite or skip migration.'
);
return;
}
// Analyze what needs to be migrated
const migrationPlan = analyzeMigrationNeeds(projectRoot);
if (migrationPlan.length === 0) {
log.info(
'No files to migrate. Project may already be using the new structure.'
);
return;
}
// Show migration plan
log.info('Migration plan:');
for (const item of migrationPlan) {
const action = options.dryRun ? 'Would move' : 'Will move';
log.info(` ${action}: ${item.from}${item.to}`);
}
if (options.dryRun) {
log.info(
'Dry run complete. Use --dry-run=false to perform actual migration.'
);
return;
}
// Confirm migration
if (!options.yes) {
const readline = await import('readline');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
const answer = await new Promise((resolve) => {
rl.question('Proceed with migration? (y/N): ', resolve);
});
rl.close();
if (answer.toLowerCase() !== 'y' && answer.toLowerCase() !== 'yes') {
log.info('Migration cancelled.');
return;
}
}
// Perform migration
try {
await performMigration(projectRoot, migrationPlan, options);
log.success('Migration completed successfully!');
log.info('You can now use the new .taskmaster directory structure.');
if (!options.cleanup) {
log.info(
'Old files were preserved. Use --cleanup to remove them after verification.'
);
}
} catch (error) {
log.error(`Migration failed: ${error.message}`);
throw error;
}
}
/**
* Analyze what files need to be migrated
* @param {string} projectRoot - Project root directory
* @returns {Array} Migration plan items
*/
function analyzeMigrationNeeds(projectRoot) {
const migrationPlan = [];
// Check for tasks directory
const tasksDir = path.join(projectRoot, 'tasks');
if (fs.existsSync(tasksDir)) {
const tasksFiles = fs.readdirSync(tasksDir);
for (const file of tasksFiles) {
migrationPlan.push({
from: path.join('tasks', file),
to: path.join('.taskmaster', 'tasks', file),
type: 'task'
});
}
}
// Check for scripts directory files
const scriptsDir = path.join(projectRoot, 'scripts');
if (fs.existsSync(scriptsDir)) {
const scriptsFiles = fs.readdirSync(scriptsDir);
for (const file of scriptsFiles) {
const filePath = path.join(scriptsDir, file);
if (fs.statSync(filePath).isFile()) {
// Categorize files more intelligently
let destination;
const lowerFile = file.toLowerCase();
if (
lowerFile.includes('example') ||
lowerFile.includes('template') ||
lowerFile.includes('boilerplate') ||
lowerFile.includes('sample')
) {
// Template/example files go to templates (including example_prd.txt)
destination = path.join('.taskmaster', 'templates', file);
} else if (
lowerFile.includes('complexity') &&
lowerFile.includes('report') &&
lowerFile.endsWith('.json')
) {
// Only actual complexity reports go to reports
destination = path.join('.taskmaster', 'reports', file);
} else if (
lowerFile.includes('prd') ||
lowerFile.endsWith('.md') ||
lowerFile.endsWith('.txt')
) {
// Documentation files go to docs (but not examples or reports)
destination = path.join('.taskmaster', 'docs', file);
} else {
// Other files stay in scripts or get skipped - don't force everything into templates
log.warn(
`Skipping migration of '${file}' - uncertain categorization. You may need to move this manually.`
);
continue;
}
migrationPlan.push({
from: path.join('scripts', file),
to: destination,
type: 'script'
});
}
}
}
// Check for .taskmasterconfig
const oldConfig = path.join(projectRoot, LEGACY_CONFIG_FILE);
if (fs.existsSync(oldConfig)) {
migrationPlan.push({
from: LEGACY_CONFIG_FILE,
to: TASKMASTER_CONFIG_FILE,
type: 'config'
});
}
return migrationPlan;
}
/**
* Perform the actual migration
* @param {string} projectRoot - Project root directory
* @param {Array} migrationPlan - List of files to migrate
* @param {Object} options - Migration options
*/
async function performMigration(projectRoot, migrationPlan, options) {
// Create .taskmaster directory
const taskmasterDir = path.join(projectRoot, '.taskmaster');
if (!fs.existsSync(taskmasterDir)) {
fs.mkdirSync(taskmasterDir, { recursive: true });
}
// Group migration items by destination directory to create only needed subdirs
const neededDirs = new Set();
for (const item of migrationPlan) {
const destDir = path.dirname(item.to);
neededDirs.add(destDir);
}
// Create only the directories we actually need
for (const dir of neededDirs) {
const fullDirPath = path.join(projectRoot, dir);
if (!fs.existsSync(fullDirPath)) {
fs.mkdirSync(fullDirPath, { recursive: true });
log.info(`Created directory: ${dir}`);
}
}
// Create backup if requested
if (options.backup) {
const backupDir = path.join(projectRoot, '.taskmaster-migration-backup');
log.info(`Creating backup in: ${backupDir}`);
if (fs.existsSync(backupDir)) {
fs.rmSync(backupDir, { recursive: true, force: true });
}
fs.mkdirSync(backupDir, { recursive: true });
}
// Migrate files
for (const item of migrationPlan) {
const fromPath = path.join(projectRoot, item.from);
const toPath = path.join(projectRoot, item.to);
if (!fs.existsSync(fromPath)) {
log.warn(`Source file not found: ${item.from}`);
continue;
}
// Create backup if requested
if (options.backup) {
const backupPath = path.join(
projectRoot,
'.taskmaster-migration-backup',
item.from
);
const backupDir = path.dirname(backupPath);
if (!fs.existsSync(backupDir)) {
fs.mkdirSync(backupDir, { recursive: true });
}
fs.copyFileSync(fromPath, backupPath);
}
// Ensure destination directory exists
const toDir = path.dirname(toPath);
if (!fs.existsSync(toDir)) {
fs.mkdirSync(toDir, { recursive: true });
}
// Copy file
fs.copyFileSync(fromPath, toPath);
log.info(`Migrated: ${item.from}${item.to}`);
// Remove original if cleanup is requested
if (options.cleanup) {
fs.unlinkSync(fromPath);
}
}
// Clean up empty directories if cleanup is requested
if (options.cleanup) {
const dirsToCheck = ['tasks', 'scripts'];
for (const dir of dirsToCheck) {
const dirPath = path.join(projectRoot, dir);
if (fs.existsSync(dirPath)) {
try {
const files = fs.readdirSync(dirPath);
if (files.length === 0) {
fs.rmdirSync(dirPath);
log.info(`Removed empty directory: ${dir}`);
}
} catch (error) {
// Directory not empty or other error, skip
}
}
}
}
}
export default { migrateProject };
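The removed migrate.js exposed a single entry point; judging from the options it reads (dryRun, backup, cleanup, yes, force), typical invocations looked roughly like this:

```js
import { migrateProject } from './task-manager/migrate.js';

// Preview the migration plan without moving any files.
await migrateProject({ dryRun: true });

// Migrate with a backup copy and skip the interactive confirmation.
await migrateProject({ yes: true, backup: true });

// After verifying the new .taskmaster layout, remove the old files.
await migrateProject({ yes: true, cleanup: true });
```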

View File

@@ -3,6 +3,8 @@
* Core functionality for managing AI model configurations
*/
import path from 'path';
import fs from 'fs';
import https from 'https';
import http from 'http';
import {
@@ -21,8 +23,6 @@ import {
getAllProviders,
getBaseUrlForRole
} from '../config-manager.js';
import { findConfigPath } from '../../../src/utils/path-utils.js';
import { log } from '../utils.js';
/**
* Fetches the list of models from OpenRouter API.
@@ -72,14 +72,14 @@ function fetchOpenRouterModels() {
/**
* Fetches the list of models from Ollama instance.
* @param {string} baseURL - The base URL for the Ollama API (e.g., "http://localhost:11434/api")
* @param {string} baseUrl - The base URL for the Ollama API (e.g., "http://localhost:11434/api")
* @returns {Promise<Array|null>} A promise that resolves with the list of model objects or null if fetch fails.
*/
function fetchOllamaModels(baseURL = 'http://localhost:11434/api') {
function fetchOllamaModels(baseUrl = 'http://localhost:11434/api') {
return new Promise((resolve) => {
try {
// Parse the base URL to extract hostname, port, and base path
const url = new URL(baseURL);
const url = new URL(baseUrl);
const isHttps = url.protocol === 'https:';
const port = url.port || (isHttps ? 443 : 80);
const basePath = url.pathname.endsWith('/')
@@ -149,27 +149,34 @@ async function getModelConfiguration(options = {}) {
}
};
if (!projectRoot) {
throw new Error('Project root is required but not found.');
// Check if configuration file exists using provided project root
let configPath;
let configExists = false;
if (projectRoot) {
configPath = path.join(projectRoot, '.taskmasterconfig');
configExists = fs.existsSync(configPath);
report(
'info',
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
);
} else {
configExists = isConfigFilePresent();
report(
'info',
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
);
}
// Use centralized config path finding instead of hardcoded path
const configPath = findConfigPath(null, { projectRoot });
const configExists = isConfigFilePresent(projectRoot);
log(
'debug',
`Checking for config file using findConfigPath, found: ${configPath}`
);
log(
'debug',
`Checking config file using isConfigFilePresent(), exists: ${configExists}`
);
if (!configExists) {
throw new Error(
'The configuration file is missing. Run "task-master models --setup" to create it.'
);
return {
success: false,
error: {
code: 'CONFIG_MISSING',
message:
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
}
};
}
try {
@@ -279,27 +286,34 @@ async function getAvailableModelsList(options = {}) {
}
};
if (!projectRoot) {
throw new Error('Project root is required but not found.');
// Check if configuration file exists using provided project root
let configPath;
let configExists = false;
if (projectRoot) {
configPath = path.join(projectRoot, '.taskmasterconfig');
configExists = fs.existsSync(configPath);
report(
'info',
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
);
} else {
configExists = isConfigFilePresent();
report(
'info',
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
);
}
// Use centralized config path finding instead of hardcoded path
const configPath = findConfigPath(null, { projectRoot });
const configExists = isConfigFilePresent(projectRoot);
log(
'debug',
`Checking for config file using findConfigPath, found: ${configPath}`
);
log(
'debug',
`Checking config file using isConfigFilePresent(), exists: ${configExists}`
);
if (!configExists) {
throw new Error(
'The configuration file is missing. Run "task-master models --setup" to create it.'
);
return {
success: false,
error: {
code: 'CONFIG_MISSING',
message:
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
}
};
}
try {
@@ -372,27 +386,34 @@ async function setModel(role, modelId, options = {}) {
}
};
if (!projectRoot) {
throw new Error('Project root is required but not found.');
// Check if configuration file exists using provided project root
let configPath;
let configExists = false;
if (projectRoot) {
configPath = path.join(projectRoot, '.taskmasterconfig');
configExists = fs.existsSync(configPath);
report(
'info',
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
);
} else {
configExists = isConfigFilePresent();
report(
'info',
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
);
}
// Use centralized config path finding instead of hardcoded path
const configPath = findConfigPath(null, { projectRoot });
const configExists = isConfigFilePresent(projectRoot);
log(
'debug',
`Checking for config file using findConfigPath, found: ${configPath}`
);
log(
'debug',
`Checking config file using isConfigFilePresent(), exists: ${configExists}`
);
if (!configExists) {
throw new Error(
'The configuration file is missing. Run "task-master models --setup" to create it.'
);
return {
success: false,
error: {
code: 'CONFIG_MISSING',
message:
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
}
};
}
// Validate role
@@ -424,7 +445,7 @@ async function setModel(role, modelId, options = {}) {
let warningMessage = null;
// Find the model data in internal list initially to see if it exists at all
const modelData = availableModels.find((m) => m.id === modelId);
let modelData = availableModels.find((m) => m.id === modelId);
// --- Revised Logic: Prioritize providerHint --- //
@@ -450,14 +471,7 @@ async function setModel(role, modelId, options = {}) {
openRouterModels.some((m) => m.id === modelId)
) {
determinedProvider = 'openrouter';
// Check if this is a free model (ends with :free)
if (modelId.endsWith(':free')) {
warningMessage = `Warning: OpenRouter free model '${modelId}' selected. Free models have significant limitations including lower context windows, reduced rate limits, and may not support advanced features like tool_use. Consider using the paid version '${modelId.replace(':free', '')}' for full functionality.`;
} else {
warningMessage = `Warning: Custom OpenRouter model '${modelId}' set. This model is not officially validated by Taskmaster and may not function as expected.`;
}
warningMessage = `Warning: Custom OpenRouter model '${modelId}' set. This model is not officially validated by Taskmaster and may not function as expected.`;
report('warn', warningMessage);
} else {
// Hinted as OpenRouter but not found in live check
@@ -470,13 +484,13 @@ async function setModel(role, modelId, options = {}) {
report('info', `Checking Ollama for ${modelId} (as hinted)...`);
// Get the Ollama base URL from config
const ollamaBaseURL = getBaseUrlForRole(role, projectRoot);
const ollamaModels = await fetchOllamaModels(ollamaBaseURL);
const ollamaBaseUrl = getBaseUrlForRole(role, projectRoot);
const ollamaModels = await fetchOllamaModels(ollamaBaseUrl);
if (ollamaModels === null) {
// Connection failed - server probably not running
throw new Error(
`Unable to connect to Ollama server at ${ollamaBaseURL}. Please ensure Ollama is running and try again.`
`Unable to connect to Ollama server at ${ollamaBaseUrl}. Please ensure Ollama is running and try again.`
);
} else if (ollamaModels.some((m) => m.model === modelId)) {
determinedProvider = 'ollama';
@@ -484,16 +498,11 @@ async function setModel(role, modelId, options = {}) {
report('warn', warningMessage);
} else {
// Server is running but model not found
const tagsUrl = `${ollamaBaseURL}/tags`;
const tagsUrl = `${ollamaBaseUrl}/tags`;
throw new Error(
`Model ID "${modelId}" not found in the Ollama instance. Please verify the model is pulled and available. You can check available models with: curl ${tagsUrl}`
);
}
} else if (providerHint === 'bedrock') {
// Set provider without model validation since Bedrock models are managed by AWS
determinedProvider = 'bedrock';
warningMessage = `Warning: Custom Bedrock model '${modelId}' set. Please ensure the model ID is valid and accessible in your AWS account.`;
report('warn', warningMessage);
} else {
// Invalid provider hint - should not happen
throw new Error(`Invalid provider hint received: ${providerHint}`);
@@ -547,8 +556,8 @@ async function setModel(role, modelId, options = {}) {
return {
success: false,
error: {
code: 'CONFIG_WRITE_ERROR',
message: 'Error writing updated configuration to configuration file'
code: 'WRITE_ERROR',
message: 'Error writing updated configuration to .taskmasterconfig'
}
};
}
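fetchOllamaModels resolves model availability against the Ollama tags endpoint that the error message also points at; assuming a default local install, the same check can be reproduced by hand:

```js
// Manual check against the endpoint the code queries (default local Ollama address;
// the response fields are an assumption based on the m.model comparison above).
const baseUrl = 'http://localhost:11434/api';
const response = await fetch(`${baseUrl}/tags`);
const { models } = await response.json();
console.log(models.map((m) => m.model)); // IDs comparable to the configured modelId
```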

View File

@@ -33,6 +33,8 @@ async function setTaskStatus(tasksPath, taskIdInput, newStatus, options = {}) {
// Only display UI elements if not in MCP mode
if (!isMcpMode) {
displayBanner();
console.log(
boxen(chalk.white.bold(`Updating Task Status to: ${newStatus}`), {
padding: 1,

View File

@@ -24,10 +24,6 @@ import {
} from './task-manager.js';
import { getProjectName, getDefaultSubtasks } from './config-manager.js';
import { TASK_STATUS_OPTIONS } from '../../src/constants/task-status.js';
import {
TASKMASTER_CONFIG_FILE,
TASKMASTER_TASKS_FILE
} from '../../src/constants/paths.js';
import { getTaskMasterVersion } from '../../src/utils/getVersion.js';
// Create a color gradient for the banner
@@ -40,7 +36,7 @@ const warmGradient = gradient(['#fb8b24', '#e36414', '#9a031e']);
function displayBanner() {
if (isSilentMode()) return;
// console.clear(); // Removing this to avoid clearing the terminal per command
console.clear();
const bannerText = figlet.textSync('Task Master', {
font: 'Standard',
horizontalLayout: 'default',
@@ -78,8 +74,6 @@ function displayBanner() {
* @returns {Object} Spinner object
*/
function startLoadingIndicator(message) {
if (isSilentMode()) return null;
const spinner = ora({
text: message,
color: 'cyan'
@@ -89,75 +83,15 @@ function startLoadingIndicator(message) {
}
/**
* Stop a loading indicator (basic stop, no success/fail indicator)
* Stop a loading indicator
* @param {Object} spinner - Spinner object to stop
*/
function stopLoadingIndicator(spinner) {
if (spinner && typeof spinner.stop === 'function') {
if (spinner && spinner.stop) {
spinner.stop();
}
}
/**
* Complete a loading indicator with success (shows checkmark)
* @param {Object} spinner - Spinner object to complete
* @param {string} message - Optional success message (defaults to current text)
*/
function succeedLoadingIndicator(spinner, message = null) {
if (spinner && typeof spinner.succeed === 'function') {
if (message) {
spinner.succeed(message);
} else {
spinner.succeed();
}
}
}
/**
* Complete a loading indicator with failure (shows X)
* @param {Object} spinner - Spinner object to fail
* @param {string} message - Optional failure message (defaults to current text)
*/
function failLoadingIndicator(spinner, message = null) {
if (spinner && typeof spinner.fail === 'function') {
if (message) {
spinner.fail(message);
} else {
spinner.fail();
}
}
}
/**
* Complete a loading indicator with warning (shows warning symbol)
* @param {Object} spinner - Spinner object to warn
* @param {string} message - Optional warning message (defaults to current text)
*/
function warnLoadingIndicator(spinner, message = null) {
if (spinner && typeof spinner.warn === 'function') {
if (message) {
spinner.warn(message);
} else {
spinner.warn();
}
}
}
/**
* Complete a loading indicator with info (shows info symbol)
* @param {Object} spinner - Spinner object to complete with info
* @param {string} message - Optional info message (defaults to current text)
*/
function infoLoadingIndicator(spinner, message = null) {
if (spinner && typeof spinner.info === 'function') {
if (message) {
spinner.info(message);
} else {
spinner.info();
}
}
}
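Taken together, these helpers wrap ora's stop/succeed/fail/warn/info methods; the intended call pattern around a long-running AI request (mirroring the add-task changes earlier in this diff) is roughly:

```js
// One spinner per long-running call, resolved with a checkmark or an X.
async function generateWithSpinner() {
	let spinner = startLoadingIndicator('Generating new task with Main AI...');
	try {
		const result = await generateObjectService(/* ...request params... */);
		succeedLoadingIndicator(spinner, 'Task generated successfully');
		spinner = null;
		return result;
	} catch (error) {
		failLoadingIndicator(spinner, 'AI generation failed');
		spinner = null;
		throw error;
	} finally {
		if (spinner) stopLoadingIndicator(spinner); // safety net if neither branch ran
	}
}
```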
/**
* Create a colored progress bar
* @param {number} percent - The completion percentage
@@ -294,14 +228,14 @@ function getStatusWithColor(status, forTable = false) {
}
const statusConfig = {
done: { color: chalk.green, icon: '', tableIcon: '✓' },
completed: { color: chalk.green, icon: '', tableIcon: '✓' },
pending: { color: chalk.yellow, icon: '', tableIcon: '⏱' },
done: { color: chalk.green, icon: '', tableIcon: '✓' },
completed: { color: chalk.green, icon: '', tableIcon: '✓' },
pending: { color: chalk.yellow, icon: '⏱️', tableIcon: '⏱' },
'in-progress': { color: chalk.hex('#FFA500'), icon: '🔄', tableIcon: '►' },
deferred: { color: chalk.gray, icon: 'x', tableIcon: '⏱' },
blocked: { color: chalk.red, icon: '!', tableIcon: '✗' },
review: { color: chalk.magenta, icon: '?', tableIcon: '?' },
cancelled: { color: chalk.gray, icon: '❌', tableIcon: 'x' }
deferred: { color: chalk.gray, icon: '⏱️', tableIcon: '⏱' },
blocked: { color: chalk.red, icon: '', tableIcon: '✗' },
review: { color: chalk.magenta, icon: '👀', tableIcon: '👁' },
cancelled: { color: chalk.gray, icon: '❌', tableIcon: '' }
};
const config = statusConfig[status.toLowerCase()] || {
@@ -445,6 +379,8 @@ function formatDependenciesWithStatus(
* Display a comprehensive help guide
*/
function displayHelp() {
displayBanner();
// Get terminal width - moved to top of function to make it available throughout
const terminalWidth = process.stdout.columns || 100; // Default to 100 if can't detect
@@ -525,11 +461,6 @@ function displayHelp() {
args: '--id=<id> --status=<status>',
desc: `Update task status (${TASK_STATUS_OPTIONS.join(', ')})`
},
{
name: 'sync-readme',
args: '[--with-subtasks] [--status=<status>]',
desc: 'Export tasks to README.md with professional formatting'
},
{
name: 'update',
args: '--from=<id> --prompt="<context>"',
@@ -755,7 +686,7 @@ function displayHelp() {
configTable.push(
[
`${chalk.yellow(TASKMASTER_CONFIG_FILE)}${chalk.reset('')}`,
`${chalk.yellow('.taskmasterconfig')}${chalk.reset('')}`,
`${chalk.white('AI model configuration file (project root)')}${chalk.reset('')}`,
`${chalk.dim('Managed by models cmd')}${chalk.reset('')}`
],
@@ -810,9 +741,9 @@ function displayHelp() {
* @returns {string} Colored complexity score
*/
function getComplexityWithColor(score) {
if (score <= 3) return chalk.green(`● ${score}`);
if (score <= 6) return chalk.yellow(`● ${score}`);
return chalk.red(`● ${score}`);
if (score <= 3) return chalk.green(`🟢 ${score}`);
if (score <= 6) return chalk.yellow(`🟡 ${score}`);
return chalk.red(`🔴 ${score}`);
}
/**
@@ -832,6 +763,8 @@ function truncateString(str, maxLength) {
* @param {string} tasksPath - Path to the tasks.json file
*/
async function displayNextTask(tasksPath, complexityReportPath = null) {
displayBanner();
// Read the tasks file
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
@@ -1102,6 +1035,8 @@ async function displayTaskById(
complexityReportPath = null,
statusFilter = null
) {
displayBanner();
// Read the tasks file
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
@@ -1556,6 +1491,8 @@ async function displayTaskById(
* @param {string} reportPath - Path to the complexity report file
*/
async function displayComplexityReport(reportPath) {
displayBanner();
// Check if the report exists
if (!fs.existsSync(reportPath)) {
console.log(
@@ -1587,18 +1524,10 @@ async function displayComplexityReport(reportPath) {
if (answer.toLowerCase() === 'y' || answer.toLowerCase() === 'yes') {
// Call the analyze-complexity command
console.log(chalk.blue('Generating complexity report...'));
const tasksPath = TASKMASTER_TASKS_FILE;
if (!fs.existsSync(tasksPath)) {
console.error(
'❌ No tasks.json file found. Please run "task-master init" or create a tasks.json file.'
);
return null;
}
await analyzeTaskComplexity({
output: reportPath,
research: false, // Default to no research for speed
file: tasksPath
file: 'tasks/tasks.json'
});
// Read the newly generated report
return displayComplexityReport(reportPath);
@@ -1913,7 +1842,7 @@ function displayApiKeyStatus(statusReport) {
console.log(table.toString());
console.log(
chalk.gray(
` Note: Some providers (e.g., Azure, Ollama) may require additional endpoint configuration in ${TASKMASTER_CONFIG_FILE}.`
' Note: Some providers (e.g., Azure, Ollama) may require additional endpoint configuration in .taskmasterconfig.'
)
);
}
@@ -2152,9 +2081,5 @@ export {
displayApiKeyStatus,
displayModelConfiguration,
displayAvailableModels,
displayAiUsageSummary,
succeedLoadingIndicator,
failLoadingIndicator,
warnLoadingIndicator,
infoLoadingIndicator
displayAiUsageSummary
};

View File

@@ -9,11 +9,6 @@ import chalk from 'chalk';
import dotenv from 'dotenv';
// Import specific config getters needed here
import { getLogLevel, getDebugFlag } from './config-manager.js';
import {
COMPLEXITY_REPORT_FILE,
LEGACY_COMPLEXITY_REPORT_FILE,
LEGACY_CONFIG_FILE
} from '../../src/constants/paths.js';
// Global silent mode flag
let silentMode = false;
@@ -65,16 +60,16 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {
// --- Project Root Finding Utility ---
/**
* Recursively searches upwards for project root starting from a given directory.
* @param {string} [startDir=process.cwd()] - The directory to start searching from.
* @param {string[]} [markers=['package.json', '.git', LEGACY_CONFIG_FILE]] - Marker files/dirs to look for.
* @returns {string|null} The path to the project root, or null if not found.
* Finds the project root directory by searching for marker files/directories.
* @param {string} [startPath=process.cwd()] - The directory to start searching from.
* @param {string[]} [markers=['package.json', '.git', '.taskmasterconfig']] - Marker files/dirs to look for.
* @returns {string|null} The path to the project root directory, or null if not found.
*/
function findProjectRoot(
startDir = process.cwd(),
markers = ['package.json', '.git', LEGACY_CONFIG_FILE]
startPath = process.cwd(),
markers = ['package.json', '.git', '.taskmasterconfig']
) {
let currentPath = path.resolve(startDir);
let currentPath = path.resolve(startPath);
const rootPath = path.parse(currentPath).root;
while (currentPath !== rootPath) {
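The rest of the walk-up loop falls outside this hunk; assuming the usual pattern (check every marker in the current directory, otherwise step up one level), the full function looks roughly like:

```js
import fs from 'fs';
import path from 'path';

// Sketch only — the loop body is an assumption based on the signature and markers above.
function findProjectRoot(
	startPath = process.cwd(),
	markers = ['package.json', '.git', '.taskmasterconfig']
) {
	let currentPath = path.resolve(startPath);
	const rootPath = path.parse(currentPath).root;
	while (currentPath !== rootPath) {
		if (markers.some((marker) => fs.existsSync(path.join(currentPath, marker)))) {
			return currentPath; // first directory containing any marker wins
		}
		currentPath = path.dirname(currentPath); // step up one level
	}
	return null; // reached the filesystem root without finding a marker
}
```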
@@ -155,17 +150,8 @@ function log(level, ...args) {
return;
}
// GUARD: Prevent circular dependency during config loading
// Use a simple fallback log level instead of calling getLogLevel()
let configLevel = 'info'; // Default fallback
try {
// Only try to get config level if we're not in the middle of config loading
configLevel = getLogLevel() || 'info';
} catch (error) {
// If getLogLevel() fails (likely due to circular dependency),
// use default 'info' level and continue
configLevel = 'info';
}
// Get log level dynamically from config-manager
const configLevel = getLogLevel() || 'info'; // Use getter
// Use text prefixes instead of emojis
const prefixes = {
@@ -199,17 +185,8 @@ function log(level, ...args) {
* @returns {Object|null} Parsed JSON data or null if error occurs
*/
function readJSON(filepath) {
// GUARD: Prevent circular dependency during config loading
let isDebug = false; // Default fallback
try {
// Only try to get debug flag if we're not in the middle of config loading
isDebug = getDebugFlag();
} catch (error) {
// If getDebugFlag() fails (likely due to circular dependency),
// use default false and continue
isDebug = false;
}
// Get debug flag dynamically from config-manager
const isDebug = getDebugFlag();
try {
const rawData = fs.readFileSync(filepath, 'utf8');
return JSON.parse(rawData);
@@ -230,17 +207,8 @@ function readJSON(filepath) {
* @param {Object} data - Data to write
*/
function writeJSON(filepath, data) {
// GUARD: Prevent circular dependency during config loading
let isDebug = false; // Default fallback
try {
// Only try to get debug flag if we're not in the middle of config loading
isDebug = getDebugFlag();
} catch (error) {
// If getDebugFlag() fails (likely due to circular dependency),
// use default false and continue
isDebug = false;
}
// Get debug flag dynamically from config-manager
const isDebug = getDebugFlag();
try {
const dir = path.dirname(filepath);
if (!fs.existsSync(dir)) {
@@ -268,52 +236,29 @@ function sanitizePrompt(prompt) {
}
/**
* Reads the complexity report from file
* Reads and parses the complexity report if it exists
* @param {string} customPath - Optional custom path to the report
* @returns {Object|null} The parsed complexity report or null if not found
*/
function readComplexityReport(customPath = null) {
// GUARD: Prevent circular dependency during config loading
let isDebug = false; // Default fallback
// Get debug flag dynamically from config-manager
const isDebug = getDebugFlag();
try {
// Only try to get debug flag if we're not in the middle of config loading
isDebug = getDebugFlag();
} catch (error) {
// If getDebugFlag() fails (likely due to circular dependency),
// use default false and continue
isDebug = false;
}
try {
let reportPath;
if (customPath) {
reportPath = customPath;
} else {
// Try new location first, then fall back to legacy
const newPath = path.join(process.cwd(), COMPLEXITY_REPORT_FILE);
const legacyPath = path.join(
process.cwd(),
LEGACY_COMPLEXITY_REPORT_FILE
);
reportPath = fs.existsSync(newPath) ? newPath : legacyPath;
}
const reportPath =
customPath ||
path.join(process.cwd(), 'scripts', 'task-complexity-report.json');
if (!fs.existsSync(reportPath)) {
if (isDebug) {
log('debug', `Complexity report not found at ${reportPath}`);
}
return null;
}
const reportData = readJSON(reportPath);
if (isDebug) {
log('debug', `Successfully read complexity report from ${reportPath}`);
}
return reportData;
const reportData = fs.readFileSync(reportPath, 'utf8');
return JSON.parse(reportData);
} catch (error) {
log('warn', `Could not read complexity report: ${error.message}`);
// Optionally log full error in debug mode
if (isDebug) {
log('error', `Error reading complexity report: ${error.message}`);
// Use dynamic debug flag
log('error', 'Full error details:', error);
}
return null;
}
@@ -450,7 +395,7 @@ function findTaskById(
}
let taskResult = null;
const originalSubtaskCount = null;
let originalSubtaskCount = null;
// Find the main task
const id = parseInt(taskId, 10);
@@ -465,6 +410,7 @@ function findTaskById(
// If task found and statusFilter provided, filter its subtasks
if (statusFilter && task.subtasks && Array.isArray(task.subtasks)) {
const originalSubtaskCount = task.subtasks.length;
// Clone the task to avoid modifying the original array
const filteredTask = { ...task };
filteredTask.subtasks = task.subtasks.filter(
@@ -474,6 +420,7 @@ function findTaskById(
);
taskResult = filteredTask;
originalSubtaskCount = originalSubtaskCount;
}
// If task found and complexityReport provided, add complexity data
@@ -496,7 +443,7 @@ function truncate(text, maxLength) {
return text;
}
return `${text.slice(0, maxLength - 3)}...`;
return text.slice(0, maxLength - 3) + '...';
}
/**

View File

@@ -4,9 +4,9 @@
* Implementation for interacting with Anthropic models (e.g., Claude)
* using the Vercel AI SDK.
*/
import { createAnthropic } from '@ai-sdk/anthropic';
import { BaseAIProvider } from './base-provider.js';
import { generateText, streamText, generateObject } from 'ai';
import { log } from '../../scripts/modules/utils.js'; // Assuming utils is accessible
// TODO: Implement standardized functions for generateText, streamText, generateObject
@@ -17,38 +17,207 @@ import { BaseAIProvider } from './base-provider.js';
// Remove the global variable and caching logic
// let anthropicClient;
export class AnthropicAIProvider extends BaseAIProvider {
constructor() {
super();
this.name = 'Anthropic';
function getClient(apiKey, baseUrl) {
if (!apiKey) {
// In a real scenario, this would use the config resolver.
// Throwing error here if key isn't passed for simplicity.
// Keep the error check for the passed key
throw new Error('Anthropic API key is required.');
}
/**
* Creates and returns an Anthropic client instance.
* @param {object} params - Parameters for client initialization
* @param {string} params.apiKey - Anthropic API key
* @param {string} [params.baseURL] - Optional custom API endpoint
* @returns {Function} Anthropic client function
* @throws {Error} If API key is missing or initialization fails
*/
getClient(params) {
try {
const { apiKey, baseURL } = params;
if (!apiKey) {
throw new Error('Anthropic API key is required.');
}
return createAnthropic({
apiKey,
...(baseURL && { baseURL }),
headers: {
'anthropic-beta': 'output-128k-2025-02-19'
}
});
} catch (error) {
this.handleError('client initialization', error);
// Remove the check for anthropicClient
// if (!anthropicClient) {
// TODO: Explore passing options like default headers if needed
// Create and return a new instance directly with standard version header
return createAnthropic({
apiKey: apiKey,
...(baseUrl && { baseURL: baseUrl }),
		// Request extended (128k) output via the 'anthropic-beta' header
headers: {
'anthropic-beta': 'output-128k-2025-02-19'
}
});
}
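// Hedged sketch of how the factory above is consumed by the service functions
// below; the model ID is an assumed example, not a value taken from this file.
//
//   const anthropic = getClient(process.env.ANTHROPIC_API_KEY);
//   const model = anthropic('claude-3-5-sonnet-20241022'); // handed to generateText/streamText/generateObject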
// --- Standardized Service Function Implementations ---
/**
* Generates text using an Anthropic model.
*
* @param {object} params - Parameters for the text generation.
* @param {string} params.apiKey - The Anthropic API key.
* @param {string} params.modelId - The specific Anthropic model ID.
* @param {Array<object>} params.messages - The messages array (e.g., [{ role: 'user', content: '...' }]).
* @param {number} [params.maxTokens] - Maximum tokens for the response.
* @param {number} [params.temperature] - Temperature for generation.
* @param {string} [params.baseUrl] - The base URL for the Anthropic API.
* @returns {Promise<object>} The generated text content and usage.
* @throws {Error} If the API call fails.
*/
export async function generateAnthropicText({
apiKey,
modelId,
messages,
maxTokens,
temperature,
baseUrl
}) {
log('debug', `Generating Anthropic text with model: ${modelId}`);
try {
const client = getClient(apiKey, baseUrl);
const result = await generateText({
model: client(modelId),
messages: messages,
maxTokens: maxTokens,
temperature: temperature
// Beta header moved to client initialization
// TODO: Add other relevant parameters like topP, topK if needed
});
log(
'debug',
`Anthropic generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
);
// Return both text and usage
return {
text: result.text,
usage: {
inputTokens: result.usage.promptTokens,
outputTokens: result.usage.completionTokens
}
};
} catch (error) {
log('error', `Anthropic generateText failed: ${error.message}`);
// Consider more specific error handling or re-throwing a standardized error
throw error;
}
}
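// Hedged usage sketch for generateAnthropicText. The model ID and prompt are
// illustrative assumptions; in practice the key and model would come from the
// config resolver rather than being hard-coded like this.
//
//   const { text, usage } = await generateAnthropicText({
//     apiKey: process.env.ANTHROPIC_API_KEY,
//     modelId: 'claude-3-5-sonnet-20241022', // assumed model ID
//     messages: [{ role: 'user', content: 'Summarize the next pending task.' }],
//     maxTokens: 512,
//     temperature: 0.2
//   });
//   log('info', `Received ${usage.outputTokens} output tokens`);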
/**
* Streams text using an Anthropic model.
*
* @param {object} params - Parameters for the text streaming.
* @param {string} params.apiKey - The Anthropic API key.
* @param {string} params.modelId - The specific Anthropic model ID.
* @param {Array<object>} params.messages - The messages array.
* @param {number} [params.maxTokens] - Maximum tokens for the response.
* @param {number} [params.temperature] - Temperature for generation.
* @param {string} [params.baseUrl] - The base URL for the Anthropic API.
* @returns {Promise<object>} The full stream result object from the Vercel AI SDK.
* @throws {Error} If the API call fails to initiate the stream.
*/
export async function streamAnthropicText({
apiKey,
modelId,
messages,
maxTokens,
temperature,
baseUrl
}) {
log('debug', `Streaming Anthropic text with model: ${modelId}`);
try {
const client = getClient(apiKey, baseUrl);
log(
'debug',
'[streamAnthropicText] Parameters received by streamText:',
JSON.stringify(
{
modelId: modelId,
messages: messages,
maxTokens: maxTokens,
temperature: temperature
},
null,
2
)
);
const stream = await streamText({
model: client(modelId),
messages: messages,
maxTokens: maxTokens,
temperature: temperature
// TODO: Add other relevant parameters
});
// *** RETURN THE FULL STREAM OBJECT, NOT JUST stream.textStream ***
return stream;
} catch (error) {
log('error', `Anthropic streamText failed: ${error.message}`, error.stack);
throw error;
}
}
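// Hedged consumption sketch: because streamAnthropicText returns the full
// Vercel AI SDK stream result, callers iterate its textStream property.
// The model ID and prompt below are assumptions for illustration only.
//
//   const stream = await streamAnthropicText({
//     apiKey: process.env.ANTHROPIC_API_KEY,
//     modelId: 'claude-3-5-sonnet-20241022', // assumed model ID
//     messages: [{ role: 'user', content: 'Draft a short PRD outline.' }],
//     maxTokens: 1024
//   });
//   for await (const chunk of stream.textStream) {
//     process.stdout.write(chunk);
//   }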
/**
* Generates a structured object using an Anthropic model.
* NOTE: Anthropic's tool/function calling support might have limitations
* compared to OpenAI, especially regarding complex schemas or enforcement.
* The Vercel AI SDK attempts to abstract this.
*
* @param {object} params - Parameters for object generation.
* @param {string} params.apiKey - The Anthropic API key.
* @param {string} params.modelId - The specific Anthropic model ID.
* @param {Array<object>} params.messages - The messages array.
* @param {import('zod').ZodSchema} params.schema - The Zod schema for the object.
* @param {string} params.objectName - A name for the object/tool.
* @param {number} [params.maxTokens] - Maximum tokens for the response.
* @param {number} [params.temperature] - Temperature for generation.
* @param {number} [params.maxRetries] - Max retries for validation/generation.
* @param {string} [params.baseUrl] - The base URL for the Anthropic API.
* @returns {Promise<object>} The generated object matching the schema and usage.
* @throws {Error} If generation or validation fails.
*/
export async function generateAnthropicObject({
apiKey,
modelId,
messages,
schema,
objectName = 'generated_object',
maxTokens,
temperature,
maxRetries = 3,
baseUrl
}) {
log(
'debug',
`Generating Anthropic object ('${objectName}') with model: ${modelId}`
);
try {
const client = getClient(apiKey, baseUrl);
log(
'debug',
`Using maxTokens: ${maxTokens}, temperature: ${temperature}, model: ${modelId}`
);
const result = await generateObject({
model: client(modelId),
mode: 'tool',
schema: schema,
messages: messages,
tool: {
name: objectName,
description: `Generate a ${objectName} based on the prompt.`
},
maxTokens: maxTokens,
temperature: temperature,
maxRetries: maxRetries
});
log(
'debug',
`Anthropic generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
);
// Return both object and usage
return {
object: result.object,
usage: {
inputTokens: result.usage.promptTokens,
outputTokens: result.usage.completionTokens
}
};
} catch (error) {
log(
'error',
`Anthropic generateObject ('${objectName}') failed: ${error.message}`
);
throw error;
}
}
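// Hedged usage sketch for generateAnthropicObject. Assumes `z` is imported
// from 'zod' at the call site; the schema, model ID, and prompt are
// illustrative only.
//
//   const subtaskSchema = z.object({
//     title: z.string(),
//     description: z.string()
//   });
//   const { object, usage } = await generateAnthropicObject({
//     apiKey: process.env.ANTHROPIC_API_KEY,
//     modelId: 'claude-3-5-sonnet-20241022', // assumed model ID
//     messages: [{ role: 'user', content: 'Propose one subtask for the CI cleanup.' }],
//     schema: subtaskSchema,
//     objectName: 'subtask'
//   });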

Some files were not shown because too many files have changed in this diff.