Compare commits

2 Commits: v0.16.0-rc ... feat/issue

| Author | SHA1 | Date |
|---|---|---|
|  | 2b9bb8b94f |  |
|  | 6f27399df8 |  |

@@ -1,5 +0,0 @@
---
'task-master-ai': minor
---

Add AWS bedrock support

@@ -1,13 +0,0 @@
---
'task-master-ai': minor
---

# Add Google Vertex AI Provider Integration

- Implemented `VertexAIProvider` class extending BaseAIProvider
- Added authentication and configuration handling for Vertex AI
- Updated configuration manager with Vertex-specific getters
- Modified AI services unified system to integrate the provider
- Added documentation for Vertex AI setup and configuration
- Updated environment variable examples for Vertex AI support
- Implemented specialized error handling for Vertex-specific issues

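The changeset above names the integration points but shows no code. As a rough TypeScript sketch of the shape such a provider might take (the `BaseAIProvider` contract, method names, and config fields here are assumptions for illustration, not the actual task-master-ai source):

```typescript
// Hypothetical sketch only; the real BaseAIProvider contract is not shown in this diff.
interface GenerateOptions {
  prompt: string;
  maxTokens?: number;
  temperature?: number;
}

abstract class BaseAIProvider {
  abstract readonly name: string;
  abstract generateText(opts: GenerateOptions): Promise<string>;
}

class VertexAIProvider extends BaseAIProvider {
  readonly name = 'vertex';

  constructor(
    private projectId: string, // e.g. from VERTEX_PROJECT_ID
    private location: string // e.g. from VERTEX_LOCATION
  ) {
    super();
    // Vertex-specific error handling, as the changeset describes.
    if (!projectId) {
      throw new Error('VertexAIProvider: a GCP project ID is required');
    }
  }

  async generateText(opts: GenerateOptions): Promise<string> {
    // A real implementation would call the Vertex AI API here, authenticating
    // via GOOGLE_APPLICATION_CREDENTIALS or an API key.
    throw new Error(`not implemented in this sketch (${this.location})`);
  }
}
```
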
@@ -1,5 +0,0 @@
---
'task-master-ai': minor
---

Add support for Azure

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Increased minimum required node version to > 18 (was > 14)

@@ -1,5 +0,0 @@
---
'task-master-ai': minor
---

Renamed baseUrl to baseURL

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Fix max_tokens error when trying to use claude-sonnet-4 and claude-opus-4

@@ -1,31 +0,0 @@
---
"task-master-ai": minor
---

Consolidate Task Master files into unified .taskmaster directory structure

This release introduces a new consolidated directory structure that organizes all Task Master files under a single `.taskmaster/` directory for better project organization and cleaner workspace management.

**New Directory Structure:**

- `.taskmaster/tasks/` - Task files (previously `tasks/`)
- `.taskmaster/docs/` - Documentation including PRD files (previously `scripts/`)
- `.taskmaster/reports/` - Complexity analysis reports (previously `scripts/`)
- `.taskmaster/templates/` - Template files like example PRD
- `.taskmaster/config.json` - Configuration (previously `.taskmasterconfig`)

**Migration & Backward Compatibility:**

- Existing projects continue to work with legacy file locations
- New projects use the consolidated structure automatically
- Run `task-master migrate` to move existing projects to the new structure
- All CLI commands and MCP tools automatically detect and use appropriate file locations

**Benefits:**

- Cleaner project root with Task Master files organized in one location
- Reduced file scatter across multiple directories
- Improved project navigation and maintenance
- Consistent file organization across all Task Master projects

This change maintains full backward compatibility while providing a migration path to the improved structure.

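The "automatically detect and use appropriate file locations" bullet implies a resolution step that prefers the consolidated path and falls back to the legacy one. A minimal sketch, assuming a helper along these lines (the name and logic are illustrative, not the actual migration code):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Hypothetical helper illustrating the backward-compatible lookup the
// changeset describes; the real resolution logic is not shown in this diff.
function resolveConfigPath(projectRoot: string): string {
  const consolidated = path.join(projectRoot, '.taskmaster', 'config.json');
  const legacy = path.join(projectRoot, '.taskmasterconfig');

  // Prefer the new consolidated location, fall back to the legacy file.
  if (fs.existsSync(consolidated)) return consolidated;
  if (fs.existsSync(legacy)) return legacy;

  // New projects default to the consolidated structure.
  return consolidated;
}
```
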
@@ -104,7 +104,7 @@ Task Master offers two primary ways to interact:

Taskmaster configuration is managed through two main mechanisms:

-1. **`.taskmasterconfig` File (Primary):**
+1. **`.taskmaster/config.json` File (Primary):**
    * Located in the project root directory.
    * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
    * **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing.

@@ -36,7 +36,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
-* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in scripts/example_prd.txt.
+* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.

### 2. Parse PRD (`parse_prd`)

@@ -50,7 +50,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
-* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `scripts/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
+* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.

---

@@ -77,10 +77,10 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
-* **Notes:** Configuration is stored in `.taskmasterconfig` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
+* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them.
* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80.
-* **Warning:** DO NOT MANUALLY EDIT THE .taskmasterconfig FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
+* **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.

---

@@ -348,7 +348,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master analyze-complexity [options]`
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
* **Key Parameters/Options:**
-    * `output`: `Where to save the complexity analysis report (default: 'scripts/task-complexity-report.json').` (CLI: `-o, --output <file>`)
+    * `output`: `Where to save the complexity analysis report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-o, --output <file>`)
    * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
    * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

@@ -361,7 +361,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master complexity-report [options]`
* **Description:** `Display the task complexity analysis report in a readable format.`
* **Key Parameters/Options:**
-    * `file`: `Path to the complexity report (default: 'scripts/task-complexity-report.json').` (CLI: `-f, --file <file>`)
+    * `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.

---

@@ -382,7 +382,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov

## Environment Variables Configuration (Updated)

-Taskmaster primarily uses the **`.taskmasterconfig`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
+Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.

Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:

@@ -396,11 +396,11 @@ Environment variables are used **only** for sensitive API keys related to AI pro
* `OPENROUTER_API_KEY`
* `XAI_API_KEY`
* `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
-* **Endpoints (Optional/Provider Specific inside .taskmasterconfig):**
+* **Endpoints (Optional/Provider Specific inside .taskmaster/config.json):**
    * `AZURE_OPENAI_ENDPOINT`
    * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)

-**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmasterconfig` via `task-master models` command or `models` MCP tool.
+**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via `task-master models` command or `models` MCP tool.

---

@@ -7,9 +7,3 @@ MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
-
-# Google Vertex AI Configuration
-VERTEX_PROJECT_ID=your-gcp-project-id
-VERTEX_LOCATION=us-central1
-# Optional: Path to service account credentials JSON file (alternative to API key)
-GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json

.prettierignore (new file, 7 lines)
@@ -0,0 +1,7 @@
# Ignore artifacts:
build
coverage
.changeset
tasks
package-lock.json
tests/fixture/*.json

.prettierrc (new file, 11 lines)
@@ -0,0 +1,11 @@
{
	"printWidth": 80,
	"tabWidth": 2,
	"useTabs": true,
	"semi": true,
	"singleQuote": true,
	"trailingComma": "none",
	"bracketSpacing": true,
	"arrowParens": "always",
	"endOfLine": "lf"
}

@@ -1,32 +0,0 @@
{
	"models": {
		"main": {
			"provider": "anthropic",
			"modelId": "claude-sonnet-4-20250514",
			"maxTokens": 50000,
			"temperature": 0.2
		},
		"research": {
			"provider": "perplexity",
			"modelId": "sonar-pro",
			"maxTokens": 8700,
			"temperature": 0.1
		},
		"fallback": {
			"provider": "anthropic",
			"modelId": "claude-3-7-sonnet-20250219",
			"maxTokens": 128000,
			"temperature": 0.2
		}
	},
	"global": {
		"logLevel": "info",
		"debug": false,
		"defaultSubtasks": 5,
		"defaultPriority": "medium",
		"projectName": "Taskmaster",
		"ollamaBaseURL": "http://localhost:11434/api",
		"userId": "1234567890",
		"azureBaseURL": "https://your-endpoint.azure.com/"
	}
}

@@ -1,528 +0,0 @@

# Claude Task Master - Product Requirements Document

<PRD>
# Technical Architecture

## System Components
1. **Task Management Core**
   - Tasks.json file structure (single source of truth)
   - Task model with dependencies, priorities, and metadata
   - Task state management system
   - Task file generation subsystem

2. **AI Integration Layer**
   - Anthropic Claude API integration
   - Perplexity API integration (optional)
   - Prompt engineering components
   - Response parsing and processing

3. **Command Line Interface**
   - Command parsing and execution
   - Interactive user input handling
   - Display and formatting utilities
   - Status reporting and feedback system

4. **Cursor AI Integration**
   - Cursor rules documentation
   - Agent interaction patterns
   - Workflow guideline specifications

## Data Models

### Task Model
```json
{
  "id": 1,
  "title": "Task Title",
  "description": "Brief task description",
  "status": "pending|done|deferred",
  "dependencies": [0],
  "priority": "high|medium|low",
  "details": "Detailed implementation instructions",
  "testStrategy": "Verification approach details",
  "subtasks": [
    {
      "id": 1,
      "title": "Subtask Title",
      "description": "Subtask description",
      "status": "pending|done|deferred",
      "dependencies": [],
      "acceptanceCriteria": "Verification criteria"
    }
  ]
}
```

### Tasks Collection Model
```json
{
  "meta": {
    "projectName": "Project Name",
    "version": "1.0.0",
    "prdSource": "path/to/prd.txt",
    "createdAt": "ISO-8601 timestamp",
    "updatedAt": "ISO-8601 timestamp"
  },
  "tasks": [
    // Array of Task objects
  ]
}
```

### Task File Format
```
# Task ID: <id>
# Title: <title>
# Status: <status>
# Dependencies: <comma-separated list of dependency IDs>
# Priority: <priority>
# Description: <brief description>
# Details:
<detailed implementation notes>

# Test Strategy:
<verification approach>

# Subtasks:
1. <subtask title> - <subtask description>
```

## APIs and Integrations
1. **Anthropic Claude API**
   - Authentication via API key
   - Prompt construction and streaming
   - Response parsing and extraction
   - Error handling and retries

2. **Perplexity API (via OpenAI client)**
   - Authentication via API key
   - Research-oriented prompt construction
   - Enhanced contextual response handling
   - Fallback mechanisms to Claude

3. **File System API**
   - Reading/writing tasks.json
   - Managing individual task files
   - Command execution logging
   - Debug logging system

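As a concrete illustration of the first integration listed above, a minimal sketch using the `@anthropic-ai/sdk` package; the model, max-token, and temperature defaults mirror the Anthropic API configuration in the appendix, and everything else is an assumption rather than the actual Taskmaster code:

```typescript
import Anthropic from '@anthropic-ai/sdk';

// Reads ANTHROPIC_API_KEY, with MODEL/MAX_TOKENS/TEMPERATURE overrides
// as specified in the appendix of this PRD.
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function callClaude(prompt: string): Promise<string> {
  const response = await anthropic.messages.create({
    model: process.env.MODEL ?? 'claude-3-7-sonnet-20250219',
    max_tokens: Number(process.env.MAX_TOKENS ?? 4000),
    temperature: Number(process.env.TEMPERATURE ?? 0.7),
    messages: [{ role: 'user', content: prompt }],
  });
  // Join the text blocks of the response into a single string.
  return response.content
    .map((block) => (block.type === 'text' ? block.text : ''))
    .join('');
}
```
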
## Infrastructure Requirements
1. **Node.js Runtime**
   - Version 14.0.0 or higher
   - ES Module support
   - File system access rights
   - Command execution capabilities

2. **Configuration Management**
   - Environment variable handling
   - .env file support
   - Configuration validation
   - Sensible defaults with overrides

3. **Development Environment**
   - Git repository
   - NPM package management
   - Cursor editor integration
   - Command-line terminal access

# Development Roadmap

## Phase 1: Core Task Management System
1. **Task Data Structure**
   - Design and implement the tasks.json structure
   - Create task model validation
   - Implement basic task operations (create, read, update)
   - Develop file system interactions

2. **Command Line Interface Foundation**
   - Implement command parsing with Commander.js
   - Create help documentation
   - Implement colorized console output
   - Add logging system with configurable levels

3. **Basic Task Operations**
   - Implement task listing functionality
   - Create task status update capability
   - Add dependency tracking
   - Implement priority management

4. **Task File Generation**
   - Create task file templates
   - Implement generation from tasks.json
   - Add bi-directional synchronization
   - Implement proper file naming and organization

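Phase 1 above calls for Commander.js command parsing (see also the CLI specification in the appendix). A minimal sketch of that wiring, with stubbed handlers and only two commands, as an illustration under those assumptions rather than the actual scripts/dev.js:

```typescript
import { program } from 'commander';

// Global options follow the CLI specification in the appendix.
program
  .name('dev')
  .option('--file <file>', 'alternative tasks.json file', 'tasks.json')
  .option('--debug', 'increase output verbosity');

program
  .command('list')
  .description('List tasks from tasks.json')
  .option('--status <status>', 'filter by status')
  .action((opts) => {
    console.log(`listing tasks with status: ${opts.status ?? 'any'}`);
  });

program
  .command('set-status')
  .requiredOption('--id <id>', 'task ID, e.g. 3 or 3.1')
  .requiredOption('--status <status>', 'pending|done|deferred')
  .action((opts) => {
    console.log(`setting task ${opts.id} to ${opts.status}`);
  });

program.parse();
```
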
## Phase 2: AI Integration
1. **Claude API Integration**
   - Implement API authentication
   - Create prompt templates for PRD parsing
   - Design response handlers
   - Add error management and retries

2. **PRD Parsing System**
   - Implement PRD file reading
   - Create PRD to task conversion logic
   - Add intelligent dependency inference
   - Implement priority assignment logic

3. **Task Expansion With Claude**
   - Create subtask generation prompts
   - Implement subtask creation workflow
   - Add context-aware expansion capabilities
   - Implement parent-child relationship management

4. **Implementation Drift Handling**
   - Add capability to update future tasks
   - Implement task rewriting based on new context
   - Create dependency chain updates
   - Preserve completed work while updating future tasks

## Phase 3: Advanced Features
1. **Perplexity Integration**
   - Implement Perplexity API authentication
   - Create research-oriented prompts
   - Add fallback to Claude when unavailable
   - Implement response quality comparison logic

2. **Research-Backed Subtask Generation**
   - Create specialized research prompts
   - Implement context enrichment
   - Add domain-specific knowledge incorporation
   - Create more detailed subtask generation

3. **Batch Operations**
   - Implement multi-task status updates
   - Add bulk subtask generation
   - Create task filtering and querying
   - Implement advanced dependency management

4. **Project Initialization**
   - Create project templating system
   - Implement interactive setup
   - Add environment configuration
   - Create documentation generation

## Phase 4: Cursor AI Integration
1. **Cursor Rules Implementation**
   - Create dev_workflow.mdc documentation
   - Implement cursor_rules.mdc
   - Add self_improve.mdc
   - Design rule integration documentation

2. **Agent Workflow Guidelines**
   - Document task discovery workflow
   - Create task selection guidelines
   - Implement implementation guidance
   - Add verification procedures

3. **Agent Command Integration**
   - Document command syntax for agents
   - Create example interactions
   - Implement agent response patterns
   - Add context management for agents

4. **User Documentation**
   - Create detailed README
   - Add scripts documentation
   - Implement example workflows
   - Create troubleshooting guides

# Logical Dependency Chain

## Foundation Layer
1. **Task Data Structure**
   - Must be implemented first as all other functionality depends on this
   - Defines the core data model for the entire system
   - Establishes the single source of truth concept

2. **Command Line Interface**
   - Built on top of the task data structure
   - Provides the primary user interaction mechanism
   - Required for all subsequent operations to be accessible

3. **Basic Task Operations**
   - Depends on both task data structure and CLI
   - Provides the fundamental operations for task management
   - Enables the minimal viable workflow

## Functional Layer
4. **Task File Generation**
   - Depends on task data structure and basic operations
   - Creates the individual task files for reference
   - Enables the file-based workflow complementing tasks.json

5. **Claude API Integration**
   - Independent of most previous components but needs the task data structure
   - Provides the AI capabilities that enhance the system
   - Gateway to advanced task generation features

6. **PRD Parsing System**
   - Depends on Claude API integration and task data structure
   - Enables the initial task generation workflow
   - Creates the starting point for new projects

## Enhancement Layer
7. **Task Expansion With Claude**
   - Depends on Claude API integration and basic task operations
   - Enhances existing tasks with more detailed subtasks
   - Improves the implementation guidance

8. **Implementation Drift Handling**
   - Depends on Claude API integration and task operations
   - Addresses a key challenge in AI-driven development
   - Maintains the relevance of task planning as implementation evolves

9. **Perplexity Integration**
   - Can be developed in parallel with other features after Claude integration
   - Enhances the quality of generated content
   - Provides research-backed improvements

## Advanced Layer
10. **Research-Backed Subtask Generation**
    - Depends on Perplexity integration and task expansion
    - Provides higher quality, more contextual subtasks
    - Enhances the value of the task breakdown

11. **Batch Operations**
    - Depends on basic task operations
    - Improves efficiency for managing multiple tasks
    - Quality-of-life enhancement for larger projects

12. **Project Initialization**
    - Depends on most previous components being stable
    - Provides a smooth onboarding experience
    - Creates a complete project setup in one step

## Integration Layer
13. **Cursor Rules Implementation**
    - Can be developed in parallel after basic functionality
    - Provides the guidance for Cursor AI agent
    - Enhances the AI-driven workflow

14. **Agent Workflow Guidelines**
    - Depends on Cursor rules implementation
    - Structures how the agent interacts with the system
    - Ensures consistent agent behavior

15. **Agent Command Integration**
    - Depends on agent workflow guidelines
    - Provides specific command patterns for the agent
    - Optimizes the agent-user interaction

16. **User Documentation**
    - Should be developed alongside all features
    - Must be completed before release
    - Ensures users can effectively use the system

# Risks and Mitigations

## Technical Challenges

### API Reliability
**Risk**: Anthropic or Perplexity API could have downtime, rate limiting, or breaking changes.
**Mitigation**:
- Implement robust error handling with exponential backoff
- Add fallback mechanisms (Claude fallback for Perplexity)
- Cache important responses to reduce API dependency
- Support offline mode for critical functions

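A generic sketch of the first two mitigations (retry with exponential backoff, then fall back to a secondary provider); the helper name, delays, and retry count are illustrative, not from the Taskmaster source:

```typescript
// Retry the primary provider with exponential backoff; if it keeps failing,
// fall back to the secondary one (e.g. Claude standing in for Perplexity).
async function withBackoff<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await primary();
    } catch {
      if (attempt === maxRetries) break;
      // 1s, 2s, 4s, ... between attempts.
      const delayMs = 1000 * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return fallback();
}
```
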
### Model Output Variability
**Risk**: AI models may produce inconsistent or unexpected outputs.
**Mitigation**:
- Design robust prompt templates with strict output formatting requirements
- Implement response validation and error detection
- Add self-correction mechanisms and retries with improved prompts
- Allow manual editing of generated content

### Node.js Version Compatibility
**Risk**: Differences in Node.js versions could cause unexpected behavior.
**Mitigation**:
- Clearly document minimum Node.js version requirements
- Use transpilers if needed for compatibility
- Test across multiple Node.js versions
- Handle version-specific features gracefully

## MVP Definition

### Feature Prioritization
**Risk**: Including too many features in the MVP could delay release and adoption.
**Mitigation**:
- Define MVP as core task management + basic Claude integration
- Ensure each phase delivers a complete, usable product
- Implement feature flags for easy enabling/disabling of features
- Get early user feedback to validate feature importance

### Scope Creep
**Risk**: The project could expand beyond its original intent, becoming too complex.
**Mitigation**:
- Maintain a strict definition of what the tool is and isn't
- Focus on task management for AI-driven development
- Evaluate new features against core value proposition
- Implement extensibility rather than building every feature

### User Expectations
**Risk**: Users might expect a full project management solution rather than a task tracking system.
**Mitigation**:
- Clearly communicate the tool's purpose and limitations
- Provide integration points with existing project management tools
- Focus on the unique value of AI-driven development
- Document specific use cases and example workflows

## Resource Constraints

### Development Capacity
**Risk**: Limited development resources could delay implementation.
**Mitigation**:
- Phase implementation to deliver value incrementally
- Focus on core functionality first
- Leverage open source libraries where possible
- Design for extensibility to allow community contributions

### AI Cost Management
**Risk**: Excessive API usage could lead to high costs.
**Mitigation**:
- Implement token usage tracking and reporting
- Add configurable limits to prevent unexpected costs
- Cache responses where appropriate
- Optimize prompts for token efficiency
- Support local LLM options in the future

### Documentation Overhead
**Risk**: Complexity of the system requires extensive documentation that is time-consuming to maintain.
**Mitigation**:
- Use AI to help generate and maintain documentation
- Create self-documenting commands and features
- Implement progressive documentation (basic to advanced)
- Build help directly into the CLI

# Appendix

## AI Prompt Engineering Specifications

### PRD Parsing Prompt Structure
```
You are assisting with transforming a Product Requirements Document (PRD) into a structured set of development tasks.

Given the following PRD, create a comprehensive list of development tasks that would be needed to implement the described product.

For each task:
1. Assign a short, descriptive title
2. Write a concise description
3. Identify dependencies (which tasks must be completed before this one)
4. Assign a priority (high, medium, low)
5. Include detailed implementation notes
6. Describe a test strategy to verify completion

Structure the tasks in a logical order of implementation.

PRD:
{prd_content}
```

### Task Expansion Prompt Structure
```
You are helping to break down a development task into more manageable subtasks.

Main task:
Title: {task_title}
Description: {task_description}
Details: {task_details}

Please create {num_subtasks} specific subtasks that together would accomplish this main task.

For each subtask, provide:
1. A clear, actionable title
2. A concise description
3. Any dependencies on other subtasks
4. Specific acceptance criteria to verify completion

Additional context:
{additional_context}
```

### Research-Backed Expansion Prompt Structure
```
You are a technical researcher and developer helping to break down a software development task into detailed, well-researched subtasks.

Main task:
Title: {task_title}
Description: {task_description}
Details: {task_details}

Research the latest best practices, technologies, and implementation patterns for this type of task. Then create {num_subtasks} specific, actionable subtasks that together would accomplish the main task.

For each subtask:
1. Provide a clear, specific title
2. Write a detailed description including technical approach
3. Identify dependencies on other subtasks
4. Include specific acceptance criteria
5. Reference any relevant libraries, tools, or resources that should be used

Consider security, performance, maintainability, and user experience in your recommendations.
```

## Task File System Specification

### Directory Structure
```
/
├── .cursor/
│   └── rules/
│       ├── dev_workflow.mdc
│       ├── cursor_rules.mdc
│       └── self_improve.mdc
├── scripts/
│   ├── dev.js
│   └── README.md
├── tasks/
│   ├── task_001.txt
│   ├── task_002.txt
│   └── ...
├── .env
├── .env.example
├── .gitignore
├── package.json
├── README.md
└── tasks.json
```

### Task ID Specification
- Main tasks: Sequential integers (1, 2, 3, ...)
- Subtasks: Parent ID + dot + sequential integer (1.1, 1.2, 2.1, ...)
- ID references: Used in dependencies, command parameters
- ID ordering: Implies suggested implementation order

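A small sketch of parsing this ID scheme; the helper is hypothetical, not taken from the Taskmaster source:

```typescript
// "3" -> { task: 3 }; "1.2" -> { task: 1, subtask: 2 }.
function parseTaskId(id: string): { task: number; subtask?: number } {
  const [task, subtask] = id.split('.').map(Number);
  if (!Number.isInteger(task) || (subtask !== undefined && !Number.isInteger(subtask))) {
    throw new Error(`Invalid task ID: ${id}`);
  }
  return subtask === undefined ? { task } : { task, subtask };
}
```
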
## Command-Line Interface Specification

### Global Options
- `--help`: Display help information
- `--version`: Display version information
- `--file=<file>`: Specify an alternative tasks.json file
- `--quiet`: Reduce output verbosity
- `--debug`: Increase output verbosity
- `--json`: Output in JSON format (for programmatic use)

### Command Structure
- `node scripts/dev.js <command> [options]`
- All commands operate on tasks.json by default
- Commands follow consistent parameter naming
- Common parameter styles: `--id=<id>`, `--status=<status>`, `--prompt="<text>"`
- Boolean flags: `--all`, `--force`, `--with-subtasks`

## API Integration Specifications

### Anthropic API Configuration
- Authentication: ANTHROPIC_API_KEY environment variable
- Model selection: MODEL environment variable
- Default model: claude-3-7-sonnet-20250219
- Maximum tokens: MAX_TOKENS environment variable (default: 4000)
- Temperature: TEMPERATURE environment variable (default: 0.7)

### Perplexity API Configuration
- Authentication: PERPLEXITY_API_KEY environment variable
- Model selection: PERPLEXITY_MODEL environment variable
- Default model: sonar-medium-online
- Connection: Via OpenAI client
- Fallback: Use Claude if Perplexity unavailable
</PRD>

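Pairing this with the Anthropic configuration above, the "via OpenAI client" connection might look like the sketch below. The env variable names and the default model come from this spec; the base URL is an assumption about Perplexity's OpenAI-compatible endpoint, and the fallback to Claude would be layered on top (e.g. with a backoff helper like the earlier sketch):

```typescript
import OpenAI from 'openai';

// Perplexity exposes an OpenAI-compatible API, so the standard client is
// reused with a different base URL (assumed value, not stated in the PRD).
const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: 'https://api.perplexity.ai',
});

async function research(prompt: string): Promise<string> {
  const completion = await perplexity.chat.completions.create({
    model: process.env.PERPLEXITY_MODEL ?? 'sonar-medium-online',
    messages: [{ role: 'user', content: prompt }],
  });
  return completion.choices[0]?.message?.content ?? '';
}
```
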
@@ -1,357 +0,0 @@
{
  "meta": {
    "generatedAt": "2025-05-22T05:48:33.026Z",
    "tasksAnalyzed": 6,
    "totalTasks": 88,
    "analysisCount": 43,
    "thresholdScore": 5,
    "projectName": "Taskmaster",
    "usedResearch": true
  },
  "complexityAnalysis": [
    {
      "taskId": 24,
      "taskTitle": "Implement AI-Powered Test Generation Command",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "Break down the implementation of the AI-powered test generation command into detailed subtasks covering: command structure setup, AI prompt engineering, test file generation logic, integration with Claude API, and comprehensive error handling.",
      "reasoning": "This task involves complex integration with an AI service (Claude), requires sophisticated prompt engineering, and needs to generate structured code files. The existing 3 subtasks are a good start but could be expanded to include more detailed steps for AI integration, error handling, and test file formatting."
    },
    {
      "taskId": 26,
      "taskTitle": "Implement Context Foundation for AI Operations",
      "complexityScore": 6,
      "recommendedSubtasks": 4,
      "expansionPrompt": "The current 4 subtasks for implementing the context foundation appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or integration with existing systems.",
      "reasoning": "This task involves creating a foundation for context integration with several well-defined components. The existing 4 subtasks cover the main implementation areas (context-file flag, cursor rules integration, context extraction utility, and command handler updates). The complexity is moderate as it requires careful integration with existing systems but has clear requirements."
    },
    {
      "taskId": 27,
      "taskTitle": "Implement Context Enhancements for AI Operations",
      "complexityScore": 7,
      "recommendedSubtasks": 4,
      "expansionPrompt": "The current 4 subtasks for implementing context enhancements appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or performance optimization.",
      "reasoning": "This task builds upon the foundation from Task #26 and adds more sophisticated context handling features. The 4 existing subtasks cover the main implementation areas (code context extraction, task history context, PRD context integration, and context formatting). The complexity is higher than the foundation task due to the need for intelligent context selection and optimization."
    },
    {
      "taskId": 28,
      "taskTitle": "Implement Advanced ContextManager System",
      "complexityScore": 8,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing the advanced ContextManager system appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or backward compatibility with previous context implementations.",
      "reasoning": "This task represents the most complex phase of the context implementation, requiring a sophisticated class design, optimization algorithms, and integration with multiple systems. The 5 existing subtasks cover the core implementation areas, but the complexity is high due to the need for intelligent context prioritization, token management, and performance monitoring."
    },
    {
      "taskId": 40,
      "taskTitle": "Implement 'plan' Command for Task Implementation Planning",
      "complexityScore": 5,
      "recommendedSubtasks": 4,
      "expansionPrompt": "The current 4 subtasks for implementing the 'plan' command appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or integration with existing task management workflows.",
      "reasoning": "This task involves creating a new command that leverages AI to generate implementation plans. The existing 4 subtasks cover the main implementation areas (retrieving task content, generating plans with AI, formatting in XML, and error handling). The complexity is moderate as it builds on existing patterns for task updates but requires careful AI integration."
    },
    {
      "taskId": 41,
      "taskTitle": "Implement Visual Task Dependency Graph in Terminal",
      "complexityScore": 8,
      "recommendedSubtasks": 10,
      "expansionPrompt": "The current 10 subtasks for implementing the visual task dependency graph appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large graphs or additional visualization options.",
      "reasoning": "This task involves creating a sophisticated visualization system for terminal display, which is inherently complex due to layout algorithms, ASCII/Unicode rendering, and handling complex dependency relationships. The 10 existing subtasks cover all major aspects of implementation, from CLI interface to accessibility features."
    },
    {
      "taskId": 42,
      "taskTitle": "Implement MCP-to-MCP Communication Protocol",
      "complexityScore": 9,
      "recommendedSubtasks": 8,
      "expansionPrompt": "The current 8 subtasks for implementing the MCP-to-MCP communication protocol appear well-structured. Consider if any additional subtasks are needed for security hardening, performance optimization, or comprehensive documentation.",
      "reasoning": "This task involves designing and implementing a complex communication protocol between different MCP tools and servers. It requires sophisticated adapter patterns, client-server architecture, and handling of multiple operational modes. The complexity is very high due to the need for standardization, security, and backward compatibility."
    },
    {
      "taskId": 44,
      "taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
      "complexityScore": 8,
      "recommendedSubtasks": 7,
      "expansionPrompt": "The current 7 subtasks for implementing task automation with webhooks appear comprehensive. Consider if any additional subtasks are needed for security testing, rate limiting implementation, or webhook monitoring tools.",
      "reasoning": "This task involves creating a sophisticated event system with webhooks for integration with external services. The complexity is high due to the need for secure authentication, reliable delivery mechanisms, and handling of various webhook formats and protocols. The existing subtasks cover the main implementation areas but security and monitoring could be emphasized more."
    },
    {
      "taskId": 45,
      "taskTitle": "Implement GitHub Issue Import Feature",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing the GitHub issue import feature appear well-structured. Consider if any additional subtasks are needed for handling GitHub API rate limiting, caching, or supporting additional issue metadata.",
      "reasoning": "This task involves integrating with the GitHub API to import issues as tasks. The complexity is moderate as it requires API authentication, data mapping, and error handling. The existing 5 subtasks cover the main implementation areas from design to end-to-end implementation."
    },
    {
      "taskId": 46,
      "taskTitle": "Implement ICE Analysis Command for Task Prioritization",
      "complexityScore": 7,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing the ICE analysis command appear comprehensive. Consider if any additional subtasks are needed for visualization of ICE scores or integration with other prioritization methods.",
      "reasoning": "This task involves creating an AI-powered analysis system for task prioritization using the ICE methodology. The complexity is high due to the need for sophisticated scoring algorithms, AI integration, and report generation. The existing subtasks cover the main implementation areas from algorithm design to integration with existing systems."
    },
    {
      "taskId": 47,
      "taskTitle": "Enhance Task Suggestion Actions Card Workflow",
      "complexityScore": 6,
      "recommendedSubtasks": 6,
      "expansionPrompt": "The current 6 subtasks for enhancing the task suggestion actions card workflow appear well-structured. Consider if any additional subtasks are needed for user testing, accessibility improvements, or performance optimization.",
      "reasoning": "This task involves redesigning the UI workflow for task expansion and management. The complexity is moderate as it requires careful UX design and state management but builds on existing components. The 6 existing subtasks cover the main implementation areas from design to testing."
    },
    {
      "taskId": 48,
      "taskTitle": "Refactor Prompts into Centralized Structure",
      "complexityScore": 4,
      "recommendedSubtasks": 3,
      "expansionPrompt": "The current 3 subtasks for refactoring prompts into a centralized structure appear appropriate. Consider if any additional subtasks are needed for prompt versioning, documentation, or testing.",
      "reasoning": "This task involves a straightforward refactoring to improve code organization. The complexity is relatively low as it primarily involves moving code rather than creating new functionality. The 3 existing subtasks cover the main implementation areas from directory structure to integration."
    },
    {
      "taskId": 49,
      "taskTitle": "Implement Code Quality Analysis Command",
      "complexityScore": 8,
      "recommendedSubtasks": 6,
      "expansionPrompt": "The current 6 subtasks for implementing the code quality analysis command appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large codebases or integration with existing code quality tools.",
      "reasoning": "This task involves creating a sophisticated code analysis system with pattern recognition, best practice verification, and AI-powered recommendations. The complexity is high due to the need for code parsing, complex analysis algorithms, and integration with AI services. The existing subtasks cover the main implementation areas from algorithm design to user interface."
    },
    {
      "taskId": 50,
      "taskTitle": "Implement Test Coverage Tracking System by Task",
      "complexityScore": 9,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing the test coverage tracking system appear well-structured. Consider if any additional subtasks are needed for integration with CI/CD systems, performance optimization, or visualization tools.",
      "reasoning": "This task involves creating a complex system that maps test coverage to specific tasks and subtasks. The complexity is very high due to the need for sophisticated data structures, integration with coverage tools, and AI-powered test generation. The existing subtasks are comprehensive and cover the main implementation areas from data structure design to AI integration."
    },
    {
      "taskId": 51,
      "taskTitle": "Implement Perplexity Research Command",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing the Perplexity research command appear comprehensive. Consider if any additional subtasks are needed for caching optimization, result formatting, or integration with other research tools.",
      "reasoning": "This task involves creating a new command that integrates with the Perplexity AI API for research. The complexity is moderate as it requires API integration, context extraction, and result formatting. The 5 existing subtasks cover the main implementation areas from API client to caching system."
    },
    {
      "taskId": 52,
      "taskTitle": "Implement Task Suggestion Command for CLI",
      "complexityScore": 6,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing the task suggestion command appear well-structured. Consider if any additional subtasks are needed for suggestion quality evaluation, user feedback collection, or integration with existing task workflows.",
      "reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions using AI. The complexity is moderate as it requires AI integration, context collection, and interactive CLI interfaces. The existing subtasks cover the main implementation areas from data collection to user interface."
    },
    {
      "taskId": 53,
      "taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
      "complexityScore": 6,
      "recommendedSubtasks": 6,
      "expansionPrompt": "The current 6 subtasks for implementing the subtask suggestion feature appear comprehensive. Consider if any additional subtasks are needed for suggestion quality metrics, user feedback collection, or performance optimization.",
      "reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for parent tasks. The complexity is moderate as it builds on existing task management systems but requires sophisticated AI integration and context analysis. The 6 existing subtasks cover the main implementation areas from validation to testing."
    },
    {
      "taskId": 55,
      "taskTitle": "Implement Positional Arguments Support for CLI Commands",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing positional arguments support appear well-structured. Consider if any additional subtasks are needed for backward compatibility testing, documentation updates, or user experience improvements.",
      "reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. The complexity is moderate as it requires careful handling of different argument styles and edge cases. The 5 existing subtasks cover the main implementation areas from analysis to documentation."
    },
    {
      "taskId": 57,
      "taskTitle": "Enhance Task-Master CLI User Experience and Interface",
      "complexityScore": 7,
      "recommendedSubtasks": 6,
      "expansionPrompt": "The current 6 subtasks for enhancing the CLI user experience appear comprehensive. Consider if any additional subtasks are needed for accessibility testing, internationalization, or performance optimization.",
      "reasoning": "This task involves a significant overhaul of the CLI interface to improve user experience. The complexity is high due to the breadth of changes (logging, visual elements, interactive components, etc.) and the need for consistent design across all commands. The 6 existing subtasks cover the main implementation areas from log management to help systems."
    },
    {
      "taskId": 60,
      "taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
      "complexityScore": 8,
      "recommendedSubtasks": 7,
      "expansionPrompt": "The current 7 subtasks for implementing the mentor system appear well-structured. Consider if any additional subtasks are needed for mentor personality consistency, discussion quality evaluation, or performance optimization with multiple mentors.",
      "reasoning": "This task involves creating a sophisticated mentor simulation system with round-table discussions. The complexity is high due to the need for personality simulation, complex LLM integration, and structured discussion management. The 7 existing subtasks cover the main implementation areas from architecture to testing."
    },
    {
      "taskId": 62,
      "taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
      "complexityScore": 4,
      "recommendedSubtasks": 8,
      "expansionPrompt": "The current 8 subtasks for implementing the --simple flag appear comprehensive. Consider if any additional subtasks are needed for user experience testing or documentation updates.",
      "reasoning": "This task involves adding a simple flag option to bypass AI processing for updates. The complexity is relatively low as it primarily involves modifying existing command handlers and adding a flag. The 8 existing subtasks are very detailed and cover all aspects of implementation from command parsing to testing."
    },
    {
      "taskId": 63,
      "taskTitle": "Add pnpm Support for the Taskmaster Package",
      "complexityScore": 5,
      "recommendedSubtasks": 8,
      "expansionPrompt": "The current 8 subtasks for adding pnpm support appear comprehensive. Consider if any additional subtasks are needed for CI/CD integration, performance comparison, or documentation updates.",
      "reasoning": "This task involves ensuring the package works correctly with pnpm as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 8 existing subtasks cover all major aspects from documentation to binary verification."
    },
    {
      "taskId": 64,
      "taskTitle": "Add Yarn Support for Taskmaster Installation",
      "complexityScore": 5,
      "recommendedSubtasks": 9,
      "expansionPrompt": "The current 9 subtasks for adding Yarn support appear comprehensive. Consider if any additional subtasks are needed for performance testing, CI/CD integration, or compatibility with different Yarn versions.",
      "reasoning": "This task involves ensuring the package works correctly with Yarn as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 9 existing subtasks are very detailed and cover all aspects from configuration to testing."
    },
    {
      "taskId": 65,
      "taskTitle": "Add Bun Support for Taskmaster Installation",
      "complexityScore": 6,
      "recommendedSubtasks": 6,
      "expansionPrompt": "The current 6 subtasks for adding Bun support appear well-structured. Consider if any additional subtasks are needed for handling Bun-specific issues, performance testing, or documentation updates.",
      "reasoning": "This task involves adding support for the newer Bun package manager. The complexity is slightly higher than the other package manager tasks due to Bun's differences from Node.js and potential compatibility issues. The 6 existing subtasks cover the main implementation areas from research to documentation."
    },
    {
      "taskId": 67,
      "taskTitle": "Add CLI JSON output and Cursor keybindings integration",
      "complexityScore": 5,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing JSON output and Cursor keybindings appear well-structured. Consider if any additional subtasks are needed for testing across different operating systems, documentation updates, or user experience improvements.",
      "reasoning": "This task involves two distinct features: adding JSON output to CLI commands and creating a keybindings installation command. The complexity is moderate as it requires careful handling of different output formats and OS-specific file paths. The 5 existing subtasks cover the main implementation areas for both features."
    },
    {
      "taskId": 68,
      "taskTitle": "Ability to create tasks without parsing PRD",
      "complexityScore": 3,
      "recommendedSubtasks": 2,
      "expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
      "reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
    },
    {
      "taskId": 72,
      "taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
      "complexityScore": 7,
      "recommendedSubtasks": 6,
      "expansionPrompt": "The current 6 subtasks for implementing PDF generation appear comprehensive. Consider if any additional subtasks are needed for handling large projects, additional visualization options, or integration with existing reporting tools.",
      "reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependency visualization. The complexity is high due to the need for PDF generation, data collection, and visualization integration. The 6 existing subtasks cover the main implementation areas from library selection to export options."
    },
    {
      "taskId": 75,
      "taskTitle": "Integrate Google Search Grounding for Research Role",
      "complexityScore": 5,
      "recommendedSubtasks": 4,
      "expansionPrompt": "The current 4 subtasks for integrating Google Search Grounding appear well-structured. Consider if any additional subtasks are needed for testing with different query types, error handling, or performance optimization.",
      "reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for research roles. The complexity is moderate as it requires careful integration with the existing AI service architecture and conditional logic. The 4 existing subtasks cover the main implementation areas from service layer modification to testing."
    },
    {
      "taskId": 76,
      "taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
      "complexityScore": 8,
      "recommendedSubtasks": 7,
      "expansionPrompt": "The current 7 subtasks for developing the E2E test framework appear comprehensive. Consider if any additional subtasks are needed for test result reporting, CI/CD integration, or performance benchmarking.",
      "reasoning": "This task involves creating a sophisticated end-to-end testing framework for the MCP server. The complexity is high due to the need for subprocess management, protocol handling, and robust test case definition. The 7 existing subtasks cover the main implementation areas from architecture to documentation."
    },
    {
      "taskId": 77,
      "taskTitle": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
      "complexityScore": 7,
      "recommendedSubtasks": 18,
      "expansionPrompt": "The current 18 subtasks for implementing AI usage telemetry appear very comprehensive. Consider if any additional subtasks are needed for security hardening, privacy compliance, or user feedback collection.",
      "reasoning": "This task involves creating a telemetry system to track AI usage metrics. The complexity is high due to the need for secure data transmission, comprehensive data collection, and integration across multiple commands. The 18 existing subtasks are extremely detailed and cover all aspects of implementation from core utility to provider-specific updates."
    },
    {
      "taskId": 80,
      "taskTitle": "Implement Unique User ID Generation and Storage During Installation",
      "complexityScore": 4,
      "recommendedSubtasks": 5,
      "expansionPrompt": "The current 5 subtasks for implementing unique user ID generation appear well-structured. Consider if any additional subtasks are needed for privacy compliance, security auditing, or integration with the telemetry system.",
      "reasoning": "This task involves generating and storing a unique user identifier during installation. The complexity is relatively low as it primarily involves UUID generation and configuration file management. The 5 existing subtasks cover the main implementation areas from script structure to documentation."
    },
    {
      "taskId": 81,
      "taskTitle": "Task #81: Implement Comprehensive Local Telemetry System with Future Server Integration Capability",
      "complexityScore": 8,
      "recommendedSubtasks": 6,
      "expansionPrompt": "The current 6 subtasks for implementing the comprehensive local telemetry system appear well-structured. Consider if any additional subtasks are needed for data migration, storage optimization, or visualization tools.",
      "reasoning": "This task involves expanding the telemetry system to capture additional metrics and implement local storage with future server integration capability. The complexity is high due to the breadth of data collection, storage requirements, and privacy considerations. The 6 existing subtasks cover the main implementation areas from data collection to user-facing benefits."
    },
    {
      "taskId": 82,
      "taskTitle": "Update supported-models.json with token limit fields",
      "complexityScore": 3,
      "recommendedSubtasks": 1,
      "expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on researching accurate token limit values for each model and ensuring backward compatibility.",
      "reasoning": "This task involves a simple update to the supported-models.json file to include new token limit fields. The complexity is low as it primarily involves research and data entry. No subtasks are necessary as the task is well-defined and focused."
    },
    {
      "taskId": 83,
      "taskTitle": "Update config-manager.js defaults and getters",
      "complexityScore": 4,
      "recommendedSubtasks": 1,
      "expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on updating the DEFAULTS object and related getter functions while maintaining backward compatibility.",
      "reasoning": "This task involves updating the config-manager.js module to replace maxTokens with more specific token limit fields. The complexity is relatively low as it primarily involves modifying existing code rather than creating new functionality. No subtasks are necessary as the task is well-defined and focused."
    },
    {
      "taskId": 84,
      "taskTitle": "Implement token counting utility",
      "complexityScore": 5,
      "recommendedSubtasks": 1,
      "expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms."
|
||||
"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
|
||||
},
|
||||
{
|
||||
"taskId": 69,
|
||||
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 6,
|
||||
"expansionPrompt": "Break down the task 'Enhance Analyze Complexity for Specific Task IDs' into 6 subtasks focusing on: 1) Core logic modification to accept ID parameters, 2) Report merging functionality, 3) CLI interface updates, 4) MCP tool integration, 5) Documentation updates, and 6) Comprehensive testing across all components.",
|
||||
"reasoning": "This task involves modifying existing functionality across multiple components (core logic, CLI, MCP) with complex logic for filtering tasks and merging reports. The implementation requires careful handling of different parameter combinations and edge cases. The task has interdependent components that need to work together seamlessly, and the report merging functionality adds significant complexity."
|
||||
},
|
||||
{
|
||||
"taskId": 70,
|
||||
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the 'diagram' command implementation into 5 subtasks: 1) Command interface and parameter handling, 2) Task data extraction and transformation to Mermaid syntax, 3) Diagram rendering with status color coding, 4) Output formatting and file export functionality, and 5) Error handling and edge case management.",
|
||||
"reasoning": "This task requires implementing a new feature rather than modifying existing code, which reduces complexity from integration challenges. However, it involves working with visualization logic, dependency mapping, and multiple output formats. The color coding based on status and handling of dependency relationships adds moderate complexity. The task is well-defined but requires careful attention to diagram formatting and error handling."
|
||||
},
|
||||
{
|
||||
"taskId": 85,
|
||||
"taskTitle": "Update ai-services-unified.js for dynamic token limits",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Break down the update of ai-services-unified.js for dynamic token limits into subtasks such as: (1) Import and integrate the token counting utility, (2) Refactor _unifiedServiceRunner to calculate and enforce dynamic token limits, (3) Update error handling for token limit violations, (4) Add and verify logging for token usage, (5) Write and execute tests for various prompt and model scenarios.",
|
||||
"reasoning": "This task involves significant code changes to a core function, integration of a new utility, dynamic logic for multiple models, and robust error handling. It also requires comprehensive testing for edge cases and integration, making it moderately complex and best managed by splitting into focused subtasks."
|
||||
},
|
||||
{
|
||||
"taskId": 86,
|
||||
"taskTitle": "Update .taskmasterconfig schema and user guide",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 4,
|
||||
"expansionPrompt": "Expand this task into subtasks: (1) Draft a migration guide for users, (2) Update user documentation to explain new config fields, (3) Modify schema validation logic in config-manager.js, (4) Test and validate backward compatibility and error messaging.",
|
||||
"reasoning": "The task spans documentation, schema changes, migration guidance, and validation logic. While not algorithmically complex, it requires careful coordination and thorough testing to ensure a smooth user transition and robust validation."
|
||||
},
|
||||
{
|
||||
"taskId": 87,
|
||||
"taskTitle": "Implement validation and error handling",
|
||||
"complexityScore": 5,
|
||||
"recommendedSubtasks": 4,
|
||||
"expansionPrompt": "Decompose this task into: (1) Add validation logic for model and config loading, (2) Implement error handling and fallback mechanisms, (3) Enhance logging and reporting for token usage, (4) Develop helper functions for configuration suggestions and improvements.",
|
||||
"reasoning": "This task is primarily about adding validation, error handling, and logging. While important for robustness, the logic is straightforward and can be modularized into a few clear subtasks."
|
||||
},
|
||||
{
|
||||
"taskId": 89,
|
||||
"taskTitle": "Introduce Prioritize Command with Enhanced Priority Levels",
|
||||
"complexityScore": 6,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Expand this task into: (1) Implement the prioritize command with all required flags and shorthands, (2) Update CLI output and help documentation for new priority levels, (3) Ensure backward compatibility with existing commands, (4) Add error handling for invalid inputs, (5) Write and run tests for all command scenarios.",
|
||||
"reasoning": "This CLI feature requires command parsing, updating internal logic for new priority levels, documentation, and robust error handling. The complexity is moderate due to the need for backward compatibility and comprehensive testing."
|
||||
},
|
||||
{
|
||||
"taskId": 90,
|
||||
"taskTitle": "Implement Subtask Progress Analyzer and Reporting System",
|
||||
"complexityScore": 8,
|
||||
"recommendedSubtasks": 6,
|
||||
"expansionPrompt": "Break down the analyzer implementation into: (1) Design and implement progress tracking logic, (2) Develop status validation and issue detection, (3) Build the reporting system with multiple output formats, (4) Integrate analyzer with the existing task management system, (5) Optimize for performance and scalability, (6) Write unit, integration, and performance tests.",
|
||||
"reasoning": "This is a complex, multi-faceted feature involving data analysis, reporting, integration, and performance optimization. It touches many parts of the system and requires careful design, making it one of the most complex tasks in the list."
|
||||
},
|
||||
{
|
||||
"taskId": 91,
|
||||
"taskTitle": "Implement Move Command for Tasks and Subtasks",
|
||||
"complexityScore": 7,
|
||||
"recommendedSubtasks": 5,
|
||||
"expansionPrompt": "Expand this task into: (1) Implement move logic for tasks and subtasks, (2) Handle edge cases (invalid ids, non-existent parents, circular dependencies), (3) Update CLI to support move command with flags, (4) Ensure data integrity and update relationships, (5) Write and execute tests for various move scenarios.",
|
||||
"reasoning": "Moving tasks and subtasks requires careful handling of hierarchical data, edge cases, and data integrity. The command must be robust and user-friendly, necessitating multiple focused subtasks for safe implementation."
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,55 +0,0 @@
# Task ID: 93
# Title: Implement Google Vertex AI Provider Integration
# Status: pending
# Dependencies: 19, 94
# Priority: medium
# Description: Develop a dedicated Google Vertex AI provider in the codebase, enabling users to leverage Vertex AI models with enterprise-grade configuration and authentication.
# Details:
1. Create a new provider class in `src/ai-providers/google-vertex.js` that extends the existing BaseAIProvider, following the established structure used by other providers (e.g., google.js, openai.js).
2. Integrate the Vercel AI SDK's `@ai-sdk/google-vertex` package. Use the default `vertex` provider for standard usage, and allow for custom configuration via `createVertex` for advanced scenarios (e.g., specifying project ID, location, and credentials).
3. Implement all required interface methods (such as `getClient`, `generateText`, etc.) to ensure compatibility with the provider system. Reference the implementation patterns from other providers for consistency.
4. Handle Vertex AI-specific configuration, including project ID, location, and Google Cloud authentication. Support both environment-based authentication and explicit service account credentials via `googleAuthOptions`.
5. Implement robust error handling for Vertex-specific issues, including authentication failures and API errors, leveraging the system-wide error handling patterns.
6. Update `src/ai-providers/index.js` to export the new provider, and add the 'vertex' entry to the PROVIDERS object in `scripts/modules/ai-services-unified.js`.
7. Update documentation to provide clear setup instructions for Google Vertex AI, including required environment variables, service account setup, and configuration examples.
8. Ensure the implementation is modular and maintainable, supporting future expansion for additional Vertex AI features or models.

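A minimal sketch of steps 1–4 follows. The `BaseAIProvider` import path and method shape, and the `VERTEX_PROJECT_ID`/`VERTEX_LOCATION` environment variable names, are assumptions for illustration; `createVertex` and its `project`/`location`/`googleAuthOptions` options come from the `@ai-sdk/google-vertex` package.

```javascript
// Sketch only — not the project's actual implementation.
import { createVertex } from '@ai-sdk/google-vertex';
import { BaseAIProvider } from './base-provider.js'; // hypothetical path

export class VertexAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Google Vertex AI';
	}

	/**
	 * Builds a Vertex client. Prefers explicit configuration and falls back
	 * to environment values; without explicit credentials, google-auth-library
	 * resolves Application Default Credentials.
	 */
	getClient(params = {}) {
		const { projectId, location, credentials } = params;
		return createVertex({
			project: projectId || process.env.VERTEX_PROJECT_ID,
			location: location || process.env.VERTEX_LOCATION || 'us-central1',
			// Explicit service account credentials are optional.
			...(credentials ? { googleAuthOptions: { credentials } } : {})
		});
	}
}
```
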
# Test Strategy:
- Write unit tests for the new provider class, covering all interface methods and configuration scenarios (default, custom, error cases).
- Verify that the provider can successfully authenticate using both environment-based and explicit service account credentials.
- Test integration with the provider system by selecting 'vertex' as the provider and generating text using supported Vertex AI models (e.g., Gemini).
- Simulate authentication and API errors to confirm robust error handling and user feedback.
- Confirm that the provider is correctly exported and available in the PROVIDERS object.
- Review and validate the updated documentation for accuracy and completeness.

# Subtasks:
## 1. Create Google Vertex AI Provider Class [pending]
### Dependencies: None
### Description: Develop a new provider class in `src/ai-providers/google-vertex.js` that extends the BaseAIProvider, following the structure of existing providers.
### Details:
Ensure the new class is consistent with the architecture of other providers such as google.js and openai.js, and is ready to integrate with the AI SDK.

## 2. Integrate Vercel AI SDK Google Vertex Package [pending]
### Dependencies: 93.1
### Description: Integrate the `@ai-sdk/google-vertex` package, supporting both the default provider and custom configuration via `createVertex`.
### Details:
Allow for standard usage with the default `vertex` provider and advanced scenarios using `createVertex` for custom project ID, location, and credentials as per SDK documentation.

## 3. Implement Provider Interface Methods [pending]
### Dependencies: 93.2
### Description: Implement all required interface methods (e.g., `getClient`, `generateText`) to ensure compatibility with the provider system.
### Details:
Reference implementation patterns from other providers to maintain consistency and ensure all required methods are present and functional.

## 4. Handle Vertex AI Configuration and Authentication [pending]
### Dependencies: 93.3
### Description: Implement support for Vertex AI-specific configuration, including project ID, location, and authentication via environment variables or explicit service account credentials.
### Details:
Support both environment-based authentication and explicit credentials using `googleAuthOptions`, following Google Cloud and Vertex AI setup best practices.

## 5. Update Exports, Documentation, and Error Handling [pending]
### Dependencies: 93.4
### Description: Export the new provider, update the PROVIDERS object, and document setup instructions, including robust error handling for Vertex-specific issues.
### Details:
Update `src/ai-providers/index.js` and `scripts/modules/ai-services-unified.js`, and provide clear documentation for setup, configuration, and error handling patterns.

@@ -1,103 +0,0 @@
# Task ID: 94
# Title: Implement Azure OpenAI Provider Integration
# Status: done
# Dependencies: 19, 26
# Priority: medium
# Description: Create a comprehensive Azure OpenAI provider implementation that integrates with the existing AI provider system, enabling users to leverage Azure-hosted OpenAI models through proper authentication and configuration.
# Details:
Implement the Azure OpenAI provider following the established provider pattern:

1. **Create Azure Provider Class** (`src/ai-providers/azure.js`):
   - Extend BaseAIProvider class following the same pattern as openai.js and google.js
   - Import and use `createAzureOpenAI` from `@ai-sdk/azure` package
   - Implement required interface methods: `getClient()`, `validateConfig()`, and any other abstract methods
   - Handle Azure-specific configuration: endpoint URL, API key, and deployment name
   - Add proper error handling for missing or invalid Azure configuration

2. **Configuration Management**:
   - Support environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_DEPLOYMENT
   - Validate that both endpoint and API key are provided
   - Provide clear error messages for configuration issues
   - Follow the same configuration pattern as other providers

3. **Integration Updates**:
   - Update `src/ai-providers/index.js` to export the new AzureProvider
   - Add 'azure' entry to the PROVIDERS object in `scripts/modules/ai-services-unified.js`
   - Ensure the provider is properly registered and accessible through the unified AI services

4. **Error Handling**:
   - Implement Azure-specific error handling for authentication failures
   - Handle endpoint connectivity issues with helpful error messages
   - Validate deployment name and provide guidance for common configuration mistakes
   - Follow the established error handling patterns from Task 19

5. **Documentation Updates**:
   - Update any provider documentation to include Azure OpenAI setup instructions
   - Add configuration examples for Azure OpenAI environment variables
   - Include troubleshooting guidance for common Azure-specific issues

The implementation should maintain consistency with existing provider implementations while handling Azure's unique authentication and endpoint requirements.

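For orientation, here is a hedged sketch of the class described above. One discrepancy to note: the task text names a `createAzureOpenAI` factory, while the published `@ai-sdk/azure` package exports `createAzure`, which the sketch uses. The `BaseAIProvider` shape and import path are assumptions.

```javascript
// Sketch under stated assumptions — not the project's actual implementation.
import { createAzure } from '@ai-sdk/azure'; // task text says createAzureOpenAI
import { BaseAIProvider } from './base-provider.js'; // hypothetical path

export class AzureProvider extends BaseAIProvider {
	/**
	 * Validates Azure configuration, preferring explicit params over the
	 * AZURE_OPENAI_* environment variables named in this task.
	 */
	validateConfig(params = {}) {
		const endpoint = params.baseURL || process.env.AZURE_OPENAI_ENDPOINT;
		const apiKey = params.apiKey || process.env.AZURE_OPENAI_API_KEY;
		if (!endpoint || !apiKey) {
			throw new Error(
				'Azure OpenAI requires both an endpoint (AZURE_OPENAI_ENDPOINT) and an API key (AZURE_OPENAI_API_KEY).'
			);
		}
		return { endpoint, apiKey };
	}

	getClient(params = {}) {
		const { endpoint, apiKey } = this.validateConfig(params);
		return createAzure({ baseURL: endpoint, apiKey });
	}
}
```
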
# Test Strategy:
Verify the Azure OpenAI provider implementation through comprehensive testing:

1. **Unit Testing**:
   - Test provider class instantiation and configuration validation
   - Verify getClient() method returns properly configured Azure OpenAI client
   - Test error handling for missing/invalid configuration parameters
   - Validate that the provider correctly extends BaseAIProvider

2. **Integration Testing**:
   - Test provider registration in the unified AI services system
   - Verify the provider appears in the PROVIDERS object and is accessible
   - Test end-to-end functionality with valid Azure OpenAI credentials
   - Validate that the provider works with existing AI operation workflows

3. **Configuration Testing**:
   - Test with various environment variable combinations
   - Verify proper error messages for missing endpoint or API key
   - Test with invalid endpoint URLs and ensure graceful error handling
   - Validate deployment name handling and error reporting

4. **Manual Verification**:
   - Set up test Azure OpenAI credentials and verify successful connection
   - Test actual AI operations (like task expansion) using the Azure provider
   - Verify that the provider selection works correctly in the CLI
   - Confirm that error messages are helpful and actionable for users

5. **Documentation Verification**:
   - Ensure all configuration examples work as documented
   - Verify that setup instructions are complete and accurate
   - Test troubleshooting guidance with common error scenarios

# Subtasks:
## 1. Create Azure Provider Class [done]
### Dependencies: None
### Description: Implement the AzureProvider class that extends BaseAIProvider to handle Azure OpenAI integration
### Details:
Create the AzureProvider class in src/ai-providers/azure.js that extends BaseAIProvider. Import createAzureOpenAI from @ai-sdk/azure package. Implement required interface methods including getClient() and validateConfig(). Handle Azure-specific configuration parameters: endpoint URL, API key, and deployment name. Follow the established pattern in openai.js and google.js. Ensure proper error handling for missing or invalid configuration.

## 2. Implement Configuration Management [done]
### Dependencies: 94.1
### Description: Add support for Azure OpenAI environment variables and configuration validation
### Details:
Implement configuration management for Azure OpenAI provider that supports environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and AZURE_OPENAI_DEPLOYMENT. Add validation logic to ensure both endpoint and API key are provided. Create clear error messages for configuration issues. Follow the same configuration pattern as implemented in other providers. Ensure the validateConfig() method properly checks all required Azure configuration parameters.

## 3. Update Provider Integration [done]
### Dependencies: 94.1, 94.2
### Description: Integrate the Azure provider into the existing AI provider system
### Details:
Update src/ai-providers/index.js to export the new AzureProvider class. Add 'azure' entry to the PROVIDERS object in scripts/modules/ai-services-unified.js. Ensure the provider is properly registered and accessible through the unified AI services. Test that the provider can be instantiated and used through the provider selection mechanism. Follow the same integration pattern used for existing providers.

## 4. Implement Azure-Specific Error Handling [done]
### Dependencies: 94.1, 94.2
### Description: Add specialized error handling for Azure OpenAI-specific issues
### Details:
Implement Azure-specific error handling for authentication failures, endpoint connectivity issues, and deployment name validation. Provide helpful error messages that guide users to resolve common configuration mistakes. Follow the established error handling patterns from Task 19. Create custom error classes if needed for Azure-specific errors. Ensure errors are properly propagated and formatted for user display.

## 5. Update Documentation [done]
### Dependencies: 94.1, 94.2, 94.3, 94.4
### Description: Create comprehensive documentation for the Azure OpenAI provider integration
### Details:
Update provider documentation to include Azure OpenAI setup instructions. Add configuration examples for Azure OpenAI environment variables. Include troubleshooting guidance for common Azure-specific issues. Document the required Azure resource creation process with references to Microsoft's documentation. Provide examples of valid configuration settings and explain each required parameter. Include information about Azure OpenAI model deployment requirements.

@@ -1,149 +0,0 @@
# Task ID: 95
# Title: Implement .taskmaster Directory Structure
# Status: in-progress
# Dependencies: 1, 3, 4, 17
# Priority: high
# Description: Consolidate all Task Master-managed files in user projects into a clean, centralized .taskmaster/ directory structure to improve organization and keep user project directories clean, based on GitHub issue #275.
# Details:
This task involves restructuring how Task Master organizes files within user projects to improve maintainability and keep project directories clean:

1. Create a new `.taskmaster/` directory structure in user projects:
   - Move task files from `tasks/` to `.taskmaster/tasks/`
   - Move PRD files from `scripts/` to `.taskmaster/docs/`
   - Move analysis reports to `.taskmaster/reports/`
   - Move configuration from `.taskmasterconfig` to `.taskmaster/config.json`
   - Create `.taskmaster/templates/` for user templates

2. Update all Task Master code that creates/reads user files:
   - Modify task file generation to use `.taskmaster/tasks/`
   - Update PRD file handling to use `.taskmaster/docs/`
   - Adjust report generation to save to `.taskmaster/reports/`
   - Update configuration loading to look for `.taskmaster/config.json`
   - Modify any path resolution logic in Task Master's codebase

3. Ensure backward compatibility during migration (see the sketch after this list):
   - Implement path fallback logic that checks both old and new locations
   - Add deprecation warnings when old paths are detected
   - Create a migration command to help users transition to the new structure
   - Preserve existing user data during migration

4. Update the project initialization process:
   - Modify the init command to create the new `.taskmaster/` directory structure
   - Update default file creation to use new paths

5. Benefits of the new structure:
   - Keeps user project directories clean and organized
   - Clearly separates Task Master files from user project files
   - Makes it easier to add Task Master to .gitignore if desired
   - Provides logical grouping of different file types

6. Test thoroughly to ensure all functionality works with the new structure:
   - Verify all Task Master commands work with the new paths
   - Ensure backward compatibility functions correctly
   - Test migration process preserves all user data

7. Update documentation:
   - Update README.md to reflect the new user file structure
   - Add migration guide for existing users
   - Document the benefits of the cleaner organization

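A minimal sketch of the fallback idea from point 3 above — the helper name and mapping table are illustrative, while the old and new paths are the ones listed in this task:

```javascript
// Sketch only — not the project's actual implementation.
import fs from 'fs';
import path from 'path';

// New location first, legacy location second.
const PATH_MAP = {
	config: ['.taskmaster/config.json', '.taskmasterconfig'],
	tasks: ['.taskmaster/tasks/tasks.json', 'tasks/tasks.json'],
	prd: ['.taskmaster/docs/prd.txt', 'scripts/prd.txt']
};

export function resolveProjectFile(projectRoot, kind) {
	const [newPath, legacyPath] = PATH_MAP[kind];
	const preferred = path.join(projectRoot, newPath);
	if (fs.existsSync(preferred)) return preferred;

	const legacy = path.join(projectRoot, legacyPath);
	if (fs.existsSync(legacy)) {
		// Deprecation warning when an old path is detected.
		console.warn(
			`Deprecation: found ${legacyPath}; run \`task-master migrate\` to move it to ${newPath}.`
		);
		return legacy;
	}
	return preferred; // default to the new location when creating files
}
```
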
# Test Strategy:
1. Unit Testing:
   - Create unit tests for path resolution that verify both new and old paths work
   - Test configuration loading with both `.taskmasterconfig` and `.taskmaster/config.json`
   - Verify the migration command correctly moves files and preserves content
   - Test file creation in all new subdirectories

2. Integration Testing:
   - Run all existing integration tests with the new directory structure
   - Verify that all Task Master commands function correctly with new paths
   - Test backward compatibility by running commands with old file structure

3. Migration Testing:
   - Test the migration process on sample projects with existing tasks and files
   - Verify all tasks, PRDs, reports, and configurations are correctly moved
   - Ensure no data loss occurs during migration
   - Test migration with partial existing structures (e.g., only tasks/ exists)

4. User Workflow Testing:
   - Test complete workflows: init → create tasks → generate reports → update PRDs
   - Verify all generated files go to correct locations in `.taskmaster/`
   - Test that user project directories remain clean

5. Manual Testing:
   - Perform end-to-end testing with the new structure
   - Create, update, and delete tasks using the new structure
   - Generate reports and verify they're saved to `.taskmaster/reports/`

6. Documentation Verification:
   - Review all documentation to ensure it accurately reflects the new user file structure
   - Verify the migration guide provides clear instructions

7. Regression Testing:
   - Run the full test suite to ensure no regressions were introduced
   - Verify existing user projects continue to work during the transition period

# Subtasks:
## 1. Create .taskmaster directory structure [done]
### Dependencies: None
### Description: Create the new .taskmaster directory and move existing files to their new locations
### Details:
Create a new .taskmaster/ directory in the project root. Move the tasks/ directory to .taskmaster/tasks/. Move the scripts/ directory to .taskmaster/scripts/. Move the .taskmasterconfig file to .taskmaster/config.json. Ensure proper file permissions are maintained during the move.
<info added on 2025-05-29T15:03:56.912Z>
Create the new .taskmaster/ directory structure in user projects with subdirectories for tasks/, docs/, reports/, and templates/. Move the existing .taskmasterconfig file to .taskmaster/config.json. Since this project is also a Task Master user, move this project's current user files (tasks.json, PRD files, etc.) to the new .taskmaster/ structure to test the implementation. This subtask focuses on user project directory structure, not Task Master source code relocation.
</info added on 2025-05-29T15:03:56.912Z>

## 2. Update Task Master code for new user file paths [done]
### Dependencies: 95.1
### Description: Modify all Task Master code that creates or reads user project files to use the new .taskmaster structure
### Details:
Update Task Master's file handling code to use the new paths: tasks in .taskmaster/tasks/, PRD files in .taskmaster/docs/, reports in .taskmaster/reports/, and config in .taskmaster/config.json. Modify path resolution logic throughout the Task Master codebase to reference the new user file locations.

## 3. Update task file generation system [done]
### Dependencies: 95.1
### Description: Modify the task file generation system to use the new directory structure
### Details:
Update the task file generation system to create and read task files from .taskmaster/tasks/ instead of tasks/. Ensure all template paths are updated. Modify any path resolution logic specific to task file handling.

## 4. Implement backward compatibility logic [in-progress]
### Dependencies: 95.2, 95.3
### Description: Add fallback mechanisms to support both old and new file locations during transition
### Details:
Implement path fallback logic that checks both old and new locations when files aren't found. Add deprecation warnings when old paths are used, informing users about the new structure. Ensure error messages are clear about the transition.

## 5. Create migration command for users [done]
### Dependencies: 95.1, 95.4
### Description: Develop a Task Master command to help users transition their existing projects to the new structure
### Details:
Create a 'taskmaster migrate' command that automatically moves user files from old locations to the new .taskmaster structure. Move tasks/ to .taskmaster/tasks/, scripts/prd.txt to .taskmaster/docs/, reports to .taskmaster/reports/, and .taskmasterconfig to .taskmaster/config.json. Include backup functionality and validation to ensure migration completed successfully.

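As a rough sketch of what this subtask describes — the move list mirrors the paths named above, while the function name and error handling are illustrative, and a real implementation would also create the backup and run the validation this subtask calls for:

```javascript
// Sketch only — omits the backup/validation steps this subtask requires.
import fs from 'fs';
import path from 'path';

const MOVES = [
	['tasks', '.taskmaster/tasks'],
	['scripts/prd.txt', '.taskmaster/docs/prd.txt'],
	['scripts/task-complexity-report.json', '.taskmaster/reports/task-complexity-report.json'],
	['.taskmasterconfig', '.taskmaster/config.json']
];

export function migrateProject(projectRoot) {
	for (const [from, to] of MOVES) {
		const src = path.join(projectRoot, from);
		if (!fs.existsSync(src)) continue; // tolerate partial legacy layouts
		const dest = path.join(projectRoot, to);
		fs.mkdirSync(path.dirname(dest), { recursive: true });
		fs.renameSync(src, dest); // same-filesystem move, works for dirs too
	}
}
```
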
## 6. Update project initialization process [done]
### Dependencies: 95.1
### Description: Modify the init command to create the new directory structure for new projects
### Details:
Update the init command to create the .taskmaster directory and its subdirectories (tasks/, docs/, reports/, templates/). Modify default file creation to use the new paths. Ensure new projects are created with the correct structure from the start.

## 7. Update PRD and report file handling [done]
### Dependencies: 95.2, 95.6
### Description: Modify PRD file creation and report generation to use the new directory structure
### Details:
Update PRD file handling to create and read files from .taskmaster/docs/ instead of scripts/. Modify report generation (like task-complexity-report.json) to save to .taskmaster/reports/. Ensure all file operations use the new paths consistently.

## 8. Update documentation and create migration guide [done]
### Dependencies: 95.5, 95.6, 95.7
### Description: Update all documentation to reflect the new directory structure and provide migration guidance
### Details:
Update README.md and other documentation to reflect the new .taskmaster structure for user projects. Create a comprehensive migration guide explaining the benefits of the new structure and how to migrate existing projects. Include examples of the new directory layout and explain how it keeps user project directories clean.

## 9. Add templates directory support [done]
### Dependencies: 95.2, 95.6
### Description: Implement support for user templates in the .taskmaster/templates/ directory
### Details:
Create functionality to support user-defined templates in .taskmaster/templates/. Allow users to store custom task templates, PRD templates, or other reusable files. Update Task Master commands to recognize and use templates from this directory when available.

## 10. Verify clean user project directories [in-progress]
### Dependencies: 95.8, 95.9
### Description: Ensure the new structure keeps user project root directories clean and organized
### Details:
Validate that after implementing the new structure, user project root directories only contain their actual project files plus the single .taskmaster/ directory. Verify that no Task Master files are created outside of .taskmaster/. Test that users can easily add .taskmaster/ to .gitignore if they choose to exclude Task Master files from version control.

@@ -1,47 +0,0 @@
<context>
# Overview
[Provide a high-level overview of your product here. Explain what problem it solves, who it's for, and why it's valuable.]

# Core Features
[List and describe the main features of your product. For each feature, include:
- What it does
- Why it's important
- How it works at a high level]

# User Experience
[Describe the user journey and experience. Include:
- User personas
- Key user flows
- UI/UX considerations]
</context>
<PRD>
# Technical Architecture
[Outline the technical implementation details:
- System components
- Data models
- APIs and integrations
- Infrastructure requirements]

# Development Roadmap
[Break down the development process into phases:
- MVP requirements
- Future enhancements
- Do not think about timelines whatsoever -- all that matters is scope and detailing exactly what needs to be built in each phase so it can later be cut up into tasks]

# Logical Dependency Chain
[Define the logical order of development:
- Which features need to be built first (foundation)
- Getting to a usable/visible working front end as quickly as possible
- Properly pacing and scoping each feature so it is atomic but can also be built upon and improved as development progresses]

# Risks and Mitigations
[Identify potential risks and how they'll be addressed:
- Technical challenges
- Figuring out the MVP that we can build upon
- Resource constraints]

# Appendix
[Include any additional information:
- Research findings
- Technical specifications]
</PRD>

@@ -25,8 +25,8 @@
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "projectName": "Taskmaster",
    "ollamaBaseURL": "http://localhost:11434/api",
    "ollamaBaseUrl": "http://localhost:11434/api",
    "userId": "1234567890",
    "azureBaseURL": "https://your-endpoint.azure.com/"
    "azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
  }
}

CONTRIBUTING.md
@@ -1,335 +0,0 @@
# Contributing to Task Master

Thank you for your interest in contributing to Task Master! We're excited to work with you and appreciate your help in making this project better. 🚀

## 🤝 Our Collaborative Approach

We're a **PR-friendly team** that values collaboration:

- ✅ **We review PRs quickly** - Usually within hours, not days
- ✅ **We're super reactive** - Expect fast feedback and engagement
- ✅ **We sometimes take over PRs** - If your contribution is valuable but needs cleanup, we might jump in to help finish it
- ✅ **We're open to all contributions** - From bug fixes to major features

**We don't mind AI-generated code**, but we do expect you to:

- ✅ **Review and understand** what the AI generated
- ✅ **Test the code thoroughly** before submitting
- ✅ **Ensure it's well-written** and follows our patterns
- ❌ **Don't submit "AI slop"** - untested, unreviewed AI output

> **Why this matters**: We spend significant time reviewing PRs. Help us help you by submitting quality contributions that save everyone time!

## 🚀 Quick Start for Contributors

### 1. Fork and Clone

```bash
git clone https://github.com/YOUR_USERNAME/claude-task-master.git
cd claude-task-master
npm install
```

### 2. Create a Feature Branch

**Important**: Always target the `next` branch, not `main`:

```bash
git checkout next
git pull origin next
git checkout -b feature/your-feature-name
```

### 3. Make Your Changes

Follow our development guidelines below.

### 4. Test Everything Yourself

**Before submitting your PR**, ensure:

```bash
# Run all tests
npm test

# Check formatting
npm run format-check

# Fix formatting if needed
npm run format
```

### 5. Create a Changeset

**Required for most changes**:

```bash
npm run changeset
```

See the [Changeset Guidelines](#changeset-guidelines) below for details.

### 6. Submit Your PR

- Target the `next` branch
- Write a clear description
- Reference any related issues

## 📋 Development Guidelines

### Branch Strategy

- **`main`**: Production-ready code
- **`next`**: Development branch - **target this for PRs**
- **Feature branches**: `feature/description` or `fix/description`

### Code Quality Standards

1. **Write tests** for new functionality
2. **Follow existing patterns** in the codebase
3. **Add JSDoc comments** for functions
4. **Keep functions focused** and single-purpose

### Testing Requirements

Your PR **must pass all CI checks**:

- ✅ **Unit tests**: `npm test`
- ✅ **Format check**: `npm run format-check`

**Test your changes locally first** - this saves review time and shows you care about quality.

## 📦 Changeset Guidelines

We use [Changesets](https://github.com/changesets/changesets) to manage versioning and generate changelogs.

### When to Create a Changeset

**Always create a changeset for**:

- ✅ New features
- ✅ Bug fixes
- ✅ Breaking changes
- ✅ Performance improvements
- ✅ User-facing documentation updates
- ✅ Dependency updates that affect functionality

**Skip changesets for**:

- ❌ Internal documentation only
- ❌ Test-only changes
- ❌ Code formatting/linting
- ❌ Development tooling that doesn't affect users

### How to Create a Changeset

1. **After making your changes**:

   ```bash
   npm run changeset
   ```

2. **Choose the bump type**:

   - **Major**: Breaking changes
   - **Minor**: New features
   - **Patch**: Bug fixes, docs, performance improvements

3. **Write a clear summary**:

   ```
   Add support for custom AI models in MCP configuration
   ```

4. **Commit the changeset file** with your changes:
   ```bash
   git add .changeset/*.md
   git commit -m "feat: add custom AI model support"
   ```

### Changeset vs Git Commit Messages

- **Changeset summary**: User-facing, goes in CHANGELOG.md
- **Git commit**: Developer-facing, explains the technical change

Example:

```bash
# Changeset summary (user-facing)
"Add support for custom Ollama models"

# Git commit message (developer-facing)
"feat(models): implement custom Ollama model validation

- Add model validation for custom Ollama endpoints
- Update configuration schema to support custom models
- Add tests for new validation logic"
```

## 🔧 Development Setup

### Prerequisites

- Node.js 18+
- npm or yarn

### Environment Setup

1. **Copy environment template**:

   ```bash
   cp .env.example .env
   ```

2. **Add your API keys** (for testing AI features):
   ```bash
   ANTHROPIC_API_KEY=your_key_here
   OPENAI_API_KEY=your_key_here
   # Add others as needed
   ```

### Running Tests

```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run with coverage
npm run test:coverage

# Run E2E tests
npm run test:e2e
```

### Code Formatting

We use Prettier for consistent formatting:

```bash
# Check formatting
npm run format-check

# Fix formatting
npm run format
```

## 📝 PR Guidelines

### Before Submitting

- [ ] **Target the `next` branch**
- [ ] **Test everything locally**
- [ ] **Run the full test suite**
- [ ] **Check code formatting**
- [ ] **Create a changeset** (if needed)
- [ ] **Re-read your changes** - ensure they're clean and well-thought-out

### PR Description Template

```markdown
## Description

Brief description of what this PR does.

## Type of Change

- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing

- [ ] I have tested this locally
- [ ] All existing tests pass
- [ ] I have added tests for new functionality

## Changeset

- [ ] I have created a changeset (or this change doesn't need one)

## Additional Notes

Any additional context or notes for reviewers.
```

### What We Look For

✅ **Good PRs**:

- Clear, focused changes
- Comprehensive testing
- Good commit messages
- Proper changeset (when needed)
- Self-reviewed code

❌ **Avoid**:

- Massive PRs that change everything
- Untested code
- Formatting issues
- Missing changesets for user-facing changes
- AI-generated code that wasn't reviewed

## 🏗️ Project Structure

```
claude-task-master/
├── bin/          # CLI executables
├── mcp-server/   # MCP server implementation
├── scripts/      # Core task management logic
├── src/          # Shared utilities, providers, and well-refactored code (we are slowly moving everything here)
├── tests/        # Test files
├── docs/         # Documentation
├── .cursor/      # Cursor IDE rules and configuration
└── assets/       # Assets like rules and configuration for all IDEs
```

### Key Areas for Contribution

- **CLI Commands**: `scripts/modules/commands.js`
- **MCP Tools**: `mcp-server/src/tools/`
- **Core Logic**: `scripts/modules/task-manager/`
- **AI Providers**: `src/ai-providers/`
- **Tests**: `tests/`

## 🐛 Reporting Issues

### Bug Reports

Include:

- Task Master version
- Node.js version
- Operating system
- Steps to reproduce
- Expected vs actual behavior
- Error messages/logs

### Feature Requests

Include:

- Clear description of the feature
- Use case/motivation
- Proposed implementation (if you have ideas)
- Willingness to contribute

## 💬 Getting Help

- **Discord**: [Join our community](https://discord.gg/taskmasterai)
- **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues)
- **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions)

## 📄 License

By contributing, you agree that your contributions will be licensed under the same license as the project (MIT with Commons Clause).

---

**Thank you for contributing to Task Master!** 🎉

Your contributions help make AI-driven development more accessible and efficient for everyone.

README.md
@@ -45,23 +45,23 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.

```jsonc
{
	"mcpServers": {
		"taskmaster-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
				"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
				"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
				"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
				"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
				"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
				"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
				"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE",
			},
		},
	},
	"mcpServers": {
		"taskmaster-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
				"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
				"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
				"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
				"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
				"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
				"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
				"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
			}
		}
	}
}
```

@@ -71,23 +71,23 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.

```jsonc
{
	"servers": {
		"taskmaster-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
				"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
				"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
				"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
				"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
				"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
				"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
			},
			"type": "stdio",
		},
	},
	"servers": {
		"taskmaster-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
				"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
				"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
				"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
				"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
				"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
				"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
			},
			"type": "stdio"
		}
	}
}
```

@@ -99,7 +99,7 @@ Open Cursor Settings (Ctrl+Shift+J) ➡ Click on MCP tab on the left ➡ Enable

#### 3. (Optional) Configure the models you want to use

In your editor's AI chat pane, say:
In your editor’s AI chat pane, say:

```txt
Change the main, research and fallback models to <model_name>, <model_name> and <model_name> respectively.
```

@@ -109,21 +109,15 @@ Change the main, research and fallback models to <model_name>, <model_name> and

#### 4. Initialize Task Master

In your editor's AI chat pane, say:
In your editor’s AI chat pane, say:

```txt
Initialize taskmaster-ai in my project
```

#### 5. Make sure you have a PRD (Recommended)
#### 5. Make sure you have a PRD in `<project_folder>/scripts/prd.txt`

For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`

An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.

> [!NOTE]
> While a PRD is recommended for complex projects, you can always create individual tasks by asking "Can you help me implement [description of what you want to do]?" in chat.

An example of a PRD is located in `<project_folder>/scripts/example_prd.txt`.

**Always start with a detailed PRD.**

@@ -134,7 +128,7 @@ The more detailed your PRD, the better the generated tasks will be.

Use your AI assistant to:

- Parse requirements: `Can you parse my PRD at scripts/prd.txt?`
- Plan next step: `What's the next task I should work on?`
- Plan next step: `What’s the next task I should work on?`
- Implement a task: `Can you help me implement task 3?`
- Expand a task: `Can you help me expand task 4?`

@@ -192,7 +186,6 @@ For more detailed information, check out the documentation in the `docs` directo

- [Command Reference](docs/command-reference.md) - Complete list of all available commands
- [Task Structure](docs/task-structure.md) - Understanding the task format and features
- [Example Interactions](docs/examples.md) - Common Cursor AI interaction examples
- [Migration Guide](docs/migration-guide.md) - Guide to migrating to the new project structure

## Troubleshooting

@@ -25,7 +25,7 @@
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "projectName": "Taskmaster",
    "ollamaBaseURL": "http://localhost:11434/api",
    "azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/"
    "ollamaBaseUrl": "http://localhost:11434/api",
    "azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
  }
}

@@ -7,7 +7,7 @@

```bash
# Project Setup
task-master init                                    # Initialize Task Master in current project
task-master parse-prd .taskmaster/docs/prd.txt      # Generate tasks from PRD document
task-master parse-prd scripts/prd.txt               # Generate tasks from PRD document
task-master models --setup                          # Configure AI models interactively

# Daily Development Workflow
```

@@ -39,10 +39,10 @@ task-master generate # Update task markd

### Core Files

- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmasterconfig` - AI model configuration (use `task-master models` to modify)
- `scripts/prd.txt` - Product Requirements Document for parsing
- `tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage

### Claude Code Integration Files
@@ -78,23 +78,23 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in

```json
{
	"mcpServers": {
		"task-master-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "your_key_here",
				"PERPLEXITY_API_KEY": "your_key_here",
				"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
				"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
				"XAI_API_KEY": "XAI_API_KEY_HERE",
				"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
				"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
				"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
				"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
			}
		}
	}
	"mcpServers": {
		"task-master-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "your_key_here",
				"PERPLEXITY_API_KEY": "your_key_here",
				"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
				"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
				"XAI_API_KEY": "XAI_API_KEY_HERE",
				"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
				"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
				"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
				"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
			}
		}
	}
}
```

@@ -135,7 +135,7 @@ complexity_report; // = task-master complexity-report
task-master init

# Create or obtain PRD, then parse it
task-master parse-prd .taskmaster/docs/prd.txt
task-master parse-prd scripts/prd.txt

# Analyze complexity and expand tasks
task-master analyze-complexity --research
@@ -208,14 +208,14 @@ Add to `.claude/settings.json`:

```json
{
	"allowedTools": [
		"Edit",
		"Bash(task-master *)",
		"Bash(git commit:*)",
		"Bash(git add:*)",
		"Bash(npm run *)",
		"mcp__task_master_ai__*"
	]
	"allowedTools": [
		"Edit",
		"Bash(task-master *)",
		"Bash(git commit:*)",
		"Bash(git add:*)",
		"Bash(npm run *)",
		"mcp__task_master_ai__*"
	]
}
```

@@ -268,15 +268,15 @@ task-master models --set-fallback gpt-4o-mini

```json
{
	"id": "1.2",
	"title": "Implement user authentication",
	"description": "Set up JWT-based auth system",
	"status": "pending",
	"priority": "high",
	"dependencies": ["1.1"],
	"details": "Use bcrypt for hashing, JWT for tokens...",
	"testStrategy": "Unit tests for auth functions, integration tests for login flow",
	"subtasks": []
	"id": "1.2",
	"title": "Implement user authentication",
	"description": "Set up JWT-based auth system",
	"status": "pending",
	"priority": "high",
	"dependencies": ["1.1"],
	"details": "Use bcrypt for hashing, JWT for tokens...",
	"testStrategy": "Unit tests for auth functions, integration tests for login flow",
	"subtasks": []
}
```

biome.json
@@ -1,47 +0,0 @@
{
	"files": {
		"ignore": [
			"build",
			"coverage",
			".changeset",
			"tasks",
			"package-lock.json",
			"tests/fixture/*.json"
		]
	},
	"formatter": {
		"bracketSpacing": true,
		"enabled": true,
		"indentStyle": "tab",
		"lineWidth": 80
	},
	"javascript": {
		"formatter": {
			"arrowParentheses": "always",
			"quoteStyle": "single",
			"trailingCommas": "none"
		}
	},
	"linter": {
		"rules": {
			"complexity": {
				"noForEach": "off",
				"useOptionalChain": "off"
			},
			"correctness": {
				"noConstantCondition": "off",
				"noUnreachable": "off"
			},
			"suspicious": {
				"noDuplicateTestHooks": "off",
				"noPrototypeBuiltins": "off"
			},
			"style": {
				"noUselessElse": "off",
				"useNodejsImportProtocol": "off",
				"useNumberNamespace": "off",
				"noParameterAssign": "off"
			}
		}
	}
}

@@ -2,82 +2,68 @@

Taskmaster uses two primary methods for configuration:

1. **`.taskmaster/config.json` File (Recommended - New Structure)**
1. **`.taskmasterconfig` File (Project Root - Recommended for most settings)**

   - This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
   - **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
   - **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
   - **Location:** This file is created in the root directory of your project when you run the `task-master models --setup` interactive setup. You typically do this during the initialization sequence. Do not manually edit this file beyond adjusting Temperature and Max Tokens depending on your model.
   - **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models (see the sketch after this example). Manual editing is possible but not recommended unless you understand the structure.
   - **Example Structure:**

```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-3-7-sonnet-20250219",
      "maxTokens": 64000,
      "temperature": 0.2,
      "baseURL": "https://api.anthropic.com/v1"
    },
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-pro",
      "maxTokens": 8700,
      "temperature": 0.1,
      "baseURL": "https://api.perplexity.ai/v1"
    },
    "fallback": {
      "provider": "anthropic",
      "modelId": "claude-3-5-sonnet",
      "maxTokens": 64000,
      "temperature": 0.2
    }
  },
  "global": {
    "logLevel": "info",
    "debug": false,
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "projectName": "Your Project Name",
    "ollamaBaseURL": "http://localhost:11434/api",
    "azureBaseURL": "https://your-endpoint.azure.com/",
    "vertexProjectId": "your-gcp-project-id",
    "vertexLocation": "us-central1"
  }
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-3-7-sonnet-20250219",
      "maxTokens": 64000,
      "temperature": 0.2,
      "baseUrl": "https://api.anthropic.com/v1"
    },
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-pro",
      "maxTokens": 8700,
      "temperature": 0.1,
      "baseUrl": "https://api.perplexity.ai/v1"
    },
    "fallback": {
      "provider": "anthropic",
      "modelId": "claude-3-5-sonnet",
      "maxTokens": 64000,
      "temperature": 0.2
    }
  },
  "global": {
    "logLevel": "info",
    "debug": false,
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "projectName": "Your Project Name",
    "ollamaBaseUrl": "http://localhost:11434/api",
    "azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
  }
}
```
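
For instance, the roles shown in the example above could be pinned without the interactive setup — a minimal sketch assuming the documented `--set-<role>` flags; `local-llama` is a hypothetical stand-in for whatever Ollama model you actually run:

```bash
# Pin each role directly instead of running the interactive setup
task-master models --set-main=claude-3-7-sonnet-20250219
task-master models --set-research=sonar-pro
task-master models --set-fallback=claude-3-5-sonnet

# Custom models need a provider flag, e.g. a local Ollama model (hypothetical name)
task-master models --set-main=local-llama --ollama
```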

2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
2. **Environment Variables (`.env` file or MCP `env` block - For API Keys Only)**
   - Used **exclusively** for sensitive API keys and specific endpoint URLs.
   - **Location:**
     - For CLI usage: Create a `.env` file in your project root.
     - For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
   - **Required API Keys (Depending on configured providers):**
     - `ANTHROPIC_API_KEY`: Your Anthropic API key.
     - `PERPLEXITY_API_KEY`: Your Perplexity API key.
     - `OPENAI_API_KEY`: Your OpenAI API key.
     - `GOOGLE_API_KEY`: Your Google API key.
     - `MISTRAL_API_KEY`: Your Mistral API key.
     - `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
     - `OPENROUTER_API_KEY`: Your OpenRouter API key.
     - `XAI_API_KEY`: Your X-AI API key.
   - **Optional Endpoint Overrides:**
     - **Per-role `baseUrl` in `.taskmasterconfig`:** You can add a `baseUrl` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
     - `AZURE_OPENAI_ENDPOINT`: Required if using an Azure OpenAI key (can also be set as `baseUrl` for the Azure model role).
     - `OLLAMA_BASE_URL`: Override the default Ollama API URL (default: `http://localhost:11434/api`).

   - For projects that haven't migrated to the new structure yet.
   - **Location:** Project root directory.
   - **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
   - **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.

## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)

- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
  - For CLI usage: Create a `.env` file in your project root.
  - For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
  - `ANTHROPIC_API_KEY`: Your Anthropic API key.
  - `PERPLEXITY_API_KEY`: Your Perplexity API key.
  - `OPENAI_API_KEY`: Your OpenAI API key.
  - `GOOGLE_API_KEY`: Your Google API key (also used for the Vertex AI provider).
  - `MISTRAL_API_KEY`: Your Mistral API key.
  - `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
  - `OPENROUTER_API_KEY`: Your OpenRouter API key.
  - `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
  - **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
  - `AZURE_OPENAI_ENDPOINT`: Required if using an Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
  - `OLLAMA_BASE_URL`: Override the default Ollama API URL (default: `http://localhost:11434/api`).
  - `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
  - `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
  - `GOOGLE_APPLICATION_CREDENTIALS`: Path to a service account credentials JSON file for Google Cloud auth (an alternative to an API key for Vertex AI).

**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmasterconfig`**, not environment variables.

## Example `.env` File (for API Keys)

@@ -92,20 +78,14 @@ PERPLEXITY_API_KEY=pplx-your-key-here
# Optional Endpoint Overrides
# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api

# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
# VERTEX_LOCATION=us-central1
# GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
```

## Troubleshooting

### Configuration Errors

- If Task Master reports errors about missing configuration or cannot find the config file, run `task-master models --setup` in your project root to create or repair the file.
- For new projects, config will be created at `.taskmaster/config.json`. For legacy projects, you may want to use `task-master migrate` to move to the new structure.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.
- If Task Master reports errors about missing configuration or cannot find `.taskmasterconfig`, run `task-master models --setup` in your project root to create or repair the file.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in `.taskmasterconfig`.

### If `task-master init` doesn't respond:

@@ -122,45 +102,3 @@ git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```

## Provider-Specific Configuration

### Google Vertex AI Configuration

Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:

1. **Prerequisites**:
   - A Google Cloud account with the Vertex AI API enabled
   - Either a Google API key with Vertex AI permissions OR a service account with appropriate roles
   - A Google Cloud project ID
2. **Authentication Options**:
   - **API Key**: Set the `GOOGLE_API_KEY` environment variable
   - **Service Account**: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account JSON file
3. **Required Configuration**:
   - Set `VERTEX_PROJECT_ID` to your Google Cloud project ID
   - Set `VERTEX_LOCATION` to your preferred Google Cloud region (default: us-central1)
4. **Example Setup**:

   ```bash
   # In .env file
   GOOGLE_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXX
   VERTEX_PROJECT_ID=my-gcp-project-123
   VERTEX_LOCATION=us-central1
   ```

   Or, using a service account:

   ```bash
   # In .env file
   GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
   VERTEX_PROJECT_ID=my-gcp-project-123
   VERTEX_LOCATION=us-central1
   ```

5. **In `.taskmasterconfig`**:
   ```json
   "global": {
     "vertexProjectId": "my-gcp-project-123",
     "vertexLocation": "us-central1"
   }
   ```

@@ -5,7 +5,7 @@ Here are some common interactions with Cursor AI when using Task Master:
## Starting a new project

```
I've just initialized a new project with Claude Task Master. I have a PRD at .taskmaster/docs/prd.txt.
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```

@@ -1,235 +0,0 @@
# Migration Guide: New .taskmaster Directory Structure

## Overview

Task Master v3.x introduces a new `.taskmaster/` directory structure to keep your project directories clean and organized. This guide explains the benefits of the new structure and how to migrate existing projects.

## What's New

### Before (Legacy Structure)

```
your-project/
├── tasks/                      # Task files
│   ├── tasks.json
│   ├── task-1.md
│   └── task-2.md
├── scripts/                    # PRD and reports
│   ├── prd.txt
│   ├── example_prd.txt
│   └── task-complexity-report.json
├── .taskmasterconfig           # Configuration
└── ... (your project files)
```

### After (New Structure)

```
your-project/
├── .taskmaster/                # Consolidated Task Master files
│   ├── config.json             # Configuration (was .taskmasterconfig)
│   ├── tasks/                  # Task files
│   │   ├── tasks.json
│   │   ├── task-1.md
│   │   └── task-2.md
│   ├── docs/                   # Project documentation
│   │   └── prd.txt
│   ├── reports/                # Generated reports
│   │   └── task-complexity-report.json
│   └── templates/              # Example/template files
│       └── example_prd.txt
└── ... (your project files)
```

## Benefits of the New Structure

✅ **Cleaner Project Root**: No more scattered Task Master files
✅ **Better Organization**: Logical separation of tasks, docs, reports, and templates
✅ **Hidden by Default**: The `.taskmaster/` directory is hidden from most file browsers
✅ **Future-Proof**: Centralized location for Task Master extensions
✅ **Backward Compatible**: Existing projects continue to work until migrated

## Migration Options

### Option 1: Automatic Migration (Recommended)

Task Master provides a built-in migration command that handles everything automatically:

#### CLI Migration

```bash
# Dry run to see what would be migrated
task-master migrate --dry-run

# Perform the migration with backup
task-master migrate --backup

# Force migration (overwrites existing files)
task-master migrate --force

# Clean up legacy files after migration
task-master migrate --cleanup
```

#### MCP Migration (Cursor/AI Editors)

Ask your AI assistant:

```
Please migrate my Task Master project to the new .taskmaster directory structure
```

### Option 2: Manual Migration

If you prefer to migrate manually:

1. **Create the new directory structure:**

   ```bash
   mkdir -p .taskmaster/{tasks,docs,reports,templates}
   ```

2. **Move your files:**

   ```bash
   # Move tasks
   mv tasks/* .taskmaster/tasks/

   # Move configuration
   mv .taskmasterconfig .taskmaster/config.json

   # Move PRD and documentation
   mv scripts/prd.txt .taskmaster/docs/
   mv scripts/example_prd.txt .taskmaster/templates/

   # Move reports (if they exist)
   mv scripts/task-complexity-report.json .taskmaster/reports/ 2>/dev/null || true
   ```

3. **Clean up empty directories:**
   ```bash
   rmdir tasks scripts 2>/dev/null || true
   ```

## What Gets Migrated

The migration process handles these file types:

### Tasks Directory → `.taskmaster/tasks/`

- `tasks.json`
- Individual task Markdown files (`.md`)

### Scripts Directory → Multiple Destinations

- **PRD files** → `.taskmaster/docs/`
  - `prd.txt`, `requirements.txt`, etc.
- **Example/Template files** → `.taskmaster/templates/`
  - `example_prd.txt`, template files
- **Reports** → `.taskmaster/reports/`
  - `task-complexity-report.json`

### Configuration

- `.taskmasterconfig` → `.taskmaster/config.json`

## After Migration

Once migrated, Task Master will:

✅ **Automatically use** the new directory structure
✅ **Show deprecation warnings** when legacy files are detected
✅ **Create new files** in the proper locations
✅ **Fall back gracefully** to legacy locations if new ones don't exist

### Verification

After migration, verify everything works:

1. **List your tasks:**

   ```bash
   task-master list
   ```

2. **Check your configuration:**

   ```bash
   task-master models
   ```

3. **Generate new task files:**
   ```bash
   task-master generate
   ```

## Troubleshooting

### Migration Issues

**Q: Migration says "no files to migrate"**
A: Your project may already be using the new structure or have no Task Master files to migrate.

**Q: Migration fails with permission errors**
A: Ensure you have write permissions in your project directory.

**Q: Some files weren't migrated**
A: Check the migration output; some files may not match the expected patterns. You can migrate these manually.

### Working with Legacy Projects

If you're working with an older project that hasn't been migrated:

- Task Master will continue to work with the old structure
- You'll see deprecation warnings in the output
- New files will still be created in legacy locations
- Use the migration command when ready to upgrade

### New Project Initialization

New projects automatically use the new structure:

```bash
task-master init  # Creates the .taskmaster/ structure
```

## Path Changes for Developers

If you're developing tools or scripts that interact with Task Master files (a minimal fallback sketch follows these lists):

### Configuration File

- **Old:** `.taskmasterconfig`
- **New:** `.taskmaster/config.json`
- **Fallback:** Task Master checks both locations

### Tasks File

- **Old:** `tasks/tasks.json`
- **New:** `.taskmaster/tasks/tasks.json`
- **Fallback:** Task Master checks both locations

### Reports

- **Old:** `scripts/task-complexity-report.json`
- **New:** `.taskmaster/reports/task-complexity-report.json`
- **Fallback:** Task Master checks both locations

### PRD Files

- **Old:** `scripts/prd.txt`
- **New:** `.taskmaster/docs/prd.txt`
- **Fallback:** Task Master checks both locations
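
If you need the same lookup order in your own tooling, a small shell sketch (not Task Master's actual implementation) reproduces the new-then-legacy fallback described above:

```bash
# Resolve the config file: prefer the new location, fall back to the legacy one
if [ -f .taskmaster/config.json ]; then
  CONFIG_PATH=.taskmaster/config.json
elif [ -f .taskmasterconfig ]; then
  CONFIG_PATH=.taskmasterconfig
else
  echo "No Task Master config found" >&2
  exit 1
fi
echo "Using config at $CONFIG_PATH"
```

The same pattern applies to the tasks file, report, and PRD locations listed above.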

## Need Help?

If you encounter issues during migration:

1. **Check the logs:** Add the `--debug` flag for detailed output
2. **Backup first:** Always use the `--backup` option for safety
3. **Test with dry-run:** Use `--dry-run` to preview changes (a full sequence is sketched below)
4. **Ask for help:** Use our Discord community or GitHub issues
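
Taken together, a cautious upgrade is just the documented commands in sequence:

```bash
# Preview the migration, run it with a backup, then confirm tasks still resolve
task-master migrate --dry-run
task-master migrate --backup
task-master list
```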

---

_This migration guide applies to Task Master v3.x and later. For older versions, please upgrade to the latest version first._
@@ -1,4 +1,4 @@
# Available Models as of May 27, 2025
# Available Models as of May 26, 2025

## Main Models

@@ -20,22 +20,22 @@ npm i -g task-master-ai

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
      }
    }
  }
}
```

@@ -60,12 +60,12 @@ The AI will:
- Set up initial configuration files
- Guide you through the rest of the process

5. Place your PRD document in the `.taskmaster/docs/` directory (e.g., `.taskmaster/docs/prd.txt`)
5. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)

6. **Use natural language commands** to interact with Task Master:

```
Can you parse my PRD at .taskmaster/docs/prd.txt?
Can you parse my PRD at scripts/prd.txt?
What's the next task I should work on?
Can you help me implement task 3?
```
@@ -132,7 +132,7 @@ If you're not using MCP, you can still set up Cursor integration:

1. After initializing your project, open it in Cursor
2. The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
3. Place your PRD document in the `.taskmaster/docs/` directory (e.g., `.taskmaster/docs/prd.txt`)
3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
4. Open Cursor's AI chat and switch to Agent mode

### Alternative MCP Setup in Cursor
@@ -155,13 +155,13 @@ Once configured, you can interact with Task Master's task management commands di
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:

```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at .taskmaster/docs/prd.txt.
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
```

The agent will execute:

```bash
task-master parse-prd .taskmaster/docs/prd.txt
task-master parse-prd scripts/prd.txt
```

This will:
@@ -377,7 +377,7 @@ task-master expand --id=5 --research
### Starting a new project

```
I've just initialized a new project with Claude Task Master. I have a PRD at .taskmaster/docs/prd.txt.
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```


@@ -75,7 +75,7 @@ export async function initializeProjectDirect(args, log, context = {}) {
		resultData = {
			message: 'Project initialized successfully.',
			next_step:
				'Now that the project is initialized, the next step is to create the tasks by parsing a PRD. This will create the tasks folder and the initial task files (tasks folder will be created when parse-prd is run). The parse-prd tool will require a prd.txt file as input (typically found in .taskmaster/docs/ directory). You can create a prd.txt file by asking the user about their idea, and then using the .taskmaster/templates/example_prd.txt file as a template to generate a prd.txt file in .taskmaster/docs/. You may skip all of this if the user already has a prd.txt file. You can THEN use the parse-prd tool to create the tasks. So: step 1 after initialization is to create a prd.txt file in .taskmaster/docs/prd.txt or confirm the user already has one. Step 2 is to use the parse-prd tool to create the tasks. Do not bother looking for tasks after initialization, just use the parse-prd tool to create the tasks after creating a prd.txt from which to parse the tasks. You do NOT need to reinitialize the project to parse-prd.',
				'Now that the project is initialized, the next step is to create the tasks by parsing a PRD. This will create the tasks folder and the initial task files (tasks folder will be created when parse-prd is run). The parse-prd tool will require a prd.txt file as input (typically found in the project root directory, scripts/ directory). You can create a prd.txt file by asking the user about their idea, and then using the scripts/example_prd.txt file as a template to generate a prd.txt file in scripts/. You may skip all of this if the user already has a prd.txt file. You can THEN use the parse-prd tool to create the tasks. So: step 1 after initialization is to create a prd.txt file in scripts/prd.txt or confirm the user already has one. Step 2 is to use the parse-prd tool to create the tasks. Do not bother looking for tasks after initialization, just use the parse-prd tool to create the tasks after creating a prd.txt from which to parse the tasks. You do NOT need to reinitialize the project to parse-prd.',
			...result
		};
		success = true;

@@ -3,7 +3,7 @@
 */

import { moveTask } from '../../../../scripts/modules/task-manager.js';
import { findTasksPath } from '../utils/path-utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
import {
	enableSilentMode,
	disableSilentMode
@@ -58,7 +58,7 @@ export async function moveTaskDirect(args, log, context = {}) {
			}
		};
	}
	tasksPath = findTasksPath(args, log);
	tasksPath = findTasksJsonPath(args, log);
}

// Enable silent mode to prevent console output during MCP operation

@@ -13,8 +13,6 @@ import {
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
import { getDefaultNumTasks } from '../../../../scripts/modules/config-manager.js';
import { resolvePrdPath, resolveProjectPath } from '../utils/path-utils.js';
import { TASKMASTER_TASKS_FILE } from '../../../../src/constants/paths.js';

/**
 * Direct function wrapper for parsing PRD documents and generating tasks.
@@ -51,20 +49,7 @@ export async function parsePRDDirect(args, log, context = {}) {
			}
		};
	}

	// Resolve input path using path utilities
	let inputPath;
	if (inputArg) {
		try {
			inputPath = resolvePrdPath({ input: inputArg, projectRoot }, session);
		} catch (error) {
			logWrapper.error(`Error resolving PRD path: ${error.message}`);
			return {
				success: false,
				error: { code: 'FILE_NOT_FOUND', message: error.message }
			};
		}
	} else {
	if (!inputArg) {
		logWrapper.error('parsePRDDirect called without input path');
		return {
			success: false,
@@ -72,13 +57,11 @@ export async function parsePRDDirect(args, log, context = {}) {
		};
	}

	// Resolve output path - use new path utilities for default
	// Resolve input and output paths relative to projectRoot
	const inputPath = path.resolve(projectRoot, inputArg);
	const outputPath = outputArg
		? path.isAbsolute(outputArg)
			? outputArg
			: path.resolve(projectRoot, outputArg)
		: resolveProjectPath(TASKMASTER_TASKS_FILE, session) ||
			path.resolve(projectRoot, TASKMASTER_TASKS_FILE);
		? path.resolve(projectRoot, outputArg)
		: path.resolve(projectRoot, 'tasks', 'tasks.json'); // Default output path

	// Check if input file exists
	if (!fs.existsSync(inputPath)) {
@@ -96,12 +79,17 @@ export async function parsePRDDirect(args, log, context = {}) {
				logWrapper.info(`Creating output directory: ${outputDir}`);
				fs.mkdirSync(outputDir, { recursive: true });
			}
		} catch (error) {
			const errorMsg = `Failed to create output directory ${outputDir}: ${error.message}`;
			logWrapper.error(errorMsg);
		} catch (dirError) {
			logWrapper.error(
				`Failed to create output directory ${outputDir}: ${dirError.message}`
			);
			// Return an error response immediately if dir creation fails
			return {
				success: false,
				error: { code: 'DIRECTORY_CREATE_FAILED', message: errorMsg }
				error: {
					code: 'DIRECTORY_CREATION_ERROR',
					message: `Failed to create output directory: ${dirError.message}`
				}
			};
		}

@@ -109,7 +97,7 @@ export async function parsePRDDirect(args, log, context = {}) {
	if (numTasksArg) {
		numTasks =
			typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
		if (Number.isNaN(numTasks) || numTasks <= 0) {
		if (isNaN(numTasks) || numTasks <= 0) {
			// Ensure positive number
			numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
			logWrapper.warn(

@@ -8,14 +8,14 @@ import {
	readComplexityReport,
	readJSON
} from '../../../../scripts/modules/utils.js';
import { findTasksPath } from '../utils/path-utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';

/**
 * Direct function wrapper for getting task details.
 *
 * @param {Object} args - Command arguments.
 * @param {string} args.id - Task ID to show.
 * @param {string} [args.file] - Optional path to the tasks file (passed to findTasksPath).
 * @param {string} [args.file] - Optional path to the tasks file (passed to findTasksJsonPath).
 * @param {string} args.reportPath - Explicit path to the complexity report file.
 * @param {string} [args.status] - Optional status to filter subtasks by.
 * @param {string} args.projectRoot - Absolute path to the project root directory (already normalized by tool).
@@ -37,7 +37,7 @@ export async function showTaskDirect(args, log) {
	let tasksJsonPath;
	try {
		// Use the projectRoot passed directly from args
		tasksJsonPath = findTasksPath(
		tasksJsonPath = findTasksJsonPath(
			{ projectRoot: projectRoot, file: file },
			log
		);

@@ -33,7 +33,7 @@ import { modelsDirect } from './direct-functions/models.js';
import { moveTaskDirect } from './direct-functions/move-task.js';

// Re-export utility functions
export { findTasksPath } from './utils/path-utils.js';
export { findTasksJsonPath } from './utils/path-utils.js';

// Use Map for potential future enhancements like introspection or dynamic dispatch
export const directFunctions = new Map([

@@ -1,217 +1,436 @@
/**
 * path-utils.js
 * Utility functions for file path operations in Task Master
 *
 * This module provides robust path resolution for both:
 * 1. PACKAGE PATH: Where task-master code is installed
 *    (global node_modules OR local ./node_modules/task-master OR direct from repo)
 * 2. PROJECT PATH: Where user's tasks.json resides (typically user's project root)
 */

import path from 'path';
import fs from 'fs';
import {
	findTasksPath as coreFindTasksPath,
	findPRDPath as coreFindPrdPath,
	findComplexityReportPath as coreFindComplexityReportPath,
	findProjectRoot as coreFindProjectRoot
} from '../../../../src/utils/path-utils.js';
import { PROJECT_MARKERS } from '../../../../src/constants/paths.js';
import { fileURLToPath } from 'url';
import os from 'os';

// Store last found project root to improve performance on subsequent calls (primarily for CLI)
export let lastFoundProjectRoot = null;

// Project marker files that indicate a potential project root
export const PROJECT_MARKERS = [
	// Task Master specific
	'tasks.json',
	'tasks/tasks.json',

	// Common version control
	'.git',
	'.svn',

	// Common package files
	'package.json',
	'pyproject.toml',
	'Gemfile',
	'go.mod',
	'Cargo.toml',

	// Common IDE/editor folders
	'.cursor',
	'.vscode',
	'.idea',

	// Common dependency directories (check if directory)
	'node_modules',
	'venv',
	'.venv',

	// Common config files
	'.env',
	'.eslintrc',
	'tsconfig.json',
	'babel.config.js',
	'jest.config.js',
	'webpack.config.js',

	// Common CI/CD files
	'.github/workflows',
	'.gitlab-ci.yml',
	'.circleci/config.yml'
];

/**
 * MCP-specific path utilities that extend core path utilities with session support
 * This module handles session-specific path resolution for the MCP server
 * Gets the path to the task-master package installation directory
 * NOTE: This might become unnecessary if CLI fallback in MCP utils is removed.
 * @returns {string} - Absolute path to the package installation directory
 */
export function getPackagePath() {
	// When running from source, __dirname is the directory containing this file
	// When running from npm, we need to find the package root
	const thisFilePath = fileURLToPath(import.meta.url);
	const thisFileDir = path.dirname(thisFilePath);

/**
 * Cache for last found project root to improve performance
 */
export const lastFoundProjectRoot = null;

/**
 * Find tasks.json file with MCP support
 * @param {string} [explicitPath] - Explicit path to tasks.json (highest priority)
 * @param {Object} [args] - Arguments object for context
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to tasks.json or null if not found
 */
export function findTasksPathCore(explicitPath, args = null, log = null) {
	return coreFindTasksPath(explicitPath, args, log);
	// Navigate from core/utils up to the package root
	// In dev: /path/to/task-master/mcp-server/src/core/utils -> /path/to/task-master
	// In npm: /path/to/node_modules/task-master/mcp-server/src/core/utils -> /path/to/node_modules/task-master
	return path.resolve(thisFileDir, '../../../../');
}

/**
 * Find PRD file with MCP support
 * @param {string} [explicitPath] - Explicit path to PRD file (highest priority)
 * @param {Object} [args] - Arguments object for context
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to PRD file or null if not found
 * Finds the absolute path to the tasks.json file based on project root and arguments.
 * @param {Object} args - Command arguments, potentially including 'projectRoot' and 'file'.
 * @param {Object} log - Logger object.
 * @returns {string} - Absolute path to the tasks.json file.
 * @throws {Error} - If tasks.json cannot be found.
 */
export function findPrdPath(explicitPath, args = null, log = null) {
	return coreFindPrdPath(explicitPath, args, log);
export function findTasksJsonPath(args, log) {
	// PRECEDENCE ORDER for finding tasks.json:
	// 1. Explicitly provided `projectRoot` in args (Highest priority, expected in MCP context)
	// 2. Previously found/cached `lastFoundProjectRoot` (primarily for CLI performance)
	// 3. Search upwards from current working directory (`process.cwd()`) - CLI usage

	// 1. If project root is explicitly provided (e.g., from MCP session), use it directly
	if (args.projectRoot) {
		const projectRoot = args.projectRoot;
		log.info(`Using explicitly provided project root: ${projectRoot}`);
		try {
			// This will throw if tasks.json isn't found within this root
			return findTasksJsonInDirectory(projectRoot, args.file, log);
		} catch (error) {
			// Include debug info in error
			const debugInfo = {
				projectRoot,
				currentDir: process.cwd(),
				serverDir: path.dirname(process.argv[1]),
				possibleProjectRoot: path.resolve(
					path.dirname(process.argv[1]),
					'../..'
				),
				lastFoundProjectRoot,
				searchedPaths: error.message
			};

			error.message = `Tasks file not found in any of the expected locations relative to project root "${projectRoot}" (from session).\nDebug Info: ${JSON.stringify(debugInfo, null, 2)}`;
			throw error;
		}
	}

	// --- Fallback logic primarily for CLI or when projectRoot isn't passed ---

	// 2. If we have a last known project root that worked, try it first
	if (lastFoundProjectRoot) {
		log.info(`Trying last known project root: ${lastFoundProjectRoot}`);
		try {
			// Use the cached root
			const tasksPath = findTasksJsonInDirectory(
				lastFoundProjectRoot,
				args.file,
				log
			);
			return tasksPath; // Return if found in cached root
		} catch (error) {
			log.info(
				`Task file not found in last known project root, continuing search.`
			);
			// Continue with search if not found in cache
		}
	}

	// 3. Start search from current directory (most common CLI scenario)
	const startDir = process.cwd();
	log.info(
		`Searching for tasks.json starting from current directory: ${startDir}`
	);

	// Try to find tasks.json by walking up the directory tree from cwd
	try {
		// This will throw if not found in the CWD tree
		return findTasksJsonWithParentSearch(startDir, args.file, log);
	} catch (error) {
		// If all attempts fail, augment and throw the original error from CWD search
		error.message = `${error.message}\n\nPossible solutions:\n1. Run the command from your project directory containing tasks.json\n2. Use --project-root=/path/to/project to specify the project location (if using CLI)\n3. Ensure the project root is correctly passed from the client (if using MCP)\n\nCurrent working directory: ${startDir}\nLast known project root: ${lastFoundProjectRoot}\nProject root from args: ${args.projectRoot}`;
		throw error;
	}
}

/**
 * Find complexity report file with MCP support
 * @param {string} [explicitPath] - Explicit path to complexity report (highest priority)
 * @param {Object} [args] - Arguments object for context
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to complexity report or null if not found
 * Check if a directory contains any project marker files or directories
 * @param {string} dirPath - Directory to check
 * @returns {boolean} - True if the directory contains any project markers
 */
export function findComplexityReportPathCore(
	explicitPath,
	args = null,
	log = null
) {
	return coreFindComplexityReportPath(explicitPath, args, log);
function hasProjectMarkers(dirPath) {
	return PROJECT_MARKERS.some((marker) => {
		const markerPath = path.join(dirPath, marker);
		// Check if the marker exists as either a file or directory
		return fs.existsSync(markerPath);
	});
}

/**
 * Resolve tasks.json path from arguments
 * Prioritizes explicit path parameter, then uses fallback logic
 * @param {Object} args - Arguments object containing projectRoot and optional file path
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to tasks.json or null if not found
 * Search for tasks.json in a specific directory
 * @param {string} dirPath - Directory to search in
 * @param {string} explicitFilePath - Optional explicit file path relative to dirPath
 * @param {Object} log - Logger object
 * @returns {string} - Absolute path to tasks.json
 * @throws {Error} - If tasks.json cannot be found
 */
export function resolveTasksPath(args, log = null) {
	// Get explicit path from args.file if provided
	const explicitPath = args?.file;
	const projectRoot = args?.projectRoot;
function findTasksJsonInDirectory(dirPath, explicitFilePath, log) {
	const possiblePaths = [];

	// If explicit path is provided and absolute, use it directly
	if (explicitPath && path.isAbsolute(explicitPath)) {
		return explicitPath;
	// 1. If a file is explicitly provided relative to dirPath
	if (explicitFilePath) {
		possiblePaths.push(path.resolve(dirPath, explicitFilePath));
	}

	// If explicit path is relative, resolve it relative to projectRoot
	if (explicitPath && projectRoot) {
		return path.resolve(projectRoot, explicitPath);
	// 2. Check the standard locations relative to dirPath
	possiblePaths.push(
		path.join(dirPath, 'tasks.json'),
		path.join(dirPath, 'tasks', 'tasks.json')
	);

	log.info(`Checking potential task file paths: ${possiblePaths.join(', ')}`);

	// Find the first existing path
	for (const p of possiblePaths) {
		log.info(`Checking if exists: ${p}`);
		const exists = fs.existsSync(p);
		log.info(`Path ${p} exists: ${exists}`);

		if (exists) {
			log.info(`Found tasks file at: ${p}`);
			// Store the project root for future use
			lastFoundProjectRoot = dirPath;
			return p;
		}
	}

	// Use core findTasksPath with explicit path and projectRoot context
	if (projectRoot) {
		return coreFindTasksPath(explicitPath, { projectRoot }, log);
	}

	// Fallback to core function without projectRoot context
	return coreFindTasksPath(explicitPath, null, log);
	// If no file was found, throw an error
	const error = new Error(
		`Tasks file not found in any of the expected locations relative to ${dirPath}: ${possiblePaths.join(', ')}`
	);
	error.code = 'TASKS_FILE_NOT_FOUND';
	throw error;
}

/**
 * Resolve PRD path from arguments
 * @param {Object} args - Arguments object containing projectRoot and optional input path
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to PRD file or null if not found
 * Recursively search for tasks.json in the given directory and parent directories
 * Also looks for project markers to identify potential project roots
 * @param {string} startDir - Directory to start searching from
 * @param {string} explicitFilePath - Optional explicit file path
 * @param {Object} log - Logger object
 * @returns {string} - Absolute path to tasks.json
 * @throws {Error} - If tasks.json cannot be found in any parent directory
 */
export function resolvePrdPath(args, log = null) {
	// Get explicit path from args.input if provided
	const explicitPath = args?.input;
	const projectRoot = args?.projectRoot;
function findTasksJsonWithParentSearch(startDir, explicitFilePath, log) {
	let currentDir = startDir;
	const rootDir = path.parse(currentDir).root;

	// If explicit path is provided and absolute, use it directly
	if (explicitPath && path.isAbsolute(explicitPath)) {
		return explicitPath;
	// Keep traversing up until we hit the root directory
	while (currentDir !== rootDir) {
		// First check for tasks.json directly
		try {
			return findTasksJsonInDirectory(currentDir, explicitFilePath, log);
		} catch (error) {
			// If tasks.json not found but the directory has project markers,
			// log it as a potential project root (helpful for debugging)
			if (hasProjectMarkers(currentDir)) {
				log.info(`Found project markers in ${currentDir}, but no tasks.json`);
			}

			// Move up to parent directory
			const parentDir = path.dirname(currentDir);

			// Check if we've reached the root
			if (parentDir === currentDir) {
				break;
			}

			log.info(
				`Tasks file not found in ${currentDir}, searching in parent directory: ${parentDir}`
			);
			currentDir = parentDir;
		}
	}

	// If explicit path is relative, resolve it relative to projectRoot
	if (explicitPath && projectRoot) {
		return path.resolve(projectRoot, explicitPath);
	}
	// If we've searched all the way to the root and found nothing
	const error = new Error(
		`Tasks file not found in ${startDir} or any parent directory.`
	);
	error.code = 'TASKS_FILE_NOT_FOUND';
	throw error;
}

	// Use core findPRDPath with explicit path and projectRoot context
	if (projectRoot) {
		return coreFindPrdPath(explicitPath, { projectRoot }, log);
	}
// Note: findTasksWithNpmConsideration is not used by findTasksJsonPath and might be legacy or used elsewhere.
// If confirmed unused, it could potentially be removed in a separate cleanup.
function findTasksWithNpmConsideration(startDir, log) {
	// First try our recursive parent search from cwd
	try {
		return findTasksJsonWithParentSearch(startDir, null, log);
	} catch (error) {
		// If that fails, try looking relative to the executable location
		const execPath = process.argv[1];
		const execDir = path.dirname(execPath);
		log.info(`Looking for tasks file relative to executable at: ${execDir}`);

	// Fallback to core function without projectRoot context
	return coreFindPrdPath(explicitPath, null, log);
		try {
			return findTasksJsonWithParentSearch(execDir, null, log);
		} catch (secondError) {
			// If that also fails, check standard locations in user's home directory
			const homeDir = os.homedir();
			log.info(`Looking for tasks file in home directory: ${homeDir}`);

			try {
				// Check standard locations in home dir
				return findTasksJsonInDirectory(
					path.join(homeDir, '.task-master'),
					null,
					log
				);
			} catch (thirdError) {
				// If all approaches fail, throw the original error
				throw error;
			}
		}
	}
}

/**
 * Resolve complexity report path from arguments
 * @param {Object} args - Arguments object containing projectRoot and optional complexityReport path
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to complexity report or null if not found
 * Finds potential PRD document files based on common naming patterns
 * @param {string} projectRoot - The project root directory
 * @param {string|null} explicitPath - Optional explicit path provided by the user
 * @param {Object} log - Logger object
 * @returns {string|null} - The path to the first found PRD file, or null if none found
 */
export function resolveComplexityReportPath(args, log = null) {
	// Get explicit path from args.complexityReport if provided
	const explicitPath = args?.complexityReport;
	const projectRoot = args?.projectRoot;
export function findPRDDocumentPath(projectRoot, explicitPath, log) {
	// If explicit path is provided, check if it exists
	if (explicitPath) {
		const fullPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(projectRoot, explicitPath);

	// If explicit path is provided and absolute, use it directly
	if (explicitPath && path.isAbsolute(explicitPath)) {
		return explicitPath;
		if (fs.existsSync(fullPath)) {
			log.info(`Using provided PRD document path: ${fullPath}`);
			return fullPath;
		} else {
			log.warn(
				`Provided PRD document path not found: ${fullPath}, will search for alternatives`
			);
		}
	}

	// If explicit path is relative, resolve it relative to projectRoot
	if (explicitPath && projectRoot) {
		return path.resolve(projectRoot, explicitPath);
	// Common locations and file patterns for PRD documents
	const commonLocations = [
		'', // Project root
		'scripts/'
	];

	const commonFileNames = ['PRD.md', 'prd.md', 'PRD.txt', 'prd.txt'];

	// Check all possible combinations
	for (const location of commonLocations) {
		for (const fileName of commonFileNames) {
			const potentialPath = path.join(projectRoot, location, fileName);
			if (fs.existsSync(potentialPath)) {
				log.info(`Found PRD document at: ${potentialPath}`);
				return potentialPath;
			}
		}
	}

	// Use core findComplexityReportPath with explicit path and projectRoot context
	if (projectRoot) {
		return coreFindComplexityReportPath(explicitPath, { projectRoot }, log);
	log.warn(`No PRD document found in common locations within ${projectRoot}`);
	return null;
}

export function findComplexityReportPath(projectRoot, explicitPath, log) {
	// If explicit path is provided, check if it exists
	if (explicitPath) {
		const fullPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(projectRoot, explicitPath);

		if (fs.existsSync(fullPath)) {
			log.info(`Using provided PRD document path: ${fullPath}`);
			return fullPath;
		} else {
			log.warn(
				`Provided PRD document path not found: ${fullPath}, will search for alternatives`
			);
		}
	}

	// Fallback to core function without projectRoot context
	return coreFindComplexityReportPath(explicitPath, null, log);
	// Common locations and file patterns for PRD documents
	const commonLocations = [
		'', // Project root
		'scripts/'
	];

	const commonFileNames = [
		'complexity-report.json',
		'task-complexity-report.json'
	];

	// Check all possible combinations
	for (const location of commonLocations) {
		for (const fileName of commonFileNames) {
			const potentialPath = path.join(projectRoot, location, fileName);
			if (fs.existsSync(potentialPath)) {
				log.info(`Found PRD document at: ${potentialPath}`);
				return potentialPath;
			}
		}
	}

	log.warn(`No PRD document found in common locations within ${projectRoot}`);
	return null;
}

/**
 * Resolve any project-relative path from arguments
 * @param {string} relativePath - Relative path to resolve
 * @param {Object} args - Arguments object containing projectRoot
 * @returns {string} - Resolved absolute path
 * Resolves the tasks output directory path
 * @param {string} projectRoot - The project root directory
 * @param {string|null} explicitPath - Optional explicit output path provided by the user
 * @param {Object} log - Logger object
 * @returns {string} - The resolved tasks directory path
 */
export function resolveProjectPath(relativePath, args) {
	// Ensure we have a projectRoot from args
	if (!args?.projectRoot) {
		throw new Error('projectRoot is required in args to resolve project paths');
export function resolveTasksOutputPath(projectRoot, explicitPath, log) {
	// If explicit path is provided, use it
	if (explicitPath) {
		const outputPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(projectRoot, explicitPath);

		log.info(`Using provided tasks output path: ${outputPath}`);
		return outputPath;
	}

	// If already absolute, return as-is
	if (path.isAbsolute(relativePath)) {
		return relativePath;
	// Default output path: tasks/tasks.json in the project root
	const defaultPath = path.resolve(projectRoot, 'tasks', 'tasks.json');
	log.info(`Using default tasks output path: ${defaultPath}`);

	// Ensure the directory exists
	const outputDir = path.dirname(defaultPath);
	if (!fs.existsSync(outputDir)) {
		log.info(`Creating tasks directory: ${outputDir}`);
		fs.mkdirSync(outputDir, { recursive: true });
	}

	// Resolve relative to projectRoot
	return path.resolve(args.projectRoot, relativePath);
	return defaultPath;
}

/**
 * Find project root using core utility
 * @param {string} [startDir] - Directory to start searching from
 * @returns {string|null} - Project root path or null if not found
 * Resolves various file paths needed for MCP operations based on project root
 * @param {string} projectRoot - The project root directory
 * @param {Object} args - Command arguments that may contain explicit paths
 * @param {Object} log - Logger object
 * @returns {Object} - An object containing resolved paths
 */
export function findProjectRoot(startDir) {
	return coreFindProjectRoot(startDir);
export function resolveProjectPaths(projectRoot, args, log) {
	const prdPath = findPRDDocumentPath(projectRoot, args.input, log);
	const tasksJsonPath = resolveTasksOutputPath(projectRoot, args.output, log);

	// You can add more path resolutions here as needed

	return {
		projectRoot,
		prdPath,
		tasksJsonPath
		// Add additional path properties as needed
	};
}

// MAIN EXPORTS FOR MCP TOOLS - these are the functions MCP tools should use

/**
 * Find tasks.json path from arguments - primary MCP function
 * @param {Object} args - Arguments object containing projectRoot and optional file path
 * @param {Object} [log] - Log function to prevent console logging
 * @returns {string|null} - Resolved path to tasks.json or null if not found
 */
export function findTasksPath(args, log = null) {
	return resolveTasksPath(args, log);
}

/**
 * Find complexity report path from arguments - primary MCP function
 * @param {Object} args - Arguments object containing projectRoot and optional complexityReport path
 * @param {Object} [log] - Log function to prevent console logging
 * @returns {string|null} - Resolved path to complexity report or null if not found
 */
export function findComplexityReportPath(args, log = null) {
	return resolveComplexityReportPath(args, log);
}

/**
 * Find PRD path - primary MCP function
 * @param {string} [explicitPath] - Explicit path to PRD file
 * @param {Object} [args] - Arguments object for context (not used in current implementation)
 * @param {Object} [log] - Logger object to prevent console logging
 * @returns {string|null} - Resolved path to PRD file or null if not found
 */
export function findPRDPath(explicitPath, args = null, log = null) {
	return findPrdPath(explicitPath, args, log);
}

// Legacy aliases for backward compatibility - DEPRECATED
export const findTasksJsonPath = findTasksPath;
export const findComplexityReportJsonPath = findComplexityReportPath;

// Re-export PROJECT_MARKERS for MCP tools that import it from this module
export { PROJECT_MARKERS };

@@ -7,10 +7,11 @@ import { z } from 'zod';
import {
handleApiResult,
createErrorResponse,
getProjectRootFromSession,
withNormalizedProjectRoot
} from './utils.js';
import { addDependencyDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the addDependency tool with the MCP server
@@ -43,7 +44,7 @@ export function registerAddDependencyTool(server) {

let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

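Each hunk below makes the same one-line swap inside an `execute` handler wrapped by `withNormalizedProjectRoot`. The wrapper itself lives in `./utils.js` and is not shown in this diff; a minimal sketch of what such a wrapper plausibly does, assuming it only normalizes `args.projectRoot`:

// Hypothetical sketch of a withNormalizedProjectRoot-style higher-order function.
import path from 'path';

function withNormalizedProjectRoot(execute) {
  return async (args, context) => {
    // Guarantee an absolute, normalized projectRoot for the wrapped handler.
    const projectRoot = args.projectRoot
      ? path.resolve(args.projectRoot)
      : process.cwd();
    return execute({ ...args, projectRoot }, context);
  };
}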
@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { addSubtaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the addSubtask tool with the MCP server
@@ -67,7 +67,7 @@ export function registerAddSubtaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { addTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the addTask tool with the MCP server
@@ -70,7 +70,7 @@ export function registerAddTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -12,8 +12,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { analyzeTaskComplexityDirect } from '../core/task-master-core.js'; // Assuming core functions are exported via task-master-core.js
import { findTasksPath } from '../core/utils/path-utils.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the analyze_project_complexity tool
@@ -42,7 +41,7 @@ export function registerAnalyzeProjectComplexityTool(server) {
.string()
.optional()
.describe(
`Output file path relative to project root (default: ${COMPLEXITY_REPORT_FILE}).`
'Output file path relative to project root (default: scripts/task-complexity-report.json).'
),
file: z
.string()
@@ -81,7 +80,7 @@ export function registerAnalyzeProjectComplexityTool(server) {

let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
@@ -95,7 +94,11 @@ export function registerAnalyzeProjectComplexityTool(server) {

const outputPath = args.output
? path.resolve(args.projectRoot, args.output)
: path.resolve(args.projectRoot, COMPLEXITY_REPORT_FILE);
: path.resolve(
args.projectRoot,
'scripts',
'task-complexity-report.json'
);

log.info(`${toolName}: Report output path: ${outputPath}`);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { clearSubtasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the clearSubtasks tool with the MCP server
@@ -48,7 +48,7 @@ export function registerClearSubtasksTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,6 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { complexityReportDirect } from '../core/task-master-core.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
import path from 'path';

/**
@@ -26,7 +25,7 @@ export function registerComplexityReportTool(server) {
.string()
.optional()
.describe(
`Path to the report file (default: ${COMPLEXITY_REPORT_FILE})`
'Path to the report file (default: scripts/task-complexity-report.json)'
),
projectRoot: z
.string()
@@ -41,7 +40,11 @@ export function registerComplexityReportTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
const reportPath = args.file
? path.resolve(args.projectRoot, args.file)
: path.resolve(args.projectRoot, COMPLEXITY_REPORT_FILE);
: path.resolve(
args.projectRoot,
'scripts',
'task-complexity-report.json'
);

const result = await complexityReportDirect(
{

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { expandAllTasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the expandAll tool with the MCP server
@@ -67,7 +67,7 @@ export function registerExpandAllTool(server) {

let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { expandTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the expand-task tool with the MCP server
@@ -54,7 +54,7 @@ export function registerExpandTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { fixDependenciesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the fixDependencies tool with the MCP server
@@ -33,7 +33,7 @@ export function registerFixDependenciesTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { generateTaskFilesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import path from 'path';

/**
@@ -39,7 +39,7 @@ export function registerGenerateTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -11,7 +11,7 @@ import {
} from './utils.js';
import { showTaskDirect } from '../core/task-master-core.js';
import {
findTasksPath,
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';

@@ -77,7 +77,7 @@ export function registerShowTaskTool(server) {
// Resolve the path to tasks.json using the NORMALIZED projectRoot from args
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: projectRoot, file: file },
log
);
@@ -94,10 +94,8 @@ export function registerShowTaskTool(server) {
let complexityReportPath;
try {
complexityReportPath = findComplexityReportPath(
{
projectRoot: projectRoot,
complexityReport: args.complexityReport
},
projectRoot,
args.complexityReport,
log
);
} catch (error) {

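The `@@ -94,10 +94,8 @@` hunk above also changes `findComplexityReportPath` from an args-object call to positional arguments. As a hedged sketch of a shim that would tolerate both shapes during such a migration (`resolveReport` is a hypothetical stand-in for the real lookup, and the default path is the one used throughout this diff):

// Sketch only: tolerate both calling conventions seen in this hunk.
function resolveReport(projectRoot, explicitPath, log) {
  const target = explicitPath || 'scripts/task-complexity-report.json';
  if (log) log.info(`Resolving complexity report at ${target}`);
  return `${projectRoot}/${target}`;
}

function findComplexityReportPathCompat(argsOrRoot, reportOrLog, maybeLog) {
  if (argsOrRoot && typeof argsOrRoot === 'object') {
    // Object form: ({ projectRoot, complexityReport }, log)
    return resolveReport(argsOrRoot.projectRoot, argsOrRoot.complexityReport, reportOrLog);
  }
  // Positional form: (projectRoot, complexityReport, log)
  return resolveReport(argsOrRoot, reportOrLog, maybeLog);
}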
@@ -11,8 +11,8 @@ import {
} from './utils.js';
import { listTasksDirect } from '../core/task-master-core.js';
import {
resolveTasksPath,
resolveComplexityReportPath
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';

/**
@@ -55,10 +55,13 @@ export function registerListTasksTool(server) {
try {
log.info(`Getting tasks with filters: ${JSON.stringify(args)}`);

// Resolve the path to tasks.json using new path utilities
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = resolveTasksPath(args, session);
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
@@ -69,13 +72,14 @@ export function registerListTasksTool(server) {
// Resolve the path to complexity report
let complexityReportPath;
try {
complexityReportPath = resolveComplexityReportPath(args, session);
complexityReportPath = findComplexityReportPath(
args.projectRoot,
args.complexityReport,
log
);
} catch (error) {
log.error(`Error finding complexity report: ${error.message}`);
// This is optional, so we don't fail the operation
complexityReportPath = null;
}

const result = await listTasksDirect(
{
tasksJsonPath: tasksJsonPath,

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { moveTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the moveTask tool with the MCP server
@@ -45,7 +45,7 @@ export function registerMoveTaskTool(server) {
let tasksJsonPath = args.file;

if (!tasksJsonPath) {
tasksJsonPath = findTasksPath(args, log);
tasksJsonPath = findTasksJsonPath(args, log);
}

// Parse comma-separated IDs

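The `// Parse comma-separated IDs` context line above marks where the move-task tool splits its `id` argument; the parsing code itself falls outside this hunk. A minimal sketch of that kind of parsing, assuming plain and dotted subtask IDs such as `5` and `5.2`:

// Sketch only: split "1, 5.2, 7" into trimmed, non-empty ID strings.
function parseTaskIds(raw) {
  return String(raw)
    .split(',')
    .map((id) => id.trim())
    .filter((id) => id.length > 0);
}

// Example: parseTaskIds('1, 5.2,7') -> ['1', '5.2', '7']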
@@ -1,22 +1,22 @@
/**
* tools/next-task.js
* Tool to find the next task to work on based on dependencies and status
* Tool to find the next task to work on
*/

import { z } from 'zod';
import {
createErrorResponse,
handleApiResult,
createErrorResponse,
withNormalizedProjectRoot
} from './utils.js';
import { nextTaskDirect } from '../core/task-master-core.js';
import {
resolveTasksPath,
resolveComplexityReportPath
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';

/**
* Register the nextTask tool with the MCP server
* Register the next-task tool with the MCP server
* @param {Object} server - FastMCP server instance
*/
export function registerNextTaskTool(server) {
@@ -40,10 +40,13 @@ export function registerNextTaskTool(server) {
try {
log.info(`Finding next task with args: ${JSON.stringify(args)}`);

// Resolve the path to tasks.json using new path utilities
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = resolveTasksPath(args, session);
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
@@ -51,16 +54,17 @@ export function registerNextTaskTool(server) {
);
}

// Resolve the path to complexity report (optional)
// Resolve the path to complexity report
let complexityReportPath;
try {
complexityReportPath = resolveComplexityReportPath(args, session);
complexityReportPath = findComplexityReportPath(
args.projectRoot,
args.complexityReport,
log
);
} catch (error) {
log.error(`Error finding complexity report: ${error.message}`);
// This is optional, so we don't fail the operation
complexityReportPath = null;
}

const result = await nextTaskDirect(
{
tasksJsonPath: tasksJsonPath,
@@ -69,10 +73,19 @@ export function registerNextTaskTool(server) {
log
);

log.info(`Next task result: ${result.success ? 'found' : 'none'}`);
if (result.success) {
log.info(
`Successfully found next task: ${result.data?.task?.id || 'No available tasks'}`
);
} else {
log.error(
`Failed to find next task: ${result.error?.message || 'Unknown error'}`
);
}

return handleApiResult(result, log, 'Error finding next task');
} catch (error) {
log.error(`Error finding next task: ${error.message}`);
log.error(`Error in nextTask tool: ${error.message}`);
return createErrorResponse(error.message);
}
})

@@ -4,17 +4,13 @@
*/

import { z } from 'zod';
import path from 'path';
import {
handleApiResult,
withNormalizedProjectRoot,
createErrorResponse
createErrorResponse,
withNormalizedProjectRoot
} from './utils.js';
import { parsePRDDirect } from '../core/task-master-core.js';
import {
PRD_FILE,
TASKMASTER_DOCS_DIR,
TASKMASTER_TASKS_FILE
} from '../../../src/constants/paths.js';

/**
* Register the parse_prd tool
@@ -23,52 +19,80 @@ import {
export function registerParsePRDTool(server) {
server.addTool({
name: 'parse_prd',
description: `Parse a Product Requirements Document (PRD) text file to automatically generate initial tasks. Reinitializing the project is not necessary to run this tool. It is recommended to run parse-prd after initializing the project and creating/importing a prd.txt file in the project root's ${TASKMASTER_DOCS_DIR} directory.`,
description:
"Parse a Product Requirements Document (PRD) text file to automatically generate initial tasks. Reinitializing the project is not necessary to run this tool. It is recommended to run parse-prd after initializing the project and creating/importing a prd.txt file in the project root's scripts/ directory.",
parameters: z.object({
input: z
.string()
.optional()
.default(PRD_FILE)
.default('scripts/prd.txt')
.describe('Absolute path to the PRD document file (.txt, .md, etc.)'),
projectRoot: z
.string()
.optional()
.describe('The directory of the project. Must be an absolute path.'),
output: z
.string()
.optional()
.describe(
`Output path for tasks.json file (default: ${TASKMASTER_TASKS_FILE})`
),
numTasks: z
.string()
.optional()
.describe(
'Approximate number of top-level tasks to generate (default: 10). As the agent, if you have enough information, ensure to enter a number of tasks that would logically scale with project complexity. Avoid entering numbers above 50 due to context window limitations.'
),
output: z
.string()
.optional()
.describe(
'Output path for tasks.json file (default: tasks/tasks.json)'
),
force: z
.boolean()
.optional()
.default(false)
.describe('Overwrite existing output file without prompting.'),
research: z
.boolean()
.optional()
.describe(
'Enable Taskmaster to use the research role for potentially more informed task generation. Requires appropriate API key.'
),
append: z
.boolean()
.optional()
.describe('Append generated tasks to existing file.')
.default(false)
.describe('Append generated tasks to existing file.'),
research: z
.boolean()
.optional()
.default(false)
.describe(
'Use the research model for research-backed task generation, providing more comprehensive, accurate and up-to-date task details.'
),
projectRoot: z
.string()
.describe('The directory of the project. Must be an absolute path.')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
const toolName = 'parse_prd';
try {
const result = await parsePRDDirect(args, log, { session });
return handleApiResult(result, log);
log.info(
`Executing ${toolName} tool with args: ${JSON.stringify(args)}`
);

// Call Direct Function - Pass relevant args including projectRoot
const result = await parsePRDDirect(
{
input: args.input,
output: args.output,
numTasks: args.numTasks,
force: args.force,
append: args.append,
research: args.research,
projectRoot: args.projectRoot
},
log,
{ session }
);

log.info(
`${toolName}: Direct function result: success=${result.success}`
);
return handleApiResult(result, log, 'Error parsing PRD');
} catch (error) {
log.error(`Error in parse_prd: ${error.message}`);
return createErrorResponse(`Failed to parse PRD: ${error.message}`);
log.error(
`Critical error in ${toolName} tool execute: ${error.message}`
);
return createErrorResponse(
`Internal tool error (${toolName}): ${error.message}`
);
}
})
});

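The parse_prd rewrite above replaces a thin pass-through of `args` with an explicit argument object and per-step logging. For context, a small sketch of how the zod schema in `parameters` validates a call before `execute` runs (FastMCP drives this internally; the schema shape below is taken from the hunk, the invocation is illustrative):

// Sketch only: validating parse_prd-style args with zod.
import { z } from 'zod';

const parsePrdParams = z.object({
  input: z.string().optional().default('scripts/prd.txt'),
  output: z.string().optional(),
  numTasks: z.string().optional(),
  force: z.boolean().optional().default(false),
  append: z.boolean().optional().default(false),
  research: z.boolean().optional().default(false),
  projectRoot: z.string().optional()
});

const parsed = parsePrdParams.parse({ projectRoot: '/abs/path' });
// parsed.input === 'scripts/prd.txt'; parsed.force === false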
@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { removeDependencyDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the removeDependency tool with the MCP server
@@ -42,7 +42,7 @@ export function registerRemoveDependencyTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { removeSubtaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the removeSubtask tool with the MCP server
@@ -53,7 +53,7 @@ export function registerRemoveSubtaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { removeTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the remove-task tool with the MCP server
@@ -42,7 +42,7 @@ export function registerRemoveTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -14,7 +14,7 @@ import {
nextTaskDirect
} from '../core/task-master-core.js';
import {
findTasksPath,
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';
import { TASK_STATUS_OPTIONS } from '../../../src/constants/task-status.js';
@@ -56,7 +56,7 @@ export function registerSetTaskStatusTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
@@ -70,10 +70,8 @@ export function registerSetTaskStatusTool(server) {
let complexityReportPath;
try {
complexityReportPath = findComplexityReportPath(
{
projectRoot: args.projectRoot,
complexityReport: args.complexityReport
},
args.projectRoot,
args.complexityReport,
log
);
} catch (error) {

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { updateSubtaskByIdDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the update-subtask tool with the MCP server
@@ -20,7 +20,7 @@ export function registerUpdateSubtaskTool(server) {
server.addTool({
name: 'update_subtask',
description:
'Appends timestamped information to a specific subtask without replacing existing content. If you just want to update the subtask status, use set_task_status instead.',
'Appends timestamped information to a specific subtask without replacing existing content',
parameters: z.object({
id: z
.string()
@@ -44,7 +44,7 @@ export function registerUpdateSubtaskTool(server) {

let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { updateTaskByIdDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the update-task tool with the MCP server
@@ -48,7 +48,7 @@ export function registerUpdateTaskTool(server) {

let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { updateTasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the update tool with the MCP server
@@ -56,7 +56,7 @@ export function registerUpdateTool(server) {

let tasksJsonPath;
try {
tasksJsonPath = findTasksPath({ projectRoot, file }, log);
tasksJsonPath = findTasksJsonPath({ projectRoot, file }, log);
log.info(`${toolName}: Resolved tasks path: ${tasksJsonPath}`);
} catch (error) {
log.error(`${toolName}: Error finding tasks.json: ${error.message}`);

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { validateDependenciesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
* Register the validateDependencies tool with the MCP server
@@ -34,7 +34,7 @@ export function registerValidateDependenciesTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

package-lock.json (generated, 21673 lines): file diff suppressed because it is too large.

package.json (13 lines):
@@ -21,8 +21,8 @@
"release": "changeset publish",
"inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
"mcp-server": "node mcp-server/server.js",
"format-check": "biome format .",
"format": "biome format . --write"
"format-check": "prettier --check .",
"format": "prettier --write ."
},
"keywords": [
"claude",
@@ -39,17 +39,14 @@
"author": "Eyal Toledano",
"license": "MIT WITH Commons-Clause",
"dependencies": {
"@ai-sdk/amazon-bedrock": "^2.2.9",
"@ai-sdk/anthropic": "^1.2.10",
"@ai-sdk/azure": "^1.3.17",
"@ai-sdk/google": "^1.2.13",
"@ai-sdk/google-vertex": "^2.2.23",
"@ai-sdk/mistral": "^1.2.7",
"@ai-sdk/openai": "^1.3.20",
"@ai-sdk/perplexity": "^1.1.7",
"@ai-sdk/xai": "^1.2.15",
"@anthropic-ai/sdk": "^0.39.0",
"@aws-sdk/credential-providers": "^3.817.0",
"@openrouter/ai-sdk-provider": "^0.4.5",
"ai": "^4.3.10",
"boxen": "^8.0.1",
@@ -74,7 +71,7 @@
"zod": "^3.23.8"
},
"engines": {
"node": ">=18.0.0"
"node": ">=14.0.0"
},
"repository": {
"type": "git",
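Alongside the formatter swap (biome back to prettier), the hunk above lowers the engines.node floor from 18 to 14 on the old branch. As a hedged sketch of a runtime guard matching the v0.16.0-rc requirement (the release notes pin the minimum at Node 18; the guard itself is illustrative, not part of this diff):

// Sketch only: fail fast when the running Node is older than the engines floor.
const [major] = process.versions.node.split('.').map(Number);
if (major < 18) {
  console.error(`Node >= 18 required, found ${process.versions.node}`);
  process.exit(1);
}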
@@ -95,11 +92,10 @@
"src/**"
],
"overrides": {
"node-fetch": "^2.6.12",
"node-fetch": "^3.3.2",
"whatwg-url": "^11.0.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@changesets/changelog-github": "^0.5.1",
"@changesets/cli": "^2.28.1",
"@types/jest": "^29.5.14",
@@ -108,6 +104,7 @@
"jest": "^29.7.0",
"jest-environment-node": "^29.7.0",
"mock-fs": "^5.5.0",
"node-fetch": "^3.3.2",
"prettier": "^3.5.3",
"react": "^18.3.1",
"supertest": "^7.1.0",

scripts/init.js (197 lines):
@@ -25,17 +25,6 @@ import gradient from 'gradient-string';
import { isSilentMode } from './modules/utils.js';
import { convertAllCursorRulesToRooRules } from './modules/rule-transformer.js';
import { execSync } from 'child_process';
import {
EXAMPLE_PRD_FILE,
TASKMASTER_CONFIG_FILE,
TASKMASTER_TEMPLATES_DIR,
TASKMASTER_DIR,
TASKMASTER_TASKS_DIR,
TASKMASTER_DOCS_DIR,
TASKMASTER_REPORTS_DIR,
ENV_EXAMPLE_FILE,
GITIGNORE_FILE
} from '../src/constants/paths.js';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
@@ -173,7 +162,8 @@ alias taskmaster='task-master'
log('success', `Added Task Master aliases to ${shellConfigFile}`);
log(
'info',
`To use the aliases in your current terminal, run: source ${shellConfigFile}`
'To use the aliases in your current terminal, run: source ' +
shellConfigFile
);

return true;
@@ -243,7 +233,7 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
case 'boomerang-rules':
case 'code-rules':
case 'debug-rules':
case 'test-rules': {
case 'test-rules':
// Extract the mode name from the template name (e.g., 'architect' from 'architect-rules')
const mode = templateName.split('-')[0];
sourcePath = path.join(
@@ -256,7 +246,6 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
templateName
);
break;
}
default:
// For other files like env.example, gitignore, etc. that don't have direct equivalents
sourcePath = path.join(__dirname, '..', 'assets', templateName);
@@ -297,7 +286,10 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {

if (newLines.length > 0) {
// Add a comment to separate the original content from our additions
const updatedContent = `${existingContent.trim()}\n\n# Added by Task Master AI\n${newLines.join('\n')}`;
const updatedContent =
existingContent.trim() +
'\n\n# Added by Claude Task Master\n' +
newLines.join('\n');
fs.writeFileSync(targetPath, updatedContent);
log('success', `Updated ${targetPath} with additional entries`);
} else {
@@ -315,7 +307,10 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
const existingContent = fs.readFileSync(targetPath, 'utf8');

// Add a separator comment before appending our content
const updatedContent = `${existingContent.trim()}\n\n# Added by Task Master - Development Workflow Rules\n\n${content}`;
const updatedContent =
existingContent.trim() +
'\n\n# Added by Task Master - Development Workflow Rules\n\n' +
content;
fs.writeFileSync(targetPath, updatedContent);
log('success', `Updated ${targetPath} with additional rules`);
return;
@@ -395,7 +390,7 @@ async function initializeProject(options = {}) {
};
}

createProjectStructure(addAliases, dryRun, options);
createProjectStructure(addAliases, dryRun);
} else {
// Interactive logic
log('info', 'Required options not provided, proceeding with prompts.');
@@ -451,7 +446,7 @@ async function initializeProject(options = {}) {
}

// Create structure using only necessary values
createProjectStructure(addAliasesPrompted, dryRun, options);
createProjectStructure(addAliasesPrompted, dryRun);
} catch (error) {
rl.close();
log('error', `Error during initialization process: ${error.message}`);
@@ -470,29 +465,29 @@ function promptQuestion(rl, question) {
}

// Function to create the project structure
function createProjectStructure(addAliases, dryRun, options) {
function createProjectStructure(addAliases, dryRun) {
const targetDir = process.cwd();
log('info', `Initializing project in ${targetDir}`);

// Define Roo modes locally (external integration, not part of core Task Master)
const ROO_MODES = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test'];

// Create directories
ensureDirectoryExists(path.join(targetDir, '.cursor/rules'));
ensureDirectoryExists(path.join(targetDir, '.cursor', 'rules'));

// Create Roo directories
ensureDirectoryExists(path.join(targetDir, '.roo'));
ensureDirectoryExists(path.join(targetDir, '.roo/rules'));
for (const mode of ROO_MODES) {
ensureDirectoryExists(path.join(targetDir, '.roo', 'rules'));
for (const mode of [
'architect',
'ask',
'boomerang',
'code',
'debug',
'test'
]) {
ensureDirectoryExists(path.join(targetDir, '.roo', `rules-${mode}`));
}

// Create NEW .taskmaster directory structure (using constants)
ensureDirectoryExists(path.join(targetDir, TASKMASTER_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_TASKS_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_DOCS_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_REPORTS_DIR));
ensureDirectoryExists(path.join(targetDir, TASKMASTER_TEMPLATES_DIR));
ensureDirectoryExists(path.join(targetDir, 'scripts'));
ensureDirectoryExists(path.join(targetDir, 'tasks'));

// Setup MCP configuration for integration with Cursor
setupMCPConfiguration(targetDir);
@@ -505,44 +500,44 @@ function createProjectStructure(addAliases, dryRun, options) {
// Copy .env.example
copyTemplateFile(
'env.example',
path.join(targetDir, ENV_EXAMPLE_FILE),
path.join(targetDir, '.env.example'),
replacements
);

// Copy .taskmasterconfig with project name to NEW location
// Copy .taskmasterconfig with project name
copyTemplateFile(
'.taskmasterconfig',
path.join(targetDir, TASKMASTER_CONFIG_FILE),
path.join(targetDir, '.taskmasterconfig'),
{
...replacements
}
);

// Copy .gitignore
copyTemplateFile('gitignore', path.join(targetDir, GITIGNORE_FILE));
copyTemplateFile('gitignore', path.join(targetDir, '.gitignore'));

// Copy dev_workflow.mdc
copyTemplateFile(
'dev_workflow.mdc',
path.join(targetDir, '.cursor/rules/dev_workflow.mdc')
path.join(targetDir, '.cursor', 'rules', 'dev_workflow.mdc')
);

// Copy taskmaster.mdc
copyTemplateFile(
'taskmaster.mdc',
path.join(targetDir, '.cursor/rules/taskmaster.mdc')
path.join(targetDir, '.cursor', 'rules', 'taskmaster.mdc')
);

// Copy cursor_rules.mdc
copyTemplateFile(
'cursor_rules.mdc',
path.join(targetDir, '.cursor/rules/cursor_rules.mdc')
path.join(targetDir, '.cursor', 'rules', 'cursor_rules.mdc')
);

// Copy self_improve.mdc
copyTemplateFile(
'self_improve.mdc',
path.join(targetDir, '.cursor/rules/self_improve.mdc')
path.join(targetDir, '.cursor', 'rules', 'self_improve.mdc')
);

// Generate Roo rules from Cursor rules
@@ -556,15 +551,26 @@ function createProjectStructure(addAliases, dryRun, options) {
copyTemplateFile('.roomodes', path.join(targetDir, '.roomodes'));

// Copy Roo rule files for each mode
for (const mode of ROO_MODES) {
const rooModes = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test'];
for (const mode of rooModes) {
copyTemplateFile(
`${mode}-rules`,
path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`)
);
}

// Copy example_prd.txt to NEW location
copyTemplateFile('example_prd.txt', path.join(targetDir, EXAMPLE_PRD_FILE));
// Copy example_prd.txt
copyTemplateFile(
'example_prd.txt',
path.join(targetDir, 'scripts', 'example_prd.txt')
);

// // Create main README.md
// copyTemplateFile(
// 'README-task-master.md',
// path.join(targetDir, 'README-task-master.md'),
// replacements
// );

// Initialize git repository if git is available
try {
@@ -601,7 +607,7 @@ function createProjectStructure(addAliases, dryRun, options) {
}

// === Add Model Configuration Step ===
if (!isSilentMode() && !dryRun && !options?.yes) {
if (!isSilentMode() && !dryRun) {
console.log(
boxen(chalk.cyan('Configuring AI Models...'), {
padding: 0.5,
@@ -632,12 +638,6 @@ function createProjectStructure(addAliases, dryRun, options) {
);
} else if (dryRun) {
log('info', 'DRY RUN: Skipping interactive model setup.');
} else if (options?.yes) {
log('info', 'Skipping interactive model setup due to --yes flag.');
log(
'info',
'Default AI models will be used. You can configure different models later using "task-master models --setup" or "task-master models --set-..." commands.'
);
}
// ====================================

@@ -645,9 +645,11 @@ function createProjectStructure(addAliases, dryRun, options) {
if (!isSilentMode()) {
console.log(
boxen(
`${warmGradient.multiline(
warmGradient.multiline(
figlet.textSync('Success!', { font: 'Standard' })
)}\n${chalk.green('Project initialized successfully!')}`,
) +
'\n' +
chalk.green('Project initialized successfully!'),
{
padding: 1,
margin: 1,
@@ -662,29 +664,76 @@ function createProjectStructure(addAliases, dryRun, options) {
if (!isSilentMode()) {
console.log(
boxen(
`${chalk.cyan.bold('Things you should do next:')}\n\n${chalk.white('1. ')}${chalk.yellow(
'Configure AI models (if needed) and add API keys to `.env`'
)}\n${chalk.white(' ├─ ')}${chalk.dim('Models: Use `task-master models` commands')}\n${chalk.white(' └─ ')}${chalk.dim(
'Keys: Add provider API keys to .env (or inside the MCP config file i.e. .cursor/mcp.json)'
)}\n${chalk.white('2. ')}${chalk.yellow(
'Discuss your idea with AI and ask for a PRD using example_prd.txt, and save it to scripts/PRD.txt'
)}\n${chalk.white('3. ')}${chalk.yellow(
'Ask Cursor Agent (or run CLI) to parse your PRD and generate initial tasks:'
)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('parse_prd')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master parse-prd scripts/prd.txt')}\n${chalk.white('4. ')}${chalk.yellow(
'Ask Cursor to analyze the complexity of the tasks in your PRD using research'
)}\n${chalk.white(' └─ ')}${chalk.dim('MCP Tool: ')}${chalk.cyan('analyze_project_complexity')}${chalk.dim(' | CLI: ')}${chalk.cyan('task-master analyze-complexity')}\n${chalk.white('5. ')}${chalk.yellow(
'Ask Cursor to expand all of your tasks using the complexity analysis'
)}\n${chalk.white('6. ')}${chalk.yellow('Ask Cursor to begin working on the next task')}\n${chalk.white('7. ')}${chalk.yellow(
'Add new tasks anytime using the add-task command or MCP tool'
)}\n${chalk.white('8. ')}${chalk.yellow(
'Ask Cursor to set the status of one or many tasks/subtasks at a time. Use the task id from the task lists.'
)}\n${chalk.white('9. ')}${chalk.yellow(
'Ask Cursor to update all tasks from a specific task id based on new learnings or pivots in your project.'
)}\n${chalk.white('10. ')}${chalk.green.bold('Ship it!')}\n\n${chalk.dim(
'* Review the README.md file to learn how to use other commands via Cursor Agent.'
)}\n${chalk.dim(
'* Use the task-master command without arguments to see all available commands.'
)}`,
chalk.cyan.bold('Things you should do next:') +
'\n\n' +
chalk.white('1. ') +
chalk.yellow(
'Configure AI models (if needed) and add API keys to `.env`'
) +
'\n' +
chalk.white(' ├─ ') +
chalk.dim('Models: Use `task-master models` commands') +
'\n' +
chalk.white(' └─ ') +
chalk.dim(
'Keys: Add provider API keys to .env (or inside the MCP config file i.e. .cursor/mcp.json)'
) +
'\n' +
chalk.white('2. ') +
chalk.yellow(
'Discuss your idea with AI and ask for a PRD using example_prd.txt, and save it to scripts/PRD.txt'
) +
'\n' +
chalk.white('3. ') +
chalk.yellow(
'Ask Cursor Agent (or run CLI) to parse your PRD and generate initial tasks:'
) +
'\n' +
chalk.white(' └─ ') +
chalk.dim('MCP Tool: ') +
chalk.cyan('parse_prd') +
chalk.dim(' | CLI: ') +
chalk.cyan('task-master parse-prd scripts/prd.txt') +
'\n' +
chalk.white('4. ') +
chalk.yellow(
'Ask Cursor to analyze the complexity of the tasks in your PRD using research'
) +
'\n' +
chalk.white(' └─ ') +
chalk.dim('MCP Tool: ') +
chalk.cyan('analyze_project_complexity') +
chalk.dim(' | CLI: ') +
chalk.cyan('task-master analyze-complexity') +
'\n' +
chalk.white('5. ') +
chalk.yellow(
'Ask Cursor to expand all of your tasks using the complexity analysis'
) +
'\n' +
chalk.white('6. ') +
chalk.yellow('Ask Cursor to begin working on the next task') +
'\n' +
chalk.white('7. ') +
chalk.yellow(
'Ask Cursor to set the status of one or many tasks/subtasks at a time. Use the task id from the task lists.'
) +
'\n' +
chalk.white('8. ') +
chalk.yellow(
'Ask Cursor to update all tasks from a specific task id based on new learnings or pivots in your project.'
) +
'\n' +
chalk.white('9. ') +
chalk.green.bold('Ship it!') +
'\n\n' +
chalk.dim(
'* Review the README.md file to learn how to use other commands via Cursor Agent.'
) +
'\n' +
chalk.dim(
'* Use the task-master command without arguments to see all available commands.'
),
{
padding: 1,
margin: 1,

@@ -19,41 +19,18 @@ import {
MODEL_MAP,
getDebugFlag,
getBaseUrlForRole,
isApiKeySet,
getOllamaBaseURL,
getAzureBaseURL,
getVertexProjectId,
getVertexLocation
isApiKeySet
} from './config-manager.js';
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';

// Import provider classes
import {
AnthropicAIProvider,
PerplexityAIProvider,
GoogleAIProvider,
OpenAIProvider,
XAIProvider,
OpenRouterAIProvider,
OllamaAIProvider,
BedrockAIProvider,
AzureProvider,
VertexAIProvider
} from '../../src/ai-providers/index.js';

// Create provider instances
const PROVIDERS = {
anthropic: new AnthropicAIProvider(),
perplexity: new PerplexityAIProvider(),
google: new GoogleAIProvider(),
openai: new OpenAIProvider(),
xai: new XAIProvider(),
openrouter: new OpenRouterAIProvider(),
ollama: new OllamaAIProvider(),
bedrock: new BedrockAIProvider(),
azure: new AzureProvider(),
vertex: new VertexAIProvider()
};
import * as anthropic from '../../src/ai-providers/anthropic.js';
import * as perplexity from '../../src/ai-providers/perplexity.js';
import * as google from '../../src/ai-providers/google.js';
import * as openai from '../../src/ai-providers/openai.js';
import * as xai from '../../src/ai-providers/xai.js';
import * as openrouter from '../../src/ai-providers/openrouter.js';
import * as ollama from '../../src/ai-providers/ollama.js';
// TODO: Import other provider modules when implemented (ollama, etc.)

// Helper function to get cost for a specific model
function _getCostForModel(providerName, modelId) {
@@ -85,6 +62,51 @@ function _getCostForModel(providerName, modelId) {
};
}

// --- Provider Function Map ---
// Maps provider names (lowercase) to their respective service functions
const PROVIDER_FUNCTIONS = {
anthropic: {
generateText: anthropic.generateAnthropicText,
streamText: anthropic.streamAnthropicText,
generateObject: anthropic.generateAnthropicObject
},
perplexity: {
generateText: perplexity.generatePerplexityText,
streamText: perplexity.streamPerplexityText,
generateObject: perplexity.generatePerplexityObject
},
google: {
// Add Google entry
generateText: google.generateGoogleText,
streamText: google.streamGoogleText,
generateObject: google.generateGoogleObject
},
openai: {
// ADD: OpenAI entry
generateText: openai.generateOpenAIText,
streamText: openai.streamOpenAIText,
generateObject: openai.generateOpenAIObject
},
xai: {
// ADD: xAI entry
generateText: xai.generateXaiText,
streamText: xai.streamXaiText,
generateObject: xai.generateXaiObject // Note: Object generation might be unsupported
},
openrouter: {
// ADD: OpenRouter entry
generateText: openrouter.generateOpenRouterText,
streamText: openrouter.streamOpenRouterText,
generateObject: openrouter.generateOpenRouterObject
},
ollama: {
generateText: ollama.generateOllamaText,
streamText: ollama.streamOllamaText,
generateObject: ollama.generateOllamaObject
}
// TODO: Add entries for ollama, etc. when implemented
};
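PROVIDER_FUNCTIONS keys a provider name to one function per service type, which lets the unified runner dispatch with two plain lookups instead of a provider-class instance. A minimal sketch of that dispatch, following the map shape and the error messages used in the hunks below:

// Sketch only: two-step dispatch into PROVIDER_FUNCTIONS.
function getProviderFn(providerFunctions, providerName, serviceType) {
  const fnSet = providerFunctions[providerName?.toLowerCase()];
  if (!fnSet) {
    throw new Error(`Unsupported provider configured: ${providerName}`);
  }
  const fn = fnSet[serviceType]; // 'generateText' | 'streamText' | 'generateObject'
  if (typeof fn !== 'function') {
    throw new Error(`Service '${serviceType}' not implemented for provider ${providerName}`);
  }
  return fn;
}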

// --- Configuration for Retries ---
const MAX_RETRIES = 2;
const INITIAL_RETRY_DELAY_MS = 1000;
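MAX_RETRIES and INITIAL_RETRY_DELAY_MS feed _attemptProviderCallWithRetries further down; the loop body itself is cut from this diff. As a hedged sketch of the usual shape (exponential backoff is an assumption; only the two constants come from the source):

// Sketch only: retry loop driven by the two constants above.
async function withRetries(fn, maxRetries = 2, initialDelayMs = 1000) {
  let attempt = 0;
  for (;;) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxRetries) throw error;
      const delay = initialDelayMs * 2 ** attempt; // back off: 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
      attempt += 1;
    }
  }
}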
@@ -169,9 +191,7 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY',
ollama: 'OLLAMA_API_KEY',
bedrock: 'AWS_ACCESS_KEY_ID',
vertex: 'GOOGLE_API_KEY'
ollama: 'OLLAMA_API_KEY'
};

const envVarName = keyMap[providerName];
@@ -183,11 +203,12 @@ function _resolveApiKey(providerName, session, projectRoot = null) {

const apiKey = resolveEnvVariable(envVarName, session, projectRoot);

// Special handling for providers that can use alternative auth
if (providerName === 'ollama' || providerName === 'bedrock') {
// Special handling for Ollama - API key is optional
if (providerName === 'ollama') {
return apiKey || null;
}

// For all other providers, API key is required
if (!apiKey) {
throw new Error(
`Required API key ${envVarName} for provider '${providerName}' is not set in environment, session, or .env file.`
@@ -208,15 +229,14 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
* @throws {Error} If the call fails after all retries.
*/
async function _attemptProviderCallWithRetries(
provider,
serviceType,
providerApiFn,
callParams,
providerName,
modelId,
attemptRole
) {
let retries = 0;
const fnName = serviceType;
const fnName = providerApiFn.name;

while (retries <= MAX_RETRIES) {
try {
@@ -227,8 +247,8 @@ async function _attemptProviderCallWithRetries(
);
}

// Call the appropriate method on the provider instance
const result = await provider[serviceType](callParams);
// Call the specific provider function directly
const result = await providerApiFn(callParams);

if (getDebugFlag()) {
log(
@@ -330,8 +350,9 @@ async function _unifiedServiceRunner(serviceType, params) {
modelId,
apiKey,
roleParams,
provider,
baseURL,
providerFnSet,
providerApiFn,
baseUrl,
providerResponse,
telemetryData = null;

@@ -370,20 +391,7 @@ async function _unifiedServiceRunner(serviceType, params) {
continue;
}

// Get provider instance
provider = PROVIDERS[providerName?.toLowerCase()];
if (!provider) {
log(
'warn',
`Skipping role '${currentRole}': Provider '${providerName}' not supported.`
);
lastError =
lastError ||
new Error(`Unsupported provider configured: ${providerName}`);
continue;
}

// Check API key if needed
// Check if API key is set for the current provider and role (excluding 'ollama')
if (providerName?.toLowerCase() !== 'ollama') {
if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
log(
@@ -399,70 +407,40 @@ async function _unifiedServiceRunner(serviceType, params) {
}
}

// Get base URL if configured (optional for most providers)
baseURL = getBaseUrlForRole(currentRole, effectiveProjectRoot);

// For Azure, use the global Azure base URL if role-specific URL is not configured
if (providerName?.toLowerCase() === 'azure' && !baseURL) {
baseURL = getAzureBaseURL(effectiveProjectRoot);
log('debug', `Using global Azure base URL: ${baseURL}`);
} else if (providerName?.toLowerCase() === 'ollama' && !baseURL) {
// For Ollama, use the global Ollama base URL if role-specific URL is not configured
baseURL = getOllamaBaseURL(effectiveProjectRoot);
log('debug', `Using global Ollama base URL: ${baseURL}`);
roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
baseUrl = getBaseUrlForRole(currentRole, effectiveProjectRoot);
providerFnSet = PROVIDER_FUNCTIONS[providerName?.toLowerCase()];
if (!providerFnSet) {
log(
'warn',
`Skipping role '${currentRole}': Provider '${providerName}' not supported or map entry missing.`
);
lastError =
lastError ||
new Error(`Unsupported provider configured: ${providerName}`);
continue;
}

providerApiFn = providerFnSet[serviceType];
if (typeof providerApiFn !== 'function') {
log(
'warn',
`Skipping role '${currentRole}': Service type '${serviceType}' not implemented for provider '${providerName}'.`
);
lastError =
lastError ||
new Error(
`Service '${serviceType}' not implemented for provider ${providerName}`
);
continue;
}

// Get AI parameters for the current role
roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
apiKey = _resolveApiKey(
providerName?.toLowerCase(),
session,
effectiveProjectRoot
);

// Prepare provider-specific configuration
let providerSpecificParams = {};

// Handle Vertex AI specific configuration
if (providerName?.toLowerCase() === 'vertex') {
// Get Vertex project ID and location
const projectId =
getVertexProjectId(effectiveProjectRoot) ||
resolveEnvVariable(
'VERTEX_PROJECT_ID',
session,
effectiveProjectRoot
);

const location =
getVertexLocation(effectiveProjectRoot) ||
resolveEnvVariable(
'VERTEX_LOCATION',
session,
effectiveProjectRoot
) ||
'us-central1';

// Get credentials path if available
const credentialsPath = resolveEnvVariable(
'GOOGLE_APPLICATION_CREDENTIALS',
session,
effectiveProjectRoot
);

// Add Vertex-specific parameters
providerSpecificParams = {
projectId,
location,
...(credentialsPath && { credentials: { credentialsFromEnv: true } })
};

log(
'debug',
`Using Vertex AI configuration: Project ID=${projectId}, Location=${location}`
);
}

const messages = [];
if (systemPrompt) {
messages.push({ role: 'system', content: systemPrompt });
@@ -498,15 +476,13 @@ async function _unifiedServiceRunner(serviceType, params) {
maxTokens: roleParams.maxTokens,
temperature: roleParams.temperature,
messages,
...(baseURL && { baseURL }),
baseUrl,
...(serviceType === 'generateObject' && { schema, objectName }),
...providerSpecificParams,
...restApiParams
};

providerResponse = await _attemptProviderCallWithRetries(
provider,
serviceType,
providerApiFn,
callParams,
providerName,
modelId,

@@ -32,8 +32,7 @@ import {
removeTask,
findTaskById,
taskExists,
moveTask,
migrateProject
moveTask
} from './task-manager.js';

import {
@@ -54,12 +53,6 @@ import {
getBaseUrlForRole
} from './config-manager.js';

import {
COMPLEXITY_REPORT_FILE,
PRD_FILE,
TASKMASTER_TASKS_FILE
} from '../../src/constants/paths.js';

import {
displayBanner,
displayHelp,
@@ -72,7 +65,8 @@ import {
stopLoadingIndicator,
displayModelConfiguration,
displayAvailableModels,
displayApiKeyStatus
displayApiKeyStatus,
displayAiUsageSummary
} from './ui.js';

import { initializeProject } from '../init.js';
@@ -87,7 +81,6 @@ import {
TASK_STATUS_OPTIONS
} from '../../src/constants/task-status.js';
import { getTaskMasterVersion } from '../../src/utils/getVersion.js';

/**
* Runs the interactive setup process for model configuration.
* @param {string|null} projectRoot - The resolved project root directory.
@@ -162,11 +155,11 @@ async function runInteractiveSetup(projectRoot) {
}

// Helper function to fetch Ollama models (duplicated for CLI context)
function fetchOllamaModelsCLI(baseURL = 'http://localhost:11434/api') {
function fetchOllamaModelsCLI(baseUrl = 'http://localhost:11434/api') {
return new Promise((resolve) => {
try {
// Parse the base URL to extract hostname, port, and base path
const url = new URL(baseURL);
const url = new URL(baseUrl);
const isHttps = url.protocol === 'https:';
const port = url.port || (isHttps ? 443 : 80);
const basePath = url.pathname.endsWith('/')
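fetchOllamaModelsCLI derives host, port, and base path from the configured base URL; the hunk is cut just as that parsing begins. A short sketch of what the WHATWG URL API yields for the default value:

// Sketch only: the pieces fetchOllamaModelsCLI extracts from its base URL.
const url = new URL('http://localhost:11434/api');
const isHttps = url.protocol === 'https:'; // false
const port = url.port || (isHttps ? 443 : 80); // '11434'
const basePath = url.pathname; // '/api'
// A models request would then target `${basePath}/tags` on that host and port.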
@@ -251,11 +244,6 @@ async function runInteractiveSetup(projectRoot) {
value: '__CUSTOM_OLLAMA__'
};

const customBedrockOption = {
name: '* Custom Bedrock model', // Add Bedrock custom option
value: '__CUSTOM_BEDROCK__'
};

let choices = [];
let defaultIndex = 0; // Default to 'Cancel'

@@ -302,9 +290,8 @@ async function runInteractiveSetup(projectRoot) {
commonPrefix.push(cancelOption);
commonPrefix.push(customOpenRouterOption);
commonPrefix.push(customOllamaOption);
commonPrefix.push(customBedrockOption);

const prefixLength = commonPrefix.length; // Initial prefix length
let prefixLength = commonPrefix.length; // Initial prefix length

if (allowNone) {
choices = [
@@ -449,13 +436,13 @@ async function runInteractiveSetup(projectRoot) {
modelIdToSet = customId;
providerHint = 'ollama';
// Get the Ollama base URL from config for this role
const ollamaBaseURL = getBaseUrlForRole(role, projectRoot);
const ollamaBaseUrl = getBaseUrlForRole(role, projectRoot);
// Validate against live Ollama list
const ollamaModels = await fetchOllamaModelsCLI(ollamaBaseURL);
const ollamaModels = await fetchOllamaModelsCLI(ollamaBaseUrl);
if (ollamaModels === null) {
console.error(
chalk.red(
`Error: Unable to connect to Ollama server at ${ollamaBaseURL}. Please ensure Ollama is running and try again.`
`Error: Unable to connect to Ollama server at ${ollamaBaseUrl}. Please ensure Ollama is running and try again.`
)
);
setupSuccess = false;
@@ -468,47 +455,12 @@ async function runInteractiveSetup(projectRoot) {
);
console.log(
chalk.yellow(
`You can check available models with: curl ${ollamaBaseURL}/tags`
`You can check available models with: curl ${ollamaBaseUrl}/tags`
)
);
setupSuccess = false;
return true; // Continue setup, but mark as failed
}
} else if (selectedValue === '__CUSTOM_BEDROCK__') {
isCustomSelection = true;
const { customId } = await inquirer.prompt([
{
type: 'input',
name: 'customId',
message: `Enter the custom Bedrock Model ID for the ${role} role (e.g., anthropic.claude-3-sonnet-20240229-v1:0):`
}
]);
if (!customId) {
console.log(chalk.yellow('No custom ID entered. Skipping role.'));
return true; // Continue setup, but don't set this role
}
modelIdToSet = customId;
providerHint = 'bedrock';

// Check if AWS environment variables exist
if (
!process.env.AWS_ACCESS_KEY_ID ||
!process.env.AWS_SECRET_ACCESS_KEY
) {
console.error(
chalk.red(
'Error: AWS_ACCESS_KEY_ID and/or AWS_SECRET_ACCESS_KEY environment variables are missing. Please set them before using custom Bedrock models.'
)
);
setupSuccess = false;
return true; // Continue setup, but mark as failed
}

console.log(
chalk.blue(
`Custom Bedrock model "${modelIdToSet}" will be used. No validation performed.`
)
);
} else if (
selectedValue &&
typeof selectedValue === 'object' &&
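Editor's note: the removed `__CUSTOM_BEDROCK__` branch gates custom model IDs on AWS credentials being present in the environment. A condensed sketch of that guard (the variable names match the diff; the surrounding setup state is omitted):

```js
// Sketch of the AWS credential guard used by the removed Bedrock branch.
function hasBedrockCredentials(env = process.env) {
	// Bedrock requests are signed with standard AWS credentials.
	return Boolean(env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY);
}

if (!hasBedrockCredentials()) {
	console.error(
		'Error: AWS_ACCESS_KEY_ID and/or AWS_SECRET_ACCESS_KEY environment variables are missing.'
	);
}
```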
@@ -655,7 +607,7 @@ function registerCommands(programInstance) {
'-i, --input <file>',
'Path to the PRD file (alternative to positional argument)'
)
.option('-o, --output <file>', 'Output file path', TASKMASTER_TASKS_FILE)
.option('-o, --output <file>', 'Output file path', 'tasks/tasks.json')
.option('-n, --num-tasks <number>', 'Number of tasks to generate', '10')
.option('-f, --force', 'Skip confirmation when overwriting existing tasks')
.option(
@@ -669,14 +621,14 @@ function registerCommands(programInstance) {
.action(async (file, options) => {
// Use input option if file argument not provided
const inputFile = file || options.input;
const defaultPrdPath = PRD_FILE;
const defaultPrdPath = 'scripts/prd.txt';
const numTasks = parseInt(options.numTasks, 10);
const outputPath = options.output;
const force = options.force || false;
const append = options.append || false;
const research = options.research || false;
let useForce = force;
const useAppend = append;
let useAppend = append;

// Helper function to check if tasks.json exists and confirm overwrite
async function confirmOverwriteIfNeeded() {
@@ -716,12 +668,38 @@ function registerCommands(programInstance) {

console.log(
chalk.yellow(
`No PRD file specified and default PRD file not found at ${PRD_FILE}.`
'No PRD file specified and default PRD file not found at scripts/prd.txt.'
)
);
console.log(
boxen(
`${chalk.white.bold('Parse PRD Help')}\n\n${chalk.cyan('Usage:')}\n task-master parse-prd <prd-file.txt> [options]\n\n${chalk.cyan('Options:')}\n -i, --input <file> Path to the PRD file (alternative to positional argument)\n -o, --output <file> Output file path (default: "${TASKMASTER_TASKS_FILE}")\n -n, --num-tasks <number> Number of tasks to generate (default: 10)\n -f, --force Skip confirmation when overwriting existing tasks\n --append Append new tasks to existing tasks.json instead of overwriting\n -r, --research Use Perplexity AI for research-backed task generation\n\n${chalk.cyan('Example:')}\n task-master parse-prd requirements.txt --num-tasks 15\n task-master parse-prd --input=requirements.txt\n task-master parse-prd --force\n task-master parse-prd requirements_v2.txt --append\n task-master parse-prd requirements.txt --research\n\n${chalk.yellow('Note: This command will:')}\n 1. Look for a PRD file at ${PRD_FILE} by default\n 2. Use the file specified by --input or positional argument if provided\n 3. Generate tasks from the PRD and either:\n - Overwrite any existing tasks.json file (default)\n - Append to existing tasks.json if --append is used`,
chalk.white.bold('Parse PRD Help') +
'\n\n' +
chalk.cyan('Usage:') +
'\n' +
` task-master parse-prd <prd-file.txt> [options]\n\n` +
chalk.cyan('Options:') +
'\n' +
' -i, --input <file> Path to the PRD file (alternative to positional argument)\n' +
' -o, --output <file> Output file path (default: "tasks/tasks.json")\n' +
' -n, --num-tasks <number> Number of tasks to generate (default: 10)\n' +
' -f, --force Skip confirmation when overwriting existing tasks\n' +
' --append Append new tasks to existing tasks.json instead of overwriting\n' +
' -r, --research Use Perplexity AI for research-backed task generation\n\n' +
chalk.cyan('Example:') +
'\n' +
' task-master parse-prd requirements.txt --num-tasks 15\n' +
' task-master parse-prd --input=requirements.txt\n' +
' task-master parse-prd --force\n' +
' task-master parse-prd requirements_v2.txt --append\n' +
' task-master parse-prd requirements.txt --research\n\n' +
chalk.yellow('Note: This command will:') +
'\n' +
' 1. Look for a PRD file at scripts/prd.txt by default\n' +
' 2. Use the file specified by --input or positional argument if provided\n' +
' 3. Generate tasks from the PRD and either:\n' +
' - Overwrite any existing tasks.json file (default)\n' +
' - Append to existing tasks.json if --append is used',
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
)
);
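Editor's note: both sides of the hunk above render identical help text; the old side builds it as one template literal, the new side as explicit string concatenation. The `boxen` call shape is unchanged. A self-contained sketch of that shape, assuming the `boxen` and `chalk` packages the file already imports:

```js
import boxen from 'boxen';
import chalk from 'chalk';

// Same box styling as the help output in the hunk above.
console.log(
	boxen(
		`${chalk.white.bold('Parse PRD Help')}\n\n${chalk.cyan('Usage:')}\n  task-master parse-prd <prd-file.txt> [options]`,
		{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
	)
);
```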
@@ -773,11 +751,7 @@ function registerCommands(programInstance) {
.description(
'Update multiple tasks with ID >= "from" based on new information or implementation changes'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'--from <id>',
'Task ID to start updating from (tasks with ID >= this value will be updated)',
@@ -792,7 +766,7 @@ function registerCommands(programInstance) {
'Use Perplexity AI for research-backed task updates'
)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const fromId = parseInt(options.from, 10); // Validation happens here
const prompt = options.prompt;
const useResearch = options.research || false;
@@ -858,11 +832,7 @@ function registerCommands(programInstance) {
.description(
'Update a single specific task by ID with new information (use --id parameter)'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-i, --id <id>', 'Task ID to update (required)')
.option(
'-p, --prompt <text>',
@@ -874,7 +844,7 @@ function registerCommands(programInstance) {
)
.action(async (options) => {
try {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;

// Validate required parameters
if (!options.id) {
@@ -889,7 +859,7 @@ function registerCommands(programInstance) {

// Parse the task ID and validate it's a number
const taskId = parseInt(options.id, 10);
if (Number.isNaN(taskId) || taskId <= 0) {
if (isNaN(taskId) || taskId <= 0) {
console.error(
chalk.red(
`Error: Invalid task ID: ${options.id}. Task ID must be a positive integer.`
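Editor's note on the `Number.isNaN`/`isNaN` swap above: the global `isNaN` coerces its argument before testing, while `Number.isNaN` is true only for the actual `NaN` value. After `parseInt` the two agree, so the change is behavior-neutral here, but the difference matters for raw input:

```js
// Global isNaN coerces; Number.isNaN does not.
console.log(isNaN('abc'));        // true  ('abc' coerces to NaN)
console.log(Number.isNaN('abc')); // false (a string is not the NaN value)

// After parseInt both agree, which is why the swap is safe in this hunk:
const taskId = parseInt('abc', 10); // NaN
console.log(isNaN(taskId), Number.isNaN(taskId)); // true true
```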
@@ -925,7 +895,7 @@ function registerCommands(programInstance) {
console.error(
chalk.red(`Error: Tasks file not found at path: ${tasksPath}`)
);
if (tasksPath === TASKMASTER_TASKS_FILE) {
if (tasksPath === 'tasks/tasks.json') {
console.log(
chalk.yellow(
'Hint: Run task-master init or task-master parse-prd to create tasks.json first'
@@ -1015,11 +985,7 @@ function registerCommands(programInstance) {
.description(
'Update a subtask by appending additional timestamped information'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-i, --id <id>',
'Subtask ID to update in format "parentId.subtaskId" (required)'
@@ -1031,7 +997,7 @@ function registerCommands(programInstance) {
.option('-r, --research', 'Use Perplexity AI for research-backed updates')
.action(async (options) => {
try {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;

// Validate required parameters
if (!options.id) {
@@ -1082,7 +1048,7 @@ function registerCommands(programInstance) {
console.error(
chalk.red(`Error: Tasks file not found at path: ${tasksPath}`)
);
if (tasksPath === TASKMASTER_TASKS_FILE) {
if (tasksPath === 'tasks/tasks.json') {
console.log(
chalk.yellow(
'Hint: Run task-master init or task-master parse-prd to create tasks.json first'
@@ -1173,14 +1139,10 @@ function registerCommands(programInstance) {
programInstance
.command('generate')
.description('Generate task files from tasks.json')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-o, --output <dir>', 'Output directory', 'tasks')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const outputDir = options.output;

console.log(chalk.blue(`Generating task files from: ${tasksPath}`));
@@ -1203,13 +1165,9 @@ function registerCommands(programInstance) {
'-s, --status <status>',
`New status (one of: ${TASK_STATUS_OPTIONS.join(', ')})`
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskId = options.id;
const status = options.status;

@@ -1239,20 +1197,16 @@ function registerCommands(programInstance) {
programInstance
.command('list')
.description('List all tasks')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --report <report>',
'Path to the complexity report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.option('-s, --status <status>', 'Filter by status')
.option('--with-subtasks', 'Show subtasks for each task')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const reportPath = options.report;
const statusFilter = options.status;
const withSubtasks = options.withSubtasks || false;
@@ -1291,7 +1245,7 @@ function registerCommands(programInstance) {
.option(
'--file <file>',
'Path to the tasks file (relative to project root)',
TASKMASTER_TASKS_FILE // Allow file override
'tasks/tasks.json'
) // Allow file override
.action(async (options) => {
const projectRoot = findProjectRoot();
@@ -1366,7 +1320,7 @@ function registerCommands(programInstance) {
.option(
'-o, --output <file>',
'Output file path for the report',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.option(
'-m, --model <model>',
@@ -1377,11 +1331,7 @@ function registerCommands(programInstance) {
'Minimum complexity score to recommend expansion (1-10)',
'5'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --research',
'Use Perplexity AI for research-backed complexity analysis'
@@ -1393,7 +1343,7 @@ function registerCommands(programInstance) {
.option('--from <id>', 'Starting task ID in a range to analyze')
.option('--to <id>', 'Ending task ID in a range to analyze')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file || 'tasks/tasks.json';
const outputPath = options.output;
const modelOverride = options.model;
const thresholdScore = parseFloat(options.threshold);
@@ -1427,18 +1377,14 @@ function registerCommands(programInstance) {
programInstance
.command('clear-subtasks')
.description('Clear subtasks from specified tasks')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-i, --id <ids>',
'Task IDs (comma-separated) to clear subtasks from'
)
.option('--all', 'Clear subtasks from all tasks')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskIds = options.id;
const all = options.all;

@@ -1469,11 +1415,7 @@ function registerCommands(programInstance) {
programInstance
.command('add-task')
.description('Add a new task using AI, optionally providing manual details')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-p, --prompt <prompt>',
'Description of the task to add (required if not using manual fields)'
@@ -1513,14 +1455,10 @@ function registerCommands(programInstance) {
process.exit(1);
}

const tasksPath = options.file || TASKMASTER_TASKS_FILE;

if (!fs.existsSync(tasksPath)) {
console.error(
`❌ No tasks.json file found. Please run "task-master init" or create a tasks.json file at ${TASKMASTER_TASKS_FILE}`
);
process.exit(1);
}
const tasksPath =
options.file ||
path.join(findProjectRoot() || '.', 'tasks', 'tasks.json') || // Ensure tasksPath is also relative to a found root or current dir
'tasks/tasks.json';

// Correctly determine projectRoot
const projectRoot = findProjectRoot();
@@ -1592,20 +1530,15 @@ function registerCommands(programInstance) {
.description(
`Show the next task to work on based on dependencies and status${chalk.reset('')}`
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --report <report>',
'Path to the complexity report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const reportPath = options.report;

await displayNextTask(tasksPath, reportPath);
});

@@ -1618,15 +1551,11 @@ function registerCommands(programInstance) {
.argument('[id]', 'Task ID to show')
.option('-i, --id <id>', 'Task ID to show')
.option('-s, --status <status>', 'Filter subtasks by status') // ADDED status option
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --report <report>',
'Path to the complexity report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.action(async (taskId, options) => {
const idArg = taskId || options.id;
@@ -1637,7 +1566,7 @@ function registerCommands(programInstance) {
process.exit(1);
}

const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const reportPath = options.report;
// PASS statusFilter to the display function
await displayTaskById(tasksPath, idArg, reportPath, statusFilter);
@@ -1649,13 +1578,9 @@ function registerCommands(programInstance) {
.description('Add a dependency to a task')
.option('-i, --id <id>', 'Task ID to add dependency to')
.option('-d, --depends-on <id>', 'Task ID that will become a dependency')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskId = options.id;
const dependencyId = options.dependsOn;

@@ -1684,13 +1609,9 @@ function registerCommands(programInstance) {
.description('Remove a dependency from a task')
.option('-i, --id <id>', 'Task ID to remove dependency from')
.option('-d, --depends-on <id>', 'Task ID to remove as a dependency')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskId = options.id;
const dependencyId = options.dependsOn;

@@ -1719,26 +1640,18 @@ function registerCommands(programInstance) {
.description(
`Identify invalid dependencies without fixing them${chalk.reset('')}`
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
await validateDependenciesCommand(options.file || TASKMASTER_TASKS_FILE);
await validateDependenciesCommand(options.file);
});

// fix-dependencies command
programInstance
.command('fix-dependencies')
.description(`Fix invalid dependencies automatically${chalk.reset('')}`)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.action(async (options) => {
await fixDependenciesCommand(options.file || TASKMASTER_TASKS_FILE);
await fixDependenciesCommand(options.file);
});

// complexity-report command
@@ -1748,21 +1661,17 @@ function registerCommands(programInstance) {
.option(
'-f, --file <file>',
'Path to the report file',
COMPLEXITY_REPORT_FILE
'scripts/task-complexity-report.json'
)
.action(async (options) => {
await displayComplexityReport(options.file || COMPLEXITY_REPORT_FILE);
await displayComplexityReport(options.file);
});

// add-subtask command
programInstance
.command('add-subtask')
.description('Add a subtask to an existing task')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-p, --parent <id>', 'Parent task ID (required)')
.option('-i, --task-id <id>', 'Existing task ID to convert to subtask')
.option(
@@ -1778,7 +1687,7 @@ function registerCommands(programInstance) {
.option('-s, --status <status>', 'Status for the new subtask', 'pending')
.option('--skip-generate', 'Skip regenerating task files')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const parentId = options.parent;
const existingTaskId = options.taskId;
const generateFiles = !options.skipGenerate;
@@ -1923,7 +1832,26 @@ function registerCommands(programInstance) {
function showAddSubtaskHelp() {
console.log(
boxen(
`${chalk.white.bold('Add Subtask Command Help')}\n\n${chalk.cyan('Usage:')}\n task-master add-subtask --parent=<id> [options]\n\n${chalk.cyan('Options:')}\n -p, --parent <id> Parent task ID (required)\n -i, --task-id <id> Existing task ID to convert to subtask\n -t, --title <title> Title for the new subtask\n -d, --description <text> Description for the new subtask\n --details <text> Implementation details for the new subtask\n --dependencies <ids> Comma-separated list of dependency IDs\n -s, --status <status> Status for the new subtask (default: "pending")\n -f, --file <file> Path to the tasks file (default: "${TASKMASTER_TASKS_FILE}")\n --skip-generate Skip regenerating task files\n\n${chalk.cyan('Examples:')}\n task-master add-subtask --parent=5 --task-id=8\n task-master add-subtask -p 5 -t "Implement login UI" -d "Create the login form"`,
chalk.white.bold('Add Subtask Command Help') +
'\n\n' +
chalk.cyan('Usage:') +
'\n' +
` task-master add-subtask --parent=<id> [options]\n\n` +
chalk.cyan('Options:') +
'\n' +
' -p, --parent <id> Parent task ID (required)\n' +
' -i, --task-id <id> Existing task ID to convert to subtask\n' +
' -t, --title <title> Title for the new subtask\n' +
' -d, --description <text> Description for the new subtask\n' +
' --details <text> Implementation details for the new subtask\n' +
' --dependencies <ids> Comma-separated list of dependency IDs\n' +
' -s, --status <status> Status for the new subtask (default: "pending")\n' +
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
' --skip-generate Skip regenerating task files\n\n' +
chalk.cyan('Examples:') +
'\n' +
' task-master add-subtask --parent=5 --task-id=8\n' +
' task-master add-subtask -p 5 -t "Implement login UI" -d "Create the login form"',
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
)
);
@@ -1933,11 +1861,7 @@ function registerCommands(programInstance) {
programInstance
.command('remove-subtask')
.description('Remove a subtask from its parent task')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-i, --id <id>',
'Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated for multiple subtasks)'
@@ -1948,7 +1872,7 @@ function registerCommands(programInstance) {
)
.option('--skip-generate', 'Skip regenerating task files')
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const subtaskIds = options.id;
const convertToTask = options.convert || false;
const generateFiles = !options.skipGenerate;
@@ -2068,9 +1992,7 @@ function registerCommands(programInstance) {
'\n' +
' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' +
' -f, --file <file> Path to the tasks file (default: "' +
TASKMASTER_TASKS_FILE +
'")\n' +
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
' --skip-generate Skip regenerating task files\n\n' +
chalk.cyan('Examples:') +
'\n' +
@@ -2091,14 +2013,10 @@ function registerCommands(programInstance) {
'-i, --id <ids>',
'ID(s) of the task(s) or subtask(s) to remove (e.g., "5", "5.2", or "5,6.1,7")'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-y, --yes', 'Skip confirmation prompt', false)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const taskIdsString = options.id;

if (!taskIdsString) {
@@ -2375,10 +2293,6 @@ function registerCommands(programInstance) {
'--ollama',
'Allow setting a custom Ollama model ID (use with --set-*) '
)
.option(
'--bedrock',
'Allow setting a custom Bedrock model ID (use with --set-*) '
)
.addHelpText(
'after',
`
@@ -2388,7 +2302,6 @@ Examples:
$ task-master models --set-research sonar-pro # Set research model
$ task-master models --set-fallback claude-3-5-sonnet-20241022 # Set fallback
$ task-master models --set-main my-custom-model --ollama # Set custom Ollama model for main role
$ task-master models --set-main anthropic.claude-3-sonnet-20240229-v1:0 --bedrock # Set custom Bedrock model for main role
$ task-master models --set-main some/other-model --openrouter # Set custom OpenRouter model for main role
$ task-master models --setup # Run interactive setup`
)
@@ -2398,16 +2311,11 @@ Examples:
console.error(chalk.red('Error: Could not find project root.'));
process.exit(1);
}
// Validate flags: cannot use multiple provider flags simultaneously
const providerFlags = [
options.openrouter,
options.ollama,
options.bedrock
].filter(Boolean).length;
if (providerFlags > 1) {
// Validate flags: cannot use both --openrouter and --ollama simultaneously
if (options.openrouter && options.ollama) {
console.error(
chalk.red(
'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock) simultaneously.'
'Error: Cannot use both --openrouter and --ollama flags simultaneously.'
)
);
process.exit(1);
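Editor's note: the removed validation generalizes the two-flag check to any number of mutually exclusive provider flags by counting truthy values. A standalone sketch of the pattern:

```js
// Count how many mutually exclusive provider flags were passed.
function countProviderFlags(options) {
	return [options.openrouter, options.ollama, options.bedrock].filter(Boolean)
		.length;
}

// Example: two flags at once should be rejected.
const options = { openrouter: true, ollama: false, bedrock: true };
if (countProviderFlags(options) > 1) {
	console.error(
		'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock) simultaneously.'
	);
}
```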
@@ -2447,9 +2355,7 @@ Examples:
? 'openrouter'
: options.ollama
? 'ollama'
: options.bedrock
? 'bedrock'
: undefined
: undefined
});
if (result.success) {
console.log(chalk.green(`✅ ${result.data.message}`));
@@ -2469,9 +2375,7 @@ Examples:
? 'openrouter'
: options.ollama
? 'ollama'
: options.bedrock
? 'bedrock'
: undefined
: undefined
});
if (result.success) {
console.log(chalk.green(`✅ ${result.data.message}`));
@@ -2493,9 +2397,7 @@ Examples:
? 'openrouter'
: options.ollama
? 'ollama'
: options.bedrock
? 'bedrock'
: undefined
: undefined
});
if (result.success) {
console.log(chalk.green(`✅ ${result.data.message}`));
@@ -2595,11 +2497,7 @@ Examples:
programInstance
.command('move')
.description('Move a task or subtask to a new position')
.option(
'-f, --file <file>',
'Path to the tasks file',
TASKMASTER_TASKS_FILE
)
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'--from <id>',
'ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated to move multiple tasks (e.g., "5,6,7")'
@@ -2609,7 +2507,7 @@ Examples:
'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
)
.action(async (options) => {
const tasksPath = options.file || TASKMASTER_TASKS_FILE;
const tasksPath = options.file;
const sourceId = options.from;
const destinationId = options.to;

@@ -2724,39 +2622,6 @@ Examples:
}
});

programInstance
.command('migrate')
.description(
'Migrate existing project to use the new .taskmaster directory structure'
)
.option(
'-f, --force',
'Force migration even if .taskmaster directory already exists'
)
.option(
'--backup',
'Create backup of old files before migration (default: false)',
false
)
.option(
'--cleanup',
'Remove old files after successful migration (default: true)',
true
)
.option('-y, --yes', 'Skip confirmation prompts')
.option(
'--dry-run',
'Show what would be migrated without actually moving files'
)
.action(async (options) => {
try {
await migrateProject(options);
} catch (error) {
console.error(chalk.red('Error during migration:'), error.message);
process.exit(1);
}
});

return programInstance;
}

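Editor's note: the removed `migrate` command is a thin wrapper around `migrateProject`, whose implementation is deleted further down in this diff. A sketch of calling it directly, using the same option names the command defines (dry-run preview first, then a real run with backup):

```js
import { migrateProject } from './task-manager/migrate.js';

// Preview what would move without touching any files.
await migrateProject({ dryRun: true });

// Perform the migration non-interactively, keeping a backup.
await migrateProject({ backup: true, cleanup: true, yes: true });
```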
@@ -2941,7 +2806,7 @@ async function runCLI(argv = process.argv) {

// Setup and parse
// NOTE: getConfig() might be called during setupCLI->registerCommands if commands need config
// This means the ConfigurationError might be thrown here if configuration file is missing.
// This means the ConfigurationError might be thrown here if .taskmasterconfig is missing.
const programInstance = setupCLI();
await programInstance.parseAsync(argv);

@@ -2960,10 +2825,10 @@ async function runCLI(argv = process.argv) {
boxen(
chalk.red.bold('Configuration Update Required!') +
'\n\n' +
chalk.white('Taskmaster now uses a ') +
chalk.yellow.bold('configuration file') +
chalk.white('Taskmaster now uses the ') +
chalk.yellow.bold('.taskmasterconfig') +
chalk.white(
' in your project for AI model choices and settings.\n\n' +
' file in your project root for AI model choices and settings.\n\n' +
'This file appears to be '
) +
chalk.red.bold('missing') +
@@ -2975,7 +2840,7 @@ async function runCLI(argv = process.argv) {
chalk.white.bold('Key Points:') +
'\n' +
chalk.white('* ') +
chalk.yellow.bold('Configuration file') +
chalk.yellow.bold('.taskmasterconfig') +
chalk.white(
': Stores your AI model settings (do not manually edit)\n'
) +

@@ -3,8 +3,6 @@ import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
import { LEGACY_CONFIG_FILE } from '../../src/constants/paths.js';
import { findConfigPath } from '../../src/utils/path-utils.js';

// Calculate __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
@@ -29,10 +27,12 @@ try {
process.exit(1); // Exit if models can't be loaded
}

const CONFIG_FILE_NAME = '.taskmasterconfig';

// Define valid providers dynamically from the loaded MODEL_MAP
const VALID_PROVIDERS = Object.keys(MODEL_MAP || {});

// Default configuration values (used if config file is missing or incomplete)
// Default configuration values (used if .taskmasterconfig is missing or incomplete)
const DEFAULTS = {
models: {
main: {
@@ -61,7 +61,7 @@ const DEFAULTS = {
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseURL: 'http://localhost:11434/api'
ollamaBaseUrl: 'http://localhost:11434/api'
}
};

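Editor's note: the loader built on these DEFAULTS starts from them and merges whatever the config file provides, so a partial file still yields a complete config. A minimal sketch of that per-section merge (simplified; the real loader also validates providers):

```js
// Simplified defaults mirroring the hunk above.
const DEFAULTS = {
	global: { defaultSubtasks: 5, ollamaBaseUrl: 'http://localhost:11434/api' }
};

// Per-section spread keeps defaults for any keys the file omits.
function mergeConfig(parsed = {}) {
	return { global: { ...DEFAULTS.global, ...(parsed.global || {}) } };
}

console.log(mergeConfig({ global: { defaultSubtasks: 8 } }));
// -> { global: { defaultSubtasks: 8, ollamaBaseUrl: 'http://localhost:11434/api' } }
```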
@@ -96,15 +96,13 @@ function _loadAndValidateConfig(explicitRoot = null) {
}
// ---> End find project root logic <---

// --- Find configuration file using centralized path utility ---
const configPath = findConfigPath(null, { projectRoot: rootToUse });
// --- Proceed with loading from the determined rootToUse ---
const configPath = path.join(rootToUse, CONFIG_FILE_NAME);
let config = { ...defaults }; // Start with a deep copy of defaults
let configExists = false;

if (configPath) {
if (fs.existsSync(configPath)) {
configExists = true;
const isLegacy = configPath.endsWith(LEGACY_CONFIG_FILE);

try {
const rawData = fs.readFileSync(configPath, 'utf-8');
const parsedConfig = JSON.parse(rawData);
@@ -127,15 +125,6 @@ function _loadAndValidateConfig(explicitRoot = null) {
};
configSource = `file (${configPath})`; // Update source info

// Issue deprecation warning if using legacy config file
if (isLegacy) {
console.warn(
chalk.yellow(
`⚠️ DEPRECATION WARNING: Found configuration in legacy location '${configPath}'. Please migrate to .taskmaster/config.json. Run 'task-master migrate' to automatically migrate your project.`
)
);
}

// --- Validation (Warn if file content is invalid) ---
// Use log.warn for consistency
if (!validateProvider(config.models.main.provider)) {
@@ -182,19 +171,19 @@ function _loadAndValidateConfig(explicitRoot = null) {
// Only warn if an explicit root was *expected*.
console.warn(
chalk.yellow(
`Warning: Configuration file not found at provided project root (${explicitRoot}). Using default configuration. Run 'task-master models --setup' to configure.`
`Warning: ${CONFIG_FILE_NAME} not found at provided project root (${explicitRoot}). Using default configuration. Run 'task-master models --setup' to configure.`
)
);
} else {
console.warn(
chalk.yellow(
`Warning: Configuration file not found at derived root (${rootToUse}). Using defaults.`
`Warning: ${CONFIG_FILE_NAME} not found at derived root (${rootToUse}). Using defaults.`
)
);
}
// Keep config as defaults
config = { ...defaults };
configSource = `defaults (no config file found at ${rootToUse})`;
configSource = `defaults (file not found at ${configPath})`;
}

return config;
@@ -353,13 +342,13 @@ function getDefaultSubtasks(explicitRoot = null) {
// Directly return value from config, ensure integer
const val = getGlobalConfig(explicitRoot).defaultSubtasks;
const parsedVal = parseInt(val, 10);
return Number.isNaN(parsedVal) ? DEFAULTS.global.defaultSubtasks : parsedVal;
return isNaN(parsedVal) ? DEFAULTS.global.defaultSubtasks : parsedVal;
}

function getDefaultNumTasks(explicitRoot = null) {
const val = getGlobalConfig(explicitRoot).defaultNumTasks;
const parsedVal = parseInt(val, 10);
return Number.isNaN(parsedVal) ? DEFAULTS.global.defaultNumTasks : parsedVal;
return isNaN(parsedVal) ? DEFAULTS.global.defaultNumTasks : parsedVal;
}

function getDefaultPriority(explicitRoot = null) {
@@ -372,34 +361,9 @@ function getProjectName(explicitRoot = null) {
return getGlobalConfig(explicitRoot).projectName;
}

function getOllamaBaseURL(explicitRoot = null) {
function getOllamaBaseUrl(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).ollamaBaseURL;
}

function getAzureBaseURL(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).azureBaseURL;
}

/**
* Gets the Google Cloud project ID for Vertex AI from configuration
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {string|null} The project ID or null if not configured
*/
function getVertexProjectId(explicitRoot = null) {
// Return value from config
return getGlobalConfig(explicitRoot).vertexProjectId;
}

/**
* Gets the Google Cloud location for Vertex AI from configuration
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {string} The location or default value of "us-central1"
*/
function getVertexLocation(explicitRoot = null) {
// Return value from config or default
return getGlobalConfig(explicitRoot).vertexLocation || 'us-central1';
return getGlobalConfig(explicitRoot).ollamaBaseUrl;
}

/**
@@ -486,8 +450,7 @@ function isApiKeySet(providerName, session = null, projectRoot = null) {
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY',
vertex: 'GOOGLE_API_KEY' // Vertex uses the same key as Google
xai: 'XAI_API_KEY'
// Add other providers as needed
};

@@ -579,10 +542,6 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
apiKeyToCheck = mcpEnv.AZURE_OPENAI_API_KEY;
placeholderValue = 'YOUR_AZURE_OPENAI_API_KEY_HERE';
break;
case 'vertex':
apiKeyToCheck = mcpEnv.GOOGLE_API_KEY; // Vertex uses Google API key
placeholderValue = 'YOUR_GOOGLE_API_KEY_HERE';
break;
default:
return false; // Unknown provider
}
@@ -669,16 +628,12 @@ function writeConfig(config, explicitRoot = null) {
}
// ---> End determine root path logic <---

// Use new config location: .taskmaster/config.json
const taskmasterDir = path.join(rootPath, '.taskmaster');
const configPath = path.join(taskmasterDir, 'config.json');
const configPath =
path.basename(rootPath) === CONFIG_FILE_NAME
? rootPath
: path.join(rootPath, CONFIG_FILE_NAME);

try {
// Ensure .taskmaster directory exists
if (!fs.existsSync(taskmasterDir)) {
fs.mkdirSync(taskmasterDir, { recursive: true });
}

fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
loadedConfig = config; // Update the cache after successful write
return true;
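Editor's note: the two sides of the `writeConfig` hunk write to different locations: the old side to `.taskmaster/config.json` (creating the directory first), the new side to `.taskmasterconfig` at the project root. A hypothetical helper contrasting the two layouts (not project code):

```js
import path from 'path';

// Hypothetical helper contrasting the two layouts in the hunk above.
function configPathFor(rootPath, { legacy = false } = {}) {
	return legacy
		? path.join(rootPath, '.taskmasterconfig') // flat legacy file
		: path.join(rootPath, '.taskmaster', 'config.json'); // consolidated layout
}

console.log(configPathFor('/repo')); // /repo/.taskmaster/config.json
console.log(configPathFor('/repo', { legacy: true })); // /repo/.taskmasterconfig
```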
@@ -693,12 +648,25 @@ function writeConfig(config, explicitRoot = null) {
}

/**
* Checks if a configuration file exists at the project root (new or legacy location)
* Checks if the .taskmasterconfig file exists at the project root
* @param {string|null} explicitRoot - Optional explicit path to the project root
* @returns {boolean} True if the file exists, false otherwise
*/
function isConfigFilePresent(explicitRoot = null) {
return findConfigPath(null, { projectRoot: explicitRoot }) !== null;
// ---> Determine root path reliably <---
let rootPath = explicitRoot;
if (explicitRoot === null || explicitRoot === undefined) {
// Logic matching _loadAndValidateConfig
const foundRoot = findProjectRoot(); // *** Explicitly call findProjectRoot ***
if (!foundRoot) {
return false; // Cannot check if root doesn't exist
}
rootPath = foundRoot;
}
// ---> End determine root path logic <---

const configPath = path.join(rootPath, CONFIG_FILE_NAME);
return fs.existsSync(configPath);
}

/**
@@ -739,8 +707,8 @@ function getAllProviders() {

function getBaseUrlForRole(role, explicitRoot = null) {
const roleConfig = getModelConfigForRole(role, explicitRoot);
return roleConfig && typeof roleConfig.baseURL === 'string'
? roleConfig.baseURL
return roleConfig && typeof roleConfig.baseUrl === 'string'
? roleConfig.baseUrl
: undefined;
}

@@ -750,12 +718,14 @@ export {
writeConfig,
ConfigurationError,
isConfigFilePresent,

// Validation
validateProvider,
validateProviderModelCombination,
VALID_PROVIDERS,
MODEL_MAP,
getAvailableModels,

// Role-specific getters (No env var overrides)
getMainProvider,
getMainModelId,
@@ -770,6 +740,7 @@ export {
getFallbackMaxTokens,
getFallbackTemperature,
getBaseUrlForRole,

// Global setting getters (No env var overrides)
getLogLevel,
getDebugFlag,
@@ -777,15 +748,13 @@ export {
getDefaultSubtasks,
getDefaultPriority,
getProjectName,
getOllamaBaseURL,
getAzureBaseURL,
getOllamaBaseUrl,
getParametersForRole,
getUserId,
// API Key Checkers (still relevant)
isApiKeySet,
getMcpApiKeyStatus,

// ADD: Function to get all provider names
getAllProviders,
getVertexProjectId,
getVertexLocation
getAllProviders
};

@@ -5,14 +5,14 @@
"swe_score": 0.727,
"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 64000
"max_tokens": 120000
},
{
"id": "claude-opus-4-20250514",
"swe_score": 0.725,
"cost_per_1m_tokens": { "input": 15.0, "output": 75.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 32000
"max_tokens": 120000
},
{
"id": "claude-3-7-sonnet-20250219",

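Editor's note: this JSON hunk raises the per-model `max_tokens` caps back from 64000/32000 to 120000, which is what the max_tokens fix in the changesets above had lowered. A consumer can defend against over-asking by clamping requested output tokens to the advertised cap; a small sketch over a hypothetical subset of the model map:

```js
// Hypothetical subset mirroring the JSON entries above.
const models = {
	'claude-opus-4-20250514': { max_tokens: 32000 }
};

// Clamp a requested completion size to the model's advertised cap.
function clampMaxTokens(modelId, requested) {
	const cap = models[modelId]?.max_tokens ?? requested;
	return Math.min(requested, cap);
}

console.log(clampMaxTokens('claude-opus-4-20250514', 120000)); // 32000
```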
@@ -24,7 +24,6 @@ import removeTask from './task-manager/remove-task.js';
import taskExists from './task-manager/task-exists.js';
import isTaskDependentOn from './task-manager/is-task-dependent.js';
import moveTask from './task-manager/move-task.js';
import { migrateProject } from './task-manager/migrate.js';
import { readComplexityReport } from './utils.js';
// Export task manager functions
export {
@@ -49,6 +48,5 @@ export {
taskExists,
isTaskDependentOn,
moveTask,
readComplexityReport,
migrateProject
readComplexityReport
};

@@ -14,10 +14,6 @@ import {
import { generateTextService } from '../ai-services-unified.js';

import { getDebugFlag, getProjectName } from '../config-manager.js';
import {
COMPLEXITY_REPORT_FILE,
LEGACY_TASKS_FILE
} from '../../../src/constants/paths.js';

/**
* Generates the prompt for complexity analysis.
@@ -68,8 +64,8 @@ Do not include any explanatory text, markdown formatting, or code block markers
*/
async function analyzeTaskComplexity(options, context = {}) {
const { session, mcpLog } = context;
const tasksPath = options.file || LEGACY_TASKS_FILE;
const outputPath = options.output || COMPLEXITY_REPORT_FILE;
const tasksPath = options.file || 'tasks/tasks.json';
const outputPath = options.output || 'scripts/task-complexity-report.json';
const thresholdScore = parseFloat(options.threshold || '5');
const useResearch = options.research || false;
const projectRoot = options.projectRoot;
@@ -78,7 +74,7 @@ async function analyzeTaskComplexity(options, context = {}) {
? options.id
.split(',')
.map((id) => parseInt(id.trim(), 10))
.filter((id) => !Number.isNaN(id))
.filter((id) => !isNaN(id))
: null;
const fromId = options.from !== undefined ? parseInt(options.from, 10) : null;
const toId = options.to !== undefined ? parseInt(options.to, 10) : null;
@@ -96,7 +92,7 @@ async function analyzeTaskComplexity(options, context = {}) {
if (outputFormat === 'text') {
console.log(
chalk.blue(
'Analyzing task complexity and generating expansion recommendations...'
`Analyzing task complexity and generating expansion recommendations...`
)
);
}

@@ -14,7 +14,6 @@ import { generateTextService } from '../ai-services-unified.js';

import { getDefaultSubtasks, getDebugFlag } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';

// --- Zod Schemas (Keep from previous step) ---
const subtaskSchema = z
@@ -309,8 +308,7 @@ function parseSubtasksFromText(
logger.error(
`Advanced extraction: Problematic JSON string for parse (first 500 chars): ${jsonToParse.substring(0, 500)}`
);
throw new Error(
// Re-throw a more specific error if advanced also fails
throw new Error( // Re-throw a more specific error if advanced also fails
`Failed to parse JSON response object after both simple and advanced attempts: ${parseError.message}`
);
}
@@ -464,7 +462,10 @@ async function expandTask(
let complexityReasoningContext = '';
let systemPrompt; // Declare systemPrompt here

const complexityReportPath = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
const complexityReportPath = path.join(
projectRoot,
'scripts/task-complexity-report.json'
);
let taskAnalysis = null;

try {

@@ -1,283 +0,0 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';
import { createLogWrapper } from '../../../mcp-server/src/tools/utils.js';
import { findProjectRoot } from '../utils.js';
import {
LEGACY_CONFIG_FILE,
TASKMASTER_CONFIG_FILE
} from '../../../src/constants/paths.js';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Create a simple log wrapper for CLI use
const log = createLogWrapper({
info: (msg) => console.log(chalk.blue('ℹ'), msg),
warn: (msg) => console.log(chalk.yellow('⚠'), msg),
error: (msg) => console.error(chalk.red('✗'), msg),
success: (msg) => console.log(chalk.green('✓'), msg)
});

/**
* Main migration function
* @param {Object} options - Migration options
*/
export async function migrateProject(options = {}) {
const projectRoot = findProjectRoot() || process.cwd();

log.info(`Starting migration in: ${projectRoot}`);

// Check if .taskmaster directory already exists
const taskmasterDir = path.join(projectRoot, '.taskmaster');
if (fs.existsSync(taskmasterDir) && !options.force) {
log.warn(
'.taskmaster directory already exists. Use --force to overwrite or skip migration.'
);
return;
}

// Analyze what needs to be migrated
const migrationPlan = analyzeMigrationNeeds(projectRoot);

if (migrationPlan.length === 0) {
log.info(
'No files to migrate. Project may already be using the new structure.'
);
return;
}

// Show migration plan
log.info('Migration plan:');
for (const item of migrationPlan) {
const action = options.dryRun ? 'Would move' : 'Will move';
log.info(` ${action}: ${item.from} → ${item.to}`);
}

if (options.dryRun) {
log.info(
'Dry run complete. Use --dry-run=false to perform actual migration.'
);
return;
}

// Confirm migration
if (!options.yes) {
const readline = await import('readline');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});

const answer = await new Promise((resolve) => {
rl.question('Proceed with migration? (y/N): ', resolve);
});
rl.close();

if (answer.toLowerCase() !== 'y' && answer.toLowerCase() !== 'yes') {
log.info('Migration cancelled.');
return;
}
}

// Perform migration
try {
await performMigration(projectRoot, migrationPlan, options);
log.success('Migration completed successfully!');
log.info('You can now use the new .taskmaster directory structure.');
if (!options.cleanup) {
log.info(
'Old files were preserved. Use --cleanup to remove them after verification.'
);
}
} catch (error) {
log.error(`Migration failed: ${error.message}`);
throw error;
}
}

/**
* Analyze what files need to be migrated
* @param {string} projectRoot - Project root directory
* @returns {Array} Migration plan items
*/
function analyzeMigrationNeeds(projectRoot) {
const migrationPlan = [];

// Check for tasks directory
const tasksDir = path.join(projectRoot, 'tasks');
if (fs.existsSync(tasksDir)) {
const tasksFiles = fs.readdirSync(tasksDir);
for (const file of tasksFiles) {
migrationPlan.push({
from: path.join('tasks', file),
to: path.join('.taskmaster', 'tasks', file),
type: 'task'
});
}
}

// Check for scripts directory files
const scriptsDir = path.join(projectRoot, 'scripts');
if (fs.existsSync(scriptsDir)) {
const scriptsFiles = fs.readdirSync(scriptsDir);
for (const file of scriptsFiles) {
const filePath = path.join(scriptsDir, file);
if (fs.statSync(filePath).isFile()) {
// Categorize files more intelligently
let destination;
const lowerFile = file.toLowerCase();

if (
lowerFile.includes('example') ||
lowerFile.includes('template') ||
lowerFile.includes('boilerplate') ||
lowerFile.includes('sample')
) {
// Template/example files go to templates (including example_prd.txt)
destination = path.join('.taskmaster', 'templates', file);
} else if (
lowerFile.includes('complexity') &&
lowerFile.includes('report') &&
lowerFile.endsWith('.json')
) {
// Only actual complexity reports go to reports
destination = path.join('.taskmaster', 'reports', file);
} else if (
lowerFile.includes('prd') ||
lowerFile.endsWith('.md') ||
lowerFile.endsWith('.txt')
) {
// Documentation files go to docs (but not examples or reports)
destination = path.join('.taskmaster', 'docs', file);
} else {
// Other files stay in scripts or get skipped - don't force everything into templates
log.warn(
`Skipping migration of '${file}' - uncertain categorization. You may need to move this manually.`
);
continue;
}

migrationPlan.push({
from: path.join('scripts', file),
to: destination,
type: 'script'
});
}
}
}

// Check for .taskmasterconfig
const oldConfig = path.join(projectRoot, LEGACY_CONFIG_FILE);
if (fs.existsSync(oldConfig)) {
migrationPlan.push({
from: LEGACY_CONFIG_FILE,
to: TASKMASTER_CONFIG_FILE,
type: 'config'
});
}

return migrationPlan;
}

|
||||
* Perform the actual migration
|
||||
* @param {string} projectRoot - Project root directory
|
||||
* @param {Array} migrationPlan - List of files to migrate
|
||||
* @param {Object} options - Migration options
|
||||
*/
|
||||
async function performMigration(projectRoot, migrationPlan, options) {
|
||||
// Create .taskmaster directory
|
||||
const taskmasterDir = path.join(projectRoot, '.taskmaster');
|
||||
if (!fs.existsSync(taskmasterDir)) {
|
||||
fs.mkdirSync(taskmasterDir, { recursive: true });
|
||||
}
|
||||
|
||||
// Group migration items by destination directory to create only needed subdirs
|
||||
const neededDirs = new Set();
|
||||
for (const item of migrationPlan) {
|
||||
const destDir = path.dirname(item.to);
|
||||
neededDirs.add(destDir);
|
||||
}
|
||||
|
||||
// Create only the directories we actually need
|
||||
for (const dir of neededDirs) {
|
||||
const fullDirPath = path.join(projectRoot, dir);
|
||||
if (!fs.existsSync(fullDirPath)) {
|
||||
fs.mkdirSync(fullDirPath, { recursive: true });
|
||||
log.info(`Created directory: ${dir}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Create backup if requested
|
||||
if (options.backup) {
|
||||
const backupDir = path.join(projectRoot, '.taskmaster-migration-backup');
|
||||
log.info(`Creating backup in: ${backupDir}`);
|
||||
if (fs.existsSync(backupDir)) {
|
||||
fs.rmSync(backupDir, { recursive: true, force: true });
|
||||
}
|
||||
fs.mkdirSync(backupDir, { recursive: true });
|
||||
}
|
||||
|
||||
// Migrate files
|
||||
for (const item of migrationPlan) {
|
||||
const fromPath = path.join(projectRoot, item.from);
|
||||
const toPath = path.join(projectRoot, item.to);
|
||||
|
||||
if (!fs.existsSync(fromPath)) {
|
||||
log.warn(`Source file not found: ${item.from}`);
|
||||
continue;
|
||||
}
|
||||
|
||||
// Create backup if requested
|
||||
if (options.backup) {
|
||||
const backupPath = path.join(
|
||||
projectRoot,
|
||||
'.taskmaster-migration-backup',
|
||||
item.from
|
||||
);
|
||||
const backupDir = path.dirname(backupPath);
|
||||
if (!fs.existsSync(backupDir)) {
|
||||
fs.mkdirSync(backupDir, { recursive: true });
|
||||
}
|
||||
fs.copyFileSync(fromPath, backupPath);
|
||||
}
|
||||
|
||||
// Ensure destination directory exists
|
||||
const toDir = path.dirname(toPath);
|
||||
if (!fs.existsSync(toDir)) {
|
||||
fs.mkdirSync(toDir, { recursive: true });
|
||||
}
|
||||
|
||||
// Copy file
|
||||
fs.copyFileSync(fromPath, toPath);
|
||||
log.info(`Migrated: ${item.from} → ${item.to}`);
|
||||
|
||||
// Remove original if cleanup is requested
|
||||
if (options.cleanup) {
|
||||
fs.unlinkSync(fromPath);
|
||||
}
|
||||
}
|
||||
|
||||
// Clean up empty directories if cleanup is requested
|
||||
if (options.cleanup) {
|
||||
const dirsToCheck = ['tasks', 'scripts'];
|
||||
for (const dir of dirsToCheck) {
|
||||
const dirPath = path.join(projectRoot, dir);
|
||||
if (fs.existsSync(dirPath)) {
|
||||
try {
|
||||
const files = fs.readdirSync(dirPath);
|
||||
if (files.length === 0) {
|
||||
fs.rmdirSync(dirPath);
|
||||
log.info(`Removed empty directory: ${dir}`);
|
||||
}
|
||||
} catch (error) {
|
||||
// Directory not empty or other error, skip
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
export default { migrateProject };
|
||||
@@ -3,6 +3,8 @@
* Core functionality for managing AI model configurations
*/

import path from 'path';
import fs from 'fs';
import https from 'https';
import http from 'http';
import {
@@ -21,8 +23,6 @@ import {
getAllProviders,
getBaseUrlForRole
} from '../config-manager.js';
import { findConfigPath } from '../../../src/utils/path-utils.js';
import { log } from '../utils.js';

/**
* Fetches the list of models from OpenRouter API.
@@ -72,14 +72,14 @@ function fetchOpenRouterModels() {

/**
* Fetches the list of models from Ollama instance.
* @param {string} baseURL - The base URL for the Ollama API (e.g., "http://localhost:11434/api")
* @param {string} baseUrl - The base URL for the Ollama API (e.g., "http://localhost:11434/api")
* @returns {Promise<Array|null>} A promise that resolves with the list of model objects or null if fetch fails.
*/
function fetchOllamaModels(baseURL = 'http://localhost:11434/api') {
function fetchOllamaModels(baseUrl = 'http://localhost:11434/api') {
return new Promise((resolve) => {
try {
// Parse the base URL to extract hostname, port, and base path
const url = new URL(baseURL);
const url = new URL(baseUrl);
const isHttps = url.protocol === 'https:';
const port = url.port || (isHttps ? 443 : 80);
const basePath = url.pathname.endsWith('/')
@@ -149,27 +149,34 @@ async function getModelConfiguration(options = {}) {
|
||||
}
|
||||
};
|
||||
|
||||
if (!projectRoot) {
|
||||
throw new Error('Project root is required but not found.');
|
||||
// Check if configuration file exists using provided project root
|
||||
let configPath;
|
||||
let configExists = false;
|
||||
|
||||
if (projectRoot) {
|
||||
configPath = path.join(projectRoot, '.taskmasterconfig');
|
||||
configExists = fs.existsSync(configPath);
|
||||
report(
|
||||
'info',
|
||||
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
|
||||
);
|
||||
} else {
|
||||
configExists = isConfigFilePresent();
|
||||
report(
|
||||
'info',
|
||||
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
|
||||
);
|
||||
}
|
||||
|
||||
// Use centralized config path finding instead of hardcoded path
|
||||
const configPath = findConfigPath(null, { projectRoot });
|
||||
const configExists = isConfigFilePresent(projectRoot);
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`Checking for config file using findConfigPath, found: ${configPath}`
|
||||
);
|
||||
log(
|
||||
'debug',
|
||||
`Checking config file using isConfigFilePresent(), exists: ${configExists}`
|
||||
);
|
||||
|
||||
if (!configExists) {
|
||||
throw new Error(
|
||||
'The configuration file is missing. Run "task-master models --setup" to create it.'
|
||||
);
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'CONFIG_MISSING',
|
||||
message:
|
||||
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
try {
|
||||
@@ -279,27 +286,34 @@ async function getAvailableModelsList(options = {}) {
|
||||
}
|
||||
};
|
||||
|
||||
if (!projectRoot) {
|
||||
throw new Error('Project root is required but not found.');
|
||||
// Check if configuration file exists using provided project root
|
||||
let configPath;
|
||||
let configExists = false;
|
||||
|
||||
if (projectRoot) {
|
||||
configPath = path.join(projectRoot, '.taskmasterconfig');
|
||||
configExists = fs.existsSync(configPath);
|
||||
report(
|
||||
'info',
|
||||
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
|
||||
);
|
||||
} else {
|
||||
configExists = isConfigFilePresent();
|
||||
report(
|
||||
'info',
|
||||
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
|
||||
);
|
||||
}
|
||||
|
||||
// Use centralized config path finding instead of hardcoded path
|
||||
const configPath = findConfigPath(null, { projectRoot });
|
||||
const configExists = isConfigFilePresent(projectRoot);
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`Checking for config file using findConfigPath, found: ${configPath}`
|
||||
);
|
||||
log(
|
||||
'debug',
|
||||
`Checking config file using isConfigFilePresent(), exists: ${configExists}`
|
||||
);
|
||||
|
||||
if (!configExists) {
|
||||
throw new Error(
|
||||
'The configuration file is missing. Run "task-master models --setup" to create it.'
|
||||
);
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'CONFIG_MISSING',
|
||||
message:
|
||||
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
try {
|
||||
@@ -372,27 +386,34 @@ async function setModel(role, modelId, options = {}) {
|
||||
}
|
||||
};
|
||||
|
||||
if (!projectRoot) {
|
||||
throw new Error('Project root is required but not found.');
|
||||
// Check if configuration file exists using provided project root
|
||||
let configPath;
|
||||
let configExists = false;
|
||||
|
||||
if (projectRoot) {
|
||||
configPath = path.join(projectRoot, '.taskmasterconfig');
|
||||
configExists = fs.existsSync(configPath);
|
||||
report(
|
||||
'info',
|
||||
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
|
||||
);
|
||||
} else {
|
||||
configExists = isConfigFilePresent();
|
||||
report(
|
||||
'info',
|
||||
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
|
||||
);
|
||||
}
|
||||
|
||||
// Use centralized config path finding instead of hardcoded path
|
||||
const configPath = findConfigPath(null, { projectRoot });
|
||||
const configExists = isConfigFilePresent(projectRoot);
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`Checking for config file using findConfigPath, found: ${configPath}`
|
||||
);
|
||||
log(
|
||||
'debug',
|
||||
`Checking config file using isConfigFilePresent(), exists: ${configExists}`
|
||||
);
|
||||
|
||||
if (!configExists) {
|
||||
throw new Error(
|
||||
'The configuration file is missing. Run "task-master models --setup" to create it.'
|
||||
);
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'CONFIG_MISSING',
|
||||
message:
|
||||
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
// Validate role
|
||||
@@ -424,7 +445,7 @@ async function setModel(role, modelId, options = {}) {
|
||||
let warningMessage = null;
|
||||
|
||||
// Find the model data in internal list initially to see if it exists at all
|
||||
const modelData = availableModels.find((m) => m.id === modelId);
|
||||
let modelData = availableModels.find((m) => m.id === modelId);
|
||||
|
||||
// --- Revised Logic: Prioritize providerHint --- //
|
||||
|
||||
@@ -463,13 +484,13 @@ async function setModel(role, modelId, options = {}) {
|
||||
report('info', `Checking Ollama for ${modelId} (as hinted)...`);
|
||||
|
||||
// Get the Ollama base URL from config
|
||||
const ollamaBaseURL = getBaseUrlForRole(role, projectRoot);
|
||||
const ollamaModels = await fetchOllamaModels(ollamaBaseURL);
|
||||
const ollamaBaseUrl = getBaseUrlForRole(role, projectRoot);
|
||||
const ollamaModels = await fetchOllamaModels(ollamaBaseUrl);
|
||||
|
||||
if (ollamaModels === null) {
|
||||
// Connection failed - server probably not running
|
||||
throw new Error(
|
||||
`Unable to connect to Ollama server at ${ollamaBaseURL}. Please ensure Ollama is running and try again.`
|
||||
`Unable to connect to Ollama server at ${ollamaBaseUrl}. Please ensure Ollama is running and try again.`
|
||||
);
|
||||
} else if (ollamaModels.some((m) => m.model === modelId)) {
|
||||
determinedProvider = 'ollama';
|
||||
@@ -477,7 +498,7 @@ async function setModel(role, modelId, options = {}) {
|
||||
report('warn', warningMessage);
|
||||
} else {
|
||||
// Server is running but model not found
|
||||
const tagsUrl = `${ollamaBaseURL}/tags`;
|
||||
const tagsUrl = `${ollamaBaseUrl}/tags`;
|
||||
throw new Error(
|
||||
`Model ID "${modelId}" not found in the Ollama instance. Please verify the model is pulled and available. You can check available models with: curl ${tagsUrl}`
|
||||
);
|
||||
@@ -535,8 +556,8 @@ async function setModel(role, modelId, options = {}) {
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'CONFIG_WRITE_ERROR',
|
||||
message: 'Error writing updated configuration to configuration file'
|
||||
code: 'WRITE_ERROR',
|
||||
message: 'Error writing updated configuration to .taskmasterconfig'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
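Note that `fetchOllamaModels` above resolves to `null` when the connection fails and to an array of model objects otherwise, which is how `setModel` distinguishes a server that is down from a model that is not pulled. A usage sketch based on the signature above:

// null => Ollama unreachable; an array => server responded
const models = await fetchOllamaModels('http://localhost:11434/api');
if (models === null) {
	console.error('Ollama server is not reachable.');
} else if (!models.some((m) => m.model === 'llama3')) {
	console.warn('Model "llama3" is not pulled on this instance.');
}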
@@ -24,7 +24,6 @@ import {
} from './task-manager.js';
import { getProjectName, getDefaultSubtasks } from './config-manager.js';
import { TASK_STATUS_OPTIONS } from '../../src/constants/task-status.js';
import { TASKMASTER_TASKS_FILE } from '../../src/constants/paths.js';
import { getTaskMasterVersion } from '../../src/utils/getVersion.js';

// Create a color gradient for the banner
@@ -1525,18 +1524,10 @@ async function displayComplexityReport(reportPath) {
	if (answer.toLowerCase() === 'y' || answer.toLowerCase() === 'yes') {
		// Call the analyze-complexity command
		console.log(chalk.blue('Generating complexity report...'));
		const tasksPath = TASKMASTER_TASKS_FILE;
		if (!fs.existsSync(tasksPath)) {
			console.error(
				'❌ No tasks.json file found. Please run "task-master init" or create a tasks.json file.'
			);
			return null;
		}

		await analyzeTaskComplexity({
			output: reportPath,
			research: false, // Default to no research for speed
			file: tasksPath
			file: 'tasks/tasks.json'
		});
		// Read the newly generated report
		return displayComplexityReport(reportPath);
@@ -9,11 +9,6 @@ import chalk from 'chalk';
import dotenv from 'dotenv';
// Import specific config getters needed here
import { getLogLevel, getDebugFlag } from './config-manager.js';
import {
	COMPLEXITY_REPORT_FILE,
	LEGACY_COMPLEXITY_REPORT_FILE,
	LEGACY_CONFIG_FILE
} from '../../src/constants/paths.js';

// Global silent mode flag
let silentMode = false;
@@ -65,16 +60,16 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {

// --- Project Root Finding Utility ---
/**
 * Recursively searches upwards for project root starting from a given directory.
 * @param {string} [startDir=process.cwd()] - The directory to start searching from.
 * @param {string[]} [markers=['package.json', '.git', LEGACY_CONFIG_FILE]] - Marker files/dirs to look for.
 * @returns {string|null} The path to the project root, or null if not found.
 * Finds the project root directory by searching for marker files/directories.
 * @param {string} [startPath=process.cwd()] - The directory to start searching from.
 * @param {string[]} [markers=['package.json', '.git', '.taskmasterconfig']] - Marker files/dirs to look for.
 * @returns {string|null} The path to the project root directory, or null if not found.
 */
function findProjectRoot(
	startDir = process.cwd(),
	markers = ['package.json', '.git', LEGACY_CONFIG_FILE]
	startPath = process.cwd(),
	markers = ['package.json', '.git', '.taskmasterconfig']
) {
	let currentPath = path.resolve(startDir);
	let currentPath = path.resolve(startPath);
	const rootPath = path.parse(currentPath).root;

	while (currentPath !== rootPath) {
@@ -241,7 +236,7 @@ function sanitizePrompt(prompt) {
}

/**
 * Reads the complexity report from file
 * Reads and parses the complexity report if it exists
 * @param {string} customPath - Optional custom path to the report
 * @returns {Object|null} The parsed complexity report or null if not found
 */
@@ -249,35 +244,21 @@ function readComplexityReport(customPath = null) {
	// Get debug flag dynamically from config-manager
	const isDebug = getDebugFlag();
	try {
		let reportPath;
		if (customPath) {
			reportPath = customPath;
		} else {
			// Try new location first, then fall back to legacy
			const newPath = path.join(process.cwd(), COMPLEXITY_REPORT_FILE);
			const legacyPath = path.join(
				process.cwd(),
				LEGACY_COMPLEXITY_REPORT_FILE
			);

			reportPath = fs.existsSync(newPath) ? newPath : legacyPath;
		}

		const reportPath =
			customPath ||
			path.join(process.cwd(), 'scripts', 'task-complexity-report.json');
		if (!fs.existsSync(reportPath)) {
			if (isDebug) {
				log('debug', `Complexity report not found at ${reportPath}`);
			}
			return null;
		}

		const reportData = readJSON(reportPath);
		if (isDebug) {
			log('debug', `Successfully read complexity report from ${reportPath}`);
		}
		return reportData;
		const reportData = fs.readFileSync(reportPath, 'utf8');
		return JSON.parse(reportData);
	} catch (error) {
		log('warn', `Could not read complexity report: ${error.message}`);
		// Optionally log full error in debug mode
		if (isDebug) {
			log('error', `Error reading complexity report: ${error.message}`);
			// Use dynamic debug flag
			log('error', 'Full error details:', error);
		}
		return null;
	}
@@ -414,7 +395,7 @@ function findTaskById(
	}

	let taskResult = null;
	const originalSubtaskCount = null;
	let originalSubtaskCount = null;

	// Find the main task
	const id = parseInt(taskId, 10);
@@ -429,6 +410,7 @@ function findTaskById(

	// If task found and statusFilter provided, filter its subtasks
	if (statusFilter && task.subtasks && Array.isArray(task.subtasks)) {
		const originalSubtaskCount = task.subtasks.length;
		// Clone the task to avoid modifying the original array
		const filteredTask = { ...task };
		filteredTask.subtasks = task.subtasks.filter(
@@ -438,6 +420,7 @@ function findTaskById(
		);

		taskResult = filteredTask;
		originalSubtaskCount = originalSubtaskCount;
	}

	// If task found and complexityReport provided, add complexity data
@@ -460,7 +443,7 @@ function truncate(text, maxLength) {
		return text;
	}

	return `${text.slice(0, maxLength - 3)}...`;
	return text.slice(0, maxLength - 3) + '...';
}

/**
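`findProjectRoot` walks upward from the start directory until it finds one of the marker files or directories, stopping at the filesystem root. A sketch of the intended behavior (the paths are illustrative):

// Starting anywhere inside a repo resolves to the directory that
// contains package.json, .git, or the config marker.
const root = findProjectRoot('/home/user/project/src/utils');
// => '/home/user/project', or null if no marker is found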
@@ -4,9 +4,9 @@
 * Implementation for interacting with Anthropic models (e.g., Claude)
 * using the Vercel AI SDK.
 */

import { createAnthropic } from '@ai-sdk/anthropic';
import { BaseAIProvider } from './base-provider.js';
import { generateText, streamText, generateObject } from 'ai';
import { log } from '../../scripts/modules/utils.js'; // Assuming utils is accessible

// TODO: Implement standardized functions for generateText, streamText, generateObject

@@ -17,38 +17,207 @@ import { BaseAIProvider } from './base-provider.js';
// Remove the global variable and caching logic
// let anthropicClient;

export class AnthropicAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Anthropic';
function getClient(apiKey, baseUrl) {
	if (!apiKey) {
		// In a real scenario, this would use the config resolver.
		// Throwing error here if key isn't passed for simplicity.
		// Keep the error check for the passed key
		throw new Error('Anthropic API key is required.');
	}

	/**
	 * Creates and returns an Anthropic client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} params.apiKey - Anthropic API key
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Function} Anthropic client function
	 * @throws {Error} If API key is missing or initialization fails
	 */
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;

			if (!apiKey) {
				throw new Error('Anthropic API key is required.');
			}

			return createAnthropic({
				apiKey,
				...(baseURL && { baseURL }),
				headers: {
					'anthropic-beta': 'output-128k-2025-02-19'
				}
			});
		} catch (error) {
			this.handleError('client initialization', error);
	// Remove the check for anthropicClient
	// if (!anthropicClient) {
	// TODO: Explore passing options like default headers if needed
	// Create and return a new instance directly with standard version header
	return createAnthropic({
		apiKey: apiKey,
		...(baseUrl && { baseURL: baseUrl }),
		// Use standard version header instead of beta
		headers: {
			'anthropic-beta': 'output-128k-2025-02-19'
		}
	});
}

// --- Standardized Service Function Implementations ---

/**
 * Generates text using an Anthropic model.
 *
 * @param {object} params - Parameters for the text generation.
 * @param {string} params.apiKey - The Anthropic API key.
 * @param {string} params.modelId - The specific Anthropic model ID.
 * @param {Array<object>} params.messages - The messages array (e.g., [{ role: 'user', content: '...' }]).
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {string} [params.baseUrl] - The base URL for the Anthropic API.
 * @returns {Promise<object>} The generated text content and usage.
 * @throws {Error} If the API call fails.
 */
export async function generateAnthropicText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl
}) {
	log('debug', `Generating Anthropic text with model: ${modelId}`);
	try {
		const client = getClient(apiKey, baseUrl);
		const result = await generateText({
			model: client(modelId),
			messages: messages,
			maxTokens: maxTokens,
			temperature: temperature
			// Beta header moved to client initialization
			// TODO: Add other relevant parameters like topP, topK if needed
		});
		log(
			'debug',
			`Anthropic generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
		);
		// Return both text and usage
		return {
			text: result.text,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log('error', `Anthropic generateText failed: ${error.message}`);
		// Consider more specific error handling or re-throwing a standardized error
		throw error;
	}
}

/**
 * Streams text using an Anthropic model.
 *
 * @param {object} params - Parameters for the text streaming.
 * @param {string} params.apiKey - The Anthropic API key.
 * @param {string} params.modelId - The specific Anthropic model ID.
 * @param {Array<object>} params.messages - The messages array.
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {string} [params.baseUrl] - The base URL for the Anthropic API.
 * @returns {Promise<object>} The full stream result object from the Vercel AI SDK.
 * @throws {Error} If the API call fails to initiate the stream.
 */
export async function streamAnthropicText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl
}) {
	log('debug', `Streaming Anthropic text with model: ${modelId}`);
	try {
		const client = getClient(apiKey, baseUrl);

		log(
			'debug',
			'[streamAnthropicText] Parameters received by streamText:',
			JSON.stringify(
				{
					modelId: modelId,
					messages: messages,
					maxTokens: maxTokens,
					temperature: temperature
				},
				null,
				2
			)
		);

		const stream = await streamText({
			model: client(modelId),
			messages: messages,
			maxTokens: maxTokens,
			temperature: temperature
			// TODO: Add other relevant parameters
		});

		// *** RETURN THE FULL STREAM OBJECT, NOT JUST stream.textStream ***
		return stream;
	} catch (error) {
		log('error', `Anthropic streamText failed: ${error.message}`, error.stack);
		throw error;
	}
}

/**
 * Generates a structured object using an Anthropic model.
 * NOTE: Anthropic's tool/function calling support might have limitations
 * compared to OpenAI, especially regarding complex schemas or enforcement.
 * The Vercel AI SDK attempts to abstract this.
 *
 * @param {object} params - Parameters for object generation.
 * @param {string} params.apiKey - The Anthropic API key.
 * @param {string} params.modelId - The specific Anthropic model ID.
 * @param {Array<object>} params.messages - The messages array.
 * @param {import('zod').ZodSchema} params.schema - The Zod schema for the object.
 * @param {string} params.objectName - A name for the object/tool.
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {number} [params.maxRetries] - Max retries for validation/generation.
 * @param {string} [params.baseUrl] - The base URL for the Anthropic API.
 * @returns {Promise<object>} The generated object matching the schema and usage.
 * @throws {Error} If generation or validation fails.
 */
export async function generateAnthropicObject({
	apiKey,
	modelId,
	messages,
	schema,
	objectName = 'generated_object',
	maxTokens,
	temperature,
	maxRetries = 3,
	baseUrl
}) {
	log(
		'debug',
		`Generating Anthropic object ('${objectName}') with model: ${modelId}`
	);
	try {
		const client = getClient(apiKey, baseUrl);
		log(
			'debug',
			`Using maxTokens: ${maxTokens}, temperature: ${temperature}, model: ${modelId}`
		);
		const result = await generateObject({
			model: client(modelId),
			mode: 'tool',
			schema: schema,
			messages: messages,
			tool: {
				name: objectName,
				description: `Generate a ${objectName} based on the prompt.`
			},
			maxTokens: maxTokens,
			temperature: temperature,
			maxRetries: maxRetries
		});
		log(
			'debug',
			`Anthropic generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
		);
		// Return both object and usage
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`Anthropic generateObject ('${objectName}') failed: ${error.message}`
		);
		throw error;
	}
}
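A sketch of calling the standalone `generateAnthropicText` helper above (the model ID is illustrative):

const { text, usage } = await generateAnthropicText({
	apiKey: process.env.ANTHROPIC_API_KEY,
	modelId: 'claude-3-7-sonnet-20250219', // illustrative model ID
	messages: [{ role: 'user', content: 'Summarize the migration plan.' }],
	maxTokens: 1024,
	temperature: 0.2
});
console.log(text, usage.inputTokens, usage.outputTokens);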
@@ -1,52 +0,0 @@
/**
 * azure.js
 * AI provider implementation for Azure OpenAI models using Vercel AI SDK.
 */

import { createAzure } from '@ai-sdk/azure';
import { BaseAIProvider } from './base-provider.js';

export class AzureProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Azure OpenAI';
	}

	/**
	 * Validates Azure-specific authentication parameters
	 * @param {object} params - Parameters to validate
	 * @throws {Error} If required parameters are missing
	 */
	validateAuth(params) {
		if (!params.apiKey) {
			throw new Error('Azure API key is required');
		}

		if (!params.baseURL) {
			throw new Error(
				'Azure endpoint URL is required. Set it in .taskmasterconfig global.azureBaseURL or models.[role].baseURL'
			);
		}
	}

	/**
	 * Creates and returns an Azure OpenAI client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} params.apiKey - Azure OpenAI API key
	 * @param {string} params.baseURL - Azure OpenAI endpoint URL (from .taskmasterconfig global.azureBaseURL or models.[role].baseURL)
	 * @returns {Function} Azure OpenAI client function
	 * @throws {Error} If required parameters are missing or initialization fails
	 */
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;

			return createAzure({
				apiKey,
				baseURL
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}
}
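Per the validation above, Azure needs both an API key and an endpoint URL before a client can be created. A sketch (the environment variable name and endpoint value are placeholders, not defined by this diff):

const azure = new AzureProvider();
const client = azure.getClient({
	apiKey: process.env.AZURE_OPENAI_API_KEY, // assumed env var name
	baseURL: 'https://my-resource.openai.azure.com/' // placeholder endpoint
});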
@@ -1,214 +0,0 @@
import { generateText, streamText, generateObject } from 'ai';
import { log } from '../../scripts/modules/index.js';

/**
 * Base class for all AI providers
 */
export class BaseAIProvider {
	constructor() {
		if (this.constructor === BaseAIProvider) {
			throw new Error('BaseAIProvider cannot be instantiated directly');
		}

		// Each provider must set their name
		this.name = this.constructor.name;
	}

	/**
	 * Validates authentication parameters - can be overridden by providers
	 * @param {object} params - Parameters to validate
	 */
	validateAuth(params) {
		// Default: require API key (most providers need this)
		if (!params.apiKey) {
			throw new Error(`${this.name} API key is required`);
		}
	}

	/**
	 * Validates common parameters across all methods
	 * @param {object} params - Parameters to validate
	 */
	validateParams(params) {
		// Validate authentication (can be overridden by providers)
		this.validateAuth(params);

		// Validate required model ID
		if (!params.modelId) {
			throw new Error(`${this.name} Model ID is required`);
		}

		// Validate optional parameters
		this.validateOptionalParams(params);
	}

	/**
	 * Validates optional parameters like temperature and maxTokens
	 * @param {object} params - Parameters to validate
	 */
	validateOptionalParams(params) {
		if (
			params.temperature !== undefined &&
			(params.temperature < 0 || params.temperature > 1)
		) {
			throw new Error('Temperature must be between 0 and 1');
		}
		if (params.maxTokens !== undefined && params.maxTokens <= 0) {
			throw new Error('maxTokens must be greater than 0');
		}
	}

	/**
	 * Validates message array structure
	 */
	validateMessages(messages) {
		if (!messages || !Array.isArray(messages) || messages.length === 0) {
			throw new Error('Invalid or empty messages array provided');
		}

		for (const msg of messages) {
			if (!msg.role || !msg.content) {
				throw new Error(
					'Invalid message format. Each message must have role and content'
				);
			}
		}
	}

	/**
	 * Common error handler
	 */
	handleError(operation, error) {
		const errorMessage = error.message || 'Unknown error occurred';
		log('error', `${this.name} ${operation} failed: ${errorMessage}`, {
			error
		});
		throw new Error(
			`${this.name} API error during ${operation}: ${errorMessage}`
		);
	}

	/**
	 * Creates and returns a client instance for the provider
	 * @abstract
	 */
	getClient(params) {
		throw new Error('getClient must be implemented by provider');
	}

	/**
	 * Generates text using the provider's model
	 */
	async generateText(params) {
		try {
			this.validateParams(params);
			this.validateMessages(params.messages);

			log(
				'debug',
				`Generating ${this.name} text with model: ${params.modelId}`
			);

			const client = this.getClient(params);
			const result = await generateText({
				model: client(params.modelId),
				messages: params.messages,
				maxTokens: params.maxTokens,
				temperature: params.temperature
			});

			log(
				'debug',
				`${this.name} generateText completed successfully for model: ${params.modelId}`
			);

			return {
				text: result.text,
				usage: {
					inputTokens: result.usage?.promptTokens,
					outputTokens: result.usage?.completionTokens,
					totalTokens: result.usage?.totalTokens
				}
			};
		} catch (error) {
			this.handleError('text generation', error);
		}
	}

	/**
	 * Streams text using the provider's model
	 */
	async streamText(params) {
		try {
			this.validateParams(params);
			this.validateMessages(params.messages);

			log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);

			const client = this.getClient(params);
			const stream = await streamText({
				model: client(params.modelId),
				messages: params.messages,
				maxTokens: params.maxTokens,
				temperature: params.temperature
			});

			log(
				'debug',
				`${this.name} streamText initiated successfully for model: ${params.modelId}`
			);

			return stream;
		} catch (error) {
			this.handleError('text streaming', error);
		}
	}

	/**
	 * Generates a structured object using the provider's model
	 */
	async generateObject(params) {
		try {
			this.validateParams(params);
			this.validateMessages(params.messages);

			if (!params.schema) {
				throw new Error('Schema is required for object generation');
			}
			if (!params.objectName) {
				throw new Error('Object name is required for object generation');
			}

			log(
				'debug',
				`Generating ${this.name} object ('${params.objectName}') with model: ${params.modelId}`
			);

			const client = this.getClient(params);
			const result = await generateObject({
				model: client(params.modelId),
				messages: params.messages,
				schema: params.schema,
				mode: 'tool',
				maxTokens: params.maxTokens,
				temperature: params.temperature
			});

			log(
				'debug',
				`${this.name} generateObject completed successfully for model: ${params.modelId}`
			);

			return {
				object: result.object,
				usage: {
					inputTokens: result.usage?.promptTokens,
					outputTokens: result.usage?.completionTokens,
					totalTokens: result.usage?.totalTokens
				}
			};
		} catch (error) {
			this.handleError('object generation', error);
		}
	}
}
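The base class above centralizes parameter validation, logging, and error wrapping, so a concrete provider only supplies a constructor and `getClient`. A minimal hypothetical subclass (`createExample` stands in for a real @ai-sdk/* factory and does not exist in this codebase):

class ExampleProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Example'; // used in log and error messages
	}

	getClient(params) {
		// Must return a function that maps a model ID to a Vercel AI SDK model.
		return createExample({ apiKey: params.apiKey });
	}
}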
@@ -1,41 +0,0 @@
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
import { fromNodeProviderChain } from '@aws-sdk/credential-providers';
import { BaseAIProvider } from './base-provider.js';

export class BedrockAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Bedrock';
	}

	/**
	 * Override auth validation - Bedrock uses AWS credentials instead of API keys
	 * @param {object} params - Parameters to validate
	 */
	validateAuth(params) {}

	/**
	 * Creates and returns a Bedrock client instance.
	 * See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
	 * for AWS SDK environment variables and configuration options.
	 */
	getClient(params) {
		try {
			const {
				profile = process.env.AWS_PROFILE || 'default',
				region = process.env.AWS_DEFAULT_REGION || 'us-east-1',
				baseURL
			} = params;

			const credentialProvider = fromNodeProviderChain({ profile });

			return createAmazonBedrock({
				region,
				credentialProvider,
				...(baseURL && { baseURL })
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}
}
@@ -1,150 +0,0 @@
/**
 * google-vertex.js
 * AI provider implementation for Google Vertex AI models using Vercel AI SDK.
 */

import { createVertex } from '@ai-sdk/google-vertex';
import { BaseAIProvider } from './base-provider.js';
import { resolveEnvVariable } from '../../scripts/modules/utils.js';
import { log } from '../../scripts/modules/utils.js';

// Vertex-specific error classes
class VertexAuthError extends Error {
	constructor(message) {
		super(message);
		this.name = 'VertexAuthError';
		this.code = 'vertex_auth_error';
	}
}

class VertexConfigError extends Error {
	constructor(message) {
		super(message);
		this.name = 'VertexConfigError';
		this.code = 'vertex_config_error';
	}
}

class VertexApiError extends Error {
	constructor(message, statusCode) {
		super(message);
		this.name = 'VertexApiError';
		this.code = 'vertex_api_error';
		this.statusCode = statusCode;
	}
}

export class VertexAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Google Vertex AI';
	}

	/**
	 * Validates Vertex AI-specific authentication parameters
	 * @param {object} params - Parameters to validate
	 * @throws {Error} If required parameters are missing
	 */
	validateAuth(params) {
		const { apiKey, projectId, location, credentials } = params;

		// Check for API key OR service account credentials
		if (!apiKey && !credentials) {
			throw new VertexAuthError(
				'Either Google API key (GOOGLE_API_KEY) or service account credentials (GOOGLE_APPLICATION_CREDENTIALS) is required for Vertex AI'
			);
		}

		// Project ID is required for Vertex AI
		if (!projectId) {
			throw new VertexConfigError(
				'Google Cloud project ID is required for Vertex AI. Set VERTEX_PROJECT_ID environment variable.'
			);
		}

		// Location is required for Vertex AI
		if (!location) {
			throw new VertexConfigError(
				'Google Cloud location is required for Vertex AI. Set VERTEX_LOCATION environment variable (e.g., "us-central1").'
			);
		}
	}

	/**
	 * Creates and returns a Google Vertex AI client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} [params.apiKey] - Google API key
	 * @param {string} params.projectId - Google Cloud project ID
	 * @param {string} params.location - Google Cloud location (e.g., "us-central1")
	 * @param {object} [params.credentials] - Service account credentials object
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Function} Google Vertex AI client function
	 * @throws {Error} If required parameters are missing or initialization fails
	 */
	getClient(params) {
		try {
			// Validate required parameters
			this.validateAuth(params);

			const { apiKey, projectId, location, credentials, baseURL } = params;

			// Configure auth options - either API key or service account
			const authOptions = {};
			if (apiKey) {
				authOptions.apiKey = apiKey;
			} else if (credentials) {
				authOptions.googleAuthOptions = credentials;
			}

			// Return Vertex AI client
			return createVertex({
				...authOptions,
				projectId,
				location,
				...(baseURL && { baseURL })
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}

	/**
	 * Handle errors from Vertex AI
	 * @param {string} operation - Description of the operation that failed
	 * @param {Error} error - The error object
	 * @throws {Error} Rethrows the error with additional context
	 */
	handleError(operation, error) {
		log('error', `Vertex AI ${operation} error:`, error);

		// Handle known error types
		if (
			error.name === 'VertexAuthError' ||
			error.name === 'VertexConfigError' ||
			error.name === 'VertexApiError'
		) {
			throw error;
		}

		// Handle network/API errors
		if (error.response) {
			const statusCode = error.response.status;
			const errorMessage = error.response.data?.error?.message || error.message;

			// Categorize by status code
			if (statusCode === 401 || statusCode === 403) {
				throw new VertexAuthError(`Authentication failed: ${errorMessage}`);
			} else if (statusCode === 400) {
				throw new VertexConfigError(`Invalid request: ${errorMessage}`);
			} else {
				throw new VertexApiError(
					`API error (${statusCode}): ${errorMessage}`,
					statusCode
				);
			}
		}

		// Generic error handling
		throw new Error(`Vertex AI ${operation} failed: ${error.message}`);
	}
}
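A sketch of the parameters `VertexAIProvider.getClient` expects, matching the validation above (the project and location values are placeholders):

const vertex = new VertexAIProvider();
const client = vertex.getClient({
	projectId: process.env.VERTEX_PROJECT_ID, // e.g. 'my-gcp-project'
	location: process.env.VERTEX_LOCATION || 'us-central1',
	apiKey: process.env.GOOGLE_API_KEY // or pass `credentials` instead
});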
@@ -1,39 +1,181 @@
/**
 * google.js
 * AI provider implementation for Google AI models using Vercel AI SDK.
 * AI provider implementation for Google AI models (e.g., Gemini) using Vercel AI SDK.
 */

import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { BaseAIProvider } from './base-provider.js';
// import { GoogleGenerativeAI } from '@ai-sdk/google'; // Incorrect import
import { createGoogleGenerativeAI } from '@ai-sdk/google'; // Correct import for customization
import { generateText, streamText, generateObject } from 'ai'; // Import from main 'ai' package
import { log } from '../../scripts/modules/utils.js'; // Import logging utility

export class GoogleAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Google';
// Consider making model configurable via config-manager.js later
const DEFAULT_MODEL = 'gemini-2.5-pro-exp-03-25'; // Or a suitable default
const DEFAULT_TEMPERATURE = 0.2; // Or a suitable default

function getClient(apiKey, baseUrl) {
	if (!apiKey) {
		throw new Error('Google API key is required.');
	}
	return createGoogleGenerativeAI({
		apiKey: apiKey,
		...(baseUrl && { baseURL: baseUrl })
	});
}

	/**
	 * Creates and returns a Google AI client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} params.apiKey - Google API key
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Function} Google AI client function
	 * @throws {Error} If API key is missing or initialization fails
	 */
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;
/**
 * Generates text using a Google AI model.
 *
 * @param {object} params - Parameters for the generation.
 * @param {string} params.apiKey - Google API Key.
 * @param {string} params.modelId - Specific model ID to use (overrides default).
 * @param {number} params.temperature - Generation temperature.
 * @param {Array<object>} params.messages - The conversation history (system/user prompts).
 * @param {number} [params.maxTokens] - Optional max tokens.
 * @returns {Promise<string>} The generated text content.
 * @throws {Error} If API key is missing or API call fails.
 */
async function generateGoogleText({
	apiKey,
	modelId = DEFAULT_MODEL,
	temperature = DEFAULT_TEMPERATURE,
	messages,
	maxTokens,
	baseUrl
}) {
	if (!apiKey) {
		throw new Error('Google API key is required.');
	}
	log('info', `Generating text with Google model: ${modelId}`);

	if (!apiKey) {
		throw new Error('Google API key is required.');
	try {
		const googleProvider = getClient(apiKey, baseUrl);
		const model = googleProvider(modelId);
		const result = await generateText({
			model,
			messages,
			temperature,
			maxOutputTokens: maxTokens
		});

		// Assuming result structure provides text directly or within a property
		// return result.text; // Adjust based on actual SDK response
		// Return both text and usage
		return {
			text: result.text,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}

			return createGoogleGenerativeAI({
				apiKey,
				...(baseURL && { baseURL })
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
		};
	} catch (error) {
		log(
			'error',
			`Error generating text with Google (${modelId}): ${error.message}`
		);
		throw error;
	}
}

/**
 * Streams text using a Google AI model.
 *
 * @param {object} params - Parameters for the streaming.
 * @param {string} params.apiKey - Google API Key.
 * @param {string} params.modelId - Specific model ID to use (overrides default).
 * @param {number} params.temperature - Generation temperature.
 * @param {Array<object>} params.messages - The conversation history.
 * @param {number} [params.maxTokens] - Optional max tokens.
 * @returns {Promise<ReadableStream>} A readable stream of text deltas.
 * @throws {Error} If API key is missing or API call fails.
 */
async function streamGoogleText({
	apiKey,
	modelId = DEFAULT_MODEL,
	temperature = DEFAULT_TEMPERATURE,
	messages,
	maxTokens,
	baseUrl
}) {
	if (!apiKey) {
		throw new Error('Google API key is required.');
	}
	log('info', `Streaming text with Google model: ${modelId}`);

	try {
		const googleProvider = getClient(apiKey, baseUrl);
		const model = googleProvider(modelId);
		const stream = await streamText({
			model,
			messages,
			temperature,
			maxOutputTokens: maxTokens
		});
		return stream;
	} catch (error) {
		log(
			'error',
			`Error streaming text with Google (${modelId}): ${error.message}`
		);
		throw error;
	}
}

/**
 * Generates a structured object using a Google AI model.
 *
 * @param {object} params - Parameters for the object generation.
 * @param {string} params.apiKey - Google API Key.
 * @param {string} params.modelId - Specific model ID to use (overrides default).
 * @param {number} params.temperature - Generation temperature.
 * @param {Array<object>} params.messages - The conversation history.
 * @param {import('zod').ZodSchema} params.schema - Zod schema for the expected object.
 * @param {string} params.objectName - Name for the object generation context.
 * @param {number} [params.maxTokens] - Optional max tokens.
 * @returns {Promise<object>} The generated object matching the schema.
 * @throws {Error} If API key is missing or API call fails.
 */
async function generateGoogleObject({
	apiKey,
	modelId = DEFAULT_MODEL,
	temperature = DEFAULT_TEMPERATURE,
	messages,
	schema,
	objectName, // Note: Vercel SDK might use this differently or not at all
	maxTokens,
	baseUrl
}) {
	if (!apiKey) {
		throw new Error('Google API key is required.');
	}
	log('info', `Generating object with Google model: ${modelId}`);

	try {
		const googleProvider = getClient(apiKey, baseUrl);
		const model = googleProvider(modelId);
		const result = await generateObject({
			model,
			schema,
			messages,
			temperature,
			maxOutputTokens: maxTokens
		});

		// return object; // Return the parsed object
		// Return both object and usage
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`Error generating object with Google (${modelId}): ${error.message}`
		);
		throw error;
	}
}

export { generateGoogleText, streamGoogleText, generateGoogleObject };
@@ -1,15 +0,0 @@
/**
 * src/ai-providers/index.js
 * Central export point for all AI provider classes
 */

export { AnthropicAIProvider } from './anthropic.js';
export { PerplexityAIProvider } from './perplexity.js';
export { GoogleAIProvider } from './google.js';
export { OpenAIProvider } from './openai.js';
export { XAIProvider } from './xai.js';
export { OpenRouterAIProvider } from './openrouter.js';
export { OllamaAIProvider } from './ollama.js';
export { BedrockAIProvider } from './bedrock.js';
export { AzureProvider } from './azure.js';
export { VertexAIProvider } from './google-vertex.js';
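With the central export point above, callers construct providers without knowing the file layout. A sketch (the model ID is illustrative):

import { AnthropicAIProvider } from './src/ai-providers/index.js';

const provider = new AnthropicAIProvider();
const { text } = await provider.generateText({
	apiKey: process.env.ANTHROPIC_API_KEY,
	modelId: 'claude-3-7-sonnet-20250219', // illustrative model ID
	messages: [{ role: 'user', content: 'Hello' }]
});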
@@ -4,39 +4,160 @@
 */

import { createOllama } from 'ollama-ai-provider';
import { BaseAIProvider } from './base-provider.js';
import { log } from '../../scripts/modules/utils.js'; // Import logging utility
import { generateObject, generateText, streamText } from 'ai';

export class OllamaAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Ollama';
	}
// Consider making model configurable via config-manager.js later
const DEFAULT_MODEL = 'llama3'; // Or a suitable default for Ollama
const DEFAULT_TEMPERATURE = 0.2;

	/**
	 * Override auth validation - Ollama doesn't require API keys
	 * @param {object} params - Parameters to validate
	 */
	validateAuth(_params) {
		// Ollama runs locally and doesn't require API keys
		// No authentication validation needed
	}
function getClient(baseUrl) {
	// baseUrl is optional, defaults to http://localhost:11434
	return createOllama({
		baseUrl: baseUrl || undefined
	});
}

	/**
	 * Creates and returns an Ollama client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} [params.baseURL] - Optional Ollama base URL (defaults to http://localhost:11434)
	 * @returns {Function} Ollama client function
	 * @throws {Error} If initialization fails
	 */
	getClient(params) {
		try {
			const { baseURL } = params;
/**
 * Generates text using an Ollama model.
 *
 * @param {object} params - Parameters for the generation.
 * @param {string} params.modelId - Specific model ID to use (overrides default).
 * @param {number} params.temperature - Generation temperature.
 * @param {Array<object>} params.messages - The conversation history (system/user prompts).
 * @param {number} [params.maxTokens] - Optional max tokens.
 * @param {string} [params.baseUrl] - Optional Ollama base URL.
 * @returns {Promise<string>} The generated text content.
 * @throws {Error} If API call fails.
 */
async function generateOllamaText({
	modelId = DEFAULT_MODEL,
	messages,
	maxTokens,
	temperature = DEFAULT_TEMPERATURE,
	baseUrl
}) {
	log('info', `Generating text with Ollama model: ${modelId}`);

			return createOllama({
				...(baseURL && { baseURL })
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
	try {
		const client = getClient(baseUrl);
		const result = await generateText({
			model: client(modelId),
			messages,
			maxTokens,
			temperature
		});
		log('debug', `Ollama generated text: ${result.text}`);
		return {
			text: result.text,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`Error generating text with Ollama (${modelId}): ${error.message}`
		);
		throw error;
	}
}

/**
 * Streams text using an Ollama model.
 *
 * @param {object} params - Parameters for the streaming.
 * @param {string} params.modelId - Specific model ID to use (overrides default).
 * @param {number} params.temperature - Generation temperature.
 * @param {Array<object>} params.messages - The conversation history.
 * @param {number} [params.maxTokens] - Optional max tokens.
 * @param {string} [params.baseUrl] - Optional Ollama base URL.
 * @returns {Promise<ReadableStream>} A readable stream of text deltas.
 * @throws {Error} If API call fails.
 */
async function streamOllamaText({
	modelId = DEFAULT_MODEL,
	temperature = DEFAULT_TEMPERATURE,
	messages,
	maxTokens,
	baseUrl
}) {
	log('info', `Streaming text with Ollama model: ${modelId}`);

	try {
		const ollama = getClient(baseUrl);
		const stream = await streamText({
			model: modelId,
			messages,
			temperature,
			maxTokens
		});
		return stream;
	} catch (error) {
		log(
			'error',
			`Error streaming text with Ollama (${modelId}): ${error.message}`
		);
		throw error;
	}
}

/**
 * Generates a structured object using an Ollama model using the Vercel AI SDK's generateObject.
 *
 * @param {object} params - Parameters for the object generation.
 * @param {string} params.modelId - Specific model ID to use (overrides default).
 * @param {number} params.temperature - Generation temperature.
 * @param {Array<object>} params.messages - The conversation history.
 * @param {import('zod').ZodSchema} params.schema - Zod schema for the expected object.
 * @param {string} params.objectName - Name for the object generation context.
 * @param {number} [params.maxTokens] - Optional max tokens.
 * @param {number} [params.maxRetries] - Max retries for validation/generation.
 * @param {string} [params.baseUrl] - Optional Ollama base URL.
 * @returns {Promise<object>} The generated object matching the schema.
 * @throws {Error} If generation or validation fails.
 */
async function generateOllamaObject({
	modelId = DEFAULT_MODEL,
	temperature = DEFAULT_TEMPERATURE,
	messages,
	schema,
	objectName = 'generated_object',
	maxTokens,
	maxRetries = 3,
	baseUrl
}) {
	log('info', `Generating object with Ollama model: ${modelId}`);
	try {
		const ollama = getClient(baseUrl);
		const result = await generateObject({
			model: ollama(modelId),
			mode: 'tool',
			schema: schema,
			messages: messages,
			tool: {
				name: objectName,
				description: `Generate a ${objectName} based on the prompt.`
			},
			maxOutputTokens: maxTokens,
			temperature: temperature,
			maxRetries: maxRetries
		});
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`Ollama generateObject ('${objectName}') failed: ${error.message}`
		);
		throw error;
	}
}

export { generateOllamaText, streamOllamaText, generateOllamaObject };
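A sketch of calling the `generateOllamaObject` helper above with a Zod schema (the schema contents are illustrative):

import { z } from 'zod';

const TaskSchema = z.object({
	title: z.string(),
	priority: z.enum(['low', 'medium', 'high'])
});

const { object } = await generateOllamaObject({
	modelId: 'llama3',
	messages: [{ role: 'user', content: 'Propose one task as JSON.' }],
	schema: TaskSchema,
	objectName: 'task'
});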
@@ -1,39 +1,199 @@
|
||||
import { createOpenAI } from '@ai-sdk/openai'; // Using openai provider from Vercel AI SDK
|
||||
import { generateObject, generateText } from 'ai'; // Import necessary functions from 'ai'
|
||||
import { log } from '../../scripts/modules/utils.js';
|
||||
|
||||
function getClient(apiKey, baseUrl) {
|
||||
if (!apiKey) {
|
||||
throw new Error('OpenAI API key is required.');
|
||||
}
|
||||
return createOpenAI({
|
||||
apiKey: apiKey,
|
||||
...(baseUrl && { baseURL: baseUrl })
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* openai.js
|
||||
* AI provider implementation for OpenAI models using Vercel AI SDK.
|
||||
 * Generates text using OpenAI models via Vercel AI SDK.
 *
 * @param {object} params - Parameters including apiKey, modelId, messages, maxTokens, temperature, baseUrl.
 * @returns {Promise<object>} The generated text content and usage.
 * @throws {Error} If API call fails.
 */
export async function generateOpenAIText(params) {
	const { apiKey, modelId, messages, maxTokens, temperature, baseUrl } = params;
	log('debug', `generateOpenAIText called with model: ${modelId}`);

import { createOpenAI } from '@ai-sdk/openai';
import { BaseAIProvider } from './base-provider.js';

export class OpenAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'OpenAI';
	if (!apiKey) {
		throw new Error('OpenAI API key is required.');
	}
	if (!modelId) {
		throw new Error('OpenAI Model ID is required.');
	}
	if (!messages || !Array.isArray(messages) || messages.length === 0) {
		throw new Error('Invalid or empty messages array provided for OpenAI.');
	}

	/**
	 * Creates and returns an OpenAI client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} params.apiKey - OpenAI API key
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Function} OpenAI client function
	 * @throws {Error} If API key is missing or initialization fails
	 */
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;
	const openaiClient = getClient(apiKey, baseUrl);

	if (!apiKey) {
		throw new Error('OpenAI API key is required.');
	}
	try {
		const result = await generateText({
			model: openaiClient(modelId),
			messages,
			maxTokens,
			temperature
		});

			return createOpenAI({
				apiKey,
				...(baseURL && { baseURL })
			});
		} catch (error) {
			this.handleError('client initialization', error);
		if (!result || !result.text) {
			log(
				'warn',
				'OpenAI generateText response did not contain expected content.',
				{ result }
			);
			throw new Error('Failed to extract content from OpenAI response.');
		}
		log(
			'debug',
			`OpenAI generateText completed successfully for model: ${modelId}`
		);
		return {
			text: result.text.trim(),
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`Error in generateOpenAIText (Model: ${modelId}): ${error.message}`,
			{ error }
		);
		throw new Error(
			`OpenAI API error during text generation: ${error.message}`
		);
	}
}

/**
 * Streams text using OpenAI models via Vercel AI SDK.
 *
 * @param {object} params - Parameters including apiKey, modelId, messages, maxTokens, temperature, baseUrl.
 * @returns {Promise<ReadableStream>} A readable stream of text deltas.
 * @throws {Error} If API call fails.
 */
export async function streamOpenAIText(params) {
	const { apiKey, modelId, messages, maxTokens, temperature, baseUrl } = params;
	log('debug', `streamOpenAIText called with model: ${modelId}`);

	if (!apiKey) {
		throw new Error('OpenAI API key is required.');
	}
	if (!modelId) {
		throw new Error('OpenAI Model ID is required.');
	}
	if (!messages || !Array.isArray(messages) || messages.length === 0) {
		throw new Error(
			'Invalid or empty messages array provided for OpenAI streaming.'
		);
	}

	const openaiClient = getClient(apiKey, baseUrl);

	try {
		const stream = await openaiClient.chat.stream(messages, {
			model: modelId,
			max_tokens: maxTokens,
			temperature
		});

		log(
			'debug',
			`OpenAI streamText initiated successfully for model: ${modelId}`
		);
		return stream;
	} catch (error) {
		log(
			'error',
			`Error initiating OpenAI stream (Model: ${modelId}): ${error.message}`,
			{ error }
		);
		throw new Error(
			`OpenAI API error during streaming initiation: ${error.message}`
		);
	}
}

/**
 * Generates structured objects using OpenAI models via Vercel AI SDK.
 *
 * @param {object} params - Parameters including apiKey, modelId, messages, schema, objectName, maxTokens, temperature, baseUrl.
 * @returns {Promise<object>} The generated object matching the schema and usage.
 * @throws {Error} If API call fails or object generation fails.
 */
export async function generateOpenAIObject(params) {
	const {
		apiKey,
		modelId,
		messages,
		schema,
		objectName,
		maxTokens,
		temperature,
		baseUrl
	} = params;
	log(
		'debug',
		`generateOpenAIObject called with model: ${modelId}, object: ${objectName}`
	);

	if (!apiKey) throw new Error('OpenAI API key is required.');
	if (!modelId) throw new Error('OpenAI Model ID is required.');
	if (!messages || !Array.isArray(messages) || messages.length === 0)
		throw new Error('Invalid messages array for OpenAI object generation.');
	if (!schema)
		throw new Error('Schema is required for OpenAI object generation.');
	if (!objectName)
		throw new Error('Object name is required for OpenAI object generation.');

	const openaiClient = getClient(apiKey, baseUrl);

	try {
		const result = await generateObject({
			model: openaiClient(modelId),
			schema: schema,
			messages: messages,
			mode: 'tool',
			maxTokens: maxTokens,
			temperature: temperature
		});

		log(
			'debug',
			`OpenAI generateObject completed successfully for model: ${modelId}`
		);
		if (!result || typeof result.object === 'undefined') {
			log(
				'warn',
				'OpenAI generateObject response did not contain expected object.',
				{ result }
			);
			throw new Error('Failed to extract object from OpenAI response.');
		}
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`Error in generateOpenAIObject (Model: ${modelId}, Object: ${objectName}): ${error.message}`,
			{ error }
		);
		throw new Error(
			`OpenAI API error during object generation: ${error.message}`
		);
	}
}
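The hunk above interleaves the removed function-based OpenAI implementation with the new class-based `OpenAIProvider`. A rough usage sketch of the new `getClient` pattern, not part of the diff; the model ID and environment variable below are illustrative:

```js
import { generateText } from 'ai';

// Illustrative only: OpenAIProvider comes from the hunk above.
const provider = new OpenAIProvider();
const client = provider.getClient({ apiKey: process.env.OPENAI_API_KEY });

// getClient returns the Vercel AI SDK model factory; calling it with a
// model ID yields a model handle for generateText/streamText/generateObject.
const result = await generateText({
	model: client('gpt-4o'), // hypothetical model ID
	messages: [{ role: 'user', content: 'Hello' }],
	maxTokens: 100,
	temperature: 0.2
});
console.log(result.text, result.usage);
```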
@@ -1,39 +1,246 @@
/**
 * openrouter.js
 * AI provider implementation for OpenRouter models using Vercel AI SDK.
 */

import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { BaseAIProvider } from './base-provider.js';
import { generateText, streamText, generateObject } from 'ai';
import { log } from '../../scripts/modules/utils.js'; // Assuming utils.js is in scripts/modules

export class OpenRouterAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'OpenRouter';
	}
function getClient(apiKey, baseUrl) {
	if (!apiKey) throw new Error('OpenRouter API key is required.');
	return createOpenRouter({
		apiKey,
		...(baseUrl && { baseURL: baseUrl })
	});
}

	/**
	 * Creates and returns an OpenRouter client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} params.apiKey - OpenRouter API key
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Function} OpenRouter client function
	 * @throws {Error} If API key is missing or initialization fails
	 */
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;
/**
 * Generates text using an OpenRouter chat model.
 *
 * @param {object} params - Parameters for the text generation.
 * @param {string} params.apiKey - OpenRouter API key.
 * @param {string} params.modelId - The OpenRouter model ID (e.g., 'anthropic/claude-3.5-sonnet').
 * @param {Array<object>} params.messages - Array of message objects (system, user, assistant).
 * @param {number} [params.maxTokens] - Maximum tokens to generate.
 * @param {number} [params.temperature] - Sampling temperature.
 * @param {string} [params.baseUrl] - Base URL for the OpenRouter API.
 * @returns {Promise<string>} The generated text content.
 * @throws {Error} If the API call fails.
 */
async function generateOpenRouterText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl,
	...rest // Capture any other Vercel AI SDK compatible parameters
}) {
	if (!apiKey) throw new Error('OpenRouter API key is required.');
	if (!modelId) throw new Error('OpenRouter model ID is required.');
	if (!messages || messages.length === 0)
		throw new Error('Messages array cannot be empty.');

			if (!apiKey) {
				throw new Error('OpenRouter API key is required.');
			}
	try {
		const openrouter = getClient(apiKey, baseUrl);
		const model = openrouter.chat(modelId); // Assuming chat model

			return createOpenRouter({
				apiKey,
				...(baseURL && { baseURL })
			});
		} catch (error) {
			this.handleError('client initialization', error);
		// Capture the full result from generateText
		const result = await generateText({
			model,
			messages,
			maxTokens,
			temperature,
			...rest // Pass any additional parameters
		});

		// Check if text and usage are present
		if (!result || typeof result.text !== 'string') {
			log(
				'warn',
				`OpenRouter generateText for model ${modelId} did not return expected text.`,
				{ result }
			);
			throw new Error('Failed to extract text from OpenRouter response.');
		}
		if (!result.usage) {
			log(
				'warn',
				`OpenRouter generateText for model ${modelId} did not return usage data.`,
				{ result }
			);
			// Decide if this is critical. For now, let it pass but telemetry will be incomplete.
		}

		log('debug', `OpenRouter generateText completed for model ${modelId}`);
		// Return text and usage
		return {
			text: result.text,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		let detailedMessage = `OpenRouter generateText failed for model ${modelId}: ${error.message}`;
		if (error.cause) {
			detailedMessage += `\n\nCause:\n\n  ${typeof error.cause === 'string' ? error.cause : JSON.stringify(error.cause)}`;
		}
		// Vercel AI SDK sometimes wraps the actual API error response in error.data
		if (error.data) {
			detailedMessage += `\n\nData:\n\n  ${JSON.stringify(error.data)}`;
		}
		// Log the original error object for full context if needed for deeper debugging
		log('error', detailedMessage, { originalErrorObject: error });
		throw error;
	}
}

/**
 * Streams text using an OpenRouter chat model.
 *
 * @param {object} params - Parameters for the text streaming.
 * @param {string} params.apiKey - OpenRouter API key.
 * @param {string} params.modelId - The OpenRouter model ID (e.g., 'anthropic/claude-3.5-sonnet').
 * @param {Array<object>} params.messages - Array of message objects (system, user, assistant).
 * @param {number} [params.maxTokens] - Maximum tokens to generate.
 * @param {number} [params.temperature] - Sampling temperature.
 * @param {string} [params.baseUrl] - Base URL for the OpenRouter API.
 * @returns {Promise<ReadableStream<string>>} A readable stream of text deltas.
 * @throws {Error} If the API call fails.
 */
async function streamOpenRouterText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl,
	...rest
}) {
	if (!apiKey) throw new Error('OpenRouter API key is required.');
	if (!modelId) throw new Error('OpenRouter model ID is required.');
	if (!messages || messages.length === 0)
		throw new Error('Messages array cannot be empty.');

	try {
		const openrouter = getClient(apiKey, baseUrl);
		const model = openrouter.chat(modelId);

		// Directly return the stream from the Vercel AI SDK function
		const stream = await streamText({
			model,
			messages,
			maxTokens,
			temperature,
			...rest
		});
		return stream;
	} catch (error) {
		let detailedMessage = `OpenRouter streamText failed for model ${modelId}: ${error.message}`;
		if (error.cause) {
			detailedMessage += `\n\nCause:\n\n  ${typeof error.cause === 'string' ? error.cause : JSON.stringify(error.cause)}`;
		}
		if (error.data) {
			detailedMessage += `\n\nData:\n\n  ${JSON.stringify(error.data)}`;
		}
		log('error', detailedMessage, { originalErrorObject: error });
		throw error;
	}
}

/**
 * Generates a structured object using an OpenRouter chat model.
 *
 * @param {object} params - Parameters for object generation.
 * @param {string} params.apiKey - OpenRouter API key.
 * @param {string} params.modelId - The OpenRouter model ID.
 * @param {import('zod').ZodSchema} params.schema - The Zod schema for the expected object.
 * @param {Array<object>} params.messages - Array of message objects.
 * @param {string} [params.objectName='generated_object'] - Name for object/tool.
 * @param {number} [params.maxRetries=3] - Max retries for object generation.
 * @param {number} [params.maxTokens] - Maximum tokens.
 * @param {number} [params.temperature] - Temperature.
 * @param {string} [params.baseUrl] - Base URL for the OpenRouter API.
 * @returns {Promise<object>} The generated object matching the schema.
 * @throws {Error} If the API call fails or validation fails.
 */
async function generateOpenRouterObject({
	apiKey,
	modelId,
	schema,
	messages,
	objectName = 'generated_object',
	maxRetries = 3,
	maxTokens,
	temperature,
	baseUrl,
	...rest
}) {
	if (!apiKey) throw new Error('OpenRouter API key is required.');
	if (!modelId) throw new Error('OpenRouter model ID is required.');
	if (!schema) throw new Error('Zod schema is required for object generation.');
	if (!messages || messages.length === 0)
		throw new Error('Messages array cannot be empty.');

	try {
		const openrouter = getClient(apiKey, baseUrl);
		const model = openrouter.chat(modelId);

		// Capture the full result from generateObject
		const result = await generateObject({
			model,
			schema,
			mode: 'tool',
			tool: {
				name: objectName,
				description: `Generate an object conforming to the ${objectName} schema.`,
				parameters: schema
			},
			messages,
			maxTokens,
			temperature,
			maxRetries,
			...rest
		});

		// Check if object and usage are present
		if (!result || typeof result.object === 'undefined') {
			log(
				'warn',
				`OpenRouter generateObject for model ${modelId} did not return expected object.`,
				{ result }
			);
			throw new Error('Failed to extract object from OpenRouter response.');
		}
		if (!result.usage) {
			log(
				'warn',
				`OpenRouter generateObject for model ${modelId} did not return usage data.`,
				{ result }
			);
		}

		log('debug', `OpenRouter generateObject completed for model ${modelId}`);
		// Return object and usage
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		let detailedMessage = `OpenRouter generateObject failed for model ${modelId}: ${error.message}`;
		if (error.cause) {
			detailedMessage += `\n\nCause:\n\n  ${typeof error.cause === 'string' ? error.cause : JSON.stringify(error.cause)}`;
		}
		if (error.data) {
			detailedMessage += `\n\nData:\n\n  ${JSON.stringify(error.data)}`;
		}
		log('error', detailedMessage, { originalErrorObject: error });
		throw error;
	}
}

export {
	generateOpenRouterText,
	streamOpenRouterText,
	generateOpenRouterObject
};
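For reference, the removed function-based API above took a flat params object, and any extra Vercel AI SDK options rode along through the `...rest` spread. A sketch of a call site, with illustrative values only:

```js
// Hypothetical call site for the legacy function-based API removed above.
const { text, usage } = await generateOpenRouterText({
	apiKey: process.env.OPENROUTER_API_KEY,
	modelId: 'anthropic/claude-3.5-sonnet',
	messages: [{ role: 'user', content: 'Summarize the changes.' }],
	maxTokens: 256,
	temperature: 0.7,
	topP: 0.9 // forwarded to generateText via ...rest
});
console.log(text, usage.outputTokens);
```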
@@ -1,39 +1,181 @@
/**
 * perplexity.js
 * AI provider implementation for Perplexity models using Vercel AI SDK.
 * src/ai-providers/perplexity.js
 *
 * Implementation for interacting with Perplexity models
 * using the Vercel AI SDK.
 */

import { createPerplexity } from '@ai-sdk/perplexity';
import { BaseAIProvider } from './base-provider.js';
import { generateText, streamText, generateObject, streamObject } from 'ai';
import { log } from '../../scripts/modules/utils.js';

export class PerplexityAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Perplexity';
// --- Client Instantiation ---
// Similar to Anthropic, this expects the resolved API key to be passed in.
function getClient(apiKey, baseUrl) {
	if (!apiKey) {
		throw new Error('Perplexity API key is required.');
	}
	return createPerplexity({
		apiKey: apiKey,
		...(baseUrl && { baseURL: baseUrl })
	});
}

	/**
	 * Creates and returns a Perplexity client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} params.apiKey - Perplexity API key
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Function} Perplexity client function
	 * @throws {Error} If API key is missing or initialization fails
	 */
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;
// --- Standardized Service Function Implementations ---

			if (!apiKey) {
				throw new Error('Perplexity API key is required.');
/**
 * Generates text using a Perplexity model.
 *
 * @param {object} params - Parameters for the text generation.
 * @param {string} params.apiKey - The Perplexity API key.
 * @param {string} params.modelId - The specific Perplexity model ID.
 * @param {Array<object>} params.messages - The messages array.
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {string} [params.baseUrl] - Base URL for the Perplexity API.
 * @returns {Promise<string>} The generated text content.
 * @throws {Error} If the API call fails.
 */
export async function generatePerplexityText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl
}) {
	log('debug', `Generating Perplexity text with model: ${modelId}`);
	try {
		const client = getClient(apiKey, baseUrl);
		const result = await generateText({
			model: client(modelId),
			messages: messages,
			maxTokens: maxTokens,
			temperature: temperature
		});
		log(
			'debug',
			`Perplexity generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
		);

		return {
			text: result.text,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}

			return createPerplexity({
				apiKey,
				baseURL: baseURL || 'https://api.perplexity.ai'
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
		};
	} catch (error) {
		log('error', `Perplexity generateText failed: ${error.message}`);
		throw error;
	}
}

/**
 * Streams text using a Perplexity model.
 *
 * @param {object} params - Parameters for the text streaming.
 * @param {string} params.apiKey - The Perplexity API key.
 * @param {string} params.modelId - The specific Perplexity model ID.
 * @param {Array<object>} params.messages - The messages array.
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {string} [params.baseUrl] - Base URL for the Perplexity API.
 * @returns {Promise<object>} The full stream result object from the Vercel AI SDK.
 * @throws {Error} If the API call fails to initiate the stream.
 */
export async function streamPerplexityText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl
}) {
	log('debug', `Streaming Perplexity text with model: ${modelId}`);
	try {
		const client = getClient(apiKey, baseUrl);
		const stream = await streamText({
			model: client(modelId),
			messages: messages,
			maxTokens: maxTokens,
			temperature: temperature
		});
		return stream;
	} catch (error) {
		log('error', `Perplexity streamText failed: ${error.message}`);
		throw error;
	}
}

/**
 * Generates a structured object using a Perplexity model.
 * Note: Perplexity API might not directly support structured object generation
 * in the same way as OpenAI or Anthropic. This function might need
 * adjustments or might not be feasible depending on the model's capabilities
 * and the Vercel AI SDK's support for Perplexity in this context.
 *
 * @param {object} params - Parameters for object generation.
 * @param {string} params.apiKey - The Perplexity API key.
 * @param {string} params.modelId - The specific Perplexity model ID.
 * @param {Array<object>} params.messages - The messages array.
 * @param {import('zod').ZodSchema} params.schema - The Zod schema for the object.
 * @param {string} params.objectName - A name for the object/tool.
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {number} [params.maxRetries] - Max retries for validation/generation.
 * @param {string} [params.baseUrl] - Base URL for the Perplexity API.
 * @returns {Promise<object>} The generated object matching the schema.
 * @throws {Error} If generation or validation fails or is unsupported.
 */
export async function generatePerplexityObject({
	apiKey,
	modelId,
	messages,
	schema,
	objectName = 'generated_object',
	maxTokens,
	temperature,
	maxRetries = 1,
	baseUrl
}) {
	log(
		'debug',
		`Attempting to generate Perplexity object ('${objectName}') with model: ${modelId}`
	);
	log(
		'warn',
		'generateObject support for Perplexity might be limited or experimental.'
	);
	try {
		const client = getClient(apiKey, baseUrl);
		const result = await generateObject({
			model: client(modelId),
			schema: schema,
			messages: messages,
			maxTokens: maxTokens,
			temperature: temperature,
			maxRetries: maxRetries
		});
		log(
			'debug',
			`Perplexity generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
		);
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`Perplexity generateObject ('${objectName}') failed: ${error.message}`
		);
		throw new Error(
			`Failed to generate object with Perplexity: ${error.message}. Structured output might not be fully supported.`
		);
	}
}

// TODO: Implement streamPerplexityObject if needed and feasible.
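Since the comments above flag `generateObject` support for Perplexity as limited or experimental, callers should treat a thrown error as an expected outcome rather than a bug. A defensive call sketch; the schema and model ID are illustrative, not from the diff:

```js
import { z } from 'zod';

const taskSchema = z.object({
	title: z.string(),
	priority: z.enum(['low', 'medium', 'high'])
});

try {
	const { object } = await generatePerplexityObject({
		apiKey: process.env.PERPLEXITY_API_KEY,
		modelId: 'sonar-pro', // illustrative model ID
		messages: [{ role: 'user', content: 'Propose one task.' }],
		schema: taskSchema,
		objectName: 'task'
	});
	console.log(object.title);
} catch (error) {
	// Structured output may simply be unsupported for this model.
	console.error(error.message);
}
```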
@@ -1,39 +1,178 @@
/**
 * xai.js
 * AI provider implementation for xAI models using Vercel AI SDK.
 * src/ai-providers/xai.js
 *
 * Implementation for interacting with xAI models (e.g., Grok)
 * using the Vercel AI SDK.
 */

import { createXai } from '@ai-sdk/xai';
import { BaseAIProvider } from './base-provider.js';
import { generateText, streamText, generateObject } from 'ai'; // Only import what's used
import { log } from '../../scripts/modules/utils.js'; // Assuming utils is accessible

export class XAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'xAI';
// --- Client Instantiation ---
function getClient(apiKey, baseUrl) {
	if (!apiKey) {
		throw new Error('xAI API key is required.');
	}
	return createXai({
		apiKey: apiKey,
		...(baseUrl && { baseURL: baseUrl })
	});
}

	/**
	 * Creates and returns an xAI client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} params.apiKey - xAI API key
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Function} xAI client function
	 * @throws {Error} If API key is missing or initialization fails
	 */
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;
// --- Standardized Service Function Implementations ---

			if (!apiKey) {
				throw new Error('xAI API key is required.');
/**
 * Generates text using an xAI model.
 *
 * @param {object} params - Parameters for the text generation.
 * @param {string} params.apiKey - The xAI API key.
 * @param {string} params.modelId - The specific xAI model ID (e.g., 'grok-3').
 * @param {Array<object>} params.messages - The messages array (e.g., [{ role: 'user', content: '...' }]).
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {string} [params.baseUrl] - The base URL for the xAI API.
 * @returns {Promise<object>} The generated text content and usage.
 * @throws {Error} If the API call fails.
 */
export async function generateXaiText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl
}) {
	log('debug', `Generating xAI text with model: ${modelId}`);
	try {
		const client = getClient(apiKey, baseUrl);
		const result = await generateText({
			model: client(modelId),
			messages: messages,
			maxTokens: maxTokens,
			temperature: temperature
		});
		log(
			'debug',
			`xAI generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
		);
		// Return text and usage
		return {
			text: result.text,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}

			return createXai({
				apiKey,
				baseURL: baseURL || 'https://api.x.ai/v1'
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
		};
	} catch (error) {
		log('error', `xAI generateText failed: ${error.message}`);
		throw error;
	}
}

/**
 * Streams text using an xAI model.
 *
 * @param {object} params - Parameters for the text streaming.
 * @param {string} params.apiKey - The xAI API key.
 * @param {string} params.modelId - The specific xAI model ID.
 * @param {Array<object>} params.messages - The messages array.
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {string} [params.baseUrl] - The base URL for the xAI API.
 * @returns {Promise<object>} The full stream result object from the Vercel AI SDK.
 * @throws {Error} If the API call fails to initiate the stream.
 */
export async function streamXaiText({
	apiKey,
	modelId,
	messages,
	maxTokens,
	temperature,
	baseUrl
}) {
	log('debug', `Streaming xAI text with model: ${modelId}`);
	try {
		const client = getClient(apiKey, baseUrl);
		const stream = await streamText({
			model: client(modelId),
			messages: messages,
			maxTokens: maxTokens,
			temperature: temperature
		});
		return stream;
	} catch (error) {
		log('error', `xAI streamText failed: ${error.message}`, error.stack);
		throw error;
	}
}

/**
 * Generates a structured object using an xAI model.
 * Note: Based on search results, xAI models do not currently support Object Generation.
 * This function is included for structural consistency but will likely fail if called.
 *
 * @param {object} params - Parameters for object generation.
 * @param {string} params.apiKey - The xAI API key.
 * @param {string} params.modelId - The specific xAI model ID.
 * @param {Array<object>} params.messages - The messages array.
 * @param {import('zod').ZodSchema} params.schema - The Zod schema for the object.
 * @param {string} params.objectName - A name for the object/tool.
 * @param {number} [params.maxTokens] - Maximum tokens for the response.
 * @param {number} [params.temperature] - Temperature for generation.
 * @param {number} [params.maxRetries] - Max retries for validation/generation.
 * @param {string} [params.baseUrl] - The base URL for the xAI API.
 * @returns {Promise<object>} The generated object matching the schema and its usage.
 * @throws {Error} If generation or validation fails.
 */
export async function generateXaiObject({
	apiKey,
	modelId,
	messages,
	schema,
	objectName = 'generated_xai_object',
	maxTokens,
	temperature,
	maxRetries = 3,
	baseUrl
}) {
	log(
		'warn',
		`Attempting to generate xAI object ('${objectName}') with model: ${modelId}. This may not be supported by the provider.`
	);
	try {
		const client = getClient(apiKey, baseUrl);
		const result = await generateObject({
			model: client(modelId),
			// Note: mode might need adjustment if xAI ever supports object generation differently
			mode: 'tool',
			schema: schema,
			messages: messages,
			tool: {
				name: objectName,
				description: `Generate a ${objectName} based on the prompt.`,
				parameters: schema
			},
			maxTokens: maxTokens,
			temperature: temperature,
			maxRetries: maxRetries
		});
		log(
			'debug',
			`xAI generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
		);
		// Return object and usage
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
			`xAI generateObject ('${objectName}') failed: ${error.message}. (Likely unsupported by provider)`
		);
		throw error;
	}
}
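In the new class-based `getClient` above, a missing `baseURL` falls back to `https://api.x.ai/v1`. A minimal sketch of the intended flow; the model ID is taken from the JSDoc example above, everything else is illustrative:

```js
const xai = new XAIProvider();
const client = xai.getClient({ apiKey: process.env.XAI_API_KEY });
// The returned factory resolves model IDs to Vercel AI SDK model handles.
const model = client('grok-3');
```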
@@ -1,54 +0,0 @@
/**
 * Path constants for Task Master application
 */

// .taskmaster directory structure paths
export const TASKMASTER_DIR = '.taskmaster';
export const TASKMASTER_TASKS_DIR = '.taskmaster/tasks';
export const TASKMASTER_DOCS_DIR = '.taskmaster/docs';
export const TASKMASTER_REPORTS_DIR = '.taskmaster/reports';
export const TASKMASTER_TEMPLATES_DIR = '.taskmaster/templates';

// Task Master configuration files
export const TASKMASTER_CONFIG_FILE = '.taskmaster/config.json';
export const LEGACY_CONFIG_FILE = '.taskmasterconfig';

// Task Master report files
export const COMPLEXITY_REPORT_FILE =
	'.taskmaster/reports/task-complexity-report.json';
export const LEGACY_COMPLEXITY_REPORT_FILE =
	'scripts/task-complexity-report.json';

// Task Master PRD file paths
export const PRD_FILE = '.taskmaster/docs/prd.txt';
export const LEGACY_PRD_FILE = 'scripts/prd.txt';

// Task Master template files
export const EXAMPLE_PRD_FILE = '.taskmaster/templates/example_prd.txt';
export const LEGACY_EXAMPLE_PRD_FILE = 'scripts/example_prd.txt';

// Task Master task file paths
export const TASKMASTER_TASKS_FILE = '.taskmaster/tasks/tasks.json';
export const LEGACY_TASKS_FILE = 'tasks/tasks.json';

// General project files (not Task Master specific but commonly used)
export const ENV_EXAMPLE_FILE = '.env.example';
export const GITIGNORE_FILE = '.gitignore';

// Task file naming pattern
export const TASK_FILE_PREFIX = 'task_';
export const TASK_FILE_EXTENSION = '.txt';

/**
 * Project markers used to identify a task-master project root
 * These files/directories indicate that a directory is a Task Master project
 */
export const PROJECT_MARKERS = [
	'.taskmaster', // New taskmaster directory
	LEGACY_CONFIG_FILE, // .taskmasterconfig
	'tasks.json', // Generic tasks file
	LEGACY_TASKS_FILE, // tasks/tasks.json (legacy location)
	TASKMASTER_TASKS_FILE, // .taskmaster/tasks/tasks.json (new location)
	'.git', // Git repository
	'.svn' // SVN repository
];
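These markers exist so callers can cheaply test whether a directory looks like a Task Master project. A minimal sketch of such a check; the helper name is illustrative, and the real walk-up logic lives in `findProjectRoot` in the next file:

```js
import fs from 'fs';
import path from 'path';

// Hypothetical helper: true when any project marker exists in dir.
function looksLikeProjectRoot(dir) {
	return PROJECT_MARKERS.some((marker) =>
		fs.existsSync(path.join(dir, marker))
	);
}
```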
@@ -1,404 +0,0 @@
/**
 * Path utility functions for Task Master
 * Provides centralized path resolution logic for both CLI and MCP use cases
 */

import path from 'path';
import fs from 'fs';
import {
	TASKMASTER_TASKS_FILE,
	LEGACY_TASKS_FILE,
	TASKMASTER_DOCS_DIR,
	TASKMASTER_REPORTS_DIR,
	COMPLEXITY_REPORT_FILE,
	TASKMASTER_CONFIG_FILE,
	LEGACY_CONFIG_FILE
} from '../constants/paths.js';

/**
 * Find the project root directory by looking for project markers
 * @param {string} startDir - Directory to start searching from
 * @returns {string|null} - Project root path or null if not found
 */
export function findProjectRoot(startDir = process.cwd()) {
	const projectMarkers = [
		'.taskmaster',
		TASKMASTER_TASKS_FILE,
		'tasks.json',
		LEGACY_TASKS_FILE,
		'.git',
		'.svn',
		'package.json',
		'yarn.lock',
		'package-lock.json',
		'pnpm-lock.yaml'
	];

	let currentDir = path.resolve(startDir);
	const rootDir = path.parse(currentDir).root;

	while (currentDir !== rootDir) {
		// Check if current directory contains any project markers
		for (const marker of projectMarkers) {
			const markerPath = path.join(currentDir, marker);
			if (fs.existsSync(markerPath)) {
				return currentDir;
			}
		}
		currentDir = path.dirname(currentDir);
	}

	return null;
}
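Usage sketch: the search walks upward from `startDir` until one of the markers exists, or gives up at the filesystem root. The path below is illustrative:

```js
const root = findProjectRoot('/home/user/my-project/src/utils');
// -> '/home/user/my-project' if that directory contains .git, .taskmaster, etc.
// -> null when no ancestor directory carries a marker
```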
/**
 * Find the tasks.json file path with fallback logic
 * @param {string|null} explicitPath - Explicit path provided by user (highest priority)
 * @param {Object|null} args - Args object from MCP args (optional)
 * @param {Object|null} log - Logger object (optional)
 * @returns {string|null} - Resolved tasks.json path or null if not found
 */
export function findTasksPath(explicitPath = null, args = null, log = null) {
	const logger = log || console;

	// 1. If explicit path is provided, use it (highest priority)
	if (explicitPath) {
		const resolvedPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(process.cwd(), explicitPath);

		if (fs.existsSync(resolvedPath)) {
			logger.info?.(`Using explicit tasks path: ${resolvedPath}`);
			return resolvedPath;
		} else {
			logger.warn?.(
				`Explicit tasks path not found: ${resolvedPath}, trying fallbacks`
			);
		}
	}

	// 2. Try to get project root from args (MCP) or find it
	const projectRoot = args?.projectRoot || findProjectRoot();

	if (!projectRoot) {
		logger.warn?.('Could not determine project root directory');
		return null;
	}

	// 3. Check possible locations in order of preference
	const possiblePaths = [
		path.join(projectRoot, TASKMASTER_TASKS_FILE), // .taskmaster/tasks/tasks.json (NEW)
		path.join(projectRoot, 'tasks.json'), // tasks.json in root (LEGACY)
		path.join(projectRoot, LEGACY_TASKS_FILE) // tasks/tasks.json (LEGACY)
	];

	for (const tasksPath of possiblePaths) {
		if (fs.existsSync(tasksPath)) {
			logger.info?.(`Found tasks file at: ${tasksPath}`);

			// Issue deprecation warning for legacy paths
			if (
				tasksPath.includes('tasks/tasks.json') &&
				!tasksPath.includes('.taskmaster')
			) {
				logger.warn?.(
					`⚠️ DEPRECATION WARNING: Found tasks.json in legacy location '${tasksPath}'. Please migrate to the new .taskmaster directory structure. Run 'task-master migrate' to automatically migrate your project.`
				);
			} else if (
				tasksPath.endsWith('tasks.json') &&
				!tasksPath.includes('.taskmaster') &&
				!tasksPath.includes('tasks/')
			) {
				logger.warn?.(
					`⚠️ DEPRECATION WARNING: Found tasks.json in legacy root location '${tasksPath}'. Please migrate to the new .taskmaster directory structure. Run 'task-master migrate' to automatically migrate your project.`
				);
			}

			return tasksPath;
		}
	}

	logger.warn?.(`No tasks.json found in project: ${projectRoot}`);
	return null;
}
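The same precedence applies to every `find*` helper in this file; for `findTasksPath` it works out to the order sketched below. The project root value is illustrative:

```js
// 1. explicitPath argument, when it exists on disk
// 2. <root>/.taskmaster/tasks/tasks.json  (new layout)
// 3. <root>/tasks.json                    (legacy root file)
// 4. <root>/tasks/tasks.json              (legacy directory)
const tasksPath = findTasksPath(null, { projectRoot: '/repo' });
```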
/**
 * Find the PRD document file path with fallback logic
 * @param {string|null} explicitPath - Explicit path provided by user (highest priority)
 * @param {Object|null} args - Args object for MCP context (optional)
 * @param {Object|null} log - Logger object (optional)
 * @returns {string|null} - Resolved PRD document path or null if not found
 */
export function findPRDPath(explicitPath = null, args = null, log = null) {
	const logger = log || console;

	// 1. If explicit path is provided, use it (highest priority)
	if (explicitPath) {
		const resolvedPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(process.cwd(), explicitPath);

		if (fs.existsSync(resolvedPath)) {
			logger.info?.(`Using explicit PRD path: ${resolvedPath}`);
			return resolvedPath;
		} else {
			logger.warn?.(
				`Explicit PRD path not found: ${resolvedPath}, trying fallbacks`
			);
		}
	}

	// 2. Try to get project root from args (MCP) or find it
	const projectRoot = args?.projectRoot || findProjectRoot();

	if (!projectRoot) {
		logger.warn?.('Could not determine project root directory');
		return null;
	}

	// 3. Check possible locations in order of preference
	const locations = [
		TASKMASTER_DOCS_DIR, // .taskmaster/docs/ (NEW)
		'scripts/', // Legacy location
		'' // Project root
	];

	const fileNames = ['PRD.md', 'prd.md', 'PRD.txt', 'prd.txt'];

	for (const location of locations) {
		for (const fileName of fileNames) {
			const prdPath = path.join(projectRoot, location, fileName);
			if (fs.existsSync(prdPath)) {
				logger.info?.(`Found PRD document at: ${prdPath}`);

				// Issue deprecation warning for legacy paths
				if (location === 'scripts/' || location === '') {
					logger.warn?.(
						`⚠️ DEPRECATION WARNING: Found PRD file in legacy location '${prdPath}'. Please migrate to .taskmaster/docs/ directory. Run 'task-master migrate' to automatically migrate your project.`
					);
				}

				return prdPath;
			}
		}
	}

	logger.warn?.(`No PRD document found in project: ${projectRoot}`);
	return null;
}
/**
 * Find the complexity report file path with fallback logic
 * @param {string|null} explicitPath - Explicit path provided by user (highest priority)
 * @param {Object|null} args - Args object for MCP context (optional)
 * @param {Object|null} log - Logger object (optional)
 * @returns {string|null} - Resolved complexity report path or null if not found
 */
export function findComplexityReportPath(
	explicitPath = null,
	args = null,
	log = null
) {
	const logger = log || console;

	// 1. If explicit path is provided, use it (highest priority)
	if (explicitPath) {
		const resolvedPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(process.cwd(), explicitPath);

		if (fs.existsSync(resolvedPath)) {
			logger.info?.(`Using explicit complexity report path: ${resolvedPath}`);
			return resolvedPath;
		} else {
			logger.warn?.(
				`Explicit complexity report path not found: ${resolvedPath}, trying fallbacks`
			);
		}
	}

	// 2. Try to get project root from args (MCP) or find it
	const projectRoot = args?.projectRoot || findProjectRoot();

	if (!projectRoot) {
		logger.warn?.('Could not determine project root directory');
		return null;
	}

	// 3. Check possible locations in order of preference
	const locations = [
		TASKMASTER_REPORTS_DIR, // .taskmaster/reports/ (NEW)
		'scripts/', // Legacy location
		'' // Project root
	];

	const fileNames = ['task-complexity-report.json', 'complexity-report.json'];

	for (const location of locations) {
		for (const fileName of fileNames) {
			const reportPath = path.join(projectRoot, location, fileName);
			if (fs.existsSync(reportPath)) {
				logger.info?.(`Found complexity report at: ${reportPath}`);

				// Issue deprecation warning for legacy paths
				if (location === 'scripts/' || location === '') {
					logger.warn?.(
						`⚠️ DEPRECATION WARNING: Found complexity report in legacy location '${reportPath}'. Please migrate to .taskmaster/reports/ directory. Run 'task-master migrate' to automatically migrate your project.`
					);
				}

				return reportPath;
			}
		}
	}

	logger.warn?.(`No complexity report found in project: ${projectRoot}`);
	return null;
}
/**
 * Resolve output path for tasks.json (create if needed)
 * @param {string|null} explicitPath - Explicit output path provided by user
 * @param {Object|null} args - Args object for MCP context (optional)
 * @param {Object|null} log - Logger object (optional)
 * @returns {string} - Resolved output path for tasks.json
 */
export function resolveTasksOutputPath(
	explicitPath = null,
	args = null,
	log = null
) {
	const logger = log || console;

	// 1. If explicit path is provided, use it
	if (explicitPath) {
		const resolvedPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(process.cwd(), explicitPath);

		logger.info?.(`Using explicit output path: ${resolvedPath}`);
		return resolvedPath;
	}

	// 2. Try to get project root from args (MCP) or find it
	const projectRoot = args?.projectRoot || findProjectRoot() || process.cwd();

	// 3. Use new .taskmaster structure by default
	const defaultPath = path.join(projectRoot, TASKMASTER_TASKS_FILE);
	logger.info?.(`Using default output path: ${defaultPath}`);

	// Ensure the directory exists
	const outputDir = path.dirname(defaultPath);
	if (!fs.existsSync(outputDir)) {
		logger.info?.(`Creating tasks directory: ${outputDir}`);
		fs.mkdirSync(outputDir, { recursive: true });
	}

	return defaultPath;
}
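Unlike the `find*` helpers, the `resolve*` helpers always return a path and create the parent directory on demand, so a subsequent write cannot fail with ENOENT. A sketch of the intended write flow:

```js
const outPath = resolveTasksOutputPath(); // <root>/.taskmaster/tasks/tasks.json
fs.writeFileSync(outPath, JSON.stringify({ tasks: [] }, null, 2));
```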
/**
 * Resolve output path for complexity report (create if needed)
 * @param {string|null} explicitPath - Explicit output path provided by user
 * @param {Object|null} args - Args object for MCP context (optional)
 * @param {Object|null} log - Logger object (optional)
 * @returns {string} - Resolved output path for complexity report
 */
export function resolveComplexityReportOutputPath(
	explicitPath = null,
	args = null,
	log = null
) {
	const logger = log || console;

	// 1. If explicit path is provided, use it
	if (explicitPath) {
		const resolvedPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(process.cwd(), explicitPath);

		logger.info?.(
			`Using explicit complexity report output path: ${resolvedPath}`
		);
		return resolvedPath;
	}

	// 2. Try to get project root from args (MCP) or find it
	const projectRoot = args?.projectRoot || findProjectRoot() || process.cwd();

	// 3. Use new .taskmaster structure by default
	const defaultPath = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
	logger.info?.(`Using default complexity report output path: ${defaultPath}`);

	// Ensure the directory exists
	const outputDir = path.dirname(defaultPath);
	if (!fs.existsSync(outputDir)) {
		logger.info?.(`Creating reports directory: ${outputDir}`);
		fs.mkdirSync(outputDir, { recursive: true });
	}

	return defaultPath;
}
/**
 * Find the configuration file path with fallback logic
 * @param {string|null} explicitPath - Explicit path provided by user (highest priority)
 * @param {Object|null} args - Args object for MCP context (optional)
 * @param {Object|null} log - Logger object (optional)
 * @returns {string|null} - Resolved config file path or null if not found
 */
export function findConfigPath(explicitPath = null, args = null, log = null) {
	const logger = log || console;

	// 1. If explicit path is provided, use it (highest priority)
	if (explicitPath) {
		const resolvedPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(process.cwd(), explicitPath);

		if (fs.existsSync(resolvedPath)) {
			logger.info?.(`Using explicit config path: ${resolvedPath}`);
			return resolvedPath;
		} else {
			logger.warn?.(
				`Explicit config path not found: ${resolvedPath}, trying fallbacks`
			);
		}
	}

	// 2. Try to get project root from args (MCP) or find it
	const projectRoot = args?.projectRoot || findProjectRoot();

	if (!projectRoot) {
		logger.warn?.('Could not determine project root directory');
		return null;
	}

	// 3. Check possible locations in order of preference
	const possiblePaths = [
		path.join(projectRoot, TASKMASTER_CONFIG_FILE), // NEW location
		path.join(projectRoot, LEGACY_CONFIG_FILE) // LEGACY location
	];

	for (const configPath of possiblePaths) {
		if (fs.existsSync(configPath)) {
			try {
				logger.info?.(`Found config file at: ${configPath}`);
			} catch (error) {
				// Silently handle logging errors during testing
			}

			// Issue deprecation warning for legacy paths
			if (configPath?.endsWith(LEGACY_CONFIG_FILE)) {
				logger.warn?.(
					`⚠️ DEPRECATION WARNING: Found configuration in legacy location '${configPath}'. Please migrate to .taskmaster/config.json. Run 'task-master migrate' to automatically migrate your project.`
				);
			}

			return configPath;
		}
	}

	logger.warn?.(`No configuration file found in project: ${projectRoot}`);
	return null;
}
Some files were not shown because too many files have changed in this diff.