Merge pull request #240 from eyaltoledano/better-ai-model-management

- introduces model management features across CLI and MCP
- introduces an interactive model setup
- introduces API key verification checks across CLI and MCP
- introduces Gemini support
- introduces OpenAI support
- introduces xAI support
- introduces OpenRouter support
- introduces custom model support via OpenRouter and soon Ollama
- introduces `--research` flag to the `add-task` command to hit the research model right away
- introduces `--status`/`-s` flag for the `show` command (and `get-task` MCP tool) to filter subtasks by any status
- a bunch of small fixes and a few stealth additions
- refactors the test suite to work with the new structure
- introduces an AI-powered E2E test for testing all Taskmaster CLI commands
This commit is contained in:
Eyal Toledano
2025-04-30 22:13:46 -04:00
committed by GitHub
160 changed files with 22806 additions and 12983 deletions

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Add support for Google Gemini models via Vercel AI SDK integration.

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Add xAI provider and Grok models support

View File

@@ -0,0 +1,8 @@
---
'task-master-ai': minor
---
feat(expand): Enhance `expand` and `expand-all` commands
- Integrate `task-complexity-report.json` to automatically determine the number of subtasks and to use tailored prompts for expansion based on prior analysis. You no longer need to copy-paste the recommended prompt; if the report exists, its prompt is used for you. Just run `task-master expand --id=<task id> --research` and the tailored prompt is applied automatically. No extra prompt needed.
- Change the default behavior to *append* new subtasks to existing ones; use the `--force` flag to clear existing subtasks before expanding. Appending is helpful when you want to add more subtasks to a task in batches from a given prompt; use `--force` when you want to start fresh with a task's subtasks.

View File

@@ -0,0 +1,7 @@
---
'task-master-ai': minor
---
Adds support for the OpenRouter AI provider. Users can now configure models available through OpenRouter (requiring an `OPENROUTER_API_KEY`) via the `task-master models` command, granting access to a wide range of additional LLMs.
- IMPORTANT FYI ABOUT OPENROUTER: Taskmaster relies on the AI SDK, which itself relies on tool use. It looks like **free** models sometimes do not include tool use. For example, Gemini 2.5 Pro (free) failed via OpenRouter (no tool use) but worked fine on the paid version of the model. Custom model support for OpenRouter is considered experimental and likely will not be further improved for some time.

View File

@@ -0,0 +1,13 @@
---
'task-master-ai': patch
---
Improve the `init` command's robustness and update its dependencies.
- **Update Initialization Dependencies:** Ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
- **Silence `npm install` during `init`:** Prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
- **Improve Conditional Model Setup:** Reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
- **Refactor `init.js`:** Remove internal `isInteractive` flag logic.
- **Update `init` Instructions:** Tweak the "Getting Started" text displayed after `init`.
- **Fix MCP Server Launch:** Update `.cursor/mcp.json` template to use `node ./mcp-server/server.js` instead of `npx task-master-mcp`.
- **Update Default Model:** Change the default main model in the `.taskmasterconfig` template.

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Fixes an issue where `add-task` ignored manually defined properties and still needlessly hit the AI endpoint.

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---
Adds model management and a new configuration file, `.taskmasterconfig`, which houses the models used for the main, research, and fallback roles. Adds a `models` command with setter flags, plus a `--setup` flag for an interactive setup (we should be calling this during `init`). Running `models` without flags shows a table of active and available models, including SWE scores and token costs, which are manually entered in `supported-models.json`, the new place where supported models are defined. `config-manager.js` is the core module responsible for managing the new config.

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Fixes an issue where `remove-subtask` with comma-separated task/subtask IDs only deleted the first ID. Closes #140

View File

@@ -0,0 +1,10 @@
---
'task-master-ai': patch
---
Improves the `next` command to be subtask-aware
- The logic for determining the "next task" (the `findNextTask` function, used by `task-master next` and the `next_task` MCP tool) has been significantly improved. Previously, it only considered top-level tasks, making its recommendation less useful when a parent task containing subtasks was already marked 'in-progress'.
- The updated logic now prioritizes finding the next available subtask within any 'in-progress' parent task, considering subtask dependencies and priority.
- If no suitable subtask is found within active parent tasks, it falls back to recommending the next eligible top-level task based on the original criteria (status, dependencies, priority).
This change makes the `next` command much more relevant and helpful during the implementation phase of complex tasks.

View File

@@ -0,0 +1,11 @@
---
'task-master-ai': minor
---
Adds custom model ID support for Ollama and OpenRouter providers.
- Adds `--ollama` and `--openrouter` flags to the `task-master models --set-<role>` command to set models for those providers outside the supported models list.
- Updated `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
- Implemented live validation against OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
- Refined logic to prioritize explicit provider flags/choices over internal model list lookups in case of ID conflicts.
- Added warnings when setting custom/unvalidated models.
- We obviously don't recommend going with a custom, unproven model. If you do and find performance is good, please let us know so we can add it to the list of supported models.

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Add `--status` flag to `show` command to filter displayed subtasks.

View File

@@ -0,0 +1,7 @@
---
'task-master-ai': minor
---
Integrate OpenAI as a new AI provider.
- Enhance `models` command/tool to display API key status.
- Implement model-specific `maxTokens` overrides based on `supported-models.json` to guard against misconfigured max token values.

View File

@@ -0,0 +1,9 @@
---
'task-master-ai': minor
---
Tweaks Perplexity AI calls for research mode to max out input tokens and get day-fresh information
- Forces temperature to 0.1 for highly deterministic output with minimal variation
- Adds a system prompt to further improve the output
- Correctly uses close to Perplexity's maximum input tokens (8,700 of the 8,719 limit)
- Instructs the model to perform a high degree of research across the web
- Instructs the model to use information as fresh as today; this supports things like capturing brand-new announcements (such as new GPT models) and querying for them in research. 🔥

View File

@@ -0,0 +1,9 @@
---
'task-master-ai': patch
---
Adds a `models` CLI and MCP command to get the current model configuration and available models, and to set the main/research/fallback models.
- In the CLI, `task-master models` shows the current model config. The `--setup` flag launches an interactive setup that lets you easily select the models to use for each of the three roles. Press `q` during the interactive setup to cancel it.
- In MCP, responses are simplified into a RESTful format (instead of the full CLI output). The agent can use the `models` tool with different arguments, including `listAvailableModels` to get available models. Run without arguments, it returns the current configuration; arguments are available to set the model for each of the three roles. This lets you manage Taskmaster AI providers and models directly from the CLI, the MCP, or both.
- Updated the CLI help menu shown when you run `task-master` to include missing commands and `.taskmasterconfig` information.
- Adds `--research` flag to `add-task` so you can hit up Perplexity right from the add-task flow, rather than having to add a task and then update it.

View File

@@ -1,17 +1,18 @@
 {
   "mcpServers": {
-    "taskmaster-ai": {
+    "task-master-ai": {
       "command": "node",
       "args": ["./mcp-server/server.js"],
       "env": {
-        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
-        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
-        "MODEL": "claude-3-7-sonnet-20250219",
-        "PERPLEXITY_MODEL": "sonar-pro",
-        "MAX_TOKENS": 64000,
-        "TEMPERATURE": 0.2,
-        "DEFAULT_SUBTASKS": 5,
-        "DEFAULT_PRIORITY": "medium"
+        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
+        "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
+        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
+        "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
+        "XAI_API_KEY": "XAI_API_KEY_HERE",
+        "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
+        "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
+        "AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
+        "OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
       }
     }
   }
 }

View File

@@ -0,0 +1,155 @@
---
description: Guidelines for managing Task Master AI providers and models.
globs:
alwaysApply: false
---
# Task Master AI Provider Management
This rule guides AI assistants on how to view, configure, and interact with the different AI providers and models supported by Task Master. For internal implementation details of the service layer, see [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc).
- **Primary Interaction:**
- Use the `models` MCP tool or the `task-master models` CLI command to manage AI configurations. See [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for detailed command/tool usage.
- **Configuration Roles:**
- Task Master uses three roles for AI models:
- `main`: Primary model for general tasks (generation, updates).
- `research`: Model used when the `--research` flag or `research: true` parameter is used (typically models with web access or specialized knowledge).
- `fallback`: Model used if the primary (`main`) model fails.
- Each role is configured with a specific `provider:modelId` pair (e.g., `openai:gpt-4o`).
- **Viewing Configuration & Available Models:**
- To see the current model assignments for each role and list all models available for assignment:
- **MCP Tool:** `models` (call with no arguments or `listAvailableModels: true`)
- **CLI Command:** `task-master models`
- The output will show currently assigned models and a list of others, prefixed with their provider (e.g., `google:gemini-2.5-pro-exp-03-25`).
- **Setting Models for Roles:**
- To assign a model to a role:
- **MCP Tool:** `models` with `setMain`, `setResearch`, or `setFallback` parameters.
- **CLI Command:** `task-master models` with `--set-main`, `--set-research`, or `--set-fallback` flags.
- **Crucially:** When providing the model ID to *set*, **DO NOT include the `provider:` prefix**. Use only the model ID itself.
- ✅ **DO:** `models(setMain='gpt-4o')` or `task-master models --set-main=gpt-4o`
- ❌ **DON'T:** `models(setMain='openai:gpt-4o')` or `task-master models --set-main=openai:gpt-4o`
- The tool/command will automatically determine the provider based on the model ID.
- **Setting Custom Models (Ollama/OpenRouter):**
- To set a model ID not in the internal list for Ollama or OpenRouter:
- **MCP Tool:** Use `models` with `set<Role>` and **also** `ollama: true` or `openrouter: true`.
- Example: `models(setMain='my-custom-ollama-model', ollama=true)`
- Example: `models(setMain='some-openrouter-model', openrouter=true)`
- **CLI Command:** Use `task-master models` with `--set-<role>` and **also** `--ollama` or `--openrouter`.
- Example: `task-master models --set-main=my-custom-ollama-model --ollama`
- Example: `task-master models --set-main=some-openrouter-model --openrouter`
- **Interactive Setup:** Use `task-master models --setup` and select the `Ollama (Enter Custom ID)` or `OpenRouter (Enter Custom ID)` options.
- **OpenRouter Validation:** When setting a custom OpenRouter model, Taskmaster attempts to validate the ID against the live OpenRouter API.
- **Ollama:** No live validation occurs for custom Ollama models; ensure the model is available on your Ollama server.
- **Supported Providers & Required API Keys:**
- Task Master integrates with various providers via the Vercel AI SDK.
- **API keys are essential** for most providers and must be configured correctly.
- **Key Locations** (See [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) - Configuration Management):
- **MCP/Cursor:** Set keys in the `env` section of `.cursor/mcp.json`.
- **CLI:** Set keys in a `.env` file in the project root.
- **Provider List & Keys:**
- **`anthropic`**: Requires `ANTHROPIC_API_KEY`.
- **`google`**: Requires `GOOGLE_API_KEY`.
- **`openai`**: Requires `OPENAI_API_KEY`.
- **`perplexity`**: Requires `PERPLEXITY_API_KEY`.
- **`xai`**: Requires `XAI_API_KEY`.
- **`mistral`**: Requires `MISTRAL_API_KEY`.
- **`azure`**: Requires `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT`.
- **`openrouter`**: Requires `OPENROUTER_API_KEY`.
- **`ollama`**: Typically requires no API key (`OLLAMA_API_KEY` is not currently supported); relies on `OLLAMA_BASE_URL` (default: `http://localhost:11434/api`). *Check your specific setup.*
- **Troubleshooting:**
- If AI commands fail (especially in MCP context):
1. **Verify API Key:** Ensure the correct API key for the *selected provider* (check `models` output) exists in the appropriate location (`.cursor/mcp.json` env or `.env`).
2. **Check Model ID:** Ensure the model ID set for the role is valid (use the `models` tool with `listAvailableModels: true`, or `task-master models`).
3. **Provider Status:** Check the status of the external AI provider's service.
4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server.
## Adding a New AI Provider (Vercel AI SDK Method)
Follow these steps to integrate a new AI provider that has an official Vercel AI SDK adapter (`@ai-sdk/<provider>`):
1. **Install Dependency:**
- Install the provider-specific package:
```bash
npm install @ai-sdk/<provider-name>
```
2. **Create Provider Module:**
- Create a new file in `src/ai-providers/` named `<provider-name>.js`.
- Use existing modules (`openai.js`, `anthropic.js`, etc.) as a template.
- **Import:**
- Import the provider's `create<ProviderName>` function from `@ai-sdk/<provider-name>`.
- Import `generateText`, `streamText`, `generateObject` from the core `ai` package.
- Import the `log` utility from `../../scripts/modules/utils.js`.
- **Implement Core Functions:**
- `generate<ProviderName>Text(params)`:
- Accepts `params` (apiKey, modelId, messages, etc.).
- Instantiate the client: `const client = create<ProviderName>({ apiKey });`
- Call `generateText({ model: client(modelId), ... })`.
- Return `result.text`.
- Include basic validation and try/catch error handling.
- `stream<ProviderName>Text(params)`:
- Similar structure to `generateText`.
- Call `streamText({ model: client(modelId), ... })`.
- Return the full stream result object.
- Include basic validation and try/catch.
- `generate<ProviderName>Object(params)`:
- Similar structure.
- Call `generateObject({ model: client(modelId), schema, messages, ... })`.
- Return `result.object`.
- Include basic validation and try/catch.
- **Export Functions:** Export the three implemented functions (`generate<ProviderName>Text`, `stream<ProviderName>Text`, `generate<ProviderName>Object`).
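For orientation, a minimal sketch of such a module is shown below for a hypothetical `acme` provider. The `@ai-sdk/acme` package, the `createAcme` factory, and the exact parameter set are placeholders; follow the naming of the real adapter you install.
```javascript
// src/ai-providers/acme.js — illustrative sketch only ("acme" is a placeholder provider)
import { createAcme } from '@ai-sdk/acme'; // hypothetical adapter package
import { generateText, streamText, generateObject } from 'ai';
import { log } from '../../scripts/modules/utils.js';

export async function generateAcmeText({ apiKey, modelId, messages, maxTokens, temperature }) {
  if (!apiKey) throw new Error('Acme API key is required.');
  try {
    const client = createAcme({ apiKey });
    const result = await generateText({ model: client(modelId), messages, maxTokens, temperature });
    return result.text;
  } catch (error) {
    log('error', `Acme generateText failed: ${error.message}`);
    throw error;
  }
}

export async function streamAcmeText({ apiKey, modelId, messages, maxTokens, temperature }) {
  if (!apiKey) throw new Error('Acme API key is required.');
  try {
    const client = createAcme({ apiKey });
    // Return the full stream result object so callers can consume the stream
    return await streamText({ model: client(modelId), messages, maxTokens, temperature });
  } catch (error) {
    log('error', `Acme streamText failed: ${error.message}`);
    throw error;
  }
}

export async function generateAcmeObject({ apiKey, modelId, messages, schema, objectName, maxTokens, temperature }) {
  if (!apiKey) throw new Error('Acme API key is required.');
  try {
    const client = createAcme({ apiKey });
    const result = await generateObject({ model: client(modelId), schema, messages, maxTokens, temperature });
    return result.object;
  } catch (error) {
    log('error', `Acme generateObject failed (${objectName}): ${error.message}`);
    throw error;
  }
}
```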
3. **Integrate with Unified Service:**
- Open `scripts/modules/ai-services-unified.js`.
- **Import:** Add `import * as <providerName> from '../../src/ai-providers/<provider-name>.js';`
- **Map:** Add an entry to the `PROVIDER_FUNCTIONS` map:
```javascript
'<provider-name>': {
generateText: <providerName>.generate<ProviderName>Text,
streamText: <providerName>.stream<ProviderName>Text,
generateObject: <providerName>.generate<ProviderName>Object
},
```
4. **Update Configuration Management:**
- Open `scripts/modules/config-manager.js`.
- **`MODEL_MAP`:** Add the new `<provider-name>` key to the `MODEL_MAP` loaded from `supported-models.json` (or ensure the loading handles new providers dynamically if `supported-models.json` is updated first).
- **`VALID_PROVIDERS`:** Ensure the new `<provider-name>` is included in the `VALID_PROVIDERS` array (this should happen automatically if derived from `MODEL_MAP` keys).
- **API Key Handling:**
- Update the `keyMap` in `_resolveApiKey` and `isApiKeySet` with the correct environment variable name (e.g., `PROVIDER_API_KEY`).
- Add a case for the new provider to the `switch` statement in `getMcpApiKeyStatus`, checking the corresponding key in `mcp.json` against its placeholder value.
- **Ollama Exception:** If adding Ollama or another provider *not* requiring an API key, add a specific check at the beginning of `isApiKeySet` and `getMcpApiKeyStatus` to return `true` immediately for that provider.
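A minimal sketch of these key-handling changes, assuming the structure described above (match the exact shape of `keyMap`, `isApiKeySet`, and `resolveEnvVariable` to the real file; `acme` remains a placeholder):
```javascript
// Illustrative fragment of scripts/modules/config-manager.js (shape assumed from this guide)
import { resolveEnvVariable } from './utils.js';

const keyMap = {
  anthropic: 'ANTHROPIC_API_KEY',
  openai: 'OPENAI_API_KEY',
  // ...other existing providers...
  acme: 'ACME_API_KEY' // new provider's env var name
};

function isApiKeySet(providerName, session = null) {
  if (providerName === 'ollama') return true; // Ollama exception: no API key required
  const envVarName = keyMap[providerName];
  if (!envVarName) return false;
  const value = resolveEnvVariable(envVarName, session); // resolves from .env or session.env
  return Boolean(value && !value.includes('KEY_HERE')); // assumption: treat placeholder values as unset
}
```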
5. **Update Supported Models List:**
- Edit `scripts/modules/supported-models.json`.
- Add a new key for the `<provider-name>`.
- Add an array of model objects under the provider key, each including:
- `id`: The specific model identifier (e.g., `claude-3-opus-20240229`).
- `name`: A user-friendly name (optional).
- `swe_score`, `cost_per_1m_tokens`: (Optional) Add performance/cost data if available.
- `allowed_roles`: An array of roles (`"main"`, `"research"`, `"fallback"`) the model is suitable for.
- `max_tokens`: (Optional but recommended) The maximum token limit for the model.
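An illustrative entry might look like the following (all values are invented; copy the exact shape of `cost_per_1m_tokens` from existing entries):
```json
{
  "acme": [
    {
      "id": "acme-large-latest",
      "name": "Acme Large",
      "swe_score": 0.42,
      "cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
      "allowed_roles": ["main", "fallback"],
      "max_tokens": 8192
    }
  ]
}
```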
6. **Update Environment Examples:**
- Add the new `PROVIDER_API_KEY` to `.env.example`.
- Add the new `PROVIDER_API_KEY` with its placeholder (`YOUR_PROVIDER_API_KEY_HERE`) to the `env` section for `taskmaster-ai` in `.cursor/mcp.json.example` (if it exists) or update instructions.
7. **Add Unit Tests:**
- Create `tests/unit/ai-providers/<provider-name>.test.js`.
- Mock the `@ai-sdk/<provider-name>` module and the core `ai` module functions (`generateText`, `streamText`, `generateObject`).
- Write tests for each exported function (`generate<ProviderName>Text`, etc.) to verify:
- Correct client instantiation.
- Correct parameters passed to the mocked Vercel AI SDK functions.
- Correct handling of results.
- Error handling (missing API key, SDK errors).
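A sketch of such a test, assuming Jest with Babel-style module transforms (adjust the mocking approach to the repo's actual Jest configuration; `acme` remains a placeholder):
```javascript
// tests/unit/ai-providers/acme.test.js — illustrative sketch
import { generateText } from 'ai';
import { createAcme } from '@ai-sdk/acme';
import { generateAcmeText } from '../../../src/ai-providers/acme.js';

jest.mock('ai');
jest.mock('@ai-sdk/acme');

describe('generateAcmeText', () => {
  it('instantiates the client and forwards params to generateText', async () => {
    const fakeModel = {};
    const client = jest.fn(() => fakeModel);
    createAcme.mockReturnValue(client);
    generateText.mockResolvedValue({ text: 'hello' });

    const result = await generateAcmeText({
      apiKey: 'test-key',
      modelId: 'acme-large-latest',
      messages: [{ role: 'user', content: 'hi' }]
    });

    expect(createAcme).toHaveBeenCalledWith({ apiKey: 'test-key' });
    expect(client).toHaveBeenCalledWith('acme-large-latest');
    expect(generateText).toHaveBeenCalledWith(expect.objectContaining({ model: fakeModel }));
    expect(result).toBe('hello');
  });

  it('rejects when the API key is missing', async () => {
    await expect(generateAcmeText({ modelId: 'acme-large-latest' })).rejects.toThrow();
  });
});
```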
8. **Documentation:**
- Update any relevant documentation (like `README.md` or other rules) mentioning supported providers or configuration.
*(Note: For providers **without** an official Vercel AI SDK adapter, the process would involve directly using the provider's own SDK or API within the `src/ai-providers/<provider-name>.js` module and manually constructing responses compatible with the unified service layer, which is significantly more complex.)*

View File

@@ -0,0 +1,101 @@
---
description: Guidelines for interacting with the unified AI service layer.
globs: scripts/modules/ai-services-unified.js, scripts/modules/task-manager/*.js, scripts/modules/commands.js
---
# AI Services Layer Guidelines
This document outlines the architecture and usage patterns for interacting with Large Language Models (LLMs) via Task Master's unified AI service layer (`ai-services-unified.js`). The goal is to centralize configuration, provider selection, API key management, fallback logic, and error handling.
**Core Components:**
* **Configuration (`.taskmasterconfig` & [`config-manager.js`](mdc:scripts/modules/config-manager.js)):**
* Defines the AI provider and model ID for different **roles** (`main`, `research`, `fallback`).
* Stores parameters like `maxTokens` and `temperature` per role.
* Managed via the `task-master models --setup` CLI command (a rough sketch of the file's shape follows this Core Components list).
* [`config-manager.js`](mdc:scripts/modules/config-manager.js) provides **getters** (e.g., `getMainProvider()`, `getParametersForRole()`) to access these settings. Core logic should **only** use these getters for *non-AI related application logic* (e.g., `getDefaultSubtasks`). The unified service fetches necessary AI parameters internally based on the `role`.
* **API keys** are **NOT** stored here; they are resolved via `resolveEnvVariable` (in [`utils.js`](mdc:scripts/modules/utils.js)) from `.env` (for CLI) or the MCP `session.env` object (for MCP calls). See [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc).
* **Unified Service (`ai-services-unified.js`):**
* Exports primary interaction functions: `generateTextService`, `generateObjectService`. (Note: `streamTextService` exists but has known reliability issues with some providers/payloads).
* Contains the core `_unifiedServiceRunner` logic.
* Internally uses `config-manager.js` getters to determine the provider/model/parameters based on the requested `role`.
* Implements the **fallback sequence** (e.g., main -> fallback -> research) if the primary provider/model fails.
* Constructs the `messages` array required by the Vercel AI SDK.
* Implements **retry logic** for specific API errors (`_attemptProviderCallWithRetries`).
* Resolves API keys automatically via `_resolveApiKey` (using `resolveEnvVariable`).
* Maps requests to the correct provider implementation (in `src/ai-providers/`) via `PROVIDER_FUNCTIONS`.
* **Provider Implementations (`src/ai-providers/*.js`):**
* Contain provider-specific wrappers around Vercel AI SDK functions (`generateText`, `generateObject`).
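As a rough sketch, `.taskmasterconfig` might look like the following. The key names here are illustrative (the provider/model IDs are real examples from this PR); generate and edit the file via `task-master models --setup` rather than by hand:
```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-3-7-sonnet-20250219",
      "maxTokens": 64000,
      "temperature": 0.2
    },
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-pro",
      "maxTokens": 8700,
      "temperature": 0.1
    },
    "fallback": {
      "provider": "openai",
      "modelId": "gpt-4o"
    }
  }
}
```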
**Usage Pattern (from Core Logic like `task-manager/*.js`):**
1. **Import Service:** Import `generateTextService` or `generateObjectService` from `../ai-services-unified.js`.
```javascript
// Preferred for most tasks (especially with complex JSON)
import { generateTextService } from '../ai-services-unified.js';
// Use if structured output is reliable for the specific use case
// import { generateObjectService } from '../ai-services-unified.js';
```
2. **Prepare Parameters:** Construct the parameters object for the service call.
* `role`: **Required.** `'main'`, `'research'`, or `'fallback'`. Determines the initial provider/model/parameters used by the unified service.
* `session`: **Required if called from MCP context.** Pass the `session` object received by the direct function wrapper. The unified service uses `session.env` to find API keys.
* `systemPrompt`: Your system instruction string.
* `prompt`: The user message string (can be long, include stringified data, etc.).
* (For `generateObjectService` only): `schema` (Zod schema), `objectName`.
3. **Call Service:** Use `await` to call the service function.
```javascript
// Example using generateTextService (most common)
try {
const resultText = await generateTextService({
role: useResearch ? 'research' : 'main', // Determine role based on logic
session: context.session, // Pass session from context object
systemPrompt: "You are...",
prompt: userMessageContent
});
// Process the raw text response (e.g., parse JSON, use directly)
// ...
} catch (error) {
// Handle errors thrown by the unified service (if all fallbacks/retries fail)
report('error', `Unified AI service call failed: ${error.message}`);
throw error;
}
// Example using generateObjectService (use cautiously)
try {
const resultObject = await generateObjectService({
role: 'main',
session: context.session,
schema: myZodSchema,
objectName: 'myDataObject',
systemPrompt: "You are...",
prompt: userMessageContent
});
// resultObject is already a validated JS object
// ...
} catch (error) {
report('error', `Unified AI service call failed: ${error.message}`);
throw error;
}
```
4. **Handle Results/Errors:** Process the returned text/object or handle errors thrown by the unified service layer.
**Key Implementation Rules & Gotchas:**
* ✅ **DO**: Centralize **all** LLM calls through `generateTextService` or `generateObjectService`.
* ✅ **DO**: Determine the appropriate `role` (`main`, `research`, `fallback`) in your core logic and pass it to the service.
* ✅ **DO**: Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context.
* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP).
* ✅ **DO**: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
* ✅ **DO**: Use `generateTextService` and implement robust manual JSON parsing (with Zod validation *after* parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas. A sketch of this pattern follows this list.
* ❌ **DON'T**: Import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
* ❌ **DON'T**: Initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (`task-manager/`) or MCP direct functions.
* ❌ **DON'T**: Fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters *for the AI call*. Pass the `role` instead.
* ❌ **DON'T**: Implement fallback or retry logic outside `ai-services-unified.js`.
* ❌ **DON'T**: Handle API key resolution outside the service layer (it uses `utils.js` internally).
* ⚠️ **generateObjectService Caution**: Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.
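A minimal sketch of that recommended `generateTextService` + manual parsing pattern (the schema and the fence-stripping step are illustrative):
```javascript
import { z } from 'zod';
import { generateTextService } from '../ai-services-unified.js';

const subtaskSchema = z.object({ title: z.string(), description: z.string() }); // illustrative

const resultText = await generateTextService({
  role: 'main',
  session: context.session,
  systemPrompt: 'Respond ONLY with a JSON array of subtasks.',
  prompt: userMessageContent
});

// Strip any markdown fences the model may have added, then parse and validate manually
const cleaned = resultText.replace(/```json|```/g, '').trim();
const subtasks = z.array(subtaskSchema).parse(JSON.parse(cleaned)); // throws if the shape is wrong
```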

View File

@@ -3,7 +3,6 @@ description: Describes the high-level architecture of the Task Master CLI applic
globs: scripts/modules/*.js
alwaysApply: false
---
# Application Architecture Overview
- **Modular Structure**: The Task Master CLI is built using a modular architecture, with distinct modules responsible for different aspects of the application. This promotes separation of concerns, maintainability, and testability.
@@ -14,161 +13,73 @@ alwaysApply: false
- **Purpose**: Defines and registers all CLI commands using Commander.js.
- **Responsibilities** (See also: [`commands.mdc`](mdc:.cursor/rules/commands.mdc)):
- Parses command-line arguments and options.
- Invokes appropriate functions from other modules to execute commands (e.g., calls `initializeProject` from `init.js` for the `init` command).
- Handles user input and output related to command execution.
- Implements input validation and error handling for CLI commands.
- **Key Components**:
- `programInstance` (Commander.js `Command` instance): Manages command definitions.
- `registerCommands(programInstance)`: Function to register all application commands.
- Command action handlers: Functions executed when a specific command is invoked, delegating to core modules.
- Invokes appropriate core logic functions from `scripts/modules/`.
- Handles user input/output for CLI.
- Implements CLI-specific validation.
- **[`task-manager.js`](mdc:scripts/modules/task-manager.js): Task Data Management**
- **Purpose**: Manages task data, including loading, saving, creating, updating, deleting, and querying tasks.
- **[`task-manager.js`](mdc:scripts/modules/task-manager.js) & `task-manager/` directory: Task Data & Core Logic**
- **Purpose**: Contains core functions for task data manipulation (CRUD), AI interactions, and related logic.
- **Responsibilities**:
- Reads and writes task data to `tasks.json` file.
- Implements functions for task CRUD operations (Create, Read, Update, Delete).
- Handles task parsing from PRD documents using AI.
- Manages task expansion and subtask generation.
- Updates task statuses and properties.
- Implements task listing and display logic.
- Performs task complexity analysis using AI.
- **Key Functions**:
- `readTasks(tasksPath)` / `writeTasks(tasksPath, tasksData)`: Load and save task data.
- `parsePRD(prdFilePath, outputPath, numTasks)`: Parses PRD document to create tasks.
- `expandTask(taskId, numSubtasks, useResearch, prompt, force)`: Expands a task into subtasks.
- `setTaskStatus(tasksPath, taskIdInput, newStatus)`: Updates task status.
- `listTasks(tasksPath, statusFilter, withSubtasks)`: Lists tasks with filtering and subtask display options.
- `analyzeComplexity(tasksPath, reportPath, useResearch, thresholdScore)`: Analyzes task complexity.
- Reading/writing `tasks.json`.
- Implementing functions for task CRUD, parsing PRDs, expanding tasks, updating status, etc.
- **Delegating AI interactions** to the `ai-services-unified.js` layer.
- Accessing non-AI configuration via `config-manager.js` getters.
- **Key Files**: Individual files within `scripts/modules/task-manager/` handle specific actions (e.g., `add-task.js`, `expand-task.js`).
- **[`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js): Dependency Management**
- **Purpose**: Manages task dependencies, including adding, removing, validating, and fixing dependency relationships.
- **Responsibilities**:
- Adds and removes task dependencies.
- Validates dependency relationships to prevent circular dependencies and invalid references.
- Fixes invalid dependencies by removing non-existent or self-referential dependencies.
- Provides functions to check for circular dependencies.
- **Key Functions**:
- `addDependency(tasksPath, taskId, dependencyId)`: Adds a dependency between tasks.
- `removeDependency(tasksPath, taskId, dependencyId)`: Removes a dependency.
- `validateDependencies(tasksPath)`: Validates task dependencies.
- `fixDependencies(tasksPath)`: Fixes invalid task dependencies.
- `isCircularDependency(tasks, taskId, dependencyChain)`: Detects circular dependencies.
- **Purpose**: Manages task dependencies.
- **Responsibilities**: Add/remove/validate/fix dependencies.
- **[`ui.js`](mdc:scripts/modules/ui.js): User Interface Components**
- **Purpose**: Handles all user interface elements, including displaying information, formatting output, and providing user feedback.
- **Responsibilities**:
- Displays task lists, task details, and command outputs in a formatted way.
- Uses `chalk` for colored output and `boxen` for boxed messages.
- Implements table display using `cli-table3`.
- Shows loading indicators using `ora`.
- Provides helper functions for status formatting, dependency display, and progress reporting.
- Suggests next actions to the user after command execution.
- **Key Functions**:
- `displayTaskList(tasks, statusFilter, withSubtasks)`: Displays a list of tasks in a table.
- `displayTaskDetails(task)`: Displays detailed information for a single task.
- `displayComplexityReport(reportPath)`: Displays the task complexity report.
- `startLoadingIndicator(message)` / `stopLoadingIndicator(indicator)`: Manages loading indicators.
- `getStatusWithColor(status)`: Returns status string with color formatting.
- `formatDependenciesWithStatus(dependencies, allTasks, inTable)`: Formats dependency list with status indicators.
- **Purpose**: Handles CLI output formatting (tables, colors, boxes, spinners).
- **Responsibilities**: Displaying tasks, reports, progress, suggestions.
- **[`ai-services.js`](mdc:scripts/modules/ai-services.js) (Conceptual): AI Integration**
- **Purpose**: Abstracts interactions with AI models (like Anthropic Claude and Perplexity AI) for various features. *Note: This module might be implicitly implemented within `task-manager.js` and `utils.js` or could be explicitly created for better organization as the project evolves.*
- **Responsibilities**:
- Handles API calls to AI services.
- Manages prompts and parameters for AI requests.
- Parses AI responses and extracts relevant information.
- Implements logic for task complexity analysis, task expansion, and PRD parsing using AI.
- **Potential Functions**:
- `getAIResponse(prompt, model, maxTokens, temperature)`: Generic function to interact with AI model.
- `analyzeTaskComplexityWithAI(taskDescription)`: Sends task description to AI for complexity analysis.
- `expandTaskWithAI(taskDescription, numSubtasks, researchContext)`: Generates subtasks using AI.
- `parsePRDWithAI(prdContent)`: Extracts tasks from PRD content using AI.
- **[`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js): Unified AI Service Layer**
- **Purpose**: Centralized interface for all LLM interactions using Vercel AI SDK.
- **Responsibilities** (See also: [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc)):
- Exports `generateTextService`, `generateObjectService`.
- Handles provider/model selection based on `role` and `.taskmasterconfig`.
- Resolves API keys (from `.env` or `session.env`).
- Implements fallback and retry logic.
- Orchestrates calls to provider-specific implementations (`src/ai-providers/`).
- **[`utils.js`](mdc:scripts/modules/utils.js): Utility Functions and Configuration**
- **Purpose**: Provides reusable utility functions and global configuration settings used across the **CLI application**.
- **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations**
- **Purpose**: Provider-specific wrappers for Vercel AI SDK functions.
- **Responsibilities**: Interact directly with Vercel AI SDK adapters.
- **[`config-manager.js`](mdc:scripts/modules/config-manager.js): Configuration Management**
- **Purpose**: Loads, validates, and provides access to configuration.
- **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)):
- Manages global configuration settings loaded from environment variables and defaults.
- Implements logging utility with different log levels and output formatting.
- Provides file system operation utilities (read/write JSON files).
- Includes string manipulation utilities (e.g., `truncate`, `sanitizePrompt`).
- Offers task-specific utility functions (e.g., `formatTaskId`, `findTaskById`, `taskExists`).
- Implements graph algorithms like cycle detection for dependency management.
- **Silent Mode Control**: Provides `enableSilentMode` and `disableSilentMode` functions to control log output.
- **Key Components**:
- `CONFIG`: Global configuration object.
- `log(level, ...args)`: Logging function.
- `readJSON(filepath)` / `writeJSON(filepath, data)`: File I/O utilities for JSON files.
- `truncate(text, maxLength)`: String truncation utility.
- `formatTaskId(id)` / `findTaskById(tasks, taskId)`: Task ID and search utilities.
- `findCycles(subtaskId, dependencyMap)`: Cycle detection algorithm.
- `enableSilentMode()` / `disableSilentMode()`: Control console logging output.
- Reads and merges `.taskmasterconfig` with defaults.
- Provides getters (e.g., `getMainProvider`, `getLogLevel`, `getDefaultSubtasks`) for accessing settings.
- **Note**: Does **not** store or directly handle API keys (keys are in `.env` or MCP `session.env`).
- **[`utils.js`](mdc:scripts/modules/utils.js): Core Utility Functions**
- **Purpose**: Low-level, reusable CLI utilities.
- **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)):
- Logging (`log` function), File I/O (`readJSON`, `writeJSON`), String utils (`truncate`).
- Task utils (`findTaskById`), Dependency utils (`findCycles`).
- API Key Resolution (`resolveEnvVariable`).
- Silent Mode Control (`enableSilentMode`, `disableSilentMode`).
- **[`mcp-server/`](mdc:mcp-server/): MCP Server Integration**
- **Purpose**: Provides an MCP (Model Context Protocol) interface for Task Master, allowing integration with external tools like Cursor. Uses FastMCP framework.
- **Purpose**: Provides MCP interface using FastMCP.
- **Responsibilities** (See also: [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)):
- Registers Task Master functionalities as tools consumable via MCP.
- Handles MCP requests via tool `execute` methods defined in `mcp-server/src/tools/*.js`.
- Tool `execute` methods call corresponding **direct function wrappers**.
- Tool `execute` methods use `getProjectRootFromSession` (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) to determine the project root from the client session and pass it to the direct function.
- **Direct function wrappers (`*Direct` functions in `mcp-server/src/core/direct-functions/*.js`) contain the main logic for handling MCP requests**, including path resolution, argument validation, caching, and calling core Task Master functions.
- Direct functions use `findTasksJsonPath` (from [`core/utils/path-utils.js`](mdc:mcp-server/src/core/utils/path-utils.js)) to locate `tasks.json` based on the provided `projectRoot`.
- **Silent Mode Implementation**: Direct functions use `enableSilentMode` and `disableSilentMode` to prevent logs from interfering with JSON responses.
- **Async Operations**: Uses `AsyncOperationManager` to handle long-running operations in the background.
- **Project Initialization**: Provides `initialize_project` command for setting up new projects from within integrated clients.
- Tool `execute` methods use `handleApiResult` from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) to process the result from the direct function and format the final MCP response.
- Uses CLI execution via `executeTaskMasterCommand` as a fallback only when necessary.
- **Implements Robust Path Finding**: The utility [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) (specifically `getProjectRootFromSession`) and [`core/utils/path-utils.js`](mdc:mcp-server/src/core/utils/path-utils.js) (specifically `findTasksJsonPath`) work together. The tool gets the root via session, passes it to the direct function, which uses `findTasksJsonPath` to locate the specific `tasks.json` file within that root.
- **Implements Caching**: Utilizes a caching layer (`ContextManager` with `lru-cache`). Caching logic is invoked *within* the direct function wrappers using the `getCachedOrExecute` utility for performance-sensitive read operations.
- Standardizes response formatting and data filtering using utilities in [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js).
- **Resource Management**: Provides access to static and dynamic resources.
- **Key Components**:
- `mcp-server/src/index.js`: Main server class definition with FastMCP initialization, resource registration, and server lifecycle management.
- `mcp-server/src/server.js`: Main server setup and initialization.
- `mcp-server/src/tools/`: Directory containing individual tool definitions. Each tool's `execute` method orchestrates the call to core logic and handles the response.
- `mcp-server/src/tools/utils.js`: Provides MCP-specific utilities like `handleApiResult`, `processMCPResponseData`, `getCachedOrExecute`, and **`getProjectRootFromSession`**.
- `mcp-server/src/core/utils/`: Directory containing utility functions specific to the MCP server, like **`path-utils.js` for resolving `tasks.json` within a given root** and **`async-manager.js` for handling background operations**.
- `mcp-server/src/core/direct-functions/`: Directory containing individual files for each **direct function wrapper (`*Direct`)**. These files contain the primary logic for MCP tool execution.
- `mcp-server/src/core/resources/`: Directory containing resource handlers for task templates, workflow definitions, and other static/dynamic data exposed to LLM clients.
- [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js): Acts as an import/export hub, collecting and exporting direct functions from the `direct-functions` directory and MCP utility functions.
- **Naming Conventions**:
- **Files** use **kebab-case**: `list-tasks.js`, `set-task-status.js`, `parse-prd.js`
- **Direct Functions** use **camelCase** with `Direct` suffix: `listTasksDirect`, `setTaskStatusDirect`, `parsePRDDirect`
- **Tool Registration Functions** use **camelCase** with `Tool` suffix: `registerListTasksTool`, `registerSetTaskStatusTool`
- **MCP Tool Names** use **snake_case**: `list_tasks`, `set_task_status`, `parse_prd_document`
- **Resource Handlers** use **camelCase** with pattern URI: `@mcp.resource("tasks://templates/{template_id}")`
- **AsyncOperationManager**:
- **Purpose**: Manages background execution of long-running operations.
- **Location**: `mcp-server/src/core/utils/async-manager.js`
- **Key Features**:
- Operation tracking with unique IDs using UUID
- Status management (pending, running, completed, failed)
- Progress reporting forwarded from background tasks
- Operation history with automatic cleanup of completed operations
- Context preservation (log, session, reportProgress)
- Robust error handling for background tasks
- **Usage**: Used for CPU-intensive operations like task expansion and PRD parsing
- Registers tools (`mcp-server/src/tools/*.js`).
- Tool `execute` methods call **direct function wrappers** (`mcp-server/src/core/direct-functions/*.js`).
- Direct functions use path utilities (`mcp-server/src/core/utils/`) to resolve paths based on `projectRoot` from session.
- Direct functions implement silent mode, logger wrappers, and call core logic functions from `scripts/modules/`.
- Manages MCP caching and response formatting.
- **[`init.js`](mdc:scripts/init.js): Project Initialization Logic**
- **Purpose**: Contains the core logic for setting up a new Task Master project structure.
- **Responsibilities**:
- Creates necessary directories (`.cursor/rules`, `scripts`, `tasks`).
- Copies template files (`.env.example`, `.gitignore`, rule files, `dev.js`, etc.).
- Creates or merges `package.json` with required dependencies and scripts.
- Sets up MCP configuration (`.cursor/mcp.json`).
- Optionally initializes a git repository and installs dependencies.
- Handles user prompts for project details *if* called without skip flags (`-y`).
- **Key Function**:
- `initializeProject(options)`: The main function exported and called by the `init` command's action handler in [`commands.js`](mdc:scripts/modules/commands.js). It receives parsed options directly.
- **Note**: This script is used as a module and no longer handles its own argument parsing or direct execution via a separate `bin` file.
- **Purpose**: Sets up new Task Master project structure.
- **Responsibilities**: Creates directories, copies templates, manages `package.json`, sets up `.cursor/mcp.json`.
- **Data Flow and Module Dependencies**:
- **Data Flow and Module Dependencies (Updated)**:
- **Commands Initiate Actions**: User commands entered via the CLI (parsed by `commander` based on definitions in [`commands.js`](mdc:scripts/modules/commands.js)) are the entry points for most operations.
- **Command Handlers Delegate to Core Logic**: Action handlers within [`commands.js`](mdc:scripts/modules/commands.js) call functions in core modules like [`task-manager.js`](mdc:scripts/modules/task-manager.js), [`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js), and [`init.js`](mdc:scripts/init.js) (for the `init` command) to perform the actual work.
- **UI for Presentation**: [`ui.js`](mdc:scripts/modules/ui.js) is used by command handlers and task/dependency managers to display information to the user. UI functions primarily consume data and format it for output, without modifying core application state.
- **Utilities for Common Tasks**: [`utils.js`](mdc:scripts/modules/utils.js) provides helper functions used by all other modules for configuration, logging, file operations, and common data manipulations.
- **AI Services Integration**: AI functionalities (complexity analysis, task expansion, PRD parsing) are invoked from [`task-manager.js`](mdc:scripts/modules/task-manager.js) and potentially [`commands.js`](mdc:scripts/modules/commands.js), likely using functions that would reside in a dedicated `ai-services.js` module or be integrated within `utils.js` or `task-manager.js`.
- **MCP Server Interaction**: External tools interact with the `mcp-server`. MCP Tool `execute` methods use `getProjectRootFromSession` to find the project root, then call direct function wrappers (in `mcp-server/src/core/direct-functions/`) passing the root in `args`. These wrappers handle path finding for `tasks.json` (using `path-utils.js`), validation, caching, call the core logic from `scripts/modules/` (passing logging context via the standard wrapper pattern detailed in mcp.mdc), and return a standardized result. The final MCP response is formatted by `mcp-server/src/tools/utils.js`. See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details.
- **CLI**: `bin/task-master.js` -> `scripts/dev.js` (loads `.env`) -> `scripts/modules/commands.js` -> Core Logic (`scripts/modules/*`) -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API.
- **MCP**: External Tool -> `mcp-server/server.js` -> Tool (`mcp-server/src/tools/*`) -> Direct Function (`mcp-server/src/core/direct-functions/*`) -> Core Logic (`scripts/modules/*`) -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API.
- **Configuration**: Core logic needing non-AI settings calls `config-manager.js` getters (passing `session.env` via `explicitRoot` if from MCP). Unified AI Service internally calls `config-manager.js` getters (using `role`) for AI params and `utils.js` (`resolveEnvVariable` with `session.env`) for API keys.
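For illustration, a direct function wrapper in this flow typically has the following shape. This is a simplified sketch; the import paths, the `findTasksJsonPath` signature, and the core-logic call are assumptions to be checked against the real files:
```javascript
// mcp-server/src/core/direct-functions/list-tasks.js (simplified sketch)
import { enableSilentMode, disableSilentMode } from '../../../../scripts/modules/utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
import { listTasks } from '../../../../scripts/modules/task-manager.js'; // assumed import shape

export async function listTasksDirect(args, log) {
  try {
    const tasksPath = findTasksJsonPath(args, log); // resolve tasks.json from args.projectRoot
    enableSilentMode(); // keep console logs out of the JSON response
    const result = listTasks(tasksPath, args.status, args.withSubtasks);
    return { success: true, data: result };
  } catch (error) {
    return { success: false, error: { code: 'LIST_TASKS_ERROR', message: error.message } };
  } finally {
    disableSilentMode(); // always restore logging
  }
}
```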
## Silent Mode Implementation Pattern in MCP Direct Functions
@@ -366,19 +277,8 @@ The `initialize_project` command provides a way to set up a new Task Master proj
- Configures project metadata (name, description, version)
- Handles shell alias creation if requested
- Works in both interactive and non-interactive modes
## Async Operation Management
The AsyncOperationManager provides background task execution capabilities:
- **Location**: `mcp-server/src/core/utils/async-manager.js`
- **Key Components**:
- `asyncOperationManager` singleton instance
- `addOperation(operationFn, args, context)` method
- `getStatus(operationId)` method
- **Usage Flow**:
1. Client calls an MCP tool that may take time to complete
2. Tool uses AsyncOperationManager to run the operation in background
3. Tool returns immediate response with operation ID
4. Client polls `get_operation_status` tool with the ID
5. Once completed, client can access operation results
- Creates necessary directories and files for a new project
- Sets up `tasks.json` and initial task files
- Configures project metadata (name, description, version)
- Handles shell alias creation if requested
- Works in both interactive and non-interactive modes

View File

@@ -34,8 +34,8 @@ While this document details the implementation of Task Master's **CLI commands**
- **Command Handler Organization**:
- ✅ DO: Keep action handlers concise and focused
- ✅ DO: Extract core functionality to appropriate modules
- ✅ DO: Have the action handler import and call the relevant function(s) from core modules (e.g., `task-manager.js`, `init.js`), passing the parsed `options`.
- ✅ DO: Perform basic parameter validation (e.g., checking for required options) within the action handler or at the start of the called core function.
- ✅ DO: Have the action handler import and call the relevant functions from core modules, like `task-manager.js` or `init.js`, passing the parsed `options`.
- ✅ DO: Perform basic parameter validation, such as checking for required options, within the action handler or at the start of the called core function.
- ❌ DON'T: Implement business logic in command handlers
## Best Practices for Removal/Delete Commands
@@ -44,7 +44,7 @@ When implementing commands that delete or remove data (like `remove-task` or `re
- **Confirmation Prompts**:
- ✅ **DO**: Include a confirmation prompt by default for destructive operations
- ✅ **DO**: Provide a `--yes` or `-y` flag to skip confirmation for scripting/automation
- ✅ **DO**: Provide a `--yes` or `-y` flag to skip confirmation, useful for scripting or automation
- ✅ **DO**: Show what will be deleted in the confirmation message
- ❌ **DON'T**: Perform destructive operations without user confirmation unless explicitly overridden
@@ -78,7 +78,7 @@ When implementing commands that delete or remove data (like `remove-task` or `re
- **File Path Handling**:
- ✅ **DO**: Use `path.join()` to construct file paths
- ✅ **DO**: Follow established naming conventions for tasks (e.g., `task_001.txt`)
- ✅ **DO**: Follow established naming conventions for tasks, like `task_001.txt`
- ✅ **DO**: Check if files exist before attempting to delete them
- ✅ **DO**: Handle file deletion errors gracefully
- ❌ **DON'T**: Construct paths with string concatenation
@@ -166,10 +166,10 @@ When implementing commands that delete or remove data (like `remove-task` or `re
- ✅ DO: Use descriptive, action-oriented names
- **Option Names**:
- ✅ DO: Use kebab-case for long-form option names (`--output-format`)
- ✅ DO: Provide single-letter shortcuts when appropriate (`-f, --file`)
- ✅ DO: Use kebab-case for long-form option names, like `--output-format`
- ✅ DO: Provide single-letter shortcuts when appropriate, like `-f, --file`
- ✅ DO: Use consistent option names across similar commands
- ❌ DON'T: Use different names for the same concept (`--file` in one command, `--path` in another)
- ❌ DON'T: Use different names for the same concept, such as `--file` in one command and `--path` in another
```javascript
// ✅ DO: Use consistent option naming
@@ -181,7 +181,7 @@ When implementing commands that delete or remove data (like `remove-task` or `re
.option('-p, --path <dir>', 'Output directory') // Should be --output
```
> **Note**: Although options are defined with kebab-case (`--num-tasks`), Commander.js stores them internally as camelCase properties. Access them in code as `options.numTasks`, not `options['num-tasks']`.
> **Note**: Although options are defined with kebab-case, like `--num-tasks`, Commander.js stores them internally as camelCase properties. Access them in code as `options.numTasks`, not `options['num-tasks']`.
- **Boolean Flag Conventions**:
- ✅ DO: Use positive flags with `--skip-` prefix for disabling behavior
@@ -210,7 +210,7 @@ When implementing commands that delete or remove data (like `remove-task` or `re
- **Required Parameters**:
- ✅ DO: Check that required parameters are provided
- ✅ DO: Provide clear error messages when parameters are missing
- ✅ DO: Use early returns with process.exit(1) for validation failures
- ✅ DO: Use early returns with `process.exit(1)` for validation failures
```javascript
// ✅ DO: Validate required parameters early
@@ -221,7 +221,7 @@ When implementing commands that delete or remove data (like `remove-task` or `re
```
- **Parameter Type Conversion**:
- ✅ DO: Convert string inputs to appropriate types (numbers, booleans)
- ✅ DO: Convert string inputs to appropriate types, such as numbers or booleans
- ✅ DO: Handle conversion errors gracefully
```javascript
@@ -254,7 +254,7 @@ When implementing commands that delete or remove data (like `remove-task` or `re
const taskId = parseInt(options.id, 10);
if (isNaN(taskId) || taskId <= 0) {
console.error(chalk.red(`Error: Invalid task ID: ${options.id}. Task ID must be a positive integer.`));
console.log(chalk.yellow('Usage example: task-master update-task --id=\'23\' --prompt=\'Update with new information.\nEnsure proper error handling.\''));
console.log(chalk.yellow("Usage example: task-master update-task --id='23' --prompt='Update with new information.\\nEnsure proper error handling.'"));
process.exit(1);
}
@@ -392,9 +392,9 @@ When implementing commands that delete or remove data (like `remove-task` or `re
process.on('uncaughtException', (err) => {
// Handle Commander-specific errors
if (err.code === 'commander.unknownOption') {
const option = err.message.match(/'([^']+)'/)?.[1];
const option = err.message.match(/'([^']+)'/)?.[1]; // Safely extract option name
console.error(chalk.red(`Error: Unknown option '${option}'`));
console.error(chalk.yellow(`Run 'task-master <command> --help' to see available options`));
console.error(chalk.yellow("Run 'task-master <command> --help' to see available options"));
process.exit(1);
}
@@ -464,9 +464,9 @@ When implementing commands that delete or remove data (like `remove-task` or `re
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-p, --parent <id>', 'ID of the parent task (required)')
.option('-i, --task-id <id>', 'Existing task ID to convert to subtask')
.option('-t, --title <title>', 'Title for the new subtask (when not converting)')
.option('-d, --description <description>', 'Description for the new subtask (when not converting)')
.option('--details <details>', 'Implementation details for the new subtask (when not converting)')
.option('-t, --title <title>', 'Title for the new subtask, required if not converting')
.option('-d, --description <description>', 'Description for the new subtask, optional')
.option('--details <details>', 'Implementation details for the new subtask, optional')
.option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on')
.option('--status <status>', 'Initial status for the subtask', 'pending')
.option('--skip-generate', 'Skip regenerating task files')
@@ -489,8 +489,8 @@ When implementing commands that delete or remove data (like `remove-task` or `re
.command('remove-subtask')
.description('Remove a subtask from its parent task, optionally converting it to a standalone task')
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-i, --id <id>', 'ID of the subtask to remove in format "parentId.subtaskId" (required)')
.option('-c, --convert', 'Convert the subtask to a standalone task')
.option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required')
.option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting')
.option('--skip-generate', 'Skip regenerating task files')
.action(async (options) => {
// Implementation with detailed error handling
@@ -513,7 +513,8 @@ When implementing commands that delete or remove data (like `remove-task` or `re
// ✅ DO: Implement version checking function
async function checkForUpdate() {
// Implementation details...
return { currentVersion, latestVersion, needsUpdate };
// Example return structure:
return { currentVersion, latestVersion, updateAvailable };
}
// ✅ DO: Implement semantic version comparison
@@ -553,7 +554,7 @@ When implementing commands that delete or remove data (like `remove-task` or `re
// After command execution, check if an update is available
const updateInfo = await updateCheckPromise;
if (updateInfo.needsUpdate) {
if (updateInfo.updateAvailable) {
displayUpgradeNotification(updateInfo.currentVersion, updateInfo.latestVersion);
}
} catch (error) {

View File

@@ -3,7 +3,6 @@ description: Guide for using Task Master to manage task-driven development workf
globs: **/*
alwaysApply: true
---
# Task Master Development Workflow
This guide outlines the typical process for using Task Master to manage software development projects.
@@ -29,21 +28,21 @@ Task Master offers two primary ways to interact:
## Standard Development Workflow Process
- Start new projects by running `init` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json
- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json
- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Analyze task complexity with `analyze_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- Clarify tasks by checking task files in tasks/ directory or asking for user input
- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`.
- Clear existing subtasks if needed using `clear_subtasks` / `task-master clear-subtasks --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before regenerating
- Implement code following task details, dependencies, and project standards
- Verify tasks according to test strategies before marking as complete (See [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..." --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Add new subtasks as needed using `add_subtask` / `task-master add-subtask --parent=<id> --title="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Append notes or details to subtasks using `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='Add implementation notes here...\nMore details...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Generate task files with `generate` / `task-master generate` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) after updating tasks.json
@@ -53,29 +52,30 @@ Task Master offers two primary ways to interact:
## Task Complexity Analysis
- Run `analyze_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis
- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis
- Review complexity report via `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for a formatted, readable version.
- Focus on tasks with highest complexity scores (8-10) for detailed breakdown
- Use analysis results to determine appropriate subtask allocation
- Note that reports are automatically used by the `expand` tool/command
- Note that reports are automatically used by the `expand_task` tool/command
## Task Breakdown Process
- For tasks with complexity analysis, use `expand_task` / `task-master expand --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Otherwise use `expand_task` / `task-master expand --id=<id> --num=<number>`
- Add `--research` flag to leverage Perplexity AI for research-backed expansion
- Use `--prompt="<context>"` to provide additional context when needed
- Review and adjust generated subtasks as necessary
- Use `--all` flag with `expand` or `expand_all` to expand multiple pending tasks at once
- If subtasks need regeneration, clear them first with `clear_subtasks` / `task-master clear-subtasks` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates a default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.
## Implementation Drift Handling
- When implementation differs significantly from planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to update a single specific task.
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.
## Task Status Management
@@ -97,28 +97,32 @@ Task Master offers two primary ways to interact:
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to [`tasks.mdc`](mdc:.cursor/rules/tasks.mdc) for more details on the task data structure.
- Refer to task structure details (previously linked to `tasks.mdc`).
## Environment Variables Configuration
## Configuration Management (Updated)
- Task Master behavior is configured via environment variables:
- **ANTHROPIC_API_KEY** (Required): Your Anthropic API key for Claude.
- **MODEL**: Claude model to use (e.g., `claude-3-opus-20240229`).
- **MAX_TOKENS**: Maximum tokens for AI responses.
- **TEMPERATURE**: Temperature for AI model responses.
- **DEBUG**: Enable debug logging (`true`/`false`).
- **LOG_LEVEL**: Console output level (`debug`, `info`, `warn`, `error`).
- **DEFAULT_SUBTASKS**: Default number of subtasks for `expand`.
- **DEFAULT_PRIORITY**: Default priority for new tasks.
- **PROJECT_NAME**: Project name used in metadata.
- **PROJECT_VERSION**: Project version used in metadata.
- **PERPLEXITY_API_KEY**: API key for Perplexity AI (for `--research` flags).
- **PERPLEXITY_MODEL**: Perplexity model to use (e.g., `sonar-medium-online`).
- See [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for default values and examples.
Taskmaster configuration is managed through two main mechanisms:
1. **`.taskmasterconfig` File (Primary):**
* Located in the project root directory.
* Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
* **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing.
* **View/Set specific models via `task-master models` command or `models` MCP tool.**
* Created automatically when you run `task-master models --setup` for the first time.
2. **Environment Variables (`.env` / `mcp.json`):**
* Used **only** for sensitive API keys and specific endpoint URLs.
* Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
* For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json` (see the excerpt below).
* Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`).
**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
**If AI commands FAIL in MCP**, verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.
**If AI commands FAIL in CLI**, verify that the API key for the selected provider is present in the `.env` file in the root of the project.
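For reference, a minimal excerpt of the MCP key location (the server name, `command`/`args`, and key values are illustrative, not prescriptive); in `.env`, the same keys appear as plain `KEY=value` lines:

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "PERPLEXITY_API_KEY": "pplx-..."
      }
    }
  }
}
```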
## Determining the Next Task
- Run `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to show the next task to work on
- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied
- Tasks are prioritized by priority level, dependency count, and ID
- The command shows comprehensive task information including:
@@ -133,7 +137,7 @@ Task Master offers two primary ways to interact:
## Viewing Specific Task Details
- Run `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to view a specific task
- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1)
- Displays comprehensive information similar to the `next` command, but for a specific task
- For parent tasks, shows all subtasks and their current status
@@ -143,8 +147,8 @@ Task Master offers two primary ways to interact:
## Managing Task Dependencies
- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to add a dependency
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to remove a dependency
- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries
- Dependencies are checked for existence before being added or removed
- Task files are automatically regenerated after dependency changes
@@ -164,14 +168,14 @@ Once a task has been broken down into subtasks using `expand_task` or similar me
* Gather *all* relevant details from this exploration phase.
3. **Log the Plan:**
* Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
* Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
* Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`.
4. **Verify the Plan:**
* Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.
5. **Begin Implementation:**
* Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
* Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
* Start coding based on the logged plan.
6. **Refine and Log Progress (Iteration 2+):**
@@ -189,7 +193,7 @@ Once a task has been broken down into subtasks using `expand_task` or similar me
7. **Review & Update Rules (Post-Implementation):**
* Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
* Identify any new or modified code patterns, conventions, or best practices established during the implementation.
* Create new or update existing Cursor rules in the `.cursor/rules/` directory to capture these patterns, following the guidelines in [`cursor_rules.mdc`](mdc:.cursor/rules/cursor_rules.mdc) and [`self_improve.mdc`](mdc:.cursor/rules/self_improve.mdc).
* Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`).
8. **Mark Task Complete:**
* After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.
@@ -198,10 +202,10 @@ Once a task has been broken down into subtasks using `expand_task` or similar me
* Stage the relevant code changes and any updated/new rule files (`git add .`).
* Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
* Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`).
* Consider if a Changeset is needed according to [`changeset.mdc`](mdc:.cursor/rules/changeset.mdc). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.
* Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.
10. **Proceed to Next Subtask:**
* Identify the next subtask in the dependency chain (e.g., using `next_task` / `task-master next`) and repeat this iterative process starting from step 1.
* Identify the next subtask (e.g., using `next_task` / `task-master next`).
## Code Analysis & Refactoring Techniques

View File

@@ -3,7 +3,6 @@ description: Guidelines for implementing and interacting with the Task Master MC
globs: mcp-server/src/**/*, scripts/modules/**/*
alwaysApply: false
---
# Task Master MCP Server Guidelines
This document outlines the architecture and implementation patterns for the Task Master Model Context Protocol (MCP) server, designed for integration with tools like Cursor.
@@ -90,69 +89,54 @@ When implementing a new direct function in `mcp-server/src/core/direct-functions
```
5. **Handling Logging Context (`mcpLog`)**:
- **Requirement**: Core functions that use the internal `report` helper function (common in `task-manager.js`, `dependency-manager.js`, etc.) expect the `options` object to potentially contain an `mcpLog` property. This `mcpLog` object **must** have callable methods for each log level (e.g., `mcpLog.info(...)`, `mcpLog.error(...)`).
- **Challenge**: The `log` object provided by FastMCP to the direct function's context, while functional, might not perfectly match this expected structure or could change in the future. Passing it directly can lead to runtime errors like `mcpLog[level] is not a function`.
- **Solution: The Logger Wrapper Pattern**: To reliably bridge the FastMCP `log` object and the core function's `mcpLog` expectation, use a simple wrapper object within the direct function:
- **Requirement**: Core functions (like those in `task-manager.js`) may accept an `options` object containing an optional `mcpLog` property. If provided, the core function expects this object to have methods like `mcpLog.info(...)`, `mcpLog.error(...)`.
- **Solution: The Logger Wrapper Pattern**: When calling a core function from a direct function, pass the `log` object provided by FastMCP *wrapped* in the standard `logWrapper` object. This ensures the core function receives a logger with the expected method structure.
```javascript
// Standard logWrapper pattern within a Direct Function
const logWrapper = {
info: (message, ...args) => log.info(message, ...args),
warn: (message, ...args) => log.warn(message, ...args),
error: (message, ...args) => log.error(message, ...args),
debug: (message, ...args) => log.debug && log.debug(message, ...args), // Handle optional debug
success: (message, ...args) => log.info(message, ...args) // Map success to info if needed
debug: (message, ...args) => log.debug && log.debug(message, ...args),
success: (message, ...args) => log.info(message, ...args)
};
// ... later when calling the core function ...
await coreFunction(
// ... other arguments ...
tasksPath,
taskId,
{
mcpLog: logWrapper, // Pass the wrapper object
session
session // Also pass session if needed by core logic or AI service
},
'json' // Pass 'json' output format if supported by core function
);
```
- **Critical For JSON Output Format**: Passing the `logWrapper` as `mcpLog` serves a dual purpose:
1. **Prevents Runtime Errors**: It ensures the `mcpLog[level](...)` calls within the core function succeed
2. **Controls Output Format**: In functions like `updateTaskById` and `updateSubtaskById`, the presence of `mcpLog` in the options triggers setting `outputFormat = 'json'` (instead of 'text'). This prevents UI elements (spinners, boxes) from being generated, which would break the JSON response.
- **Proven Solution**: This pattern has successfully fixed multiple issues in our MCP tools (including `update-task` and `update-subtask`), where direct passing of the `log` object or omitting `mcpLog` led to either runtime errors or JSON parsing failures from UI output.
- **When To Use**: Implement this wrapper in any direct function that calls a core function with an `options` object that might use `mcpLog` for logging or output format control.
- **Why it Works**: The `logWrapper` explicitly defines the `.info()`, `.warn()`, `.error()`, etc., methods that the core function's `report` helper needs, ensuring the `mcpLog[level](...)` call succeeds. It simply forwards the logging calls to the actual FastMCP `log` object.
- **Combined with Silent Mode**: Remember that using the `logWrapper` for `mcpLog` is **necessary *in addition* to using `enableSilentMode()` / `disableSilentMode()`** (see next point). The wrapper handles structured logging *within* the core function, while silent mode suppresses direct `console.log` and UI elements (spinners, boxes) that would break the MCP JSON response.
- **JSON Output**: Passing `mcpLog` (via the wrapper) often triggers the core function to use a JSON-friendly output format, suppressing spinners/boxes.
- ✅ **DO**: Implement this pattern in direct functions calling core functions that might use `mcpLog`.
6. **Silent Mode Implementation**:
- ✅ **DO**: Import silent mode utilities at the top: `import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js';`
- ✅ **DO**: Ensure core Task Master functions called from direct functions do **not** pollute `stdout` with console output (banners, spinners, logs) that would break MCP's JSON communication.
- **Preferred**: Modify the core function to accept an `outputFormat: 'json'` parameter and check it internally before printing UI elements. Pass `'json'` from the direct function.
- **Required Fallback/Guarantee**: If the core function cannot be modified or its output suppression is unreliable, **wrap the core function call** within the direct function using `enableSilentMode()` / `disableSilentMode()` in a `try/finally` block. This guarantees no console output interferes with the MCP response.
- ✅ **DO**: Use `isSilentMode()` function to check global silent mode status if needed (rare in direct functions), NEVER access the global `silentMode` variable directly.
- ❌ **DON'T**: Wrap AI client initialization or AI API calls in `enable/disableSilentMode`; their logging is controlled via the `log` object (passed potentially within the `logWrapper` for core functions).
- ❌ **DON'T**: Assume a core function is silent just because it *should* be. Verify or use the `enable/disableSilentMode` wrapper.
- **Example (Direct Function Guaranteeing Silence and using Log Wrapper)**:
- ✅ **DO**: Import silent mode utilities: `import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js';`
- ✅ **DO**: Wrap core function calls *within direct functions* using `enableSilentMode()` / `disableSilentMode()` in a `try/finally` block if the core function might produce console output (spinners, boxes, direct `console.log`) that isn't reliably controlled by passing `{ mcpLog }` or an `outputFormat` parameter.
- ✅ **DO**: Always disable silent mode in the `finally` block.
- ❌ **DON'T**: Wrap calls to the unified AI service (`generateTextService`, `generateObjectService`) in silent mode; their logging is handled internally.
- **Example (Direct Function Guaranteeing Silence & using Log Wrapper)**:
```javascript
export async function coreWrapperDirect(args, log, context = {}) {
const { session } = context;
const tasksPath = findTasksJsonPath(args, log);
// Create the logger wrapper
const logWrapper = { /* ... as defined above ... */ };
const logWrapper = { /* ... */ };
enableSilentMode(); // Ensure silence for direct console output
try {
// Call core function, passing wrapper and 'json' format
const result = await coreFunction(
tasksPath,
args.param1,
{ mcpLog: logWrapper, session },
'json' // Explicitly request JSON format if supported
{ mcpLog: logWrapper, session }, // Pass context
'json' // Request JSON format if supported
);
return { success: true, data: result };
} catch (error) {
log.error(`Error: ${error.message}`);
// Return standardized error object
return { success: false, error: { /* ... */ } };
} finally {
disableSilentMode(); // Critical: Always disable in finally
@@ -163,32 +147,6 @@ When implementing a new direct function in `mcp-server/src/core/direct-functions
7. **Debugging MCP/Core Logic Interaction**:
- ✅ **DO**: If an MCP tool fails with unclear errors (like JSON parsing failures), run the equivalent `task-master` CLI command in the terminal. The CLI often provides more detailed error messages originating from the core logic (e.g., `ReferenceError`, stack traces) that are obscured by the MCP layer.
### Specific Guidelines for AI-Based Direct Functions
Direct functions that interact with AI (e.g., `addTaskDirect`, `expandTaskDirect`) have additional responsibilities:
- **Context Parameter**: These functions receive an additional `context` object as their third parameter. **Critically, this object should only contain `{ session }`**. Do NOT expect or use `reportProgress` from this context.
```javascript
export async function yourAIDirect(args, log, context = {}) {
const { session } = context; // Only expect session
// ...
}
```
- **AI Client Initialization**:
- ✅ **DO**: Use the utilities from [`mcp-server/src/core/utils/ai-client-utils.js`](mdc:mcp-server/src/core/utils/ai-client-utils.js) (e.g., `getAnthropicClientForMCP(session, log)`) to get AI client instances. These correctly use the `session` object to resolve API keys.
- ✅ **DO**: Wrap client initialization in a try/catch block and return a specific `AI_CLIENT_ERROR` on failure.
- **AI Interaction**:
- ✅ **DO**: Build prompts using helper functions where appropriate (e.g., from `ai-prompt-helpers.js`).
- ✅ **DO**: Make the AI API call using appropriate helpers (e.g., `_handleAnthropicStream`). Pass the `log` object to these helpers for internal logging. **Do NOT pass `reportProgress`**.
- ✅ **DO**: Parse the AI response using helpers (e.g., `parseTaskJsonResponse`) and handle parsing errors with a specific code (e.g., `RESPONSE_PARSING_ERROR`).
- **Calling Core Logic**:
- ✅ **DO**: After successful AI interaction, call the relevant core Task Master function (from `scripts/modules/`) if needed (e.g., `addTaskDirect` calls `addTask`).
- ✅ **DO**: Pass necessary data, including potentially the parsed AI results, to the core function.
- ✅ **DO**: If the core function can produce console output, call it with an `outputFormat: 'json'` argument (or similar, depending on the function) to suppress CLI output. Ensure the core function is updated to respect this. Use `enableSilentMode/disableSilentMode` around the core function call as a fallback if `outputFormat` is not supported or insufficient.
- **Progress Indication**:
- ❌ **DON'T**: Call `reportProgress` within the direct function.
- ✅ **DO**: If intermediate progress status is needed *within* the long-running direct function, use standard logging: `log.info('Progress: Processing AI response...')`.
## Tool Definition and Execution
### Tool Structure
@@ -221,21 +179,14 @@ server.addTool({
The `execute` function receives validated arguments and the FastMCP context:
```javascript
// Standard signature
execute: async (args, context) => {
// Tool implementation
}
// Destructured signature (recommended)
execute: async (args, { log, reportProgress, session }) => {
execute: async (args, { log, session }) => {
// Tool implementation
}
```
- **args**: The first parameter contains all the validated parameters defined in the tool's schema.
- **context**: The second parameter is an object containing `{ log, reportProgress, session }` provided by FastMCP.
- ✅ **DO**: Use `{ log, session }` when calling direct functions.
- ⚠️ **WARNING**: Avoid passing `reportProgress` down to direct functions due to client compatibility issues. See Progress Reporting Convention below.
- **args**: Validated parameters.
- **context**: Contains `{ log, session }` from FastMCP (`reportProgress` has been removed). A complete registration is sketched below.
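Putting the pieces together, a registered tool looks roughly like the following (a hedged sketch — the tool name and schema fields are illustrative):

```javascript
import { z } from 'zod';

server.addTool({
  name: 'get_task',
  description: 'Display detailed information for a specific Taskmaster task.',
  parameters: z.object({
    id: z.string().describe('Task or subtask ID, e.g. "15" or "15.2"'),
    projectRoot: z.string().optional().describe('Absolute path to the project root')
  }),
  execute: async (args, { log, session }) => {
    // See the Standard Tool Execution Pattern below
  }
});
```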
### Standard Tool Execution Pattern
@@ -245,11 +196,12 @@ The `execute` method within each MCP tool (in `mcp-server/src/tools/*.js`) shoul
2. **Get Project Root**: Use the `getProjectRootFromSession(session, log)` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) to extract the project root path from the client session. Fall back to `args.projectRoot` if the session doesn't provide a root.
3. **Call Direct Function**: Invoke the corresponding `*Direct` function wrapper (e.g., `listTasksDirect` from [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js)), passing an updated `args` object that includes the resolved `projectRoot`. Crucially, the third argument (context) passed to the direct function should **only include `{ log, session }`**. **Do NOT pass `reportProgress`**.
```javascript
// Example call to a non-AI direct function
const result = await someDirectFunction({ ...args, projectRoot }, log);
// Example call to an AI-based direct function
const resultAI = await someAIDirect({ ...args, projectRoot }, log, { session });
// Example call (applies to both AI and non-AI direct functions now)
const result = await someDirectFunction(
{ ...args, projectRoot }, // Args including resolved root
log, // MCP logger
{ session } // Context containing session
);
```
4. **Handle Result**: Receive the result object (`{ success, data/error, fromCache }`) from the `*Direct` function.
5. **Format Response**: Pass this result object to the `handleApiResult` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) for standardized MCP response formatting and error handling.
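Taken together, steps 1-5 condense to roughly the following (a hedged sketch — `someDirectFunction` is illustrative and the exact `handleApiResult` argument list is assumed):

```javascript
execute: async (args, { log, session }) => {
  try {
    // Steps 1-2: resolve the project root from the session, falling back to args
    const projectRoot = getProjectRootFromSession(session, log) || args.projectRoot;

    // Step 3: call the direct function with only { session } as context
    const result = await someDirectFunction({ ...args, projectRoot }, log, { session });

    // Steps 4-5: hand the { success, data/error, fromCache } object to the formatter
    return handleApiResult(result, log);
  } catch (error) {
    log.error(`Tool execution failed: ${error.message}`);
    return createErrorResponse(error.message);
  }
}
```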
@@ -288,85 +240,6 @@ execute: async (args, { log, session }) => { // Note: reportProgress is omitted
}
```
### Using AsyncOperationManager for Background Tasks
For tools that execute potentially long-running operations *where the AI call is just one part* (e.g., `expand-task`, `update`), use the AsyncOperationManager. The `add-task` command, as refactored, does *not* require this in the MCP tool layer because the direct function handles the primary AI work and returns the final result synchronously from the perspective of the MCP tool.
For tools that *do* use `AsyncOperationManager`:
```javascript
import { AsyncOperationManager } from '../utils/async-operation-manager.js'; // Correct path assuming utils location
import { getProjectRootFromSession, createContentResponse, createErrorResponse } from './utils.js';
import { someIntensiveDirect } from '../core/task-master-core.js';
// ... inside server.addTool({...})
execute: async (args, { log, session }) => { // Note: reportProgress omitted
try {
log.info(`Starting background operation with args: ${JSON.stringify(args)}`);
// 1. Get Project Root
let rootFolder = getProjectRootFromSession(session, log);
if (!rootFolder && args.projectRoot) {
rootFolder = args.projectRoot;
log.info(`Using project root from args as fallback: ${rootFolder}`);
}
// Create operation description
const operationDescription = `Expanding task ${args.id}...`; // Example
// 2. Start async operation using AsyncOperationManager
const operation = AsyncOperationManager.createOperation(
operationDescription,
async (reportProgressCallback) => { // This callback is provided by AsyncOperationManager
// This runs in the background
try {
// Report initial progress *from the manager's callback*
reportProgressCallback({ progress: 0, status: 'Starting operation...' });
// Call the direct function (passing only session context)
const result = await someIntensiveDirect(
{ ...args, projectRoot: rootFolder },
log,
{ session } // Pass session, NO reportProgress
);
// Report final progress *from the manager's callback*
reportProgressCallback({
progress: 100,
status: result.success ? 'Operation completed' : 'Operation failed',
result: result.data, // Include final data if successful
error: result.error // Include error object if failed
});
return result; // Return the direct function's result
} catch (error) {
// Handle errors within the async task
reportProgressCallback({
progress: 100,
status: 'Operation failed critically',
error: { message: error.message, code: error.code || 'ASYNC_OPERATION_FAILED' }
});
throw error; // Re-throw for the manager to catch
}
}
);
// 3. Return immediate response with operation ID
return {
status: 202, // StatusCodes.ACCEPTED
body: {
success: true,
message: 'Operation started',
operationId: operation.id
}
};
} catch (error) {
log.error(`Error starting background operation: ${error.message}`);
return createErrorResponse(`Failed to start operation: ${error.message}`); // Use standard error response
}
}
```
### Project Initialization Tool
The `initialize_project` tool allows integrated clients like Cursor to set up a new Task Master project:
@@ -417,19 +290,13 @@ log.error(`Error occurred: ${error.message}`, { stack: error.stack });
log.info('Progress: 50% - AI call initiated...'); // Example progress logging
```
### Progress Reporting Convention
- ⚠️ **DEPRECATED within Direct Functions**: The `reportProgress` function passed in the `context` object should **NOT** be called from within `*Direct` functions. Doing so can cause client-side validation errors due to missing/incorrect `progressToken` handling.
- ✅ **DO**: For tools using `AsyncOperationManager`, use the `reportProgressCallback` function *provided by the manager* within the background task definition (as shown in the `AsyncOperationManager` example above) to report progress updates for the *overall operation*.
- ✅ **DO**: If finer-grained progress needs to be indicated *during* the execution of a `*Direct` function (whether called directly or via `AsyncOperationManager`), use `log.info()` statements (e.g., `log.info('Progress: Parsing AI response...')`).
### Session Usage Convention
## Session Usage Convention
The `session` object (destructured from `context`) contains authenticated session data and client information.
- **Authentication**: Access user-specific data (`session.userId`, etc.) if authentication is implemented.
- **Project Root**: The primary use in Task Master is accessing `session.roots` to determine the client's project root directory via the `getProjectRootFromSession` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)). See the Standard Tool Execution Pattern above.
- **Environment Variables**: The `session.env` object is critical for AI tools. Pass the `session` object to the `*Direct` function's context, and then to AI client utility functions (like `getAnthropicClientForMCP`) which will extract API keys and other relevant environment settings (e.g., `MODEL`, `MAX_TOKENS`) from `session.env`.
- **Environment Variables**: The `session.env` object provides access to environment variables set in the MCP client configuration (e.g., `.cursor/mcp.json`). This is the **primary mechanism** for the unified AI service layer (`ai-services-unified.js`) to securely access **API keys** when called from MCP context.
- **Capabilities**: Can be used to check client capabilities (`session.clientCapabilities`).
## Direct Function Wrappers (`*Direct`)
@@ -438,24 +305,25 @@ These functions, located in `mcp-server/src/core/direct-functions/`, form the co
- **Purpose**: Bridge MCP tools and core Task Master modules (`scripts/modules/*`). Handle AI interactions if applicable.
- **Responsibilities**:
- Receive `args` (including the `projectRoot` determined by the tool), `log` object, and optionally a `context` object (containing **only** `{ session }`, if needed).
- **Find `tasks.json`**: Use `findTasksJsonPath(args, log)` from [`core/utils/path-utils.js`](mdc:mcp-server/src/core/utils/path-utils.js).
- Validate arguments specific to the core logic.
- **Handle AI Logic (if applicable)**: Initialize AI clients (using `session` from context), build prompts, make AI calls, parse responses.
- **Implement Caching (if applicable)**: Use `getCachedOrExecute` from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) for read operations.
- **Call Core Logic**: Call the underlying function from the core Task Master modules, passing necessary data (including AI results if applicable).
- ✅ **DO**: Pass `outputFormat: 'json'` (or similar) to the core function if it might produce console output.
- ✅ **DO**: Wrap the core function call with `enableSilentMode/disableSilentMode` if necessary.
- Handle errors gracefully (AI errors, core logic errors, file errors).
- Return a standardized result object: `{ success: boolean, data?: any, error?: { code: string, message: string }, fromCache?: boolean }`.
- ❌ **DON'T**: Call `reportProgress`. Use `log.info` for progress indication if needed.
- Receive `args` (including `projectRoot`), `log`, and optionally `{ session }` context.
- Find `tasks.json` using `findTasksJsonPath`.
- Validate arguments.
- **Implement Caching (if applicable)**: Use `getCachedOrExecute` (see the sketch after this list).
- **Call Core Logic**: Invoke function from `scripts/modules/*`.
- Pass `outputFormat: 'json'` if applicable.
- Wrap with `enableSilentMode/disableSilentMode` if needed.
- Pass `{ mcpLog: logWrapper, session }` context if core logic needs it.
- Handle errors.
- Return standardized result object.
- ❌ **DON'T**: Call `reportProgress`.
- ❌ **DON'T**: Initialize AI clients or call AI services directly.
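For read operations, the caching responsibility might look like this (a hedged sketch — the `getCachedOrExecute` option names and the `listTasks` signature are assumptions; imports of `findTasksJsonPath`, `getCachedOrExecute`, and the silent mode utilities are omitted as in the examples above):

```javascript
export async function listTasksDirect(args, log) {
  const tasksPath = findTasksJsonPath(args, log);
  const cacheKey = `listTasks:${tasksPath}:${args.status || 'all'}`;

  const coreActionFn = async () => {
    enableSilentMode(); // Guard against console output from core logic
    try {
      const tasks = listTasks(tasksPath, args.status, args.withSubtasks);
      return { success: true, data: tasks };
    } catch (error) {
      return { success: false, error: { code: 'LIST_TASKS_ERROR', message: error.message } };
    } finally {
      disableSilentMode();
    }
  };

  // Returns the cached result when fresh; otherwise executes and caches it
  return getCachedOrExecute({ cacheKey, actionFn: coreActionFn, log });
}
```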
## Key Principles
- **Prefer Direct Function Calls**: MCP tools should always call `*Direct` wrappers instead of `executeTaskMasterCommand`.
- **Standardized Execution Flow**: Follow the pattern: MCP Tool -> `getProjectRootFromSession` -> `*Direct` Function -> Core Logic / AI Logic.
- **Path Resolution via Direct Functions**: The `*Direct` function is responsible for finding the exact `tasks.json` path using `findTasksJsonPath`, relying on the `projectRoot` passed in `args`.
- **AI Logic in Direct Functions**: For AI-based tools, the `*Direct` function handles AI client initialization, calls, and parsing, using the `session` object passed in its context.
- **AI Logic in Core Modules**: AI interactions (prompt building, calling unified service) reside within the core logic functions (`scripts/modules/*`), not direct functions.
- **Silent Mode in Direct Functions**: Wrap *core function* calls (from `scripts/modules`) with `enableSilentMode()` and `disableSilentMode()` if they produce console output not handled by `outputFormat`. Do not wrap AI calls.
- **Selective Async Processing**: Use `AsyncOperationManager` in the *MCP Tool layer* for operations involving multiple steps or long waits beyond a single AI call (e.g., file processing + AI call + file writing). Simple AI calls handled entirely within the `*Direct` function (like `addTaskDirect`) may not need it at the tool layer.
- **No `reportProgress` in Direct Functions**: Do not pass or use `reportProgress` within `*Direct` functions. Use `log.info()` for internal progress or report progress from the `AsyncOperationManager` callback in the MCP tool layer.
@@ -480,7 +348,7 @@ Follow these steps to add MCP support for an existing Task Master command (see [
1. **Ensure Core Logic Exists**: Verify the core functionality is implemented and exported from the relevant module in `scripts/modules/`. Ensure the core function can suppress console output (e.g., via an `outputFormat` parameter).
2. **Create Direct Function File in `mcp-server/src/core/direct-functions/`**:
- Create a new file (e.g., `your-command.js`) using **kebab-case** naming.
- Import necessary core functions, `findTasksJsonPath`, silent mode utilities, and potentially AI client/prompt utilities.
- Implement `async function yourCommandDirect(args, log, context = {})` using **camelCase** with `Direct` suffix. **Remember `context` should only contain `{ session }` if needed (for AI keys/config).**

View File

@@ -25,11 +25,17 @@ alwaysApply: false
The standard pattern for adding a feature follows this workflow:
1. **Core Logic**: Implement the business logic in the appropriate module (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)).
2. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js) following [`ui.mdc`](mdc:.cursor/rules/ui.mdc).
3. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js) following [`commands.mdc`](mdc:.cursor/rules/commands.mdc).
4. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
5. **Configuration**: Update any configuration in [`utils.js`](mdc:scripts/modules/utils.js) if needed, following [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc).
6. **Documentation**: Update help text and documentation in [dev_workflow.mdc](mdc:scripts/modules/dev_workflow.mdc)
2. **AI Integration (If Applicable)**:
- Import necessary service functions (e.g., `generateTextService`, `streamTextService`) from [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js).
- Prepare parameters (`role`, `session`, `systemPrompt`, `prompt`).
- Call the service function.
- Handle the response (direct text or stream object).
- **Important**: Prefer `generateTextService` for calls sending large context (like stringified JSON) where incremental display is not needed. See [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc) for detailed usage patterns and cautions.
3. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js) following [`ui.mdc`](mdc:.cursor/rules/ui.mdc).
4. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js) following [`commands.mdc`](mdc:.cursor/rules/commands.mdc).
5. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
6. **Configuration**: Update configuration settings or add new ones in [`config-manager.js`](mdc:scripts/modules/config-manager.js) and ensure getters/setters are appropriate. Update documentation in [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). Update the `.taskmasterconfig` structure if needed.
7. **Documentation**: Update help text and documentation in [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).
## Critical Checklist for New Features
@@ -211,7 +217,29 @@ export {
```
```javascript
// 2. UI COMPONENTS: Add display function to ui.js
// 2. AI Integration: Add import and use necessary service functions
import { generateTextService } from './ai-services-unified.js';
// Example usage:
async function handleAIInteraction(session) {
  // `session` comes from MCP context when available; undefined for plain CLI use
  const role = 'main'; // AI role: 'main', 'research', or 'fallback'
  const systemPrompt = 'You are a helpful assistant.';
  const prompt = 'What is the capital of France?';
  const result = await generateTextService(role, session, systemPrompt, prompt);
  console.log(result);
}
// Export from the module
export {
// ... existing exports ...
handleAIInteraction,
};
```
```javascript
// 3. UI COMPONENTS: Add display function to ui.js
/**
* Display archive operation results
* @param {string} archivePath - Path to the archive file
@@ -232,7 +260,7 @@ export {
```
```javascript
// 3. COMMAND INTEGRATION: Add to commands.js
// 4. COMMAND INTEGRATION: Add to commands.js
import { archiveTasks } from './task-manager.js';
import { displayArchiveResults } from './ui.js';
@@ -452,7 +480,7 @@ npm test
For each new feature:
1. Add help text to the command definition
2. Update [`dev_workflow.mdc`](mdc:scripts/modules/dev_workflow.mdc) with command reference
2. Update [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) with command reference
3. Consider updating [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) if the feature significantly changes module responsibilities.
Follow the existing command reference format:
@@ -590,3 +618,8 @@ When implementing project initialization commands:
});
}
```
}
});
}
```

View File

@@ -69,5 +69,4 @@ alwaysApply: true
- Update references to external docs
- Maintain links between related rules
- Document breaking changes
Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for proper rule formatting and structure.

View File

@@ -3,14 +3,13 @@ description: Comprehensive reference for Taskmaster MCP tools and CLI commands.
globs: **/*
alwaysApply: true
---
# Taskmaster Tool & Command Reference
This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools (for integrations like Cursor) and the corresponding `task-master` CLI commands (for direct user interaction or fallback).
This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Cursor, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback.
**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback. See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for MCP implementation details and [`commands.mdc`](mdc:.cursor/rules/commands.mdc) for CLI implementation guidelines.
**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback.
**Important:** Several MCP tools involve AI processing and are long-running operations that may take up to a minute to complete. When using these tools, always inform users that the operation is in progress and to wait patiently for results. The AI-powered tools include: `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.
**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.
---
@@ -24,18 +23,18 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key CLI Options:**
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
* `--description <text>`: `Provide a brief description for your project.`
* `--version <version>`: `Set the initial version for your project (e.g., '0.1.0').`
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
* **Usage:** Run this once at the beginning of a new project.
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
* **Key MCP Parameters/Options:**
* `projectName`: `Set the name for your project.` (CLI: `--name <name>`)
* `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`)
* `projectVersion`: `Set the initial version for your project (e.g., '0.1.0').` (CLI: `--version <version>`)
* `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`)
* `authorName`: `Author name.` (CLI: `--author <author>`)
* `skipInstall`: `Skip installing dependencies (default: false).` (CLI: `--skip-install`)
* `addAliases`: `Add shell aliases (tm, taskmaster) (default: false).` (CLI: `--aliases`)
* `yes`: `Skip prompts and use defaults/provided arguments (default: false).` (CLI: `-y, --yes`)
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in scripts/example_prd.txt.
@@ -43,15 +42,44 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **MCP Tool:** `parse_prd`
* **CLI Command:** `task-master parse-prd [file] [options]`
* **Description:** `Parse a Product Requirements Document (PRD) or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Key Parameters/Options:**
* `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file (default: 'tasks/tasks.json').` (CLI: `-o, --output <file>`)
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to 'tasks/tasks.json'.` (CLI: `-o, --output <file>`)
* `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
* `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD (libraries, database schemas, frameworks, tech stacks, etc.) while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in scripts/example_prd.txt as a template for creating the PRD based on their idea, for use with parse-prd.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `scripts/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
---
## AI Model Configuration
### 2. Manage Models (`models`)
* **MCP Tool:** `models`
* **CLI Command:** `task-master models [options]`
* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.`
* **Key MCP Parameters/Options:**
* `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`)
* `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`)
* `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`)
* `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`)
* `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`)
* `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically)
* `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically)
* **Key CLI Options:**
* `--set-main <model_id>`: `Set the primary model.`
* `--set-research <model_id>`: `Set the research model.`
* `--set-fallback <model_id>`: `Set the fallback model.`
* `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
* `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.`
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
* **Notes:** Configuration is stored in `.taskmasterconfig` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **API note:** API keys for the selected AI provider (based on the configured model) must be present in the `env` section of `mcp.json` to be accessible in MCP context, and in the project's local `.env` file for the CLI to read them.
* **Warning:** DO NOT MANUALLY EDIT THE .taskmasterconfig FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
---
@@ -63,9 +91,9 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master list [options]`
* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.`
* **Key Parameters/Options:**
* `status`: `Show only Taskmaster tasks matching this status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`)
* `status`: `Show only Taskmaster tasks matching this status, e.g., 'pending' or 'done'.` (CLI: `-s, --status <status>`)
* `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Get an overview of the project status, often used at the start of a work session.
### 4. Get Next Task (`next_task`)
@@ -74,7 +102,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master next [options]`
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
* **Key Parameters/Options:**
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Identify what to work on next according to the plan.
### 5. Get Task Details (`get_task`)
@@ -83,8 +111,8 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master show [id] [options]`
* **Description:** `Display detailed information for a specific Taskmaster task or subtask by its ID.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task (e.g., '15') or subtask (e.g., '15.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `id`: `Required. The ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Understand the full details, implementation notes, and test strategy for a specific task before starting work.
---
@@ -97,10 +125,11 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master add-task [options]`
* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.`
* **Key Parameters/Options:**
* `prompt`: `Required. Describe the new task you want Taskmaster to create (e.g., "Implement user authentication using JWT").` (CLI: `-p, --prompt <text>`)
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start (e.g., '12,14').` (CLI: `-d, --dependencies <ids>`)
* `priority`: `Set the priority for the new task ('high', 'medium', 'low'; default: 'medium').` (CLI: `--priority <priority>`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`)
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
* `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
* `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Quickly add newly identified tasks during development.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -112,13 +141,13 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`)
* `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`)
* `title`: `Required (if not using taskId). The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`)
* `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`)
* `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`)
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
* `dependencies`: `Specify IDs of other tasks or subtasks (e.g., '15', '16.1') that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
* `status`: `Set the initial status for the new subtask (default: 'pending').` (CLI: `-s, --status <status>`)
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Break down tasks manually or reorganize existing tasks.
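* **Example (CLI):** Both modes, with illustrative IDs, might look like:
```bash
# Add a brand-new subtask under parent task 15
task-master add-subtask --parent=15 --title="Write integration tests" \
  --description="Cover the auth endpoints"

# Convert existing top-level task 20 into a subtask of task 15
task-master add-subtask --parent=15 --task-id=20
```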
### 8. Update Tasks (`update`)
@@ -127,10 +156,10 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master update [options]`
* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.`
* **Key Parameters/Options:**
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher (and not 'done') will be considered.` (CLI: `--from <id>`)
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks (e.g., "We are now using React Query instead of Redux Toolkit for data fetching").` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use Perplexity AI for more informed updates based on external knowledge (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -138,12 +167,12 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **MCP Tool:** `update_task`
* **CLI Command:** `task-master update-task [options]`
* **Description:** `Modify a specific Taskmaster task (or subtask) by its ID, incorporating new information or changes.`
* **Description:** `Modify a specific Taskmaster task or subtask by its ID, incorporating new information or changes.`
* **Key Parameters/Options:**
* `id`: `Required. The specific ID of the Taskmaster task (e.g., '15') or subtask (e.g., '15.2') you want to update.` (CLI: `-i, --id <id>`)
* `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to update.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use Perplexity AI for more informed updates (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Refine a specific task based on new understanding or feedback. Example CLI: `task-master update-task --id='15' --prompt='Clarification: Use PostgreSQL instead of MySQL.\nUpdate schema details...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -153,10 +182,10 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
* `id`: `Required. The specific ID of the Taskmaster subtask (e.g., '15.2') you want to add information to.` (CLI: `-i, --id <id>`)
* `id`: `Required. The specific ID of the Taskmaster subtask, e.g., '15.2', you want to add information to.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. Provide the information or notes Taskmaster should append to the subtask's details. Ensure this adds *new* information not already present.` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use Perplexity AI for more informed updates (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Add implementation notes, code snippets, or clarifications to a subtask during development. Before calling, review the subtask's current details to append only fresh insights, helping to build a detailed log of the implementation journey and avoid redundancy. Example CLI: `task-master update-subtask --id='15.2' --prompt='Discovered that the API requires header X.\nImplementation needs adjustment...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -164,11 +193,11 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **MCP Tool:** `set_task_status`
* **CLI Command:** `task-master set-status [options]`
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks (e.g., 'pending', 'in-progress', 'done').`
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.`
* **Key Parameters/Options:**
* `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s) (e.g., '15', '15.2', '16,17.1') to update.` (CLI: `-i, --id <id>`)
* `status`: `Required. The new status to set (e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled').` (CLI: `-s, --status <status>`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
* `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Mark progress as tasks move through the development cycle.
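* **Example (CLI):** With illustrative IDs:
```bash
# Mark one task and one subtask as done in a single call
task-master set-status --id=15,15.2 --status=done
```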
### 12. Remove Task (`remove_task`)
@@ -177,9 +206,9 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master remove-task [options]`
* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task (e.g., '5') or subtask (e.g., '5.2') to permanently remove.` (CLI: `-i, --id <id>`)
* `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
* `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.
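* **Example (CLI):** With an illustrative ID:
```bash
# Permanently remove subtask 5.2 without the confirmation prompt
task-master remove-task --id=5.2 --yes
```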
@@ -191,28 +220,28 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **MCP Tool:** `expand_task`
* **CLI Command:** `task-master expand [options]`
* **Description:** `Use Taskmaster's AI to break down a complex task (or all tasks) into smaller, manageable subtasks.`
* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
* **Key Parameters/Options:**
* `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
* `num`: `Suggests how many subtasks Taskmaster should aim to create (uses complexity analysis by default).` (CLI: `-n, --num <number>`)
* `research`: `Enable Taskmaster to use Perplexity AI for more informed subtask generation (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `prompt`: `Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
* `force`: `Use this to make Taskmaster replace existing subtasks with newly generated ones.` (CLI: `--force`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding.
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
* `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
* `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
* `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
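* **Example (CLI):** Typical invocations (illustrative ID) might be:
```bash
# Let the complexity report drive the subtask count, appending to existing subtasks
task-master expand --id=8 --research

# Replace existing subtasks with five newly generated ones
task-master expand --id=8 --num=5 --force
```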
### 14. Expand All Tasks (`expand_all`)
* **MCP Tool:** `expand_all`
* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
* **Description:** `Tell Taskmaster to automatically expand all 'pending' tasks based on complexity analysis.`
* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
* **Key Parameters/Options:**
* `num`: `Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
* `research`: `Enable Perplexity AI for more informed subtask generation (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `prompt`: `Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
* `force`: `Make Taskmaster replace existing subtasks.` (CLI: `--force`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
* `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
* `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
* `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
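* **Example (CLI):** For instance:
```bash
# Append research-backed subtasks to every eligible task
task-master expand --all --research

# Start fresh by clearing existing subtasks first
task-master expand --all --force
```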
@@ -222,9 +251,9 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove (e.g., '15', '16,18').` (Required unless using `all`) (CLI: `-i, --id <ids>`)
    *   `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
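* **Example (CLI):** With illustrative IDs:
```bash
# Clear subtasks from selected parents, or from every parent task
task-master clear-subtasks --id=16,18
task-master clear-subtasks --all
```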
### 16. Remove Subtask (`remove_subtask`)
@@ -233,10 +262,10 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master remove-subtask [options]`
* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
* **Key Parameters/Options:**
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove (e.g., '15.2', '16.1,16.3').` (CLI: `-i, --id <id>`)
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
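* **Example (CLI):** With an illustrative ID:
```bash
# Promote subtask 15.2 to a standalone top-level task instead of deleting it
task-master remove-subtask --id=15.2 --convert
```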
---
@@ -250,8 +279,8 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first (the prerequisite).` (CLI: `-d, --depends-on <id>`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
    *   `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Establish the correct order of execution between tasks.
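* **Example (CLI):** With illustrative IDs:
```bash
# Make task 22 wait on task 21
task-master add-dependency --id=22 --depends-on=21
```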
### 18. Remove Dependency (`remove_dependency`)
@@ -262,7 +291,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
* `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Update task relationships when the order of execution changes.
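* **Example (CLI):** With illustrative IDs:
```bash
# Task 22 no longer requires task 21 to finish first
task-master remove-dependency --id=22 --depends-on=21
```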
### 19. Validate Dependencies (`validate_dependencies`)
@@ -271,7 +300,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master validate-dependencies [options]`
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
* **Key Parameters/Options:**
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Audit the integrity of your task dependencies.
### 20. Fix Dependencies (`fix_dependencies`)
@@ -280,7 +309,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master fix-dependencies [options]`
* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.`
* **Key Parameters/Options:**
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Clean up dependency errors automatically.
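* **Example (CLI):** The two dependency-maintenance commands pair naturally:
```bash
# Audit dependencies first, then auto-repair anything flagged
task-master validate-dependencies
task-master fix-dependencies
```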
---
@@ -295,8 +324,8 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `output`: `Where to save the complexity analysis report (default: 'scripts/task-complexity-report.json').` (CLI: `-o, --output <file>`)
* `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
* `research`: `Enable Perplexity AI for more accurate complexity analysis (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
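* **Example (CLI):** For instance:
```bash
# Research-backed analysis; flag tasks scoring 6+ as candidates for expansion
task-master analyze-complexity --research --threshold=6
```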
@@ -320,34 +349,33 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.`
* **Key Parameters/Options:**
* `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date.
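* **Example (CLI):** For instance:
```bash
# Refresh the per-task Markdown files after editing tasks.json
task-master generate
```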
---
## Environment Variables Configuration
## Environment Variables Configuration (Updated)
Taskmaster's behavior can be customized via environment variables. These affect both CLI and MCP server operation:
Taskmaster primarily uses the **`.taskmasterconfig`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
* **ANTHROPIC_API_KEY** (Required): Your Anthropic API key for Claude.
* **MODEL**: Claude model to use (default: `claude-3-opus-20240229`).
* **MAX_TOKENS**: Maximum tokens for AI responses (default: 8192).
* **TEMPERATURE**: Temperature for AI model responses (default: 0.7).
* **DEBUG**: Enable debug logging (`true`/`false`, default: `false`).
* **LOG_LEVEL**: Console output level (`debug`, `info`, `warn`, `error`, default: `info`).
* **DEFAULT_SUBTASKS**: Default number of subtasks for `expand` (default: 5).
* **DEFAULT_PRIORITY**: Default priority for new tasks (default: `medium`).
* **PROJECT_NAME**: Project name used in metadata.
* **PROJECT_VERSION**: Project version used in metadata.
* **PERPLEXITY_API_KEY**: API key for Perplexity AI (for `--research` flags).
* **PERPLEXITY_MODEL**: Perplexity model to use (default: `sonar-medium-online`).
Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
Set these in your `.env` file in the project root or in your environment before running Taskmaster.
* **API Keys (Required for corresponding provider):**
* `ANTHROPIC_API_KEY`
* `PERPLEXITY_API_KEY`
* `OPENAI_API_KEY`
* `GOOGLE_API_KEY`
* `MISTRAL_API_KEY`
* `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
* `OPENROUTER_API_KEY`
* `XAI_API_KEY`
    *   `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
*   **Endpoints (optional, provider-specific; can also be set in `.taskmasterconfig`):**
* `AZURE_OPENAI_ENDPOINT`
* `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmasterconfig` via `task-master models` command or `models` MCP tool.
---
For implementation details:
* CLI commands: See [`commands.mdc`](mdc:.cursor/rules/commands.mdc)
* MCP server: See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)
* Task structure: See [`tasks.mdc`](mdc:.cursor/rules/tasks.mdc)
* Workflow: See [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc)
For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.cursor/rules/dev_workflow.mdc).

View File

@@ -283,107 +283,97 @@ When testing ES modules (`"type": "module"` in package.json), traditional mockin
- Imported functions may not use your mocked dependencies even with proper jest.mock() setup
- ES module exports are read-only properties (cannot be reassigned during tests)
- **Mocking Entire Modules**
```javascript
// Mock the entire module with custom implementation
jest.mock('../../scripts/modules/task-manager.js', () => {
// Get original implementation for functions you want to preserve
const originalModule = jest.requireActual('../../scripts/modules/task-manager.js');
- **Mocking Modules Statically Imported**
- For modules imported with standard `import` statements at the top level:
- Use `jest.mock('path/to/module', factory)` **before** any imports.
- Jest hoists these mocks.
- Ensure the factory function returns the mocked structure correctly.
// Return mix of original and mocked functionality
return {
...originalModule,
generateTaskFiles: jest.fn() // Replace specific functions
};
- **Mocking Dependencies for Dynamically Imported Modules**
- **Problem**: Standard `jest.mock()` often fails for dependencies of modules loaded later using dynamic `import('path/to/module')`. The mocks aren't applied correctly when the dynamic import resolves.
- **Solution**: Use `jest.unstable_mockModule(modulePath, factory)` **before** the dynamic `import()` call.
```javascript
// 1. Define mock function instances
const mockExistsSync = jest.fn();
const mockReadFileSync = jest.fn();
// ... other mocks
// 2. Mock the dependency module *before* the dynamic import
jest.unstable_mockModule('fs', () => ({
__esModule: true, // Important for ES module mocks
// Mock named exports
existsSync: mockExistsSync,
readFileSync: mockReadFileSync,
// Mock default export if necessary
// default: { ... }
}));
// 3. Dynamically import the module under test (e.g., in beforeAll or test case)
let moduleUnderTest;
beforeAll(async () => {
// Ensure mocks are reset if needed before import
mockExistsSync.mockReset();
mockReadFileSync.mockReset();
// ... reset other mocks ...
// Import *after* unstable_mockModule is called
moduleUnderTest = await import('../../scripts/modules/module-using-fs.js');
});
// Import after mocks
import * as taskManager from '../../scripts/modules/task-manager.js';
// 4. Now tests can use moduleUnderTest, and its 'fs' calls will hit the mocks
test('should use mocked fs.readFileSync', () => {
mockReadFileSync.mockReturnValue('mock data');
moduleUnderTest.readFileAndProcess();
expect(mockReadFileSync).toHaveBeenCalled();
// ... other assertions
});
```
- ✅ **DO**: Call `jest.unstable_mockModule()` before `await import()`.
- ✅ **DO**: Include `__esModule: true` in the mock factory for ES modules.
- ✅ **DO**: Mock named and default exports as needed within the factory.
- ✅ **DO**: Reset mock functions (`mockFn.mockReset()`) before the dynamic import if they might have been called previously.
// Now you can use the mock directly
const { generateTaskFiles } = taskManager;
- **Mocking Entire Modules (Static Import)**
```javascript
// Mock the entire module with custom implementation for static imports
// ... (existing example remains valid) ...
```
- **Direct Implementation Testing**
- Instead of calling the actual function which may have module-scope reference issues:
```javascript
test('should perform expected actions', () => {
// Setup mocks for this specific test
mockReadJSON.mockImplementationOnce(() => sampleData);
// Manually simulate the function's behavior
const data = mockReadJSON('path/file.json');
mockValidateAndFixDependencies(data, 'path/file.json');
// Skip calling the actual function and verify mocks directly
expect(mockReadJSON).toHaveBeenCalledWith('path/file.json');
expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path/file.json');
});
// ... (existing example remains valid) ...
```
- **Avoiding Module Property Assignment**
```javascript
// ❌ DON'T: This causes "Cannot assign to read only property" errors
const utils = await import('../../scripts/modules/utils.js');
utils.readJSON = mockReadJSON; // Error: read-only property
// ✅ DO: Use the module factory pattern in jest.mock()
jest.mock('../../scripts/modules/utils.js', () => ({
readJSON: mockReadJSONFunc,
writeJSON: mockWriteJSONFunc
}));
// ... (existing example remains valid) ...
```
- **Handling Mock Verification Failures**
- If verification like `expect(mockFn).toHaveBeenCalled()` fails:
1. Check that your mock setup is before imports
2. Ensure you're using the right mock instance
3. Verify your test invokes behavior that would call the mock
4. Use `jest.clearAllMocks()` in beforeEach to reset mock state
5. Consider implementing a simpler test that directly verifies mock behavior
- **Full Example Pattern**
```javascript
// 1. Define mock implementations
const mockReadJSON = jest.fn();
const mockValidateAndFixDependencies = jest.fn();
// 2. Mock modules
jest.mock('../../scripts/modules/utils.js', () => ({
readJSON: mockReadJSON,
// Include other functions as needed
}));
jest.mock('../../scripts/modules/dependency-manager.js', () => ({
validateAndFixDependencies: mockValidateAndFixDependencies
}));
// 3. Import after mocks
import * as taskManager from '../../scripts/modules/task-manager.js';
describe('generateTaskFiles function', () => {
beforeEach(() => {
jest.clearAllMocks();
});
test('should generate task files', () => {
// 4. Setup test-specific mock behavior
const sampleData = { tasks: [{ id: 1, title: 'Test' }] };
mockReadJSON.mockReturnValueOnce(sampleData);
// 5. Create direct implementation test
// Instead of calling: taskManager.generateTaskFiles('path', 'dir')
// Simulate reading data
const data = mockReadJSON('path');
expect(mockReadJSON).toHaveBeenCalledWith('path');
// Simulate other operations the function would perform
mockValidateAndFixDependencies(data, 'path');
expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path');
});
});
```
1. Check that your mock setup (`jest.mock` or `jest.unstable_mockModule`) is correctly placed **before** imports (static or dynamic).
2. Ensure you're using the right mock instance and it's properly passed to the module.
3. Verify your test invokes behavior that *should* call the mock.
4. Use `jest.clearAllMocks()` or specific `mockFn.mockReset()` in `beforeEach` to prevent state leakage between tests.
5. **Check Console Assertions**: If verifying `console.log`, `console.warn`, or `console.error` calls, ensure your assertion matches the *actual* arguments passed. If the code logs a single formatted string, assert against that single string (using `expect.stringContaining` or exact match), not multiple `expect.stringContaining` arguments.
```javascript
// Example: Code logs console.error(`Error: ${message}. Details: ${details}`)
// ❌ DON'T: Assert multiple arguments if only one is logged
// expect(console.error).toHaveBeenCalledWith(
// expect.stringContaining('Error:'),
// expect.stringContaining('Details:')
// );
// ✅ DO: Assert the single string argument
expect(console.error).toHaveBeenCalledWith(
expect.stringContaining('Error: Specific message. Details: More details')
);
// or for exact match:
expect(console.error).toHaveBeenCalledWith(
'Error: Specific message. Details: More details'
);
```
6. Consider implementing a simpler test that *only* verifies the mock behavior in isolation.
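   For point 6, a stripped-down sketch (illustrative names) that exercises only the mock itself can show whether a failure lies in the mock wiring or in the code under test:
   ```javascript
   // Minimal isolation test: no module under test involved, only the mock
   test('mock function records calls and returns configured value', () => {
     const mockWriteJSON = jest.fn().mockReturnValue(true);

     const result = mockWriteJSON('tasks/tasks.json', { tasks: [] });

     expect(result).toBe(true);
     expect(mockWriteJSON).toHaveBeenCalledWith('tasks/tasks.json', { tasks: [] });
   });
   ```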
## Mocking Guidelines

View File

@@ -3,7 +3,6 @@ description: Guidelines for implementing utility functions
globs: scripts/modules/utils.js, mcp-server/src/**/*
alwaysApply: false
---
# Utility Function Guidelines
## General Principles
@@ -79,28 +78,30 @@ alwaysApply: false
}
```
## Configuration Management (in `scripts/modules/utils.js`)
## Configuration Management (via `config-manager.js`)
- **Environment Variables**:
- ✅ DO: Provide default values for all configuration
- ✅ DO: Use environment variables for customization
- ✅ DO: Document available configuration options
- ❌ DON'T: Hardcode values that should be configurable
Taskmaster configuration (excluding API keys) is primarily managed through the `.taskmasterconfig` file located in the project root and accessed via getters in [`scripts/modules/config-manager.js`](mdc:scripts/modules/config-manager.js).
```javascript
// ✅ DO: Set up configuration with defaults and environment overrides
const CONFIG = {
model: process.env.MODEL || 'claude-3-opus-20240229', // Updated default model
maxTokens: parseInt(process.env.MAX_TOKENS || '4000'),
temperature: parseFloat(process.env.TEMPERATURE || '0.7'),
debug: process.env.DEBUG === "true",
logLevel: process.env.LOG_LEVEL || "info",
defaultSubtasks: parseInt(process.env.DEFAULT_SUBTASKS || "3"),
defaultPriority: process.env.DEFAULT_PRIORITY || "medium",
projectName: process.env.PROJECT_NAME || "Task Master Project", // Generic project name
projectVersion: "1.5.0" // Version should be updated via release process
};
```
- **`.taskmasterconfig` File**:
- ✅ DO: Use this JSON file to store settings like AI model selections (main, research, fallback), parameters (temperature, maxTokens), logging level, default priority/subtasks, etc.
- ✅ DO: Manage this file using the `task-master models --setup` CLI command or the `models` MCP tool.
- ✅ DO: Rely on [`config-manager.js`](mdc:scripts/modules/config-manager.js) to load this file (using the correct project root passed from MCP or found via CLI utils), merge with defaults, and provide validated settings.
- ❌ DON'T: Store API keys in this file.
- ❌ DON'T: Manually edit this file unless necessary.
- **Configuration Getters (`config-manager.js`)**:
- ✅ DO: Import and use specific getters from `config-manager.js` (e.g., `getMainProvider()`, `getLogLevel()`, `getMainMaxTokens()`) to access configuration values *needed for application logic* (like `getDefaultSubtasks`).
- ✅ DO: Pass the `explicitRoot` parameter to getters if calling from MCP direct functions to ensure the correct project's config is loaded.
- ❌ DON'T: Call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions (`scripts/modules/task-manager/*`). Instead, pass the `role` to the unified AI service.
- ❌ DON'T: Access configuration values directly from environment variables (except API keys).
- **API Key Handling (`utils.js` & `ai-services-unified.js`)**:
- ✅ DO: Store API keys **only** in `.env` (for CLI, loaded by `dotenv` in `scripts/dev.js`) or `.cursor/mcp.json` (for MCP, accessed via `session.env`).
- ✅ DO: Use `isApiKeySet(providerName, session)` from `config-manager.js` to check if a provider's key is available *before* potentially attempting an AI call if needed, but note the unified service performs its own internal check.
- ✅ DO: Understand that the unified service layer (`ai-services-unified.js`) internally resolves API keys using `resolveEnvVariable(key, session)` from `utils.js`.
- **Error Handling**:
- ✅ DO: Handle potential `ConfigurationError` if the `.taskmasterconfig` file is missing or invalid when accessed via `getConfig` (e.g., in `commands.js` or direct functions).
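  Taken together, a direct function might consume these getters roughly as sketched below. The getter names come from the guidelines above, but the exact signatures (notably the `explicitRoot` argument) are assumptions, not verified APIs:
  ```javascript
  // Hypothetical sketch only -- signatures assumed from the guidelines above
  import { getDefaultSubtasks, getLogLevel, isApiKeySet } from './config-manager.js';

  export function expandTaskDirect(args, log, { session } = {}) {
    // Pass explicitRoot so the correct project's .taskmasterconfig is loaded under MCP
    const subtaskCount = args.num ?? getDefaultSubtasks(args.projectRoot);

    if (getLogLevel(args.projectRoot) === 'debug') {
      log.info(`Planning ${subtaskCount} subtasks`);
    }

    // Optional pre-flight check; the unified AI service performs its own key check too
    if (!isApiKeySet('anthropic', session)) {
      return {
        success: false,
        error: { code: 'MISSING_API_KEY', message: 'Configure an API key for the main provider.' }
      };
    }

    // ...delegate to ai-services-unified.js with a role (e.g., 'main'), never a model getter
  }
  ```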
## Logging Utilities (in `scripts/modules/utils.js`)

View File

@@ -1,20 +1,9 @@
# API Keys (Required)
ANTHROPIC_API_KEY=your_anthropic_api_key_here # Format: sk-ant-api03-...
PERPLEXITY_API_KEY=your_perplexity_api_key_here # Format: pplx-...
# Model Configuration
MODEL=claude-3-7-sonnet-20250219 # Recommended models: claude-3-7-sonnet-20250219, claude-3-opus-20240229
PERPLEXITY_MODEL=sonar-pro # Perplexity model for research-backed subtasks
MAX_TOKENS=64000 # Maximum tokens for model responses
TEMPERATURE=0.2 # Temperature for model responses (0.0-1.0)
# Logging Configuration
DEBUG=false # Enable debug logging (true/false)
LOG_LEVEL=info # Log level (debug, info, warn, error)
# Task Generation Settings
DEFAULT_SUBTASKS=5 # Default number of subtasks when expanding
DEFAULT_PRIORITY=medium # Default priority for generated tasks (high, medium, low)
# Project Metadata (Optional)
PROJECT_NAME=Your Project Name # Override default project name in tasks.json
# API Keys (required for whichever providers you assign to a role, i.e. main/research/fallback -- see `task-master models`)
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_KEY_HERE
PERPLEXITY_API_KEY=YOUR_PERPLEXITY_KEY_HERE
OPENAI_API_KEY=YOUR_OPENAI_KEY_HERE
GOOGLE_API_KEY=YOUR_GOOGLE_KEY_HERE
MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE

2
.gitignore vendored
View File

@@ -19,6 +19,8 @@ npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
tests/e2e/_runs/
tests/e2e/log/
# Coverage directory used by tools like istanbul
coverage

31
.taskmasterconfig Normal file
View File

@@ -0,0 +1,31 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 100000,
"temperature": 0.2
},
"research": {
"provider": "xai",
"modelId": "grok-3",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 120000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}

View File

@@ -13,25 +13,22 @@ A task management system for AI-driven development with Claude, designed to work
## Configuration
The script can be configured through environment variables in a `.env` file at the root of the project:
Taskmaster uses two primary configuration methods:
### Required Configuration
1. **`.taskmasterconfig` File (Project Root)**
- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude
- Stores most settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default priority/subtasks, project name.
- **Created and managed using `task-master models --setup` CLI command or the `models` MCP tool.**
- Do not edit manually unless you know what you are doing.
### Optional Configuration
2. **Environment Variables (`.env` file or MCP `env` block)**
- Used **only** for sensitive **API Keys** (e.g., `ANTHROPIC_API_KEY`, `PERPLEXITY_API_KEY`, etc.) and specific endpoints (like `OLLAMA_BASE_URL`).
- **For CLI:** Place keys in a `.env` file in your project root.
    -   **For MCP/Cursor:** Place keys in the `env` section of your `.cursor/mcp.json` file (or the equivalent MCP config for your AI IDE or client) under the `taskmaster-ai` server definition.
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
- `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium)
- `PROJECT_NAME`: Override default project name in tasks.json
- `PROJECT_VERSION`: Override default version in tasks.json
**Important:** Settings like model choices, max tokens, temperature, and log level are **no longer configured via environment variables.** Use the `task-master models` command or tool.
See the [Configuration Guide](docs/configuration.md) for full details.
## Installation

View File

@@ -31,12 +31,12 @@ MCP (Model Control Protocol) provides the easiest way to get started with Task M
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": "64000",
"TEMPERATURE": "0.2",
"DEFAULT_SUBTASKS": "5",
"DEFAULT_PRIORITY": "medium"
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
}
}
}

31
assets/.taskmasterconfig Normal file
View File

@@ -0,0 +1,31 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3.5-sonnet-20240620",
"maxTokens": 120000,
"temperature": 0.1
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}

View File

@@ -1,14 +1,8 @@
# Required
ANTHROPIC_API_KEY=your-api-key-here # For most AI ops -- Format: sk-ant-api03-... (Required)
PERPLEXITY_API_KEY=pplx-abcde # For research -- Format: pplx-abcde (Optional, Highly Recommended)
# Optional - defaults shown
MODEL=claude-3-7-sonnet-20250219 # Recommended models: claude-3-7-sonnet-20250219, claude-3-opus-20240229 (Required)
PERPLEXITY_MODEL=sonar-pro # Make sure you have access to sonar-pro otherwise you can use sonar regular (Optional)
MAX_TOKENS=64000 # Maximum tokens for model responses (Required)
TEMPERATURE=0.2 # Temperature for model responses (0.0-1.0) - lower = less creativity and follow your prompt closely (Required)
DEBUG=false # Enable debug logging (true/false)
LOG_LEVEL=info # Log level (debug, info, warn, error)
DEFAULT_SUBTASKS=5 # Default number of subtasks when expanding
DEFAULT_PRIORITY=medium # Default priority for generated tasks (high, medium, low)
PROJECT_NAME={{projectName}} # Project name for tasks.json metadata
# API Keys (Required to enable respective provider)
ANTHROPIC_API_KEY=your_anthropic_api_key_here # Required: Format: sk-ant-api03-...
PERPLEXITY_API_KEY=your_perplexity_api_key_here # Optional: Format: pplx-...
OPENAI_API_KEY=your_openai_api_key_here         # Optional, for OpenAI models. Format: sk-proj-...
OPENROUTER_API_KEY=your_openrouter_api_key_here # Optional, for OpenRouter models.
GOOGLE_API_KEY=your_google_api_key_here # Optional, for Google Gemini models.
MISTRAL_API_KEY=your_mistral_key_here # Optional, for Mistral AI models.
XAI_API_KEY=YOUR_XAI_KEY_HERE                   # Optional, for xAI models.
AZURE_OPENAI_API_KEY=your_azure_key_here # Optional, for Azure OpenAI models (requires endpoint in .taskmasterconfig).

View File

@@ -16,27 +16,22 @@ In an AI-driven development process—particularly with tools like [Cursor](http
8. **Clear subtasks**—remove subtasks from specified tasks to allow regeneration or restructuring.
9. **Show task details**—display detailed information about a specific task and its subtasks.
## Configuration
## Configuration (Updated)
The script can be configured through environment variables in a `.env` file at the root of the project:
Task Master configuration is now managed through two primary methods:
### Required Configuration
1. **`.taskmasterconfig` File (Project Root - Primary)**
- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude
- Stores AI model selections (`main`, `research`, `fallback`), model parameters (`maxTokens`, `temperature`), `logLevel`, `defaultSubtasks`, `defaultPriority`, `projectName`, etc.
- Managed using the `task-master models --setup` command or the `models` MCP tool.
- This is the main configuration file for most settings.
### Optional Configuration
2. **Environment Variables (`.env` File - API Keys Only)**
- Used **only** for sensitive **API Keys** (e.g., `ANTHROPIC_API_KEY`, `PERPLEXITY_API_KEY`).
- Create a `.env` file in your project root for CLI usage.
- See `assets/env.example` for required key names.
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
- `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium)
- `PROJECT_NAME`: Override default project name in tasks.json
- `PROJECT_VERSION`: Override default version in tasks.json
**Important:** Settings like `MODEL`, `MAX_TOKENS`, `TEMPERATURE`, `LOG_LEVEL`, etc., are **no longer set via `.env`**. Use `task-master models --setup` instead.
## How It Works
@@ -194,21 +189,14 @@ Notes:
- Can be combined with the `expand` command to immediately generate new subtasks
- Works with both parent tasks and individual subtasks
## AI Integration
## AI Integration (Updated)
The script integrates with two AI services:
1. **Anthropic Claude**: Used for parsing PRDs, generating tasks, and creating subtasks.
2. **Perplexity AI**: Used for research-backed subtask generation when the `--research` flag is specified.
The Perplexity integration uses the OpenAI client to connect to Perplexity's API, which provides enhanced research capabilities for generating more informed subtasks. If the Perplexity API is unavailable or encounters an error, the script will automatically fall back to using Anthropic's Claude.
To use the Perplexity integration:
1. Obtain a Perplexity API key
2. Add `PERPLEXITY_API_KEY` to your `.env` file
3. Optionally specify `PERPLEXITY_MODEL` in your `.env` file (default: "sonar-medium-online")
4. Use the `--research` flag with the `expand` command
- The script now uses a unified AI service layer (`ai-services-unified.js`).
- Model selection (e.g., Claude vs. Perplexity for `--research`) is determined by the configuration in `.taskmasterconfig` based on the requested `role` (`main` or `research`).
- API keys are automatically resolved from your `.env` file (for CLI) or MCP session environment.
- To use the research capabilities (e.g., `expand --research`), ensure you have:
1. Configured a model for the `research` role using `task-master models --setup` (Perplexity models are recommended).
2. Added the corresponding API key (e.g., `PERPLEXITY_API_KEY`) to your `.env` file.
## Logging

View File

@@ -1,257 +0,0 @@
# AI Client Utilities for MCP Tools
This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools.
## Basic Usage with Direct Functions
```javascript
// In your direct function implementation:
import {
getAnthropicClientForMCP,
getModelConfig,
handleClaudeError
} from '../utils/ai-client-utils.js';
export async function someAiOperationDirect(args, log, context) {
try {
// Initialize Anthropic client with session from context
const client = getAnthropicClientForMCP(context.session, log);
// Get model configuration with defaults or session overrides
const modelConfig = getModelConfig(context.session);
// Make API call with proper error handling
try {
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature,
messages: [{ role: 'user', content: 'Your prompt here' }]
});
return {
success: true,
data: response
};
} catch (apiError) {
// Use helper to get user-friendly error message
const friendlyMessage = handleClaudeError(apiError);
return {
success: false,
error: {
code: 'AI_API_ERROR',
message: friendlyMessage
}
};
}
} catch (error) {
// Handle client initialization errors
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: error.message
}
};
}
}
```
## Integration with AsyncOperationManager
```javascript
// In your MCP tool implementation:
import {
AsyncOperationManager,
StatusCodes
} from '../../utils/async-operation-manager.js';
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';
export async function someAiOperation(args, context) {
const { session, mcpLog } = context;
const log = mcpLog || console;
try {
// Create operation description
const operationDescription = `AI operation: ${args.someParam}`;
// Start async operation
const operation = AsyncOperationManager.createOperation(
operationDescription,
async (reportProgress) => {
try {
// Initial progress report
reportProgress({
progress: 0,
status: 'Starting AI operation...'
});
// Call direct function with session and progress reporting
const result = await someAiOperationDirect(args, log, {
reportProgress,
mcpLog: log,
session
});
// Final progress update
reportProgress({
progress: 100,
status: result.success ? 'Operation completed' : 'Operation failed',
result: result.data,
error: result.error
});
return result;
} catch (error) {
// Handle errors in the operation
reportProgress({
progress: 100,
status: 'Operation failed',
error: {
message: error.message,
code: error.code || 'OPERATION_FAILED'
}
});
throw error;
}
}
);
// Return immediate response with operation ID
return {
status: StatusCodes.ACCEPTED,
body: {
success: true,
message: 'Operation started',
operationId: operation.id
}
};
} catch (error) {
// Handle errors in the MCP tool
log.error(`Error in someAiOperation: ${error.message}`);
return {
status: StatusCodes.INTERNAL_SERVER_ERROR,
body: {
success: false,
error: {
code: 'OPERATION_FAILED',
message: error.message
}
}
};
}
}
```
## Using Research Capabilities with Perplexity
```javascript
// In your direct function:
import {
getPerplexityClientForMCP,
getBestAvailableAIModel
} from '../utils/ai-client-utils.js';
export async function researchOperationDirect(args, log, context) {
try {
// Get the best AI model for this operation based on needs
const { type, client } = await getBestAvailableAIModel(
context.session,
{ requiresResearch: true },
log
);
// Report which model we're using
if (context.reportProgress) {
await context.reportProgress({
progress: 10,
status: `Using ${type} model for research...`
});
}
// Make API call based on the model type
if (type === 'perplexity') {
// Call Perplexity
const response = await client.chat.completions.create({
model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
messages: [{ role: 'user', content: args.researchQuery }],
temperature: 0.1
});
return {
success: true,
data: response.choices[0].message.content
};
} else {
// Call Claude as fallback
// (Implementation depends on specific needs)
// ...
}
} catch (error) {
// Handle errors
return {
success: false,
error: {
code: 'RESEARCH_ERROR',
message: error.message
}
};
}
}
```
## Model Configuration Override Example
```javascript
// In your direct function:
import { getModelConfig } from '../utils/ai-client-utils.js';
// Using custom defaults for a specific operation
const operationDefaults = {
model: 'claude-3-haiku-20240307', // Faster, smaller model
maxTokens: 1000, // Lower token limit
temperature: 0.2 // Lower temperature for more deterministic output
};
// Get model config with operation-specific defaults
const modelConfig = getModelConfig(context.session, operationDefaults);
// Now use modelConfig in your API calls
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature
// Other parameters...
});
```
## Best Practices
1. **Error Handling**:
- Always use try/catch blocks around both client initialization and API calls
- Use `handleClaudeError` to provide user-friendly error messages
- Return standardized error objects with code and message
2. **Progress Reporting**:
- Report progress at key points (starting, processing, completing)
- Include meaningful status messages
- Include error details in progress reports when failures occur
3. **Session Handling**:
- Always pass the session from the context to the AI client getters
- Use `getModelConfig` to respect user settings from session
4. **Model Selection**:
- Use `getBestAvailableAIModel` when you need to select between different models
- Set `requiresResearch: true` when you need Perplexity capabilities
5. **AsyncOperationManager Integration**:
- Create descriptive operation names
- Handle all errors within the operation function
- Return standardized results from direct functions
- Return immediate responses with operation IDs

View File

@@ -52,6 +52,9 @@ task-master show 1.2
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
# Update tasks using research role
task-master update --from=<id> --prompt="<prompt>" --research
```
## Update a Specific Task
@@ -60,7 +63,7 @@ task-master update --from=<id> --prompt="<prompt>"
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"
# Use research-backed updates with Perplexity AI
# Use research-backed updates
task-master update-task --id=<id> --prompt="<prompt>" --research
```
@@ -73,7 +76,7 @@ task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
# Use research-backed updates with Perplexity AI
# Use research-backed updates
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```
@@ -187,9 +190,12 @@ task-master fix-dependencies
## Add a New Task
```bash
# Add a new task using AI
# Add a new task using AI (main role)
task-master add-task --prompt="Description of the new task"
# Add a new task using AI (research role)
task-master add-task --prompt="Description of the new task" --research
# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3
@@ -203,3 +209,30 @@ task-master add-task --prompt="Description" --priority=high
# Initialize a new project with Task Master structure
task-master init
```
## Configure AI Models
```bash
# View current AI model configuration and API key status
task-master models
# Set the primary model for generation/updates (provider inferred if known)
task-master models --set-main=claude-3-opus-20240229
# Set the research model
task-master models --set-research=sonar-pro
# Set the fallback model
task-master models --set-fallback=claude-3-haiku-20240307
# Set a custom Ollama model for the main role
task-master models --set-main=my-local-llama --ollama
# Set a custom OpenRouter model for the research role
task-master models --set-research=google/gemini-pro --openrouter
# Run interactive setup to configure models, including custom ones
task-master models --setup
```
Configuration is stored in `.taskmasterconfig` in your project root. API keys are still managed via `.env` or MCP configuration. Use `task-master models` without flags to see available built-in models. Use `--setup` for a guided experience.

View File

@@ -1,53 +1,89 @@
# Configuration
Task Master can be configured through environment variables in a `.env` file at the root of your project.
Taskmaster uses two primary methods for configuration:
## Required Configuration
1. **`.taskmasterconfig` File (Project Root - Recommended for most settings)**
- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude (Example: `ANTHROPIC_API_KEY=sk-ant-api03-...`)
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
    -   **Location:** This file is created in the root directory of your project when you run the `task-master models --setup` interactive setup, typically as part of the initialization sequence. Do not edit it manually beyond, if needed, adjusting `maxTokens` and `temperature` for your chosen models.
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
- **Example Structure:**
```json
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Your Project Name",
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}
```
## Optional Configuration
2. **Environment Variables (`.env` file or MCP `env` block - For API Keys Only)**
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
- For CLI usage: Create a `.env` file in your project root.
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key.
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your xAI API key.
- **Optional Endpoint Overrides** (settable via environment variables or in the `global` section of `.taskmasterconfig`):
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key.
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- `MODEL` (Default: `"claude-3-7-sonnet-20250219"`): Claude model to use (Example: `MODEL=claude-3-opus-20240229`)
- `MAX_TOKENS` (Default: `"4000"`): Maximum tokens for responses (Example: `MAX_TOKENS=8000`)
- `TEMPERATURE` (Default: `"0.7"`): Temperature for model responses (Example: `TEMPERATURE=0.5`)
- `DEBUG` (Default: `"false"`): Enable debug logging (Example: `DEBUG=true`)
- `LOG_LEVEL` (Default: `"info"`): Console output level (Example: `LOG_LEVEL=debug`)
- `DEFAULT_SUBTASKS` (Default: `"3"`): Default subtask count (Example: `DEFAULT_SUBTASKS=5`)
- `DEFAULT_PRIORITY` (Default: `"medium"`): Default priority (Example: `DEFAULT_PRIORITY=high`)
- `PROJECT_NAME` (Default: `"MCP SaaS MVP"`): Project name in metadata (Example: `PROJECT_NAME=My Awesome Project`)
- `PROJECT_VERSION` (Default: `"1.0.0"`): Version in metadata (Example: `PROJECT_VERSION=2.1.0`)
- `PERPLEXITY_API_KEY`: For research-backed features (Example: `PERPLEXITY_API_KEY=pplx-...`)
- `PERPLEXITY_MODEL` (Default: `"sonar-medium-online"`): Perplexity model (Example: `PERPLEXITY_MODEL=sonar-large-online`)
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmasterconfig`**, not environment variables.
## Example .env File
## Example `.env` File (for API Keys)
```
# Required
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
# Required API keys for providers configured in .taskmasterconfig
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# etc.
# Optional - Claude Configuration
MODEL=claude-3-7-sonnet-20250219
MAX_TOKENS=4000
TEMPERATURE=0.7
# Optional - Perplexity API for Research
PERPLEXITY_API_KEY=pplx-your-api-key
PERPLEXITY_MODEL=sonar-medium-online
# Optional - Project Info
PROJECT_NAME=My Project
PROJECT_VERSION=1.0.0
# Optional - Application Configuration
DEFAULT_SUBTASKS=3
DEFAULT_PRIORITY=medium
DEBUG=false
LOG_LEVEL=info
# Optional Endpoint Overrides
# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
```
## Troubleshooting
### Configuration Errors
- If Task Master reports errors about missing configuration or cannot find `.taskmasterconfig`, run `task-master models --setup` in your project root to create or repair the file (see the commands below).
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in `.taskmasterconfig`.
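For example, to verify and repair the configuration:
```bash
# Inspect the current configuration and API key status (.env and mcp.json)
task-master models

# Recreate or repair .taskmasterconfig interactively
task-master models --setup
```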
### If `task-master init` doesn't respond:
Try running it with Node directly:

View File

@@ -51,3 +51,33 @@ Can you analyze the complexity of our tasks to help me understand which ones nee
```
Can you show me the complexity report in a more readable format?
```
### Breaking Down Complex Tasks
```
Task 5 seems complex. Can you break it down into subtasks?
```
(Agent runs: `task-master expand --id=5`)
```
Please break down task 5 using research-backed generation.
```
(Agent runs: `task-master expand --id=5 --research`)
### Updating Tasks with Research
```
We need to update task 15 based on the latest React Query v5 changes. Can you research this and update the task?
```
(Agent runs: `task-master update-task --id=15 --prompt="Update based on React Query v5 changes" --research`)
### Adding Tasks with Research
```
Please add a new task to implement user profile image uploads using Cloudinary, research the best approach.
```
(Agent runs: `task-master add-task --prompt="Implement user profile image uploads using Cloudinary" --research`)

View File

@@ -10,7 +10,13 @@ There are two ways to set up Task Master: using MCP (recommended) or via npm ins
MCP (Model Context Protocol) provides the easiest way to get started with Task Master directly in your editor.
1. **Add the MCP config to your editor** (Cursor recommended, but it works with other text editors):
1. **Install the package**
```bash
npm i -g task-master-ai
```
2. **Add the MCP config to your IDE/MCP Client** (Cursor is recommended, but it works with other clients):
```json
{
@@ -21,21 +27,28 @@ MCP (Model Control Protocol) provides the easiest way to get started with Task M
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 64000,
"TEMPERATURE": 0.2,
"DEFAULT_SUBTASKS": 5,
"DEFAULT_PRIORITY": "medium"
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
}
}
}
}
```
2. **Enable the MCP** in your editor settings
**IMPORTANT:** An API key is _required_ for each AI provider you plan on using. Run the `task-master models` command to see your selected models and the status of your API keys across `.env` and `mcp.json`.
3. **Prompt the AI** to initialize Task Master:
**To use AI commands in the CLI**, you MUST have API keys in the `.env` file.
**To use AI commands via MCP**, you MUST have API keys in the `mcp.json` file (or your MCP client's config equivalent).
We recommend keeping keys in both places and adding `mcp.json` to your `.gitignore` so your API keys aren't checked into git.
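For example, assuming your MCP config lives at `.cursor/mcp.json` (the path varies by client), the ignore entries might look like:
```
# .gitignore (illustrative)
.env
.cursor/mcp.json
```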
3. **Enable the MCP** in your editor settings
4. **Prompt the AI** to initialize Task Master:
```
Can you please initialize taskmaster-ai into my project?
@@ -47,9 +60,9 @@ The AI will:
- Set up initial configuration files
- Guide you through the rest of the process
4. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
5. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
5. **Use natural language commands** to interact with Task Master:
6. **Use natural language commands** to interact with Task Master:
```
Can you parse my PRD at scripts/prd.txt?
@@ -241,13 +254,16 @@ If during implementation, you discover that:
Tell the agent:
```
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks (from ID 4) to reflect this change?
```
The agent will execute:
```bash
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
task-master update --from=4 --prompt="Now we are using MongoDB instead of PostgreSQL."
# OR, if research is needed to find best practices for MongoDB:
task-master update --from=4 --prompt="Update to use MongoDB, researching best practices" --research
```
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
@@ -290,7 +306,7 @@ The agent will execute:
task-master expand --all
```
For research-backed subtask generation using Perplexity AI:
For research-backed subtask generation using the configured research model:
```
Please break down task 5 using research-backed generation.

View File

@@ -15,11 +15,7 @@ export default {
roots: ['<rootDir>/tests'],
// The glob patterns Jest uses to detect test files
testMatch: [
'**/__tests__/**/*.js',
'**/?(*.)+(spec|test).js',
'**/tests/*.test.js'
],
testMatch: ['**/__tests__/**/*.js', '**/?(*.)+(spec|test).js'],
// Transform files
transform: {},

View File

@@ -8,15 +8,7 @@ import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import {
getAnthropicClientForMCP,
getModelConfig
} from '../utils/ai-client-utils.js';
import {
_buildAddTaskPrompt,
parseTaskJsonResponse,
_handleAnthropicStream
} from '../../../../scripts/modules/ai-services.js';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Direct function wrapper for adding a new task with error handling.
@@ -29,20 +21,24 @@ import {
* @param {string} [args.testStrategy] - Test strategy (for manual task creation)
* @param {string} [args.dependencies] - Comma-separated list of task IDs this task depends on
* @param {string} [args.priority='medium'] - Task priority (high, medium, low)
* @param {string} [args.file='tasks/tasks.json'] - Path to the tasks file
* @param {string} [args.projectRoot] - Project root directory
* @param {string} [args.tasksJsonPath] - Path to the tasks.json file (resolved by tool)
* @param {boolean} [args.research=false] - Whether to use research capabilities for task creation
* @param {Object} log - Logger object
* @param {Object} context - Additional context (reportProgress, session)
* @param {Object} context - Additional context (session)
* @returns {Promise<Object>} - Result object { success: boolean, data?: any, error?: { code: string, message: string } }
*/
export async function addTaskDirect(args, log, context = {}) {
// Destructure expected args
// Destructure expected args (including research)
const { tasksJsonPath, prompt, dependencies, priority, research } = args;
try {
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
const { session } = context; // Destructure session from context
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
// Create logger wrapper using the utility
const mcpLog = createLogWrapper(log);
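// For reference, `createLogWrapper` is assumed to adapt the FastMCP logger to the
// `mcpLog` interface expected by the core functions, along the lines of the inline
// wrappers this PR replaces (a sketch, not the actual implementation):
//   const createLogWrapper = (log) => ({
//     info: (msg, ...args) => log.info(msg, ...args),
//     warn: (msg, ...args) => log.warn(msg, ...args),
//     error: (msg, ...args) => log.error(msg, ...args),
//     debug: (msg, ...args) => log.debug && log.debug(msg, ...args),
//     success: (msg, ...args) => log.info(msg, ...args) // map success to info
//   });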
try {
// Check if tasksJsonPath was provided
if (!tasksJsonPath) {
log.error('addTaskDirect called without tasksJsonPath');
@@ -79,20 +75,17 @@ export async function addTaskDirect(args, log, context = {}) {
}
// Extract and prepare parameters
const taskPrompt = prompt;
const taskDependencies = Array.isArray(dependencies)
? dependencies
: dependencies
? dependencies // Already an array if passed directly
: dependencies // Check if dependencies exist and are a string
? String(dependencies)
.split(',')
.map((id) => parseInt(id.trim(), 10))
: [];
const taskPriority = priority || 'medium';
// Extract context parameters for advanced functionality
const { session } = context;
.map((id) => parseInt(id.trim(), 10)) // Split, trim, and parse
: []; // Default to empty array if null/undefined
const taskPriority = priority || 'medium'; // Default priority
let manualTaskData = null;
let newTaskId;
if (isManualCreation) {
// Create manual task data object
@@ -108,150 +101,61 @@ export async function addTaskDirect(args, log, context = {}) {
);
// Call the addTask function with manual task data
const newTaskId = await addTask(
newTaskId = await addTask(
tasksPath,
null, // No prompt needed for manual creation
null, // prompt is null for manual creation
taskDependencies,
priority,
taskPriority,
{
mcpLog: log,
session
session,
mcpLog
},
'json', // Use JSON output format to prevent console output
null, // No custom environment
manualTaskData // Pass the manual task data
'json', // outputFormat
manualTaskData, // Pass the manual task data
false // research flag is false for manual creation
);
// Restore normal logging
disableSilentMode();
return {
success: true,
data: {
taskId: newTaskId,
message: `Successfully added new task #${newTaskId}`
}
};
} else {
// AI-driven task creation
log.info(
`Adding new task with prompt: "${prompt}", dependencies: [${taskDependencies.join(', ')}], priority: ${priority}`
`Adding new task with prompt: "${prompt}", dependencies: [${taskDependencies.join(', ')}], priority: ${taskPriority}, research: ${research}`
);
// Initialize AI client with session environment
let localAnthropic;
try {
localAnthropic = getAnthropicClientForMCP(session, log);
} catch (error) {
log.error(`Failed to initialize Anthropic client: ${error.message}`);
disableSilentMode();
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: `Cannot initialize AI client: ${error.message}`
}
};
}
// Get model configuration from session
const modelConfig = getModelConfig(session);
// Read existing tasks to provide context
let tasksData;
try {
const fs = await import('fs');
tasksData = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
} catch (error) {
log.warn(`Could not read existing tasks for context: ${error.message}`);
tasksData = { tasks: [] };
}
// Build prompts for AI
const { systemPrompt, userPrompt } = _buildAddTaskPrompt(
prompt,
tasksData.tasks
);
// Make the AI call using the streaming helper
let responseText;
try {
responseText = await _handleAnthropicStream(
localAnthropic,
{
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature,
messages: [{ role: 'user', content: userPrompt }],
system: systemPrompt
},
{
mcpLog: log
}
);
} catch (error) {
log.error(`AI processing failed: ${error.message}`);
disableSilentMode();
return {
success: false,
error: {
code: 'AI_PROCESSING_ERROR',
message: `Failed to generate task with AI: ${error.message}`
}
};
}
// Parse the AI response
let taskDataFromAI;
try {
taskDataFromAI = parseTaskJsonResponse(responseText);
} catch (error) {
log.error(`Failed to parse AI response: ${error.message}`);
disableSilentMode();
return {
success: false,
error: {
code: 'RESPONSE_PARSING_ERROR',
message: `Failed to parse AI response: ${error.message}`
}
};
}
// Call the addTask function with 'json' outputFormat to prevent console output when called via MCP
const newTaskId = await addTask(
// Call the addTask function, passing the research flag
newTaskId = await addTask(
tasksPath,
prompt,
prompt, // Use the prompt for AI creation
taskDependencies,
priority,
taskPriority,
{
mcpLog: log,
session
session,
mcpLog
},
'json',
null,
taskDataFromAI // Pass the parsed AI result as the manual task data
'json', // outputFormat
null, // manualTaskData is null for AI creation
research // Pass the research flag
);
// Restore normal logging
disableSilentMode();
return {
success: true,
data: {
taskId: newTaskId,
message: `Successfully added new task #${newTaskId}`
}
};
}
// Restore normal logging
disableSilentMode();
return {
success: true,
data: {
taskId: newTaskId,
message: `Successfully added new task #${newTaskId}`
}
};
} catch (error) {
// Make sure to restore normal logging even if there's an error
disableSilentMode();
log.error(`Error in addTaskDirect: ${error.message}`);
// Add specific error code checks if needed
return {
success: false,
error: {
code: 'ADD_TASK_ERROR',
code: error.code || 'ADD_TASK_ERROR', // Use error code if available
message: error.message
}
};

View File

@@ -2,37 +2,36 @@
* Direct function wrapper for analyzeTaskComplexity
*/
import { analyzeTaskComplexity } from '../../../../scripts/modules/task-manager.js';
import analyzeTaskComplexity from '../../../../scripts/modules/task-manager/analyze-task-complexity.js';
import {
enableSilentMode,
disableSilentMode,
isSilentMode,
readJSON
isSilentMode
} from '../../../../scripts/modules/utils.js';
import fs from 'fs';
import path from 'path';
import { createLogWrapper } from '../../tools/utils.js'; // Import the new utility
/**
* Analyze task complexity and generate recommendations
* @param {Object} args - Function arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {string} args.outputPath - Explicit absolute path to save the report.
* @param {string} [args.model] - LLM model to use for analysis
* @param {string|number} [args.threshold] - Minimum complexity score to recommend expansion (1-10)
* @param {boolean} [args.research] - Use Perplexity AI for research-backed complexity analysis
* @param {Object} log - Logger object
* @param {Object} [context={}] - Context object containing session data
* @param {Object} [context.session] - MCP session object
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
*/
export async function analyzeTaskComplexityDirect(args, log, context = {}) {
const { session } = context; // Only extract session, not reportProgress
const { session } = context; // Extract session
// Destructure expected args
const { tasksJsonPath, outputPath, model, threshold, research } = args;
const { tasksJsonPath, outputPath, model, threshold, research } = args; // Model is ignored by core function now
// --- Initial Checks (remain the same) ---
try {
log.info(`Analyzing task complexity with args: ${JSON.stringify(args)}`);
// Check if required paths were provided
if (!tasksJsonPath) {
log.error('analyzeTaskComplexityDirect called without tasksJsonPath');
return {
@@ -51,7 +50,6 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
};
}
// Use the provided paths
const tasksPath = tasksJsonPath;
const resolvedOutputPath = outputPath;
@@ -59,78 +57,91 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
log.info(`Output report will be saved to: ${resolvedOutputPath}`);
if (research) {
log.info('Using Perplexity AI for research-backed complexity analysis');
log.info('Using research role for complexity analysis');
}
// Create options object for analyzeTaskComplexity using provided paths
// Prepare options for the core function
const options = {
file: tasksPath,
output: resolvedOutputPath,
model: model,
// model: model, // No longer needed
threshold: threshold,
research: research === true
research: research === true // Ensure boolean
};
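// Note: the core function is expected to resolve with the report object directly,
// roughly { complexityAnalysis: [{ complexityScore, ... }, ...] }; field names
// beyond complexityAnalysis/complexityScore are assumptions based on the checks below.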
// --- End Initial Checks ---
// Enable silent mode to prevent console logs from interfering with JSON response
// --- Silent Mode and Logger Wrapper (remain the same) ---
const wasSilent = isSilentMode();
if (!wasSilent) {
enableSilentMode();
}
// Create a logWrapper that matches the expected mcpLog interface as specified in utilities.mdc
const logWrapper = {
info: (message, ...args) => log.info(message, ...args),
warn: (message, ...args) => log.warn(message, ...args),
error: (message, ...args) => log.error(message, ...args),
debug: (message, ...args) => log.debug && log.debug(message, ...args),
success: (message, ...args) => log.info(message, ...args) // Map success to info
};
// Create logger wrapper using the utility
const mcpLog = createLogWrapper(log);
let report; // To store the result from the core function
try {
// Call the core function with session and logWrapper as mcpLog
await analyzeTaskComplexity(options, {
session,
mcpLog: logWrapper // Use the wrapper instead of passing log directly
// --- Call Core Function (Updated Context Passing) ---
// Call the core function, passing options and the context object { session, mcpLog }
report = await analyzeTaskComplexity(options, {
session, // Pass the session object
mcpLog // Pass the logger wrapper
});
// --- End Core Function Call ---
} catch (error) {
log.error(`Error in analyzeTaskComplexity: ${error.message}`);
log.error(
`Error in analyzeTaskComplexity core function: ${error.message}`
);
// Restore logging if we changed it
if (!wasSilent && isSilentMode()) {
disableSilentMode();
}
return {
success: false,
error: {
code: 'ANALYZE_ERROR',
message: `Error running complexity analysis: ${error.message}`
code: 'ANALYZE_CORE_ERROR', // More specific error code
message: `Error running core complexity analysis: ${error.message}`
}
};
} finally {
// Always restore normal logging in finally block, but only if we enabled it
if (!wasSilent) {
// Always restore normal logging in finally block if we enabled silent mode
if (!wasSilent && isSilentMode()) {
disableSilentMode();
}
}
// Verify the report file was created
// --- Result Handling (remains largely the same) ---
// Verify the report file was created (core function writes it)
if (!fs.existsSync(resolvedOutputPath)) {
return {
success: false,
error: {
code: 'ANALYZE_ERROR',
message: 'Analysis completed but no report file was created'
code: 'ANALYZE_REPORT_MISSING', // Specific code
message:
'Analysis completed but no report file was created at the expected path.'
}
};
}
// The core function now returns the report object directly
if (!report || !report.complexityAnalysis) {
log.error(
'Core analyzeTaskComplexity function did not return a valid report object.'
);
return {
success: false,
error: {
code: 'INVALID_CORE_RESPONSE',
message: 'Core analysis function returned an invalid response.'
}
};
}
// Read the report file
let report;
try {
report = JSON.parse(fs.readFileSync(resolvedOutputPath, 'utf8'));
const analysisArray = report.complexityAnalysis; // Already an array
// Important: Handle different report formats
// The core function might return an array or an object with a complexityAnalysis property
const analysisArray = Array.isArray(report)
? report
: report.complexityAnalysis || [];
// Count tasks by complexity
// Count tasks by complexity (remains the same)
const highComplexityTasks = analysisArray.filter(
(t) => t.complexityScore >= 8
).length;
@@ -152,29 +163,33 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
mediumComplexityTasks,
lowComplexityTasks
}
// Include the full report data if needed by the client
// fullReport: report
}
};
} catch (parseError) {
log.error(`Error parsing report file: ${parseError.message}`);
// Should not happen if core function returns object, but good safety check
log.error(`Internal error processing report data: ${parseError.message}`);
return {
success: false,
error: {
code: 'REPORT_PARSE_ERROR',
message: `Error parsing complexity report: ${parseError.message}`
code: 'REPORT_PROCESS_ERROR',
message: `Internal error processing complexity report: ${parseError.message}`
}
};
}
// --- End Result Handling ---
} catch (error) {
// Make sure to restore normal logging even if there's an error
// Catch errors from initial checks or path resolution
// Make sure to restore normal logging if silent mode was enabled
if (isSilentMode()) {
disableSilentMode();
}
log.error(`Error in analyzeTaskComplexityDirect: ${error.message}`);
log.error(`Error in analyzeTaskComplexityDirect setup: ${error.message}`);
return {
success: false,
error: {
code: 'CORE_FUNCTION_ERROR',
code: 'DIRECT_FUNCTION_SETUP_ERROR',
message: error.message
}
};

View File

@@ -9,7 +9,6 @@ import {
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { getCachedOrExecute } from '../../tools/utils.js';
import path from 'path';
/**
* Direct function wrapper for displaying the complexity report with error handling and caching.

View File

@@ -5,138 +5,85 @@
import { expandAllTasks } from '../../../../scripts/modules/task-manager.js';
import {
enableSilentMode,
disableSilentMode,
isSilentMode
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { getAnthropicClientForMCP } from '../utils/ai-client-utils.js';
import path from 'path';
import fs from 'fs';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Expand all pending tasks with subtasks
* Expand all pending tasks with subtasks (Direct Function Wrapper)
* @param {Object} args - Function arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {number|string} [args.num] - Number of subtasks to generate
* @param {boolean} [args.research] - Enable Perplexity AI for research-backed subtask generation
* @param {boolean} [args.research] - Enable research-backed subtask generation
* @param {string} [args.prompt] - Additional context to guide subtask generation
* @param {boolean} [args.force] - Force regeneration of subtasks for tasks that already have them
* @param {Object} log - Logger object
* @param {Object} log - Logger object from FastMCP
* @param {Object} context - Context object containing session
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
*/
export async function expandAllTasksDirect(args, log, context = {}) {
const { session } = context; // Only extract session, not reportProgress
const { session } = context; // Extract session
// Destructure expected args
const { tasksJsonPath, num, research, prompt, force } = args;
try {
log.info(`Expanding all tasks with args: ${JSON.stringify(args)}`);
// Create logger wrapper using the utility
const mcpLog = createLogWrapper(log);
// Check if tasksJsonPath was provided
if (!tasksJsonPath) {
log.error('expandAllTasksDirect called without tasksJsonPath');
return {
success: false,
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
};
}
// Enable silent mode early to prevent any console output
enableSilentMode();
try {
// Remove internal path finding
/*
const tasksPath = findTasksJsonPath(args, log);
*/
// Use provided path
const tasksPath = tasksJsonPath;
// Parse parameters
const numSubtasks = num ? parseInt(num, 10) : undefined;
const useResearch = research === true;
const additionalContext = prompt || '';
const forceFlag = force === true;
log.info(
`Expanding all tasks with ${numSubtasks || 'default'} subtasks each...`
);
if (useResearch) {
log.info('Using Perplexity AI for research-backed subtask generation');
// Initialize AI client for research-backed expansion
try {
await getAnthropicClientForMCP(session, log);
} catch (error) {
// Ensure silent mode is disabled before returning error
disableSilentMode();
log.error(`Failed to initialize AI client: ${error.message}`);
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: `Cannot initialize AI client: ${error.message}`
}
};
}
}
if (additionalContext) {
log.info(`Additional context: "${additionalContext}"`);
}
if (forceFlag) {
log.info('Force regeneration of subtasks is enabled');
}
// Call the core function with session context for AI operations
// and outputFormat as 'json' to prevent UI elements
const result = await expandAllTasks(
tasksPath,
numSubtasks,
useResearch,
additionalContext,
forceFlag,
{ mcpLog: log, session },
'json' // Use JSON output format to prevent UI elements
);
// The expandAllTasks function now returns a result object
return {
success: true,
data: {
message: 'Successfully expanded all pending tasks with subtasks',
details: {
numSubtasks: numSubtasks,
research: useResearch,
prompt: additionalContext,
force: forceFlag,
tasksExpanded: result.expandedCount,
totalEligibleTasks: result.tasksToExpand
}
}
};
} finally {
// Restore normal logging in finally block to ensure it runs even if there's an error
disableSilentMode();
}
} catch (error) {
// Ensure silent mode is disabled if an error occurs
if (isSilentMode()) {
disableSilentMode();
}
log.error(`Error in expandAllTasksDirect: ${error.message}`);
if (!tasksJsonPath) {
log.error('expandAllTasksDirect called without tasksJsonPath');
return {
success: false,
error: {
code: 'CORE_FUNCTION_ERROR',
message: error.message
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
};
}
enableSilentMode(); // Enable silent mode for the core function call
try {
log.info(
`Calling core expandAllTasks with args: ${JSON.stringify({ num, research, prompt, force })}`
);
// Parse parameters (ensure correct types)
const numSubtasks = num ? parseInt(num, 10) : undefined;
const useResearch = research === true;
const additionalContext = prompt || '';
const forceFlag = force === true;
// Call the core function, passing options and the context object { session, mcpLog }
const result = await expandAllTasks(
tasksJsonPath,
numSubtasks,
useResearch,
additionalContext,
forceFlag,
{ session, mcpLog }
);
// Core function now returns a summary object
return {
success: true,
data: {
message: `Expand all operation completed. Expanded: ${result.expandedCount}, Failed: ${result.failedCount}, Skipped: ${result.skippedCount}`,
details: result // Include the full result details
}
};
} catch (error) {
// Log the error using the MCP logger
log.error(`Error during core expandAllTasks execution: ${error.message}`);
// Optionally log stack trace if available and debug enabled
// if (error.stack && log.debug) { log.debug(error.stack); }
return {
success: false,
error: {
code: 'CORE_FUNCTION_ERROR', // Or a more specific code if possible
message: error.message
}
};
} finally {
disableSilentMode(); // IMPORTANT: Ensure silent mode is always disabled
}
}

View File

@@ -3,7 +3,7 @@
* Direct function implementation for expanding a task into subtasks
*/
import { expandTask } from '../../../../scripts/modules/task-manager.js';
import expandTask from '../../../../scripts/modules/task-manager/expand-task.js';
import {
readJSON,
writeJSON,
@@ -11,12 +11,9 @@ import {
disableSilentMode,
isSilentMode
} from '../../../../scripts/modules/utils.js';
import {
getAnthropicClientForMCP,
getModelConfig
} from '../utils/ai-client-utils.js';
import path from 'path';
import fs from 'fs';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Direct function wrapper for expanding a task into subtasks with error handling.
@@ -25,15 +22,16 @@ import fs from 'fs';
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {string} args.id - The ID of the task to expand.
* @param {number|string} [args.num] - Number of subtasks to generate.
* @param {boolean} [args.research] - Enable Perplexity AI for research-backed subtask generation.
* @param {boolean} [args.research] - Enable research role for subtask generation.
* @param {string} [args.prompt] - Additional context to guide subtask generation.
* @param {boolean} [args.force] - Force expansion even if subtasks exist.
* @param {Object} log - Logger object
* @param {Object} context - Context object containing session and reportProgress
* @param {Object} context - Context object containing session
* @param {Object} [context.session] - MCP Session object
* @returns {Promise<Object>} - Task expansion result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }
*/
export async function expandTaskDirect(args, log, context = {}) {
const { session } = context;
const { session } = context; // Extract session
// Destructure expected args
const { tasksJsonPath, id, num, research, prompt, force } = args;
@@ -85,28 +83,9 @@ export async function expandTaskDirect(args, log, context = {}) {
const additionalContext = prompt || '';
const forceFlag = force === true;
// Initialize AI client if needed (for expandTask function)
try {
// This ensures the AI client is available by checking it
if (useResearch) {
log.info('Verifying AI client for research-backed expansion');
await getAnthropicClientForMCP(session, log);
}
} catch (error) {
log.error(`Failed to initialize AI client: ${error.message}`);
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: `Cannot initialize AI client: ${error.message}`
},
fromCache: false
};
}
try {
log.info(
`[expandTaskDirect] Expanding task ${taskId} into ${numSubtasks || 'default'} subtasks. Research: ${useResearch}`
`[expandTaskDirect] Expanding task ${taskId} into ${numSubtasks || 'default'} subtasks. Research: ${useResearch}, Force: ${forceFlag}`
);
// Read tasks data
@@ -202,23 +181,27 @@ export async function expandTaskDirect(args, log, context = {}) {
// Save tasks.json with potentially empty subtasks array
writeJSON(tasksPath, data);
// Create logger wrapper using the utility
const mcpLog = createLogWrapper(log);
// Process the request
try {
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
const wasSilent = isSilentMode();
if (!wasSilent) enableSilentMode();
// Call expandTask with session context to ensure AI client is properly initialized
// Call the core expandTask function with the wrapped logger
const result = await expandTask(
tasksPath,
taskId,
numSubtasks,
useResearch,
additionalContext,
{ mcpLog: log, session } // Only pass mcpLog and session, NOT reportProgress
{ mcpLog, session }
);
// Restore normal logging
disableSilentMode();
if (!wasSilent && isSilentMode()) disableSilentMode();
// Read the updated data
const updatedData = readJSON(tasksPath);
@@ -244,7 +227,7 @@ export async function expandTaskDirect(args, log, context = {}) {
};
} catch (error) {
// Make sure to restore normal logging even if there's an error
disableSilentMode();
if (!wasSilent && isSilentMode()) disableSilentMode();
log.error(`Error expanding task: ${error.message}`);
return {

View File

@@ -8,7 +8,6 @@ import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import path from 'path';
/**
* Direct function wrapper for generateTaskFiles with error handling.

View File

@@ -0,0 +1,121 @@
/**
* models.js
* Direct function for managing AI model configurations via MCP
*/
import {
getModelConfiguration,
getAvailableModelsList,
setModel
} from '../../../../scripts/modules/task-manager/models.js';
import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Get or update model configuration
* @param {Object} args - Arguments passed by the MCP tool
* @param {Object} log - MCP logger
* @param {Object} context - MCP context (contains session)
* @returns {Object} Result object with success, data/error fields
*/
export async function modelsDirect(args, log, context = {}) {
const { session } = context;
const { projectRoot } = args; // Extract projectRoot from args
// Create a logger wrapper that the core functions can use
const mcpLog = createLogWrapper(log);
log.info(`Executing models_direct with args: ${JSON.stringify(args)}`);
log.info(`Using project root: ${projectRoot}`);
// Validate flags: cannot use both openrouter and ollama simultaneously
if (args.openrouter && args.ollama) {
log.error(
'Error: Cannot use both openrouter and ollama flags simultaneously.'
);
return {
success: false,
error: {
code: 'INVALID_ARGS',
message: 'Cannot use both openrouter and ollama flags simultaneously.'
}
};
}
try {
enableSilentMode();
try {
// Check for the listAvailableModels flag
if (args.listAvailableModels === true) {
return await getAvailableModelsList({
session,
mcpLog,
projectRoot // Pass projectRoot to function
});
}
// Handle setting a specific model
if (args.setMain) {
return await setModel('main', args.setMain, {
session,
mcpLog,
projectRoot, // Pass projectRoot to function
providerHint: args.openrouter
? 'openrouter'
: args.ollama
? 'ollama'
: undefined // Pass hint
});
}
if (args.setResearch) {
return await setModel('research', args.setResearch, {
session,
mcpLog,
projectRoot, // Pass projectRoot to function
providerHint: args.openrouter
? 'openrouter'
: args.ollama
? 'ollama'
: undefined // Pass hint
});
}
if (args.setFallback) {
return await setModel('fallback', args.setFallback, {
session,
mcpLog,
projectRoot, // Pass projectRoot to function
providerHint: args.openrouter
? 'openrouter'
: args.ollama
? 'ollama'
: undefined // Pass hint
});
}
// Default action: get current configuration
return await getModelConfiguration({
session,
mcpLog,
projectRoot // Pass projectRoot to function
});
} finally {
disableSilentMode();
}
} catch (error) {
log.error(`Error in models_direct: ${error.message}`);
return {
success: false,
error: {
code: 'DIRECT_FUNCTION_ERROR',
message: error.message,
details: error.stack
}
};
}
}
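// Illustrative (hypothetical) invocation from an MCP tool handler:
//   const result = await modelsDirect(
//     { projectRoot: '/abs/project/root', setMain: 'claude-3-7-sonnet-20250219' },
//     log,
//     { session }
//   );
//   // => { success: true, data: { ... } } on success,
//   //    or { success: false, error: { code, message } } on failure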

View File

@@ -10,41 +10,22 @@ import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import {
getAnthropicClientForMCP,
getModelConfig
} from '../utils/ai-client-utils.js';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Direct function wrapper for parsing PRD documents and generating tasks.
*
* @param {Object} args - Command arguments containing input, numTasks or tasks, and output options.
* @param {Object} args - Command arguments containing projectRoot, input, output, numTasks options.
* @param {Object} log - Logger object.
* @param {Object} context - Context object containing session data.
* @returns {Promise<Object>} - Result object with success status and data/error information.
*/
export async function parsePRDDirect(args, log, context = {}) {
const { session } = context; // Only extract session, not reportProgress
const { session } = context; // Only extract session
try {
log.info(`Parsing PRD document with args: ${JSON.stringify(args)}`);
// Initialize AI client for PRD parsing
let aiClient;
try {
aiClient = getAnthropicClientForMCP(session, log);
} catch (error) {
log.error(`Failed to initialize AI client: ${error.message}`);
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: `Cannot initialize AI client: ${error.message}`
},
fromCache: false
};
}
// Validate required parameters
if (!args.projectRoot) {
const errorMessage = 'Project root is required for parsePRDDirect';
@@ -55,7 +36,6 @@ export async function parsePRDDirect(args, log, context = {}) {
fromCache: false
};
}
if (!args.input) {
const errorMessage = 'Input file path is required for parsePRDDirect';
log.error(errorMessage);
@@ -65,7 +45,6 @@ export async function parsePRDDirect(args, log, context = {}) {
fromCache: false
};
}
if (!args.output) {
const errorMessage = 'Output file path is required for parsePRDDirect';
log.error(errorMessage);
@@ -130,17 +109,14 @@ export async function parsePRDDirect(args, log, context = {}) {
`Preparing to parse PRD from ${inputPath} and output to ${outputPath} with ${numTasks} tasks, append mode: ${append}`
);
// Create the logger wrapper for proper logging in the core function
const logWrapper = {
info: (message, ...args) => log.info(message, ...args),
warn: (message, ...args) => log.warn(message, ...args),
error: (message, ...args) => log.error(message, ...args),
debug: (message, ...args) => log.debug && log.debug(message, ...args),
success: (message, ...args) => log.info(message, ...args) // Map success to info
};
// --- Logger Wrapper ---
const mcpLog = createLogWrapper(log);
// Get model config from session
const modelConfig = getModelConfig(session);
// Prepare options for the core function
const options = {
mcpLog,
session
};
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
@@ -153,7 +129,7 @@ export async function parsePRDDirect(args, log, context = {}) {
}
// Execute core parsePRD function with AI client
await parsePRD(
const tasksDataResult = await parsePRD(
inputPath,
outputPath,
numTasks,
@@ -173,13 +149,19 @@ export async function parsePRDDirect(args, log, context = {}) {
const actionVerb = append ? 'appended' : 'generated';
const message = `Successfully ${actionVerb} ${tasksData.tasks?.length || 0} tasks from PRD`;
if (!tasksDataResult || !tasksDataResult.tasks || !tasksData) {
throw new Error(
'Core parsePRD function did not return valid task data.'
);
}
log.info(message);
return {
success: true,
data: {
message,
taskCount: tasksData.tasks?.length || 0,
taskCount: tasksDataResult.tasks?.length || 0,
outputPath,
appended: append
},
@@ -206,7 +188,7 @@ export async function parsePRDDirect(args, log, context = {}) {
return {
success: false,
error: {
code: 'PARSE_PRD_ERROR',
code: error.code || 'PARSE_PRD_ERROR', // Use error code if available
message: error.message || 'Unknown error parsing PRD'
},
fromCache: false

View File

@@ -17,12 +17,13 @@ import {
* @param {Object} args - Command arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {string} args.id - The ID of the task or subtask to show.
* @param {string} [args.status] - Optional status to filter subtasks by.
* @param {Object} log - Logger object
* @returns {Promise<Object>} - Task details result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }
*/
export async function showTaskDirect(args, log) {
// Destructure expected args
const { tasksJsonPath, id } = args;
const { tasksJsonPath, id, status } = args;
if (!tasksJsonPath) {
log.error('showTaskDirect called without tasksJsonPath');
@@ -50,8 +51,8 @@ export async function showTaskDirect(args, log) {
};
}
// Generate cache key using the provided task path and ID
const cacheKey = `showTask:${tasksJsonPath}:${taskId}`;
// Generate cache key using the provided task path, ID, and status filter
const cacheKey = `showTask:${tasksJsonPath}:${taskId}:${status || 'all'}`;
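// e.g. 'showTask:/abs/path/tasks/tasks.json:5:pending', or ':5:all' when no status filter is given (path/ID illustrative)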
// Define the action function to be executed on cache miss
const coreShowTaskAction = async () => {
@@ -60,7 +61,7 @@ export async function showTaskDirect(args, log) {
enableSilentMode();
log.info(
`Retrieving task details for ID: ${taskId} from ${tasksJsonPath}`
`Retrieving task details for ID: ${taskId} from ${tasksJsonPath}${status ? ` (filtering by status: ${status})` : ''}`
);
// Read tasks data using the provided path
@@ -76,8 +77,12 @@ export async function showTaskDirect(args, log) {
};
}
// Find the specific task
const task = findTaskById(data.tasks, taskId);
// Find the specific task, passing the status filter
const { task, originalSubtaskCount } = findTaskById(
data.tasks,
taskId,
status
);
if (!task) {
disableSilentMode(); // Disable before returning
@@ -85,7 +90,7 @@ export async function showTaskDirect(args, log) {
success: false,
error: {
code: 'TASK_NOT_FOUND',
message: `Task with ID ${taskId} not found`
message: `Task with ID ${taskId} not found${status ? ` or no subtasks match status '${status}'` : ''}`
}
};
}
@@ -93,13 +98,16 @@ export async function showTaskDirect(args, log) {
// Restore normal logging
disableSilentMode();
// Return the task data with the full tasks array for reference
// (needed for formatDependenciesWithStatus function in UI)
log.info(`Successfully found task ${taskId}`);
// Return the task data, the original subtask count (if applicable),
// and the full tasks array for reference (needed for formatDependenciesWithStatus function in UI)
log.info(
`Successfully found task ${taskId}${status ? ` (with status filter: ${status})` : ''}`
);
return {
success: true,
data: {
task,
originalSubtaskCount,
allTasks: data.tasks
}
};

View File

@@ -8,10 +8,7 @@ import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import {
getAnthropicClientForMCP,
getPerplexityClientForMCP
} from '../utils/ai-client-utils.js';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Direct function wrapper for updateSubtaskById with error handling.
@@ -95,40 +92,12 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
`Updating subtask with ID ${subtaskIdStr} with prompt "${prompt}" and research: ${useResearch}`
);
// Initialize the appropriate AI client based on research flag
try {
if (useResearch) {
// Initialize Perplexity client
await getPerplexityClientForMCP(session);
} else {
// Initialize Anthropic client
await getAnthropicClientForMCP(session);
}
} catch (error) {
log.error(`AI client initialization error: ${error.message}`);
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: error.message || 'Failed to initialize AI client'
},
fromCache: false
};
}
try {
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
// Create a logger wrapper object to handle logging without breaking the mcpLog[level] calls
// This ensures outputFormat is set to 'json' while still supporting proper logging
const logWrapper = {
info: (message) => log.info(message),
warn: (message) => log.warn(message),
error: (message) => log.error(message),
debug: (message) => log.debug && log.debug(message),
success: (message) => log.info(message) // Map success to info if needed
};
// Create the logger wrapper using the utility function
const mcpLog = createLogWrapper(log);
// Execute core updateSubtaskById function
// Pass both session and logWrapper as mcpLog to ensure outputFormat is 'json'
@@ -139,7 +108,7 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
useResearch,
{
session,
mcpLog: logWrapper
mcpLog
}
);

View File

@@ -8,10 +8,7 @@ import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import {
getAnthropicClientForMCP,
getPerplexityClientForMCP
} from '../utils/ai-client-utils.js';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Direct function wrapper for updateTaskById with error handling.
@@ -92,28 +89,6 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
// Get research flag
const useResearch = research === true;
// Initialize appropriate AI client based on research flag
let aiClient;
try {
if (useResearch) {
log.info('Using Perplexity AI for research-backed task update');
aiClient = await getPerplexityClientForMCP(session, log);
} else {
log.info('Using Claude AI for task update');
aiClient = getAnthropicClientForMCP(session, log);
}
} catch (error) {
log.error(`Failed to initialize AI client: ${error.message}`);
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: `Cannot initialize AI client: ${error.message}`
},
fromCache: false
};
}
log.info(
`Updating task with ID ${taskId} with prompt "${prompt}" and research: ${useResearch}`
);
@@ -122,14 +97,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
// Create a logger wrapper that matches what updateTaskById expects
const logWrapper = {
info: (message) => log.info(message),
warn: (message) => log.warn(message),
error: (message) => log.error(message),
debug: (message) => log.debug && log.debug(message),
success: (message) => log.info(message) // Map success to info since many loggers don't have success
};
// Create the logger wrapper using the utility function
const mcpLog = createLogWrapper(log);
// Execute core updateTaskById function with proper parameters
await updateTaskById(
@@ -138,7 +107,7 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
prompt,
useResearch,
{
mcpLog: logWrapper, // Use our wrapper object that has the expected method structure
mcpLog, // Pass the wrapped logger
session
},
'json'

View File

@@ -8,180 +8,123 @@ import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import {
getAnthropicClientForMCP,
getPerplexityClientForMCP
} from '../utils/ai-client-utils.js';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Direct function wrapper for updating tasks based on new context/prompt.
*
* @param {Object} args - Command arguments containing fromId, prompt, useResearch and tasksJsonPath.
* @param {Object} args - Command arguments containing from, prompt, research and tasksJsonPath.
* @param {Object} log - Logger object.
* @param {Object} context - Context object containing session data.
* @returns {Promise<Object>} - Result object with success status and data/error information.
*/
export async function updateTasksDirect(args, log, context = {}) {
const { session } = context; // Only extract session, not reportProgress
const { session } = context; // Extract session
const { tasksJsonPath, from, prompt, research } = args;
try {
log.info(`Updating tasks with args: ${JSON.stringify(args)}`);
// Create the standard logger wrapper
const logWrapper = {
info: (message, ...args) => log.info(message, ...args),
warn: (message, ...args) => log.warn(message, ...args),
error: (message, ...args) => log.error(message, ...args),
debug: (message, ...args) => log.debug && log.debug(message, ...args),
success: (message, ...args) => log.info(message, ...args)
};
// Check if tasksJsonPath was provided
if (!tasksJsonPath) {
const errorMessage = 'tasksJsonPath is required but was not provided.';
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
// Check for the common mistake of using 'id' instead of 'from'
if (args.id !== undefined && from === undefined) {
const errorMessage =
"You specified 'id' parameter but 'update' requires 'from' parameter. Use 'from' for this tool or use 'update_task' tool if you want to update a single task.";
log.error(errorMessage);
return {
success: false,
error: {
code: 'PARAMETER_MISMATCH',
message: errorMessage,
suggestion:
"Use 'from' parameter instead of 'id', or use the 'update_task' tool for single task updates"
},
fromCache: false
};
}
// Check required parameters
if (!from) {
const errorMessage =
'No from ID specified. Please provide a task ID to start updating from.';
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_FROM_ID', message: errorMessage },
fromCache: false
};
}
if (!prompt) {
const errorMessage =
'No prompt specified. Please provide a prompt with new context for task updates.';
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_PROMPT', message: errorMessage },
fromCache: false
};
}
// Parse fromId - handle both string and number values
let fromId;
if (typeof from === 'string') {
fromId = parseInt(from, 10);
if (isNaN(fromId)) {
const errorMessage = `Invalid from ID: ${from}. Task ID must be a positive integer.`;
log.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_FROM_ID', message: errorMessage },
fromCache: false
};
}
} else {
fromId = from;
}
// Get research flag
const useResearch = research === true;
// Initialize appropriate AI client based on research flag
let aiClient;
try {
if (useResearch) {
log.info('Using Perplexity AI for research-backed task updates');
aiClient = await getPerplexityClientForMCP(session, log);
} else {
log.info('Using Claude AI for task updates');
aiClient = getAnthropicClientForMCP(session, log);
}
} catch (error) {
log.error(`Failed to initialize AI client: ${error.message}`);
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: `Cannot initialize AI client: ${error.message}`
},
fromCache: false
};
}
log.info(
`Updating tasks from ID ${fromId} with prompt "${prompt}" and research: ${useResearch}`
);
// Create the logger wrapper to ensure compatibility with core functions
const logWrapper = {
info: (message, ...args) => log.info(message, ...args),
warn: (message, ...args) => log.warn(message, ...args),
error: (message, ...args) => log.error(message, ...args),
debug: (message, ...args) => log.debug && log.debug(message, ...args), // Handle optional debug
success: (message, ...args) => log.info(message, ...args) // Map success to info if needed
// --- Input Validation (Keep existing checks) ---
if (!tasksJsonPath) {
log.error('updateTasksDirect called without tasksJsonPath');
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: 'tasksJsonPath is required' },
fromCache: false
};
try {
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
// Execute core updateTasks function, passing the AI client and session
await updateTasks(tasksJsonPath, fromId, prompt, useResearch, {
mcpLog: logWrapper, // Pass the wrapper instead of the raw log object
session
});
// Since updateTasks doesn't return a value but modifies the tasks file,
// we'll return a success message
return {
success: true,
data: {
message: `Successfully updated tasks from ID ${fromId} based on the prompt`,
fromId,
tasksPath: tasksJsonPath,
useResearch
},
fromCache: false // This operation always modifies state and should never be cached
};
} catch (error) {
log.error(`Error updating tasks: ${error.message}`);
return {
success: false,
error: {
code: 'UPDATE_TASKS_ERROR',
message: error.message || 'Unknown error updating tasks'
},
fromCache: false
};
} finally {
// Make sure to restore normal logging even if there's an error
disableSilentMode();
}
} catch (error) {
// Ensure silent mode is disabled
disableSilentMode();
log.error(`Error updating tasks: ${error.message}`);
}
if (args.id !== undefined && from === undefined) {
// Keep 'from' vs 'id' check
const errorMessage =
"Use 'from' parameter, not 'id', or use 'update_task' tool.";
log.error(errorMessage);
return {
success: false,
error: { code: 'PARAMETER_MISMATCH', message: errorMessage },
fromCache: false
};
}
if (!from) {
log.error('Missing from ID.');
return {
success: false,
error: { code: 'MISSING_FROM_ID', message: 'No from ID specified.' },
fromCache: false
};
}
if (!prompt) {
log.error('Missing prompt.');
return {
success: false,
error: { code: 'MISSING_PROMPT', message: 'No prompt specified.' },
fromCache: false
};
}
let fromId;
try {
fromId = parseInt(from, 10);
if (isNaN(fromId) || fromId <= 0) throw new Error();
} catch {
log.error(`Invalid from ID: ${from}`);
return {
success: false,
error: {
code: 'UPDATE_TASKS_ERROR',
message: error.message || 'Unknown error updating tasks'
code: 'INVALID_FROM_ID',
message: `Invalid from ID: ${from}. Must be a positive integer.`
},
fromCache: false
};
}
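// e.g. from '4' parses to fromId 4; 'abc' or '0' falls through to INVALID_FROM_ID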
const useResearch = research === true;
// --- End Input Validation ---
log.info(`Updating tasks from ID ${fromId}. Research: ${useResearch}`);
enableSilentMode(); // Enable silent mode
try {
// Create logger wrapper using the utility
const mcpLog = createLogWrapper(log);
// Execute core updateTasks function, passing session context
await updateTasks(
tasksJsonPath,
fromId,
prompt,
useResearch,
// Pass context with logger wrapper and session
{ mcpLog, session },
'json' // Explicitly request JSON format for MCP
);
// Since updateTasks modifies file and doesn't return data, create success message
return {
success: true,
data: {
message: `Successfully initiated update for tasks from ID ${fromId} based on the prompt.`,
fromId,
tasksPath: tasksJsonPath,
useResearch
},
fromCache: false // Modifies state
};
} catch (error) {
log.error(`Error executing core updateTasks: ${error.message}`);
return {
success: false,
error: {
code: 'UPDATE_TASKS_CORE_ERROR',
message: error.message || 'Unknown error updating tasks'
},
fromCache: false
};
} finally {
disableSilentMode(); // Ensure silent mode is disabled
}
}

View File

@@ -29,19 +29,11 @@ import { complexityReportDirect } from './direct-functions/complexity-report.js'
import { addDependencyDirect } from './direct-functions/add-dependency.js';
import { removeTaskDirect } from './direct-functions/remove-task.js';
import { initializeProjectDirect } from './direct-functions/initialize-project-direct.js';
import { modelsDirect } from './direct-functions/models.js';
// Re-export utility functions
export { findTasksJsonPath } from './utils/path-utils.js';
// Re-export AI client utilities
export {
getAnthropicClientForMCP,
getPerplexityClientForMCP,
getModelConfig,
getBestAvailableAIModel,
handleClaudeError
} from './utils/ai-client-utils.js';
// Use Map for potential future enhancements like introspection or dynamic dispatch
export const directFunctions = new Map([
['listTasksDirect', listTasksDirect],
@@ -66,7 +58,9 @@ export const directFunctions = new Map([
['fixDependenciesDirect', fixDependenciesDirect],
['complexityReportDirect', complexityReportDirect],
['addDependencyDirect', addDependencyDirect],
['removeTaskDirect', removeTaskDirect]
['removeTaskDirect', removeTaskDirect],
['initializeProjectDirect', initializeProjectDirect],
['modelsDirect', modelsDirect]
]);
// Re-export all direct function implementations
@@ -94,5 +88,6 @@ export {
complexityReportDirect,
addDependencyDirect,
removeTaskDirect,
initializeProjectDirect
initializeProjectDirect,
modelsDirect
};

View File

@@ -1,213 +0,0 @@
/**
* ai-client-utils.js
* Utility functions for initializing AI clients in MCP context
*/
import { Anthropic } from '@anthropic-ai/sdk';
import dotenv from 'dotenv';
// Load environment variables for CLI mode
dotenv.config();
// Default model configuration from CLI environment
const DEFAULT_MODEL_CONFIG = {
model: 'claude-3-7-sonnet-20250219',
maxTokens: 64000,
temperature: 0.2
};
/**
* Get an Anthropic client instance initialized with MCP session environment variables
* @param {Object} [session] - Session object from MCP containing environment variables
* @param {Object} [log] - Logger object to use (defaults to console)
* @returns {Anthropic} Anthropic client instance
* @throws {Error} If API key is missing
*/
export function getAnthropicClientForMCP(session, log = console) {
try {
// Extract API key from session.env or fall back to environment variables
const apiKey =
session?.env?.ANTHROPIC_API_KEY || process.env.ANTHROPIC_API_KEY;
if (!apiKey) {
throw new Error(
'ANTHROPIC_API_KEY not found in session environment or process.env'
);
}
// Initialize and return a new Anthropic client
return new Anthropic({
apiKey,
defaultHeaders: {
'anthropic-beta': 'output-128k-2025-02-19' // Include header for increased token limit
}
});
} catch (error) {
log.error(`Failed to initialize Anthropic client: ${error.message}`);
throw error;
}
}
/**
* Get a Perplexity client instance initialized with MCP session environment variables
* @param {Object} [session] - Session object from MCP containing environment variables
* @param {Object} [log] - Logger object to use (defaults to console)
* @returns {OpenAI} OpenAI client configured for Perplexity API
* @throws {Error} If API key is missing or OpenAI package can't be imported
*/
export async function getPerplexityClientForMCP(session, log = console) {
try {
// Extract API key from session.env or fall back to environment variables
const apiKey =
session?.env?.PERPLEXITY_API_KEY || process.env.PERPLEXITY_API_KEY;
if (!apiKey) {
throw new Error(
'PERPLEXITY_API_KEY not found in session environment or process.env'
);
}
// Dynamically import OpenAI (it may not be used in all contexts)
const { default: OpenAI } = await import('openai');
// Initialize and return a new OpenAI client configured for Perplexity
return new OpenAI({
apiKey,
baseURL: 'https://api.perplexity.ai'
});
} catch (error) {
log.error(`Failed to initialize Perplexity client: ${error.message}`);
throw error;
}
}
/**
* Get model configuration from session environment or fall back to defaults
* @param {Object} [session] - Session object from MCP containing environment variables
* @param {Object} [defaults] - Default model configuration to use if not in session
* @returns {Object} Model configuration with model, maxTokens, and temperature
*/
export function getModelConfig(session, defaults = DEFAULT_MODEL_CONFIG) {
// Get values from session or fall back to defaults
return {
model: session?.env?.MODEL || defaults.model,
maxTokens: parseInt(session?.env?.MAX_TOKENS || defaults.maxTokens),
temperature: parseFloat(session?.env?.TEMPERATURE || defaults.temperature)
};
}
/**
* Returns the best available AI model based on specified options
* @param {Object} session - Session object from MCP containing environment variables
* @param {Object} options - Options for model selection
* @param {boolean} [options.requiresResearch=false] - Whether the operation requires research capabilities
* @param {boolean} [options.claudeOverloaded=false] - Whether Claude is currently overloaded
* @param {Object} [log] - Logger object to use (defaults to console)
* @returns {Promise<Object>} Selected model info with type and client
* @throws {Error} If no AI models are available
*/
export async function getBestAvailableAIModel(
session,
options = {},
log = console
) {
const { requiresResearch = false, claudeOverloaded = false } = options;
// Test case: When research is needed but no Perplexity, use Claude
if (
requiresResearch &&
!(session?.env?.PERPLEXITY_API_KEY || process.env.PERPLEXITY_API_KEY) &&
(session?.env?.ANTHROPIC_API_KEY || process.env.ANTHROPIC_API_KEY)
) {
try {
log.warn('Perplexity not available for research, using Claude');
const client = getAnthropicClientForMCP(session, log);
return { type: 'claude', client };
} catch (error) {
log.error(`Claude not available: ${error.message}`);
throw new Error('No AI models available for research');
}
}
// Regular path: Perplexity for research when available
if (
requiresResearch &&
(session?.env?.PERPLEXITY_API_KEY || process.env.PERPLEXITY_API_KEY)
) {
try {
const client = await getPerplexityClientForMCP(session, log);
return { type: 'perplexity', client };
} catch (error) {
log.warn(`Perplexity not available: ${error.message}`);
// Fall through to Claude as backup
}
}
// Test case: Claude for overloaded scenario
if (
claudeOverloaded &&
(session?.env?.ANTHROPIC_API_KEY || process.env.ANTHROPIC_API_KEY)
) {
try {
log.warn(
'Claude is overloaded but no alternatives are available. Proceeding with Claude anyway.'
);
const client = getAnthropicClientForMCP(session, log);
return { type: 'claude', client };
} catch (error) {
log.error(
`Claude not available despite being overloaded: ${error.message}`
);
throw new Error('No AI models available');
}
}
// Default case: Use Claude when available and not overloaded
if (
!claudeOverloaded &&
(session?.env?.ANTHROPIC_API_KEY || process.env.ANTHROPIC_API_KEY)
) {
try {
const client = getAnthropicClientForMCP(session, log);
return { type: 'claude', client };
} catch (error) {
log.warn(`Claude not available: ${error.message}`);
// Fall through to error if no other options
}
}
// If we got here, no models were successfully initialized
throw new Error('No AI models available. Please check your API keys.');
}
/**
* Handle Claude API errors with user-friendly messages
* @param {Error} error - The error from Claude API
* @returns {string} User-friendly error message
*/
export function handleClaudeError(error) {
// Check if it's a structured error response
if (error.type === 'error' && error.error) {
switch (error.error.type) {
case 'overloaded_error':
return 'Claude is currently experiencing high demand and is overloaded. Please wait a few minutes and try again.';
case 'rate_limit_error':
return 'You have exceeded the rate limit. Please wait a few minutes before making more requests.';
case 'invalid_request_error':
return 'There was an issue with the request format. If this persists, please report it as a bug.';
default:
return `Claude API error: ${error.error.message}`;
}
}
// Check for network/timeout errors
if (error.message?.toLowerCase().includes('timeout')) {
return 'The request to Claude timed out. Please try again.';
}
if (error.message?.toLowerCase().includes('network')) {
return 'There was a network error connecting to Claude. Please check your internet connection and try again.';
}
// Default error message
return `Error communicating with Claude: ${error.message}`;
}
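For context on what this deletion removes, a sketch of the legacy call pattern these helpers supported, now superseded by the unified AI service layer added later in this PR (the wrapper function is illustrative):
import { getBestAvailableAIModel, handleClaudeError } from './ai-client-utils.js';

// Legacy pattern (removed by this PR): each caller selected a concrete client itself.
async function legacyResearchClient(session, log) {
	try {
		const { type, client } = await getBestAvailableAIModel(
			session,
			{ requiresResearch: true },
			log
		);
		log.info(`Selected ${type} client for research`);
		return client;
	} catch (error) {
		log.error(handleClaudeError(error));
		throw error;
	}
}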

View File

@@ -1,251 +0,0 @@
import { v4 as uuidv4 } from 'uuid';
class AsyncOperationManager {
constructor() {
this.operations = new Map(); // Stores active operation state
this.completedOperations = new Map(); // Stores completed operations
this.maxCompletedOperations = 100; // Maximum number of completed operations to store
this.listeners = new Map(); // For potential future notifications
}
/**
* Adds an operation to be executed asynchronously.
* @param {Function} operationFn - The async function to execute (e.g., a Direct function).
* @param {Object} args - Arguments to pass to the operationFn.
* @param {Object} context - The MCP tool context { log, reportProgress, session }.
* @returns {string} The unique ID assigned to this operation.
*/
addOperation(operationFn, args, context) {
const operationId = `op-${uuidv4()}`;
const operation = {
id: operationId,
status: 'pending',
startTime: Date.now(),
endTime: null,
result: null,
error: null,
// Store necessary parts of context, especially log for background execution
log: context.log,
reportProgress: context.reportProgress, // Pass reportProgress through
session: context.session // Pass session through if needed by the operationFn
};
this.operations.set(operationId, operation);
this.log(operationId, 'info', `Operation added.`);
// Start execution in the background (don't await here)
this._runOperation(operationId, operationFn, args, context).catch((err) => {
// Catch unexpected errors during the async execution setup itself
this.log(
operationId,
'error',
`Critical error starting operation: ${err.message}`,
{ stack: err.stack }
);
operation.status = 'failed';
operation.error = {
code: 'MANAGER_EXECUTION_ERROR',
message: err.message
};
operation.endTime = Date.now();
// Move to completed operations
this._moveToCompleted(operationId);
});
return operationId;
}
/**
* Internal function to execute the operation.
* @param {string} operationId - The ID of the operation.
* @param {Function} operationFn - The async function to execute.
* @param {Object} args - Arguments for the function.
* @param {Object} context - The original MCP tool context.
*/
async _runOperation(operationId, operationFn, args, context) {
const operation = this.operations.get(operationId);
if (!operation) return; // Should not happen
operation.status = 'running';
this.log(operationId, 'info', `Operation running.`);
this.emit('statusChanged', { operationId, status: 'running' });
try {
// Pass the necessary context parts to the direct function
// The direct function needs to be adapted if it needs reportProgress
// We pass the original context's log, plus our wrapped reportProgress
const result = await operationFn(args, operation.log, {
reportProgress: (progress) =>
this._handleProgress(operationId, progress),
mcpLog: operation.log, // Pass log as mcpLog if direct fn expects it
session: operation.session
});
operation.status = result.success ? 'completed' : 'failed';
operation.result = result.success ? result.data : null;
operation.error = result.success ? null : result.error;
this.log(
operationId,
'info',
`Operation finished with status: ${operation.status}`
);
} catch (error) {
this.log(
operationId,
'error',
`Operation failed with error: ${error.message}`,
{ stack: error.stack }
);
operation.status = 'failed';
operation.error = {
code: 'OPERATION_EXECUTION_ERROR',
message: error.message
};
} finally {
operation.endTime = Date.now();
this.emit('statusChanged', {
operationId,
status: operation.status,
result: operation.result,
error: operation.error
});
// Move to completed operations if done or failed
if (operation.status === 'completed' || operation.status === 'failed') {
this._moveToCompleted(operationId);
}
}
}
/**
* Move an operation from active operations to completed operations history.
* @param {string} operationId - The ID of the operation to move.
* @private
*/
_moveToCompleted(operationId) {
const operation = this.operations.get(operationId);
if (!operation) return;
// Store only the necessary data in completed operations
const completedData = {
id: operation.id,
status: operation.status,
startTime: operation.startTime,
endTime: operation.endTime,
result: operation.result,
error: operation.error
};
this.completedOperations.set(operationId, completedData);
this.operations.delete(operationId);
// Trim completed operations if exceeding maximum
if (this.completedOperations.size > this.maxCompletedOperations) {
// Get the oldest operation (sorted by endTime)
const oldest = [...this.completedOperations.entries()].sort(
(a, b) => a[1].endTime - b[1].endTime
)[0];
if (oldest) {
this.completedOperations.delete(oldest[0]);
}
}
}
/**
* Handles progress updates from the running operation and forwards them.
* @param {string} operationId - The ID of the operation reporting progress.
* @param {Object} progress - The progress object { progress, total? }.
*/
_handleProgress(operationId, progress) {
const operation = this.operations.get(operationId);
if (operation && operation.reportProgress) {
try {
// Use the reportProgress function captured from the original context
operation.reportProgress(progress);
this.log(
operationId,
'debug',
`Reported progress: ${JSON.stringify(progress)}`
);
} catch (err) {
this.log(
operationId,
'warn',
`Failed to report progress: ${err.message}`
);
// Don't stop the operation, just log the reporting failure
}
}
}
/**
* Retrieves the status and result/error of an operation.
* @param {string} operationId - The ID of the operation.
* @returns {Object | null} The operation details or null if not found.
*/
getStatus(operationId) {
// First check active operations
const operation = this.operations.get(operationId);
if (operation) {
return {
id: operation.id,
status: operation.status,
startTime: operation.startTime,
endTime: operation.endTime,
result: operation.result,
error: operation.error
};
}
// Then check completed operations
const completedOperation = this.completedOperations.get(operationId);
if (completedOperation) {
return completedOperation;
}
// Operation not found in either active or completed
return {
error: {
code: 'OPERATION_NOT_FOUND',
message: `Operation ID ${operationId} not found. It may have been completed and removed from history, or the ID may be invalid.`
},
status: 'not_found'
};
}
/**
* Internal logging helper to prefix logs with the operation ID.
* @param {string} operationId - The ID of the operation.
* @param {'info'|'warn'|'error'|'debug'} level - Log level.
* @param {string} message - Log message.
* @param {Object} [meta] - Additional metadata.
*/
log(operationId, level, message, meta = {}) {
const operation = this.operations.get(operationId);
// Use the logger instance associated with the operation if available, otherwise console
const logger = operation?.log || console;
const logFn = logger[level] || logger.log || console.log; // Fallback
logFn(`[AsyncOp ${operationId}] ${message}`, meta);
}
// --- Basic Event Emitter ---
on(eventName, listener) {
if (!this.listeners.has(eventName)) {
this.listeners.set(eventName, []);
}
this.listeners.get(eventName).push(listener);
}
emit(eventName, data) {
if (this.listeners.has(eventName)) {
this.listeners.get(eventName).forEach((listener) => listener(data));
}
}
}
// Export a singleton instance
const asyncOperationManager = new AsyncOperationManager();
// Export the manager and potentially the class if needed elsewhere
export { asyncOperationManager, AsyncOperationManager };
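A sketch of how this now-removed manager was driven: submit a background operation, then poll its status by ID (the operation function and arguments are placeholders):
import { asyncOperationManager } from './async-manager.js';

// someDirectFn is any (args, log, context) direct function; placeholder here.
const operationId = asyncOperationManager.addOperation(
	someDirectFn,
	{ tasksJsonPath: '/abs/tasks/tasks.json' },
	{
		log: console,
		reportProgress: (progress) => console.log('progress:', progress),
		session: null
	}
);

// Returns { id, status, startTime, endTime, result, error }, or a
// 'not_found' status once the record is trimmed from history.
const status = asyncOperationManager.getStatus(operationId);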

View File

@@ -5,7 +5,6 @@ import { fileURLToPath } from 'url';
import fs from 'fs';
import logger from './logger.js';
import { registerTaskMasterTools } from './tools/index.js';
import { asyncOperationManager } from './core/utils/async-manager.js';
// Load environment variables
dotenv.config();
@@ -35,9 +34,6 @@ class TaskMasterMCPServer {
this.server.addResourceTemplate({});
// Make the manager accessible (e.g., pass it to tool registration)
this.asyncManager = asyncOperationManager;
// Bind methods
this.init = this.init.bind(this);
this.start = this.start.bind(this);
@@ -88,7 +84,4 @@ class TaskMasterMCPServer {
}
}
// Export the manager from here as well, if needed elsewhere
export { asyncOperationManager };
export default TaskMasterMCPServer;

View File

@@ -1,5 +1,6 @@
import chalk from 'chalk';
import { isSilentMode } from '../../scripts/modules/utils.js';
import { getLogLevel } from '../../scripts/modules/config-manager.js';
// Define log levels
const LOG_LEVELS = {
@@ -10,10 +11,8 @@ const LOG_LEVELS = {
success: 4
};
// Get log level from config manager or default to info
const LOG_LEVEL = LOG_LEVELS[getLogLevel().toLowerCase()] ?? LOG_LEVELS.info;
/**
* Logs a message with the specified level

View File

@@ -6,9 +6,7 @@
import { z } from 'zod';
import {
createErrorResponse,
createContentResponse,
getProjectRootFromSession,
executeTaskMasterCommand,
handleApiResult
} from './utils.js';
import { addTaskDirect } from '../core/task-master-core.js';
@@ -101,6 +99,10 @@ export function registerAddTaskTool(server) {
tasksJsonPath: tasksJsonPath,
// Pass other relevant args
prompt: args.prompt,
title: args.title,
description: args.description,
details: args.details,
testStrategy: args.testStrategy,
dependencies: args.dependencies,
priority: args.priority,
research: args.research

View File

@@ -4,14 +4,11 @@
*/
import { z } from 'zod';
import { handleApiResult, createErrorResponse } from './utils.js';
import { analyzeTaskComplexityDirect } from '../core/direct-functions/analyze-task-complexity.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import path from 'path';
import fs from 'fs';
/**
* Register the analyze tool with the MCP server
@@ -27,13 +24,7 @@ export function registerAnalyzeTool(server) {
.string()
.optional()
.describe(
					'Output file path relative to project root (default: scripts/task-complexity-report.json)'
),
threshold: z.coerce
.number()
@@ -47,12 +38,13 @@ export function registerAnalyzeTool(server) {
.string()
.optional()
.describe(
'Absolute path to the tasks file (default: tasks/tasks.json)'
'Absolute path to the tasks file in the /tasks folder inside the project root (default: tasks/tasks.json)'
),
research: z
.boolean()
.optional()
.default(false)
.describe('Use research role for complexity analysis'),
projectRoot: z
.string()
.describe('The directory of the project. Must be an absolute path.')
@@ -60,17 +52,15 @@ export function registerAnalyzeTool(server) {
execute: async (args, { log, session }) => {
try {
log.info(
`Executing analyze_project_complexity tool with args: ${JSON.stringify(args)}`
);
const rootFolder = args.projectRoot;
if (!rootFolder) {
return createErrorResponse('projectRoot is required.');
}
if (!path.isAbsolute(rootFolder)) {
return createErrorResponse('projectRoot must be an absolute path.');
}
let tasksJsonPath;
@@ -82,7 +72,7 @@ export function registerAnalyzeTool(server) {
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
`Failed to find tasks.json within project root '${rootFolder}': ${error.message}`
);
}
@@ -90,11 +80,25 @@ export function registerAnalyzeTool(server) {
? path.resolve(rootFolder, args.output)
: path.resolve(rootFolder, 'scripts', 'task-complexity-report.json');
const outputDir = path.dirname(outputPath);
try {
if (!fs.existsSync(outputDir)) {
fs.mkdirSync(outputDir, { recursive: true });
log.info(`Created output directory: ${outputDir}`);
}
} catch (dirError) {
log.error(
`Failed to create output directory ${outputDir}: ${dirError.message}`
);
return createErrorResponse(
`Failed to create output directory: ${dirError.message}`
);
}
const result = await analyzeTaskComplexityDirect(
{
tasksJsonPath: tasksJsonPath,
outputPath: outputPath,
threshold: args.threshold,
research: args.research
},
@@ -103,20 +107,17 @@ export function registerAnalyzeTool(server) {
);
if (result.success) {
log.info(`Tool analyze_project_complexity finished successfully.`);
} else {
log.error(
`Tool analyze_project_complexity failed: ${result.error?.message || 'Unknown error'}`
);
}
return handleApiResult(result, log, 'Error analyzing task complexity');
} catch (error) {
log.error(`Critical error in analyze tool execute: ${error.message}`);
return createErrorResponse(`Internal tool error: ${error.message}`);
}
}
});

View File

@@ -19,22 +19,27 @@ import { findTasksJsonPath } from '../core/utils/path-utils.js';
export function registerExpandAllTool(server) {
server.addTool({
name: 'expand_all',
description:
'Expand all pending tasks into subtasks based on complexity or defaults',
parameters: z.object({
num: z
.string()
.optional()
.describe(
'Target number of subtasks per task (uses complexity/defaults otherwise)'
),
research: z
.boolean()
.optional()
.describe(
'Enable research-backed subtask generation (e.g., using Perplexity)'
),
prompt: z
.string()
.optional()
.describe(
'Additional context to guide subtask generation for all tasks'
),
force: z
.boolean()
.optional()
@@ -45,34 +50,37 @@ export function registerExpandAllTool(server) {
.string()
.optional()
.describe(
'Absolute path to the tasks file in the /tasks folder inside the project root (default: tasks/tasks.json)'
),
projectRoot: z
.string()
.optional()
.describe(
'Absolute path to the project root directory (derived from session if possible)'
)
}),
execute: async (args, { log, session }) => {
try {
log.info(
`Tool expand_all execution started with args: ${JSON.stringify(args)}`
);
				// Ensure project root was determined
				const rootFolder = getProjectRootFromSession(session, log);
if (!rootFolder) {
log.error('Could not determine project root from session.');
return createErrorResponse(
'Could not determine project root from session.'
);
}
log.info(`Project root determined: ${rootFolder}`);
// Resolve the path to tasks.json
let tasksJsonPath;
try {
tasksJsonPath = findTasksJsonPath(
{ projectRoot: rootFolder, file: args.file },
log
);
log.info(`Resolved tasks.json path: ${tasksJsonPath}`);
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
@@ -82,9 +90,7 @@ export function registerExpandAllTool(server) {
const result = await expandAllTasksDirect(
{
// Pass the explicitly resolved path
tasksJsonPath: tasksJsonPath,
// Pass other relevant args
num: args.num,
research: args.research,
prompt: args.prompt,
@@ -94,18 +100,17 @@ export function registerExpandAllTool(server) {
{ session }
);
			return handleApiResult(result, log, 'Error expanding all tasks');
		} catch (error) {
log.error(
`Unexpected error in expand_all tool execute: ${error.message}`
);
if (error.stack) {
log.error(error.stack);
}
return createErrorResponse(
`An unexpected error occurred: ${error.message}`
);
}
}
});

View File

@@ -11,8 +11,6 @@ import {
} from './utils.js';
import { expandTaskDirect } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import fs from 'fs';
import path from 'path';
/**
* Register the expand-task tool with the MCP server
@@ -28,16 +26,26 @@ export function registerExpandTaskTool(server) {
research: z
.boolean()
.optional()
.default(false)
.describe('Use research role for generation'),
prompt: z
.string()
.optional()
.describe('Additional context for subtask generation'),
file: z
.string()
.optional()
.describe(
'Path to the tasks file relative to project root (e.g., tasks/tasks.json)'
),
projectRoot: z
.string()
.describe('The directory of the project. Must be an absolute path.'),
force: z
.boolean()
.optional()
.default(false)
.describe('Force expansion even if subtasks exist')
}),
execute: async (args, { log, session }) => {
try {

View File

@@ -40,6 +40,10 @@ export function registerShowTaskTool(server) {
description: 'Get detailed information about a specific task',
parameters: z.object({
id: z.string().describe('Task ID to get'),
status: z
.string()
.optional()
.describe("Filter subtasks by status (e.g., 'pending', 'done')"),
file: z.string().optional().describe('Absolute path to the tasks file'),
projectRoot: z
.string()
@@ -52,11 +56,9 @@ export function registerShowTaskTool(server) {
); // Use JSON.stringify for better visibility
try {
				log.info(
					`Getting task details for ID: ${args.id}${args.status ? ` (filtering subtasks by status: ${args.status})` : ''}`
				);
// Get project root from args or session
const rootFolder =
@@ -91,10 +93,9 @@ export function registerShowTaskTool(server) {
const result = await showTaskDirect(
{
// Pass the explicitly resolved path
tasksJsonPath: tasksJsonPath,
// Pass other relevant args
id: args.id,
status: args.status
},
log
);

View File

@@ -27,39 +27,51 @@ import { registerComplexityReportTool } from './complexity-report.js';
import { registerAddDependencyTool } from './add-dependency.js';
import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
/**
* Register all Task Master tools with the MCP server
* @param {Object} server - FastMCP server instance
*/
export function registerTaskMasterTools(server) {
try {
		// Register each tool in a logical workflow order

		// Group 1: Initialization & Setup
		registerInitializeProjectTool(server);
		registerModelsTool(server);
		registerParsePRDTool(server);

		// Group 2: Task Listing & Viewing
		registerListTasksTool(server);
		registerShowTaskTool(server);
		registerNextTaskTool(server);
		registerComplexityReportTool(server);

		// Group 3: Task Status & Management
		registerSetTaskStatusTool(server);
		registerGenerateTool(server);

		// Group 4: Task Creation & Modification
		registerAddTaskTool(server);
		registerAddSubtaskTool(server);
		registerUpdateTool(server);
		registerUpdateTaskTool(server);
		registerUpdateSubtaskTool(server);
		registerRemoveTaskTool(server);
		registerRemoveSubtaskTool(server);
		registerClearSubtasksTool(server);

		// Group 5: Task Analysis & Expansion
		registerAnalyzeTool(server);
		registerExpandTaskTool(server);
		registerExpandAllTool(server);

		// Group 6: Dependency Management
		registerAddDependencyTool(server);
		registerRemoveDependencyTool(server);
		registerValidateDependenciesTool(server);
		registerFixDependenciesTool(server);
} catch (error) {
logger.error(`Error registering Task Master tools: ${error.message}`);
throw error;

View File

@@ -1,9 +1,5 @@
import { z } from 'zod';
import { createErrorResponse, handleApiResult } from './utils.js';
import { initializeProjectDirect } from '../core/task-master-core.js';
export function registerInitializeProjectTool(server) {

View File

@@ -0,0 +1,89 @@
/**
* models.js
* MCP tool for managing AI model configurations
*/
import { z } from 'zod';
import {
getProjectRootFromSession,
handleApiResult,
createErrorResponse
} from './utils.js';
import { modelsDirect } from '../core/task-master-core.js';
/**
* Register the models tool with the MCP server
* @param {Object} server - FastMCP server instance
*/
export function registerModelsTool(server) {
server.addTool({
name: 'models',
description:
'Get information about available AI models or set model configurations. Run without arguments to get the current model configuration and API key status for the selected model providers.',
parameters: z.object({
setMain: z
.string()
.optional()
.describe(
'Set the primary model for task generation/updates. Model provider API key is required in the MCP config ENV.'
),
setResearch: z
.string()
.optional()
.describe(
'Set the model for research-backed operations. Model provider API key is required in the MCP config ENV.'
),
setFallback: z
.string()
.optional()
.describe(
'Set the model to use if the primary fails. Model provider API key is required in the MCP config ENV.'
),
listAvailableModels: z
.boolean()
.optional()
.describe('List all available models not currently in use.'),
projectRoot: z
.string()
.optional()
.describe('The directory of the project. Must be an absolute path.'),
openrouter: z
.boolean()
.optional()
.describe('Indicates the set model ID is a custom OpenRouter model.'),
ollama: z
.boolean()
.optional()
.describe('Indicates the set model ID is a custom Ollama model.')
}),
execute: async (args, { log, session }) => {
try {
log.info(`Starting models tool with args: ${JSON.stringify(args)}`);
// Get project root from args or session
const rootFolder =
args.projectRoot || getProjectRootFromSession(session, log);
// Ensure project root was determined
if (!rootFolder) {
return createErrorResponse(
'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
);
}
// Call the direct function
const result = await modelsDirect(
{ ...args, projectRoot: rootFolder },
log,
{ session }
);
// Handle and return the result
return handleApiResult(result, log);
} catch (error) {
log.error(`Error in models tool: ${error.message}`);
return createErrorResponse(error.message);
}
}
});
}
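A sketch of exercising the same direct function outside MCP, mirroring what `execute` does above (the project root and model ID are illustrative):
import { modelsDirect } from '../core/task-master-core.js';

// Equivalent of calling the tool with { setMain: ... }:
const result = await modelsDirect(
	{ projectRoot: '/abs/path/to/project', setMain: 'claude-3-7-sonnet-20250219' },
	console, // any logger exposing info/warn/error
	{ session: null }
);
// result follows the { success, data | error, fromCache } envelope
// used by the other direct functions in this PR.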

View File

@@ -10,11 +10,7 @@ import {
createErrorResponse
} from './utils.js';
import { parsePRDDirect } from '../core/task-master-core.js';
import { resolveProjectPaths } from '../core/utils/path-utils.js';
/**
* Register the parsePRD tool with the MCP server

View File

@@ -24,7 +24,7 @@ export function registerRemoveTaskTool(server) {
id: z
.string()
.describe(
"ID(s) of the task(s) or subtask(s) to remove (e.g., '5' or '5.2' or '5,6,7')"
"ID of the task or subtask to remove (e.g., '5' or '5.2'). Can be comma-separated to update multiple tasks/subtasks at once."
),
file: z.string().optional().describe('Absolute path to the tasks file'),
projectRoot: z

View File

@@ -24,7 +24,7 @@ export function registerSetTaskStatusTool(server) {
id: z
.string()
.describe(
"Task ID or subtask ID (e.g., '15', '15.2'). Can be comma-separated for multiple updates."
"Task ID or subtask ID (e.g., '15', '15.2'). Can be comma-separated to update multiple tasks/subtasks at once."
),
status: z
.string()

View File

@@ -4,13 +4,10 @@
*/
import { z } from 'zod';
import { handleApiResult, createErrorResponse } from './utils.js';
import { updateTasksDirect } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import path from 'path';
/**
* Register the update tool with the MCP server
@@ -41,26 +38,25 @@ export function registerUpdateTool(server) {
}),
execute: async (args, { log, session }) => {
try {
log.info(`Executing update tool with args: ${JSON.stringify(args)}`);
// 1. Get Project Root
const rootFolder = args.projectRoot;
if (!rootFolder || !path.isAbsolute(rootFolder)) {
return createErrorResponse(
'projectRoot is required and must be absolute.'
);
}
log.info(`Project root: ${rootFolder}`);
// 2. Resolve Path
let tasksJsonPath;
try {
tasksJsonPath = findTasksJsonPath(
{ projectRoot: rootFolder, file: args.file },
log
);
log.info(`Resolved tasks path: ${tasksJsonPath}`);
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
@@ -68,6 +64,7 @@ export function registerUpdateTool(server) {
);
}
// 3. Call Direct Function
const result = await updateTasksDirect(
{
tasksJsonPath: tasksJsonPath,
@@ -79,20 +76,12 @@ export function registerUpdateTool(server) {
{ session }
);
// 4. Handle Result
log.info(`updateTasksDirect result: success=${result.success}`);
return handleApiResult(result, log, 'Error updating tasks');
} catch (error) {
log.error(`Critical error in update tool execute: ${error.message}`);
return createErrorResponse(`Internal tool error: ${error.message}`);
}
}
});

View File

@@ -443,7 +443,7 @@ function createContentResponse(content) {
* @param {string} errorMessage - Error message to include in response
* @returns {Object} - Error content response object in FastMCP format
*/
function createErrorResponse(errorMessage) {
return {
content: [
{
@@ -455,6 +455,25 @@ export function createErrorResponse(errorMessage) {
};
}
/**
* Creates a logger wrapper object compatible with core function expectations.
* Adapts the MCP logger to the { info, warn, error, debug, success } structure.
* @param {Object} log - The MCP logger instance.
* @returns {Object} - The logger wrapper object.
*/
function createLogWrapper(log) {
return {
info: (message, ...args) => log.info(message, ...args),
warn: (message, ...args) => log.warn(message, ...args),
error: (message, ...args) => log.error(message, ...args),
// Handle optional debug method
debug: (message, ...args) =>
log.debug ? log.debug(message, ...args) : null,
// Map success to info as a common fallback
success: (message, ...args) => log.info(message, ...args)
};
}
// Ensure all functions are exported
export {
getProjectRoot,
@@ -463,5 +482,7 @@ export {
executeTaskMasterCommand,
getCachedOrExecute,
processMCPResponseData,
createContentResponse,
createErrorResponse,
createLogWrapper
};
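A sketch of `createLogWrapper` in use, adapting an MCP logger before handing it to core code that expects the { info, warn, error, debug, success } shape:
// Inside a tool's execute(args, { log, session }) - illustrative only:
const logWrapper = createLogWrapper(log);
logWrapper.success('Tasks generated'); // routed to log.info under the hood
logWrapper.debug('extra detail');      // no-op when the MCP logger lacks debug

// A core/direct function can then receive it wherever a full logger is expected:
// await someDirectFn({ ...args }, logWrapper, { session });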

package-lock.json (generated, 2099 changed lines)

File diff suppressed because it is too large

View File

@@ -14,8 +14,8 @@
"test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
"test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
"test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
"prepare-package": "node scripts/prepare-package.js",
"prepublishOnly": "npm run prepare-package",
"test:e2e": "./tests/e2e/run_e2e.sh",
"test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
"prepare": "chmod +x bin/task-master.js mcp-server/server.js",
"changeset": "changeset",
"release": "changeset publish",
@@ -39,10 +39,16 @@
"author": "Eyal Toledano",
"license": "MIT WITH Commons-Clause",
"dependencies": {
"@ai-sdk/anthropic": "^1.2.10",
"@ai-sdk/azure": "^1.3.17",
"@ai-sdk/google": "^1.2.13",
"@ai-sdk/mistral": "^1.2.7",
"@ai-sdk/openai": "^1.3.20",
"@ai-sdk/perplexity": "^1.1.7",
"@ai-sdk/xai": "^1.2.15",
"@anthropic-ai/sdk": "^0.39.0",
"boxen": "^8.0.1",
"chalk": "^4.1.2",
"cli-table3": "^0.6.5",
"@openrouter/ai-sdk-provider": "^0.4.5",
"ai": "^4.3.10",
"commander": "^11.1.0",
"cors": "^2.8.5",
"dotenv": "^16.3.1",
@@ -55,6 +61,7 @@
"inquirer": "^12.5.0",
"jsonwebtoken": "^9.0.2",
"lru-cache": "^10.2.0",
"ollama-ai-provider": "^1.2.0",
"openai": "^4.89.0",
"ora": "^8.2.0",
"uuid": "^11.1.0"
@@ -89,10 +96,19 @@
"@changesets/changelog-github": "^0.5.1",
"@changesets/cli": "^2.28.1",
"@types/jest": "^29.5.14",
"boxen": "^8.0.1",
"chalk": "^5.4.1",
"cli-table3": "^0.6.5",
"execa": "^8.0.1",
"ink": "^5.0.1",
"jest": "^29.7.0",
"jest-environment-node": "^29.7.0",
"mock-fs": "^5.5.0",
"node-fetch": "^3.3.2",
"prettier": "^3.5.3",
"supertest": "^7.1.0"
"react": "^18.3.1",
"supertest": "^7.1.0",
"tsx": "^4.16.2",
"zod": "^3.23.8"
}
}

View File

@@ -8,6 +8,9 @@
* It imports functionality from the modules directory and provides a CLI.
*/
import dotenv from 'dotenv';
dotenv.config();
// Add at the very beginning of the file
if (process.env.DEBUG === '1') {
console.error('DEBUG - dev.js received args:', process.argv.slice(2));

View File

@@ -24,6 +24,7 @@ import boxen from 'boxen';
import gradient from 'gradient-string';
import { isSilentMode } from './modules/utils.js';
import { convertAllCursorRulesToRooRules } from './modules/rule-transformer.js';
import { execSync } from 'child_process';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
@@ -332,10 +333,7 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
}
		// For other files, warn and skip to avoid overwriting existing content
		log('warn', `${targetPath} already exists, skipping.`);
return;
}
@@ -344,7 +342,7 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
log('info', `Created file: ${targetPath}`);
}
// Main function to initialize a new project (No longer needs isInteractive logic)
async function initializeProject(options = {}) {
// Receives options as argument
// Only display banner if not in silent mode
@@ -360,8 +358,8 @@ async function initializeProject(options = {}) {
console.log('==================================================');
}
// Determine if we should skip prompts based on the passed options
const skipPrompts = options.yes || (options.name && options.description);
if (!isSilentMode()) {
console.log('Skip prompts determined:', skipPrompts);
}
@@ -371,7 +369,12 @@ async function initializeProject(options = {}) {
console.log('SKIPPING PROMPTS - Using defaults or provided values');
}
// Use provided options or defaults
const projectName = options.name || 'task-master-project';
const projectDescription =
options.description || 'A project managed with Task Master AI';
const projectVersion = options.version || '0.1.0';
const authorName = options.author || 'Vibe coder';
const dryRun = options.dryRun || false;
const addAliases = options.aliases || false;
@@ -387,10 +390,9 @@ async function initializeProject(options = {}) {
};
}
// Create structure using only necessary values
createProjectStructure(addAliases, dryRun);
} else {
// Interactive logic
log('info', 'Required options not provided, proceeding with prompts.');
const rl = readline.createInterface({
input: process.stdin,
@@ -425,11 +427,10 @@ async function initializeProject(options = {}) {
if (!shouldContinue) {
log('info', 'Project initialization cancelled by user');
process.exit(0);
return;
}
// Still respect dryRun if passed initially even when prompting
const dryRun = options.dryRun || false;
if (dryRun) {
@@ -445,11 +446,11 @@ async function initializeProject(options = {}) {
}
// Create structure using only necessary values
createProjectStructure(addAliasesPrompted, dryRun);
} catch (error) {
rl.close();
log('error', `Error during initialization process: ${error.message}`);
process.exit(1);
}
}
}
@@ -464,7 +465,7 @@ function promptQuestion(rl, question) {
}
// Function to create the project structure
function createProjectStructure(addAliases, dryRun) {
const targetDir = process.cwd();
log('info', `Initializing project in ${targetDir}`);
@@ -503,6 +504,15 @@ function createProjectStructure(addAliases) {
replacements
);
// Copy .taskmasterconfig with project name
copyTemplateFile(
'.taskmasterconfig',
path.join(targetDir, '.taskmasterconfig'),
{
...replacements
}
);
// Copy .gitignore
copyTemplateFile('gitignore', path.join(targetDir, '.gitignore'));
@@ -562,11 +572,75 @@ function createProjectStructure(addAliases) {
replacements
);
// Add shell aliases if requested
if (addAliases) {
		addShellAliases();
	}

	// Initialize git repository if git is available
try {
if (!fs.existsSync(path.join(targetDir, '.git'))) {
log('info', 'Initializing git repository...');
execSync('git init', { stdio: 'ignore' });
log('success', 'Git repository initialized');
}
} catch (error) {
log('warn', 'Git not available, skipping repository initialization');
}
// Run npm install automatically
const npmInstallOptions = {
cwd: targetDir,
// Default to inherit for interactive CLI, change if silent
stdio: 'inherit'
};
if (isSilentMode()) {
// If silent (MCP mode), suppress npm install output
npmInstallOptions.stdio = 'ignore';
log('info', 'Running npm install silently...'); // Log our own message
} else {
// Interactive mode, show the boxen message
console.log(
boxen(chalk.cyan('Installing dependencies...'), {
padding: 0.5,
margin: 0.5,
borderStyle: 'round',
borderColor: 'blue'
})
);
	}

	// Run the dependency install using the options configured above
	execSync('npm install', npmInstallOptions);
// === Add Model Configuration Step ===
if (!isSilentMode() && !dryRun) {
console.log(
boxen(chalk.cyan('Configuring AI Models...'), {
padding: 0.5,
margin: { top: 1, bottom: 0.5 },
borderStyle: 'round',
borderColor: 'blue'
})
);
log(
'info',
'Running interactive model setup. Please select your preferred AI models.'
);
try {
execSync('npx task-master models --setup', {
stdio: 'inherit',
cwd: targetDir
});
log('success', 'AI Models configured.');
} catch (error) {
log('error', 'Failed to configure AI models:', error.message);
log('warn', 'You may need to run "task-master models --setup" manually.');
}
} else if (isSilentMode() && !dryRun) {
log('info', 'Skipping interactive model setup in silent (MCP) mode.');
log(
'warn',
'Please configure AI models using "task-master models --set-..." or the "models" MCP tool.'
);
} else if (dryRun) {
log('info', 'DRY RUN: Skipping interactive model setup.');
}
// ====================================
// Display success message
if (!isSilentMode()) {
console.log(
@@ -590,43 +664,59 @@ function createProjectStructure(addAliases) {
if (!isSilentMode()) {
console.log(
boxen(
chalk.cyan.bold('Things you should do next:') +
'\n\n' +
chalk.white('1. ') +
chalk.yellow(
'Configure AI models (if needed) and add API keys to `.env`'
) +
'\n' +
chalk.white(' ├─ ') +
chalk.dim('Models: Use `task-master models` commands') +
'\n' +
chalk.white(' └─ ') +
chalk.dim(
'Keys: Add provider API keys to .env (or inside the MCP config file i.e. .cursor/mcp.json)'
) +
'\n' +
chalk.white('2. ') +
chalk.yellow(
'Discuss your idea with AI and ask for a PRD using example_prd.txt, and save it to scripts/PRD.txt'
) +
'\n' +
chalk.white('3. ') +
chalk.yellow(
'Ask Cursor Agent (or run CLI) to parse your PRD and generate initial tasks:'
) +
'\n' +
chalk.white(' └─ ') +
chalk.dim('MCP Tool: ') +
chalk.cyan('parse_prd') +
chalk.dim(' | CLI: ') +
chalk.cyan('task-master parse-prd scripts/prd.txt') +
'\n' +
chalk.white('4. ') +
chalk.yellow(
'Ask Cursor to analyze the complexity of the tasks in your PRD using research'
) +
'\n' +
chalk.white(' └─ ') +
chalk.dim('MCP Tool: ') +
chalk.cyan('analyze_project_complexity') +
chalk.dim(' | CLI: ') +
chalk.cyan('task-master analyze-complexity') +
'\n' +
chalk.white('5. ') +
chalk.yellow(
'Ask Cursor to expand all of your tasks using the complexity analysis'
) +
'\n' +
chalk.white('6. ') +
chalk.yellow('Ask Cursor to begin working on the next task') +
'\n' +
chalk.white('7. ') +
chalk.yellow(
'Ask Cursor to set the status of one or many tasks/subtasks at a time. Use the task id from the task lists.'
) +
'\n' +
chalk.white('8. ') +
@@ -639,6 +729,10 @@ function createProjectStructure(addAliases) {
'\n\n' +
chalk.dim(
'* Review the README.md file to learn how to use other commands via Cursor Agent.'
) +
'\n' +
chalk.dim(
'* Use the task-master command without arguments to see all available commands.'
),
{
padding: 1,

View File

@@ -0,0 +1,494 @@
/**
* ai-services-unified.js
* Centralized AI service layer using provider modules and config-manager.
*/
// Vercel AI SDK functions are NOT called directly anymore.
// import { generateText, streamText, generateObject } from 'ai';
// --- Core Dependencies ---
import {
getMainProvider,
getMainModelId,
getResearchProvider,
getResearchModelId,
getFallbackProvider,
getFallbackModelId,
getParametersForRole
} from './config-manager.js';
import { log, resolveEnvVariable } from './utils.js';
import * as anthropic from '../../src/ai-providers/anthropic.js';
import * as perplexity from '../../src/ai-providers/perplexity.js';
import * as google from '../../src/ai-providers/google.js';
import * as openai from '../../src/ai-providers/openai.js';
import * as xai from '../../src/ai-providers/xai.js';
import * as openrouter from '../../src/ai-providers/openrouter.js';
// TODO: Import other provider modules when implemented (ollama, etc.)
// --- Provider Function Map ---
// Maps provider names (lowercase) to their respective service functions
const PROVIDER_FUNCTIONS = {
anthropic: {
generateText: anthropic.generateAnthropicText,
streamText: anthropic.streamAnthropicText,
generateObject: anthropic.generateAnthropicObject
},
perplexity: {
generateText: perplexity.generatePerplexityText,
streamText: perplexity.streamPerplexityText,
generateObject: perplexity.generatePerplexityObject
},
google: {
// Add Google entry
generateText: google.generateGoogleText,
streamText: google.streamGoogleText,
generateObject: google.generateGoogleObject
},
openai: {
// ADD: OpenAI entry
generateText: openai.generateOpenAIText,
streamText: openai.streamOpenAIText,
generateObject: openai.generateOpenAIObject
},
xai: {
// ADD: xAI entry
generateText: xai.generateXaiText,
streamText: xai.streamXaiText,
generateObject: xai.generateXaiObject // Note: Object generation might be unsupported
},
openrouter: {
// ADD: OpenRouter entry
generateText: openrouter.generateOpenRouterText,
streamText: openrouter.streamOpenRouterText,
generateObject: openrouter.generateOpenRouterObject
}
// TODO: Add entries for ollama, etc. when implemented
};
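The map gives the runner one uniform lookup per provider/service pair. A minimal sketch of the dispatch it enables:
// Resolve the function for a configured provider and service type:
const fnSet = PROVIDER_FUNCTIONS['anthropic']; // provider name, lowercased
const generateFn = fnSet?.generateText;
if (typeof generateFn !== 'function') {
	throw new Error('Service not implemented for this provider');
}
// generateFn is then invoked with the normalized callParams built
// by _unifiedServiceRunner below.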
// --- Configuration for Retries ---
const MAX_RETRIES = 2;
const INITIAL_RETRY_DELAY_MS = 1000;
// Helper function to check if an error is retryable
function isRetryableError(error) {
const errorMessage = error.message?.toLowerCase() || '';
return (
errorMessage.includes('rate limit') ||
errorMessage.includes('overloaded') ||
errorMessage.includes('service temporarily unavailable') ||
errorMessage.includes('timeout') ||
errorMessage.includes('network error') ||
error.status === 429 ||
error.status >= 500
);
}
/**
* Extracts a user-friendly error message from a potentially complex AI error object.
* Prioritizes nested messages and falls back to the top-level message.
* @param {Error | object | any} error - The error object.
* @returns {string} A concise error message.
*/
function _extractErrorMessage(error) {
try {
// Attempt 1: Look for Vercel SDK specific nested structure (common)
if (error?.data?.error?.message) {
return error.data.error.message;
}
// Attempt 2: Look for nested error message directly in the error object
if (error?.error?.message) {
return error.error.message;
}
// Attempt 3: Look for nested error message in response body if it's JSON string
if (typeof error?.responseBody === 'string') {
try {
const body = JSON.parse(error.responseBody);
if (body?.error?.message) {
return body.error.message;
}
} catch (parseError) {
// Ignore if responseBody is not valid JSON
}
}
// Attempt 4: Use the top-level message if it exists
if (typeof error?.message === 'string' && error.message) {
return error.message;
}
// Attempt 5: Handle simple string errors
if (typeof error === 'string') {
return error;
}
// Fallback
return 'An unknown AI service error occurred.';
} catch (e) {
// Safety net
return 'Failed to extract error message.';
}
}
/**
* Internal helper to resolve the API key for a given provider.
* @param {string} providerName - The name of the provider (lowercase).
* @param {object|null} session - Optional MCP session object.
* @returns {string|null} The API key or null if not found/needed.
* @throws {Error} If a required API key is missing.
*/
function _resolveApiKey(providerName, session) {
const keyMap = {
openai: 'OPENAI_API_KEY',
anthropic: 'ANTHROPIC_API_KEY',
google: 'GOOGLE_API_KEY',
perplexity: 'PERPLEXITY_API_KEY',
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY'
};
// Double check this -- I have had to use an api key for ollama in the past
// if (providerName === 'ollama') {
// return null; // Ollama typically doesn't require an API key for basic setup
// }
const envVarName = keyMap[providerName];
if (!envVarName) {
throw new Error(
`Unknown provider '${providerName}' for API key resolution.`
);
}
const apiKey = resolveEnvVariable(envVarName, session);
if (!apiKey) {
throw new Error(
`Required API key ${envVarName} for provider '${providerName}' is not set in environment or session.`
);
}
return apiKey;
}
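A worked example of the resolution order (assuming `resolveEnvVariable` prefers `session.env` over `process.env`, as the removed ai-client-utils.js did):
// With session = { env: { XAI_API_KEY: 'session-key' } }:
//   _resolveApiKey('xai', session)  -> 'session-key'
// With no session and process.env.XAI_API_KEY set:
//   _resolveApiKey('xai', null)     -> value of process.env.XAI_API_KEY
// Unknown provider name, or a missing required key -> throws.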
/**
* Internal helper to attempt a provider-specific AI API call with retries.
*
* @param {function} providerApiFn - The specific provider function to call (e.g., generateAnthropicText).
* @param {object} callParams - Parameters object for the provider function.
* @param {string} providerName - Name of the provider (for logging).
* @param {string} modelId - Specific model ID (for logging).
* @param {string} attemptRole - The role being attempted (for logging).
* @returns {Promise<object>} The result from the successful API call.
* @throws {Error} If the call fails after all retries.
*/
async function _attemptProviderCallWithRetries(
providerApiFn,
callParams,
providerName,
modelId,
attemptRole
) {
let retries = 0;
const fnName = providerApiFn.name;
while (retries <= MAX_RETRIES) {
try {
log(
'info',
`Attempt ${retries + 1}/${MAX_RETRIES + 1} calling ${fnName} (Provider: ${providerName}, Model: ${modelId}, Role: ${attemptRole})`
);
// Call the specific provider function directly
const result = await providerApiFn(callParams);
log(
'info',
`${fnName} succeeded for role ${attemptRole} (Provider: ${providerName}) on attempt ${retries + 1}`
);
return result;
} catch (error) {
log(
'warn',
`Attempt ${retries + 1} failed for role ${attemptRole} (${fnName} / ${providerName}): ${error.message}`
);
if (isRetryableError(error) && retries < MAX_RETRIES) {
retries++;
const delay = INITIAL_RETRY_DELAY_MS * Math.pow(2, retries - 1);
log(
'info',
`Retryable error detected. Retrying in ${delay / 1000}s...`
);
await new Promise((resolve) => setTimeout(resolve, delay));
} else {
log(
'error',
`Non-retryable error or max retries reached for role ${attemptRole} (${fnName} / ${providerName}).`
);
throw error;
}
}
}
// Should not be reached due to throw in the else block
throw new Error(
`Exhausted all retries for role ${attemptRole} (${fnName} / ${providerName})`
);
}
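With MAX_RETRIES = 2 and INITIAL_RETRY_DELAY_MS = 1000, the backoff schedule works out to:
// Attempt 1 fails (retryable) -> retries = 1 -> delay = 1000 * 2^0 = 1000 ms
// Attempt 2 fails (retryable) -> retries = 2 -> delay = 1000 * 2^1 = 2000 ms
// Attempt 3 fails             -> error is re-thrown (max retries reached)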
/**
* Base logic for unified service functions.
* @param {string} serviceType - Type of service ('generateText', 'streamText', 'generateObject').
* @param {object} params - Original parameters passed to the service function.
* @returns {Promise<any>} Result from the underlying provider call.
*/
async function _unifiedServiceRunner(serviceType, params) {
const {
role: initialRole,
session,
systemPrompt,
prompt,
schema,
objectName,
...restApiParams
} = params;
log('info', `${serviceType}Service called`, { role: initialRole });
let sequence;
if (initialRole === 'main') {
sequence = ['main', 'fallback', 'research'];
} else if (initialRole === 'fallback') {
sequence = ['fallback', 'research'];
} else if (initialRole === 'research') {
sequence = ['research', 'fallback'];
} else {
log(
'warn',
`Unknown initial role: ${initialRole}. Defaulting to main -> fallback -> research sequence.`
);
sequence = ['main', 'fallback', 'research'];
}
let lastError = null;
let lastCleanErrorMessage =
'AI service call failed for all configured roles.';
for (const currentRole of sequence) {
let providerName, modelId, apiKey, roleParams, providerFnSet, providerApiFn;
try {
log('info', `New AI service call with role: ${currentRole}`);
// 1. Get Config: Provider, Model, Parameters for the current role
// Call individual getters based on the current role
if (currentRole === 'main') {
providerName = getMainProvider();
modelId = getMainModelId();
} else if (currentRole === 'research') {
providerName = getResearchProvider();
modelId = getResearchModelId();
} else if (currentRole === 'fallback') {
providerName = getFallbackProvider();
modelId = getFallbackModelId();
} else {
log(
'error',
`Unknown role encountered in _unifiedServiceRunner: ${currentRole}`
);
lastError =
lastError || new Error(`Unknown AI role specified: ${currentRole}`);
continue;
}
if (!providerName || !modelId) {
log(
'warn',
`Skipping role '${currentRole}': Provider or Model ID not configured.`
);
lastError =
lastError ||
new Error(
`Configuration missing for role '${currentRole}'. Provider: ${providerName}, Model: ${modelId}`
);
continue;
}
roleParams = getParametersForRole(currentRole);
// 2. Get Provider Function Set
providerFnSet = PROVIDER_FUNCTIONS[providerName?.toLowerCase()];
if (!providerFnSet) {
log(
'warn',
`Skipping role '${currentRole}': Provider '${providerName}' not supported or map entry missing.`
);
lastError =
lastError ||
new Error(`Unsupported provider configured: ${providerName}`);
continue;
}
// Use the original service type to get the function
providerApiFn = providerFnSet[serviceType];
if (typeof providerApiFn !== 'function') {
log(
'warn',
`Skipping role '${currentRole}': Service type '${serviceType}' not implemented for provider '${providerName}'.`
);
lastError =
lastError ||
new Error(
`Service '${serviceType}' not implemented for provider ${providerName}`
);
continue;
}
// 3. Resolve API Key (will throw if required and missing)
apiKey = _resolveApiKey(providerName?.toLowerCase(), session);
// 4. Construct Messages Array
const messages = [];
if (systemPrompt) {
messages.push({ role: 'system', content: systemPrompt });
}
// IN THE FUTURE WHEN DOING CONTEXT IMPROVEMENTS
// {
// type: 'text',
// text: 'Large cached context here like a tasks json',
// providerOptions: {
// anthropic: { cacheControl: { type: 'ephemeral' } }
// }
// }
// Example
			// if (params.context) { // context is a json string of a tasks object or some other stuff
// messages.push({
// type: 'text',
// text: params.context,
// providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } }
// });
// }
if (prompt) {
// Ensure prompt exists before adding
messages.push({ role: 'user', content: prompt });
} else {
// Throw an error if the prompt is missing, as it's essential
throw new Error('User prompt content is missing.');
}
// 5. Prepare call parameters (using messages array)
const callParams = {
apiKey,
modelId,
maxTokens: roleParams.maxTokens,
temperature: roleParams.temperature,
messages,
...(serviceType === 'generateObject' && { schema, objectName }),
...restApiParams
};
// 6. Attempt the call with retries
const result = await _attemptProviderCallWithRetries(
providerApiFn,
callParams,
providerName,
modelId,
currentRole
);
log('info', `${serviceType}Service succeeded using role: ${currentRole}`);
return result;
} catch (error) {
const cleanMessage = _extractErrorMessage(error);
log(
'error',
`Service call failed for role ${currentRole} (Provider: ${providerName || 'unknown'}, Model: ${modelId || 'unknown'}): ${cleanMessage}`
);
lastError = error;
lastCleanErrorMessage = cleanMessage;
if (serviceType === 'generateObject') {
const lowerCaseMessage = cleanMessage.toLowerCase();
if (
lowerCaseMessage.includes(
'no endpoints found that support tool use'
) ||
lowerCaseMessage.includes('does not support tool_use') ||
lowerCaseMessage.includes('tool use is not supported') ||
lowerCaseMessage.includes('tools are not supported') ||
lowerCaseMessage.includes('function calling is not supported')
) {
const specificErrorMsg = `Model '${modelId || 'unknown'}' via provider '${providerName || 'unknown'}' does not support the 'tool use' required by generateObjectService. Please configure a model that supports tool/function calling for the '${currentRole}' role, or use generateTextService if structured output is not strictly required.`;
log('error', `[Tool Support Error] ${specificErrorMsg}`);
throw new Error(specificErrorMsg);
}
}
}
}
// If loop completes, all roles failed
log('error', `All roles in the sequence [${sequence.join(', ')}] failed.`);
// Throw a new error with the cleaner message from the last failure
throw new Error(lastCleanErrorMessage);
}
/**
* Unified service function for generating text.
* Handles client retrieval, retries, and fallback sequence.
*
* @param {object} params - Parameters for the service call.
* @param {string} params.role - The initial client role ('main', 'research', 'fallback').
* @param {object} [params.session=null] - Optional MCP session object.
* @param {string} params.prompt - The prompt for the AI.
* @param {string} [params.systemPrompt] - Optional system prompt.
* // Other specific generateText params can be included here.
* @returns {Promise<string>} The generated text content.
*/
async function generateTextService(params) {
return _unifiedServiceRunner('generateText', params);
}
/**
* Unified service function for streaming text.
* Handles client retrieval, retries, and fallback sequence.
*
* @param {object} params - Parameters for the service call.
* @param {string} params.role - The initial client role ('main', 'research', 'fallback').
* @param {object} [params.session=null] - Optional MCP session object.
* @param {string} params.prompt - The prompt for the AI.
* @param {string} [params.systemPrompt] - Optional system prompt.
* // Other specific streamText params can be included here.
* @returns {Promise<ReadableStream<string>>} A readable stream of text deltas.
*/
async function streamTextService(params) {
return _unifiedServiceRunner('streamText', params);
}
/**
* Unified service function for generating structured objects.
* Handles client retrieval, retries, and fallback sequence.
*
* @param {object} params - Parameters for the service call.
* @param {string} params.role - The initial client role ('main', 'research', 'fallback').
* @param {object} [params.session=null] - Optional MCP session object.
* @param {import('zod').ZodSchema} params.schema - The Zod schema for the expected object.
* @param {string} params.prompt - The prompt for the AI.
* @param {string} [params.systemPrompt] - Optional system prompt.
* @param {string} [params.objectName='generated_object'] - Name for object/tool.
* @param {number} [params.maxRetries=3] - Max retries for object generation.
* @returns {Promise<object>} The generated object matching the schema.
*/
async function generateObjectService(params) {
const defaults = {
objectName: 'generated_object',
maxRetries: 3
};
const combinedParams = { ...defaults, ...params };
return _unifiedServiceRunner('generateObject', combinedParams);
}
export { generateTextService, streamTextService, generateObjectService };
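A minimal usage sketch of these exports (illustrative only, not part of this diff; the schema and prompts below are hypothetical):

```js
import { z } from 'zod';
import {
	generateTextService,
	generateObjectService
} from './ai-services-unified.js';

// Hypothetical schema for a structured result
const TaskSchema = z.object({
	title: z.string(),
	description: z.string()
});

// Structured output via the configured 'main' role (falls back per the role sequence)
const task = await generateObjectService({
	role: 'main',
	schema: TaskSchema,
	objectName: 'task',
	systemPrompt: 'You create concise software tasks.',
	prompt: 'Create a task for adding input validation.'
});

// Plain text via the 'research' role
const summary = await generateTextService({
	role: 'research',
	prompt: 'Summarize current best practices for rate limiting.'
});
```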

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,716 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';
import { log, resolveEnvVariable, findProjectRoot } from './utils.js';
// Calculate __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Load supported models from JSON file using the calculated __dirname
let MODEL_MAP;
try {
const supportedModelsRaw = fs.readFileSync(
path.join(__dirname, 'supported-models.json'),
'utf-8'
);
MODEL_MAP = JSON.parse(supportedModelsRaw);
} catch (error) {
console.error(
chalk.red(
'FATAL ERROR: Could not load supported-models.json. Please ensure the file exists and is valid JSON.'
),
error
);
MODEL_MAP = {}; // Assign an empty map so any reads before exit don't throw
process.exit(1); // Exit: the tool cannot function without the supported models list
}
const CONFIG_FILE_NAME = '.taskmasterconfig';
// Define valid providers dynamically from the loaded MODEL_MAP
const VALID_PROVIDERS = Object.keys(MODEL_MAP || {});
// Default configuration values (used if .taskmasterconfig is missing or incomplete)
const DEFAULTS = {
models: {
main: {
provider: 'anthropic',
modelId: 'claude-3-7-sonnet-20250219',
maxTokens: 64000,
temperature: 0.2
},
research: {
provider: 'perplexity',
modelId: 'sonar-pro',
maxTokens: 8700,
temperature: 0.1
},
fallback: {
// Default fallback model (merged with any valid user-configured fallback)
provider: 'anthropic',
modelId: 'claude-3-5-sonnet',
maxTokens: 64000, // Default parameters if fallback IS configured
temperature: 0.2
}
},
global: {
logLevel: 'info',
debug: false,
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseUrl: 'http://localhost:11434/api'
}
};
// --- Internal Config Loading ---
let loadedConfig = null;
let loadedConfigRoot = null; // Track which root loaded the config
// Custom Error for configuration issues
class ConfigurationError extends Error {
constructor(message) {
super(message);
this.name = 'ConfigurationError';
}
}
function _loadAndValidateConfig(explicitRoot = null) {
const defaults = DEFAULTS; // Use the defined defaults
let rootToUse = explicitRoot;
let configSource = explicitRoot
? `explicit root (${explicitRoot})`
: 'defaults (no root provided yet)';
// ---> If no explicit root, TRY to find it <---
if (!rootToUse) {
rootToUse = findProjectRoot();
if (rootToUse) {
configSource = `found root (${rootToUse})`;
} else {
// No root found, return defaults immediately
return defaults;
}
}
// ---> End find project root logic <---
// --- Proceed with loading from the determined rootToUse ---
const configPath = path.join(rootToUse, CONFIG_FILE_NAME);
let config = { ...defaults }; // Start with a shallow copy of defaults; nested role configs are merged below
let configExists = false;
if (fs.existsSync(configPath)) {
configExists = true;
try {
const rawData = fs.readFileSync(configPath, 'utf-8');
const parsedConfig = JSON.parse(rawData);
// Deep merge parsed config onto defaults
config = {
models: {
main: { ...defaults.models.main, ...parsedConfig?.models?.main },
research: {
...defaults.models.research,
...parsedConfig?.models?.research
},
fallback:
parsedConfig?.models?.fallback?.provider &&
parsedConfig?.models?.fallback?.modelId
? { ...defaults.models.fallback, ...parsedConfig.models.fallback }
: { ...defaults.models.fallback }
},
global: { ...defaults.global, ...parsedConfig?.global }
};
configSource = `file (${configPath})`; // Update source info
// --- Validation (Warn if file content is invalid) ---
// Uses console.warn directly rather than the log utility
if (!validateProvider(config.models.main.provider)) {
console.warn(
chalk.yellow(
`Warning: Invalid main provider "${config.models.main.provider}" in ${configPath}. Falling back to default.`
)
);
config.models.main = { ...defaults.models.main };
}
if (!validateProvider(config.models.research.provider)) {
console.warn(
chalk.yellow(
`Warning: Invalid research provider "${config.models.research.provider}" in ${configPath}. Falling back to default.`
)
);
config.models.research = { ...defaults.models.research };
}
if (
config.models.fallback?.provider &&
!validateProvider(config.models.fallback.provider)
) {
console.warn(
chalk.yellow(
`Warning: Invalid fallback provider "${config.models.fallback.provider}" in ${configPath}. Fallback model configuration will be ignored.`
)
);
config.models.fallback.provider = undefined;
config.models.fallback.modelId = undefined;
}
} catch (error) {
// Use console.error for actual errors during parsing
console.error(
chalk.red(
`Error reading or parsing ${configPath}: ${error.message}. Using default configuration.`
)
);
config = { ...defaults }; // Reset to defaults on parse error
configSource = `defaults (parse error at ${configPath})`;
}
} else {
// Config file doesn't exist at the determined rootToUse.
if (explicitRoot) {
// Only warn if an explicit root was *expected*.
console.warn(
chalk.yellow(
`Warning: ${CONFIG_FILE_NAME} not found at provided project root (${explicitRoot}). Using default configuration. Run 'task-master models --setup' to configure.`
)
);
} else {
console.warn(
chalk.yellow(
`Warning: ${CONFIG_FILE_NAME} not found at derived root (${rootToUse}). Using defaults.`
)
);
}
// Keep config as defaults
config = { ...defaults };
configSource = `defaults (file not found at ${configPath})`;
}
return config;
}
/**
* Gets the current configuration, loading it if necessary.
* Handles MCP initialization context gracefully.
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @param {boolean} forceReload - Force reloading the config file.
* @returns {object} The loaded configuration object.
*/
function getConfig(explicitRoot = null, forceReload = false) {
// Determine if a reload is necessary
const needsLoad =
!loadedConfig ||
forceReload ||
(explicitRoot && explicitRoot !== loadedConfigRoot);
if (needsLoad) {
const newConfig = _loadAndValidateConfig(explicitRoot); // _load handles null explicitRoot
// Only update the global cache if loading was forced or if an explicit root
// was provided (meaning we attempted to load a specific project's config).
// We avoid caching the initial default load triggered without an explicitRoot.
if (forceReload || explicitRoot) {
loadedConfig = newConfig;
loadedConfigRoot = explicitRoot; // Store the root used for this loaded config
}
return newConfig; // Return the newly loaded/default config
}
// If no load was needed, return the cached config
return loadedConfig;
}
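Caching behavior in a nutshell (illustrative; the paths are hypothetical):

```js
const a = getConfig('/path/to/project');       // loads .taskmasterconfig and caches it for this root
const b = getConfig('/path/to/project');       // cache hit: returns the same object
const c = getConfig('/path/to/project', true); // forceReload: re-reads the file and refreshes the cache
```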
/**
* Validates if a provider name is in the list of supported providers.
* @param {string} providerName The name of the provider.
* @returns {boolean} True if the provider is valid, false otherwise.
*/
function validateProvider(providerName) {
return VALID_PROVIDERS.includes(providerName);
}
/**
* Optional: Validates if a modelId is known for a given provider based on MODEL_MAP.
* This is a non-strict validation; an unknown model might still be valid.
* @param {string} providerName The name of the provider.
* @param {string} modelId The model ID.
* @returns {boolean} True if the modelId is in the map for the provider, false otherwise.
*/
function validateProviderModelCombination(providerName, modelId) {
// If provider isn't even in our map, we can't validate the model
if (!MODEL_MAP[providerName]) {
return true; // Allow unknown providers or those without specific model lists
}
// If the provider is known, check if the model is in its list OR if the list is empty (meaning accept any)
return (
MODEL_MAP[providerName].length === 0 ||
// Use .some() to check the 'id' property of objects in the array
MODEL_MAP[providerName].some((modelObj) => modelObj.id === modelId)
);
}
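For example (illustrative; 'acme-ai' is a made-up provider name):

```js
validateProvider('anthropic'); // true: a key in supported-models.json
validateProvider('acme-ai');   // false: not a known provider

// Non-strict model check: unknown providers and empty model lists pass
validateProviderModelCombination('anthropic', 'claude-3-5-sonnet-20241022'); // true
validateProviderModelCombination('anthropic', 'not-a-real-model');           // false
validateProviderModelCombination('acme-ai', 'anything');                     // true (provider not in MODEL_MAP)
```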
// --- Role-Specific Getters ---
function getModelConfigForRole(role, explicitRoot = null) {
const config = getConfig(explicitRoot);
const roleConfig = config?.models?.[role];
if (!roleConfig) {
log(
'warn',
`No model configuration found for role: ${role}. Returning default.`
);
return DEFAULTS.models[role] || {};
}
return roleConfig;
}
function getMainProvider(explicitRoot = null) {
return getModelConfigForRole('main', explicitRoot).provider;
}
function getMainModelId(explicitRoot = null) {
return getModelConfigForRole('main', explicitRoot).modelId;
}
function getMainMaxTokens(explicitRoot = null) {
// Directly return value from config (which includes defaults)
return getModelConfigForRole('main', explicitRoot).maxTokens;
}
function getMainTemperature(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('main', explicitRoot).temperature;
}
function getResearchProvider(explicitRoot = null) {
return getModelConfigForRole('research', explicitRoot).provider;
}
function getResearchModelId(explicitRoot = null) {
return getModelConfigForRole('research', explicitRoot).modelId;
}
function getResearchMaxTokens(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('research', explicitRoot).maxTokens;
}
function getResearchTemperature(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('research', explicitRoot).temperature;
}
function getFallbackProvider(explicitRoot = null) {
// Directly return value from config (will be undefined if not set)
return getModelConfigForRole('fallback', explicitRoot).provider;
}
function getFallbackModelId(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('fallback', explicitRoot).modelId;
}
function getFallbackMaxTokens(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('fallback', explicitRoot).maxTokens;
}
function getFallbackTemperature(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('fallback', explicitRoot).temperature;
}
// --- Global Settings Getters ---
function getGlobalConfig(explicitRoot = null) {
const config = getConfig(explicitRoot);
// Ensure global defaults are applied if global section is missing
return { ...DEFAULTS.global, ...(config?.global || {}) };
}
function getLogLevel(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).logLevel.toLowerCase();
}
function getDebugFlag(explicitRoot = null) {
// Directly return value from config, ensure boolean
return getGlobalConfig(explicitRoot).debug === true;
}
function getDefaultSubtasks(explicitRoot = null) {
// Directly return value from config, ensure integer
const val = getGlobalConfig(explicitRoot).defaultSubtasks;
const parsedVal = parseInt(val, 10);
return isNaN(parsedVal) ? DEFAULTS.global.defaultSubtasks : parsedVal;
}
function getDefaultPriority(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).defaultPriority;
}
function getProjectName(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).projectName;
}
function getOllamaBaseUrl(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).ollamaBaseUrl;
}
/**
* Gets model parameters (maxTokens, temperature) for a specific role,
* considering model-specific overrides from supported-models.json.
* @param {string} role - The role ('main', 'research', 'fallback').
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {{maxTokens: number, temperature: number}}
*/
function getParametersForRole(role, explicitRoot = null) {
const roleConfig = getModelConfigForRole(role, explicitRoot);
const roleMaxTokens = roleConfig.maxTokens;
const roleTemperature = roleConfig.temperature;
const modelId = roleConfig.modelId;
const providerName = roleConfig.provider;
let effectiveMaxTokens = roleMaxTokens; // Start with the role's default
try {
// Find the model definition in MODEL_MAP
const providerModels = MODEL_MAP[providerName];
if (providerModels && Array.isArray(providerModels)) {
const modelDefinition = providerModels.find((m) => m.id === modelId);
// Check if a model-specific max_tokens is defined and valid
if (
modelDefinition &&
typeof modelDefinition.max_tokens === 'number' &&
modelDefinition.max_tokens > 0
) {
const modelSpecificMaxTokens = modelDefinition.max_tokens;
// Use the minimum of the role default and the model specific limit
effectiveMaxTokens = Math.min(roleMaxTokens, modelSpecificMaxTokens);
log(
'debug',
`Applying model-specific max_tokens (${modelSpecificMaxTokens}) for ${modelId}. Effective limit: ${effectiveMaxTokens}`
);
} else {
log(
'debug',
`No valid model-specific max_tokens override found for ${modelId}. Using role default: ${roleMaxTokens}`
);
}
} else {
log(
'debug',
`No model definitions found for provider ${providerName} in MODEL_MAP. Using role default maxTokens: ${roleMaxTokens}`
);
}
} catch (lookupError) {
log(
'warn',
`Error looking up model-specific max_tokens for ${modelId}: ${lookupError.message}. Using role default: ${roleMaxTokens}`
);
// Fallback to role default on error
effectiveMaxTokens = roleMaxTokens;
}
return {
maxTokens: effectiveMaxTokens,
temperature: roleTemperature
};
}
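A worked example of the clamping logic, assuming a hypothetical config where the main role is set to OpenAI's gpt-4o with a role-level maxTokens of 64000:

```js
// gpt-4o's entry in supported-models.json caps max_tokens at 16384, so the
// effective limit is the stricter of the two values:
const { maxTokens, temperature } = getParametersForRole('main');
// maxTokens === Math.min(64000, 16384) === 16384
// temperature comes straight from the role config (e.g. 0.2)
```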
/**
* Checks if the API key for a given provider is set in the environment.
* Checks process.env first, then session.env if session is provided.
* @param {string} providerName - The name of the provider (e.g., 'openai', 'anthropic').
* @param {object|null} [session=null] - The MCP session object (optional).
* @returns {boolean} True if the API key is set, false otherwise.
*/
function isApiKeySet(providerName, session = null) {
// Define the expected environment variable name for each provider
if (providerName?.toLowerCase() === 'ollama') {
return true; // Indicate key status is effectively "OK"
}
const keyMap = {
openai: 'OPENAI_API_KEY',
anthropic: 'ANTHROPIC_API_KEY',
google: 'GOOGLE_API_KEY',
perplexity: 'PERPLEXITY_API_KEY',
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY'
// Add other providers as needed
};
const providerKey = providerName?.toLowerCase();
if (!providerKey || !keyMap[providerKey]) {
log('warn', `Unknown provider name: ${providerName} in isApiKeySet check.`);
return false;
}
const envVarName = keyMap[providerKey];
const apiKeyValue = resolveEnvVariable(envVarName, session);
// Check if the key exists, is not empty, and is not a placeholder
return (
apiKeyValue &&
apiKeyValue.trim() !== '' &&
!/YOUR_.*_API_KEY_HERE/.test(apiKeyValue) && // General placeholder check
!apiKeyValue.includes('KEY_HERE')
); // Another common placeholder pattern
}
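Illustrative behavior (the key values below are placeholders, not real credentials):

```js
process.env.OPENAI_API_KEY = 'YOUR_OPENAI_API_KEY_HERE';
isApiKeySet('openai'); // false: placeholder values are rejected

process.env.OPENAI_API_KEY = 'sk-some-real-looking-key';
isApiKeySet('openai'); // true: non-empty and not a placeholder

isApiKeySet('ollama'); // true: no API key required
```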
/**
* Checks the API key status within .cursor/mcp.json for a given provider.
* Reads the mcp.json file, finds the taskmaster-ai server config, and checks the relevant env var.
* @param {string} providerName The name of the provider.
* @param {string|null} projectRoot - Optional explicit path to the project root.
* @returns {boolean} True if the key exists and is not a placeholder, false otherwise.
*/
function getMcpApiKeyStatus(providerName, projectRoot = null) {
const rootDir = projectRoot || findProjectRoot(); // Use existing root finding
if (!rootDir) {
console.warn(
chalk.yellow('Warning: Could not find project root to check mcp.json.')
);
return false; // Cannot check without root
}
const mcpConfigPath = path.join(rootDir, '.cursor', 'mcp.json');
if (!fs.existsSync(mcpConfigPath)) {
// console.warn(chalk.yellow('Warning: .cursor/mcp.json not found.'));
return false; // File doesn't exist
}
try {
const mcpConfigRaw = fs.readFileSync(mcpConfigPath, 'utf-8');
const mcpConfig = JSON.parse(mcpConfigRaw);
const mcpEnv = mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
if (!mcpEnv) {
// console.warn(chalk.yellow('Warning: Could not find taskmaster-ai env in mcp.json.'));
return false; // Structure missing
}
let apiKeyToCheck = null;
let placeholderValue = null;
switch (providerName) {
case 'anthropic':
apiKeyToCheck = mcpEnv.ANTHROPIC_API_KEY;
placeholderValue = 'YOUR_ANTHROPIC_API_KEY_HERE';
break;
case 'openai':
apiKeyToCheck = mcpEnv.OPENAI_API_KEY;
placeholderValue = 'YOUR_OPENAI_API_KEY_HERE'; // Assumes the standard YOUR_*_API_KEY_HERE placeholder pattern
break;
case 'openrouter':
apiKeyToCheck = mcpEnv.OPENROUTER_API_KEY;
placeholderValue = 'YOUR_OPENROUTER_API_KEY_HERE';
break;
case 'google':
apiKeyToCheck = mcpEnv.GOOGLE_API_KEY;
placeholderValue = 'YOUR_GOOGLE_API_KEY_HERE';
break;
case 'perplexity':
apiKeyToCheck = mcpEnv.PERPLEXITY_API_KEY;
placeholderValue = 'YOUR_PERPLEXITY_API_KEY_HERE';
break;
case 'xai':
apiKeyToCheck = mcpEnv.XAI_API_KEY;
placeholderValue = 'YOUR_XAI_API_KEY_HERE';
break;
case 'ollama':
return true; // No key needed
case 'mistral':
apiKeyToCheck = mcpEnv.MISTRAL_API_KEY;
placeholderValue = 'YOUR_MISTRAL_API_KEY_HERE';
break;
case 'azure':
apiKeyToCheck = mcpEnv.AZURE_OPENAI_API_KEY;
placeholderValue = 'YOUR_AZURE_OPENAI_API_KEY_HERE';
break;
default:
return false; // Unknown provider
}
return (
!!apiKeyToCheck &&
apiKeyToCheck !== placeholderValue && // Explicitly reject the known placeholder
!/KEY_HERE$/.test(apiKeyToCheck)
);
} catch (error) {
console.error(
chalk.red(`Error reading or parsing .cursor/mcp.json: ${error.message}`)
);
return false;
}
}
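For reference, a sketch of the `.cursor/mcp.json` shape this function inspects (values are placeholders):

```js
const exampleMcpConfig = {
	mcpServers: {
		'taskmaster-ai': {
			env: {
				ANTHROPIC_API_KEY: 'sk-ant-xxxx', // reported as set
				PERPLEXITY_API_KEY: 'YOUR_PERPLEXITY_API_KEY_HERE' // reported as not set (placeholder)
			}
		}
	}
};
```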
/**
* Gets a list of available models based on the MODEL_MAP.
* @returns {Array<{id: string, name: string, provider: string, swe_score: number|null, cost_per_1m_tokens: {input: number|null, output: number|null}|null, allowed_roles: string[]}>}
*/
function getAvailableModels() {
const available = [];
for (const [provider, models] of Object.entries(MODEL_MAP)) {
if (models.length > 0) {
models.forEach((modelObj) => {
// Basic name generation - can be improved
const modelId = modelObj.id;
const sweScore = modelObj.swe_score;
const cost = modelObj.cost_per_1m_tokens;
const allowedRoles = modelObj.allowed_roles || ['main', 'fallback'];
const nameParts = modelId
.split('-')
.map((p) => p.charAt(0).toUpperCase() + p.slice(1));
// Handle specific known names better if needed
let name = nameParts.join(' ');
if (modelId === 'claude-3.5-sonnet-20240620')
name = 'Claude 3.5 Sonnet';
if (modelId === 'claude-3-7-sonnet-20250219')
name = 'Claude 3.7 Sonnet';
if (modelId === 'gpt-4o') name = 'GPT-4o';
if (modelId === 'gpt-4-turbo') name = 'GPT-4 Turbo';
if (modelId === 'sonar-pro') name = 'Perplexity Sonar Pro';
if (modelId === 'sonar-mini') name = 'Perplexity Sonar Mini';
available.push({
id: modelId,
name: name,
provider: provider,
swe_score: sweScore,
cost_per_1m_tokens: cost,
allowed_roles: allowedRoles
});
});
} else {
// Providers with empty model lists get a generic placeholder entry
available.push({
id: `[${provider}-any]`,
name: `Any (${provider})`,
provider: provider
});
}
}
return available;
}
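An illustrative entry from the returned array, with values taken from supported-models.json in this PR:

```js
const example = {
	id: 'claude-3-7-sonnet-20250219',
	name: 'Claude 3.7 Sonnet',
	provider: 'anthropic',
	swe_score: 0.623,
	cost_per_1m_tokens: { input: 3.0, output: 15.0 },
	allowed_roles: ['main', 'fallback']
};
```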
/**
* Writes the configuration object to the file.
* @param {Object} config The configuration object to write.
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {boolean} True if successful, false otherwise.
*/
function writeConfig(config, explicitRoot = null) {
// ---> Determine root path reliably <---
let rootPath = explicitRoot;
if (explicitRoot === null || explicitRoot === undefined) {
// Logic matching _loadAndValidateConfig
const foundRoot = findProjectRoot(); // *** Explicitly call findProjectRoot ***
if (!foundRoot) {
console.error(
chalk.red(
'Error: Could not determine project root. Configuration not saved.'
)
);
return false;
}
rootPath = foundRoot;
}
// ---> End determine root path logic <---
const configPath =
path.basename(rootPath) === CONFIG_FILE_NAME
? rootPath
: path.join(rootPath, CONFIG_FILE_NAME);
try {
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
loadedConfig = config; // Update the cache after successful write
return true;
} catch (error) {
console.error(
chalk.red(
`Error writing configuration to ${configPath}: ${error.message}`
)
);
return false;
}
}
/**
* Checks if the .taskmasterconfig file exists at the project root
* @param {string|null} explicitRoot - Optional explicit path to the project root
* @returns {boolean} True if the file exists, false otherwise
*/
function isConfigFilePresent(explicitRoot = null) {
// ---> Determine root path reliably <---
let rootPath = explicitRoot;
if (explicitRoot === null || explicitRoot === undefined) {
// Logic matching _loadAndValidateConfig
const foundRoot = findProjectRoot(); // *** Explicitly call findProjectRoot ***
if (!foundRoot) {
return false; // Cannot check if root doesn't exist
}
rootPath = foundRoot;
}
// ---> End determine root path logic <---
const configPath = path.join(rootPath, CONFIG_FILE_NAME);
return fs.existsSync(configPath);
}
/**
* Gets a list of all provider names defined in the MODEL_MAP.
* @returns {string[]} An array of provider names.
*/
function getAllProviders() {
return Object.keys(MODEL_MAP || {});
}
export {
// Core config access
getConfig,
writeConfig,
ConfigurationError, // Export custom error type
isConfigFilePresent, // Add the new function export
// Validation
validateProvider,
validateProviderModelCombination,
VALID_PROVIDERS,
MODEL_MAP,
getAvailableModels,
// Role-specific getters (No env var overrides)
getMainProvider,
getMainModelId,
getMainMaxTokens,
getMainTemperature,
getResearchProvider,
getResearchModelId,
getResearchMaxTokens,
getResearchTemperature,
getFallbackProvider,
getFallbackModelId,
getFallbackMaxTokens,
getFallbackTemperature,
// Global setting getters (No env var overrides)
getLogLevel,
getDebugFlag,
getDefaultSubtasks,
getDefaultPriority,
getProjectName,
getOllamaBaseUrl,
getParametersForRole,
// API Key Checkers (still relevant)
isApiKeySet,
getMcpApiKeyStatus,
// ADD: Function to get all provider names
getAllProviders
};

View File

@@ -6,7 +6,6 @@
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import { Anthropic } from '@anthropic-ai/sdk';
import {
log,
@@ -22,11 +21,6 @@ import { displayBanner } from './ui.js';
import { generateTaskFiles } from './task-manager.js';
// Initialize Anthropic client
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY
});
/**
* Add a dependency to a task
* @param {string} tasksPath - Path to the tasks.json file
@@ -361,11 +355,13 @@ function isCircularDependency(tasks, taskId, chain = []) {
// Find the task or subtask
let task = null;
let parentIdForSubtask = null;
// Check if this is a subtask reference (e.g., "1.2")
if (taskIdStr.includes('.')) {
const [parentId, subtaskId] = taskIdStr.split('.').map(Number);
const parentTask = tasks.find((t) => t.id === parentId);
parentIdForSubtask = parentId; // Store parent ID if it's a subtask
if (parentTask && parentTask.subtasks) {
task = parentTask.subtasks.find((st) => st.id === subtaskId);
@@ -385,10 +381,18 @@ function isCircularDependency(tasks, taskId, chain = []) {
}
// Check each dependency recursively
const newChain = [...chain, taskId];
return task.dependencies.some((depId) =>
isCircularDependency(tasks, depId, newChain)
);
const newChain = [...chain, taskIdStr]; // Use taskIdStr for consistency
return task.dependencies.some((depId) => {
let normalizedDepId = String(depId);
// Normalize relative subtask dependencies
if (typeof depId === 'number' && parentIdForSubtask !== null) {
// If the current task is a subtask AND the dependency is a number,
// assume it refers to a sibling subtask.
normalizedDepId = `${parentIdForSubtask}.${depId}`;
}
// Pass the normalized ID to the recursive call
return isCircularDependency(tasks, normalizedDepId, newChain);
});
}
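To illustrate the normalization (hypothetical IDs): while checking subtask "1.2", a bare numeric dependency is resolved to a sibling subtask before recursing:

```js
// taskIdStr = '1.2'    =>  parentIdForSubtask = 1
// depId = 3 (number)   =>  normalizedDepId = '1.3' (sibling subtask)
// depId = '4.1' (string) => left as-is
```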
/**
@@ -587,118 +591,43 @@ async function validateDependenciesCommand(tasksPath, options = {}) {
`Analyzing dependencies for ${taskCount} tasks and ${subtaskCount} subtasks...`
);
// Track validation statistics
const stats = {
nonExistentDependenciesRemoved: 0,
selfDependenciesRemoved: 0,
tasksFixed: 0,
subtasksFixed: 0
};
// Create a custom logger instead of reassigning the imported log function
const warnings = [];
const customLogger = function (level, ...args) {
if (level === 'warn') {
warnings.push(args.join(' '));
// Count the type of fix based on the warning message
const msg = args.join(' ');
if (msg.includes('self-dependency')) {
stats.selfDependenciesRemoved++;
} else if (msg.includes('invalid')) {
stats.nonExistentDependenciesRemoved++;
}
// Count if it's a task or subtask being fixed
if (msg.includes('from subtask')) {
stats.subtasksFixed++;
} else if (msg.includes('from task')) {
stats.tasksFixed++;
}
}
// Call the original log function
return log(level, ...args);
};
// Run validation with custom logger
try {
// Temporarily save validateTaskDependencies function with normal log
const originalValidateTaskDependencies = validateTaskDependencies;
// Directly call the validation function
const validationResult = validateTaskDependencies(data.tasks);
// Create patched version that uses customLogger
const patchedValidateTaskDependencies = (tasks, tasksPath) => {
// Temporarily redirect log calls in this scope
const originalLog = log;
const logProxy = function (...args) {
return customLogger(...args);
};
if (!validationResult.valid) {
log(
'error',
`Dependency validation failed. Found ${validationResult.issues.length} issue(s):`
);
validationResult.issues.forEach((issue) => {
let errorMsg = ` [${issue.type.toUpperCase()}] Task ${issue.taskId}: ${issue.message}`;
if (issue.dependencyId) {
errorMsg += ` (Dependency: ${issue.dependencyId})`;
}
log('error', errorMsg); // Log each issue as an error
});
// Call the original function in a context where log calls are intercepted
const result = (() => {
// Use Function.prototype.bind to create a new function that has logProxy available
// Pass isCircularDependency explicitly to make it available
return Function(
'tasks',
'tasksPath',
'log',
'customLogger',
'isCircularDependency',
'taskExists',
`return (${originalValidateTaskDependencies.toString()})(tasks, tasksPath);`
)(
tasks,
tasksPath,
logProxy,
customLogger,
isCircularDependency,
taskExists
);
})();
// Optionally exit if validation fails, depending on desired behavior
// process.exit(1); // Uncomment if validation failure should stop the process
return result;
};
const changesDetected = patchedValidateTaskDependencies(
data.tasks,
tasksPath
);
// Create a detailed report
if (changesDetected) {
log('success', 'Invalid dependencies were removed from tasks.json');
// Show detailed stats in a nice box - only if not in silent mode
// Display summary box even on failure, showing issues found
if (!isSilentMode()) {
console.log(
boxen(
chalk.green(`Dependency Validation Results:\n\n`) +
chalk.red(`Dependency Validation FAILED\n\n`) +
`${chalk.cyan('Tasks checked:')} ${taskCount}\n` +
`${chalk.cyan('Subtasks checked:')} ${subtaskCount}\n` +
`${chalk.cyan('Non-existent dependencies removed:')} ${stats.nonExistentDependenciesRemoved}\n` +
`${chalk.cyan('Self-dependencies removed:')} ${stats.selfDependenciesRemoved}\n` +
`${chalk.cyan('Tasks fixed:')} ${stats.tasksFixed}\n` +
`${chalk.cyan('Subtasks fixed:')} ${stats.subtasksFixed}`,
`${chalk.red('Issues found:')} ${validationResult.issues.length}`, // Display count from result
{
padding: 1,
borderColor: 'green',
borderColor: 'red',
borderStyle: 'round',
margin: { top: 1, bottom: 1 }
}
)
);
// Show all warnings in a collapsible list if there are many
if (warnings.length > 0) {
console.log(chalk.yellow('\nDetailed fixes:'));
warnings.forEach((warning) => {
console.log(` ${warning}`);
});
}
}
// Regenerate task files to reflect the changes
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
log('info', 'Task files regenerated to reflect dependency changes');
} else {
log(
'success',

View File

@@ -6,6 +6,5 @@
// Export all modules
export * from './utils.js';
export * from './ui.js';
export * from './ai-services.js';
export * from './task-manager.js';
export * from './commands.js';

View File

@@ -204,7 +204,6 @@ function transformCursorToRooRules(content) {
);
// 2. Handle tool references - even partial ones
result = result.replace(/search/g, 'search_files');
result = result.replace(/\bedit_file\b/gi, 'apply_diff');
result = result.replace(/\bsearch tool\b/gi, 'search_files tool');
result = result.replace(/\bSearch Tool\b/g, 'Search_Files Tool');

View File

@@ -0,0 +1,438 @@
{
"anthropic": [
{
"id": "claude-3-7-sonnet-20250219",
"swe_score": 0.623,
"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 120000
},
{
"id": "claude-3-5-sonnet-20241022",
"swe_score": 0.49,
"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 64000
},
{
"id": "claude-3-5-haiku-20241022",
"swe_score": 0.406,
"cost_per_1m_tokens": { "input": 0.8, "output": 4.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 64000
},
{
"id": "claude-3-opus-20240229",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 15, "output": 75 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 64000
}
],
"openai": [
{
"id": "gpt-4o",
"swe_score": 0.332,
"cost_per_1m_tokens": { "input": 2.5, "output": 10.0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
},
{
"id": "o1",
"swe_score": 0.489,
"cost_per_1m_tokens": { "input": 15.0, "output": 60.0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "o3",
"swe_score": 0.5,
"cost_per_1m_tokens": { "input": 10.0, "output": 40.0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "o3-mini",
"swe_score": 0.493,
"cost_per_1m_tokens": { "input": 1.1, "output": 4.4 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 100000
},
{
"id": "o4-mini",
"swe_score": 0.45,
"cost_per_1m_tokens": { "input": 1.1, "output": 4.4 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "o1-mini",
"swe_score": 0.4,
"cost_per_1m_tokens": { "input": 1.1, "output": 4.4 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "o1-pro",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 150.0, "output": 600.0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gpt-4-5-preview",
"swe_score": 0.38,
"cost_per_1m_tokens": { "input": 75.0, "output": 150.0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gpt-4-1-mini",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.4, "output": 1.6 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gpt-4-1-nano",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.1, "output": 0.4 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gpt-4o-mini",
"swe_score": 0.3,
"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gpt-4o-search-preview",
"swe_score": 0.33,
"cost_per_1m_tokens": { "input": 2.5, "output": 10.0 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "gpt-4o-mini-search-preview",
"swe_score": 0.3,
"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
"allowed_roles": ["main", "fallback", "research"]
}
],
"google": [
{
"id": "gemini-2.5-pro-exp-03-25",
"swe_score": 0.638,
"cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"]
},
{
"id": "gemini-2.5-flash-preview-04-17",
"swe_score": 0,
"cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"]
},
{
"id": "gemini-2.0-flash",
"swe_score": 0.754,
"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gemini-2.0-flash-thinking-experimental",
"swe_score": 0.754,
"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gemini-2.0-pro",
"swe_score": 0,
"cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"]
}
],
"perplexity": [
{
"id": "sonar-pro",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["research"],
"max_tokens": 8700
},
{
"id": "sonar",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 1, "output": 1 },
"allowed_roles": ["research"],
"max_tokens": 8700
},
{
"id": "deep-research",
"swe_score": 0.211,
"cost_per_1m_tokens": { "input": 2, "output": 8 },
"allowed_roles": ["research"],
"max_tokens": 8700
},
{
"id": "sonar-reasoning-pro",
"swe_score": 0.211,
"cost_per_1m_tokens": { "input": 2, "output": 8 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 8700
},
{
"id": "sonar-reasoning",
"swe_score": 0.211,
"cost_per_1m_tokens": { "input": 1, "output": 5 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 8700
}
],
"xai": [
{
"id": "grok-3",
"name": "Grok 3",
"swe_score": null,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
},
{
"id": "grok-3-mini",
"name": "Grok 3 Mini",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.3, "output": 0.5 },
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
},
{
"id": "grok-3-fast",
"name": "Grok 3 Fast",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 5, "output": 25 },
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
},
{
"id": "grok-3-mini-fast",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.6, "output": 4 },
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
}
],
"ollama": [
{
"id": "gemma3:27b",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gemma3:12b",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "qwq",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "deepseek-r1",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "mistral-small3.1",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "llama3.3",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "phi4",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"]
}
],
"openrouter": [
{
"id": "google/gemini-2.0-flash-001",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.1, "output": 0.4 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 1048576
},
{
"id": "google/gemini-2.5-pro-exp-03-25",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 1000000
},
{
"id": "deepseek/deepseek-chat-v3-0324:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 163840
},
{
"id": "deepseek/deepseek-chat-v3-0324",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.27, "output": 1.1 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 64000
},
{
"id": "deepseek/deepseek-r1:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 163840
},
{
"id": "microsoft/mai-ds-r1:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 163840
},
{
"id": "google/gemini-2.5-pro-preview-03-25",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 1.25, "output": 10 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 65535
},
{
"id": "google/gemini-2.5-flash-preview",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 65535
},
{
"id": "google/gemini-2.5-flash-preview:thinking",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.15, "output": 3.5 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 65535
},
{
"id": "openai/o3",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 10, "output": 40 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 200000
},
{
"id": "openai/o4-mini",
"swe_score": 0.45,
"cost_per_1m_tokens": { "input": 1.1, "output": 4.4 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 100000
},
{
"id": "openai/o4-mini-high",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 1.1, "output": 4.4 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 100000
},
{
"id": "openai/o1-pro",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 150, "output": 600 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 100000
},
{
"id": "meta-llama/llama-3.3-70b-instruct",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 120, "output": 600 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 1048576
},
{
"id": "google/gemma-3-12b-it:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 131072
},
{
"id": "google/gemma-3-12b-it",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 50, "output": 100 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 131072
},
{
"id": "google/gemma-3-27b-it:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 96000
},
{
"id": "google/gemma-3-27b-it",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 100, "output": 200 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 131072
},
{
"id": "qwen/qwq-32b:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 40000
},
{
"id": "qwen/qwq-32b",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 150, "output": 200 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 131072
},
{
"id": "qwen/qwen-max",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 1.6, "output": 6.4 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 32768
},
{
"id": "qwen/qwen-turbo",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.05, "output": 0.2 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 1000000
},
{
"id": "mistralai/mistral-small-3.1-24b-instruct:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 96000
},
{
"id": "mistralai/mistral-small-3.1-24b-instruct",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0.1, "output": 0.3 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 128000
},
{
"id": "thudm/glm-4-32b:free",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"allowed_roles": ["main", "fallback"],
"max_tokens": 32768
}
]
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,153 @@
import path from 'path';
import { log, readJSON, writeJSON } from '../utils.js';
import { isTaskDependentOn } from '../task-manager.js';
import generateTaskFiles from './generate-task-files.js';
/**
* Add a subtask to a parent task
* @param {string} tasksPath - Path to the tasks.json file
* @param {number|string} parentId - ID of the parent task
* @param {number|string|null} existingTaskId - ID of an existing task to convert to subtask (optional)
* @param {Object} newSubtaskData - Data for creating a new subtask (used if existingTaskId is null)
* @param {boolean} generateFiles - Whether to regenerate task files after adding the subtask
* @returns {Object} The newly created or converted subtask
*/
async function addSubtask(
tasksPath,
parentId,
existingTaskId = null,
newSubtaskData = null,
generateFiles = true
) {
try {
log('info', `Adding subtask to parent task ${parentId}...`);
// Read the existing tasks
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
}
// Convert parent ID to number
const parentIdNum = parseInt(parentId, 10);
// Find the parent task
const parentTask = data.tasks.find((t) => t.id === parentIdNum);
if (!parentTask) {
throw new Error(`Parent task with ID ${parentIdNum} not found`);
}
// Initialize subtasks array if it doesn't exist
if (!parentTask.subtasks) {
parentTask.subtasks = [];
}
let newSubtask;
// Case 1: Convert an existing task to a subtask
if (existingTaskId !== null) {
const existingTaskIdNum = parseInt(existingTaskId, 10);
// Find the existing task
const existingTaskIndex = data.tasks.findIndex(
(t) => t.id === existingTaskIdNum
);
if (existingTaskIndex === -1) {
throw new Error(`Task with ID ${existingTaskIdNum} not found`);
}
const existingTask = data.tasks[existingTaskIndex];
// Check if task is already a subtask
if (existingTask.parentTaskId) {
throw new Error(
`Task ${existingTaskIdNum} is already a subtask of task ${existingTask.parentTaskId}`
);
}
// Check for circular dependency
if (existingTaskIdNum === parentIdNum) {
throw new Error(`Cannot make a task a subtask of itself`);
}
// Check if parent task is a subtask of the task we're converting
// This would create a circular dependency
if (isTaskDependentOn(data.tasks, parentTask, existingTaskIdNum)) {
throw new Error(
`Cannot create circular dependency: task ${parentIdNum} is already a subtask or dependent of task ${existingTaskIdNum}`
);
}
// Find the highest subtask ID to determine the next ID
const highestSubtaskId =
parentTask.subtasks.length > 0
? Math.max(...parentTask.subtasks.map((st) => st.id))
: 0;
const newSubtaskId = highestSubtaskId + 1;
// Clone the existing task to be converted to a subtask
newSubtask = {
...existingTask,
id: newSubtaskId,
parentTaskId: parentIdNum
};
// Add to parent's subtasks
parentTask.subtasks.push(newSubtask);
// Remove the task from the main tasks array
data.tasks.splice(existingTaskIndex, 1);
log(
'info',
`Converted task ${existingTaskIdNum} to subtask ${parentIdNum}.${newSubtaskId}`
);
}
// Case 2: Create a new subtask
else if (newSubtaskData) {
// Find the highest subtask ID to determine the next ID
const highestSubtaskId =
parentTask.subtasks.length > 0
? Math.max(...parentTask.subtasks.map((st) => st.id))
: 0;
const newSubtaskId = highestSubtaskId + 1;
// Create the new subtask object
newSubtask = {
id: newSubtaskId,
title: newSubtaskData.title,
description: newSubtaskData.description || '',
details: newSubtaskData.details || '',
status: newSubtaskData.status || 'pending',
dependencies: newSubtaskData.dependencies || [],
parentTaskId: parentIdNum
};
// Add to parent's subtasks
parentTask.subtasks.push(newSubtask);
log('info', `Created new subtask ${parentIdNum}.${newSubtaskId}`);
} else {
throw new Error(
'Either existingTaskId or newSubtaskData must be provided'
);
}
// Write the updated tasks back to the file
writeJSON(tasksPath, data);
// Generate task files if requested
if (generateFiles) {
log('info', 'Regenerating task files...');
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}
return newSubtask;
} catch (error) {
log('error', `Error adding subtask: ${error.message}`);
throw error;
}
}
export default addSubtask;
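Both call forms, as a quick sketch (paths and titles are hypothetical):

```js
// 1) Convert existing top-level task 5 into a subtask of task 2:
await addSubtask('tasks/tasks.json', 2, 5);

// 2) Create a brand-new subtask under task 2, skipping task-file regeneration:
await addSubtask(
	'tasks/tasks.json',
	2,
	null,
	{ title: 'Write unit tests', description: 'Cover the happy path' },
	false
);
```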

View File

@@ -0,0 +1,363 @@
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import { z } from 'zod';
import {
displayBanner,
getStatusWithColor,
startLoadingIndicator,
stopLoadingIndicator
} from '../ui.js';
import { log, readJSON, writeJSON, truncate } from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
// Define Zod schema for the expected AI output object
const AiTaskDataSchema = z.object({
title: z.string().describe('Clear, concise title for the task'),
description: z
.string()
.describe('A one or two sentence description of the task'),
details: z
.string()
.describe('In-depth implementation details, considerations, and guidance'),
testStrategy: z
.string()
.describe('Detailed approach for verifying task completion')
});
/**
* Add a new task using AI
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} prompt - Description of the task to add (required for AI-driven creation)
* @param {Array} dependencies - Task dependencies
* @param {string} priority - Task priority
* @param {function} reportProgress - Function to report progress to MCP server (optional)
* @param {Object} mcpLog - MCP logger object (optional)
* @param {Object} session - Session object from MCP server (optional)
* @param {string} outputFormat - Output format (text or json)
* @param {Object} customEnv - Custom environment variables (optional) - Note: AI params override deprecated
* @param {Object} manualTaskData - Manual task data (optional, for direct task creation without AI)
* @param {boolean} useResearch - Whether to use the research model (passed to unified service)
* @returns {number} The new task ID
*/
async function addTask(
tasksPath,
prompt,
dependencies = [],
priority = getDefaultPriority(), // Keep getter for default priority
{ reportProgress, mcpLog, session } = {},
outputFormat = 'text',
// customEnv = null, // Removed as AI param overrides are deprecated
manualTaskData = null,
useResearch = false // <-- Add useResearch parameter
) {
let loadingIndicator = null;
// Create custom reporter that checks for MCP log
const report = (message, level = 'info') => {
if (mcpLog) {
mcpLog[level](message);
} else if (outputFormat === 'text') {
log(level, message);
}
};
try {
// Only display banner and UI elements for text output (CLI)
if (outputFormat === 'text') {
displayBanner();
console.log(
boxen(chalk.white.bold(`Creating New Task`), {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 1, bottom: 1 }
})
);
}
// Read the existing tasks
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
report('Invalid or missing tasks.json.', 'error');
throw new Error('Invalid or missing tasks.json.');
}
// Find the highest task ID to determine the next ID
const highestId =
data.tasks.length > 0 ? Math.max(...data.tasks.map((t) => t.id)) : 0;
const newTaskId = highestId + 1;
// Only show UI box for CLI mode
if (outputFormat === 'text') {
console.log(
boxen(chalk.white.bold(`Creating New Task #${newTaskId}`), {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 1, bottom: 1 }
})
);
}
// Validate dependencies before proceeding
const invalidDeps = dependencies.filter((depId) => {
// Ensure depId is parsed as a number for comparison
const numDepId = parseInt(depId, 10);
return isNaN(numDepId) || !data.tasks.some((t) => t.id === numDepId);
});
if (invalidDeps.length > 0) {
report(
`The following dependencies do not exist or are invalid: ${invalidDeps.join(', ')}`,
'warn'
);
report('Removing invalid dependencies...', 'info');
dependencies = dependencies.filter(
(depId) => !invalidDeps.includes(depId)
);
}
// Ensure dependencies are numbers
const numericDependencies = dependencies.map((dep) => parseInt(dep, 10));
let taskData;
// Check if manual task data is provided
if (manualTaskData) {
report('Using manually provided task data', 'info');
taskData = manualTaskData;
report('DEBUG: Taking MANUAL task data path.', 'debug');
// Basic validation for manual data
if (
!taskData.title ||
typeof taskData.title !== 'string' ||
!taskData.description ||
typeof taskData.description !== 'string'
) {
throw new Error(
'Manual task data must include at least a title and description.'
);
}
} else {
report('DEBUG: Taking AI task generation path.', 'debug');
// --- Refactored AI Interaction ---
report('Generating task data with AI...', 'info');
// Create context string for task creation prompt
let contextTasks = '';
if (numericDependencies.length > 0) {
const dependentTasks = data.tasks.filter((t) =>
numericDependencies.includes(t.id)
);
contextTasks = `\nThis task depends on the following tasks:\n${dependentTasks
.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
.join('\n')}`;
} else {
const recentTasks = [...data.tasks]
.sort((a, b) => b.id - a.id)
.slice(0, 3);
if (recentTasks.length > 0) {
contextTasks = `\nRecent tasks in the project:\n${recentTasks
.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
.join('\n')}`;
}
}
// System Prompt
const systemPrompt =
"You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema.";
// Task Structure Description (for user prompt)
const taskStructureDesc = `
{
"title": "Task title goes here",
"description": "A concise one or two sentence description of what the task involves",
"details": "In-depth implementation details, considerations, and guidance.",
"testStrategy": "Detailed approach for verifying task completion."
}`;
// Add any manually provided details to the prompt for context
let contextFromArgs = '';
if (manualTaskData?.title)
contextFromArgs += `\n- Suggested Title: "${manualTaskData.title}"`;
if (manualTaskData?.description)
contextFromArgs += `\n- Suggested Description: "${manualTaskData.description}"`;
if (manualTaskData?.details)
contextFromArgs += `\n- Additional Details Context: "${manualTaskData.details}"`;
if (manualTaskData?.testStrategy)
contextFromArgs += `\n- Additional Test Strategy Context: "${manualTaskData.testStrategy}"`;
// User Prompt
const userPrompt = `Create a comprehensive new task (Task #${newTaskId}) for a software development project based on this description: "${prompt}"
${contextTasks}
${contextFromArgs ? `\nConsider these additional details provided by the user:${contextFromArgs}` : ''}
Return your answer as a single JSON object matching the schema precisely:
${taskStructureDesc}
Make sure the details and test strategy are thorough and specific.`;
// Start the loading indicator - only for text mode
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI...`
);
}
try {
// Determine the service role based on the useResearch flag
const serviceRole = useResearch ? 'research' : 'main';
report('DEBUG: Calling generateObjectService...', 'debug');
// Call the unified AI service
const aiGeneratedTaskData = await generateObjectService({
role: serviceRole, // <-- Use the determined role
session: session, // Pass session for API key resolution
schema: AiTaskDataSchema, // Pass the Zod schema
objectName: 'newTaskData', // Name for the object
systemPrompt: systemPrompt,
prompt: userPrompt,
reportProgress // Pass progress reporter if available
});
report('DEBUG: generateObjectService returned successfully.', 'debug');
report('Successfully generated task data from AI.', 'success');
taskData = aiGeneratedTaskData; // Assign the validated object
} catch (error) {
report(
`DEBUG: generateObjectService caught error: ${error.message}`,
'debug'
);
report(`Error generating task with AI: ${error.message}`, 'error');
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
throw error; // Re-throw error after logging
} finally {
report('DEBUG: generateObjectService finally block reached.', 'debug');
if (loadingIndicator) stopLoadingIndicator(loadingIndicator); // Ensure indicator stops
}
// --- End Refactored AI Interaction ---
}
// Create the new task object
const newTask = {
id: newTaskId,
title: taskData.title,
description: taskData.description,
details: taskData.details || '',
testStrategy: taskData.testStrategy || '',
status: 'pending',
dependencies: numericDependencies, // Use validated numeric dependencies
priority: priority,
subtasks: [] // Initialize with empty subtasks array
};
// Add the task to the tasks array
data.tasks.push(newTask);
report('DEBUG: Writing tasks.json...', 'debug');
// Write the updated tasks to the file
writeJSON(tasksPath, data);
report('DEBUG: tasks.json written.', 'debug');
// Generate markdown task files
report('Generating task files...', 'info');
report('DEBUG: Calling generateTaskFiles...', 'debug');
// Pass mcpLog if available to generateTaskFiles
await generateTaskFiles(tasksPath, path.dirname(tasksPath), { mcpLog });
report('DEBUG: generateTaskFiles finished.', 'debug');
// Show success message - only for text output (CLI)
if (outputFormat === 'text') {
const table = new Table({
head: [
chalk.cyan.bold('ID'),
chalk.cyan.bold('Title'),
chalk.cyan.bold('Description')
],
colWidths: [5, 30, 50] // Adjust widths as needed
});
table.push([
newTask.id,
truncate(newTask.title, 27),
truncate(newTask.description, 47)
]);
console.log(chalk.green('✅ New task created successfully:'));
console.log(table.toString());
// Helper to get priority color
const getPriorityColor = (p) => {
switch (p?.toLowerCase()) {
case 'high':
return 'red';
case 'low':
return 'gray';
case 'medium':
default:
return 'yellow';
}
};
// Show success message box
console.log(
boxen(
chalk.white.bold(`Task ${newTaskId} Created Successfully`) +
'\n\n' +
chalk.white(`Title: ${newTask.title}`) +
'\n' +
chalk.white(`Status: ${getStatusWithColor(newTask.status)}`) +
'\n' +
chalk.white(
`Priority: ${chalk[getPriorityColor(newTask.priority)](newTask.priority)}`
) +
'\n' +
(numericDependencies.length > 0
? chalk.white(`Dependencies: ${numericDependencies.join(', ')}`) +
'\n'
: '') +
'\n' +
chalk.white.bold('Next Steps:') +
'\n' +
chalk.cyan(
`1. Run ${chalk.yellow(`task-master show ${newTaskId}`)} to see complete task details`
) +
'\n' +
chalk.cyan(
`2. Run ${chalk.yellow(`task-master set-status --id=${newTaskId} --status=in-progress`)} to start working on it`
) +
'\n' +
chalk.cyan(
`3. Run ${chalk.yellow(`task-master expand --id=${newTaskId}`)} to break it down into subtasks`
),
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
)
);
}
// Return the new task ID
report(`DEBUG: Returning new task ID: ${newTaskId}`, 'debug');
return newTaskId;
} catch (error) {
// Stop any loading indicator on error
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
}
report(`Error adding task: ${error.message}`, 'error');
if (outputFormat === 'text') {
console.error(chalk.red(`Error: ${error.message}`));
}
// In MCP mode, we let the direct function handler catch and format
throw error;
}
}
export default addTask;
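An illustrative invocation (hypothetical prompt and dependencies; the final flag routes the call to the configured research model):

```js
const newTaskId = await addTask(
	'tasks/tasks.json',
	'Add request logging middleware to the API server',
	[3, 4],  // dependencies on existing tasks
	'high',  // priority
	{},      // no MCP context (plain CLI usage)
	'text',  // CLI output with banner and boxes
	null,    // no manual task data, so the AI path is taken
	true     // useResearch: use the 'research' role
);
```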

View File

@@ -0,0 +1,484 @@
import chalk from 'chalk';
import boxen from 'boxen';
import readline from 'readline';
import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';
import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag, getProjectName } from '../config-manager.js';
/**
* Generates the prompt for complexity analysis.
* (Moved from ai-services.js and simplified)
* @param {Object} tasksData - The tasks data object.
* @returns {string} The generated prompt.
*/
function generateInternalComplexityAnalysisPrompt(tasksData) {
const tasksString = JSON.stringify(tasksData.tasks, null, 2);
return `Analyze the following tasks to determine their complexity (1-10 scale) and recommend the number of subtasks for expansion. Provide a brief reasoning and an initial expansion prompt for each.
Tasks:
${tasksString}
Respond ONLY with a valid JSON array matching the schema:
[
{
"taskId": <number>,
"taskTitle": "<string>",
"complexityScore": <number 1-10>,
"recommendedSubtasks": <number>,
"expansionPrompt": "<string>",
"reasoning": "<string>"
},
...
]
Do not include any explanatory text, markdown formatting, or code block markers before or after the JSON array.`;
}
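// Illustrative example (not part of the module): for tasksData containing a
// single task { id: 1, title: 'Set up CI pipeline', status: 'pending' }, the
// model is expected to reply with a bare JSON array such as:
//
// [
//   {
//     "taskId": 1,
//     "taskTitle": "Set up CI pipeline",
//     "complexityScore": 4,
//     "recommendedSubtasks": 3,
//     "expansionPrompt": "Break CI setup into build, test and deploy stages.",
//     "reasoning": "Well-understood tooling with a moderate configuration surface."
//   }
// ]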
/**
* Analyzes task complexity and generates expansion recommendations
* @param {Object} options Command options
* @param {string} options.file - Path to tasks file
* @param {string} options.output - Path to report output file
* @param {string|number} [options.threshold] - Complexity threshold
* @param {boolean} [options.research] - Use research role
* @param {Object} [options._filteredTasksData] - Pre-filtered task data (internal use)
* @param {number} [options._originalTaskCount] - Original task count (internal use)
* @param {Object} context - Context object, potentially containing session and mcpLog
* @param {Object} [context.session] - Session object from MCP server (optional)
* @param {Object} [context.mcpLog] - MCP logger object (optional)
* @param {function} [context.reportProgress] - Deprecated: Function to report progress (ignored)
*/
async function analyzeTaskComplexity(options, context = {}) {
const { session, mcpLog } = context;
const tasksPath = options.file || 'tasks/tasks.json';
const outputPath = options.output || 'scripts/task-complexity-report.json';
const thresholdScore = parseFloat(options.threshold || '5');
const useResearch = options.research || false;
const outputFormat = mcpLog ? 'json' : 'text';
const reportLog = (message, level = 'info') => {
if (mcpLog) {
mcpLog[level](message);
} else if (!isSilentMode() && outputFormat === 'text') {
log(level, message);
}
};
if (outputFormat === 'text') {
console.log(
chalk.blue(
`Analyzing task complexity and generating expansion recommendations...`
)
);
}
try {
reportLog(`Reading tasks from ${tasksPath}...`, 'info');
let tasksData;
let originalTaskCount = 0;
if (options._filteredTasksData) {
tasksData = options._filteredTasksData;
originalTaskCount = options._originalTaskCount || tasksData.tasks.length;
if (!options._originalTaskCount) {
try {
const originalData = readJSON(tasksPath);
if (originalData && originalData.tasks) {
originalTaskCount = originalData.tasks.length;
}
} catch (e) {
log('warn', `Could not read original tasks file: ${e.message}`);
}
}
} else {
tasksData = readJSON(tasksPath);
if (
!tasksData ||
!tasksData.tasks ||
!Array.isArray(tasksData.tasks) ||
tasksData.tasks.length === 0
) {
throw new Error('No tasks found in the tasks file');
}
originalTaskCount = tasksData.tasks.length;
const activeStatuses = ['pending', 'blocked', 'in-progress'];
const filteredTasks = tasksData.tasks.filter((task) =>
activeStatuses.includes(task.status?.toLowerCase() || 'pending')
);
tasksData = {
...tasksData,
tasks: filteredTasks,
_originalTaskCount: originalTaskCount
};
}
const skippedCount = originalTaskCount - tasksData.tasks.length;
reportLog(
`Found ${originalTaskCount} total tasks in the task file.`,
'info'
);
if (skippedCount > 0) {
const skipMessage = `Skipping ${skippedCount} tasks marked as done/cancelled/deferred. Analyzing ${tasksData.tasks.length} active tasks.`;
reportLog(skipMessage, 'info');
if (outputFormat === 'text') {
console.log(chalk.yellow(skipMessage));
}
}
if (tasksData.tasks.length === 0) {
const emptyReport = {
meta: {
generatedAt: new Date().toISOString(),
tasksAnalyzed: 0,
thresholdScore: thresholdScore,
projectName: getProjectName(session),
usedResearch: useResearch
},
complexityAnalysis: []
};
reportLog(`Writing empty complexity report to ${outputPath}...`, 'info');
writeJSON(outputPath, emptyReport);
reportLog(
`Task complexity analysis complete. Report written to ${outputPath}`,
'success'
);
if (outputFormat === 'text') {
console.log(
chalk.green(
`Task complexity analysis complete. Report written to ${outputPath}`
)
);
const highComplexity = 0;
const mediumComplexity = 0;
const lowComplexity = 0;
const totalAnalyzed = 0;
console.log('\nComplexity Analysis Summary:');
console.log('----------------------------');
console.log(`Tasks in input file: ${originalTaskCount}`);
console.log(`Tasks successfully analyzed: ${totalAnalyzed}`);
console.log(`High complexity tasks: ${highComplexity}`);
console.log(`Medium complexity tasks: ${mediumComplexity}`);
console.log(`Low complexity tasks: ${lowComplexity}`);
console.log(
`Sum verification: ${highComplexity + mediumComplexity + lowComplexity} (should equal ${totalAnalyzed})`
);
console.log(`Research-backed analysis: ${useResearch ? 'Yes' : 'No'}`);
console.log(
`\nSee ${outputPath} for the full report and expansion commands.`
);
console.log(
boxen(
chalk.white.bold('Suggested Next Steps:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master complexity-report')} to review detailed findings\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down complex tasks\n` +
`${chalk.cyan('3.')} Run ${chalk.yellow('task-master expand --all')} to expand all pending tasks based on complexity`,
{
padding: 1,
borderColor: 'cyan',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
}
return emptyReport;
}
const prompt = generateInternalComplexityAnalysisPrompt(tasksData);
// System prompt remains simple for text generation
const systemPrompt =
'You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.';
let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator('Calling AI service...');
}
let fullResponse = ''; // To store the raw text response
try {
const role = useResearch ? 'research' : 'main';
reportLog(`Using AI service with role: ${role}`, 'info');
// Use generateTextService: we want raw text here, not a structured object
fullResponse = await generateTextService({
prompt,
systemPrompt,
role,
session
// No schema or objectName needed
});
reportLog(
'Successfully received text response via AI service',
'success'
);
// --- Stop Loading Indicator ---
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
loadingIndicator = null;
}
if (outputFormat === 'text') {
readline.clearLine(process.stdout, 0);
readline.cursorTo(process.stdout, 0);
console.log(
chalk.green('AI service call complete. Parsing response...')
);
}
// --- End Stop Loading Indicator ---
// --- Manual JSON Parsing & Cleanup ---
reportLog(`Parsing complexity analysis from text response...`, 'info');
let complexityAnalysis;
try {
let cleanedResponse = fullResponse;
// Basic trim first
cleanedResponse = cleanedResponse.trim();
// Remove potential markdown code block fences
const codeBlockMatch = cleanedResponse.match(
/```(?:json)?\s*([\s\S]*?)\s*```/
);
if (codeBlockMatch) {
cleanedResponse = codeBlockMatch[1].trim(); // Trim content inside block
reportLog('Extracted JSON from code block', 'info');
} else {
// If no code block, ensure it starts with '[' and ends with ']'
// This is less robust but a common fallback
const firstBracket = cleanedResponse.indexOf('[');
const lastBracket = cleanedResponse.lastIndexOf(']');
if (firstBracket !== -1 && lastBracket > firstBracket) {
cleanedResponse = cleanedResponse.substring(
firstBracket,
lastBracket + 1
);
reportLog('Extracted content between first [ and last ]', 'info');
} else {
reportLog(
'Warning: Response does not appear to be a JSON array.',
'warn'
);
// Keep going, maybe JSON.parse can handle it or will fail informatively
}
}
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(chalk.gray('Attempting to parse cleaned JSON...'));
console.log(chalk.gray('Cleaned response (first 100 chars):'));
console.log(chalk.gray(cleanedResponse.substring(0, 100)));
console.log(chalk.gray('Last 100 chars:'));
console.log(
chalk.gray(cleanedResponse.substring(cleanedResponse.length - 100))
);
}
try {
complexityAnalysis = JSON.parse(cleanedResponse);
} catch (jsonError) {
reportLog(
'Initial JSON parsing failed. Raw response might be malformed.',
'error'
);
reportLog(`Original JSON Error: ${jsonError.message}`, 'error');
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(chalk.red('--- Start Raw Malformed Response ---'));
console.log(chalk.gray(fullResponse));
console.log(chalk.red('--- End Raw Malformed Response ---'));
}
// Re-throw the specific JSON parsing error
throw new Error(
`Failed to parse JSON response: ${jsonError.message}`
);
}
// Ensure it's an array after parsing
if (!Array.isArray(complexityAnalysis)) {
throw new Error('Parsed response is not a valid JSON array.');
}
} catch (error) {
// Catch errors specifically from the parsing/cleanup block
if (loadingIndicator) stopLoadingIndicator(loadingIndicator); // Ensure indicator stops
reportLog(
`Error parsing complexity analysis JSON: ${error.message}`,
'error'
);
if (outputFormat === 'text') {
console.error(
chalk.red(
`Error parsing complexity analysis JSON: ${error.message}`
)
);
}
throw error; // Re-throw parsing error
}
// --- End Manual JSON Parsing & Cleanup ---
// --- Post-processing: Fill In Missing Task Analyses ---
const taskIds = tasksData.tasks.map((t) => t.id);
const analysisTaskIds = complexityAnalysis.map((a) => a.taskId);
const missingTaskIds = taskIds.filter(
(id) => !analysisTaskIds.includes(id)
);
if (missingTaskIds.length > 0) {
reportLog(
`Missing analysis for ${missingTaskIds.length} tasks: ${missingTaskIds.join(', ')}`,
'warn'
);
if (outputFormat === 'text') {
console.log(
chalk.yellow(
`Missing analysis for ${missingTaskIds.length} tasks: ${missingTaskIds.join(', ')}`
)
);
}
for (const missingId of missingTaskIds) {
const missingTask = tasksData.tasks.find((t) => t.id === missingId);
if (missingTask) {
reportLog(`Adding default analysis for task ${missingId}`, 'info');
complexityAnalysis.push({
taskId: missingId,
taskTitle: missingTask.title,
complexityScore: 5,
recommendedSubtasks: 3,
expansionPrompt: `Break down this task with a focus on ${missingTask.title.toLowerCase()}.`,
reasoning:
'Automatically added due to missing analysis in AI response.'
});
}
}
}
// --- End Post-processing ---
// --- Report Creation & Writing ---
const finalReport = {
meta: {
generatedAt: new Date().toISOString(),
tasksAnalyzed: tasksData.tasks.length,
thresholdScore: thresholdScore,
projectName: getProjectName(session),
usedResearch: useResearch
},
complexityAnalysis: complexityAnalysis
};
reportLog(`Writing complexity report to ${outputPath}...`, 'info');
writeJSON(outputPath, finalReport);
reportLog(
`Task complexity analysis complete. Report written to ${outputPath}`,
'success'
);
// --- End Report Creation & Writing ---
// --- Display CLI Summary ---
if (outputFormat === 'text') {
console.log(
chalk.green(
`Task complexity analysis complete. Report written to ${outputPath}`
)
);
const highComplexity = complexityAnalysis.filter(
(t) => t.complexityScore >= 8
).length;
const mediumComplexity = complexityAnalysis.filter(
(t) => t.complexityScore >= 5 && t.complexityScore < 8
).length;
const lowComplexity = complexityAnalysis.filter(
(t) => t.complexityScore < 5
).length;
const totalAnalyzed = complexityAnalysis.length;
console.log('\nComplexity Analysis Summary:');
console.log('----------------------------');
console.log(
`Active tasks sent for analysis: ${tasksData.tasks.length}`
);
console.log(`Tasks successfully analyzed: ${totalAnalyzed}`);
console.log(`High complexity tasks: ${highComplexity}`);
console.log(`Medium complexity tasks: ${mediumComplexity}`);
console.log(`Low complexity tasks: ${lowComplexity}`);
console.log(
`Sum verification: ${highComplexity + mediumComplexity + lowComplexity} (should equal ${totalAnalyzed})`
);
console.log(`Research-backed analysis: ${useResearch ? 'Yes' : 'No'}`);
console.log(
`\nSee ${outputPath} for the full report and expansion commands.`
);
console.log(
boxen(
chalk.white.bold('Suggested Next Steps:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master complexity-report')} to review detailed findings\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down complex tasks\n` +
`${chalk.cyan('3.')} Run ${chalk.yellow('task-master expand --all')} to expand all pending tasks based on complexity`,
{
padding: 1,
borderColor: 'cyan',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
if (getDebugFlag(session)) {
console.debug(
chalk.gray(
`Final analysis object: ${JSON.stringify(finalReport, null, 2)}`
)
);
}
}
// --- End Display CLI Summary ---
return finalReport;
} catch (error) {
// Catches errors from generateTextService call
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
reportLog(`Error during AI service call: ${error.message}`, 'error');
if (outputFormat === 'text') {
console.error(
chalk.red(`Error during AI service call: ${error.message}`)
);
if (error.message.includes('API key')) {
console.log(
chalk.yellow(
'\nPlease ensure your API keys are correctly configured in .env or ~/.taskmaster/.env'
)
);
console.log(
chalk.yellow("Run 'task-master models --setup' if needed.")
);
}
}
throw error; // Re-throw AI service error
}
} catch (error) {
// Catches general errors (file read, etc.)
reportLog(`Error analyzing task complexity: ${error.message}`, 'error');
if (outputFormat === 'text') {
console.error(
chalk.red(`Error analyzing task complexity: ${error.message}`)
);
if (getDebugFlag(session)) {
console.error(error);
}
process.exit(1);
} else {
throw error;
}
}
}
export default analyzeTaskComplexity;
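// Minimal usage sketch (illustrative; the option values below are assumptions,
// not defaults beyond those shown in the code above):
//
// import analyzeTaskComplexity from './analyze-task-complexity.js';
//
// const report = await analyzeTaskComplexity(
//   {
//     file: 'tasks/tasks.json',
//     output: 'scripts/task-complexity-report.json',
//     threshold: '6',
//     research: true
//   },
//   {} // empty context: no session or mcpLog, so output renders as CLI text
// );
// console.log(report.meta.tasksAnalyzed);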

View File

@@ -0,0 +1,144 @@
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import { log, readJSON, writeJSON, truncate } from '../utils.js';
import { displayBanner } from '../ui.js';
import generateTaskFiles from './generate-task-files.js';
/**
* Clear subtasks from specified tasks
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} taskIds - Task IDs to clear subtasks from
*/
function clearSubtasks(tasksPath, taskIds) {
displayBanner();
log('info', `Reading tasks from ${tasksPath}...`);
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
log('error', 'No valid tasks found.');
process.exit(1);
}
console.log(
boxen(chalk.white.bold('Clearing Subtasks'), {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 1, bottom: 1 }
})
);
// Handle multiple task IDs (comma-separated)
const taskIdArray = taskIds.split(',').map((id) => id.trim());
let clearedCount = 0;
// Create a summary table for the cleared subtasks
const summaryTable = new Table({
head: [
chalk.cyan.bold('Task ID'),
chalk.cyan.bold('Task Title'),
chalk.cyan.bold('Subtasks Cleared')
],
colWidths: [10, 50, 20],
style: { head: [], border: [] }
});
taskIdArray.forEach((taskId) => {
const id = parseInt(taskId, 10);
if (isNaN(id)) {
log('error', `Invalid task ID: ${taskId}`);
return;
}
const task = data.tasks.find((t) => t.id === id);
if (!task) {
log('error', `Task ${id} not found`);
return;
}
if (!task.subtasks || task.subtasks.length === 0) {
log('info', `Task ${id} has no subtasks to clear`);
summaryTable.push([
id.toString(),
truncate(task.title, 47),
chalk.yellow('No subtasks')
]);
return;
}
const subtaskCount = task.subtasks.length;
task.subtasks = [];
clearedCount++;
log('info', `Cleared ${subtaskCount} subtasks from task ${id}`);
summaryTable.push([
id.toString(),
truncate(task.title, 47),
chalk.green(`${subtaskCount} subtasks cleared`)
]);
});
if (clearedCount > 0) {
writeJSON(tasksPath, data);
// Show summary table
console.log(
boxen(chalk.white.bold('Subtask Clearing Summary:'), {
padding: { left: 2, right: 2, top: 0, bottom: 0 },
margin: { top: 1, bottom: 0 },
borderColor: 'blue',
borderStyle: 'round'
})
);
console.log(summaryTable.toString());
// Regenerate task files to reflect changes
log('info', 'Regenerating task files...');
generateTaskFiles(tasksPath, path.dirname(tasksPath));
// Success message
console.log(
boxen(
chalk.green(
`Successfully cleared subtasks from ${chalk.bold(clearedCount)} task(s)`
),
{
padding: 1,
borderColor: 'green',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
// Next steps suggestion
console.log(
boxen(
chalk.white.bold('Next Steps:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master expand --id=<id>')} to generate new subtasks\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master list --with-subtasks')} to verify changes`,
{
padding: 1,
borderColor: 'cyan',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
} else {
console.log(
boxen(chalk.yellow('No subtasks were cleared'), {
padding: 1,
borderColor: 'yellow',
borderStyle: 'round',
margin: { top: 1 }
})
);
}
}
export default clearSubtasks;
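// Minimal usage sketch (illustrative):
//
// import clearSubtasks from './clear-subtasks.js';
//
// // Clear all subtasks from tasks 3 and 5 (IDs are comma-separated)
// clearSubtasks('tasks/tasks.json', '3,5');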

View File

@@ -0,0 +1,177 @@
import { log, readJSON, isSilentMode } from '../utils.js';
import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
import expandTask from './expand-task.js';
import { getDebugFlag } from '../config-manager.js';
/**
* Expand all eligible pending or in-progress tasks using the expandTask function.
* @param {string} tasksPath - Path to the tasks.json file
* @param {number} [numSubtasks] - Optional: Target number of subtasks per task.
* @param {boolean} [useResearch=false] - Whether to use the research AI role.
* @param {string} [additionalContext=''] - Optional additional context.
* @param {boolean} [force=false] - Force expansion even if tasks already have subtasks.
* @param {Object} context - Context object containing session and mcpLog.
* @param {Object} [context.session] - Session object from MCP.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [outputFormat='text'] - Output format ('text' or 'json'). MCP calls should use 'json'.
* @returns {Promise<{success: boolean, expandedCount: number, failedCount: number, skippedCount: number, tasksToExpand: number, message?: string}>} - Result summary.
*/
async function expandAllTasks(
tasksPath,
numSubtasks, // Optional; expandTask resolves defaults and complexity-report values
useResearch = false,
additionalContext = '',
force = false, // Used by the eligibility filter below
context = {},
outputFormat = 'text' // Assume text default for CLI
) {
const { session, mcpLog } = context;
const isMCPCall = !!mcpLog; // Determine if called from MCP
// Use mcpLog if available, otherwise use the default console log wrapper respecting silent mode
const logger =
mcpLog ||
(outputFormat === 'json'
? {
// Basic logger for JSON output mode
info: (msg) => {},
warn: (msg) => {},
error: (msg) => console.error(`ERROR: ${msg}`), // Still log errors
debug: (msg) => {}
}
: {
// CLI logger respecting silent mode
info: (msg) => !isSilentMode() && log('info', msg),
warn: (msg) => !isSilentMode() && log('warn', msg),
error: (msg) => !isSilentMode() && log('error', msg),
debug: (msg) =>
!isSilentMode() && getDebugFlag(session) && log('debug', msg)
});
let loadingIndicator = null;
let expandedCount = 0;
let failedCount = 0;
// No skipped count needed now as the filter handles it upfront
let tasksToExpandCount = 0; // Renamed for clarity
if (!isMCPCall && outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
'Analyzing tasks for expansion...'
);
}
try {
logger.info(`Reading tasks from ${tasksPath}`);
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`Invalid tasks data in ${tasksPath}`);
}
// --- Select tasks eligible for expansion ---
const tasksToExpand = data.tasks.filter(
(task) =>
(task.status === 'pending' || task.status === 'in-progress') && // Include 'in-progress'
(!task.subtasks || task.subtasks.length === 0 || force) // Check subtasks/force here
);
tasksToExpandCount = tasksToExpand.length; // Get the count from the filtered array
logger.info(`Found ${tasksToExpandCount} tasks eligible for expansion.`);
// --- End Eligibility Filtering ---
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator, 'Analysis complete.');
}
if (tasksToExpandCount === 0) {
logger.info('No tasks eligible for expansion.');
return {
success: true, // Indicate overall success despite no action
expandedCount: 0,
failedCount: 0,
skippedCount: 0,
tasksToExpand: 0,
message: 'No tasks eligible for expansion.'
};
}
// Iterate over the already filtered tasks
for (const task of tasksToExpand) {
// Eligibility (status, existing subtasks, force) is fully handled by the
// filter above, so no per-task skip check is needed here.
// Start indicator for individual task expansion in CLI mode
let taskIndicator = null;
if (!isMCPCall && outputFormat === 'text') {
taskIndicator = startLoadingIndicator(`Expanding task ${task.id}...`);
}
try {
// Call the refactored expandTask function
await expandTask(
tasksPath,
task.id,
numSubtasks, // Pass numSubtasks, expandTask handles defaults/complexity
useResearch,
additionalContext,
context, // Pass the whole context object { session, mcpLog }
force // Pass the force flag down
);
expandedCount++;
if (taskIndicator) {
stopLoadingIndicator(taskIndicator, `Task ${task.id} expanded.`);
}
logger.info(`Successfully expanded task ${task.id}.`);
} catch (error) {
failedCount++;
if (taskIndicator) {
stopLoadingIndicator(
taskIndicator,
`Failed to expand task ${task.id}.`,
false
);
}
logger.error(`Failed to expand task ${task.id}: ${error.message}`);
// Continue to the next task
}
}
// Log final summary
logger.info(
`Expansion complete: ${expandedCount} expanded, ${failedCount} failed.`
);
// Return summary; the upfront filter guarantees skippedCount is 0
return {
success: true, // Indicate overall success
expandedCount,
failedCount,
skippedCount: 0,
tasksToExpand: tasksToExpandCount
};
} catch (error) {
if (loadingIndicator)
stopLoadingIndicator(loadingIndicator, 'Error.', false);
logger.error(`Error during expand all operation: ${error.message}`);
if (!isMCPCall && getDebugFlag(session)) {
console.error(error); // Log full stack in debug CLI mode
}
// Re-throw error for the caller to handle, the direct function will format it
throw error; // Let direct function wrapper handle formatting
}
}
export default expandAllTasks;
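// Minimal usage sketch (illustrative; the argument values are assumptions):
//
// import expandAllTasks from './expand-all-tasks.js';
//
// const summary = await expandAllTasks(
//   'tasks/tasks.json',
//   5, // target subtasks per task (null/undefined defers to report/config)
//   false, // useResearch
//   '', // additionalContext
//   true, // force: also re-expand tasks that already have subtasks
//   {}, // context: no session/mcpLog, so CLI behaviour applies
//   'text'
// );
// // summary => { success: true, expandedCount, failedCount, skippedCount: 0, tasksToExpand }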

View File

@@ -0,0 +1,570 @@
import fs from 'fs';
import path from 'path';
import { z } from 'zod';
import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';
import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
import { generateTextService } from '../ai-services-unified.js';
import { getDefaultSubtasks, getDebugFlag } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
// --- Zod Schemas ---
const subtaskSchema = z
.object({
id: z
.number()
.int()
.positive()
.describe('Sequential subtask ID starting from 1'),
title: z.string().min(5).describe('Clear, specific title for the subtask'),
description: z
.string()
.min(10)
.describe('Detailed description of the subtask'),
dependencies: z
.array(z.number().int())
.describe('IDs of prerequisite subtasks within this expansion'),
details: z.string().min(20).describe('Implementation details and guidance'),
status: z
.string()
.describe(
'The current status of the subtask (should be pending initially)'
),
testStrategy: z
.string()
.optional()
.describe('Approach for testing this subtask')
})
.strict();
const subtaskArraySchema = z.array(subtaskSchema);
const subtaskWrapperSchema = z.object({
subtasks: subtaskArraySchema.describe('The array of generated subtasks.')
});
// --- End Zod Schemas ---
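// Illustrative schema check (not part of the module): a subtask must satisfy
// the minimum lengths above, so a quick sanity test could look like:
//
// const result = subtaskWrapperSchema.safeParse({
//   subtasks: [
//     {
//       id: 1,
//       title: 'Implement parser',
//       description: 'Parse the AI response into subtask objects',
//       dependencies: [],
//       details: 'Strip markdown fences, JSON.parse the body, then validate with Zod.',
//       status: 'pending'
//     }
//   ]
// });
// // result.success === true; result.data.subtasks holds the validated array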
/**
* Generates the system prompt for the main AI role (e.g., Claude).
* @param {number} subtaskCount - The target number of subtasks.
* @returns {string} The system prompt.
*/
function generateMainSystemPrompt(subtaskCount) {
return `You are an AI assistant helping with task breakdown for software development.
You need to break down a high-level task into ${subtaskCount} specific subtasks that can be implemented one by one.
Subtasks should:
1. Be specific and actionable implementation steps
2. Follow a logical sequence
3. Each handle a distinct part of the parent task
4. Include clear guidance on implementation approach
5. Have appropriate dependency chains between subtasks (using the new sequential IDs)
6. Collectively cover all aspects of the parent task
For each subtask, provide:
- id: Sequential integer starting from the provided nextSubtaskId
- title: Clear, specific title
- description: Detailed description
- dependencies: Array of prerequisite subtask IDs (use the new sequential IDs)
- details: Implementation details
- testStrategy: Optional testing approach
Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array matching the structure described. Do not include any explanatory text, markdown formatting, or code block markers.`;
}
/**
* Generates the user prompt for the main AI role (e.g., Claude).
* @param {Object} task - The parent task object.
* @param {number} subtaskCount - The target number of subtasks.
* @param {string} additionalContext - Optional additional context.
* @param {number} nextSubtaskId - The starting ID for the new subtasks.
* @returns {string} The user prompt.
*/
function generateMainUserPrompt(
task,
subtaskCount,
additionalContext,
nextSubtaskId
) {
const contextPrompt = additionalContext
? `\n\nAdditional context: ${additionalContext}`
: '';
const schemaDescription = `
{
"subtasks": [
{
"id": ${nextSubtaskId}, // First subtask ID
"title": "Specific subtask title",
"description": "Detailed description",
"dependencies": [], // e.g., [${nextSubtaskId + 1}] if it depends on the next
"details": "Implementation guidance",
"testStrategy": "Optional testing approach"
},
// ... (repeat for a total of ${subtaskCount} subtasks with sequential IDs)
]
}`;
return `Break down this task into exactly ${subtaskCount} specific subtasks:
Task ID: ${task.id}
Title: ${task.title}
Description: ${task.description}
Current details: ${task.details || 'None'}
${contextPrompt}
Return ONLY the JSON object containing the "subtasks" array, matching this structure:
${schemaDescription}`;
}
/**
* Generates the user prompt for the research AI role (e.g., Perplexity).
* @param {Object} task - The parent task object.
* @param {number} subtaskCount - The target number of subtasks.
* @param {string} additionalContext - Optional additional context.
* @param {number} nextSubtaskId - The starting ID for the new subtasks.
* @returns {string} The user prompt.
*/
function generateResearchUserPrompt(
task,
subtaskCount,
additionalContext,
nextSubtaskId
) {
const contextPrompt = additionalContext
? `\n\nConsider this context: ${additionalContext}`
: '';
const schemaDescription = `
{
"subtasks": [
{
"id": <number>, // Sequential ID starting from ${nextSubtaskId}
"title": "<string>",
"description": "<string>",
"dependencies": [<number>], // e.g., [${nextSubtaskId + 1}]
"details": "<string>",
"testStrategy": "<string>" // Optional
},
// ... (repeat for ${subtaskCount} subtasks)
]
}`;
return `Analyze the following task and break it down into exactly ${subtaskCount} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
Parent Task:
ID: ${task.id}
Title: ${task.title}
Description: ${task.description}
Current details: ${task.details || 'None'}
${contextPrompt}
CRITICAL: Respond ONLY with a valid JSON object containing a single key "subtasks". The value must be an array of the generated subtasks, strictly matching this structure:
${schemaDescription}
Do not include ANY explanatory text, markdown, or code block markers. Just the JSON object.`;
}
/**
* Parse subtasks from AI's text response. Includes basic cleanup.
* @param {string} text - Response text from AI.
* @param {number} startId - Starting subtask ID expected.
* @param {number} expectedCount - Expected number of subtasks.
* @param {number} parentTaskId - Parent task ID for context.
* @param {Object} logger - Logging object (mcpLog or console log).
* @returns {Array} Parsed and potentially corrected subtasks array.
* @throws {Error} If parsing fails or JSON is invalid/malformed.
*/
function parseSubtasksFromText(
text,
startId,
expectedCount,
parentTaskId,
logger
) {
logger.info('Attempting to parse subtasks object from text response...');
if (!text || text.trim() === '') {
throw new Error('AI response text is empty.');
}
let cleanedResponse = text.trim();
const originalResponseForDebug = cleanedResponse;
// 1. Extract from Markdown code block first
const codeBlockMatch = cleanedResponse.match(
/```(?:json)?\s*([\s\S]*?)\s*```/
);
if (codeBlockMatch) {
cleanedResponse = codeBlockMatch[1].trim();
logger.info('Extracted JSON content from Markdown code block.');
} else {
// 2. If no code block, find first '{' and last '}' for the object
const firstBrace = cleanedResponse.indexOf('{');
const lastBrace = cleanedResponse.lastIndexOf('}');
if (firstBrace !== -1 && lastBrace > firstBrace) {
cleanedResponse = cleanedResponse.substring(firstBrace, lastBrace + 1);
logger.info('Extracted content between first { and last }.');
} else {
logger.warn(
'Response does not appear to contain a JSON object structure. Parsing raw response.'
);
}
}
// 3. Attempt to parse the object
let parsedObject;
try {
parsedObject = JSON.parse(cleanedResponse);
} catch (parseError) {
logger.error(`Failed to parse JSON object: ${parseError.message}`);
logger.error(
`Problematic JSON string (first 500 chars): ${cleanedResponse.substring(0, 500)}`
);
logger.error(
`Original Raw Response (first 500 chars): ${originalResponseForDebug.substring(0, 500)}`
);
throw new Error(
`Failed to parse JSON response object: ${parseError.message}`
);
}
// 4. Validate the object structure and extract the subtasks array
if (
!parsedObject ||
typeof parsedObject !== 'object' ||
!Array.isArray(parsedObject.subtasks)
) {
logger.error(
`Parsed content is not an object or missing 'subtasks' array. Content: ${JSON.stringify(parsedObject).substring(0, 200)}`
);
throw new Error(
'Parsed AI response is not a valid object containing a "subtasks" array.'
);
}
const parsedSubtasks = parsedObject.subtasks; // Extract the array
logger.info(
`Successfully parsed ${parsedSubtasks.length} potential subtasks from the object.`
);
if (expectedCount && parsedSubtasks.length !== expectedCount) {
logger.warn(
`Expected ${expectedCount} subtasks, but parsed ${parsedSubtasks.length}.`
);
}
// 5. Validate and Normalize each subtask using Zod schema
let currentId = startId;
const validatedSubtasks = [];
const validationErrors = [];
for (const rawSubtask of parsedSubtasks) {
const correctedSubtask = {
...rawSubtask,
id: currentId, // Enforce sequential ID
dependencies: Array.isArray(rawSubtask.dependencies)
? rawSubtask.dependencies
.map((dep) => (typeof dep === 'string' ? parseInt(dep, 10) : dep))
.filter(
(depId) => !isNaN(depId) && depId >= startId && depId < currentId
) // Ensure deps are numbers, valid range
: [],
status: 'pending' // Enforce pending status
// parentTaskId can be added if needed: parentTaskId: parentTaskId
};
const result = subtaskSchema.safeParse(correctedSubtask);
if (result.success) {
validatedSubtasks.push(result.data); // Add the validated data
} else {
logger.warn(
`Subtask validation failed for raw data: ${JSON.stringify(rawSubtask).substring(0, 100)}...`
);
result.error.errors.forEach((err) => {
const errorMessage = ` - Field '${err.path.join('.')}': ${err.message}`;
logger.warn(errorMessage);
validationErrors.push(`Subtask ${currentId}: ${errorMessage}`);
});
// Optionally, decide whether to include partially valid tasks or skip them
// For now, we'll skip invalid ones
}
currentId++; // Increment ID for the next *potential* subtask
}
if (validationErrors.length > 0) {
logger.error(
`Found ${validationErrors.length} validation errors in the generated subtasks.`
);
// Optionally throw an error here if strict validation is required
// throw new Error(`Subtask validation failed:\n${validationErrors.join('\n')}`);
logger.warn('Proceeding with only the successfully validated subtasks.');
}
if (validatedSubtasks.length === 0 && parsedSubtasks.length > 0) {
throw new Error(
'AI response contained potential subtasks, but none passed validation.'
);
}
// Ensure we don't return more than expected, preferring validated ones
return validatedSubtasks.slice(0, expectedCount || validatedSubtasks.length);
}
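// Illustrative example (not part of the module): given a fenced response such as
// '```json\n{"subtasks":[{"id":99,"title":"Write parser","description":"Parse AI output into objects","dependencies":[],"details":"Strip fences, JSON.parse, then validate with Zod.","status":"draft"}]}\n```',
// parseSubtasksFromText(text, 3, 1, 12, console) strips the fence, parses the
// object, renumbers the subtask id to 3, forces status to 'pending', and drops
// any entry that fails the Zod schema.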
/**
* Expand a task into subtasks using the unified AI service (generateTextService).
* Appends new subtasks by default. Replaces existing subtasks if force=true.
* Integrates complexity report to determine subtask count and prompt if available,
* unless numSubtasks is explicitly provided.
* @param {string} tasksPath - Path to the tasks.json file
* @param {number} taskId - Task ID to expand
* @param {number | null | undefined} [numSubtasks] - Optional: Explicit target number of subtasks. If null/undefined, check complexity report or config default.
* @param {boolean} [useResearch=false] - Whether to use the research AI role.
* @param {string} [additionalContext=''] - Optional additional context.
* @param {Object} context - Context object containing session and mcpLog.
* @param {Object} [context.session] - Session object from MCP.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {boolean} [force=false] - If true, replace existing subtasks; otherwise, append.
* @returns {Promise<Object>} The updated parent task object with new subtasks.
* @throws {Error} If task not found, AI service fails, or parsing fails.
*/
async function expandTask(
tasksPath,
taskId,
numSubtasks,
useResearch = false,
additionalContext = '',
context = {},
force = false
) {
const { session, mcpLog } = context;
const outputFormat = mcpLog ? 'json' : 'text';
// Use mcpLog if available, otherwise use the default console log wrapper
const logger = mcpLog || {
info: (msg) => !isSilentMode() && log('info', msg),
warn: (msg) => !isSilentMode() && log('warn', msg),
error: (msg) => !isSilentMode() && log('error', msg),
debug: (msg) =>
!isSilentMode() && getDebugFlag(session) && log('debug', msg) // Use getDebugFlag
};
if (mcpLog) {
logger.info(`expandTask called with context: session=${!!session}`);
}
try {
// --- Task Loading/Filtering (Unchanged) ---
logger.info(`Reading tasks from ${tasksPath}`);
const data = readJSON(tasksPath);
if (!data || !data.tasks)
throw new Error(`Invalid tasks data in ${tasksPath}`);
const taskIndex = data.tasks.findIndex(
(t) => t.id === parseInt(taskId, 10)
);
if (taskIndex === -1) throw new Error(`Task ${taskId} not found`);
const task = data.tasks[taskIndex];
logger.info(`Expanding task ${taskId}: ${task.title}`);
// --- End Task Loading/Filtering ---
// --- Handle Force Flag: Clear existing subtasks if force=true ---
if (force && Array.isArray(task.subtasks) && task.subtasks.length > 0) {
logger.info(
`Force flag set. Clearing existing ${task.subtasks.length} subtasks for task ${taskId}.`
);
task.subtasks = []; // Clear existing subtasks
}
// --- End Force Flag Handling ---
// --- Complexity Report Integration ---
let finalSubtaskCount;
let promptContent = '';
let complexityReasoningContext = '';
let systemPrompt; // Declare systemPrompt here
const projectRoot = path.dirname(path.dirname(tasksPath));
const complexityReportPath = path.join(
projectRoot,
'scripts/task-complexity-report.json'
);
let taskAnalysis = null;
try {
if (fs.existsSync(complexityReportPath)) {
const complexityReport = readJSON(complexityReportPath);
taskAnalysis = complexityReport?.complexityAnalysis?.find(
(a) => a.taskId === task.id
);
if (taskAnalysis) {
logger.info(
`Found complexity analysis for task ${task.id}: Score ${taskAnalysis.complexityScore}`
);
if (taskAnalysis.reasoning) {
complexityReasoningContext = `\nComplexity Analysis Reasoning: ${taskAnalysis.reasoning}`;
}
} else {
logger.info(
`No complexity analysis found for task ${task.id} in report.`
);
}
} else {
logger.info(
`Complexity report not found at ${complexityReportPath}. Skipping complexity check.`
);
}
} catch (reportError) {
logger.warn(
`Could not read or parse complexity report: ${reportError.message}. Proceeding without it.`
);
}
// Determine final subtask count
const explicitNumSubtasks = parseInt(numSubtasks, 10);
if (!isNaN(explicitNumSubtasks) && explicitNumSubtasks > 0) {
finalSubtaskCount = explicitNumSubtasks;
logger.info(
`Using explicitly provided subtask count: ${finalSubtaskCount}`
);
} else if (taskAnalysis?.recommendedSubtasks) {
finalSubtaskCount = parseInt(taskAnalysis.recommendedSubtasks, 10);
logger.info(
`Using subtask count from complexity report: ${finalSubtaskCount}`
);
} else {
finalSubtaskCount = getDefaultSubtasks(session);
logger.info(`Using default number of subtasks: ${finalSubtaskCount}`);
}
if (isNaN(finalSubtaskCount) || finalSubtaskCount <= 0) {
logger.warn(
`Invalid subtask count determined (${finalSubtaskCount}), defaulting to 3.`
);
finalSubtaskCount = 3;
}
// Determine prompt content AND system prompt
const nextSubtaskId = (task.subtasks?.length || 0) + 1;
if (taskAnalysis?.expansionPrompt) {
// Use prompt from complexity report
promptContent = taskAnalysis.expansionPrompt;
// Append additional context and reasoning without losing the separators
if (additionalContext.trim()) {
promptContent += `\n\n${additionalContext.trim()}`;
}
promptContent += complexityReasoningContext; // already starts with '\n' when set
// --- Use Simplified System Prompt for Report Prompts ---
systemPrompt = `You are an AI assistant helping with task breakdown. Generate exactly ${finalSubtaskCount} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
logger.info(
`Using expansion prompt from complexity report and simplified system prompt for task ${task.id}.`
);
// --- End Simplified System Prompt ---
} else {
// Use standard prompt generation
const combinedAdditionalContext =
`${additionalContext}${complexityReasoningContext}`.trim();
if (useResearch) {
promptContent = generateResearchUserPrompt(
task,
finalSubtaskCount,
combinedAdditionalContext,
nextSubtaskId
);
// The research role uses a terse JSON-only system prompt
systemPrompt = `You are an AI assistant that responds ONLY with valid JSON objects as requested. The object should contain a 'subtasks' array.`;
} else {
promptContent = generateMainUserPrompt(
task,
finalSubtaskCount,
combinedAdditionalContext,
nextSubtaskId
);
// Use the original detailed system prompt for standard generation
systemPrompt = generateMainSystemPrompt(finalSubtaskCount);
}
logger.info(`Using standard prompt generation for task ${task.id}.`);
}
// --- End Complexity Report / Prompt Logic ---
// --- AI Subtask Generation using generateTextService ---
let generatedSubtasks = [];
let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
`Generating ${finalSubtaskCount} subtasks...`
);
}
let responseText = '';
try {
const role = useResearch ? 'research' : 'main';
logger.info(`Using AI service with role: ${role}`);
// Call generateTextService with the determined prompts
responseText = await generateTextService({
prompt: promptContent,
systemPrompt: systemPrompt, // Use the determined system prompt
role,
session
});
logger.info('Successfully received text response from AI service');
// Parse Subtasks
generatedSubtasks = parseSubtasksFromText(
responseText,
nextSubtaskId,
finalSubtaskCount,
task.id,
logger
);
logger.info(
`Successfully parsed ${generatedSubtasks.length} subtasks from AI response.`
);
} catch (error) {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
logger.error(
`Error during AI call or parsing for task ${taskId}: ${error.message}`
);
// Log raw response in debug mode if parsing failed
if (
error.message.includes('Failed to parse') &&
getDebugFlag(session)
) {
logger.error(`Raw AI Response that failed parsing:\n${responseText}`);
}
throw error;
} finally {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
}
// --- Task Update & File Writing ---
// Ensure task.subtasks is an array before appending
if (!Array.isArray(task.subtasks)) {
task.subtasks = [];
}
// Append the newly generated and validated subtasks
task.subtasks.push(...generatedSubtasks);
data.tasks[taskIndex] = task; // Assign the modified task back
logger.info(`Writing updated tasks to ${tasksPath}`);
writeJSON(tasksPath, data);
logger.info(`Generating individual task files...`);
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
logger.info(`Task files generated.`);
// --- End Task Update & File Writing ---
return task; // Return the updated task object
} catch (error) {
// Catches errors from file reading, parsing, AI call etc.
logger.error(`Error expanding task ${taskId}: ${error.message}`);
if (outputFormat === 'text' && getDebugFlag(session)) {
console.error(error); // Log full stack in debug CLI mode
}
throw error; // Re-throw for the caller
}
}
export default expandTask;
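// Minimal usage sketch (illustrative; the argument values are assumptions):
//
// import expandTask from './expand-task.js';
//
// // Expand task 7, letting the complexity report or config choose the count,
// // appending to any existing subtasks (force = false)
// const updatedTask = await expandTask(
//   'tasks/tasks.json',
//   7,
//   null, // numSubtasks: fall back to report recommendation or config default
//   true, // useResearch
//   'Focus on error handling and retries',
//   {}, // context: CLI mode
//   false // force
// );
// console.log(updatedTask.subtasks.length);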

View File

@@ -0,0 +1,122 @@
/**
* Return the next work item:
* • Prefer an eligible SUBTASK that belongs to any parent task
* whose own status is `in-progress`.
* • If no such subtask exists, fall back to the best top-level task
* (previous behaviour).
*
* The function still exports the same name (`findNextTask`) so callers
* don't need to change. It now always returns an object with
* ─ id → number (task) or "parentId.subId" (subtask)
* ─ title → string
* ─ status → string
* ─ priority → string ("high" | "medium" | "low")
* ─ dependencies → array (all IDs expressed in the same dotted form)
* ─ parentId → number (present only when it's a subtask)
*
* @param {Object[]} tasks full array of top-level tasks, each may contain .subtasks[]
* @returns {Object|null} next work item or null if nothing is eligible
*/
function findNextTask(tasks) {
// ---------- helpers ----------------------------------------------------
const priorityValues = { high: 3, medium: 2, low: 1 };
const toFullSubId = (parentId, maybeDotId) => {
// "12.3" -> "12.3"
// 4 -> "12.4" (numeric / short form)
if (typeof maybeDotId === 'string' && maybeDotId.includes('.')) {
return maybeDotId;
}
return `${parentId}.${maybeDotId}`;
};
// ---------- build completed-ID set (tasks *and* subtasks) --------------
const completedIds = new Set();
tasks.forEach((t) => {
if (t.status === 'done' || t.status === 'completed') {
completedIds.add(String(t.id));
}
if (Array.isArray(t.subtasks)) {
t.subtasks.forEach((st) => {
if (st.status === 'done' || st.status === 'completed') {
completedIds.add(`${t.id}.${st.id}`);
}
});
}
});
// ---------- 1) look for eligible subtasks ------------------------------
const candidateSubtasks = [];
tasks
.filter((t) => t.status === 'in-progress' && Array.isArray(t.subtasks))
.forEach((parent) => {
parent.subtasks.forEach((st) => {
const stStatus = (st.status || 'pending').toLowerCase();
if (stStatus !== 'pending' && stStatus !== 'in-progress') return;
const fullDeps =
st.dependencies?.map((d) => toFullSubId(parent.id, d)) ?? [];
const depsSatisfied =
fullDeps.length === 0 ||
fullDeps.every((depId) => completedIds.has(String(depId)));
if (depsSatisfied) {
candidateSubtasks.push({
id: `${parent.id}.${st.id}`,
title: st.title || `Subtask ${st.id}`,
status: st.status || 'pending',
priority: st.priority || parent.priority || 'medium',
dependencies: fullDeps,
parentId: parent.id
});
}
});
});
if (candidateSubtasks.length > 0) {
// sort by priority → dep-count → parent-id → sub-id
candidateSubtasks.sort((a, b) => {
const pa = priorityValues[a.priority] ?? 2;
const pb = priorityValues[b.priority] ?? 2;
if (pb !== pa) return pb - pa;
if (a.dependencies.length !== b.dependencies.length)
return a.dependencies.length - b.dependencies.length;
// compare parent then sub-id numerically
const [aPar, aSub] = a.id.split('.').map(Number);
const [bPar, bSub] = b.id.split('.').map(Number);
if (aPar !== bPar) return aPar - bPar;
return aSub - bSub;
});
return candidateSubtasks[0];
}
// ---------- 2) fall back to top-level tasks (original logic) ------------
const eligibleTasks = tasks.filter((task) => {
const status = (task.status || 'pending').toLowerCase();
if (status !== 'pending' && status !== 'in-progress') return false;
const deps = task.dependencies ?? [];
return deps.every((depId) => completedIds.has(String(depId)));
});
if (eligibleTasks.length === 0) return null;
const nextTask = eligibleTasks.sort((a, b) => {
const pa = priorityValues[a.priority || 'medium'] ?? 2;
const pb = priorityValues[b.priority || 'medium'] ?? 2;
if (pb !== pa) return pb - pa;
const da = (a.dependencies ?? []).length;
const db = (b.dependencies ?? []).length;
if (da !== db) return da - db;
return a.id - b.id;
})[0];
return nextTask;
}
export default findNextTask;
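// Worked example (illustrative data): subtasks of in-progress parents win over
// eligible top-level tasks.
//
// const tasks = [
//   { id: 1, status: 'done' },
//   {
//     id: 2,
//     status: 'in-progress',
//     priority: 'high',
//     subtasks: [
//       { id: 1, status: 'done' },
//       { id: 2, status: 'pending', dependencies: [1] }
//     ]
//   },
//   { id: 3, status: 'pending', dependencies: [1] }
// ];
// findNextTask(tasks);
// // => { id: '2.2', parentId: 2, priority: 'high', dependencies: ['2.1'], ... }
// // Subtask 2.2 is chosen because its dependency 2.1 is done and its parent is
// // in-progress; task 3 would only be considered if no subtask qualified.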

View File

@@ -0,0 +1,156 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { log, readJSON } from '../utils.js';
import { formatDependenciesWithStatus } from '../ui.js';
import { validateAndFixDependencies } from '../dependency-manager.js';
import { getDebugFlag } from '../config-manager.js';
/**
* Generate individual task files from tasks.json
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} outputDir - Output directory for task files
* @param {Object} options - Additional options (mcpLog for MCP mode)
* @returns {Object|undefined} Result object in MCP mode, undefined in CLI mode
*/
function generateTaskFiles(tasksPath, outputDir, options = {}) {
try {
// Determine if we're in MCP mode by checking for mcpLog
const isMcpMode = !!options?.mcpLog;
log('info', `Preparing to regenerate task files from ${tasksPath} into ${outputDir}`);
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
}
// Create the output directory if it doesn't exist
if (!fs.existsSync(outputDir)) {
fs.mkdirSync(outputDir, { recursive: true });
}
log('info', `Found ${data.tasks.length} tasks to regenerate`);
// Validate and fix dependencies before generating files
log('info', `Validating and fixing dependencies`);
validateAndFixDependencies(data, tasksPath);
// Generate task files
log('info', 'Generating individual task files...');
data.tasks.forEach((task) => {
const taskPath = path.join(
outputDir,
`task_${task.id.toString().padStart(3, '0')}.txt`
);
// Format the content
let content = `# Task ID: ${task.id}\n`;
content += `# Title: ${task.title}\n`;
content += `# Status: ${task.status || 'pending'}\n`;
// Format dependencies with their status
if (task.dependencies && task.dependencies.length > 0) {
content += `# Dependencies: ${formatDependenciesWithStatus(task.dependencies, data.tasks, false)}\n`;
} else {
content += '# Dependencies: None\n';
}
content += `# Priority: ${task.priority || 'medium'}\n`;
content += `# Description: ${task.description || ''}\n`;
// Add more detailed sections
content += '# Details:\n';
content += task.details || '';
content += '\n\n';
content += '# Test Strategy:\n';
content += task.testStrategy || '';
content += '\n';
// Add subtasks if they exist
if (task.subtasks && task.subtasks.length > 0) {
content += '\n# Subtasks:\n';
task.subtasks.forEach((subtask) => {
content += `## ${subtask.id}. ${subtask.title} [${subtask.status || 'pending'}]\n`;
if (subtask.dependencies && subtask.dependencies.length > 0) {
// Format subtask dependencies
let subtaskDeps = subtask.dependencies
.map((depId) => {
if (typeof depId === 'number') {
// Handle numeric dependencies to other subtasks
const foundSubtask = task.subtasks.find(
(st) => st.id === depId
);
if (foundSubtask) {
// Just return the plain ID format without any color formatting
return `${task.id}.${depId}`;
}
}
return depId.toString();
})
.join(', ');
content += `### Dependencies: ${subtaskDeps}\n`;
} else {
content += '### Dependencies: None\n';
}
content += `### Description: ${subtask.description || ''}\n`;
content += '### Details:\n';
content += subtask.details || '';
content += '\n\n';
});
}
// Write the file
fs.writeFileSync(taskPath, content);
// log('info', `Generated: task_${task.id.toString().padStart(3, '0')}.txt`); // Pollutes the CLI output
});
log(
'success',
`All ${data.tasks.length} tasks have been generated into '${outputDir}'.`
);
// Return success data in MCP mode
if (isMcpMode) {
return {
success: true,
count: data.tasks.length,
directory: outputDir
};
}
} catch (error) {
log('error', `Error generating task files: ${error.message}`);
// Only show error UI in CLI mode
if (!options?.mcpLog) {
console.error(chalk.red(`Error generating task files: ${error.message}`));
if (getDebugFlag()) {
// Use getter
console.error(error);
}
process.exit(1);
} else {
// In MCP mode, throw the error for the caller to handle
throw error;
}
}
}
export default generateTaskFiles;
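// Minimal usage sketch (illustrative): regenerate task files next to tasks.json.
//
// import path from 'path';
// import generateTaskFiles from './generate-task-files.js';
//
// const tasksPath = 'tasks/tasks.json';
// generateTaskFiles(tasksPath, path.dirname(tasksPath));
// // Writes tasks/task_001.txt, tasks/task_002.txt, ...; in CLI mode errors exit the process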

View File

@@ -0,0 +1,42 @@
/**
* Check if a task is dependent on another task (directly or indirectly)
* Used to prevent circular dependencies
* @param {Array} allTasks - Array of all tasks
* @param {Object} task - The task to check
* @param {number} targetTaskId - The task ID to check dependency against
* @returns {boolean} Whether the task depends on the target task
*/
function isTaskDependentOn(allTasks, task, targetTaskId) {
// If the task is a subtask, check if its parent is the target
if (task.parentTaskId === targetTaskId) {
return true;
}
// Check direct dependencies
if (task.dependencies && task.dependencies.includes(targetTaskId)) {
return true;
}
// Check dependencies of dependencies (recursive)
if (task.dependencies) {
for (const depId of task.dependencies) {
const depTask = allTasks.find((t) => t.id === depId);
if (depTask && isTaskDependentOn(allTasks, depTask, targetTaskId)) {
return true;
}
}
}
// Check subtasks for dependencies
if (task.subtasks) {
for (const subtask of task.subtasks) {
if (isTaskDependentOn(allTasks, subtask, targetTaskId)) {
return true;
}
}
}
return false;
}
export default isTaskDependentOn;
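// Worked example (illustrative data): dependency chains are followed transitively.
//
// const allTasks = [
//   { id: 1, dependencies: [2] },
//   { id: 2, dependencies: [3] },
//   { id: 3, dependencies: [] }
// ];
// isTaskDependentOn(allTasks, allTasks[0], 3); // true  (1 -> 2 -> 3)
// isTaskDependentOn(allTasks, allTasks[2], 1); // false (3 depends on nothing)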

View File

@@ -0,0 +1,719 @@
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import { log, readJSON, truncate } from '../utils.js';
import findNextTask from './find-next-task.js';
import {
displayBanner,
getStatusWithColor,
formatDependenciesWithStatus,
createProgressBar
} from '../ui.js';
/**
* List all tasks
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} statusFilter - Filter by status
* @param {boolean} withSubtasks - Whether to show subtasks
* @param {string} outputFormat - Output format (text or json)
* @returns {Object} - Task list result for json format
*/
function listTasks(
tasksPath,
statusFilter,
withSubtasks = false,
outputFormat = 'text'
) {
try {
// Only display banner for text output
if (outputFormat === 'text') {
displayBanner();
}
const data = readJSON(tasksPath); // Reads the whole tasks.json
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
}
// Filter tasks by status if specified
const filteredTasks =
statusFilter && statusFilter.toLowerCase() !== 'all' // 'all' (or no filter) means every task
? data.tasks.filter(
(task) =>
task.status &&
task.status.toLowerCase() === statusFilter.toLowerCase()
)
: data.tasks; // Default to all tasks if no filter or filter is 'all'
// Calculate completion statistics
const totalTasks = data.tasks.length;
const completedTasks = data.tasks.filter(
(task) => task.status === 'done' || task.status === 'completed'
).length;
const completionPercentage =
totalTasks > 0 ? (completedTasks / totalTasks) * 100 : 0;
// Count statuses for tasks
const doneCount = completedTasks;
const inProgressCount = data.tasks.filter(
(task) => task.status === 'in-progress'
).length;
const pendingCount = data.tasks.filter(
(task) => task.status === 'pending'
).length;
const blockedCount = data.tasks.filter(
(task) => task.status === 'blocked'
).length;
const deferredCount = data.tasks.filter(
(task) => task.status === 'deferred'
).length;
const cancelledCount = data.tasks.filter(
(task) => task.status === 'cancelled'
).length;
// Count subtasks and their statuses
let totalSubtasks = 0;
let completedSubtasks = 0;
let inProgressSubtasks = 0;
let pendingSubtasks = 0;
let blockedSubtasks = 0;
let deferredSubtasks = 0;
let cancelledSubtasks = 0;
data.tasks.forEach((task) => {
if (task.subtasks && task.subtasks.length > 0) {
totalSubtasks += task.subtasks.length;
completedSubtasks += task.subtasks.filter(
(st) => st.status === 'done' || st.status === 'completed'
).length;
inProgressSubtasks += task.subtasks.filter(
(st) => st.status === 'in-progress'
).length;
pendingSubtasks += task.subtasks.filter(
(st) => st.status === 'pending'
).length;
blockedSubtasks += task.subtasks.filter(
(st) => st.status === 'blocked'
).length;
deferredSubtasks += task.subtasks.filter(
(st) => st.status === 'deferred'
).length;
cancelledSubtasks += task.subtasks.filter(
(st) => st.status === 'cancelled'
).length;
}
});
const subtaskCompletionPercentage =
totalSubtasks > 0 ? (completedSubtasks / totalSubtasks) * 100 : 0;
// For JSON output, return structured data
if (outputFormat === 'json') {
// Strip the verbose 'details' field from tasks (and their subtasks) in JSON output
const tasksWithoutDetails = filteredTasks.map((task) => {
// Omit 'details' from the parent task
const { details, ...taskRest } = task;
// If subtasks exist, omit 'details' from them too
if (taskRest.subtasks && Array.isArray(taskRest.subtasks)) {
taskRest.subtasks = taskRest.subtasks.map((subtask) => {
const { details: subtaskDetails, ...subtaskRest } = subtask;
return subtaskRest;
});
}
return taskRest;
});
return {
tasks: tasksWithoutDetails,
filter: statusFilter || 'all', // Return the actual filter used
stats: {
total: totalTasks,
completed: doneCount,
inProgress: inProgressCount,
pending: pendingCount,
blocked: blockedCount,
deferred: deferredCount,
cancelled: cancelledCount,
completionPercentage,
subtasks: {
total: totalSubtasks,
completed: completedSubtasks,
inProgress: inProgressSubtasks,
pending: pendingSubtasks,
blocked: blockedSubtasks,
deferred: deferredSubtasks,
cancelled: cancelledSubtasks,
completionPercentage: subtaskCompletionPercentage
}
}
};
}
// --- Text output (CLI) rendering ---
// Calculate status breakdowns as percentages of total
const taskStatusBreakdown = {
'in-progress': totalTasks > 0 ? (inProgressCount / totalTasks) * 100 : 0,
pending: totalTasks > 0 ? (pendingCount / totalTasks) * 100 : 0,
blocked: totalTasks > 0 ? (blockedCount / totalTasks) * 100 : 0,
deferred: totalTasks > 0 ? (deferredCount / totalTasks) * 100 : 0,
cancelled: totalTasks > 0 ? (cancelledCount / totalTasks) * 100 : 0
};
const subtaskStatusBreakdown = {
'in-progress':
totalSubtasks > 0 ? (inProgressSubtasks / totalSubtasks) * 100 : 0,
pending: totalSubtasks > 0 ? (pendingSubtasks / totalSubtasks) * 100 : 0,
blocked: totalSubtasks > 0 ? (blockedSubtasks / totalSubtasks) * 100 : 0,
deferred:
totalSubtasks > 0 ? (deferredSubtasks / totalSubtasks) * 100 : 0,
cancelled:
totalSubtasks > 0 ? (cancelledSubtasks / totalSubtasks) * 100 : 0
};
// Create progress bars with status breakdowns
const taskProgressBar = createProgressBar(
completionPercentage,
30,
taskStatusBreakdown
);
const subtaskProgressBar = createProgressBar(
subtaskCompletionPercentage,
30,
subtaskStatusBreakdown
);
// Calculate dependency statistics
const completedTaskIds = new Set(
data.tasks
.filter((t) => t.status === 'done' || t.status === 'completed')
.map((t) => t.id)
);
const tasksWithNoDeps = data.tasks.filter(
(t) =>
t.status !== 'done' &&
t.status !== 'completed' &&
(!t.dependencies || t.dependencies.length === 0)
).length;
const tasksWithAllDepsSatisfied = data.tasks.filter(
(t) =>
t.status !== 'done' &&
t.status !== 'completed' &&
t.dependencies &&
t.dependencies.length > 0 &&
t.dependencies.every((depId) => completedTaskIds.has(depId))
).length;
const tasksWithUnsatisfiedDeps = data.tasks.filter(
(t) =>
t.status !== 'done' &&
t.status !== 'completed' &&
t.dependencies &&
t.dependencies.length > 0 &&
!t.dependencies.every((depId) => completedTaskIds.has(depId))
).length;
// Calculate total tasks ready to work on (no deps + satisfied deps)
const tasksReadyToWork = tasksWithNoDeps + tasksWithAllDepsSatisfied;
// Calculate most depended-on tasks
const dependencyCount = {};
data.tasks.forEach((task) => {
if (task.dependencies && task.dependencies.length > 0) {
task.dependencies.forEach((depId) => {
dependencyCount[depId] = (dependencyCount[depId] || 0) + 1;
});
}
});
// Find the most depended-on task
let mostDependedOnTaskId = null;
let maxDependents = 0;
for (const [taskId, count] of Object.entries(dependencyCount)) {
if (count > maxDependents) {
maxDependents = count;
mostDependedOnTaskId = parseInt(taskId);
}
}
// Get the most depended-on task
const mostDependedOnTask =
mostDependedOnTaskId !== null
? data.tasks.find((t) => t.id === mostDependedOnTaskId)
: null;
// Calculate average dependencies per task
const totalDependencies = data.tasks.reduce(
(sum, task) => sum + (task.dependencies ? task.dependencies.length : 0),
0
);
const avgDependenciesPerTask =
data.tasks.length > 0 ? totalDependencies / data.tasks.length : 0;
// Find next task to work on
const nextItem = findNextTask(data.tasks);
// Get terminal width - more reliable method
let terminalWidth;
try {
// Try to get the actual terminal columns
terminalWidth = process.stdout.columns;
} catch (e) {
// Fallback if columns cannot be determined
log('debug', 'Could not determine terminal width, using default');
}
// Ensure we have a reasonable default if detection fails
terminalWidth = terminalWidth || 80;
// Ensure terminal width is at least a minimum value to prevent layout issues
terminalWidth = Math.max(terminalWidth, 80);
// Create dashboard content
const projectDashboardContent =
chalk.white.bold('Project Dashboard') +
'\n' +
`Tasks Progress: ${chalk.greenBright(taskProgressBar)} ${completionPercentage.toFixed(0)}%\n` +
`Done: ${chalk.green(doneCount)} In Progress: ${chalk.blue(inProgressCount)} Pending: ${chalk.yellow(pendingCount)} Blocked: ${chalk.red(blockedCount)} Deferred: ${chalk.gray(deferredCount)} Cancelled: ${chalk.gray(cancelledCount)}\n\n` +
`Subtasks Progress: ${chalk.cyan(subtaskProgressBar)} ${subtaskCompletionPercentage.toFixed(0)}%\n` +
`Completed: ${chalk.green(completedSubtasks)}/${totalSubtasks} In Progress: ${chalk.blue(inProgressSubtasks)} Pending: ${chalk.yellow(pendingSubtasks)} Blocked: ${chalk.red(blockedSubtasks)} Deferred: ${chalk.gray(deferredSubtasks)} Cancelled: ${chalk.gray(cancelledSubtasks)}\n\n` +
chalk.cyan.bold('Priority Breakdown:') +
'\n' +
`${chalk.red('•')} ${chalk.white('High priority:')} ${data.tasks.filter((t) => t.priority === 'high').length}\n` +
`${chalk.yellow('•')} ${chalk.white('Medium priority:')} ${data.tasks.filter((t) => t.priority === 'medium').length}\n` +
`${chalk.green('•')} ${chalk.white('Low priority:')} ${data.tasks.filter((t) => t.priority === 'low').length}`;
const dependencyDashboardContent =
chalk.white.bold('Dependency Status & Next Task') +
'\n' +
chalk.cyan.bold('Dependency Metrics:') +
'\n' +
`${chalk.green('•')} ${chalk.white('Tasks with no dependencies:')} ${tasksWithNoDeps}\n` +
`${chalk.green('•')} ${chalk.white('Tasks ready to work on:')} ${tasksReadyToWork}\n` +
`${chalk.yellow('•')} ${chalk.white('Tasks blocked by dependencies:')} ${tasksWithUnsatisfiedDeps}\n` +
`${chalk.magenta('•')} ${chalk.white('Most depended-on task:')} ${mostDependedOnTask ? chalk.cyan(`#${mostDependedOnTaskId} (${maxDependents} dependents)`) : chalk.gray('None')}\n` +
`${chalk.blue('•')} ${chalk.white('Avg dependencies per task:')} ${avgDependenciesPerTask.toFixed(1)}\n\n` +
chalk.cyan.bold('Next Task to Work On:') +
'\n' +
`ID: ${chalk.cyan(nextItem ? nextItem.id : 'N/A')} - ${nextItem ? chalk.white.bold(truncate(nextItem.title, 40)) : chalk.yellow('No task available')}\n` +
`Priority: ${nextItem ? chalk.white(nextItem.priority || 'medium') : ''} Dependencies: ${nextItem ? formatDependenciesWithStatus(nextItem.dependencies, data.tasks, true) : ''}`;
// Calculate width for side-by-side display
// Box borders, padding take approximately 4 chars on each side
const minDashboardWidth = 50; // Minimum width for dashboard
const minDependencyWidth = 50; // Minimum width for dependency dashboard
const totalMinWidth = minDashboardWidth + minDependencyWidth + 4; // Extra 4 chars for spacing
// If terminal is wide enough, show boxes side by side with responsive widths
if (terminalWidth >= totalMinWidth) {
// Calculate widths proportionally for each box - use exact 50% width each
const availableWidth = terminalWidth;
const halfWidth = Math.floor(availableWidth / 2);
// Account for border characters (2 chars on each side)
const boxContentWidth = halfWidth - 4;
// Create boxen options with precise widths
const dashboardBox = boxen(projectDashboardContent, {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
width: boxContentWidth,
dimBorder: false
});
const dependencyBox = boxen(dependencyDashboardContent, {
padding: 1,
borderColor: 'magenta',
borderStyle: 'round',
width: boxContentWidth,
dimBorder: false
});
// Create a better side-by-side layout with exact spacing
const dashboardLines = dashboardBox.split('\n');
const dependencyLines = dependencyBox.split('\n');
// Make sure both boxes have the same height
const maxHeight = Math.max(dashboardLines.length, dependencyLines.length);
// For each line of output, pad the dashboard line to exactly halfWidth chars
// This ensures the dependency box starts at exactly the right position
const combinedLines = [];
for (let i = 0; i < maxHeight; i++) {
// Get the dashboard line (or empty string if we've run out of lines)
const dashLine = i < dashboardLines.length ? dashboardLines[i] : '';
// Get the dependency line (or empty string if we've run out of lines)
const depLine = i < dependencyLines.length ? dependencyLines[i] : '';
// Remove any trailing spaces from dashLine before padding to exact width
const trimmedDashLine = dashLine.trimEnd();
// Pad the dashboard line to exactly halfWidth chars with no extra spaces
const paddedDashLine = trimmedDashLine.padEnd(halfWidth, ' ');
// Join the lines with no space in between
combinedLines.push(paddedDashLine + depLine);
}
// Join all lines and output
console.log(combinedLines.join('\n'));
} else {
// Terminal too narrow, show boxes stacked vertically
const dashboardBox = boxen(projectDashboardContent, {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 0, bottom: 1 }
});
const dependencyBox = boxen(dependencyDashboardContent, {
padding: 1,
borderColor: 'magenta',
borderStyle: 'round',
margin: { top: 0, bottom: 1 }
});
// Display stacked vertically
console.log(dashboardBox);
console.log(dependencyBox);
}
if (filteredTasks.length === 0) {
console.log(
boxen(
statusFilter
? chalk.yellow(`No tasks with status '${statusFilter}' found`)
: chalk.yellow('No tasks found'),
{ padding: 1, borderColor: 'yellow', borderStyle: 'round' }
)
);
return;
}
// Build the task table with percentage-based column widths,
// adjusted for the content each column holds.
// Adjust ID width if showing subtasks (subtask IDs are longer: e.g., "1.2")
const idWidthPct = withSubtasks ? 10 : 7;
// Calculate max status length to accommodate "in-progress"
const statusWidthPct = 15;
const priorityWidthPct = 12;
const depsWidthPct = 20;
// Title/description takes the remaining width after the fixed columns
const titleWidthPct =
100 - idWidthPct - statusWidthPct - priorityWidthPct - depsWidthPct;
// Allow 10 characters for borders and padding
const availableWidth = terminalWidth - 10;
// Calculate actual column widths based on percentages
const idWidth = Math.floor(availableWidth * (idWidthPct / 100));
const statusWidth = Math.floor(availableWidth * (statusWidthPct / 100));
const priorityWidth = Math.floor(availableWidth * (priorityWidthPct / 100));
const depsWidth = Math.floor(availableWidth * (depsWidthPct / 100));
const titleWidth = Math.floor(availableWidth * (titleWidthPct / 100));
// Create a table with correct borders and spacing
const table = new Table({
head: [
chalk.cyan.bold('ID'),
chalk.cyan.bold('Title'),
chalk.cyan.bold('Status'),
chalk.cyan.bold('Priority'),
chalk.cyan.bold('Dependencies')
],
colWidths: [idWidth, titleWidth, statusWidth, priorityWidth, depsWidth],
style: {
head: [], // No special styling for header
border: [], // No special styling for border
compact: false // Use default spacing
},
wordWrap: true,
wrapOnWordBoundary: true
});
// Process tasks for the table
filteredTasks.forEach((task) => {
// Format dependencies with status indicators (colored)
let depText = 'None';
if (task.dependencies && task.dependencies.length > 0) {
// Use the proper formatDependenciesWithStatus function for colored status
depText = formatDependenciesWithStatus(
task.dependencies,
data.tasks,
true
);
} else {
depText = chalk.gray('None');
}
// Replace newlines so the title stays on a single table row
const cleanTitle = task.title.replace(/\n/g, ' ');
// Get priority color
const priorityColor =
{
high: chalk.red,
medium: chalk.yellow,
low: chalk.gray
}[task.priority || 'medium'] || chalk.white;
// Format status
const status = getStatusWithColor(task.status, true);
// Add the row without truncating dependencies
table.push([
task.id.toString(),
truncate(cleanTitle, titleWidth - 3),
status,
priorityColor(truncate(task.priority || 'medium', priorityWidth - 2)),
depText // No truncation for dependencies
]);
// Add subtasks if requested
if (withSubtasks && task.subtasks && task.subtasks.length > 0) {
task.subtasks.forEach((subtask) => {
// Format subtask dependencies with status indicators
let subtaskDepText = 'None';
if (subtask.dependencies && subtask.dependencies.length > 0) {
// Handle both subtask-to-subtask and subtask-to-task dependencies
const formattedDeps = subtask.dependencies
.map((depId) => {
// Heuristic: small numeric IDs are assumed to reference sibling subtasks
if (typeof depId === 'number' && depId < 100) {
const foundSubtask = task.subtasks.find(
(st) => st.id === depId
);
if (foundSubtask) {
const isDone =
foundSubtask.status === 'done' ||
foundSubtask.status === 'completed';
const isInProgress = foundSubtask.status === 'in-progress';
// Use consistent color formatting instead of emojis
if (isDone) {
return chalk.green.bold(`${task.id}.${depId}`);
} else if (isInProgress) {
return chalk.hex('#FFA500').bold(`${task.id}.${depId}`);
} else {
return chalk.red.bold(`${task.id}.${depId}`);
}
}
}
// Default to regular task dependency
const depTask = data.tasks.find((t) => t.id === depId);
if (depTask) {
const isDone =
depTask.status === 'done' || depTask.status === 'completed';
const isInProgress = depTask.status === 'in-progress';
// Use the same color scheme as in formatDependenciesWithStatus
if (isDone) {
return chalk.green.bold(`${depId}`);
} else if (isInProgress) {
return chalk.hex('#FFA500').bold(`${depId}`);
} else {
return chalk.red.bold(`${depId}`);
}
}
return chalk.cyan(depId.toString());
})
.join(', ');
subtaskDepText = formattedDeps || chalk.gray('None');
}
// Add the subtask row without truncating dependencies
table.push([
`${task.id}.${subtask.id}`,
chalk.dim(`└─ ${truncate(subtask.title, titleWidth - 5)}`),
getStatusWithColor(subtask.status, true),
chalk.dim('-'),
subtaskDepText // No truncation for dependencies
]);
});
}
});
// Render the table; fall back to a simple list if rendering fails
try {
console.log(table.toString());
} catch (err) {
log('error', `Error rendering table: ${err.message}`);
// Fall back to simpler output
console.log(
chalk.yellow(
'\nFalling back to simple task list due to terminal width constraints:'
)
);
filteredTasks.forEach((task) => {
console.log(
`${chalk.cyan(task.id)}: ${chalk.white(task.title)} - ${getStatusWithColor(task.status)}`
);
});
}
// Show filter info if applied
if (statusFilter) {
console.log(chalk.yellow(`\nFiltered by status: ${statusFilter}`));
console.log(
chalk.yellow(`Showing ${filteredTasks.length} of ${totalTasks} tasks`)
);
}
// Define priority colors
const priorityColors = {
high: chalk.red.bold,
medium: chalk.yellow,
low: chalk.gray
};
// Show next task box in a prominent color
if (nextItem) {
// Prepare subtasks section if they exist (Only tasks have .subtasks property)
let subtasksSection = '';
// Check if the nextItem is a top-level task before looking for subtasks
const parentTaskForSubtasks = data.tasks.find(
(t) => String(t.id) === String(nextItem.id)
); // Find the original task object
if (
parentTaskForSubtasks &&
parentTaskForSubtasks.subtasks &&
parentTaskForSubtasks.subtasks.length > 0
) {
subtasksSection = `\n\n${chalk.white.bold('Subtasks:')}\n`;
subtasksSection += parentTaskForSubtasks.subtasks
.map((subtask) => {
// Use a simplified status display for subtasks
const status = subtask.status || 'pending';
const statusColors = {
done: chalk.green,
completed: chalk.green,
pending: chalk.yellow,
'in-progress': chalk.blue,
deferred: chalk.gray,
blocked: chalk.red,
cancelled: chalk.gray
};
const statusColor =
statusColors[status.toLowerCase()] || chalk.white;
// Ensure subtask ID is displayed correctly using parent ID from the original task object
return `${chalk.cyan(`${parentTaskForSubtasks.id}.${subtask.id}`)} [${statusColor(status)}] ${subtask.title}`;
})
.join('\n');
}
console.log(
boxen(
chalk.hex('#FF8800').bold(
`🔥 Next Task to Work On: #${nextItem.id} - ${nextItem.title}`
) +
'\n\n' +
`${chalk.white('Priority:')} ${priorityColors[nextItem.priority || 'medium'](nextItem.priority || 'medium')} ${chalk.white('Status:')} ${getStatusWithColor(nextItem.status, true)}\n` +
`${chalk.white('Dependencies:')} ${nextItem.dependencies && nextItem.dependencies.length > 0 ? formatDependenciesWithStatus(nextItem.dependencies, data.tasks, true) : chalk.gray('None')}\n\n` +
// findNextTask does not include a description, so look it up from the original item
`${chalk.white('Description:')} ${getWorkItemDescription(nextItem, data.tasks)}` +
subtasksSection +
'\n\n' +
`${chalk.cyan('Start working:')} ${chalk.yellow(`task-master set-status --id=${nextItem.id} --status=in-progress`)}\n` +
`${chalk.cyan('View details:')} ${chalk.yellow(`task-master show ${nextItem.id}`)}`,
{
padding: { left: 2, right: 2, top: 1, bottom: 1 },
borderColor: '#FF8800',
borderStyle: 'round',
margin: { top: 1, bottom: 1 },
title: '⚡ RECOMMENDED NEXT TASK ⚡',
titleAlignment: 'center',
width: terminalWidth - 4,
fullscreen: false
}
)
);
} else {
console.log(
boxen(
chalk.hex('#FF8800').bold('No eligible next task found') +
'\n\n' +
'All pending tasks have dependencies that are not yet completed, or all tasks are done.',
{
padding: 1,
borderColor: '#FF8800',
borderStyle: 'round',
margin: { top: 1, bottom: 1 },
title: '⚡ NEXT TASK ⚡',
titleAlignment: 'center',
width: terminalWidth - 4 // Use full terminal width minus a small margin
}
)
);
}
// Show next steps
console.log(
boxen(
chalk.white.bold('Suggested Next Steps:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master next')} to see what to work on next\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks\n` +
`${chalk.cyan('3.')} Run ${chalk.yellow('task-master set-status --id=<id> --status=done')} to mark a task as complete`,
{
padding: 1,
borderColor: 'gray',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
} catch (error) {
log('error', `Error listing tasks: ${error.message}`);
if (outputFormat === 'json') {
// Return structured error for JSON output
throw {
code: 'TASK_LIST_ERROR',
message: error.message,
details: error.stack
};
}
console.error(chalk.red(`Error: ${error.message}`));
process.exit(1);
}
}
// *** Helper function to get description for task or subtask ***
function getWorkItemDescription(item, allTasks) {
if (!item) return 'N/A';
if (item.parentId) {
// It's a subtask
const parent = allTasks.find((t) => t.id === item.parentId);
const subtask = parent?.subtasks?.find(
(st) => `${parent.id}.${st.id}` === item.id
);
return subtask?.description || 'No description available.';
} else {
// It's a top-level task
const task = allTasks.find((t) => String(t.id) === String(item.id));
return task?.description || 'No description available.';
}
}
export default listTasks;
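
A minimal usage sketch, assuming the signature listTasks(tasksPath, statusFilter, withSubtasks, outputFormat) implied by the variables referenced above; the path is illustrative, and the call is awaited defensively in case the full source declares the function async:

import listTasks from './list-tasks.js';

// Text mode renders the dashboards and task table to the console.
await listTasks('tasks/tasks.json', 'pending', true, 'text');

// JSON mode returns { tasks, filter, stats } with 'details' stripped.
const result = await listTasks('tasks/tasks.json', null, false, 'json');
console.log(result.stats.completionPercentage);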

View File

@@ -0,0 +1,560 @@
/**
* models.js
* Core functionality for managing AI model configurations
*/
import path from 'path';
import fs from 'fs';
import https from 'https';
import {
getMainModelId,
getResearchModelId,
getFallbackModelId,
getAvailableModels,
getMainProvider,
getResearchProvider,
getFallbackProvider,
isApiKeySet,
getMcpApiKeyStatus,
getConfig,
writeConfig,
isConfigFilePresent,
getAllProviders
} from '../config-manager.js';
/**
* Fetches the list of models from OpenRouter API.
* @returns {Promise<Array|null>} A promise that resolves with the array of model objects, or null if the fetch fails.
*/
function fetchOpenRouterModels() {
return new Promise((resolve) => {
const options = {
hostname: 'openrouter.ai',
path: '/api/v1/models',
method: 'GET',
headers: {
Accept: 'application/json'
}
};
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
if (res.statusCode === 200) {
try {
const parsedData = JSON.parse(data);
resolve(parsedData.data || []); // Return the array of models
} catch (e) {
console.error('Error parsing OpenRouter response:', e);
resolve(null); // Indicate failure
}
} else {
console.error(
`OpenRouter API request failed with status code: ${res.statusCode}`
);
resolve(null); // Indicate failure
}
});
});
req.on('error', (e) => {
console.error('Error fetching OpenRouter models:', e);
resolve(null); // Indicate failure
});
req.end();
});
}
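// Illustrative usage: the promise never rejects, so callers branch on null:
//   const models = await fetchOpenRouterModels();
//   if (models === null) {
//     // network or parse failure - fall back or surface an error
//   } else {
//     console.log(`OpenRouter lists ${models.length} models`);
//   }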
/**
* Get the current model configuration
* @param {Object} [options] - Options for the operation
* @param {Object} [options.session] - Session object containing environment variables (for MCP)
* @param {Function} [options.mcpLog] - MCP logger object (for MCP)
* @param {string} [options.projectRoot] - Project root directory
* @returns {Object} RESTful response with current model configuration
*/
async function getModelConfiguration(options = {}) {
const { mcpLog, projectRoot } = options;
const report = (level, ...args) => {
if (mcpLog && typeof mcpLog[level] === 'function') {
mcpLog[level](...args);
}
};
// Check if configuration file exists using provided project root
let configPath;
let configExists = false;
if (projectRoot) {
configPath = path.join(projectRoot, '.taskmasterconfig');
configExists = fs.existsSync(configPath);
report(
'info',
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
);
} else {
configExists = isConfigFilePresent();
report(
'info',
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
);
}
if (!configExists) {
return {
success: false,
error: {
code: 'CONFIG_MISSING',
message:
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
}
};
}
try {
// Get current settings - these should use the config from the found path automatically
const mainProvider = getMainProvider(projectRoot);
const mainModelId = getMainModelId(projectRoot);
const researchProvider = getResearchProvider(projectRoot);
const researchModelId = getResearchModelId(projectRoot);
const fallbackProvider = getFallbackProvider(projectRoot);
const fallbackModelId = getFallbackModelId(projectRoot);
// Check API keys
const mainCliKeyOk = isApiKeySet(mainProvider);
const mainMcpKeyOk = getMcpApiKeyStatus(mainProvider, projectRoot);
const researchCliKeyOk = isApiKeySet(researchProvider);
const researchMcpKeyOk = getMcpApiKeyStatus(researchProvider, projectRoot);
const fallbackCliKeyOk = fallbackProvider
? isApiKeySet(fallbackProvider)
: true;
const fallbackMcpKeyOk = fallbackProvider
? getMcpApiKeyStatus(fallbackProvider, projectRoot)
: true;
// Get available models to find detailed info
const availableModels = getAvailableModels(projectRoot);
// Find model details
const mainModelData = availableModels.find((m) => m.id === mainModelId);
const researchModelData = availableModels.find(
(m) => m.id === researchModelId
);
const fallbackModelData = fallbackModelId
? availableModels.find((m) => m.id === fallbackModelId)
: null;
// Return structured configuration data
return {
success: true,
data: {
activeModels: {
main: {
provider: mainProvider,
modelId: mainModelId,
sweScore: mainModelData?.swe_score || null,
cost: mainModelData?.cost_per_1m_tokens || null,
keyStatus: {
cli: mainCliKeyOk,
mcp: mainMcpKeyOk
}
},
research: {
provider: researchProvider,
modelId: researchModelId,
sweScore: researchModelData?.swe_score || null,
cost: researchModelData?.cost_per_1m_tokens || null,
keyStatus: {
cli: researchCliKeyOk,
mcp: researchMcpKeyOk
}
},
fallback: fallbackProvider
? {
provider: fallbackProvider,
modelId: fallbackModelId,
sweScore: fallbackModelData?.swe_score || null,
cost: fallbackModelData?.cost_per_1m_tokens || null,
keyStatus: {
cli: fallbackCliKeyOk,
mcp: fallbackMcpKeyOk
}
}
: null
},
message: 'Successfully retrieved current model configuration'
}
};
} catch (error) {
report('error', `Error getting model configuration: ${error.message}`);
return {
success: false,
error: {
code: 'CONFIG_ERROR',
message: error.message
}
};
}
}
/**
* Get all available models not currently in use
* @param {Object} [options] - Options for the operation
* @param {Object} [options.session] - Session object containing environment variables (for MCP)
* @param {Function} [options.mcpLog] - MCP logger object (for MCP)
* @param {string} [options.projectRoot] - Project root directory
* @returns {Object} RESTful response with available models
*/
async function getAvailableModelsList(options = {}) {
const { mcpLog, projectRoot } = options;
const report = (level, ...args) => {
if (mcpLog && typeof mcpLog[level] === 'function') {
mcpLog[level](...args);
}
};
// Check if configuration file exists using provided project root
let configPath;
let configExists = false;
if (projectRoot) {
configPath = path.join(projectRoot, '.taskmasterconfig');
configExists = fs.existsSync(configPath);
report(
'info',
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
);
} else {
configExists = isConfigFilePresent();
report(
'info',
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
);
}
if (!configExists) {
return {
success: false,
error: {
code: 'CONFIG_MISSING',
message:
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
}
};
}
try {
// Get all available models
const allAvailableModels = getAvailableModels(projectRoot);
if (!allAvailableModels || allAvailableModels.length === 0) {
return {
success: true,
data: {
models: [],
message: 'No available models found'
}
};
}
// Get currently used model IDs
const mainModelId = getMainModelId(projectRoot);
const researchModelId = getResearchModelId(projectRoot);
const fallbackModelId = getFallbackModelId(projectRoot);
// Filter out models already assigned to a role, then shape the result
const activeIds = [mainModelId, researchModelId, fallbackModelId].filter(
Boolean
);
const otherAvailableModels = allAvailableModels
.filter((model) => !activeIds.includes(model.id))
.map((model) => ({
provider: model.provider || 'N/A',
modelId: model.id,
sweScore: model.swe_score || null,
cost: model.cost_per_1m_tokens || null,
allowedRoles: model.allowed_roles || []
}));
return {
success: true,
data: {
models: otherAvailableModels,
message: `Successfully retrieved ${otherAvailableModels.length} available models`
}
};
} catch (error) {
report('error', `Error getting available models: ${error.message}`);
return {
success: false,
error: {
code: 'MODELS_LIST_ERROR',
message: error.message
}
};
}
}
/**
* Update a specific model in the configuration
* @param {string} role - The model role to update ('main', 'research', 'fallback')
* @param {string} modelId - The model ID to set for the role
* @param {Object} [options] - Options for the operation
* @param {string} [options.providerHint] - Provider hint if already determined ('openrouter' or 'ollama')
* @param {Object} [options.session] - Session object containing environment variables (for MCP)
* @param {Function} [options.mcpLog] - MCP logger object (for MCP)
* @param {string} [options.projectRoot] - Project root directory
* @returns {Object} RESTful response with result of update operation
*/
async function setModel(role, modelId, options = {}) {
const { mcpLog, projectRoot, providerHint } = options;
const report = (level, ...args) => {
if (mcpLog && typeof mcpLog[level] === 'function') {
mcpLog[level](...args);
}
};
// Check if configuration file exists using provided project root
let configPath;
let configExists = false;
if (projectRoot) {
configPath = path.join(projectRoot, '.taskmasterconfig');
configExists = fs.existsSync(configPath);
report(
'info',
`Checking for .taskmasterconfig at: ${configPath}, exists: ${configExists}`
);
} else {
configExists = isConfigFilePresent();
report(
'info',
`Checking for .taskmasterconfig using isConfigFilePresent(), exists: ${configExists}`
);
}
if (!configExists) {
return {
success: false,
error: {
code: 'CONFIG_MISSING',
message:
'The .taskmasterconfig file is missing. Run "task-master models --setup" to create it.'
}
};
}
// Validate role
if (!['main', 'research', 'fallback'].includes(role)) {
return {
success: false,
error: {
code: 'INVALID_ROLE',
message: `Invalid role: ${role}. Must be one of: main, research, fallback.`
}
};
}
// Validate model ID
if (typeof modelId !== 'string' || modelId.trim() === '') {
return {
success: false,
error: {
code: 'INVALID_MODEL_ID',
message: `Invalid model ID: ${modelId}. Must be a non-empty string.`
}
};
}
try {
const availableModels = getAvailableModels(projectRoot);
const currentConfig = getConfig(projectRoot);
let determinedProvider = null; // Initialize provider
let warningMessage = null;
// Find the model data in internal list initially to see if it exists at all
let modelData = availableModels.find((m) => m.id === modelId);
// --- Revised Logic: Prioritize providerHint --- //
if (providerHint) {
// Hint provided (--ollama or --openrouter flag used)
if (modelData && modelData.provider === providerHint) {
// Found internally AND provider matches the hint
determinedProvider = providerHint;
report(
'info',
`Model ${modelId} found internally with matching provider hint ${determinedProvider}.`
);
} else {
// Either not found internally, OR found but under a DIFFERENT provider than hinted.
// Proceed with custom logic based ONLY on the hint.
if (providerHint === 'openrouter') {
// Check OpenRouter ONLY because hint was openrouter
report('info', `Checking OpenRouter for ${modelId} (as hinted)...`);
const openRouterModels = await fetchOpenRouterModels();
if (
openRouterModels &&
openRouterModels.some((m) => m.id === modelId)
) {
determinedProvider = 'openrouter';
warningMessage = `Warning: Custom OpenRouter model '${modelId}' set. This model is not officially validated by Taskmaster and may not function as expected.`;
report('warn', warningMessage);
} else {
// Hinted as OpenRouter but not found in live check
throw new Error(
`Model ID "${modelId}" not found in the live OpenRouter model list. Please verify the ID and ensure it's available on OpenRouter.`
);
}
} else if (providerHint === 'ollama') {
// Hinted as Ollama - set provider directly WITHOUT checking OpenRouter
determinedProvider = 'ollama';
warningMessage = `Warning: Custom Ollama model '${modelId}' set. Ensure your Ollama server is running and has pulled this model. Taskmaster cannot guarantee compatibility.`;
report('warn', warningMessage);
} else {
// Invalid provider hint - should not happen
throw new Error(`Invalid provider hint received: ${providerHint}`);
}
}
} else {
// No hint provided (flags not used)
if (modelData) {
// Found internally, use the provider from the internal list
determinedProvider = modelData.provider;
report(
'info',
`Model ${modelId} found internally with provider ${determinedProvider}.`
);
} else {
// Model not found and no provider hint was given
return {
success: false,
error: {
code: 'MODEL_NOT_FOUND_NO_HINT',
message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter or --ollama.`
}
};
}
}
// --- End of Revised Logic --- //
// At this point, we should have a determinedProvider if the model is valid (internally or custom)
if (!determinedProvider) {
// This case acts as a safeguard
return {
success: false,
error: {
code: 'PROVIDER_UNDETERMINED',
message: `Could not determine the provider for model ID "${modelId}".`
}
};
}
// Update configuration
currentConfig.models[role] = {
...currentConfig.models[role], // Keep existing params like maxTokens
provider: determinedProvider,
modelId: modelId
};
// Write updated configuration
const writeResult = writeConfig(currentConfig, projectRoot);
if (!writeResult) {
return {
success: false,
error: {
code: 'WRITE_ERROR',
message: 'Error writing updated configuration to .taskmasterconfig'
}
};
}
const successMessage = `Successfully set ${role} model to ${modelId} (Provider: ${determinedProvider})`;
report('info', successMessage);
return {
success: true,
data: {
role,
provider: determinedProvider,
modelId,
message: successMessage,
warning: warningMessage // Include warning in the response data
}
};
} catch (error) {
report('error', `Error setting ${role} model: ${error.message}`);
return {
success: false,
error: {
code: 'SET_MODEL_ERROR',
message: error.message
}
};
}
}
/**
* Get API key status for all known providers.
* @param {Object} [options] - Options for the operation
* @param {Object} [options.session] - Session object containing environment variables (for MCP)
* @param {Function} [options.mcpLog] - MCP logger object (for MCP)
* @param {string} [options.projectRoot] - Project root directory
* @returns {Object} RESTful response with API key status report
*/
async function getApiKeyStatusReport(options = {}) {
const { mcpLog, projectRoot, session } = options;
const report = (level, ...args) => {
if (mcpLog && typeof mcpLog[level] === 'function') {
mcpLog[level](...args);
}
};
try {
const providers = getAllProviders();
const providersToCheck = providers.filter(
(p) => p.toLowerCase() !== 'ollama'
); // Ollama runs locally and does not normally require an API key
const statusReport = providersToCheck.map((provider) => {
// Use provided projectRoot for MCP status check
const cliOk = isApiKeySet(provider, session); // Pass session for CLI check too
const mcpOk = getMcpApiKeyStatus(provider, projectRoot);
return {
provider,
cli: cliOk,
mcp: mcpOk
};
});
report('info', 'Successfully generated API key status report.');
return {
success: true,
data: {
report: statusReport,
message: 'API key status report generated.'
}
};
} catch (error) {
report('error', `Error generating API key status report: ${error.message}`);
return {
success: false,
error: {
code: 'API_KEY_STATUS_ERROR',
message: error.message
}
};
}
}
export {
getModelConfiguration,
getAvailableModelsList,
setModel,
getApiKeyStatusReport
};
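
A minimal usage sketch for the exported API; the project root and model ID are illustrative:

import { getModelConfiguration, setModel } from './models.js';

const projectRoot = '/path/to/project'; // must contain .taskmasterconfig

const config = await getModelConfiguration({ projectRoot });
if (config.success) {
  console.log(config.data.activeModels.main.modelId);
}

// providerHint mirrors the --openrouter/--ollama CLI flags for custom models.
const result = await setModel('fallback', 'vendor/custom-model', {
  projectRoot,
  providerHint: 'openrouter'
});
if (!result.success) console.error(result.error.message);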

View File

@@ -0,0 +1,212 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import { z } from 'zod';
import {
log,
writeJSON,
enableSilentMode,
disableSilentMode,
isSilentMode
} from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
// Define Zod schema for task validation
const TaskSchema = z.object({
id: z.number(),
title: z.string(),
description: z.string(),
status: z.string().default('pending'),
dependencies: z.array(z.number()).default([]),
priority: z.string().default('medium'),
details: z.string().optional(),
testStrategy: z.string().optional()
});
// Define Zod schema for the complete tasks data
const TasksDataSchema = z.object({
tasks: z.array(TaskSchema),
metadata: z.object({
projectName: z.string(),
totalTasks: z.number(),
sourceFile: z.string(),
generatedAt: z.string()
})
});
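// Illustrative: safeParse validates AI output without throwing, e.g.
//   const parsed = TasksDataSchema.safeParse(candidate);
//   if (!parsed.success) log('error', parsed.error.message);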
/**
* Parse a PRD file and generate tasks
* @param {string} prdPath - Path to the PRD file
* @param {string} tasksPath - Path to the tasks.json file
* @param {number} numTasks - Number of tasks to generate
* @param {Object} options - Additional options
* @param {Function} options.reportProgress - Function to report progress to MCP server (optional)
* @param {Object} options.mcpLog - MCP logger object (optional)
* @param {Object} options.session - Session object from MCP server (optional)
*/
async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
const { reportProgress, mcpLog, session } = options;
// Determine output format based on mcpLog presence (simplification)
const outputFormat = mcpLog ? 'json' : 'text';
// Create custom reporter that checks for MCP log and silent mode
const report = (message, level = 'info') => {
if (mcpLog) {
mcpLog[level](message);
} else if (!isSilentMode() && outputFormat === 'text') {
// Only log to console if not in silent mode and outputFormat is 'text'
log(level, message);
}
};
try {
report(`Parsing PRD file: ${prdPath}`, 'info');
// Read the PRD content
const prdContent = fs.readFileSync(prdPath, 'utf8');
// Build system prompt for PRD parsing
const systemPrompt = `You are an AI assistant helping to break down a Product Requirements Document (PRD) into a set of sequential development tasks.
Your goal is to create ${numTasks} well-structured, actionable development tasks based on the PRD provided.
Each task should follow this JSON structure:
{
"id": number,
"title": string,
"description": string,
"status": "pending",
"dependencies": number[] (IDs of tasks this depends on),
"priority": "high" | "medium" | "low",
"details": string (implementation details),
"testStrategy": string (validation approach)
}
Guidelines:
1. Create exactly ${numTasks} tasks, numbered from 1 to ${numTasks}
2. Each task should be atomic and focused on a single responsibility
3. Order tasks logically - consider dependencies and implementation sequence
4. Early tasks should focus on setup, core functionality first, then advanced features
5. Include clear validation/testing approach for each task
6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs)
7. Assign priority (high/medium/low) based on criticality and dependency order
8. Include detailed implementation guidance in the "details" field
9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance
10. Focus on filling in any gaps left by the PRD or areas that aren't fully specified, while preserving all explicit requirements
11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches`;
// Build user prompt with PRD content
const userPrompt = `Here's the Product Requirements Document (PRD) to break down into ${numTasks} tasks:
${prdContent}
Return your response in this format:
{
"tasks": [
{
"id": 1,
"title": "Setup Project Repository",
"description": "...",
...
},
...
],
"metadata": {
"projectName": "PRD Implementation",
"totalTasks": ${numTasks},
"sourceFile": "${prdPath}",
"generatedAt": "YYYY-MM-DD"
}
}`;
// Call the unified AI service
report('Calling AI service to generate tasks from PRD...', 'info');
// Call generateObjectService with proper parameters
const tasksData = await generateObjectService({
role: 'main', // Use 'main' role to get the model from config
session: session, // Pass session for API key resolution
schema: TasksDataSchema, // Pass the schema for validation
objectName: 'tasks_data', // Name the generated object
systemPrompt: systemPrompt, // System instructions
prompt: userPrompt, // User prompt with PRD content
reportProgress // Progress reporting function
});
// Create the directory if it doesn't exist
const tasksDir = path.dirname(tasksPath);
if (!fs.existsSync(tasksDir)) {
fs.mkdirSync(tasksDir, { recursive: true });
}
// Write the tasks to the file
writeJSON(tasksPath, tasksData);
report(
`Successfully generated ${tasksData.tasks.length} tasks from PRD`,
'success'
);
report(`Tasks saved to: ${tasksPath}`, 'info');
// Generate individual task files
if (reportProgress && mcpLog) {
// Enable silent mode when being called from MCP server
enableSilentMode();
await generateTaskFiles(tasksPath, tasksDir);
disableSilentMode();
} else {
await generateTaskFiles(tasksPath, tasksDir);
}
// Only show success boxes for text output (CLI)
if (outputFormat === 'text') {
console.log(
boxen(
chalk.green(
`Successfully generated ${tasksData.tasks.length} tasks from PRD`
),
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
)
);
console.log(
boxen(
chalk.white.bold('Next Steps:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master list')} to view all tasks\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks`,
{
padding: 1,
borderColor: 'cyan',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
}
return tasksData;
} catch (error) {
report(`Error parsing PRD: ${error.message}`, 'error');
// Only show error UI for text output (CLI)
if (outputFormat === 'text') {
console.error(chalk.red(`Error: ${error.message}`));
if (getDebugFlag(session)) {
// Use getter
console.error(error);
}
process.exit(1);
} else {
throw error; // Re-throw for JSON output
}
}
}
export default parsePRD;
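
A minimal CLI-style usage sketch; the paths and task count are illustrative:

import parsePRD from './parse-prd.js';

// Without mcpLog, output is 'text': progress logs and success boxes are printed.
const tasksData = await parsePRD('scripts/prd.txt', 'tasks/tasks.json', 10);
console.log(`Generated ${tasksData.tasks.length} tasks`);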

View File

@@ -0,0 +1,119 @@
import path from 'path';
import { log, readJSON, writeJSON } from '../utils.js';
import generateTaskFiles from './generate-task-files.js';
/**
* Remove a subtask from its parent task
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} subtaskId - ID of the subtask to remove in format "parentId.subtaskId"
* @param {boolean} convertToTask - Whether to convert the subtask to a standalone task
* @param {boolean} generateFiles - Whether to regenerate task files after removing the subtask
* @returns {Object|null} The newly created standalone task if convertToTask is true, otherwise null
*/
async function removeSubtask(
tasksPath,
subtaskId,
convertToTask = false,
generateFiles = true
) {
try {
log('info', `Removing subtask ${subtaskId}...`);
// Read the existing tasks
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
}
// Parse the subtask ID (format: "parentId.subtaskId")
if (!subtaskId.includes('.')) {
throw new Error(
`Invalid subtask ID format: ${subtaskId}. Expected format: "parentId.subtaskId"`
);
}
const [parentIdStr, subtaskIdStr] = subtaskId.split('.');
const parentId = parseInt(parentIdStr, 10);
const subtaskIdNum = parseInt(subtaskIdStr, 10);
// Find the parent task
const parentTask = data.tasks.find((t) => t.id === parentId);
if (!parentTask) {
throw new Error(`Parent task with ID ${parentId} not found`);
}
// Check if parent has subtasks
if (!parentTask.subtasks || parentTask.subtasks.length === 0) {
throw new Error(`Parent task ${parentId} has no subtasks`);
}
// Find the subtask to remove
const subtaskIndex = parentTask.subtasks.findIndex(
(st) => st.id === subtaskIdNum
);
if (subtaskIndex === -1) {
throw new Error(`Subtask ${subtaskId} not found`);
}
// Get a copy of the subtask before removing it
const removedSubtask = { ...parentTask.subtasks[subtaskIndex] };
// Remove the subtask from the parent
parentTask.subtasks.splice(subtaskIndex, 1);
// If parent has no more subtasks, remove the subtasks array
if (parentTask.subtasks.length === 0) {
delete parentTask.subtasks;
}
let convertedTask = null;
// Convert the subtask to a standalone task if requested
if (convertToTask) {
log('info', `Converting subtask ${subtaskId} to a standalone task...`);
// Find the highest task ID to determine the next ID
const highestId = Math.max(...data.tasks.map((t) => t.id));
const newTaskId = highestId + 1;
// Create the new task from the subtask
convertedTask = {
id: newTaskId,
title: removedSubtask.title,
description: removedSubtask.description || '',
details: removedSubtask.details || '',
status: removedSubtask.status || 'pending',
dependencies: removedSubtask.dependencies || [],
priority: parentTask.priority || 'medium' // Inherit priority from parent
};
// Add the parent task as a dependency if not already present
if (!convertedTask.dependencies.includes(parentId)) {
convertedTask.dependencies.push(parentId);
}
// Add the converted task to the tasks array
data.tasks.push(convertedTask);
log('info', `Created new task ${newTaskId} from subtask ${subtaskId}`);
} else {
log('info', `Subtask ${subtaskId} deleted`);
}
// Write the updated tasks back to the file
writeJSON(tasksPath, data);
// Generate task files if requested
if (generateFiles) {
log('info', 'Regenerating task files...');
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}
return convertedTask;
} catch (error) {
log('error', `Error removing subtask: ${error.message}`);
throw error;
}
}
export default removeSubtask;
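
A minimal usage sketch; the subtask IDs are illustrative:

import removeSubtask from './remove-subtask.js';

// Delete subtask 5.2 outright and regenerate task files.
await removeSubtask('tasks/tasks.json', '5.2');

// Convert subtask 5.3 into a standalone task; the new task is returned.
const newTask = await removeSubtask('tasks/tasks.json', '5.3', true);
if (newTask) console.log(`Converted to task ${newTask.id}`);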

View File

@@ -0,0 +1,207 @@
import fs from 'fs';
import path from 'path';
import { log, readJSON, writeJSON } from '../utils.js';
import generateTaskFiles from './generate-task-files.js';
import taskExists from './task-exists.js';
/**
* Removes one or more tasks or subtasks from the tasks file
* @param {string} tasksPath - Path to the tasks file
* @param {string} taskIds - Comma-separated string of task/subtask IDs to remove (e.g., '5,6.1,7')
* @returns {Object} Result object with success status, messages, and removed task info
*/
async function removeTask(tasksPath, taskIds) {
const results = {
success: true,
messages: [],
errors: [],
removedTasks: []
};
const taskIdsToRemove = taskIds
.split(',')
.map((id) => id.trim())
.filter(Boolean); // Remove empty strings if any
if (taskIdsToRemove.length === 0) {
results.success = false;
results.errors.push('No valid task IDs provided.');
return results;
}
try {
// Read the tasks file ONCE before the loop
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
}
const tasksToDeleteFiles = []; // Collect IDs of main tasks whose files should be deleted
for (const taskId of taskIdsToRemove) {
// Check if the task ID exists *before* attempting removal
if (!taskExists(data.tasks, taskId)) {
const errorMsg = `Task with ID ${taskId} not found or already removed.`;
results.errors.push(errorMsg);
results.success = false; // Mark overall success as false if any error occurs
continue; // Skip to the next ID
}
try {
// Handle subtask removal (e.g., '5.2')
if (typeof taskId === 'string' && taskId.includes('.')) {
const [parentTaskId, subtaskId] = taskId
.split('.')
.map((id) => parseInt(id, 10));
// Find the parent task
const parentTask = data.tasks.find((t) => t.id === parentTaskId);
if (!parentTask || !parentTask.subtasks) {
throw new Error(
`Parent task ${parentTaskId} or its subtasks not found for subtask ${taskId}`
);
}
// Find the subtask to remove
const subtaskIndex = parentTask.subtasks.findIndex(
(st) => st.id === subtaskId
);
if (subtaskIndex === -1) {
throw new Error(
`Subtask ${subtaskId} not found in parent task ${parentTaskId}`
);
}
// Store the subtask info before removal
const removedSubtask = {
...parentTask.subtasks[subtaskIndex],
parentTaskId: parentTaskId
};
results.removedTasks.push(removedSubtask);
// Remove the subtask from the parent
parentTask.subtasks.splice(subtaskIndex, 1);
results.messages.push(`Successfully removed subtask ${taskId}`);
}
// Handle main task removal
else {
const taskIdNum = parseInt(taskId, 10);
const taskIndex = data.tasks.findIndex((t) => t.id === taskIdNum);
if (taskIndex === -1) {
// This case should theoretically be caught by the taskExists check above,
// but keep it as a safeguard.
throw new Error(`Task with ID ${taskId} not found`);
}
// Store the task info before removal
const removedTask = data.tasks[taskIndex];
results.removedTasks.push(removedTask);
tasksToDeleteFiles.push(taskIdNum); // Add to list for file deletion
// Remove the task from the main array
data.tasks.splice(taskIndex, 1);
results.messages.push(`Successfully removed task ${taskId}`);
}
} catch (innerError) {
// Catch errors specific to processing *this* ID
const errorMsg = `Error processing ID ${taskId}: ${innerError.message}`;
results.errors.push(errorMsg);
results.success = false;
log('warn', errorMsg); // Log as warning and continue with next ID
}
} // End of loop through taskIdsToRemove
// --- Post-Loop Operations ---
// Only proceed with cleanup and saving if at least one task was potentially removed
if (results.removedTasks.length > 0) {
// Remove all references AFTER all tasks/subtasks are removed
const allRemovedIds = new Set(
taskIdsToRemove.map((id) =>
typeof id === 'string' && id.includes('.') ? id : parseInt(id, 10)
)
);
data.tasks.forEach((task) => {
// Clean dependencies in main tasks
if (task.dependencies) {
task.dependencies = task.dependencies.filter(
(depId) => !allRemovedIds.has(depId)
);
}
// Clean dependencies in remaining subtasks
if (task.subtasks) {
task.subtasks.forEach((subtask) => {
if (subtask.dependencies) {
subtask.dependencies = subtask.dependencies.filter(
(depId) =>
!allRemovedIds.has(`${task.id}.${depId}`) &&
!allRemovedIds.has(depId) // check both subtask and main task refs
);
}
});
}
});
// Save the updated tasks file ONCE
writeJSON(tasksPath, data);
// Delete task files AFTER saving tasks.json
for (const taskIdNum of tasksToDeleteFiles) {
const taskFileName = path.join(
path.dirname(tasksPath),
`task_${taskIdNum.toString().padStart(3, '0')}.txt`
);
if (fs.existsSync(taskFileName)) {
try {
fs.unlinkSync(taskFileName);
results.messages.push(`Deleted task file: ${taskFileName}`);
} catch (unlinkError) {
const unlinkMsg = `Failed to delete task file ${taskFileName}: ${unlinkError.message}`;
results.errors.push(unlinkMsg);
results.success = false;
log('warn', unlinkMsg);
}
}
}
// Generate updated task files ONCE
try {
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
results.messages.push('Task files regenerated successfully.');
} catch (genError) {
const genErrMsg = `Failed to regenerate task files: ${genError.message}`;
results.errors.push(genErrMsg);
results.success = false;
log('warn', genErrMsg);
}
} else if (results.errors.length === 0) {
// Case where valid IDs were provided but none existed
results.messages.push('No tasks found matching the provided IDs.');
}
// Consolidate messages for final output
const finalMessage = results.messages.join('\n');
const finalError = results.errors.join('\n');
return {
success: results.success,
message: finalMessage || 'No tasks were removed.',
error: finalError || null,
removedTasks: results.removedTasks
};
} catch (error) {
// Catch errors from reading file or other initial setup
log('error', `Error removing tasks: ${error.message}`);
return {
success: false,
message: '',
error: `Operation failed: ${error.message}`,
removedTasks: []
};
}
}
export default removeTask;
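
A minimal usage sketch; the ID list is illustrative:

import removeTask from './remove-task.js';

// Remove a mix of tasks and subtasks in one pass.
const { success, message, error, removedTasks } = await removeTask(
  'tasks/tasks.json',
  '5,6.1,7'
);
if (!success) console.error(error);
console.log(message, `(${removedTasks.length} removed)`);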

View File

@@ -0,0 +1,114 @@
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import { log, readJSON, writeJSON, findTaskById } from '../utils.js';
import { displayBanner } from '../ui.js';
import { validateTaskDependencies } from '../dependency-manager.js';
import { getDebugFlag } from '../config-manager.js';
import updateSingleTaskStatus from './update-single-task-status.js';
import generateTaskFiles from './generate-task-files.js';
/**
* Set the status of a task
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} taskIdInput - Task ID(s) to update
* @param {string} newStatus - New status
* @param {Object} options - Additional options (mcpLog for MCP mode)
* @returns {Object} Result object with a success flag and the updated task IDs
*/
async function setTaskStatus(tasksPath, taskIdInput, newStatus, options = {}) {
try {
// Determine if we're in MCP mode by checking for mcpLog
const isMcpMode = !!options?.mcpLog;
// Only display UI elements if not in MCP mode
if (!isMcpMode) {
displayBanner();
console.log(
boxen(chalk.white.bold(`Updating Task Status to: ${newStatus}`), {
padding: 1,
borderColor: 'blue',
borderStyle: 'round'
})
);
}
log('info', `Reading tasks from ${tasksPath}...`);
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
}
// Handle multiple task IDs (comma-separated)
const taskIds = taskIdInput.split(',').map((id) => id.trim());
const updatedTasks = [];
// Update each task
for (const id of taskIds) {
await updateSingleTaskStatus(tasksPath, id, newStatus, data, !isMcpMode);
updatedTasks.push(id);
}
// Write the updated tasks to the file
writeJSON(tasksPath, data);
// Validate dependencies after status update
log('info', 'Validating dependencies after status update...');
validateTaskDependencies(data.tasks);
// Generate individual task files
log('info', 'Regenerating task files...');
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
mcpLog: options.mcpLog
});
// Display success message - only in CLI mode
if (!isMcpMode) {
for (const id of updatedTasks) {
const task = findTaskById(data.tasks, id);
const taskName = task ? task.title : id;
console.log(
boxen(
chalk.white.bold(`Successfully updated task ${id} status:`) +
'\n' +
`From: ${chalk.yellow(task ? task.status : 'unknown')}\n` +
`To: ${chalk.green(newStatus)}`,
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
)
);
}
}
// Return success value for programmatic use
return {
success: true,
updatedTasks: updatedTasks.map((id) => ({
id,
status: newStatus
}))
};
} catch (error) {
log('error', `Error setting task status: ${error.message}`);
// Only show error UI in CLI mode
if (!options?.mcpLog) {
console.error(chalk.red(`Error: ${error.message}`));
// Pass session to getDebugFlag
if (getDebugFlag(options?.session)) {
// Use getter
console.error(error);
}
process.exit(1);
} else {
// In MCP mode, throw the error for the caller to handle
throw error;
}
}
}
export default setTaskStatus;
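
A minimal usage sketch; the logger object is a hypothetical stand-in:

import setTaskStatus from './set-task-status.js';

// CLI mode: renders banners and boxes, exits the process on error.
await setTaskStatus('tasks/tasks.json', '4,4.1', 'done');

// MCP mode: passing mcpLog suppresses the UI; errors are thrown instead.
const result = await setTaskStatus('tasks/tasks.json', '5', 'in-progress', {
  mcpLog: myMcpLogger // hypothetical MCP logger object
});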

View File

@@ -0,0 +1,30 @@
/**
* Checks if a task with the given ID exists
* @param {Array} tasks - Array of tasks to search
* @param {string|number} taskId - ID of task or subtask to check
* @returns {boolean} Whether the task exists
*/
function taskExists(tasks, taskId) {
// Handle subtask IDs (e.g., "1.2")
if (typeof taskId === 'string' && taskId.includes('.')) {
const [parentIdStr, subtaskIdStr] = taskId.split('.');
const parentId = parseInt(parentIdStr, 10);
const subtaskId = parseInt(subtaskIdStr, 10);
// Find the parent task
const parentTask = tasks.find((t) => t.id === parentId);
// If parent exists, check if subtask exists
return (
parentTask &&
parentTask.subtasks &&
parentTask.subtasks.some((st) => st.id === subtaskId)
);
}
// Handle regular task IDs
const id = parseInt(taskId, 10);
return tasks.some((t) => t.id === id);
}
export default taskExists;
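
A minimal usage sketch showing both ID formats:

import taskExists from './task-exists.js';

const tasks = [{ id: 1, subtasks: [{ id: 2 }] }];

taskExists(tasks, 1);     // true  (top-level task)
taskExists(tasks, '1.2'); // true  (subtask "parentId.subtaskId")
taskExists(tasks, '1.3'); // false (no such subtask)
taskExists(tasks, 9);     // false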

Some files were not shown because too many files have changed in this diff.