docs: Update documentation for new AI/config architecture and finalize cleanup

This commit updates all relevant documentation (READMEs, docs/*, .cursor/rules) to accurately reflect the finalized unified AI service architecture and the new configuration system (.taskmasterconfig + .env/mcp.json). It also includes the final code cleanup steps related to the refactoring.

Key Changes:

1.  **Documentation Updates:**

    *   Revised `README.md`, `README-task-master.md`, `assets/scripts_README.md`, `docs/configuration.md`, and `docs/tutorial.md` to explain the new configuration split (.taskmasterconfig vs .env/mcp.json).

    *   Updated MCP configuration examples in READMEs and tutorials to include only API keys in the `env` block.

    *   Added/updated examples for using the `--research` flag in `docs/command-reference.md`, `docs/examples.md`, and `docs/tutorial.md`.

    *   Updated `.cursor/rules/ai_services.mdc`, `.cursor/rules/architecture.mdc`, `.cursor/rules/dev_workflow.mdc`, `.cursor/rules/mcp.mdc`, `.cursor/rules/taskmaster.mdc`, `.cursor/rules/utilities.mdc`, and `.cursor/rules/new_features.mdc` to align with the new architecture, removing references to old patterns/files.

    *   Removed internal rule links from user-facing rules (`taskmaster.mdc`, `dev_workflow.mdc`, `self_improve.mdc`).

    *   Deleted outdated example file `docs/ai-client-utils-example.md`.

2.  **Final Code Refactor & Cleanup:**

    *   Corrected `update-task-by-id.js` by removing the last import from the old `ai-services.js`.

    *   Refactored `update-subtask-by-id.js` to correctly use the unified service and logger patterns.

    *   Removed the obsolete export block from `mcp-server/src/core/task-master-core.js`.

    *   Corrected logger implementation in `update-tasks.js` for CLI context.

    *   Updated API key mapping in `config-manager.js` and `ai-services-unified.js`.

3.  **Configuration Files:**

    *   Updated API keys in `.cursor/mcp.json`, replacing `GROK_API_KEY` with `XAI_API_KEY`.

    *   Updated `.env.example` with current API key names.

    *   Added `azureOpenaiBaseUrl` to `.taskmasterconfig` example.

4.  **Task Management:**

    *   Marked documentation subtask 61.10 as 'done'.

    *   Includes various other task content/status updates from the diff summary.

5.  **Changeset:**

    *   Added `.changeset/cuddly-zebras-matter.md` for user-facing `expand`/`expand-all` improvements.

This commit concludes the major architectural refactoring (Task 61) and ensures the documentation accurately reflects the current system.
Author: Eyal Toledano
Date: 2025-04-25 14:43:12 -04:00
Parent: afb47584bd
Commit: 36d559db26
24 changed files with 477 additions and 925 deletions

View File: .cursor/mcp.json

@@ -4,14 +4,15 @@
"command": "node",
"args": ["./mcp-server/server.js"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-apikeyhere",
"PERPLEXITY_API_KEY": "pplx-apikeyhere",
"OPENAI_API_KEY": "sk-proj-1234567890",
"GOOGLE_API_KEY": "AIzaSyB1234567890",
"GROK_API_KEY": "gsk_1234567890",
"MISTRAL_API_KEY": "mst_1234567890",
"AZURE_OPENAI_API_KEY": "1234567890",
"AZURE_OPENAI_ENDPOINT": "https://your-endpoint.openai.azure.com/"
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
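For orientation, this `env` block sits inside a standard `mcpServers` entry. A minimal sketch of the full file, assuming the conventional server name `taskmaster-ai` (the name and the truncated key list here are illustrative):

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "node",
      "args": ["./mcp-server/server.js"],
      "env": {
        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE"
      }
    }
  }
}
```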

View File: .cursor/rules/ai_services.mdc

@@ -5,114 +5,97 @@ globs: scripts/modules/ai-services-unified.js, scripts/modules/task-manager/*.js
# AI Services Layer Guidelines
This document outlines the architecture and usage patterns for interacting with Large Language Models (LLMs) via the Task Master's unified AI service layer. The goal is to centralize configuration, provider selection, API key management, fallback logic, and error handling.
This document outlines the architecture and usage patterns for interacting with Large Language Models (LLMs) via Task Master's unified AI service layer (`ai-services-unified.js`). The goal is to centralize configuration, provider selection, API key management, fallback logic, and error handling.
**Core Components:**
* **Configuration (`.taskmasterconfig` & [`config-manager.js`](mdc:scripts/modules/config-manager.js)):**
* Defines the AI provider and model ID for different roles (`main`, `research`, `fallback`).
* Defines the AI provider and model ID for different **roles** (`main`, `research`, `fallback`).
* Stores parameters like `maxTokens` and `temperature` per role.
* Managed via `task-master models --setup`.
* [`config-manager.js`](mdc:scripts/modules/config-manager.js) provides getters (e.g., `getMainProvider()`, `getMainModelId()`, `getParametersForRole()`) to access these settings.
* API keys are **NOT** stored here; they are resolved via `resolveEnvVariable` from `.env` or MCP session env. See [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc).
* Relies on `data/supported-models.json` for model validation and metadata.
* Managed via the `task-master models --setup` CLI command.
* [`config-manager.js`](mdc:scripts/modules/config-manager.js) provides **getters** (e.g., `getMainProvider()`, `getParametersForRole()`) to access these settings. Core logic should **only** use these getters for *non-AI related application logic* (e.g., `getDefaultSubtasks`). The unified service fetches necessary AI parameters internally based on the `role`.
* **API keys** are **NOT** stored here; they are resolved via `resolveEnvVariable` (in [`utils.js`](mdc:scripts/modules/utils.js)) from `.env` (for CLI) or the MCP `session.env` object (for MCP calls). See [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc).
* **Unified Service (`ai-services-unified.js`):**
* Exports primary interaction functions: `generateTextService`, `streamTextService`, `generateObjectService`.
* Exports primary interaction functions: `generateTextService`, `generateObjectService`. (Note: `streamTextService` exists but has known reliability issues with some providers/payloads).
* Contains the core `_unifiedServiceRunner` logic.
* Uses `config-manager.js` getters to determine the provider/model based on the requested `role`.
* Implements the fallback sequence (main -> fallback -> research or variations).
* Constructs the `messages` array (`[{ role: 'system', ... }, { role: 'user', ... }]`) required by the Vercel AI SDK.
* Calls internal retry logic (`_attemptProviderCallWithRetries`).
* Resolves API keys via `_resolveApiKey`.
* Maps requests to the correct provider implementation via `PROVIDER_FUNCTIONS`.
* Internally uses `config-manager.js` getters to determine the provider/model/parameters based on the requested `role`.
* Implements the **fallback sequence** (e.g., main -> fallback -> research) if the primary provider/model fails.
* Constructs the `messages` array required by the Vercel AI SDK.
* Implements **retry logic** for specific API errors (`_attemptProviderCallWithRetries`).
* Resolves API keys automatically via `_resolveApiKey` (using `resolveEnvVariable`).
* Maps requests to the correct provider implementation (in `src/ai-providers/`) via `PROVIDER_FUNCTIONS`.
* **Provider Implementations (`src/ai-providers/*.js`):**
* Contain provider-specific code (e.g., `src/ai-providers/anthropic.js`).
* Import Vercel AI SDK provider adapters (`@ai-sdk/anthropic`, `@ai-sdk/perplexity`, etc.).
* Wrap core Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`).
* Accept standard parameters (`apiKey`, `modelId`, `messages`, `maxTokens`, etc.).
* Return results in the format expected by `_unifiedServiceRunner`.
* Contain provider-specific wrappers around Vercel AI SDK functions (`generateText`, `generateObject`).
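For orientation, a minimal sketch of what one of these wrappers can look like, assuming the Vercel AI SDK's `@ai-sdk/anthropic` adapter. The function name `generateAnthropicText` follows the naming used elsewhere in these rules, but the body is illustrative, not the actual code in `src/ai-providers/anthropic.js`:
```javascript
import { createAnthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

// Hypothetical sketch of a provider wrapper
export async function generateAnthropicText({ apiKey, modelId, messages, maxTokens, temperature }) {
  // Bind a provider instance to the API key resolved by the unified service
  const anthropic = createAnthropic({ apiKey });
  // Delegate to the core Vercel AI SDK call and return plain text
  const { text } = await generateText({
    model: anthropic(modelId),
    messages,
    maxTokens,
    temperature
  });
  return text;
}
```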
**Usage Pattern (from Core Logic like `task-manager`):**
**Usage Pattern (from Core Logic like `task-manager/*.js`):**
1. **Choose Service:** Decide whether you need a full text response (`generateTextService`) or a stream (`streamTextService`).
* ✅ **DO**: **Prefer `generateTextService`** for interactions that send large context payloads (e.g., stringified JSON) and **do not** require incremental display in the UI. This is currently more reliable, especially if Anthropic is the configured provider.
* ⚠️ **CAUTION**: `streamTextService` may be unreliable with the Vercel SDK's Anthropic adapter when sending large user messages. Use with caution or stick to `generateTextService` for such cases until SDK improvements are confirmed.
2. **Import Service:** Import the chosen service function from `../ai-services-unified.js`.
1. **Import Service:** Import `generateTextService` or `generateObjectService` from `../ai-services-unified.js`.
```javascript
// Preferred for updateSubtaskById, parsePRD, etc.
// Preferred for most tasks (especially with complex JSON)
import { generateTextService } from '../ai-services-unified.js';
// Use only if incremental display is implemented AND provider streaming is reliable
// import { streamTextService } from '../ai-services-unified.js';
// Use if structured output is reliable for the specific use case
// import { generateObjectService } from '../ai-services-unified.js';
```
3. **Prepare Parameters:** Construct the parameters object.
* `role`: `'main'`, `'research'`, or `'fallback'`. Determines the initial provider/model attempt.
* `session`: Pass the MCP `session` object if available (for API key resolution), otherwise `null` or omit.
2. **Prepare Parameters:** Construct the parameters object for the service call.
* `role`: **Required.** `'main'`, `'research'`, or `'fallback'`. Determines the initial provider/model/parameters used by the unified service.
* `session`: **Required if called from MCP context.** Pass the `session` object received by the direct function wrapper. The unified service uses `session.env` to find API keys.
* `systemPrompt`: Your system instruction string.
* `prompt`: The user message string (can be long, include stringified data, etc.).
* (For `generateObjectService`): `schema`, `objectName`.
* (For `generateObjectService` only): `schema` (Zod schema), `objectName`.
4. **Call Service:** Use `await` to call the service function.
3. **Call Service:** Use `await` to call the service function.
```javascript
// Example using generateTextService
// Example using generateTextService (most common)
try {
const resultText = await generateTextService({
role: 'main', // Or 'research'/'fallback'
session: session, // Or null
systemPrompt: "You are...",
prompt: userMessageContent // Can include stringified JSON etc.
});
additionalInformation = resultText.trim();
// ... process resultText ...
} catch (error) {
// Handle errors thrown if all providers/retries fail
report(`AI service call failed: ${error.message}`, 'error');
throw error;
}
// Example using streamTextService (Use with caution for Anthropic/large payloads)
try {
const streamResult = await streamTextService({
role: 'main',
session: session,
role: useResearch ? 'research' : 'main', // Determine role based on logic
session: context.session, // Pass session from context object
systemPrompt: "You are...",
prompt: userMessageContent
});
// Check if a stream was actually returned (might be null if overridden)
if (streamResult.textStream) {
for await (const chunk of streamResult.textStream) {
additionalInformation += chunk;
}
additionalInformation = additionalInformation.trim();
} else if (streamResult.text) {
// Handle case where generateText was used internally (Anthropic override)
// NOTE: This override logic is currently REMOVED as we prefer generateTextService directly
additionalInformation = streamResult.text.trim();
} else {
additionalInformation = ''; // Should not happen
}
// ... process additionalInformation ...
// Process the raw text response (e.g., parse JSON, use directly)
// ...
} catch (error) {
report(`AI service call failed: ${error.message}`, 'error');
// Handle errors thrown by the unified service (if all fallbacks/retries fail)
report('error', `Unified AI service call failed: ${error.message}`);
throw error;
}
// Example using generateObjectService (use cautiously)
try {
const resultObject = await generateObjectService({
role: 'main',
session: context.session,
schema: myZodSchema,
objectName: 'myDataObject',
systemPrompt: "You are...",
prompt: userMessageContent
});
// resultObject is already a validated JS object
// ...
} catch (error) {
report('error', `Unified AI service call failed: ${error.message}`);
throw error;
}
```
5. **Handle Results/Errors:** Process the returned text/stream/object or handle errors thrown by the service layer.
4. **Handle Results/Errors:** Process the returned text/object or handle errors thrown by the unified service layer.
**Key Implementation Rules & Gotchas:**
* ✅ **DO**: Centralize all AI calls through `generateTextService` / `streamTextService`.
* ✅ **DO**: Ensure `.taskmasterconfig` has valid provider names, model IDs, and parameters (`maxTokens` appropriate for the model).
* ✅ **DO**: Ensure API keys are correctly configured in `.env` / `.cursor/mcp.json`.
* ✅ **DO**: Pass the `session` object to the service call if available (for MCP calls).
* **DON'T**: Call Vercel AI SDK functions (`streamText`, `generateText`) directly from `task-manager` or commands.
* ✅ **DO**: Centralize **all** LLM calls through `generateTextService` or `generateObjectService`.
* ✅ **DO**: Determine the appropriate `role` (`main`, `research`, `fallback`) in your core logic and pass it to the service.
* ✅ **DO**: Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context.
* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP).
* ✅ **DO**: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
* ✅ **DO**: Use `generateTextService` and implement robust manual JSON parsing (with Zod validation *after* parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas. A sketch of this pattern appears at the end of this section.
* ❌ **DON'T**: Import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
* ❌ **DON'T**: Initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (`task-manager/`) or MCP direct functions.
* ❌ **DON'T**: Fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters *for the AI call*. Pass the `role` instead.
* ❌ **DON'T**: Implement fallback or retry logic outside `ai-services-unified.js`.
* ❌ **DON'T**: Handle API key resolution outside the service layer.
* ⚠️ **Streaming Caution**: Be aware of potential reliability issues using `streamTextService` with Anthropic/large payloads via the SDK. Prefer `generateTextService` for these cases until proven otherwise.
* ⚠️ **Debugging Imports**: If you get `"X is not defined"` errors related to service functions, check for internal errors within `ai-services-unified.js` (like incorrect import paths or syntax errors).
* ❌ **DON'T**: Handle API key resolution outside the service layer (it uses `utils.js` internally).
* ⚠️ **generateObjectService Caution**: Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.
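To make the recommended "text + manual parsing" pattern concrete, here is a minimal sketch combining `generateTextService` with post-parse Zod validation. The schema, prompts, and fence-stripping regex are illustrative assumptions, not code from this repository:
```javascript
import { z } from 'zod';
import { generateTextService } from '../ai-services-unified.js';

// Hypothetical schema for the structured output we expect back
const subtaskSchema = z.object({
  title: z.string(),
  description: z.string()
});

async function generateSubtasks(context) {
  const resultText = await generateTextService({
    role: 'main',
    session: context.session,
    systemPrompt: 'Respond ONLY with a valid JSON array of subtask objects.',
    prompt: 'Generate two subtasks for setting up CI.'
  });
  // Strip any markdown code fences the model may wrap around the JSON
  const cleaned = resultText.replace(/`{3}(json)?/g, '').trim();
  const parsed = JSON.parse(cleaned); // throws on malformed JSON; handle upstream
  // Validate AFTER parsing, per the guideline above
  return z.array(subtaskSchema).parse(parsed);
}
```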

View File: .cursor/rules/architecture.mdc

@@ -3,7 +3,6 @@ description: Describes the high-level architecture of the Task Master CLI applic
globs: scripts/modules/*.js
alwaysApply: false
---
# Application Architecture Overview
- **Modular Structure**: The Task Master CLI is built using a modular architecture, with distinct modules responsible for different aspects of the application. This promotes separation of concerns, maintainability, and testability.
@@ -14,173 +13,73 @@ alwaysApply: false
- **Purpose**: Defines and registers all CLI commands using Commander.js.
- **Responsibilities** (See also: [`commands.mdc`](mdc:.cursor/rules/commands.mdc)):
- Parses command-line arguments and options.
- Invokes appropriate functions from other modules to execute commands (e.g., calls `initializeProject` from `init.js` for the `init` command).
- Handles user input and output related to command execution.
- Implements input validation and error handling for CLI commands.
- **Key Components**:
- `programInstance` (Commander.js `Command` instance): Manages command definitions.
- `registerCommands(programInstance)`: Function to register all application commands.
- Command action handlers: Functions executed when a specific command is invoked, delegating to core modules.
- Invokes appropriate core logic functions from `scripts/modules/`.
- Handles user input/output for CLI.
- Implements CLI-specific validation.
- **[`task-manager.js`](mdc:scripts/modules/task-manager.js): Task Data Management**
- **Purpose**: Manages task data, including loading, saving, creating, updating, deleting, and querying tasks.
- **[`task-manager.js`](mdc:scripts/modules/task-manager.js) & `task-manager/` directory: Task Data & Core Logic**
- **Purpose**: Contains core functions for task data manipulation (CRUD), AI interactions, and related logic.
- **Responsibilities**:
- Reads and writes task data to `tasks.json` file.
- Implements functions for task CRUD operations (Create, Read, Update, Delete).
- Handles task parsing from PRD documents using AI.
- Manages task expansion and subtask generation.
- Updates task statuses and properties.
- Implements task listing and display logic.
- Performs task complexity analysis using AI.
- **Key Functions**:
- `readTasks(tasksPath)` / `writeTasks(tasksPath, tasksData)`: Load and save task data.
- `parsePRD(prdFilePath, outputPath, numTasks)`: Parses PRD document to create tasks.
- `expandTask(taskId, numSubtasks, useResearch, prompt, force)`: Expands a task into subtasks.
- `setTaskStatus(tasksPath, taskIdInput, newStatus)`: Updates task status.
- `listTasks(tasksPath, statusFilter, withSubtasks)`: Lists tasks with filtering and subtask display options.
- `analyzeComplexity(tasksPath, reportPath, useResearch, thresholdScore)`: Analyzes task complexity.
- Reading/writing `tasks.json`.
- Implementing functions for task CRUD, parsing PRDs, expanding tasks, updating status, etc.
- **Delegating AI interactions** to the `ai-services-unified.js` layer.
- Accessing non-AI configuration via `config-manager.js` getters.
- **Key Files**: Individual files within `scripts/modules/task-manager/` handle specific actions (e.g., `add-task.js`, `expand-task.js`).
- **[`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js): Dependency Management**
- **Purpose**: Manages task dependencies, including adding, removing, validating, and fixing dependency relationships.
- **Responsibilities**:
- Adds and removes task dependencies.
- Validates dependency relationships to prevent circular dependencies and invalid references.
- Fixes invalid dependencies by removing non-existent or self-referential dependencies.
- Provides functions to check for circular dependencies.
- **Key Functions**:
- `addDependency(tasksPath, taskId, dependencyId)`: Adds a dependency between tasks.
- `removeDependency(tasksPath, taskId, dependencyId)`: Removes a dependency.
- `validateDependencies(tasksPath)`: Validates task dependencies.
- `fixDependencies(tasksPath)`: Fixes invalid task dependencies.
- `isCircularDependency(tasks, taskId, dependencyChain)`: Detects circular dependencies.
- **Purpose**: Manages task dependencies.
- **Responsibilities**: Add/remove/validate/fix dependencies.
- **[`ui.js`](mdc:scripts/modules/ui.js): User Interface Components**
- **Purpose**: Handles all user interface elements, including displaying information, formatting output, and providing user feedback.
- **Responsibilities**:
- Displays task lists, task details, and command outputs in a formatted way.
- Uses `chalk` for colored output and `boxen` for boxed messages.
- Implements table display using `cli-table3`.
- Shows loading indicators using `ora`.
- Provides helper functions for status formatting, dependency display, and progress reporting.
- Suggests next actions to the user after command execution.
- **Key Functions**:
- `displayTaskList(tasks, statusFilter, withSubtasks)`: Displays a list of tasks in a table.
- `displayTaskDetails(task)`: Displays detailed information for a single task.
- `displayComplexityReport(reportPath)`: Displays the task complexity report.
- `startLoadingIndicator(message)` / `stopLoadingIndicator(indicator)`: Manages loading indicators.
- `getStatusWithColor(status)`: Returns status string with color formatting.
- `formatDependenciesWithStatus(dependencies, allTasks, inTable)`: Formats dependency list with status indicators.
- **Purpose**: Handles CLI output formatting (tables, colors, boxes, spinners).
- **Responsibilities**: Displaying tasks, reports, progress, suggestions.
- **[`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js): Unified AI Service Layer**
- **Purpose**: Provides a centralized interface for interacting with various Large Language Models (LLMs) using the Vercel AI SDK.
- **Purpose**: Centralized interface for all LLM interactions using Vercel AI SDK.
- **Responsibilities** (See also: [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc)):
- Exports primary functions (`generateTextService`, `streamTextService`, `generateObjectService`) for core modules to use.
- Implements provider selection logic based on configuration roles (`main`, `research`, `fallback`) retrieved from [`config-manager.js`](mdc:scripts/modules/config-manager.js).
- Manages API key resolution (via [`utils.js`](mdc:scripts/modules/utils.js)) from environment or MCP session.
- Handles fallback sequences between configured providers.
- Implements retry logic for specific API errors.
- Constructs the `messages` array format required by the Vercel AI SDK.
- Delegates actual API calls to provider-specific implementation modules.
- **Key Components**:
- `_unifiedServiceRunner`: Core logic for provider selection, fallback, and retries.
- `PROVIDER_FUNCTIONS`: Map linking provider names to their implementation functions.
- `generateTextService`, `streamTextService`, `generateObjectService`: Exported functions.
- Exports `generateTextService`, `generateObjectService`.
- Handles provider/model selection based on `role` and `.taskmasterconfig`.
- Resolves API keys (from `.env` or `session.env`).
- Implements fallback and retry logic.
- Orchestrates calls to provider-specific implementations (`src/ai-providers/`).
- **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations**
- **Purpose**: Contains the wrapper code for interacting with specific LLM providers via the Vercel AI SDK.
- **Responsibilities** (See also: [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc)):
- Imports Vercel AI SDK provider adapters (e.g., `@ai-sdk/anthropic`).
- Implements standardized functions (e.g., `generateAnthropicText`, `streamAnthropicText`) that wrap the core Vercel AI SDK functions (`generateText`, `streamText`).
- Accepts standardized parameters (`apiKey`, `modelId`, `messages`, etc.) from `ai-services-unified.js`.
- Returns results in the format expected by `ai-services-unified.js`.
- **Purpose**: Provider-specific wrappers for Vercel AI SDK functions.
- **Responsibilities**: Interact directly with Vercel AI SDK adapters.
- **[`config-manager.js`](mdc:scripts/modules/config-manager.js): Configuration Management**
- **Purpose**: Manages loading, validation, and access to configuration settings, primarily from `.taskmasterconfig`.
- **Purpose**: Loads, validates, and provides access to configuration.
- **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)):
- Reads and parses the `.taskmasterconfig` file.
- Merges file configuration with default values.
- Provides getters for accessing specific configuration values (e.g., `getMainProvider()`, `getMainModelId()`, `getParametersForRole()`, `getLogLevel()`).
- **Note**: Does *not* handle API key storage (keys are in `.env` or MCP `session.env`).
- **Key Components**:
- `getConfig()`: Loads and returns the merged configuration object.
- Role-specific getters (e.g., `getMainProvider`, `getMainModelId`, `getMainMaxTokens`).
- Global setting getters (e.g., `getLogLevel`, `getDebugFlag`).
- Reads and merges `.taskmasterconfig` with defaults.
- Provides getters (e.g., `getMainProvider`, `getLogLevel`, `getDefaultSubtasks`) for accessing settings.
- **Note**: Does **not** store or directly handle API keys (keys are in `.env` or MCP `session.env`).
- **[`utils.js`](mdc:scripts/modules/utils.js): Core Utility Functions**
- **Purpose**: Provides low-level, reusable utility functions used across the **CLI application**. **Note:** Configuration management is now handled by [`config-manager.js`](mdc:scripts/modules/config-manager.js).
- **Purpose**: Low-level, reusable CLI utilities.
- **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)):
- Implements logging utility with different log levels and output formatting.
- Provides file system operation utilities (read/write JSON files).
- Includes string manipulation utilities (e.g., `truncate`, `sanitizePrompt`).
- Offers task-specific utility functions (e.g., `formatTaskId`, `findTaskById`, `taskExists`).
- Implements graph algorithms like cycle detection for dependency management.
- Provides API Key resolution logic (`resolveEnvVariable`) used by `config-manager.js`.
- **Silent Mode Control**: Provides `enableSilentMode` and `disableSilentMode` functions to control log output.
- **Key Components**:
- `log(level, ...args)`: Logging function.
- `readJSON(filepath)` / `writeJSON(filepath, data)`: File I/O utilities for JSON files.
- `truncate(text, maxLength)`: String truncation utility.
- `formatTaskId(id)` / `findTaskById(tasks, taskId)`: Task ID and search utilities.
- `findCycles(subtaskId, dependencyMap)`: Cycle detection algorithm.
- `enableSilentMode()` / `disableSilentMode()`: Control console logging output.
- `resolveEnvVariable(key, session)`: Resolves environment variables (primarily API keys) from `process.env` and `session.env`.
- Logging (`log` function), File I/O (`readJSON`, `writeJSON`), String utils (`truncate`).
- Task utils (`findTaskById`), Dependency utils (`findCycles`).
- API Key Resolution (`resolveEnvVariable`; see the sketch after the data-flow list below).
- Silent Mode Control (`enableSilentMode`, `disableSilentMode`).
- **[`mcp-server/`](mdc:mcp-server/): MCP Server Integration**
- **Purpose**: Provides an MCP (Model Context Protocol) interface for Task Master, allowing integration with external tools like Cursor. Uses FastMCP framework.
- **Purpose**: Provides MCP interface using FastMCP.
- **Responsibilities** (See also: [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)):
- Registers Task Master functionalities as tools consumable via MCP.
- Handles MCP requests via tool `execute` methods defined in `mcp-server/src/tools/*.js`.
- Tool `execute` methods call corresponding **direct function wrappers**.
- Tool `execute` methods use `getProjectRootFromSession` (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) to determine the project root from the client session and pass it to the direct function.
- **Direct function wrappers (`*Direct` functions in `mcp-server/src/core/direct-functions/*.js`) contain the main logic for handling MCP requests**, including path resolution, argument validation, caching, and calling core Task Master functions.
- Direct functions use `findTasksJsonPath` (from [`core/utils/path-utils.js`](mdc:mcp-server/src/core/utils/path-utils.js)) to locate `tasks.json` based on the provided `projectRoot`.
- **Silent Mode Implementation**: Direct functions use `enableSilentMode` and `disableSilentMode` to prevent logs from interfering with JSON responses.
- **Project Initialization**: Provides `initialize_project` command for setting up new projects from within integrated clients.
- Tool `execute` methods use `handleApiResult` from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) to process the result from the direct function and format the final MCP response.
- Uses CLI execution via `executeTaskMasterCommand` as a fallback only when necessary.
- **Implements Robust Path Finding**: The utility [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) (specifically `getProjectRootFromSession`) and [`core/utils/path-utils.js`](mdc:mcp-server/src/core/utils/path-utils.js) (specifically `findTasksJsonPath`) work together. The tool gets the root via session, passes it to the direct function, which uses `findTasksJsonPath` to locate the specific `tasks.json` file within that root.
- **Implements Caching**: Utilizes a caching layer (`ContextManager` with `lru-cache`). Caching logic is invoked *within* the direct function wrappers using the `getCachedOrExecute` utility for performance-sensitive read operations.
- Standardizes response formatting and data filtering using utilities in [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js).
- **Resource Management**: Provides access to static and dynamic resources.
- **Key Components**:
- `mcp-server/src/index.js`: Main server class definition with FastMCP initialization, resource registration, and server lifecycle management.
- `mcp-server/src/server.js`: Main server setup and initialization.
- `mcp-server/src/tools/`: Directory containing individual tool definitions. Each tool's `execute` method orchestrates the call to core logic and handles the response.
- `mcp-server/src/tools/utils.js`: Provides MCP-specific utilities like `handleApiResult`, `processMCPResponseData`, `getCachedOrExecute`, and **`getProjectRootFromSession`**.
- `mcp-server/src/core/utils/`: Directory containing utility functions specific to the MCP server, like **`path-utils.js` for resolving `tasks.json` within a given root**.
- `mcp-server/src/core/direct-functions/`: Directory containing individual files for each **direct function wrapper (`*Direct`)**. These files contain the primary logic for MCP tool execution.
- `mcp-server/src/core/resources/`: Directory containing resource handlers for task templates, workflow definitions, and other static/dynamic data exposed to LLM clients.
- [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js): Acts as an import/export hub, collecting and exporting direct functions from the `direct-functions` directory and MCP utility functions.
- **Naming Conventions**:
- **Files** use **kebab-case**: `list-tasks.js`, `set-task-status.js`, `parse-prd.js`
- **Direct Functions** use **camelCase** with `Direct` suffix: `listTasksDirect`, `setTaskStatusDirect`, `parsePRDDirect`
- **Tool Registration Functions** use **camelCase** with `Tool` suffix: `registerListTasksTool`, `registerSetTaskStatusTool`
- **MCP Tool Names** use **snake_case**: `list_tasks`, `set_task_status`, `parse_prd_document`
- **Resource Handlers** use **camelCase** with pattern URI: `@mcp.resource("tasks://templates/{template_id}")`
- Registers tools (`mcp-server/src/tools/*.js`).
- Tool `execute` methods call **direct function wrappers** (`mcp-server/src/core/direct-functions/*.js`).
- Direct functions use path utilities (`mcp-server/src/core/utils/`) to resolve paths based on `projectRoot` from session.
- Direct functions implement silent mode, logger wrappers, and call core logic functions from `scripts/modules/`.
- Manages MCP caching and response formatting.
- **[`init.js`](mdc:scripts/init.js): Project Initialization Logic**
- **Purpose**: Contains the core logic for setting up a new Task Master project structure.
- **Responsibilities**:
- Creates necessary directories (`.cursor/rules`, `scripts`, `tasks`).
- Copies template files (`.env.example`, `.gitignore`, rule files, `dev.js`, etc.).
- Creates or merges `package.json` with required dependencies and scripts.
- Sets up MCP configuration (`.cursor/mcp.json`).
- Optionally initializes a git repository and installs dependencies.
- Handles user prompts for project details *if* called without skip flags (`-y`).
- **Key Function**:
- `initializeProject(options)`: The main function exported and called by the `init` command's action handler in [`commands.js`](mdc:scripts/modules/commands.js). It receives parsed options directly.
- **Note**: This script is used as a module and no longer handles its own argument parsing or direct execution via a separate `bin` file.
- **Purpose**: Sets up new Task Master project structure.
- **Responsibilities**: Creates directories, copies templates, manages `package.json`, sets up `.cursor/mcp.json`.
- **Data Flow and Module Dependencies**:
- **Data Flow and Module Dependencies (Updated)**:
- **Commands Initiate Actions**: User commands entered via the CLI (parsed by `commander` based on definitions in [`commands.js`](mdc:scripts/modules/commands.js)) are the entry points for most operations.
- **Command Handlers Delegate to Core Logic**: Action handlers within [`commands.js`](mdc:scripts/modules/commands.js) call functions in core modules like [`task-manager.js`](mdc:scripts/modules/task-manager.js), [`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js), and [`init.js`](mdc:scripts/init.js) (for the `init` command) to perform the actual work.
- **Core Logic Calls AI Service Layer**: Core modules requiring AI functionality (like [`task-manager.js`](mdc:scripts/modules/task-manager.js)) **import and call functions from the unified AI service layer (`ai-services-unified.js`)**, such as `generateTextService`.
- **AI Service Layer Orchestrates**: [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js) uses [`config-manager.js`](mdc:scripts/modules/config-manager.js) to get settings, selects the appropriate provider function from [`src/ai-providers/*.js`](mdc:src/ai-providers/), resolves API keys (using `resolveEnvVariable` from [`utils.js`](mdc:scripts/modules/utils.js)), and handles fallbacks/retries.
- **Provider Implementation Executes**: The selected function in [`src/ai-providers/*.js`](mdc:src/ai-providers/) interacts with the Vercel AI SDK core functions (`generateText`, `streamText`) using the Vercel provider adapters.
- **UI for Presentation**: [`ui.js`](mdc:scripts/modules/ui.js) is used by command handlers and core modules to display information to the user. UI functions primarily consume data and format it for output.
- **Utilities for Common Tasks**: [`utils.js`](mdc:scripts/modules/utils.js) provides helper functions (logging, file I/O, string manipulation, API key resolution) used by various modules.
- **MCP Server Interaction**: External tools interact with the `mcp-server`. MCP Tool `execute` methods call direct function wrappers (`*Direct` functions) which then call the core logic from `scripts/modules/`. If AI is needed, the core logic calls the unified AI service layer as described above. See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details.
- **CLI**: `bin/task-master.js` -> `scripts/dev.js` (loads `.env`) -> `scripts/modules/commands.js` -> Core Logic (`scripts/modules/*`) -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API.
- **MCP**: External Tool -> `mcp-server/server.js` -> Tool (`mcp-server/src/tools/*`) -> Direct Function (`mcp-server/src/core/direct-functions/*`) -> Core Logic (`scripts/modules/*`) -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API.
- **Configuration**: Core logic needing non-AI settings calls `config-manager.js` getters (passing `session.env` via `explicitRoot` if from MCP). Unified AI Service internally calls `config-manager.js` getters (using `role`) for AI params and `utils.js` (`resolveEnvVariable` with `session.env`) for API keys.
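To illustrate the key-resolution step named above, a minimal sketch of `resolveEnvVariable`, assuming MCP `session.env` takes precedence over `process.env` (the real logic lives in `scripts/modules/utils.js`):
```javascript
// Hypothetical sketch; see scripts/modules/utils.js for the actual implementation
function resolveEnvVariable(key, session = null) {
  // MCP context: keys arrive via the session's env block (.cursor/mcp.json)
  if (session?.env?.[key]) {
    return session.env[key];
  }
  // CLI context: keys come from the project's .env, loaded into process.env
  return process.env[key];
}
```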
## Silent Mode Implementation Pattern in MCP Direct Functions

View File: .cursor/rules/dev_workflow.mdc

@@ -3,7 +3,6 @@ description: Guide for using Task Master to manage task-driven development workf
globs: **/*
alwaysApply: true
---
# Task Master Development Workflow
This guide outlines the typical process for using Task Master to manage software development projects.
@@ -29,21 +28,21 @@ Task Master offers two primary ways to interact:
## Standard Development Workflow Process
- Start new projects by running `init` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json
- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json
- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Analyze task complexity with `analyze_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- Clarify tasks by checking task files in tasks/ directory or asking for user input
- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`.
- Clear existing subtasks if needed using `clear_subtasks` / `task-master clear-subtasks --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before regenerating
- Implement code following task details, dependencies, and project standards
- Verify tasks according to test strategies before marking as complete (See [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..." --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Add new subtasks as needed using `add_subtask` / `task-master add-subtask --parent=<id> --title="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Append notes or details to subtasks using `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='Add implementation notes here...\nMore details...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Generate task files with `generate` / `task-master generate` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) after updating tasks.json
@@ -53,29 +52,30 @@ Task Master offers two primary ways to interact:
## Task Complexity Analysis
- Run `analyze_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis
- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis
- Review complexity report via `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for a formatted, readable version.
- Focus on tasks with highest complexity scores (8-10) for detailed breakdown
- Use analysis results to determine appropriate subtask allocation
- Note that reports are automatically used by the `expand` tool/command
- Note that reports are automatically used by the `expand_task` tool/command
## Task Breakdown Process
- For tasks with complexity analysis, use `expand_task` / `task-master expand --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc))
- Otherwise use `expand_task` / `task-master expand --id=<id> --num=<number>`
- Add `--research` flag to leverage Perplexity AI for research-backed expansion
- Use `--prompt="<context>"` to provide additional context when needed
- Review and adjust generated subtasks as necessary
- Use `--all` flag with `expand` or `expand_all` to expand multiple pending tasks at once
- If subtasks need regeneration, clear them first with `clear_subtasks` / `task-master clear-subtasks` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.
## Implementation Drift Handling
- When implementation differs significantly from planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to update a single specific task.
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.
## Task Status Management
@@ -97,9 +97,9 @@ Task Master offers two primary ways to interact:
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to [`tasks.mdc`](mdc:.cursor/rules/tasks.mdc) for more details on the task data structure.
- Refer to task structure details (previously linked to `tasks.mdc`).
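A minimal sketch of a single task object assembled from the fields above; the `id`, `status`, `dependencies`, and `priority` values are illustrative assumptions, while `details` and `subtasks` borrow the examples given:
```json
{
  "id": 42,
  "title": "Add GitHub OAuth login",
  "description": "Implement the GitHub OAuth flow for user sign-in",
  "status": "pending",
  "dependencies": [40, 41],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback, set session token.",
  "testStrategy": "Log in with a test GitHub account and confirm a session token is set.",
  "subtasks": [{ "id": 1, "title": "Configure OAuth", "status": "pending" }]
}
```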
## Configuration Management
## Configuration Management (Updated)
Taskmaster configuration is managed through two main mechanisms:
@@ -114,15 +114,15 @@ Taskmaster configuration is managed through two main mechanisms:
* Used **only** for sensitive API keys and specific endpoint URLs.
* Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
* For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`.
* Available keys/variables: See `assets/env.example` or the `Configuration` section in [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).
* Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`).
**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the mcp.json
**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the .env in the root of the project.
**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.
**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project.
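To make the split concrete, a hypothetical `.taskmasterconfig` excerpt; the exact field names and grouping are assumptions (the file is normally generated and edited via `task-master models --setup`), and note that no API keys appear here:
```json
{
  "models": {
    "main": { "provider": "anthropic", "modelId": "claude-3-7-sonnet-20250219", "maxTokens": 64000, "temperature": 0.2 },
    "research": { "provider": "perplexity", "modelId": "sonar-pro" },
    "fallback": { "provider": "anthropic", "modelId": "claude-3-5-sonnet-20241022" }
  },
  "global": { "logLevel": "info", "defaultSubtasks": 5 }
}
```
API keys live only in `.env` (CLI) or the `env` block of `.cursor/mcp.json` (MCP).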
## Determining the Next Task
- Run `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to show the next task to work on
- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied
- Tasks are prioritized by priority level, dependency count, and ID
- The command shows comprehensive task information including:
@@ -137,7 +137,7 @@ Taskmaster configuration is managed through two main mechanisms:
## Viewing Specific Task Details
- Run `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to view a specific task
- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1)
- Displays comprehensive information similar to the next command, but for a specific task
- For parent tasks, shows all subtasks and their current status
@@ -147,8 +147,8 @@ Taskmaster configuration is managed through two main mechanisms:
## Managing Task Dependencies
- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to add a dependency
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to remove a dependency
- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries
- Dependencies are checked for existence before being added or removed
- Task files are automatically regenerated after dependency changes
@@ -168,14 +168,14 @@ Once a task has been broken down into subtasks using `expand_task` or similar me
* Gather *all* relevant details from this exploration phase.
3. **Log the Plan:**
* Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
* Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
* Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`.
4. **Verify the Plan:**
* Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.
5. **Begin Implementation:**
* Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
* Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
* Start coding based on the logged plan.
6. **Refine and Log Progress (Iteration 2+):**
@@ -193,7 +193,7 @@ Once a task has been broken down into subtasks using `expand_task` or similar me
7. **Review & Update Rules (Post-Implementation):**
* Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
* Identify any new or modified code patterns, conventions, or best practices established during the implementation.
* Create new or update existing Cursor rules in the `.cursor/rules/` directory to capture these patterns, following the guidelines in [`cursor_rules.mdc`](mdc:.cursor/rules/cursor_rules.mdc) and [`self_improve.mdc`](mdc:.cursor/rules/self_improve.mdc).
* Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`).
8. **Mark Task Complete:**
* After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.
@@ -202,10 +202,10 @@ Once a task has been broken down into subtasks using `expand_task` or similar me
* Stage the relevant code changes and any updated/new rule files (`git add .`).
* Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
* Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`).
* Consider if a Changeset is needed according to [`changeset.mdc`](mdc:.cursor/rules/changeset.mdc). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.
* Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.
10. **Proceed to Next Subtask:**
* Identify the next subtask in the dependency chain (e.g., using `next_task` / `task-master next`) and repeat this iterative process starting from step 1.
* Identify the next subtask (e.g., using `next_task` / `task-master next`).
## Code Analysis & Refactoring Techniques
@@ -215,10 +215,5 @@ Once a task has been broken down into subtasks using `expand_task` or similar me
`rg "export (async function|function|const) \w+"` or similar patterns.
- Can help compare functions between files during migrations or identify potential naming conflicts.
---
*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.*

View File: .cursor/rules/mcp.mdc

@@ -3,7 +3,6 @@ description: Guidelines for implementing and interacting with the Task Master MC
globs: mcp-server/src/**/*, scripts/modules/**/*
alwaysApply: false
---
# Task Master MCP Server Guidelines
This document outlines the architecture and implementation patterns for the Task Master Model Context Protocol (MCP) server, designed for integration with tools like Cursor.
@@ -90,69 +89,54 @@ When implementing a new direct function in `mcp-server/src/core/direct-functions
5. **Handling Logging Context (`mcpLog`)**:
- **Requirement**: Core functions that use the internal `report` helper function (common in `task-manager.js`, `dependency-manager.js`, etc.) expect the `options` object to potentially contain an `mcpLog` property. This `mcpLog` object **must** have callable methods for each log level (e.g., `mcpLog.info(...)`, `mcpLog.error(...)`).
- **Challenge**: The `log` object provided by FastMCP to the direct function's context, while functional, might not perfectly match this expected structure or could change in the future. Passing it directly can lead to runtime errors like `mcpLog[level] is not a function`.
- **Solution: The Logger Wrapper Pattern**: To reliably bridge the FastMCP `log` object and the core function's `mcpLog` expectation, use a simple wrapper object within the direct function:
- **Requirement**: Core functions (like those in `task-manager.js`) may accept an `options` object containing an optional `mcpLog` property. If provided, the core function expects this object to have methods like `mcpLog.info(...)`, `mcpLog.error(...)`.
- **Solution: The Logger Wrapper Pattern**: When calling a core function from a direct function, pass the `log` object provided by FastMCP *wrapped* in the standard `logWrapper` object. This ensures the core function receives a logger with the expected method structure.
```javascript
// Standard logWrapper pattern within a Direct Function
const logWrapper = {
info: (message, ...args) => log.info(message, ...args),
warn: (message, ...args) => log.warn(message, ...args),
error: (message, ...args) => log.error(message, ...args),
debug: (message, ...args) => log.debug && log.debug(message, ...args), // Handle optional debug
success: (message, ...args) => log.info(message, ...args) // Map success to info if needed
debug: (message, ...args) => log.debug && log.debug(message, ...args),
success: (message, ...args) => log.info(message, ...args)
};
// ... later when calling the core function ...
await coreFunction(
// ... other arguments ...
tasksPath,
taskId,
{
mcpLog: logWrapper, // Pass the wrapper object
session
session // Also pass session if needed by core logic or AI service
},
'json' // Pass 'json' output format if supported by core function
);
```
- **Critical For JSON Output Format**: Passing the `logWrapper` as `mcpLog` serves a dual purpose:
1. **Prevents Runtime Errors**: It ensures the `mcpLog[level](...)` calls within the core function succeed
2. **Controls Output Format**: In functions like `updateTaskById` and `updateSubtaskById`, the presence of `mcpLog` in the options triggers setting `outputFormat = 'json'` (instead of 'text'). This prevents UI elements (spinners, boxes) from being generated, which would break the JSON response.
- **Proven Solution**: This pattern has successfully fixed multiple issues in our MCP tools (including `update-task` and `update-subtask`), where direct passing of the `log` object or omitting `mcpLog` led to either runtime errors or JSON parsing failures from UI output.
- **When To Use**: Implement this wrapper in any direct function that calls a core function with an `options` object that might use `mcpLog` for logging or output format control.
- **Why it Works**: The `logWrapper` explicitly defines the `.info()`, `.warn()`, `.error()`, etc., methods that the core function's `report` helper needs, ensuring the `mcpLog[level](...)` call succeeds. It simply forwards the logging calls to the actual FastMCP `log` object.
- **Combined with Silent Mode**: Remember that using the `logWrapper` for `mcpLog` is **necessary *in addition* to using `enableSilentMode()` / `disableSilentMode()`** (see next point). The wrapper handles structured logging *within* the core function, while silent mode suppresses direct `console.log` and UI elements (spinners, boxes) that would break the MCP JSON response.
- **JSON Output**: Passing `mcpLog` (via the wrapper) often triggers the core function to use a JSON-friendly output format, suppressing spinners/boxes.
- ✅ **DO**: Implement this pattern in direct functions calling core functions that might use `mcpLog`.
6. **Silent Mode Implementation**:
- ✅ **DO**: Import silent mode utilities at the top: `import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js';`
- ✅ **DO**: Ensure core Task Master functions called from direct functions do **not** pollute `stdout` with console output (banners, spinners, logs) that would break MCP's JSON communication.
- **Preferred**: Modify the core function to accept an `outputFormat: 'json'` parameter and check it internally before printing UI elements. Pass `'json'` from the direct function.
- **Required Fallback/Guarantee**: If the core function cannot be modified or its output suppression is unreliable, **wrap the core function call** within the direct function using `enableSilentMode()` / `disableSilentMode()` in a `try/finally` block. This guarantees no console output interferes with the MCP response.
- ✅ **DO**: Use `isSilentMode()` function to check global silent mode status if needed (rare in direct functions), NEVER access the global `silentMode` variable directly.
   - ✅ **DO**: Always disable silent mode in the `finally` block.
   - ❌ **DON'T**: Wrap calls to the unified AI service (`generateTextService`, `generateObjectService`) in silent mode; their logging is handled internally via the `log` object (passed within the `logWrapper` for core functions).
   - ❌ **DON'T**: Assume a core function is silent just because it *should* be. Verify, or use the `enableSilentMode()` / `disableSilentMode()` wrapper as a guarantee.
   - **Example (Direct Function Guaranteeing Silence & using Log Wrapper)**:
```javascript
export async function coreWrapperDirect(args, log, context = {}) {
  const { session } = context;
  const tasksPath = findTasksJsonPath(args, log);
  // Create the logger wrapper (structure as defined above)
  const logWrapper = { /* ... as defined above ... */ };

  enableSilentMode(); // Ensure silence for direct console output
  try {
    // Call the core function, passing the wrapper and requesting JSON output
    const result = await coreFunction(
      tasksPath,
      args.param1,
      { mcpLog: logWrapper, session }, // Pass context
      'json' // Request JSON format if supported
    );
    return { success: true, data: result };
  } catch (error) {
    log.error(`Error: ${error.message}`);
    // Return a standardized error object
    return { success: false, error: { code: 'CORE_FUNCTION_ERROR', message: error.message } };
  } finally {
    disableSilentMode(); // Critical: Always disable in finally
  }
}
```
@@ -163,32 +147,6 @@ When implementing a new direct function in `mcp-server/src/core/direct-functions
7. **Debugging MCP/Core Logic Interaction**:
- ✅ **DO**: If an MCP tool fails with unclear errors (like JSON parsing failures), run the equivalent `task-master` CLI command in the terminal. The CLI often provides more detailed error messages originating from the core logic (e.g., `ReferenceError`, stack traces) that are obscured by the MCP layer.
### Specific Guidelines for AI-Based Direct Functions
Direct functions that interact with AI (e.g., `addTaskDirect`, `expandTaskDirect`) have additional responsibilities:
- **Context Parameter**: These functions receive an additional `context` object as their third parameter. **Critically, this object should only contain `{ session }`**. Do NOT expect or use `reportProgress` from this context.
```javascript
export async function yourAIDirect(args, log, context = {}) {
const { session } = context; // Only expect session
// ...
}
```
- **AI Client Initialization**:
- ✅ **DO**: Use the utilities from [`mcp-server/src/core/utils/ai-client-utils.js`](mdc:mcp-server/src/core/utils/ai-client-utils.js) (e.g., `getAnthropicClientForMCP(session, log)`) to get AI client instances. These correctly use the `session` object to resolve API keys.
- ✅ **DO**: Wrap client initialization in a try/catch block and return a specific `AI_CLIENT_ERROR` on failure.
- **AI Interaction**:
- ✅ **DO**: Build prompts using helper functions where appropriate (e.g., from `ai-prompt-helpers.js`).
- ✅ **DO**: Make the AI API call using appropriate helpers (e.g., `_handleAnthropicStream`). Pass the `log` object to these helpers for internal logging. **Do NOT pass `reportProgress`**.
- ✅ **DO**: Parse the AI response using helpers (e.g., `parseTaskJsonResponse`) and handle parsing errors with a specific code (e.g., `RESPONSE_PARSING_ERROR`).
- **Calling Core Logic**:
- ✅ **DO**: After successful AI interaction, call the relevant core Task Master function (from `scripts/modules/`) if needed (e.g., `addTaskDirect` calls `addTask`).
- ✅ **DO**: Pass necessary data, including potentially the parsed AI results, to the core function.
- ✅ **DO**: If the core function can produce console output, call it with an `outputFormat: 'json'` argument (or similar, depending on the function) to suppress CLI output. Ensure the core function is updated to respect this. Use `enableSilentMode/disableSilentMode` around the core function call as a fallback if `outputFormat` is not supported or insufficient.
- **Progress Indication**:
- ❌ **DON'T**: Call `reportProgress` within the direct function.
- ✅ **DO**: If intermediate progress status is needed *within* the long-running direct function, use standard logging: `log.info('Progress: Processing AI response...')`.
## Tool Definition and Execution
### Tool Structure
@@ -221,21 +179,14 @@ server.addTool({
The `execute` function receives validated arguments and the FastMCP context:
```javascript
// Standard signature
execute: async (args, context) => {
// Tool implementation
}
// Destructured signature (recommended)
execute: async (args, { log, session }) => {
// Tool implementation
}
```
- **args**: The first parameter contains all the validated parameters defined in the tool's schema.
- **context**: The second parameter is an object containing `{ log, session }` provided by FastMCP.
  - ✅ **DO**: Pass only `{ log, session }` on to direct functions.
  - ⚠️ **WARNING**: Do not pass `reportProgress` down to direct functions due to client compatibility issues. See the Progress Reporting Convention below.
### Standard Tool Execution Pattern
@@ -245,11 +196,12 @@ The `execute` method within each MCP tool (in `mcp-server/src/tools/*.js`) shoul
2. **Get Project Root**: Use the `getProjectRootFromSession(session, log)` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) to extract the project root path from the client session. Fall back to `args.projectRoot` if the session doesn't provide a root.
3. **Call Direct Function**: Invoke the corresponding `*Direct` function wrapper (e.g., `listTasksDirect` from [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js)), passing an updated `args` object that includes the resolved `projectRoot`. Crucially, the third argument (context) passed to the direct function should **only include `{ log, session }`**. **Do NOT pass `reportProgress`**.
```javascript
// Example call (applies to both AI and non-AI direct functions)
const result = await someDirectFunction(
  { ...args, projectRoot }, // Args including resolved root
  log, // MCP logger
  { session } // Context containing session
);
```
4. **Handle Result**: Receive the result object (`{ success, data/error, fromCache }`) from the `*Direct` function.
5. **Format Response**: Pass this result object to the `handleApiResult` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) for standardized MCP response formatting and error handling. A sketch of the full pattern is shown below.
@@ -288,85 +240,6 @@ execute: async (args, { log, session }) => { // Note: reportProgress is omitted
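The following is a hedged sketch of a complete `execute` implementation following steps 1–5. It assumes `getProjectRootFromSession`, `handleApiResult`, and `createErrorResponse` from `tools/utils.js` and the `listTasksDirect` wrapper from `task-master-core.js`; the exact `handleApiResult` parameter order shown is an assumption for illustration.
```javascript
import { getProjectRootFromSession, handleApiResult, createErrorResponse } from './utils.js';
import { listTasksDirect } from '../core/task-master-core.js';

// ... inside server.addTool({...})
execute: async (args, { log, session }) => {
  try {
    log.info(`Listing tasks with args: ${JSON.stringify(args)}`);

    // 2. Resolve the project root from the session, falling back to args
    let rootFolder = getProjectRootFromSession(session, log);
    if (!rootFolder && args.projectRoot) {
      rootFolder = args.projectRoot;
      log.info(`Using project root from args as fallback: ${rootFolder}`);
    }

    // 3. Call the direct function with { session } only -- no reportProgress
    const result = await listTasksDirect(
      { ...args, projectRoot: rootFolder },
      log,
      { session }
    );

    // 4 & 5. Hand { success, data/error, fromCache } to the shared formatter
    // (parameter order assumed for illustration)
    return handleApiResult(result, log, 'Error listing tasks');
  } catch (error) {
    log.error(`Tool execution failed: ${error.message}`);
    return createErrorResponse(`Tool execution failed: ${error.message}`);
  }
}
```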
### Using AsyncOperationManager for Background Tasks
For tools that execute potentially long-running operations *where the AI call is just one part* (e.g., `expand-task`, `update`), use the AsyncOperationManager. The `add-task` command, as refactored, does *not* require this in the MCP tool layer because the direct function handles the primary AI work and returns the final result synchronously from the perspective of the MCP tool.
For tools that *do* use `AsyncOperationManager`:
```javascript
import { AsyncOperationManager } from '../utils/async-operation-manager.js'; // Correct path assuming utils location
import { getProjectRootFromSession, createContentResponse, createErrorResponse } from './utils.js';
import { someIntensiveDirect } from '../core/task-master-core.js';
// ... inside server.addTool({...})
execute: async (args, { log, session }) => { // Note: reportProgress omitted
try {
log.info(`Starting background operation with args: ${JSON.stringify(args)}`);
// 1. Get Project Root
let rootFolder = getProjectRootFromSession(session, log);
if (!rootFolder && args.projectRoot) {
rootFolder = args.projectRoot;
log.info(`Using project root from args as fallback: ${rootFolder}`);
}
// Create operation description
const operationDescription = `Expanding task ${args.id}...`; // Example
// 2. Start async operation using AsyncOperationManager
const operation = AsyncOperationManager.createOperation(
operationDescription,
async (reportProgressCallback) => { // This callback is provided by AsyncOperationManager
// This runs in the background
try {
// Report initial progress *from the manager's callback*
reportProgressCallback({ progress: 0, status: 'Starting operation...' });
// Call the direct function (passing only session context)
const result = await someIntensiveDirect(
{ ...args, projectRoot: rootFolder },
log,
{ session } // Pass session, NO reportProgress
);
// Report final progress *from the manager's callback*
reportProgressCallback({
progress: 100,
status: result.success ? 'Operation completed' : 'Operation failed',
result: result.data, // Include final data if successful
error: result.error // Include error object if failed
});
return result; // Return the direct function's result
} catch (error) {
// Handle errors within the async task
reportProgressCallback({
progress: 100,
status: 'Operation failed critically',
error: { message: error.message, code: error.code || 'ASYNC_OPERATION_FAILED' }
});
throw error; // Re-throw for the manager to catch
}
}
);
// 3. Return immediate response with operation ID
return {
status: 202, // StatusCodes.ACCEPTED
body: {
success: true,
message: 'Operation started',
operationId: operation.id
}
};
} catch (error) {
log.error(`Error starting background operation: ${error.message}`);
return createErrorResponse(`Failed to start operation: ${error.message}`); // Use standard error response
}
}
```
### Project Initialization Tool
The `initialize_project` tool allows integrated clients like Cursor to set up a new Task Master project:
@@ -417,19 +290,13 @@ log.error(`Error occurred: ${error.message}`, { stack: error.stack });
log.info('Progress: 50% - AI call initiated...'); // Example progress logging
```
### Progress Reporting Convention
- ⚠️ **DEPRECATED within Direct Functions**: The `reportProgress` function passed in the `context` object should **NOT** be called from within `*Direct` functions. Doing so can cause client-side validation errors due to missing/incorrect `progressToken` handling.
- ✅ **DO**: For tools using `AsyncOperationManager`, use the `reportProgressCallback` function *provided by the manager* within the background task definition (as shown in the `AsyncOperationManager` example above) to report progress updates for the *overall operation*.
- ✅ **DO**: If finer-grained progress needs to be indicated *during* the execution of a `*Direct` function (whether called directly or via `AsyncOperationManager`), use `log.info()` statements (e.g., `log.info('Progress: Parsing AI response...')`).
### Session Usage Convention
The `session` object (destructured from `context`) contains authenticated session data and client information.
- **Authentication**: Access user-specific data (`session.userId`, etc.) if authentication is implemented.
- **Project Root**: The primary use in Task Master is accessing `session.roots` to determine the client's project root directory via the `getProjectRootFromSession` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)). See the Standard Tool Execution Pattern above.
- **Environment Variables**: The `session.env` object provides access to environment variables set in the MCP client configuration (e.g., `.cursor/mcp.json`). This is the **primary mechanism** for the unified AI service layer (`ai-services-unified.js`) to securely access **API keys** when called from MCP context (see the sketch below).
- **Capabilities**: Can be used to check client capabilities (`session.clientCapabilities`).
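As a minimal sketch of that key-resolution order (an assumption consistent with how `resolveEnvVariable` is described in these rules, not its verbatim implementation; in particular, the precedence between the two sources is assumed):
```javascript
// Assumed shape of resolveEnvVariable from scripts/modules/utils.js.
// session.env comes from the MCP client config (.cursor/mcp.json);
// process.env comes from .env via dotenv in CLI context.
function resolveEnvVariable(key, session = null) {
  return session?.env?.[key] ?? process.env[key] ?? undefined;
}

// e.g., inside ai-services-unified.js:
// const apiKey = resolveEnvVariable('ANTHROPIC_API_KEY', session);
```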
## Direct Function Wrappers (`*Direct`)
@@ -438,24 +305,25 @@ These functions, located in `mcp-server/src/core/direct-functions/`, form the co
- **Purpose**: Bridge MCP tools and core Task Master modules (`scripts/modules/*`).
- **Responsibilities**:
  - Receive `args` (including the `projectRoot` determined by the tool), the `log` object, and optionally a `context` object (containing **only** `{ session }` if needed).
  - **Find `tasks.json`**: Use `findTasksJsonPath(args, log)` from [`core/utils/path-utils.js`](mdc:mcp-server/src/core/utils/path-utils.js).
  - Validate arguments specific to the core logic.
  - **Implement Caching (if applicable)**: Use `getCachedOrExecute` from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) for read operations (see the sketch below).
  - **Call Core Logic**: Invoke the underlying function from `scripts/modules/*`.
    - Pass `outputFormat: 'json'` (or similar) if the core function might produce console output.
    - Wrap the call with `enableSilentMode()` / `disableSilentMode()` if needed.
    - Pass `{ mcpLog: logWrapper, session }` in the options object if the core logic needs it.
  - Handle errors gracefully (core logic errors, file errors) and return a standardized result object: `{ success: boolean, data?: any, error?: { code: string, message: string }, fromCache?: boolean }`.
  - ❌ **DON'T**: Call `reportProgress`. Use `log.info` for progress indication if needed.
  - ❌ **DON'T**: Initialize AI clients or call AI services directly.
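A hedged sketch of the cached read path follows. The import paths, the `getCachedOrExecute` option names (`cacheKey`, `actionFn`), and the `listTasks` signature are illustrative assumptions, not the verbatim APIs:
```javascript
import { getCachedOrExecute } from '../../tools/utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
import { listTasks } from '../../../../scripts/modules/task-manager.js';

export async function listTasksDirect(args, log, context = {}) {
  const tasksPath = findTasksJsonPath(args, log);
  // Cache key must uniquely identify the request (assumed convention)
  const cacheKey = `listTasks:${tasksPath}:${args.status || 'all'}`;

  // Assumed contract: returns the cached { success, data } when fresh,
  // otherwise runs actionFn, caches, and annotates fromCache accordingly.
  return getCachedOrExecute({
    cacheKey,
    actionFn: async () => {
      const tasks = await listTasks(tasksPath, args.status, 'json');
      return { success: true, data: tasks };
    },
    log
  });
}
```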
## Key Principles
- **Prefer Direct Function Calls**: MCP tools should always call `*Direct` wrappers instead of `executeTaskMasterCommand`.
- **Standardized Execution Flow**: Follow the pattern: MCP Tool -> `getProjectRootFromSession` -> `*Direct` Function -> Core Logic / AI Logic.
- **Path Resolution via Direct Functions**: The `*Direct` function is responsible for finding the exact `tasks.json` path using `findTasksJsonPath`, relying on the `projectRoot` passed in `args`.
- **AI Logic in Core Modules**: AI interactions (prompt building, calling the unified service) reside within the core logic functions (`scripts/modules/*`), not direct functions.
- **Silent Mode in Direct Functions**: Wrap *core function* calls (from `scripts/modules`) with `enableSilentMode()` and `disableSilentMode()` if they produce console output not handled by `outputFormat`. Do not wrap AI calls.
- **Selective Async Processing**: Use `AsyncOperationManager` in the *MCP Tool layer* for operations involving multiple steps or long waits beyond a single AI call (e.g., file processing + AI call + file writing). Simple AI calls handled entirely within the `*Direct` function (like `addTaskDirect`) may not need it at the tool layer.
- **No `reportProgress` in Direct Functions**: Do not pass or use `reportProgress` within `*Direct` functions. Use `log.info()` for internal progress or report progress from the `AsyncOperationManager` callback in the MCP tool layer.
@@ -480,7 +348,7 @@ Follow these steps to add MCP support for an existing Task Master command (see [
1. **Ensure Core Logic Exists**: Verify the core functionality is implemented and exported from the relevant module in `scripts/modules/`. Ensure the core function can suppress console output (e.g., via an `outputFormat` parameter).
2. **Create Direct Function File in `mcp-server/src/core/direct-functions/`**:
- Create a new file (e.g., `your-command.js`) using **kebab-case** naming.
   - Import necessary core functions, `findTasksJsonPath`, and silent mode utilities.
- Implement `async function yourCommandDirect(args, log, context = {})` using **camelCase** with `Direct` suffix. **Remember `context` should only contain `{ session }` if needed (for AI keys/config).**

View File

@@ -69,5 +69,4 @@ alwaysApply: true
- Update references to external docs
- Maintain links between related rules
- Document breaking changes
Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for proper rule formatting and structure.

View File

@@ -3,14 +3,13 @@ description: Comprehensive reference for Taskmaster MCP tools and CLI commands.
globs: **/*
alwaysApply: true
---
# Taskmaster Tool & Command Reference
This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Cursor, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback.
**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback.
**Important:** Several MCP tools involve AI processing and are long-running operations that may take up to a minute to complete. When using these tools, always inform users that the operation is in progress and to wait patiently for results. The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.
---
@@ -125,6 +124,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`)
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
* `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
* `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Quickly add newly identified tasks during development.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -154,7 +154,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
    *   `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -167,7 +167,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to update.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
    *   `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Refine a specific task based on new understanding or feedback. Example CLI: `task-master update-task --id='15' --prompt='Clarification: Use PostgreSQL instead of MySQL.\nUpdate schema details...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -180,7 +180,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `id`: `Required. The specific ID of the Taskmaster subtask, e.g., '15.2', you want to add information to.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. Provide the information or notes Taskmaster should append to the subtask's details. Ensure this adds *new* information not already present.` (CLI: `-p, --prompt <text>`)
    *   `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Add implementation notes, code snippets, or clarifications to a subtask during development. Before calling, review the subtask's current details to append only fresh insights, helping to build a detailed log of the implementation journey and avoid redundancy. Example CLI: `task-master update-subtask --id='15.2' --prompt='Discovered that the API requires header X.\nImplementation needs adjustment...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -216,27 +216,27 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **MCP Tool:** `expand_task`
* **CLI Command:** `task-master expand [options]`
*   **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
* **Key Parameters/Options:**
* `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
* `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
* `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
* `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
*   **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 14. Expand All Tasks (`expand_all`)
* **MCP Tool:** `expand_all`
* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
*   **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
* **Key Parameters/Options:**
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
* `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
* `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
* `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -320,7 +320,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `output`: `Where to save the complexity analysis report (default: 'scripts/task-complexity-report.json').` (CLI: `-o, --output <file>`)
* `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
    *   `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
@@ -350,9 +350,9 @@ This document provides a detailed reference for interacting with Taskmaster, cov
---
## Environment Variables Configuration

Taskmaster primarily uses the **`.taskmasterconfig`** file (in the project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
@@ -361,24 +361,17 @@ Environment variables are used **only** for sensitive API keys related to AI pro
* `PERPLEXITY_API_KEY`
* `OPENAI_API_KEY`
* `GOOGLE_API_KEY`
*   `MISTRAL_API_KEY`
*   `OPENROUTER_API_KEY`
*   `XAI_API_KEY`
*   `AZURE_OPENAI_API_KEY` (requires `azureOpenaiBaseUrl` in `.taskmasterconfig`)
*   `OLLAMA_API_KEY` (requires `ollamaBaseUrl` in `.taskmasterconfig`)
*   **Endpoints (Optional/Provider Specific, set in `.taskmasterconfig`):**
    *   `azureOpenaiBaseUrl` (e.g., `https://your-endpoint.openai.azure.com/`)
    *   `ollamaBaseUrl` (Default: `http://localhost:11434/api`)
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmasterconfig` via the `task-master models` command or the `models` MCP tool.
---
For implementation details:
* CLI commands: See [`commands.mdc`](mdc:.cursor/rules/commands.mdc)
* MCP server: See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)
* Task structure: See [`tasks.mdc`](mdc:.cursor/rules/tasks.mdc)
* Workflow: See [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc)
For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.cursor/rules/dev_workflow.mdc).

View File

@@ -3,7 +3,6 @@ description: Guidelines for implementing utility functions
globs: scripts/modules/utils.js, mcp-server/src/**/*
alwaysApply: false
---
# Utility Function Guidelines
## General Principles
@@ -85,24 +84,24 @@ Taskmaster configuration (excluding API keys) is primarily managed through the `
- **`.taskmasterconfig` File**:
- ✅ DO: Use this JSON file to store settings like AI model selections (main, research, fallback), parameters (temperature, maxTokens), logging level, default priority/subtasks, etc.
  - ✅ DO: Manage this file using the `task-master models --setup` CLI command or the `models` MCP tool.
  - ✅ DO: Rely on [`config-manager.js`](mdc:scripts/modules/config-manager.js) to load this file (using the correct project root passed from MCP or found via CLI utils), merge it with defaults, and provide validated settings.
- ❌ DON'T: Store API keys in this file.
- ❌ DON'T: Rely on the old `CONFIG` object previously defined in `utils.js`.
- ❌ DON'T: Manually edit this file unless necessary.
- **Configuration Getters (`config-manager.js`)**:
  - ✅ DO: Import and use specific getters from `config-manager.js` (e.g., `getMainProvider()`, `getLogLevel()`) to access configuration values *needed for application logic* (like `getDefaultSubtasks`).
  - ✅ DO: Pass the `explicitRoot` parameter to getters when calling from MCP direct functions to ensure the correct project's config is loaded.
- ❌ DON'T: Call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions (`scripts/modules/task-manager/*`). Instead, pass the `role` to the unified AI service.
- ❌ DON'T: Access configuration values directly from environment variables (except API keys).
- ❌ DON'T: Use the now-removed `CONFIG` object from `utils.js`.
- **API Key Handling (`utils.js` & `ai-services-unified.js`)**:
  - ✅ DO: Store API keys **only** in `.env` (for CLI, loaded by `dotenv` in `scripts/dev.js`) or `.cursor/mcp.json` (for MCP, accessed via `session.env`).
  - ✅ DO: Use `isApiKeySet(providerName, session)` from `config-manager.js` to check whether a provider's key is available before attempting an AI call if needed, noting that the unified service performs its own internal check.
  - ✅ DO: Understand that the unified service layer (`ai-services-unified.js`) internally resolves API keys using `resolveEnvVariable(key, session)` from `utils.js`.
- **Error Handling**:
  - ✅ DO: Handle potential `ConfigurationError` if the `.taskmasterconfig` file is missing or invalid when accessed via `getConfig` (e.g., in `commands.js` or direct functions).
## Logging Utilities (in `scripts/modules/utils.js`)
@@ -516,9 +515,4 @@ export {
	getCachedOrExecute
};
```
Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) and [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for more context on MCP server architecture and integration.

View File

@@ -1,20 +1,10 @@
# API Keys (Required for using in any role i.e. main/research/fallback -- see `task-master models`)
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_KEY_HERE
PERPLEXITY_API_KEY=YOUR_PERPLEXITY_KEY_HERE
OPENAI_API_KEY=YOUR_OPENAI_KEY_HERE
GOOGLE_API_KEY=YOUR_GOOGLE_KEY_HERE
MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
OLLAMA_API_KEY=YOUR_OLLAMA_KEY_HERE

View File

@@ -1,30 +1,31 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 120000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseUrl": "http://localhost:11434/api"
}
}
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 120000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}

View File

@@ -13,25 +13,22 @@ A task management system for AI-driven development with Claude, designed to work
## Configuration
Taskmaster uses two primary configuration methods:

1. **`.taskmasterconfig` File (Project Root)**

   - Stores most settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default priority/subtasks, project name.
   - **Created and managed using the `task-master models --setup` CLI command or the `models` MCP tool.**
   - Do not edit manually unless you know what you are doing.

2. **Environment Variables (`.env` file or MCP `env` block)**

   - Used **only** for sensitive **API Keys** (e.g., `ANTHROPIC_API_KEY`, `PERPLEXITY_API_KEY`, etc.) and specific endpoints (like `OLLAMA_BASE_URL`).
   - **For CLI:** Place keys in a `.env` file in your project root.
   - **For MCP/Cursor:** Place keys in the `env` section of your `.cursor/mcp.json` file (or the equivalent config for your MCP client) under the `taskmaster-ai` server definition.
**Important:** Settings like model choices, max tokens, temperature, and log level are **no longer configured via environment variables.** Use the `task-master models` command or tool.
See the [Configuration Guide](docs/configuration.md) for full details.
## Installation

View File

@@ -37,12 +37,13 @@ npm i -g task-master-ai
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 64000,
"TEMPERATURE": 0.2,
"DEFAULT_SUBTASKS": 5,
"DEFAULT_PRIORITY": "medium"
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_KEY_HERE"
}
}
}

View File

@@ -25,6 +25,7 @@
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseUrl": "http://localhost:11434/api"
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}

View File

@@ -3,8 +3,7 @@ ANTHROPIC_API_KEY=your_anthropic_api_key_here # Required: Format: sk-ant-a
PERPLEXITY_API_KEY=your_perplexity_api_key_here # Optional: Format: pplx-...
OPENAI_API_KEY=your_openai_api_key_here # Optional, for OpenAI/OpenRouter models. Format: sk-proj-...
GOOGLE_API_KEY=your_google_api_key_here # Optional, for Google Gemini models.
MISTRAL_API_KEY=your_mistral_key_here        # Optional, for Mistral AI models.
XAI_API_KEY=YOUR_XAI_KEY_HERE # Optional, for xAI AI models.
AZURE_OPENAI_API_KEY=your_azure_key_here # Optional, for Azure OpenAI models (requires endpoint in .taskmasterconfig).
OLLAMA_API_KEY=YOUR_OLLAMA_KEY_HERE # Optional, for local Ollama AI models (requires endpoint in .taskmasterconfig).

View File

@@ -16,27 +16,22 @@ In an AI-driven development process—particularly with tools like [Cursor](http
8. **Clear subtasks**—remove subtasks from specified tasks to allow regeneration or restructuring.
9. **Show task details**—display detailed information about a specific task and its subtasks.
## Configuration

Task Master configuration is managed through two primary methods:

1. **`.taskmasterconfig` File (Project Root - Primary)**

   - Stores AI model selections (`main`, `research`, `fallback`), model parameters (`maxTokens`, `temperature`), `logLevel`, `defaultSubtasks`, `defaultPriority`, `projectName`, etc.
   - Managed using the `task-master models --setup` command or the `models` MCP tool.
   - This is the main configuration file for most settings.

2. **Environment Variables (`.env` File - API Keys Only)**

   - Used **only** for sensitive **API Keys** (e.g., `ANTHROPIC_API_KEY`, `PERPLEXITY_API_KEY`).
   - Create a `.env` file in your project root for CLI usage.
   - See `assets/env.example` for the required key names.
**Important:** Settings like `MODEL`, `MAX_TOKENS`, `TEMPERATURE`, `LOG_LEVEL`, etc., are **no longer set via `.env`**. Use `task-master models --setup` instead.
## How It Works
@@ -194,21 +189,14 @@ Notes:
- Can be combined with the `expand` command to immediately generate new subtasks
- Works with both parent tasks and individual subtasks
## AI Integration
- The script now uses a unified AI service layer (`ai-services-unified.js`).
- Model selection (e.g., Claude vs. Perplexity for `--research`) is determined by the configuration in `.taskmasterconfig` based on the requested `role` (`main` or `research`).
- API keys are automatically resolved from your `.env` file (for CLI) or MCP session environment.
- To use the research capabilities (e.g., `expand --research`), ensure you have:
1. Configured a model for the `research` role using `task-master models --setup` (Perplexity models are recommended).
2. Added the corresponding API key (e.g., `PERPLEXITY_API_KEY`) to your `.env` file.
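As an illustrative sketch of that flow (the `generateTextService` call shape shown here is an assumption for documentation purposes; consult `ai-services-unified.js` for the actual signature):
```javascript
// Hypothetical sketch: core logic requests text generation by role only.
// The unified layer looks up the provider/model for that role in
// .taskmasterconfig and resolves the API key from .env (CLI) or session.env (MCP).
import { generateTextService } from './ai-services-unified.js';

async function generateSubtasksWithResearch(prompt, session) {
  return generateTextService({
    role: 'research', // 'main', 'research', or 'fallback'
    prompt,
    session // only needed in MCP context for key resolution
  });
}
```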
## Logging

View File

@@ -1,257 +0,0 @@
# AI Client Utilities for MCP Tools
This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools.
## Basic Usage with Direct Functions
```javascript
// In your direct function implementation:
import {
getAnthropicClientForMCP,
getModelConfig,
handleClaudeError
} from '../utils/ai-client-utils.js';
export async function someAiOperationDirect(args, log, context) {
try {
// Initialize Anthropic client with session from context
const client = getAnthropicClientForMCP(context.session, log);
// Get model configuration with defaults or session overrides
const modelConfig = getModelConfig(context.session);
// Make API call with proper error handling
try {
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature,
messages: [{ role: 'user', content: 'Your prompt here' }]
});
return {
success: true,
data: response
};
} catch (apiError) {
// Use helper to get user-friendly error message
const friendlyMessage = handleClaudeError(apiError);
return {
success: false,
error: {
code: 'AI_API_ERROR',
message: friendlyMessage
}
};
}
} catch (error) {
// Handle client initialization errors
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: error.message
}
};
}
}
```
## Integration with AsyncOperationManager
```javascript
// In your MCP tool implementation:
import {
AsyncOperationManager,
StatusCodes
} from '../../utils/async-operation-manager.js';
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';
export async function someAiOperation(args, context) {
const { session, mcpLog } = context;
const log = mcpLog || console;
try {
// Create operation description
const operationDescription = `AI operation: ${args.someParam}`;
// Start async operation
const operation = AsyncOperationManager.createOperation(
operationDescription,
async (reportProgress) => {
try {
// Initial progress report
reportProgress({
progress: 0,
status: 'Starting AI operation...'
});
// Call direct function with session and progress reporting
const result = await someAiOperationDirect(args, log, {
reportProgress,
mcpLog: log,
session
});
// Final progress update
reportProgress({
progress: 100,
status: result.success ? 'Operation completed' : 'Operation failed',
result: result.data,
error: result.error
});
return result;
} catch (error) {
// Handle errors in the operation
reportProgress({
progress: 100,
status: 'Operation failed',
error: {
message: error.message,
code: error.code || 'OPERATION_FAILED'
}
});
throw error;
}
}
);
// Return immediate response with operation ID
return {
status: StatusCodes.ACCEPTED,
body: {
success: true,
message: 'Operation started',
operationId: operation.id
}
};
} catch (error) {
// Handle errors in the MCP tool
log.error(`Error in someAiOperation: ${error.message}`);
return {
status: StatusCodes.INTERNAL_SERVER_ERROR,
body: {
success: false,
error: {
code: 'OPERATION_FAILED',
message: error.message
}
}
};
}
}
```
## Using Research Capabilities with Perplexity
```javascript
// In your direct function:
import {
getPerplexityClientForMCP,
getBestAvailableAIModel
} from '../utils/ai-client-utils.js';
export async function researchOperationDirect(args, log, context) {
try {
// Get the best AI model for this operation based on needs
const { type, client } = await getBestAvailableAIModel(
context.session,
{ requiresResearch: true },
log
);
// Report which model we're using
if (context.reportProgress) {
await context.reportProgress({
progress: 10,
status: `Using ${type} model for research...`
});
}
// Make API call based on the model type
if (type === 'perplexity') {
// Call Perplexity
const response = await client.chat.completions.create({
model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
messages: [{ role: 'user', content: args.researchQuery }],
temperature: 0.1
});
return {
success: true,
data: response.choices[0].message.content
};
} else {
// Call Claude as fallback
// (Implementation depends on specific needs)
// ...
}
} catch (error) {
// Handle errors
return {
success: false,
error: {
code: 'RESEARCH_ERROR',
message: error.message
}
};
}
}
```
## Model Configuration Override Example
```javascript
// In your direct function:
import { getModelConfig } from '../utils/ai-client-utils.js';
// Using custom defaults for a specific operation
const operationDefaults = {
model: 'claude-3-haiku-20240307', // Faster, smaller model
maxTokens: 1000, // Lower token limit
temperature: 0.2 // Lower temperature for more deterministic output
};
// Get model config with operation-specific defaults
const modelConfig = getModelConfig(context.session, operationDefaults);
// Now use modelConfig in your API calls
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature
// Other parameters...
});
```
## Best Practices
1. **Error Handling**:
- Always use try/catch blocks around both client initialization and API calls
- Use `handleClaudeError` to provide user-friendly error messages
- Return standardized error objects with code and message
2. **Progress Reporting**:
- Report progress at key points (starting, processing, completing)
- Include meaningful status messages
- Include error details in progress reports when failures occur
3. **Session Handling**:
- Always pass the session from the context to the AI client getters
- Use `getModelConfig` to respect user settings from session
4. **Model Selection**:
- Use `getBestAvailableAIModel` when you need to select between different models
- Set `requiresResearch: true` when you need Perplexity capabilities
5. **AsyncOperationManager Integration**:
- Create descriptive operation names
- Handle all errors within the operation function
- Return standardized results from direct functions
- Return immediate responses with operation IDs

View File

@@ -52,6 +52,9 @@ task-master show 1.2
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
# Update tasks using research role
task-master update --from=<id> --prompt="<prompt>" --research
```
## Update a Specific Task
@@ -60,7 +63,7 @@ task-master update --from=<id> --prompt="<prompt>"
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"
# Use research-backed updates
task-master update-task --id=<id> --prompt="<prompt>" --research
```
@@ -73,7 +76,7 @@ task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
# Use research-backed updates
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```
@@ -187,9 +190,12 @@ task-master fix-dependencies
## Add a New Task
```bash
# Add a new task using AI (main role)
task-master add-task --prompt="Description of the new task"
# Add a new task using AI (research role)
task-master add-task --prompt="Description of the new task" --research
# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3

View File

@@ -1,53 +1,89 @@
# Configuration
Taskmaster uses two primary methods for configuration:

1. **`.taskmasterconfig` File (Project Root - Recommended for most settings)**
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
- **Location:** Create this file in the root directory of your project.
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. Manual editing is possible but not recommended unless you understand the structure.
- **Example Structure:**
```json
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Your Project Name",
"ollamaBaseUrl": "http://localhost:11434/api",
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
}
}
```
2. **Environment Variables (`.env` file or MCP `env` block - For API Keys Only)**
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
- For CLI usage: Create a `.env` file in your project root.
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key.
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your xAI API key.
- **Optional Endpoint Overrides (environment variables; `.taskmasterconfig` offers the equivalent `azureOpenaiBaseUrl` and `ollamaBaseUrl` settings):**
- `AZURE_OPENAI_ENDPOINT`: Required if using an Azure OpenAI key.
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- `MODEL` (Default: `"claude-3-7-sonnet-20250219"`): Claude model to use (Example: `MODEL=claude-3-opus-20240229`)
- `MAX_TOKENS` (Default: `"4000"`): Maximum tokens for responses (Example: `MAX_TOKENS=8000`)
- `TEMPERATURE` (Default: `"0.7"`): Temperature for model responses (Example: `TEMPERATURE=0.5`)
- `DEBUG` (Default: `"false"`): Enable debug logging (Example: `DEBUG=true`)
- `LOG_LEVEL` (Default: `"info"`): Console output level (Example: `LOG_LEVEL=debug`)
- `DEFAULT_SUBTASKS` (Default: `"3"`): Default subtask count (Example: `DEFAULT_SUBTASKS=5`)
- `DEFAULT_PRIORITY` (Default: `"medium"`): Default priority (Example: `DEFAULT_PRIORITY=high`)
- `PROJECT_NAME` (Default: `"MCP SaaS MVP"`): Project name in metadata (Example: `PROJECT_NAME=My Awesome Project`)
- `PROJECT_VERSION` (Default: `"1.0.0"`): Version in metadata (Example: `PROJECT_VERSION=2.1.0`)
- `PERPLEXITY_API_KEY`: For research-backed features (Example: `PERPLEXITY_API_KEY=pplx-...`)
- `PERPLEXITY_MODEL` (Default: `"sonar-medium-online"`): Perplexity model (Example: `PERPLEXITY_MODEL=sonar-large-online`)
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmasterconfig`**, not environment variables.
## Example .env File
## Example `.env` File (for API Keys)
```
# Required
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
# Required API keys for providers configured in .taskmasterconfig
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# etc.
# Optional - Claude Configuration
MODEL=claude-3-7-sonnet-20250219
MAX_TOKENS=4000
TEMPERATURE=0.7
# Optional - Perplexity API for Research
PERPLEXITY_API_KEY=pplx-your-api-key
PERPLEXITY_MODEL=sonar-medium-online
# Optional - Project Info
PROJECT_NAME=My Project
PROJECT_VERSION=1.0.0
# Optional - Application Configuration
DEFAULT_SUBTASKS=3
DEFAULT_PRIORITY=medium
DEBUG=false
LOG_LEVEL=info
# Optional Endpoint Overrides
# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
```
## Troubleshooting
### Configuration Errors
- If Task Master reports errors about missing configuration or cannot find `.taskmasterconfig`, run `task-master models --setup` in your project root to create or repair the file.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid.
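For example, from the project root:
```bash
# Recreate or repair .taskmasterconfig interactively
task-master models --setup
```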
### If `task-master init` doesn't respond:
Try running it with Node directly:

View File

@@ -51,3 +51,33 @@ Can you analyze the complexity of our tasks to help me understand which ones nee
```
Can you show me the complexity report in a more readable format?
```
### Breaking Down Complex Tasks
```
Task 5 seems complex. Can you break it down into subtasks?
```
(Agent runs: `task-master expand --id=5`)
```
Please break down task 5 using research-backed generation.
```
(Agent runs: `task-master expand --id=5 --research`)
### Updating Tasks with Research
```
We need to update task 15 based on the latest React Query v5 changes. Can you research this and update the task?
```
(Agent runs: `task-master update-task --id=15 --prompt="Update based on React Query v5 changes" --research`)
### Adding Tasks with Research
```
Please add a new task to implement user profile image uploads using Cloudinary, and research the best approach.
```
(Agent runs: `task-master add-task --prompt="Implement user profile image uploads using Cloudinary" --research`)

View File

@@ -27,21 +27,22 @@ npm i -g task-master-ai
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 64000,
"TEMPERATURE": 0.2,
"DEFAULT_SUBTASKS": 5,
"DEFAULT_PRIORITY": "medium"
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_KEY_HERE"
}
}
}
}
```
2. **Enable the MCP** in your editor settings
3. **Enable the MCP** in your editor settings
3. **Prompt the AI** to initialize Task Master:
4. **Prompt the AI** to initialize Task Master:
```
Can you please initialize taskmaster-ai into my project?
@@ -53,9 +54,9 @@ The AI will:
- Set up initial configuration files
- Guide you through the rest of the process
4. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
5. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
5. **Use natural language commands** to interact with Task Master:
6. **Use natural language commands** to interact with Task Master:
```
Can you parse my PRD at scripts/prd.txt?
@@ -247,13 +248,16 @@ If during implementation, you discover that:
Tell the agent:
```
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks (from ID 4) to reflect this change?
```
The agent will execute:
```bash
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
task-master update --from=4 --prompt="Now we are using MongoDB instead of PostgreSQL."
# OR, if research is needed to find best practices for MongoDB:
task-master update --from=4 --prompt="Update to use MongoDB, researching best practices" --research
```
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
@@ -296,7 +300,7 @@ The agent will execute:
task-master expand --all
```
For research-backed subtask generation using Perplexity AI:
For research-backed subtask generation using the configured research model:
```
Please break down task 5 using research-backed generation.

View File

@@ -77,17 +77,17 @@ function _resolveApiKey(providerName, session) {
anthropic: 'ANTHROPIC_API_KEY',
google: 'GOOGLE_API_KEY',
perplexity: 'PERPLEXITY_API_KEY',
grok: 'GROK_API_KEY',
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY'
// ollama doesn't need an API key mapped here
xai: 'XAI_API_KEY',
ollama: 'OLLAMA_API_KEY'
};
if (providerName === 'ollama') {
return null; // Ollama typically doesn't require an API key for basic setup
}
// Double check this -- I have had to use an api key for ollama in the past
// if (providerName === 'ollama') {
// return null; // Ollama typically doesn't require an API key for basic setup
// }
const envVarName = keyMap[providerName];
if (!envVarName) {
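// (Editor's sketch, not part of this diff: the function presumably warns and
// returns null when no mapping exists, then resolves the key with session
// priority, e.g. `return resolveEnvVariable(envVarName, session);` --
// `resolveEnvVariable` is named elsewhere in this commit; its exact
// signature here is assumed.)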

View File

@@ -376,11 +376,11 @@ function isApiKeySet(providerName, session = null) {
anthropic: 'ANTHROPIC_API_KEY',
google: 'GOOGLE_API_KEY',
perplexity: 'PERPLEXITY_API_KEY',
grok: 'GROK_API_KEY', // Assuming GROK_API_KEY based on env.example
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY', // Azure needs endpoint too, but key presence is a start
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY'
xai: 'XAI_API_KEY',
ollama: 'OLLAMA_API_KEY'
// Add other providers as needed
};
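// (Editor's note, assumed behavior: with `ollama: 'OLLAMA_API_KEY'` mapped
// here, `isApiKeySet('ollama')` should report true only when that variable
// holds a real, non-placeholder value -- mirroring the _resolveApiKey change
// above.)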
@@ -435,10 +435,13 @@ function getMcpApiKeyStatus(providerName) {
placeholderValue = 'YOUR_ANTHROPIC_API_KEY_HERE';
break;
case 'openai':
case 'openrouter':
apiKeyToCheck = mcpEnv.OPENAI_API_KEY;
placeholderValue = 'YOUR_OPENAI_API_KEY_HERE'; // Assuming placeholder matches OPENAI
break;
case 'openrouter':
apiKeyToCheck = mcpEnv.OPENROUTER_API_KEY;
placeholderValue = 'YOUR_OPENROUTER_API_KEY_HERE';
break;
case 'google':
apiKeyToCheck = mcpEnv.GOOGLE_API_KEY;
placeholderValue = 'YOUR_GOOGLE_API_KEY_HERE';
@@ -447,13 +450,19 @@ function getMcpApiKeyStatus(providerName) {
apiKeyToCheck = mcpEnv.PERPLEXITY_API_KEY;
placeholderValue = 'YOUR_PERPLEXITY_API_KEY_HERE';
break;
case 'grok':
case 'xai':
apiKeyToCheck = mcpEnv.GROK_API_KEY;
placeholderValue = 'YOUR_GROK_API_KEY_HERE';
apiKeyToCheck = mcpEnv.XAI_API_KEY;
placeholderValue = 'YOUR_XAI_API_KEY_HERE';
break;
case 'ollama':
return true; // No key needed
case 'mistral':
apiKeyToCheck = mcpEnv.MISTRAL_API_KEY;
placeholderValue = 'YOUR_MISTRAL_API_KEY_HERE';
break;
case 'azure':
apiKeyToCheck = mcpEnv.AZURE_OPENAI_API_KEY;
placeholderValue = 'YOUR_AZURE_OPENAI_API_KEY_HERE';
break;
default:
return false; // Unknown provider
}
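// (Editor's sketch, not part of this diff: the function presumably finishes
// by comparing the resolved value against its placeholder, e.g.
//   return !!apiKeyToCheck && apiKeyToCheck !== placeholderValue;
// -- assumed, since the closing lines fall outside this hunk.)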

View File

@@ -87,7 +87,7 @@ const clients = {
gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),
openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),
perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),
grok: createClient({ provider: 'grok', apiKey: process.env.GROK_API_KEY })
grok: createClient({ provider: 'xai', apiKey: process.env.XAI_API_KEY })
};
export function getClient(model) {
@@ -364,7 +364,7 @@ function validateEnvironment(provider) {
perplexity: ['PERPLEXITY_API_KEY'],
openrouter: ['OPENROUTER_API_KEY'],
ollama: ['OLLAMA_BASE_URL'],
grok: ['GROK_API_KEY', 'GROK_BASE_URL']
xai: ['XAI_API_KEY']
};
const missing = requirements[provider]?.filter(env => !process.env[env]) || [];
@@ -642,7 +642,7 @@ When implementing the refactored research processing logic, ensure the following
```
</info added on 2025-04-20T03:55:39.633Z>
## 10. Create Comprehensive Documentation and Examples [pending]
## 10. Create Comprehensive Documentation and Examples [done]
### Dependencies: 61.6, 61.7, 61.8, 61.9
### Description: Develop comprehensive documentation for the new model management features, including examples, troubleshooting guides, and best practices.
### Details:
@@ -1720,7 +1720,7 @@ The new AI architecture introduces a clear separation between sensitive credenti
- Add a configuration validation section explaining how the system verifies settings
</info added on 2025-04-20T03:51:04.461Z>
## 33. Cleanup Old AI Service Files [pending]
## 33. Cleanup Old AI Service Files [done]
### Dependencies: 61.31, 61.32
### Description: After all other migration subtasks (refactoring, provider implementation, testing, documentation) are complete and verified, remove the old `ai-services.js` and `ai-client-factory.js` files from the `scripts/modules/` directory. Ensure no code still references them.
### Details:
@@ -2161,3 +2161,9 @@ These enhancements ensure robust validation, unified service usage, and maintain
### Details:
## 44. Add setters for temperature, max tokens on per role basis. [pending]
### Dependencies: None
### Description: NOT per model/provider basis though we could probably just define those in the .taskmasterconfig file but then they would be hard-coded. if we let users define them on a per role basis, they will define incorrect values. maybe a good middle ground is to do both - we enforce maximum using known max tokens for input and output at the .taskmasterconfig level but then we also give setters to adjust temp/input tokens/output tokens for each of the 3 roles.
### Details:
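A hedged sketch of the middle ground described in subtask 44 above (every identifier here is hypothetical; nothing in this sketch exists in the codebase yet):
```javascript
// Hypothetical sketch for subtask 44: per-role setters clamped by known
// per-model output-token maxima. All identifiers are illustrative only.
const MODEL_MAX_OUTPUT_TOKENS = {
  'claude-3-7-sonnet-20250219': 64000,
  'sonar-pro': 8700
};

function setRoleParameters(config, role, { temperature, maxTokens } = {}) {
  const roleConfig = config.models[role]; // 'main' | 'research' | 'fallback'
  if (!roleConfig) throw new Error(`Unknown role: ${role}`);
  const cap = MODEL_MAX_OUTPUT_TOKENS[roleConfig.modelId] ?? roleConfig.maxTokens;
  if (temperature !== undefined) {
    roleConfig.temperature = Math.min(Math.max(temperature, 0), 1); // clamp range assumed
  }
  if (maxTokens !== undefined) {
    roleConfig.maxTokens = Math.min(maxTokens, cap); // enforce known model maximum
  }
  return roleConfig;
}
```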

View File

@@ -2982,7 +2982,7 @@
"id": 61,
"title": "Implement Flexible AI Model Management",
"description": "Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.",
"details": "### Proposed Solution\nImplement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:\n\n- `task-master models`: Lists currently configured models for main operations and research.\n- `task-master models --set-main=\"<model_name>\" --set-research=\"<model_name>\"`: Sets the desired models for main operations and research tasks respectively.\n\nSupported AI Models:\n- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter\n- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok\n\nIf a user specifies an invalid model, the CLI lists available models clearly.\n\n### Example CLI Usage\n\nList current models:\n```shell\ntask-master models\n```\nOutput example:\n```\nCurrent AI Model Configuration:\n- Main Operations: Claude\n- Research Operations: Perplexity\n```\n\nSet new models:\n```shell\ntask-master models --set-main=\"gemini\" --set-research=\"grok\"\n```\n\nAttempt invalid model:\n```shell\ntask-master models --set-main=\"invalidModel\"\n```\nOutput example:\n```\nError: \"invalidModel\" is not a valid model.\n\nAvailable models for Main Operations:\n- claude\n- openai\n- ollama\n- gemini\n- openrouter\n```\n\n### High-Level Workflow\n1. Update CLI parsing logic to handle new `models` command and associated flags.\n2. Consolidate all AI calls into `ai-services.js` for centralized management.\n3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:\n - Claude (existing)\n - Perplexity (existing)\n - OpenAI\n - Ollama\n - Gemini\n - OpenRouter\n - Grok\n4. Update environment variables and provide clear documentation in `.env_example`:\n```env\n# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter\nMAIN_MODEL=claude\n\n# RESEARCH_MODEL options: perplexity, openai, ollama, grok\nRESEARCH_MODEL=perplexity\n```\n5. Ensure dynamic model switching via environment variables or configuration management.\n6. 
Provide clear CLI feedback and validation of model names.\n\n### Vercel AI SDK Integration\n- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.\n- Implement a configuration layer to map model names to their respective Vercel SDK integrations.\n- Example pattern for integration:\n```javascript\nimport { createClient } from '@vercel/ai';\n\nconst clients = {\n claude: createClient({ provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }),\n openai: createClient({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY }),\n ollama: createClient({ provider: 'ollama', apiKey: process.env.OLLAMA_API_KEY }),\n gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),\n openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),\n perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),\n grok: createClient({ provider: 'grok', apiKey: process.env.GROK_API_KEY })\n};\n\nexport function getClient(model) {\n if (!clients[model]) {\n throw new Error(`Invalid model: ${model}`);\n }\n return clients[model];\n}\n```\n- Leverage `generateText` and `streamText` functions from the SDK for text generation and streaming capabilities.\n- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.\n\n### Key Elements\n- Enhanced model visibility and intuitive management commands.\n- Centralized and robust handling of AI API integrations via Vercel AI SDK.\n- Clear CLI responses with detailed validation feedback.\n- Flexible, easy-to-understand environment configuration.\n\n### Implementation Considerations\n- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).\n- Ensure comprehensive error handling for invalid model selections.\n- Clearly document environment variable options and their purposes.\n- Validate model names rigorously to prevent runtime errors.\n\n### Out of Scope (Future Considerations)\n- Automatic benchmarking or model performance comparison.\n- Dynamic runtime switching of models based on task type or complexity.",
"details": "### Proposed Solution\nImplement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:\n\n- `task-master models`: Lists currently configured models for main operations and research.\n- `task-master models --set-main=\"<model_name>\" --set-research=\"<model_name>\"`: Sets the desired models for main operations and research tasks respectively.\n\nSupported AI Models:\n- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter\n- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok\n\nIf a user specifies an invalid model, the CLI lists available models clearly.\n\n### Example CLI Usage\n\nList current models:\n```shell\ntask-master models\n```\nOutput example:\n```\nCurrent AI Model Configuration:\n- Main Operations: Claude\n- Research Operations: Perplexity\n```\n\nSet new models:\n```shell\ntask-master models --set-main=\"gemini\" --set-research=\"grok\"\n```\n\nAttempt invalid model:\n```shell\ntask-master models --set-main=\"invalidModel\"\n```\nOutput example:\n```\nError: \"invalidModel\" is not a valid model.\n\nAvailable models for Main Operations:\n- claude\n- openai\n- ollama\n- gemini\n- openrouter\n```\n\n### High-Level Workflow\n1. Update CLI parsing logic to handle new `models` command and associated flags.\n2. Consolidate all AI calls into `ai-services.js` for centralized management.\n3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:\n - Claude (existing)\n - Perplexity (existing)\n - OpenAI\n - Ollama\n - Gemini\n - OpenRouter\n - Grok\n4. Update environment variables and provide clear documentation in `.env_example`:\n```env\n# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter\nMAIN_MODEL=claude\n\n# RESEARCH_MODEL options: perplexity, openai, ollama, grok\nRESEARCH_MODEL=perplexity\n```\n5. Ensure dynamic model switching via environment variables or configuration management.\n6. 
Provide clear CLI feedback and validation of model names.\n\n### Vercel AI SDK Integration\n- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.\n- Implement a configuration layer to map model names to their respective Vercel SDK integrations.\n- Example pattern for integration:\n```javascript\nimport { createClient } from '@vercel/ai';\n\nconst clients = {\n claude: createClient({ provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }),\n openai: createClient({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY }),\n ollama: createClient({ provider: 'ollama', apiKey: process.env.OLLAMA_API_KEY }),\n gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),\n openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),\n perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),\n grok: createClient({ provider: 'xai', apiKey: process.env.XAI_API_KEY })\n};\n\nexport function getClient(model) {\n if (!clients[model]) {\n throw new Error(`Invalid model: ${model}`);\n }\n return clients[model];\n}\n```\n- Leverage `generateText` and `streamText` functions from the SDK for text generation and streaming capabilities.\n- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.\n\n### Key Elements\n- Enhanced model visibility and intuitive management commands.\n- Centralized and robust handling of AI API integrations via Vercel AI SDK.\n- Clear CLI responses with detailed validation feedback.\n- Flexible, easy-to-understand environment configuration.\n\n### Implementation Considerations\n- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).\n- Ensure comprehensive error handling for invalid model selections.\n- Clearly document environment variable options and their purposes.\n- Validate model names rigorously to prevent runtime errors.\n\n### Out of Scope (Future Considerations)\n- Automatic benchmarking or model performance comparison.\n- Dynamic runtime switching of models based on task type or complexity.",
"testStrategy": "### Test Strategy\n1. **Unit Tests**:\n - Test CLI commands for listing, setting, and validating models.\n - Mock Vercel AI SDK calls to ensure proper integration and error handling.\n\n2. **Integration Tests**:\n - Validate end-to-end functionality of model management commands.\n - Test dynamic switching of models via environment variables.\n\n3. **Error Handling Tests**:\n - Simulate invalid model names and verify error messages.\n - Test API failures for each model provider and ensure graceful degradation.\n\n4. **Documentation Validation**:\n - Verify that `.env_example` and CLI usage examples are accurate and comprehensive.\n\n5. **Performance Tests**:\n - Measure response times for API calls through Vercel AI SDK.\n - Ensure no significant latency is introduced by model switching.\n\n6. **SDK-Specific Tests**:\n - Validate the behavior of `generateText` and `streamText` functions for supported models.\n - Test compatibility with serverless and edge deployments.",
"status": "in-progress",
"dependencies": [],
@@ -3015,7 +3015,7 @@
"dependencies": [
1
],
"details": "1. Install Vercel AI SDK: `npm install @vercel/ai`\n2. Create an `ai-client-factory.js` module that implements the Factory pattern\n3. Define client creation functions for each supported model (Claude, OpenAI, Ollama, Gemini, OpenRouter, Perplexity, Grok)\n4. Implement error handling for missing API keys or configuration issues\n5. Add caching mechanism to reuse existing clients\n6. Create a unified interface for all clients regardless of the underlying model\n7. Implement client validation to ensure proper initialization\n8. Testing approach: Mock API responses to test client creation and error handling\n\n<info added on 2025-04-14T23:02:30.519Z>\nHere's additional information for the client factory implementation:\n\nFor the client factory implementation:\n\n1. Structure the factory with a modular approach:\n```javascript\n// ai-client-factory.js\nimport { createOpenAI } from '@ai-sdk/openai';\nimport { createAnthropic } from '@ai-sdk/anthropic';\nimport { createGoogle } from '@ai-sdk/google';\nimport { createPerplexity } from '@ai-sdk/perplexity';\n\nconst clientCache = new Map();\n\nexport function createClientInstance(providerName, options = {}) {\n // Implementation details below\n}\n```\n\n2. For OpenAI-compatible providers (Ollama), implement specific configuration:\n```javascript\ncase 'ollama':\n const ollamaBaseUrl = process.env.OLLAMA_BASE_URL || 'http://localhost:11434';\n return createOpenAI({\n baseURL: ollamaBaseUrl,\n apiKey: 'ollama', // Ollama doesn't require a real API key\n ...options\n });\n```\n\n3. Add provider-specific model mapping:\n```javascript\n// Model mapping helper\nconst getModelForProvider = (provider, requestedModel) => {\n const modelMappings = {\n openai: {\n default: 'gpt-3.5-turbo',\n // Add other mappings\n },\n anthropic: {\n default: 'claude-3-opus-20240229',\n // Add other mappings\n },\n // Add mappings for other providers\n };\n \n return (modelMappings[provider] && modelMappings[provider][requestedModel]) \n || modelMappings[provider]?.default \n || requestedModel;\n};\n```\n\n4. Implement caching with provider+model as key:\n```javascript\nexport function getClient(providerName, model) {\n const cacheKey = `${providerName}:${model || 'default'}`;\n \n if (clientCache.has(cacheKey)) {\n return clientCache.get(cacheKey);\n }\n \n const modelName = getModelForProvider(providerName, model);\n const client = createClientInstance(providerName, { model: modelName });\n clientCache.set(cacheKey, client);\n \n return client;\n}\n```\n\n5. Add detailed environment variable validation:\n```javascript\nfunction validateEnvironment(provider) {\n const requirements = {\n openai: ['OPENAI_API_KEY'],\n anthropic: ['ANTHROPIC_API_KEY'],\n google: ['GOOGLE_API_KEY'],\n perplexity: ['PERPLEXITY_API_KEY'],\n openrouter: ['OPENROUTER_API_KEY'],\n ollama: ['OLLAMA_BASE_URL'],\n grok: ['GROK_API_KEY', 'GROK_BASE_URL']\n };\n \n const missing = requirements[provider]?.filter(env => !process.env[env]) || [];\n \n if (missing.length > 0) {\n throw new Error(`Missing environment variables for ${provider}: ${missing.join(', ')}`);\n }\n}\n```\n\n6. 
Add Jest test examples:\n```javascript\n// ai-client-factory.test.js\ndescribe('AI Client Factory', () => {\n beforeEach(() => {\n // Mock environment variables\n process.env.OPENAI_API_KEY = 'test-openai-key';\n process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';\n // Add other mocks\n });\n \n test('creates OpenAI client with correct configuration', () => {\n const client = getClient('openai');\n expect(client).toBeDefined();\n // Add assertions for client configuration\n });\n \n test('throws error when environment variables are missing', () => {\n delete process.env.OPENAI_API_KEY;\n expect(() => getClient('openai')).toThrow(/Missing environment variables/);\n });\n \n // Add tests for other providers\n});\n```\n</info added on 2025-04-14T23:02:30.519Z>",
"details": "1. Install Vercel AI SDK: `npm install @vercel/ai`\n2. Create an `ai-client-factory.js` module that implements the Factory pattern\n3. Define client creation functions for each supported model (Claude, OpenAI, Ollama, Gemini, OpenRouter, Perplexity, Grok)\n4. Implement error handling for missing API keys or configuration issues\n5. Add caching mechanism to reuse existing clients\n6. Create a unified interface for all clients regardless of the underlying model\n7. Implement client validation to ensure proper initialization\n8. Testing approach: Mock API responses to test client creation and error handling\n\n<info added on 2025-04-14T23:02:30.519Z>\nHere's additional information for the client factory implementation:\n\nFor the client factory implementation:\n\n1. Structure the factory with a modular approach:\n```javascript\n// ai-client-factory.js\nimport { createOpenAI } from '@ai-sdk/openai';\nimport { createAnthropic } from '@ai-sdk/anthropic';\nimport { createGoogle } from '@ai-sdk/google';\nimport { createPerplexity } from '@ai-sdk/perplexity';\n\nconst clientCache = new Map();\n\nexport function createClientInstance(providerName, options = {}) {\n // Implementation details below\n}\n```\n\n2. For OpenAI-compatible providers (Ollama), implement specific configuration:\n```javascript\ncase 'ollama':\n const ollamaBaseUrl = process.env.OLLAMA_BASE_URL || 'http://localhost:11434';\n return createOpenAI({\n baseURL: ollamaBaseUrl,\n apiKey: 'ollama', // Ollama doesn't require a real API key\n ...options\n });\n```\n\n3. Add provider-specific model mapping:\n```javascript\n// Model mapping helper\nconst getModelForProvider = (provider, requestedModel) => {\n const modelMappings = {\n openai: {\n default: 'gpt-3.5-turbo',\n // Add other mappings\n },\n anthropic: {\n default: 'claude-3-opus-20240229',\n // Add other mappings\n },\n // Add mappings for other providers\n };\n \n return (modelMappings[provider] && modelMappings[provider][requestedModel]) \n || modelMappings[provider]?.default \n || requestedModel;\n};\n```\n\n4. Implement caching with provider+model as key:\n```javascript\nexport function getClient(providerName, model) {\n const cacheKey = `${providerName}:${model || 'default'}`;\n \n if (clientCache.has(cacheKey)) {\n return clientCache.get(cacheKey);\n }\n \n const modelName = getModelForProvider(providerName, model);\n const client = createClientInstance(providerName, { model: modelName });\n clientCache.set(cacheKey, client);\n \n return client;\n}\n```\n\n5. Add detailed environment variable validation:\n```javascript\nfunction validateEnvironment(provider) {\n const requirements = {\n openai: ['OPENAI_API_KEY'],\n anthropic: ['ANTHROPIC_API_KEY'],\n google: ['GOOGLE_API_KEY'],\n perplexity: ['PERPLEXITY_API_KEY'],\n openrouter: ['OPENROUTER_API_KEY'],\n ollama: ['OLLAMA_BASE_URL'],\n xai: ['XAI_API_KEY']\n };\n \n const missing = requirements[provider]?.filter(env => !process.env[env]) || [];\n \n if (missing.length > 0) {\n throw new Error(`Missing environment variables for ${provider}: ${missing.join(', ')}`);\n }\n}\n```\n\n6. 
Add Jest test examples:\n```javascript\n// ai-client-factory.test.js\ndescribe('AI Client Factory', () => {\n beforeEach(() => {\n // Mock environment variables\n process.env.OPENAI_API_KEY = 'test-openai-key';\n process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';\n // Add other mocks\n });\n \n test('creates OpenAI client with correct configuration', () => {\n const client = getClient('openai');\n expect(client).toBeDefined();\n // Add assertions for client configuration\n });\n \n test('throws error when environment variables are missing', () => {\n delete process.env.OPENAI_API_KEY;\n expect(() => getClient('openai')).toThrow(/Missing environment variables/);\n });\n \n // Add tests for other providers\n});\n```\n</info added on 2025-04-14T23:02:30.519Z>",
"status": "done",
"parentTaskId": 61
},
@@ -3107,7 +3107,7 @@
9
],
"details": "1. Update README.md with new model management commands\n2. Create usage examples for all supported models\n3. Document environment variable requirements for each model\n4. Create troubleshooting guide for common issues\n5. Add performance considerations and best practices\n6. Document API key acquisition process for each supported service\n7. Create comparison chart of model capabilities and limitations\n8. Testing approach: Conduct user testing with the documentation to ensure clarity and completeness\n\n<info added on 2025-04-20T03:55:20.433Z>\n## Documentation Update for Configuration System Refactoring\n\n### Configuration System Architecture\n- Document the separation between environment variables and configuration file:\n - API keys: Sourced exclusively from environment variables (process.env or session.env)\n - All other settings: Centralized in `.taskmasterconfig` JSON file\n\n### `.taskmasterconfig` Structure\n```json\n{\n \"models\": {\n \"completion\": \"gpt-3.5-turbo\",\n \"chat\": \"gpt-4\",\n \"embedding\": \"text-embedding-ada-002\"\n },\n \"parameters\": {\n \"temperature\": 0.7,\n \"maxTokens\": 2000,\n \"topP\": 1\n },\n \"logging\": {\n \"enabled\": true,\n \"level\": \"info\"\n },\n \"defaults\": {\n \"outputFormat\": \"markdown\"\n }\n}\n```\n\n### Configuration Access Patterns\n- Document the getter functions in `config-manager.js`:\n - `getModelForRole(role)`: Returns configured model for a specific role\n - `getParameter(name)`: Retrieves model parameters\n - `getLoggingConfig()`: Access logging settings\n - Example usage: `const completionModel = getModelForRole('completion')`\n\n### Environment Variable Resolution\n- Explain the `resolveEnvVariable(key)` function:\n - Checks both process.env and session.env\n - Prioritizes session variables over process variables\n - Returns null if variable not found\n\n### Configuration Precedence\n- Document the order of precedence:\n 1. Command-line arguments (highest priority)\n 2. Session environment variables\n 3. Process environment variables\n 4. `.taskmasterconfig` settings\n 5. Hardcoded defaults (lowest priority)\n\n### Migration Guide\n- Steps for users to migrate from previous configuration approach\n- How to verify configuration is correctly loaded\n</info added on 2025-04-20T03:55:20.433Z>",
"status": "pending",
"status": "done",
"parentTaskId": 61
},
{
@@ -3337,7 +3337,7 @@
"title": "Cleanup Old AI Service Files",
"description": "After all other migration subtasks (refactoring, provider implementation, testing, documentation) are complete and verified, remove the old `ai-services.js` and `ai-client-factory.js` files from the `scripts/modules/` directory. Ensure no code still references them.",
"details": "\n\n<info added on 2025-04-22T06:51:02.444Z>\nI'll provide additional technical information to enhance the \"Cleanup Old AI Service Files\" subtask:\n\n## Implementation Details\n\n**Pre-Cleanup Verification Steps:**\n- Run a comprehensive codebase search for any remaining imports or references to `ai-services.js` and `ai-client-factory.js` using grep or your IDE's search functionality[1][4]\n- Check for any dynamic imports that might not be caught by static analysis tools\n- Verify that all dependent modules have been properly migrated to the new AI service architecture\n\n**Cleanup Process:**\n- Create a backup of the files before deletion in case rollback is needed\n- Document the file removal in the migration changelog with timestamps and specific file paths[5]\n- Update any build configuration files that might reference these files (webpack configs, etc.)\n- Run a full test suite after removal to ensure no runtime errors occur[2]\n\n**Post-Cleanup Validation:**\n- Implement automated tests to verify the application functions correctly without the removed files\n- Monitor application logs and error reporting systems for 48-72 hours after deployment to catch any missed dependencies[3]\n- Perform a final code review to ensure clean architecture principles are maintained in the new implementation\n\n**Technical Considerations:**\n- Check for any circular dependencies that might have been created during the migration process\n- Ensure proper garbage collection by removing any cached instances of the old services\n- Verify that performance metrics remain stable after the removal of legacy code\n</info added on 2025-04-22T06:51:02.444Z>",
"status": "pending",
"status": "done",
"dependencies": [
"61.31",
"61.32"
@@ -3433,6 +3433,15 @@
"status": "pending",
"dependencies": [],
"parentTaskId": 61
},
{
"id": 44,
"title": "Add setters for temperature, max tokens on per role basis.",
"description": "NOT per model/provider basis though we could probably just define those in the .taskmasterconfig file but then they would be hard-coded. if we let users define them on a per role basis, they will define incorrect values. maybe a good middle ground is to do both - we enforce maximum using known max tokens for input and output at the .taskmasterconfig level but then we also give setters to adjust temp/input tokens/output tokens for each of the 3 roles.",
"details": "",
"status": "pending",
"dependencies": [],
"parentTaskId": 61
}
]
},