diff --git a/assets/rules/ai_providers.mdc b/assets/rules/ai_providers.mdc new file mode 100644 index 00000000..d984e251 --- /dev/null +++ b/assets/rules/ai_providers.mdc @@ -0,0 +1,155 @@ +--- +description: Guidelines for managing Task Master AI providers and models. +globs: +alwaysApply: false +--- +# Task Master AI Provider Management + +This rule guides AI assistants on how to view, configure, and interact with the different AI providers and models supported by Task Master. For internal implementation details of the service layer, see [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc). + +- **Primary Interaction:** + - Use the `models` MCP tool or the `task-master models` CLI command to manage AI configurations. See [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for detailed command/tool usage. + +- **Configuration Roles:** + - Task Master uses three roles for AI models: + - `main`: Primary model for general tasks (generation, updates). + - `research`: Model used when the `--research` flag or `research: true` parameter is used (typically models with web access or specialized knowledge). + - `fallback`: Model used if the primary (`main`) model fails. + - Each role is configured with a specific `provider:modelId` pair (e.g., `openai:gpt-4o`). + +- **Viewing Configuration & Available Models:** + - To see the current model assignments for each role and list all models available for assignment: + - **MCP Tool:** `models` (call with no arguments or `listAvailableModels: true`) + - **CLI Command:** `task-master models` + - The output will show currently assigned models and a list of others, prefixed with their provider (e.g., `google:gemini-2.5-pro-exp-03-25`). + +- **Setting Models for Roles:** + - To assign a model to a role: + - **MCP Tool:** `models` with `setMain`, `setResearch`, or `setFallback` parameters. + - **CLI Command:** `task-master models` with `--set-main`, `--set-research`, or `--set-fallback` flags. + - **Crucially:** When providing the model ID to *set*, **DO NOT include the `provider:` prefix**. Use only the model ID itself. + - ✅ **DO:** `models(setMain='gpt-4o')` or `task-master models --set-main=gpt-4o` + - ❌ **DON'T:** `models(setMain='openai:gpt-4o')` or `task-master models --set-main=openai:gpt-4o` + - The tool/command will automatically determine the provider based on the model ID. + +- **Setting Custom Models (Ollama/OpenRouter):** + - To set a model ID not in the internal list for Ollama or OpenRouter: + - **MCP Tool:** Use `models` with `set` and **also** `ollama: true` or `openrouter: true`. + - Example: `models(setMain='my-custom-ollama-model', ollama=true)` + - Example: `models(setMain='some-openrouter-model', openrouter=true)` + - **CLI Command:** Use `task-master models` with `--set-` and **also** `--ollama` or `--openrouter`. + - Example: `task-master models --set-main=my-custom-ollama-model --ollama` + - Example: `task-master models --set-main=some-openrouter-model --openrouter` + - **Interactive Setup:** Use `task-master models --setup` and select the `Ollama (Enter Custom ID)` or `OpenRouter (Enter Custom ID)` options. + - **OpenRouter Validation:** When setting a custom OpenRouter model, Taskmaster attempts to validate the ID against the live OpenRouter API. + - **Ollama:** No live validation occurs for custom Ollama models; ensure the model is available on your Ollama server. + +- **Supported Providers & Required API Keys:** + - Task Master integrates with various providers via the Vercel AI SDK. 
+  - **API keys are essential** for most providers and must be configured correctly.
+  - **Key Locations** (See [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) - Configuration Management):
+    - **MCP/Cursor:** Set keys in the `env` section of `.cursor/mcp.json`.
+    - **CLI:** Set keys in a `.env` file in the project root.
+  - **Provider List & Keys:**
+    - **`anthropic`**: Requires `ANTHROPIC_API_KEY`.
+    - **`google`**: Requires `GOOGLE_API_KEY`.
+    - **`openai`**: Requires `OPENAI_API_KEY`.
+    - **`perplexity`**: Requires `PERPLEXITY_API_KEY`.
+    - **`xai`**: Requires `XAI_API_KEY`.
+    - **`mistral`**: Requires `MISTRAL_API_KEY`.
+    - **`azure`**: Requires `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT`.
+    - **`openrouter`**: Requires `OPENROUTER_API_KEY`.
+    - **`ollama`**: Might require `OLLAMA_API_KEY` (not currently supported) *and* `OLLAMA_BASE_URL` (default: `http://localhost:11434/api`). *Check specific setup.*
+
+- **Troubleshooting:**
+  - If AI commands fail (especially in MCP context):
+    1. **Verify API Key:** Ensure the correct API key for the *selected provider* (check `models` output) exists in the appropriate location (`.cursor/mcp.json` env or `.env`).
+    2. **Check Model ID:** Ensure the model ID set for the role is valid (use the `models` tool's `listAvailableModels` or `task-master models`).
+    3. **Provider Status:** Check the status of the external AI provider's service.
+    4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server.
+
+## Adding a New AI Provider (Vercel AI SDK Method)
+
+Follow these steps to integrate a new AI provider that has an official Vercel AI SDK adapter (`@ai-sdk/<provider-name>`):
+
+1. **Install Dependency:**
+   - Install the provider-specific package:
+   ```bash
+   npm install @ai-sdk/<provider-name>
+   ```
+
+2. **Create Provider Module:**
+   - Create a new file in `src/ai-providers/` named `<provider-name>.js`.
+   - Use existing modules (`openai.js`, `anthropic.js`, etc.) as a template.
+   - **Import:**
+     - Import the provider's `create<ProviderName>` function (e.g., `createOpenAI`) from `@ai-sdk/<provider-name>`.
+     - Import `generateText`, `streamText`, `generateObject` from the core `ai` package.
+     - Import the `log` utility from `../../scripts/modules/utils.js`.
+   - **Implement Core Functions:**
+     - `generateText(params)`:
+       - Accepts `params` (apiKey, modelId, messages, etc.).
+       - Instantiate the client: `const client = create<ProviderName>({ apiKey });`
+       - Call `generateText({ model: client(modelId), ... })`.
+       - Return `result.text`.
+       - Include basic validation and try/catch error handling.
+     - `streamText(params)`:
+       - Similar structure to `generateText`.
+       - Call `streamText({ model: client(modelId), ... })`.
+       - Return the full stream result object.
+       - Include basic validation and try/catch.
+     - `generateObject(params)`:
+       - Similar structure.
+       - Call `generateObject({ model: client(modelId), schema, messages, ... })`.
+       - Return `result.object`.
+       - Include basic validation and try/catch.
+   - **Export Functions:** Export the three implemented functions (`generateText`, `streamText`, `generateObject`).
+
+3. **Integrate with Unified Service:**
+   - Open `scripts/modules/ai-services-unified.js`.
+   - **Import:** Add `import * as <providerName> from '../../src/ai-providers/<provider-name>.js';`
+   - **Map:** Add an entry to the `PROVIDER_FUNCTIONS` map:
+   ```javascript
+   '<provider-name>': {
+     generateText: <providerName>.generateText,
+     streamText: <providerName>.streamText,
+     generateObject: <providerName>.generateObject
+   },
+   ```
+
+4. **Update Configuration Management:**
+   - Open `scripts/modules/config-manager.js`.
+   - **`MODEL_MAP`:** Add the new `<provider-name>` key to the `MODEL_MAP` loaded from `supported-models.json` (or ensure the loading handles new providers dynamically if `supported-models.json` is updated first).
+   - **`VALID_PROVIDERS`:** Ensure the new `<provider-name>` is included in the `VALID_PROVIDERS` array (this should happen automatically if it is derived from `MODEL_MAP` keys).
+   - **API Key Handling:**
+     - Update the `keyMap` in `_resolveApiKey` and `isApiKeySet` with the correct environment variable name (e.g., `PROVIDER_API_KEY`).
+     - Add a case for the new provider to the `switch` statement in `getMcpApiKeyStatus`, checking the corresponding key in `mcp.json` against its placeholder value (if applicable).
+   - **Ollama Exception:** If adding Ollama or another provider *not* requiring an API key, add a specific check at the beginning of `isApiKeySet` and `getMcpApiKeyStatus` to return `true` immediately for that provider.
+
+5. **Update Supported Models List:**
+   - Edit `scripts/modules/supported-models.json`.
+   - Add a new key for the `<provider-name>`.
+   - Add an array of model objects under the provider key, each including:
+     - `id`: The specific model identifier (e.g., `claude-3-opus-20240229`).
+     - `name`: A user-friendly name (optional).
+     - `swe_score`, `cost_per_1m_tokens`: (Optional) Add performance/cost data if available.
+     - `allowed_roles`: An array of roles (`"main"`, `"research"`, `"fallback"`) the model is suitable for.
+     - `max_tokens`: (Optional but recommended) The maximum token limit for the model.
+
+6. **Update Environment Examples:**
+   - Add the new `PROVIDER_API_KEY` to `.env.example`.
+   - Add the new `PROVIDER_API_KEY` with its placeholder (`YOUR_PROVIDER_API_KEY_HERE`) to the `env` section for `taskmaster-ai` in `.cursor/mcp.json.example` (if it exists) or update instructions.
+
+7. **Add Unit Tests:**
+   - Create `tests/unit/ai-providers/<provider-name>.test.js`.
+   - Mock the `@ai-sdk/<provider-name>` module and the core `ai` module functions (`generateText`, `streamText`, `generateObject`).
+   - Write tests for each exported function (`generateText`, etc.) to verify:
+     - Correct client instantiation.
+     - Correct parameters passed to the mocked Vercel AI SDK functions.
+     - Correct handling of results.
+     - Error handling (missing API key, SDK errors).
+
+8. **Documentation:**
+   - Update any relevant documentation (like `README.md` or other rules) mentioning supported providers or configuration.
+
+*(Note: For providers **without** an official Vercel AI SDK adapter, the process would involve directly using the provider's own SDK or API within the `src/ai-providers/<provider-name>.js` module and manually constructing responses compatible with the unified service layer, which is significantly more complex.)*
\ No newline at end of file
diff --git a/assets/rules/ai_services.mdc b/assets/rules/ai_services.mdc
new file mode 100644
index 00000000..1be5205c
--- /dev/null
+++ b/assets/rules/ai_services.mdc
@@ -0,0 +1,101 @@
+---
+description: Guidelines for interacting with the unified AI service layer.
+globs: scripts/modules/ai-services-unified.js, scripts/modules/task-manager/*.js, scripts/modules/commands.js
+---
+
+# AI Services Layer Guidelines
+
+This document outlines the architecture and usage patterns for interacting with Large Language Models (LLMs) via Task Master's unified AI service layer (`ai-services-unified.js`). The goal is to centralize configuration, provider selection, API key management, fallback logic, and error handling.
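+
+As background for the components below: the unified layer ultimately delegates to thin provider wrappers in `src/ai-providers/`. The following is a minimal sketch of such a wrapper, not the project's actual code; the file name, the `createOpenAI` import, and the parameter names are illustrative assumptions (see the provider guide in `ai_providers.mdc` for the real integration steps):
+
+```javascript
+// src/ai-providers/example-provider.js (illustrative sketch only)
+import { createOpenAI } from '@ai-sdk/openai'; // swap in the adapter for your provider
+import { generateText as vercelGenerateText } from 'ai';
+
+export async function generateText({ apiKey, modelId, messages, maxTokens, temperature }) {
+  if (!apiKey) throw new Error('Example provider: missing API key');
+  const client = createOpenAI({ apiKey });
+  const result = await vercelGenerateText({
+    model: client(modelId),
+    messages,
+    maxTokens, // parameter name per older AI SDK releases; newer versions may use maxOutputTokens
+    temperature
+  });
+  return result.text;
+}
+```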
+ +**Core Components:** + +* **Configuration (`.taskmasterconfig` & [`config-manager.js`](mdc:scripts/modules/config-manager.js)):** + * Defines the AI provider and model ID for different **roles** (`main`, `research`, `fallback`). + * Stores parameters like `maxTokens` and `temperature` per role. + * Managed via the `task-master models --setup` CLI command. + * [`config-manager.js`](mdc:scripts/modules/config-manager.js) provides **getters** (e.g., `getMainProvider()`, `getParametersForRole()`) to access these settings. Core logic should **only** use these getters for *non-AI related application logic* (e.g., `getDefaultSubtasks`). The unified service fetches necessary AI parameters internally based on the `role`. + * **API keys** are **NOT** stored here; they are resolved via `resolveEnvVariable` (in [`utils.js`](mdc:scripts/modules/utils.js)) from `.env` (for CLI) or the MCP `session.env` object (for MCP calls). See [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc). + +* **Unified Service (`ai-services-unified.js`):** + * Exports primary interaction functions: `generateTextService`, `generateObjectService`. (Note: `streamTextService` exists but has known reliability issues with some providers/payloads). + * Contains the core `_unifiedServiceRunner` logic. + * Internally uses `config-manager.js` getters to determine the provider/model/parameters based on the requested `role`. + * Implements the **fallback sequence** (e.g., main -> fallback -> research) if the primary provider/model fails. + * Constructs the `messages` array required by the Vercel AI SDK. + * Implements **retry logic** for specific API errors (`_attemptProviderCallWithRetries`). + * Resolves API keys automatically via `_resolveApiKey` (using `resolveEnvVariable`). + * Maps requests to the correct provider implementation (in `src/ai-providers/`) via `PROVIDER_FUNCTIONS`. + +* **Provider Implementations (`src/ai-providers/*.js`):** + * Contain provider-specific wrappers around Vercel AI SDK functions (`generateText`, `generateObject`). + +**Usage Pattern (from Core Logic like `task-manager/*.js`):** + +1. **Import Service:** Import `generateTextService` or `generateObjectService` from `../ai-services-unified.js`. + ```javascript + // Preferred for most tasks (especially with complex JSON) + import { generateTextService } from '../ai-services-unified.js'; + + // Use if structured output is reliable for the specific use case + // import { generateObjectService } from '../ai-services-unified.js'; + ``` + +2. **Prepare Parameters:** Construct the parameters object for the service call. + * `role`: **Required.** `'main'`, `'research'`, or `'fallback'`. Determines the initial provider/model/parameters used by the unified service. + * `session`: **Required if called from MCP context.** Pass the `session` object received by the direct function wrapper. The unified service uses `session.env` to find API keys. + * `systemPrompt`: Your system instruction string. + * `prompt`: The user message string (can be long, include stringified data, etc.). + * (For `generateObjectService` only): `schema` (Zod schema), `objectName`. + +3. **Call Service:** Use `await` to call the service function. + ```javascript + // Example using generateTextService (most common) + try { + const resultText = await generateTextService({ + role: useResearch ? 
'research' : 'main', // Determine role based on logic + session: context.session, // Pass session from context object + systemPrompt: "You are...", + prompt: userMessageContent + }); + // Process the raw text response (e.g., parse JSON, use directly) + // ... + } catch (error) { + // Handle errors thrown by the unified service (if all fallbacks/retries fail) + report('error', `Unified AI service call failed: ${error.message}`); + throw error; + } + + // Example using generateObjectService (use cautiously) + try { + const resultObject = await generateObjectService({ + role: 'main', + session: context.session, + schema: myZodSchema, + objectName: 'myDataObject', + systemPrompt: "You are...", + prompt: userMessageContent + }); + // resultObject is already a validated JS object + // ... + } catch (error) { + report('error', `Unified AI service call failed: ${error.message}`); + throw error; + } + ``` + +4. **Handle Results/Errors:** Process the returned text/object or handle errors thrown by the unified service layer. + +**Key Implementation Rules & Gotchas:** + +* ✅ **DO**: Centralize **all** LLM calls through `generateTextService` or `generateObjectService`. +* ✅ **DO**: Determine the appropriate `role` (`main`, `research`, `fallback`) in your core logic and pass it to the service. +* ✅ **DO**: Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context. +* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP). +* ✅ **DO**: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`). +* ✅ **DO**: Use `generateTextService` and implement robust manual JSON parsing (with Zod validation *after* parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas. +* ❌ **DON'T**: Import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files. +* ❌ **DON'T**: Initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (`task-manager/`) or MCP direct functions. +* ❌ **DON'T**: Fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters *for the AI call*. Pass the `role` instead. +* ❌ **DON'T**: Implement fallback or retry logic outside `ai-services-unified.js`. +* ❌ **DON'T**: Handle API key resolution outside the service layer (it uses `utils.js` internally). +* ⚠️ **generateObjectService Caution**: Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs. diff --git a/assets/rules/architecture.mdc b/assets/rules/architecture.mdc new file mode 100644 index 00000000..68f32ab5 --- /dev/null +++ b/assets/rules/architecture.mdc @@ -0,0 +1,285 @@ +--- +description: Describes the high-level architecture of the Task Master CLI application. +globs: scripts/modules/*.js +alwaysApply: false +--- +# Application Architecture Overview + +- **Modular Structure**: The Task Master CLI is built using a modular architecture, with distinct modules responsible for different aspects of the application. This promotes separation of concerns, maintainability, and testability. 
+ +- **Main Modules and Responsibilities**: + + - **[`commands.js`](mdc:scripts/modules/commands.js): Command Handling** + - **Purpose**: Defines and registers all CLI commands using Commander.js. + - **Responsibilities** (See also: [`commands.mdc`](mdc:.cursor/rules/commands.mdc)): + - Parses command-line arguments and options. + - Invokes appropriate core logic functions from `scripts/modules/`. + - Handles user input/output for CLI. + - Implements CLI-specific validation. + + - **[`task-manager.js`](mdc:scripts/modules/task-manager.js) & `task-manager/` directory: Task Data & Core Logic** + - **Purpose**: Contains core functions for task data manipulation (CRUD), AI interactions, and related logic. + - **Responsibilities**: + - Reading/writing `tasks.json`. + - Implementing functions for task CRUD, parsing PRDs, expanding tasks, updating status, etc. + - **Delegating AI interactions** to the `ai-services-unified.js` layer. + - Accessing non-AI configuration via `config-manager.js` getters. + - **Key Files**: Individual files within `scripts/modules/task-manager/` handle specific actions (e.g., `add-task.js`, `expand-task.js`). + + - **[`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js): Dependency Management** + - **Purpose**: Manages task dependencies. + - **Responsibilities**: Add/remove/validate/fix dependencies. + + - **[`ui.js`](mdc:scripts/modules/ui.js): User Interface Components** + - **Purpose**: Handles CLI output formatting (tables, colors, boxes, spinners). + - **Responsibilities**: Displaying tasks, reports, progress, suggestions. + + - **[`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js): Unified AI Service Layer** + - **Purpose**: Centralized interface for all LLM interactions using Vercel AI SDK. + - **Responsibilities** (See also: [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc)): + - Exports `generateTextService`, `generateObjectService`. + - Handles provider/model selection based on `role` and `.taskmasterconfig`. + - Resolves API keys (from `.env` or `session.env`). + - Implements fallback and retry logic. + - Orchestrates calls to provider-specific implementations (`src/ai-providers/`). + + - **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations** + - **Purpose**: Provider-specific wrappers for Vercel AI SDK functions. + - **Responsibilities**: Interact directly with Vercel AI SDK adapters. + + - **[`config-manager.js`](mdc:scripts/modules/config-manager.js): Configuration Management** + - **Purpose**: Loads, validates, and provides access to configuration. + - **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)): + - Reads and merges `.taskmasterconfig` with defaults. + - Provides getters (e.g., `getMainProvider`, `getLogLevel`, `getDefaultSubtasks`) for accessing settings. + - **Note**: Does **not** store or directly handle API keys (keys are in `.env` or MCP `session.env`). + + - **[`utils.js`](mdc:scripts/modules/utils.js): Core Utility Functions** + - **Purpose**: Low-level, reusable CLI utilities. + - **Responsibilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)): + - Logging (`log` function), File I/O (`readJSON`, `writeJSON`), String utils (`truncate`). + - Task utils (`findTaskById`), Dependency utils (`findCycles`). + - API Key Resolution (`resolveEnvVariable`). + - Silent Mode Control (`enableSilentMode`, `disableSilentMode`). 
+ + - **[`mcp-server/`](mdc:mcp-server/): MCP Server Integration** + - **Purpose**: Provides MCP interface using FastMCP. + - **Responsibilities** (See also: [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)): + - Registers tools (`mcp-server/src/tools/*.js`). Tool `execute` methods **should be wrapped** with the `withNormalizedProjectRoot` HOF (from `tools/utils.js`) to ensure consistent path handling. + - The HOF provides a normalized `args.projectRoot` to the `execute` method. + - Tool `execute` methods call **direct function wrappers** (`mcp-server/src/core/direct-functions/*.js`), passing the normalized `projectRoot` and other args. + - Direct functions use path utilities (`mcp-server/src/core/utils/`) to resolve paths based on `projectRoot` from session. + - Direct functions implement silent mode, logger wrappers, and call core logic functions from `scripts/modules/`. + - Manages MCP caching and response formatting. + + - **[`init.js`](mdc:scripts/init.js): Project Initialization Logic** + - **Purpose**: Sets up new Task Master project structure. + - **Responsibilities**: Creates directories, copies templates, manages `package.json`, sets up `.cursor/mcp.json`. + +- **Data Flow and Module Dependencies (Updated)**: + + - **CLI**: `bin/task-master.js` -> `scripts/dev.js` (loads `.env`) -> `scripts/modules/commands.js` -> Core Logic (`scripts/modules/*`) -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API. + - **MCP**: External Tool -> `mcp-server/server.js` -> Tool (`mcp-server/src/tools/*`) -> Direct Function (`mcp-server/src/core/direct-functions/*`) -> Core Logic (`scripts/modules/*`) -> Unified AI Service (`ai-services-unified.js`) -> Provider Adapters -> LLM API. + - **Configuration**: Core logic needing non-AI settings calls `config-manager.js` getters (passing `session.env` via `explicitRoot` if from MCP). Unified AI Service internally calls `config-manager.js` getters (using `role`) for AI params and `utils.js` (`resolveEnvVariable` with `session.env`) for API keys. + +## Silent Mode Implementation Pattern in MCP Direct Functions + +Direct functions (the `*Direct` functions in `mcp-server/src/core/direct-functions/`) need to carefully implement silent mode to prevent console logs from interfering with the structured JSON responses required by MCP. This involves both using `enableSilentMode`/`disableSilentMode` around core function calls AND passing the MCP logger via the standard wrapper pattern (see mcp.mdc). Here's the standard pattern for correct implementation: + +1. **Import Silent Mode Utilities**: + ```javascript + import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js'; + ``` + +2. **Parameter Matching with Core Functions**: + - ✅ **DO**: Ensure direct function parameters match the core function parameters + - ✅ **DO**: Check the original core function signature before implementing + - ❌ **DON'T**: Add parameters to direct functions that don't exist in core functions + ```javascript + // Example: Core function signature + // async function expandTask(tasksPath, taskId, numSubtasks, useResearch, additionalContext, options) + + // Direct function implementation - extract only parameters that exist in core + export async function expandTaskDirect(args, log, context = {}) { + // Extract parameters that match the core function + const taskId = parseInt(args.id, 10); + const numSubtasks = args.num ? 
parseInt(args.num, 10) : undefined; + const useResearch = args.research === true; + const additionalContext = args.prompt || ''; + + // Later pass these parameters in the correct order to the core function + const result = await expandTask( + tasksPath, + taskId, + numSubtasks, + useResearch, + additionalContext, + { mcpLog: log, session: context.session } + ); + } + ``` + +3. **Checking Silent Mode State**: + - ✅ **DO**: Always use `isSilentMode()` function to check current status + - ❌ **DON'T**: Directly access the global `silentMode` variable or `global.silentMode` + ```javascript + // CORRECT: Use the function to check current state + if (!isSilentMode()) { + // Only create a loading indicator if not in silent mode + loadingIndicator = startLoadingIndicator('Processing...'); + } + + // INCORRECT: Don't access global variables directly + if (!silentMode) { // ❌ WRONG + loadingIndicator = startLoadingIndicator('Processing...'); + } + ``` + +4. **Wrapping Core Function Calls**: + - ✅ **DO**: Use a try/finally block pattern to ensure silent mode is always restored + - ✅ **DO**: Enable silent mode before calling core functions that produce console output + - ✅ **DO**: Disable silent mode in a finally block to ensure it runs even if errors occur + - ❌ **DON'T**: Enable silent mode without ensuring it gets disabled + ```javascript + export async function someDirectFunction(args, log) { + try { + // Argument preparation + const tasksPath = findTasksJsonPath(args, log); + const someArg = args.someArg; + + // Enable silent mode to prevent console logs + enableSilentMode(); + + try { + // Call core function which might produce console output + const result = await someCoreFunction(tasksPath, someArg); + + // Return standardized result object + return { + success: true, + data: result, + fromCache: false + }; + } finally { + // ALWAYS disable silent mode in finally block + disableSilentMode(); + } + } catch (error) { + // Standard error handling + log.error(`Error in direct function: ${error.message}`); + return { + success: false, + error: { code: 'OPERATION_ERROR', message: error.message }, + fromCache: false + }; + } + } + ``` + +5. **Mixed Parameter and Global Silent Mode Handling**: + - For functions that need to handle both a passed `silentMode` parameter and check global state: + ```javascript + // Check both the function parameter and global state + const isSilent = options.silentMode || (typeof options.silentMode === 'undefined' && isSilentMode()); + + if (!isSilent) { + console.log('Operation starting...'); + } + ``` + +By following these patterns consistently, direct functions will properly manage console output suppression while ensuring that silent mode is always properly reset, even when errors occur. This creates a more robust system that helps prevent unexpected silent mode states that could cause logging problems in subsequent operations. 
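+
+The logger wrapper mentioned above (passed to core functions as `mcpLog`) is referenced but not shown in these patterns. A minimal sketch of one possible shape follows; the exact method names core logic expects are an assumption here, so check [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) and the existing direct functions before reusing it:
+
+```javascript
+// Illustrative only: adapt the FastMCP logger to the interface core functions expect
+const logWrapper = {
+  info: (message, ...args) => log.info(message, ...args),
+  warn: (message, ...args) => log.warn(message, ...args),
+  error: (message, ...args) => log.error(message, ...args),
+  debug: (message, ...args) => log.debug && log.debug(message, ...args),
+  success: (message, ...args) => log.info(message, ...args) // map "success" onto info
+};
+
+// Passed through the options/context argument, alongside the session:
+// await someCoreFunction(tasksPath, someArg, { mcpLog: logWrapper, session: context.session });
+```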
+ +- **Testing Architecture**: + + - **Test Organization Structure** (See also: [`tests.mdc`](mdc:.cursor/rules/tests.mdc)): + - **Unit Tests**: Located in `tests/unit/`, reflect the module structure with one test file per module + - **Integration Tests**: Located in `tests/integration/`, test interactions between modules + - **End-to-End Tests**: Located in `tests/e2e/`, test complete workflows from a user perspective + - **Test Fixtures**: Located in `tests/fixtures/`, provide reusable test data + + - **Module Design for Testability**: + - **Explicit Dependencies**: Functions accept their dependencies as parameters rather than using globals + - **Functional Style**: Pure functions with minimal side effects make testing deterministic + - **Separate Logic from I/O**: Core business logic is separated from file system operations + - **Clear Module Interfaces**: Each module has well-defined exports that can be mocked in tests + - **Callback Isolation**: Callbacks are defined as separate functions for easier testing + - **Stateless Design**: Modules avoid maintaining internal state where possible + + - **Mock Integration Patterns**: + - **External Libraries**: Libraries like `fs`, `commander`, and `@anthropic-ai/sdk` are mocked at module level + - **Internal Modules**: Application modules are mocked with appropriate spy functions + - **Testing Function Callbacks**: Callbacks are extracted from mock call arguments and tested in isolation + - **UI Elements**: Output functions from `ui.js` are mocked to verify display calls + + - **Testing Flow**: + - Module dependencies are mocked (following Jest's hoisting behavior) + - Test modules are imported after mocks are established + - Spy functions are set up on module methods + - Tests call the functions under test and verify behavior + - Mocks are reset between test cases to maintain isolation + +- **Benefits of this Architecture**: + + - **Maintainability**: Modules are self-contained and focused, making it easier to understand, modify, and debug specific features. + - **Testability**: Each module can be tested in isolation (unit testing), and interactions between modules can be tested (integration testing). + - **Mocking Support**: The clear dependency boundaries make mocking straightforward + - **Test Isolation**: Each component can be tested without affecting others + - **Callback Testing**: Function callbacks can be extracted and tested independently + - **Reusability**: Utility functions and UI components can be reused across different parts of the application. + - **Scalability**: New features can be added as new modules or by extending existing ones without significantly impacting other parts of the application. + - **Clarity**: The modular structure provides a clear separation of concerns, making the codebase easier to navigate and understand for developers. + +This architectural overview should help AI models understand the structure and organization of the Task Master CLI codebase, enabling them to more effectively assist with code generation, modification, and understanding. + +## Implementing MCP Support for a Command + +Follow these steps to add MCP support for an existing Task Master command (see [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for more detail): + +1. **Ensure Core Logic Exists**: Verify the core functionality is implemented and exported from the relevant module in `scripts/modules/`. + +2. 
**Create Direct Function File in `mcp-server/src/core/direct-functions/`:**
+   - Create a new file (e.g., `your-command.js`) using **kebab-case** naming.
+   - Import necessary core functions, **`findTasksJsonPath` from `../utils/path-utils.js`**, and **silent mode utilities**.
+   - Implement `async function yourCommandDirect(args, log)` using **camelCase** with a `Direct` suffix:
+     - **Path Resolution**: Obtain the tasks file path using `const tasksPath = findTasksJsonPath(args, log);`. This relies on `args.projectRoot` being provided.
+     - Parse other `args` and perform necessary validation.
+     - **Implement Silent Mode**: Wrap core function calls with `enableSilentMode()` and `disableSilentMode()`.
+     - Implement caching with `getCachedOrExecute` if applicable.
+     - Call core logic.
+     - Return `{ success: true/false, data/error, fromCache: boolean }`.
+   - Export the wrapper function.
+
+3. **Update `task-master-core.js` with Import/Export**: Add imports/exports for the new `*Direct` function.
+
+4. **Create MCP Tool (`mcp-server/src/tools/`)**:
+   - Create a new file (e.g., `your-command.js`) using **kebab-case**.
+   - Import `zod`, `handleApiResult`, **`getProjectRootFromSession`**, and your `yourCommandDirect` function.
+   - Implement `registerYourCommandTool(server)`.
+   - **Define parameters, making `projectRoot` optional**: `projectRoot: z.string().optional().describe(...)`.
+   - Consider if this operation should run in the background using `AsyncOperationManager`.
+   - Implement the standard `execute` method:
+     - Get `rootFolder` using `getProjectRootFromSession` (with fallback to `args.projectRoot`).
+     - Call `yourCommandDirect({ ...args, projectRoot: rootFolder }, log)` or use `asyncOperationManager.addOperation`.
+     - Pass the result to `handleApiResult`.
+
+5. **Register Tool**: Import and call `registerYourCommandTool` in `mcp-server/src/tools/index.js`.
+
+6. **Update `mcp.json`**: Add the new tool definition.
+
+## Project Initialization
+
+The `initialize_project` command provides a way to set up a new Task Master project:
+
+- **CLI Command**: `task-master init`
+- **MCP Tool**: `initialize_project`
+- **Functionality**:
+  - Creates necessary directories and files for a new project
+  - Sets up `tasks.json` and initial task files
+  - Configures project metadata (name, description, version)
+  - Handles shell alias creation if requested
+  - Works in both interactive and non-interactive modes
\ No newline at end of file
diff --git a/assets/rules/changeset.mdc b/assets/rules/changeset.mdc
new file mode 100644
index 00000000..49088bb7
--- /dev/null
+++ b/assets/rules/changeset.mdc
@@ -0,0 +1,105 @@
+---
+description: Guidelines for using Changesets (npm run changeset) to manage versioning and changelogs.
+alwaysApply: true
+---
+
+# Changesets Workflow Guidelines
+
+Changesets is used to manage package versioning and generate accurate `CHANGELOG.md` files automatically. It's crucial to use it correctly after making meaningful changes that affect the package from an external perspective or significantly impact internal development workflow documented elsewhere.
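+
+For orientation, running the command produces a small markdown file under `.changeset/`; a minimal sketch of what one looks like is below (the package name and summary are illustrative; use your actual package and change description):
+
+```md
+---
+"task-master-ai": patch
+---
+
+Fix dependency resolution bug in the 'next' command.
+```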
+ +## When to Run Changeset + +- Run `npm run changeset` (or `npx changeset add`) **after** you have staged (`git add .`) a logical set of changes that should be communicated in the next release's `CHANGELOG.md`. +- This typically includes: + - **New Features** (Backward-compatible additions) + - **Bug Fixes** (Fixes to existing functionality) + - **Breaking Changes** (Changes that are not backward-compatible) + - **Performance Improvements** (Enhancements to speed or resource usage) + - **Significant Refactoring** (Major code restructuring, even if external behavior is unchanged, as it might affect stability or maintainability) - *Such as reorganizing the MCP server's direct function implementations into separate files* + - **User-Facing Documentation Updates** (Changes to README, usage guides, public API docs) + - **Dependency Updates** (Especially if they fix known issues or introduce significant changes) + - **Build/Tooling Changes** (If they affect how consumers might build or interact with the package) +- **Every Pull Request** containing one or more of the above change types **should include a changeset file**. + +## What NOT to Add a Changeset For + +Avoid creating changesets for changes that have **no impact or relevance to external consumers** of the `task-master` package or contributors following **public-facing documentation**. Examples include: + +- **Internal Documentation Updates:** Changes *only* to files within `.cursor/rules/` that solely guide internal development practices for this specific repository. +- **Trivial Chores:** Very minor code cleanup, adding comments that don't clarify behavior, typo fixes in non-user-facing code or internal docs. +- **Non-Impactful Test Updates:** Minor refactoring of tests, adding tests for existing functionality without fixing bugs. +- **Local Configuration Changes:** Updates to personal editor settings, local `.env` files, etc. + +**Rule of Thumb:** If a user installing or using the `task-master` package wouldn't care about the change, or if a contributor following the main README wouldn't need to know about it for their workflow, you likely don't need a changeset. + +## How to Run and What It Asks + +1. **Run the command**: + ```bash + npm run changeset + # or + npx changeset add + ``` +2. **Select Packages**: It will prompt you to select the package(s) affected by your changes using arrow keys and spacebar. If this is not a monorepo, select the main package. +3. **Select Bump Type**: Choose the appropriate semantic version bump for **each** selected package: + * **`Major`**: For **breaking changes**. Use sparingly. + * **`Minor`**: For **new features**. + * **`Patch`**: For **bug fixes**, performance improvements, **user-facing documentation changes**, significant refactoring, relevant dependency updates, or impactful build/tooling changes. +4. **Enter Summary**: Provide a concise summary of the changes **for the `CHANGELOG.md`**. + * **Purpose**: This message is user-facing and explains *what* changed in the release. + * **Format**: Use the imperative mood (e.g., "Add feature X", "Fix bug Y", "Update README setup instructions"). Keep it brief, typically a single line. + * **Audience**: Think about users installing/updating the package or developers consuming its public API/CLI. + * **Not a Git Commit Message**: This summary is *different* from your detailed Git commit message. + +## Changeset Summary vs. Git Commit Message + +- **Changeset Summary**: + - **Audience**: Users/Consumers of the package (reads `CHANGELOG.md`). 
+ - **Purpose**: Briefly describe *what* changed in the released version that is relevant to them. + - **Format**: Concise, imperative mood, single line usually sufficient. + - **Example**: `Fix dependency resolution bug in 'next' command.` +- **Git Commit Message**: + - **Audience**: Developers browsing the Git history of *this* repository. + - **Purpose**: Explain *why* the change was made, the context, and the implementation details (can include internal context). + - **Format**: Follows commit conventions (e.g., Conventional Commits), can be multi-line with a subject and body. + - **Example**: + ``` + fix(deps): Correct dependency lookup in 'next' command + + The logic previously failed to account for subtask dependencies when + determining the next available task. This commit refactors the + dependency check in `findNextTask` within `task-manager.js` to + correctly traverse both direct and subtask dependencies. Added + unit tests to cover this specific scenario. + ``` +- ✅ **DO**: Provide *both* a concise changeset summary (when appropriate) *and* a detailed Git commit message. +- ❌ **DON'T**: Use your detailed Git commit message body as the changeset summary. +- ❌ **DON'T**: Skip running `changeset` for user-relevant changes just because you wrote a good commit message. + +## The `.changeset` File + +- Running the command creates a unique markdown file in the `.changeset/` directory (e.g., `.changeset/random-name.md`). +- This file contains the bump type information and the summary you provided. +- **This file MUST be staged and committed** along with your relevant code changes. + +## Standard Workflow Sequence (When a Changeset is Needed) + +1. Make your code or relevant documentation changes. +2. Stage your changes: `git add .` +3. Run changeset: `npm run changeset` + * Select package(s). + * Select bump type (`Patch`, `Minor`, `Major`). + * Enter the **concise summary** for the changelog. +4. Stage the generated changeset file: `git add .changeset/*.md` +5. Commit all staged changes (code + changeset file) using your **detailed Git commit message**: + ```bash + git commit -m "feat(module): Add new feature X..." + ``` + +## Release Process (Context) + +- The generated `.changeset/*.md` files are consumed later during the release process. +- Commands like `changeset version` read these files, update `package.json` versions, update the `CHANGELOG.md`, and delete the individual changeset files. +- Commands like `changeset publish` then publish the new versions to npm. + +Following this workflow ensures that versioning is consistent and changelogs are automatically and accurately generated based on the contributions made. diff --git a/assets/rules/commands.mdc b/assets/rules/commands.mdc new file mode 100644 index 00000000..52299e68 --- /dev/null +++ b/assets/rules/commands.mdc @@ -0,0 +1,608 @@ +--- +description: Guidelines for implementing CLI commands using Commander.js +globs: scripts/modules/commands.js +alwaysApply: false +--- + +# Command-Line Interface Implementation Guidelines + +**Note on Interaction Method:** + +While this document details the implementation of Task Master's **CLI commands**, the **preferred method for interacting with Task Master in integrated environments (like Cursor) is through the MCP server tools**. + +- **Use MCP Tools First**: Always prefer using the MCP tools (e.g., `get_tasks`, `add_task`) when interacting programmatically or via an integrated tool. They offer better performance, structured data, and richer error handling. 
See [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a comprehensive list of MCP tools and their corresponding CLI commands.
+- **CLI as Fallback/User Interface**: The `task-master` CLI commands described here are primarily intended for:
+  - Direct user interaction in the terminal.
+  - A fallback mechanism if the MCP server is unavailable or a specific functionality is not exposed via an MCP tool.
+- **Implementation Context**: This document (`commands.mdc`) focuses on the standards for *implementing* the CLI commands using Commander.js within the [`commands.js`](mdc:scripts/modules/commands.js) module.
+
+## Command Structure Standards
+
+- **Basic Command Template**:
+  ```javascript
+  // ✅ DO: Follow this structure for all commands
+  programInstance
+    .command('command-name')
+    .description('Clear, concise description of what the command does')
+    .option('-o, --option <value>', 'Option description', 'default value')
+    .option('--long-option <value>', 'Option description')
+    .action(async (options) => {
+      // Command implementation
+    });
+  ```
+
+- **Command Handler Organization**:
+  - ✅ DO: Keep action handlers concise and focused
+  - ✅ DO: Extract core functionality to appropriate modules
+  - ✅ DO: Have the action handler import and call the relevant functions from core modules, like `task-manager.js` or `init.js`, passing the parsed `options`.
+  - ✅ DO: Perform basic parameter validation, such as checking for required options, within the action handler or at the start of the called core function.
+  - ❌ DON'T: Implement business logic in command handlers
+
+## Best Practices for Removal/Delete Commands
+
+When implementing commands that delete or remove data (like `remove-task` or `remove-subtask`), follow these specific guidelines:
+
+- **Confirmation Prompts**:
+  - ✅ **DO**: Include a confirmation prompt by default for destructive operations
+  - ✅ **DO**: Provide a `--yes` or `-y` flag to skip confirmation, useful for scripting or automation
+  - ✅ **DO**: Show what will be deleted in the confirmation message
+  - ❌ **DON'T**: Perform destructive operations without user confirmation unless explicitly overridden
+
+  ```javascript
+  // ✅ DO: Include confirmation for destructive operations
+  programInstance
+    .command('remove-task')
+    .description('Remove a task or subtask permanently')
+    .option('-i, --id <id>', 'ID of the task to remove')
+    .option('-y, --yes', 'Skip confirmation prompt', false)
+    .action(async (options) => {
+      // Validation code...
+
+      if (!options.yes) {
+        const confirm = await inquirer.prompt([{
+          type: 'confirm',
+          name: 'proceed',
+          message: `Are you sure you want to permanently delete task ${taskId}? This cannot be undone.`,
+          default: false
+        }]);
+
+        if (!confirm.proceed) {
+          console.log(chalk.yellow('Operation cancelled.'));
+          return;
+        }
+      }
+
+      // Proceed with removal...
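+
+      // Illustrative sketch of the removal step itself: `readJSON`/`writeJSON` come from utils.js,
+      // and `tasksPath`/`taskId` are assumed to have been resolved in the validation code above.
+      const data = readJSON(tasksPath);
+      data.tasks = data.tasks.filter((task) => task.id !== taskId);
+      writeJSON(tasksPath, data);
+      console.log(chalk.green(`Task ${taskId} has been removed.`));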
+ }); + ``` + +- **File Path Handling**: + - ✅ **DO**: Use `path.join()` to construct file paths + - ✅ **DO**: Follow established naming conventions for tasks, like `task_001.txt` + - ✅ **DO**: Check if files exist before attempting to delete them + - ✅ **DO**: Handle file deletion errors gracefully + - ❌ **DON'T**: Construct paths with string concatenation + + ```javascript + // ✅ DO: Properly construct file paths + const taskFilePath = path.join( + path.dirname(tasksPath), + `task_${taskId.toString().padStart(3, '0')}.txt` + ); + + // ✅ DO: Check existence before deletion + if (fs.existsSync(taskFilePath)) { + try { + fs.unlinkSync(taskFilePath); + console.log(chalk.green(`Task file deleted: ${taskFilePath}`)); + } catch (error) { + console.warn(chalk.yellow(`Could not delete task file: ${error.message}`)); + } + } + ``` + +- **Clean Up References**: + - ✅ **DO**: Clean up references to the deleted item in other parts of the data + - ✅ **DO**: Handle both direct and indirect references + - ✅ **DO**: Explain what related data is being updated + - ❌ **DON'T**: Leave dangling references + + ```javascript + // ✅ DO: Clean up references when deleting items + console.log(chalk.blue('Cleaning up task dependencies...')); + let referencesRemoved = 0; + + // Update dependencies in other tasks + data.tasks.forEach(task => { + if (task.dependencies && task.dependencies.includes(taskId)) { + task.dependencies = task.dependencies.filter(depId => depId !== taskId); + referencesRemoved++; + } + }); + + if (referencesRemoved > 0) { + console.log(chalk.green(`Removed ${referencesRemoved} references to task ${taskId} from other tasks`)); + } + ``` + +- **Task File Regeneration**: + - ✅ **DO**: Regenerate task files after destructive operations + - ✅ **DO**: Pass all required parameters to generation functions + - ✅ **DO**: Provide an option to skip regeneration if needed + - ❌ **DON'T**: Assume default parameters will work + + ```javascript + // ✅ DO: Properly regenerate files after deletion + if (!options.skipGenerate) { + console.log(chalk.blue('Regenerating task files...')); + try { + // Note both parameters are explicitly provided + await generateTaskFiles(tasksPath, path.dirname(tasksPath)); + console.log(chalk.green('Task files regenerated successfully')); + } catch (error) { + console.warn(chalk.yellow(`Warning: Could not regenerate task files: ${error.message}`)); + } + } + ``` + +- **Alternative Suggestions**: + - ✅ **DO**: Suggest non-destructive alternatives when appropriate + - ✅ **DO**: Explain the difference between deletion and status changes + - ✅ **DO**: Include examples of alternative commands + + ```javascript + // ✅ DO: Suggest alternatives for destructive operations + console.log(chalk.yellow('Note: If you just want to exclude this task from active work, consider:')); + console.log(chalk.cyan(` task-master set-status --id='${taskId}' --status='cancelled'`)); + console.log(chalk.cyan(` task-master set-status --id='${taskId}' --status='deferred'`)); + console.log('This preserves the task and its history for reference.'); + ``` + +## Option Naming Conventions + +- **Command Names**: + - ✅ DO: Use kebab-case for command names (`analyze-complexity`) + - ❌ DON'T: Use camelCase for command names (`analyzeComplexity`) + - ✅ DO: Use descriptive, action-oriented names + +- **Option Names**: + - ✅ DO: Use kebab-case for long-form option names, like `--output-format` + - ✅ DO: Provide single-letter shortcuts when appropriate, like `-f, --file` + - ✅ DO: Use consistent option names across similar 
commands
+  - ❌ DON'T: Use different names for the same concept, such as `--file` in one command and `--path` in another
+
+  ```javascript
+  // ✅ DO: Use consistent option naming
+  .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
+  .option('-o, --output <dir>', 'Output directory', 'tasks')
+
+  // ❌ DON'T: Use inconsistent naming
+  .option('-f, --file <file>', 'Path to the tasks file')
+  .option('-p, --path <dir>', 'Output directory') // Should be --output
+  ```
+
+  > **Note**: Although options are defined with kebab-case, like `--num-tasks`, Commander.js stores them internally as camelCase properties. Access them in code as `options.numTasks`, not `options['num-tasks']`.
+
+- **Boolean Flag Conventions**:
+  - ✅ DO: Use positive flags with a `--skip-` prefix for disabling behavior
+  - ❌ DON'T: Use negated boolean flags with a `--no-` prefix
+  - ✅ DO: Use consistent flag handling across all commands
+
+  ```javascript
+  // ✅ DO: Use positive flag with skip- prefix
+  .option('--skip-generate', 'Skip generating task files')
+
+  // ❌ DON'T: Use --no- prefix
+  .option('--no-generate', 'Skip generating task files')
+  ```
+
+  > **Important**: When handling boolean flags in the code, make your intent clear:
+  ```javascript
+  // ✅ DO: Use clear variable naming that matches the flag's intent
+  const generateFiles = !options.skipGenerate;
+
+  // ❌ DON'T: Use confusing double negatives
+  const dontSkipGenerate = !options.skipGenerate;
+  ```
+
+## Input Validation
+
+- **Required Parameters**:
+  - ✅ DO: Check that required parameters are provided
+  - ✅ DO: Provide clear error messages when parameters are missing
+  - ✅ DO: Use early returns with `process.exit(1)` for validation failures
+
+  ```javascript
+  // ✅ DO: Validate required parameters early
+  if (!prompt) {
+    console.error(chalk.red('Error: --prompt parameter is required. Please provide a task description.'));
+    process.exit(1);
+  }
+  ```
+
+- **Parameter Type Conversion**:
+  - ✅ DO: Convert string inputs to appropriate types, such as numbers or booleans
+  - ✅ DO: Handle conversion errors gracefully
+
+  ```javascript
+  // ✅ DO: Parse numeric parameters properly
+  const fromId = parseInt(options.from, 10);
+  if (isNaN(fromId)) {
+    console.error(chalk.red('Error: --from must be a valid number'));
+    process.exit(1);
+  }
+  ```
+
+- **Enhanced Input Validation**:
+  - ✅ DO: Validate file existence for critical file operations
+  - ✅ DO: Provide context-specific validation for identifiers
+  - ✅ DO: Check required API keys for features that depend on them
+
+  ```javascript
+  // ✅ DO: Validate file existence
+  if (!fs.existsSync(tasksPath)) {
+    console.error(chalk.red(`Error: Tasks file not found at path: ${tasksPath}`));
+    if (tasksPath === 'tasks/tasks.json') {
+      console.log(chalk.yellow('Hint: Run task-master init or task-master parse-prd to create tasks.json first'));
+    } else {
+      console.log(chalk.yellow(`Hint: Check if the file path is correct: ${tasksPath}`));
+    }
+    process.exit(1);
+  }
+
+  // ✅ DO: Validate task ID
+  const taskId = parseInt(options.id, 10);
+  if (isNaN(taskId) || taskId <= 0) {
+    console.error(chalk.red(`Error: Invalid task ID: ${options.id}. 
Task ID must be a positive integer.`)); + console.log(chalk.yellow("Usage example: task-master update-task --id='23' --prompt='Update with new information.\\nEnsure proper error handling.'")); + process.exit(1); + } + + // ✅ DO: Check for required API keys + if (useResearch && !process.env.PERPLEXITY_API_KEY) { + console.log(chalk.yellow('Warning: PERPLEXITY_API_KEY environment variable is missing. Research-backed updates will not be available.')); + console.log(chalk.yellow('Falling back to Claude AI for task update.')); + } + ``` + +## User Feedback + +- **Operation Status**: + - ✅ DO: Provide clear feedback about the operation being performed + - ✅ DO: Display success or error messages after completion + - ✅ DO: Use colored output to distinguish between different message types + + ```javascript + // ✅ DO: Show operation status + console.log(chalk.blue(`Parsing PRD file: ${file}`)); + console.log(chalk.blue(`Generating ${numTasks} tasks...`)); + + try { + await parsePRD(file, outputPath, numTasks); + console.log(chalk.green('Successfully generated tasks from PRD')); + } catch (error) { + console.error(chalk.red(`Error: ${error.message}`)); + process.exit(1); + } + ``` + +- **Success Messages with Next Steps**: + - ✅ DO: Use boxen for important success messages with clear formatting + - ✅ DO: Provide suggested next steps after command completion + - ✅ DO: Include ready-to-use commands for follow-up actions + + ```javascript + // ✅ DO: Display success with next steps + console.log(boxen( + chalk.white.bold(`Subtask ${parentId}.${subtask.id} Added Successfully`) + '\n\n' + + chalk.white(`Title: ${subtask.title}`) + '\n' + + chalk.white(`Status: ${getStatusWithColor(subtask.status)}`) + '\n' + + (dependencies.length > 0 ? chalk.white(`Dependencies: ${dependencies.join(', ')}`) + '\n' : '') + + '\n' + + chalk.white.bold('Next Steps:') + '\n' + + chalk.cyan(`1. Run ${chalk.yellow(`task-master show '${parentId}'`)} to see the parent task with all subtasks`) + '\n' + + chalk.cyan(`2. 
Run ${chalk.yellow(`task-master set-status --id='${parentId}.${subtask.id}' --status='in-progress'`)} to start working on it`), + { padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1 } } + )); + ``` + +## Command Registration + +- **Command Grouping**: + - ✅ DO: Group related commands together in the code + - ✅ DO: Add related commands in a logical order + - ✅ DO: Use comments to delineate command groups + +- **Command Export**: + - ✅ DO: Export the registerCommands function + - ✅ DO: Keep the CLI setup code clean and maintainable + + ```javascript + // ✅ DO: Follow this export pattern + export { + registerCommands, + setupCLI, + runCLI, + checkForUpdate, // Include version checking functions + compareVersions, + displayUpgradeNotification + }; + ``` + +## Error Handling + +- **Exception Management**: + - ✅ DO: Wrap async operations in try/catch blocks + - ✅ DO: Display user-friendly error messages + - ✅ DO: Include detailed error information in debug mode + + ```javascript + // ✅ DO: Handle errors properly + try { + // Command implementation + } catch (error) { + console.error(chalk.red(`Error: ${error.message}`)); + + if (CONFIG.debug) { + console.error(error); + } + + process.exit(1); + } + ``` + +- **Unknown Options Handling**: + - ✅ DO: Provide clear error messages for unknown options + - ✅ DO: Show available options when an unknown option is used + - ✅ DO: Include command-specific help displays for common errors + - ❌ DON'T: Allow unknown options with `.allowUnknownOption()` + + ```javascript + // ✅ DO: Register global error handlers for unknown options + programInstance.on('option:unknown', function(unknownOption) { + const commandName = this._name || 'unknown'; + console.error(chalk.red(`Error: Unknown option '${unknownOption}'`)); + console.error(chalk.yellow(`Run 'task-master ${commandName} --help' to see available options`)); + process.exit(1); + }); + + // ✅ DO: Add command-specific help displays + function showCommandHelp() { + console.log(boxen( + chalk.white.bold('Command Help') + '\n\n' + + chalk.cyan('Usage:') + '\n' + + ` task-master command --option1= [options]\n\n` + + chalk.cyan('Options:') + '\n' + + ' --option1 Description of option1 (required)\n' + + ' --option2 Description of option2\n\n' + + chalk.cyan('Examples:') + '\n' + + ' task-master command --option1=\'value1\' --option2=\'value2\'', + { padding: 1, borderColor: 'blue', borderStyle: 'round' } + )); + } + ``` + +- **Global Error Handling**: + - ✅ DO: Set up global error handlers for uncaught exceptions + - ✅ DO: Detect and format Commander-specific errors + - ✅ DO: Provide suitable guidance for fixing common errors + + ```javascript + // ✅ DO: Set up global error handlers with helpful messages + process.on('uncaughtException', (err) => { + // Handle Commander-specific errors + if (err.code === 'commander.unknownOption') { + const option = err.message.match(/'([^']+)'/)?.[1]; // Safely extract option name + console.error(chalk.red(`Error: Unknown option '${option}'`)); + console.error(chalk.yellow("Run 'task-master --help' to see available options")); + process.exit(1); + } + + // Handle other error types... 
+ console.error(chalk.red(`Error: ${err.message}`)); + process.exit(1); + }); + ``` + +- **Contextual Error Handling**: + - ✅ DO: Provide specific error handling for common issues + - ✅ DO: Include troubleshooting hints for each error type + - ✅ DO: Use consistent error formatting across all commands + + ```javascript + // ✅ DO: Provide specific error handling with guidance + try { + // Implementation + } catch (error) { + console.error(chalk.red(`Error: ${error.message}`)); + + // Provide more helpful error messages for common issues + if (error.message.includes('task') && error.message.includes('not found')) { + console.log(chalk.yellow('\nTo fix this issue:')); + console.log(' 1. Run \'task-master list\' to see all available task IDs'); + console.log(' 2. Use a valid task ID with the --id parameter'); + } else if (error.message.includes('API key')) { + console.log(chalk.yellow('\nThis error is related to API keys. Check your environment variables.')); + } + + if (CONFIG.debug) { + console.error(error); + } + + process.exit(1); + } + ``` + +## Integration with Other Modules + +- **Import Organization**: + - ✅ DO: Group imports by module/functionality + - ✅ DO: Import only what's needed, not entire modules + - ❌ DON'T: Create circular dependencies + + ```javascript + // ✅ DO: Organize imports by module + import { program } from 'commander'; + import path from 'path'; + import chalk from 'chalk'; + import https from 'https'; + + import { CONFIG, log, readJSON } from './utils.js'; + import { displayBanner, displayHelp } from './ui.js'; + import { parsePRD, listTasks } from './task-manager.js'; + import { addDependency } from './dependency-manager.js'; + ``` + +## Subtask Management Commands + +- **Add Subtask Command Structure**: + ```javascript + // ✅ DO: Follow this structure for adding subtasks + programInstance + .command('add-subtask') + .description('Add a new subtask to a parent task or convert an existing task to a subtask') + .option('-f, --file ', 'Path to the tasks file', 'tasks/tasks.json') + .option('-p, --parent ', 'ID of the parent task (required)') + .option('-i, --task-id ', 'Existing task ID to convert to subtask') + .option('-t, --title ', 'Title for the new subtask, required if not converting') + .option('-d, --description <description>', 'Description for the new subtask, optional') + .option('--details <details>', 'Implementation details for the new subtask, optional') + .option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on') + .option('--status <status>', 'Initial status for the subtask', 'pending') + .option('--skip-generate', 'Skip regenerating task files') + .action(async (options) => { + // Validate required parameters + if (!options.parent) { + console.error(chalk.red('Error: --parent parameter is required')); + showAddSubtaskHelp(); // Show contextual help + process.exit(1); + } + + // Implementation with detailed error handling + }); + ``` + +- **Remove Subtask Command Structure**: + ```javascript + // ✅ DO: Follow this structure for removing subtasks + programInstance + .command('remove-subtask') + .description('Remove a subtask from its parent task, optionally converting it to a standalone task') + .option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json') + .option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required') + .option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting') + .option('--skip-generate', 'Skip regenerating task files') + 
.action(async (options) => { + // Implementation with detailed error handling + }) + .on('error', function(err) { + console.error(chalk.red(`Error: ${err.message}`)); + showRemoveSubtaskHelp(); // Show contextual help + process.exit(1); + }); + ``` + +## Version Checking and Updates + +- **Automatic Version Checking**: + - ✅ DO: Implement version checking to notify users of available updates + - ✅ DO: Use non-blocking version checks that don't delay command execution + - ✅ DO: Display update notifications after command completion + + ```javascript + // ✅ DO: Implement version checking function + async function checkForUpdate() { + // Implementation details... + // Example return structure: + return { currentVersion, latestVersion, updateAvailable }; + } + + // ✅ DO: Implement semantic version comparison + function compareVersions(v1, v2) { + const v1Parts = v1.split('.').map(p => parseInt(p, 10)); + const v2Parts = v2.split('.').map(p => parseInt(p, 10)); + + // Implementation details... + return result; // -1, 0, or 1 + } + + // ✅ DO: Display attractive update notifications + function displayUpgradeNotification(currentVersion, latestVersion) { + const message = boxen( + `${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}\n\n` + + `Run ${chalk.cyan('npm i task-master-ai@latest -g')} to update to the latest version with new features and bug fixes.`, + { + padding: 1, + margin: { top: 1, bottom: 1 }, + borderColor: 'yellow', + borderStyle: 'round' + } + ); + + console.log(message); + } + + // ✅ DO: Integrate version checking in CLI run function + async function runCLI(argv = process.argv) { + try { + // Start the update check in the background - don't await yet + const updateCheckPromise = checkForUpdate(); + + // Setup and parse + const programInstance = setupCLI(); + await programInstance.parseAsync(argv); + + // After command execution, check if an update is available + const updateInfo = await updateCheckPromise; + if (updateInfo.updateAvailable) { + displayUpgradeNotification(updateInfo.currentVersion, updateInfo.latestVersion); + } + } catch (error) { + // Error handling... + } + } + ``` + +Refer to [`commands.js`](mdc:scripts/modules/commands.js) for implementation examples and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines. 
+// Helper function to show add-subtask command help +function showAddSubtaskHelp() { + console.log(boxen( + chalk.white.bold('Add Subtask Command Help') + '\n\n' + + chalk.cyan('Usage:') + '\n' + + ` task-master add-subtask --parent=<id> [options]\n\n` + + chalk.cyan('Options:') + '\n' + + ' -p, --parent <id> Parent task ID (required)\n' + + ' -i, --task-id <id> Existing task ID to convert to subtask\n' + + ' -t, --title <title> Title for the new subtask\n' + + ' -d, --description <text> Description for the new subtask\n' + + ' --details <text> Implementation details for the new subtask\n' + + ' --dependencies <ids> Comma-separated list of dependency IDs\n' + + ' -s, --status <status> Status for the new subtask (default: "pending")\n' + + ' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' + + ' --skip-generate Skip regenerating task files\n\n' + + chalk.cyan('Examples:') + '\n' + + ' task-master add-subtask --parent=\'5\' --task-id=\'8\'\n' + + ' task-master add-subtask -p \'5\' -t \'Implement login UI\' -d \'Create the login form\'\n' + + ' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details $\'Handle 401 Unauthorized.\nHandle 500 Server Error.\'', + { padding: 1, borderColor: 'blue', borderStyle: 'round' } + )); +} + +// Helper function to show remove-subtask command help +function showRemoveSubtaskHelp() { + console.log(boxen( + chalk.white.bold('Remove Subtask Command Help') + '\n\n' + + chalk.cyan('Usage:') + '\n' + + ` task-master remove-subtask --id=<parentId.subtaskId> [options]\n\n` + + chalk.cyan('Options:') + '\n' + + ' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' + + ' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' + + ' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' + + ' --skip-generate Skip regenerating task files\n\n' + + chalk.cyan('Examples:') + '\n' + + ' task-master remove-subtask --id=\'5.2\'\n' + + ' task-master remove-subtask --id=\'5.2,6.3,7.1\'\n' + + ' task-master remove-subtask --id=\'5.2\' --convert', + { padding: 1, borderColor: 'blue', borderStyle: 'round' } + )); +} diff --git a/assets/rules/cursor_rules.mdc b/assets/rules/cursor_rules.mdc new file mode 100644 index 00000000..7dfae3de --- /dev/null +++ b/assets/rules/cursor_rules.mdc @@ -0,0 +1,53 @@ +--- +description: Guidelines for creating and maintaining Cursor rules to ensure consistency and effectiveness. 
+globs: .cursor/rules/*.mdc +alwaysApply: true +--- + +- **Required Rule Structure:** + ```markdown + --- + description: Clear, one-line description of what the rule enforces + globs: path/to/files/*.ext, other/path/**/* + alwaysApply: boolean + --- + + - **Main Points in Bold** + - Sub-points with details + - Examples and explanations + ``` + +- **File References:** + - Use `[filename](mdc:path/to/file)` ([filename](mdc:filename)) to reference files + - Example: [prisma.mdc](mdc:.cursor/rules/prisma.mdc) for rule references + - Example: [schema.prisma](mdc:prisma/schema.prisma) for code references + +- **Code Examples:** + - Use language-specific code blocks + ```typescript + // ✅ DO: Show good examples + const goodExample = true; + + // ❌ DON'T: Show anti-patterns + const badExample = false; + ``` + +- **Rule Content Guidelines:** + - Start with high-level overview + - Include specific, actionable requirements + - Show examples of correct implementation + - Reference existing code when possible + - Keep rules DRY by referencing other rules + +- **Rule Maintenance:** + - Update rules when new patterns emerge + - Add examples from actual codebase + - Remove outdated patterns + - Cross-reference related rules + +- **Best Practices:** + - Use bullet points for clarity + - Keep descriptions concise + - Include both DO and DON'T examples + - Reference actual code over theoretical examples + - Use consistent formatting across rules \ No newline at end of file diff --git a/assets/rules/dependencies.mdc b/assets/rules/dependencies.mdc new file mode 100644 index 00000000..541a9fee --- /dev/null +++ b/assets/rules/dependencies.mdc @@ -0,0 +1,224 @@ +--- +description: Guidelines for managing task dependencies and relationships +globs: scripts/modules/dependency-manager.js +alwaysApply: false +--- + +# Dependency Management Guidelines + +## Dependency Structure Principles + +- **Dependency References**: + - ✅ DO: Represent task dependencies as arrays of task IDs + - ✅ DO: Use numeric IDs for direct task references + - ✅ DO: Use string IDs with dot notation (e.g., "1.2") for subtask references + - ❌ DON'T: Mix reference types without proper conversion + + ```javascript + // ✅ DO: Use consistent dependency formats + // For main tasks + task.dependencies = [1, 2, 3]; // Dependencies on other main tasks + + // For subtasks + subtask.dependencies = [1, "3.2"]; // Dependency on main task 1 and subtask 2 of task 3 + ``` + +- **Subtask Dependencies**: + - ✅ DO: Allow numeric subtask IDs to reference other subtasks within the same parent + - ✅ DO: Convert between formats appropriately when needed + - ❌ DON'T: Create circular dependencies between subtasks + + ```javascript + // ✅ DO: Properly normalize subtask dependencies + // When a subtask refers to another subtask in the same parent + if (typeof depId === 'number' && depId < 100) { + // It's likely a reference to another subtask in the same parent task + const fullSubtaskId = `${parentId}.${depId}`; + // Now use fullSubtaskId for validation + } + ``` + +## Dependency Validation + +- **Existence Checking**: + - ✅ DO: Validate that referenced tasks exist before adding dependencies + - ✅ DO: Provide clear error messages for non-existent dependencies + - ✅ DO: Remove references to non-existent tasks during validation + + ```javascript + // ✅ DO: Check if the dependency exists before adding + if (!taskExists(data.tasks, formattedDependencyId)) { + log('error', `Dependency target ${formattedDependencyId} does not exist in tasks.json`); + process.exit(1); + } + 
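+  // Note: exiting the process suits the CLI context; MCP direct functions should
+  // instead return a structured error object rather than calling process.exit()
+  // (see mcp.mdc).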
``` + +- **Circular Dependency Prevention**: + - ✅ DO: Check for circular dependencies before adding new relationships + - ✅ DO: Use graph traversal algorithms (DFS) to detect cycles + - ✅ DO: Provide clear error messages explaining the circular chain + + ```javascript + // ✅ DO: Check for circular dependencies before adding + const dependencyChain = [formattedTaskId]; + if (isCircularDependency(data.tasks, formattedDependencyId, dependencyChain)) { + log('error', `Cannot add dependency ${formattedDependencyId} to task ${formattedTaskId} as it would create a circular dependency.`); + process.exit(1); + } + ``` + +- **Self-Dependency Prevention**: + - ✅ DO: Prevent tasks from depending on themselves + - ✅ DO: Handle both direct and indirect self-dependencies + + ```javascript + // ✅ DO: Prevent self-dependencies + if (String(formattedTaskId) === String(formattedDependencyId)) { + log('error', `Task ${formattedTaskId} cannot depend on itself.`); + process.exit(1); + } + ``` + +## Dependency Modification + +- **Adding Dependencies**: + - ✅ DO: Format task and dependency IDs consistently + - ✅ DO: Check for existing dependencies to prevent duplicates + - ✅ DO: Sort dependencies for better readability + + ```javascript + // ✅ DO: Format IDs consistently when adding dependencies + const formattedTaskId = typeof taskId === 'string' && taskId.includes('.') + ? taskId : parseInt(taskId, 10); + + const formattedDependencyId = formatTaskId(dependencyId); + ``` + +- **Removing Dependencies**: + - ✅ DO: Check if the dependency exists before removing + - ✅ DO: Handle different ID formats consistently + - ✅ DO: Provide feedback about the removal result + + ```javascript + // ✅ DO: Properly handle dependency removal + const dependencyIndex = targetTask.dependencies.findIndex(dep => { + // Convert both to strings for comparison + let depStr = String(dep); + + // Handle relative subtask references + if (typeof dep === 'number' && dep < 100 && isSubtask) { + const [parentId] = formattedTaskId.split('.'); + depStr = `${parentId}.${dep}`; + } + + return depStr === normalizedDependencyId; + }); + + if (dependencyIndex === -1) { + log('info', `Task ${formattedTaskId} does not depend on ${formattedDependencyId}, no changes made.`); + return; + } + + // Remove the dependency + targetTask.dependencies.splice(dependencyIndex, 1); + ``` + +## Dependency Cleanup + +- **Duplicate Removal**: + - ✅ DO: Use Set objects to identify and remove duplicates + - ✅ DO: Handle both numeric and string ID formats + + ```javascript + // ✅ DO: Remove duplicate dependencies + const uniqueDeps = new Set(); + const uniqueDependencies = task.dependencies.filter(depId => { + // Convert to string for comparison to handle both numeric and string IDs + const depIdStr = String(depId); + if (uniqueDeps.has(depIdStr)) { + log('warn', `Removing duplicate dependency from task ${task.id}: ${depId}`); + return false; + } + uniqueDeps.add(depIdStr); + return true; + }); + ``` + +- **Invalid Reference Cleanup**: + - ✅ DO: Check for and remove references to non-existent tasks + - ✅ DO: Check for and remove self-references + - ✅ DO: Track and report changes made during cleanup + + ```javascript + // ✅ DO: Filter invalid task dependencies + task.dependencies = task.dependencies.filter(depId => { + const numericId = typeof depId === 'string' ? 
parseInt(depId, 10) : depId; + if (!validTaskIds.has(numericId)) { + log('warn', `Removing invalid task dependency from task ${task.id}: ${depId} (task does not exist)`); + return false; + } + return true; + }); + ``` + +## Dependency Visualization + +- **Status Indicators**: + - ✅ DO: Use visual indicators to show dependency status (✅/⏱️) + - ✅ DO: Format dependency lists consistently + + ```javascript + // ✅ DO: Format dependencies with status indicators + function formatDependenciesWithStatus(dependencies, allTasks) { + if (!dependencies || dependencies.length === 0) { + return 'None'; + } + + return dependencies.map(depId => { + const depTask = findTaskById(allTasks, depId); + if (!depTask) return `${depId} (Not found)`; + + const isDone = depTask.status === 'done' || depTask.status === 'completed'; + const statusIcon = isDone ? '✅' : '⏱️'; + + return `${statusIcon} ${depId} (${depTask.status})`; + }).join(', '); + } + ``` + +## Cycle Detection + +- **Graph Traversal**: + - ✅ DO: Use depth-first search (DFS) for cycle detection + - ✅ DO: Track visited nodes and recursion stack + - ✅ DO: Support both task and subtask dependencies + + ```javascript + // ✅ DO: Use proper cycle detection algorithms + function findCycles(subtaskId, dependencyMap, visited = new Set(), recursionStack = new Set()) { + // Mark the current node as visited and part of recursion stack + visited.add(subtaskId); + recursionStack.add(subtaskId); + + const cyclesToBreak = []; + const dependencies = dependencyMap.get(subtaskId) || []; + + for (const depId of dependencies) { + if (!visited.has(depId)) { + const cycles = findCycles(depId, dependencyMap, visited, recursionStack); + cyclesToBreak.push(...cycles); + } + else if (recursionStack.has(depId)) { + // Found a cycle, add the edge to break + cyclesToBreak.push(depId); + } + } + + // Remove the node from recursion stack before returning + recursionStack.delete(subtaskId); + + return cyclesToBreak; + } + ``` + +Refer to [`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js) for implementation examples and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines. \ No newline at end of file diff --git a/assets/rules/dev_workflow.mdc b/assets/rules/dev_workflow.mdc new file mode 100644 index 00000000..4d430323 --- /dev/null +++ b/assets/rules/dev_workflow.mdc @@ -0,0 +1,219 @@ +--- +description: Guide for using Task Master to manage task-driven development workflows +globs: **/* +alwaysApply: true +--- +# Task Master Development Workflow + +This guide outlines the typical process for using Task Master to manage software development projects. + +## Primary Interaction: MCP Server vs. CLI + +Task Master offers two primary ways to interact: + +1. **MCP Server (Recommended for Integrated Tools)**: + - For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**. + - The MCP server exposes Task Master functionality through a set of tools (e.g., `get_tasks`, `add_subtask`). + - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing. + - Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details on the MCP architecture and available tools. + - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). + - **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change. 
+ +2. **`task-master` CLI (For Users & Fallback)**: + - The global `task-master` command provides a user-friendly interface for direct terminal interaction. + - It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP. + - Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`. + - The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`). + - Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a detailed command reference. + +## Standard Development Workflow Process + +- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json +- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs +- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks +- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Select tasks based on dependencies (all marked 'done'), priority level, and ID order +- Clarify tasks by checking task files in tasks/ directory or asking for user input +- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements +- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`. +- Clear existing subtasks if needed using `clear_subtasks` / `task-master clear-subtasks --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before regenerating +- Implement code following task details, dependencies, and project standards +- Verify tasks according to test strategies before marking as complete (See [`tests.mdc`](mdc:.cursor/rules/tests.mdc)) +- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) +- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) +- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..." --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Add new subtasks as needed using `add_subtask` / `task-master add-subtask --parent=<id> --title="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Append notes or details to subtasks using `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='Add implementation notes here...\nMore details...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). 
+- Generate task files with `generate` / `task-master generate` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) after updating tasks.json +- Maintain valid dependency structure with `add_dependency`/`remove_dependency` tools or `task-master add-dependency`/`remove-dependency` commands, `validate_dependencies` / `task-master validate-dependencies`, and `fix_dependencies` / `task-master fix-dependencies` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) when needed +- Respect dependency chains and task priorities when selecting work +- Report progress regularly using `get_tasks` / `task-master list` + +## Task Complexity Analysis + +- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis +- Review complexity report via `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for a formatted, readable version. +- Focus on tasks with highest complexity scores (8-10) for detailed breakdown +- Use analysis results to determine appropriate subtask allocation +- Note that reports are automatically used by the `expand_task` tool/command + +## Task Breakdown Process + +- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks. +- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations. +- Add `--research` flag to leverage Perplexity AI for research-backed expansion. +- Add `--force` flag to clear existing subtasks before generating new ones (default is to append). +- Use `--prompt="<context>"` to provide additional context when needed. +- Review and adjust generated subtasks as necessary. +- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`. +- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`. + +## Implementation Drift Handling + +- When implementation differs significantly from planned approach +- When future tasks need modification due to current implementation choices +- When new dependencies or requirements emerge +- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks. +- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task. 
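+- For example, a hypothetical drift-handling session run from the CLI (task IDs and prompt text are illustrative):
+
+  ```bash
+  # A design change affects several not-yet-started tasks: update everything from task 10 onward
+  task-master update --from='10' --prompt='Switched from REST to GraphQL for the API layer' --research
+
+  # Only a single task is affected by a smaller change
+  task-master update-task --id='12' --prompt='Auth tokens are now issued by the gateway service' --research
+  ```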
+ +## Task Status Management + +- Use 'pending' for tasks ready to be worked on +- Use 'done' for completed and verified tasks +- Use 'deferred' for postponed tasks +- Add custom status values as needed for project-specific workflows + +## Task Structure Fields + +- **id**: Unique identifier for the task (Example: `1`, `1.1`) +- **title**: Brief, descriptive title (Example: `"Initialize Repo"`) +- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`) +- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`) +- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`) + - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending) + - This helps quickly identify which prerequisite tasks are blocking work +- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`) +- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`) +- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`) +- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`) +- Refer to task structure details (previously linked to `tasks.mdc`). + +## Configuration Management (Updated) + +Taskmaster configuration is managed through two main mechanisms: + +1. **`.taskmasterconfig` File (Primary):** + * Located in the project root directory. + * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc. + * **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing. + * **View/Set specific models via `task-master models` command or `models` MCP tool.** + * Created automatically when you run `task-master models --setup` for the first time. + +2. **Environment Variables (`.env` / `mcp.json`):** + * Used **only** for sensitive API keys and specific endpoint URLs. + * Place API keys (one per provider) in a `.env` file in the project root for CLI usage. + * For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`. + * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`). + +**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool. +**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`. +**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project. + +## Determining the Next Task + +- Run `next_task` / `task-master next` to show the next task to work on. 
+- The command identifies tasks with all dependencies satisfied +- Tasks are prioritized by priority level, dependency count, and ID +- The command shows comprehensive task information including: + - Basic task details and description + - Implementation details + - Subtasks (if they exist) + - Contextual suggested actions +- Recommended before starting any new development work +- Respects your project's dependency structure +- Ensures tasks are completed in the appropriate sequence +- Provides ready-to-use commands for common task actions + +## Viewing Specific Task Details + +- Run `get_task` / `task-master show <id>` to view a specific task. +- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1) +- Displays comprehensive information similar to the next command, but for a specific task +- For parent tasks, shows all subtasks and their current status +- For subtasks, shows parent task information and relationship +- Provides contextual suggested actions appropriate for the specific task +- Useful for examining task details before implementation or checking status + +## Managing Task Dependencies + +- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency. +- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency. +- The system prevents circular dependencies and duplicate dependency entries +- Dependencies are checked for existence before being added or removed +- Task files are automatically regenerated after dependency changes +- Dependencies are visualized with status indicators in task listings and files + +## Iterative Subtask Implementation + +Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation: + +1. **Understand the Goal (Preparation):** + * Use `get_task` / `task-master show <subtaskId>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to thoroughly understand the specific goals and requirements of the subtask. + +2. **Initial Exploration & Planning (Iteration 1):** + * This is the first attempt at creating a concrete implementation plan. + * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification. + * Determine the intended code changes (diffs) and their locations. + * Gather *all* relevant details from this exploration phase. + +3. **Log the Plan:** + * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`. + * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`. + +4. **Verify the Plan:** + * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details. + +5. **Begin Implementation:** + * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`. + * Start coding based on the logged plan. + +6. **Refine and Log Progress (Iteration 2+):** + * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches. 
+ * **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy. + * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What didn't work...'` to append new findings. + * **Crucially, log:** + * What worked ("fundamental truths" discovered). + * What didn't work and why (to avoid repeating mistakes). + * Specific code snippets or configurations that were successful. + * Decisions made, especially if confirmed with user input. + * Any deviations from the initial plan and the reasoning. + * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors. + +7. **Review & Update Rules (Post-Implementation):** + * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history. + * Identify any new or modified code patterns, conventions, or best practices established during the implementation. + * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`). + +8. **Mark Task Complete:** + * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`. + +9. **Commit Changes (If using Git):** + * Stage the relevant code changes and any updated/new rule files (`git add .`). + * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments. + * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`). + * Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one. + +10. **Proceed to Next Subtask:** + * Identify the next subtask (e.g., using `next_task` / `task-master next`). + +## Code Analysis & Refactoring Techniques + +- **Top-Level Function Search**: + - Useful for understanding module structure or planning refactors. + - Use grep/ripgrep to find exported functions/constants: + `rg "export (async function|function|const) \w+"` or similar patterns. + - Can help compare functions between files during migrations or identify potential naming conflicts. + +--- +*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.* \ No newline at end of file diff --git a/assets/rules/glossary.mdc b/assets/rules/glossary.mdc new file mode 100644 index 00000000..a8a48041 --- /dev/null +++ b/assets/rules/glossary.mdc @@ -0,0 +1,26 @@ +--- +description: Glossary of other Cursor rules +globs: **/* +alwaysApply: true +--- + +# Glossary of Task Master Cursor Rules + +This file provides a quick reference to the purpose of each rule file located in the `.cursor/rules` directory. + +- **[`architecture.mdc`](mdc:.cursor/rules/architecture.mdc)**: Describes the high-level architecture of the Task Master CLI application. 
+- **[`changeset.mdc`](mdc:.cursor/rules/changeset.mdc)**: Guidelines for using Changesets (npm run changeset) to manage versioning and changelogs. +- **[`commands.mdc`](mdc:.cursor/rules/commands.mdc)**: Guidelines for implementing CLI commands using Commander.js. +- **[`cursor_rules.mdc`](mdc:.cursor/rules/cursor_rules.mdc)**: Guidelines for creating and maintaining Cursor rules to ensure consistency and effectiveness. +- **[`dependencies.mdc`](mdc:.cursor/rules/dependencies.mdc)**: Guidelines for managing task dependencies and relationships. +- **[`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc)**: Guide for using Task Master to manage task-driven development workflows. +- **[`glossary.mdc`](mdc:.cursor/rules/glossary.mdc)**: This file; provides a glossary of other Cursor rules. +- **[`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)**: Guidelines for implementing and interacting with the Task Master MCP Server. +- **[`new_features.mdc`](mdc:.cursor/rules/new_features.mdc)**: Guidelines for integrating new features into the Task Master CLI. +- **[`self_improve.mdc`](mdc:.cursor/rules/self_improve.mdc)**: Guidelines for continuously improving Cursor rules based on emerging code patterns and best practices. +- **[`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)**: Comprehensive reference for Taskmaster MCP tools and CLI commands. +- **[`tasks.mdc`](mdc:.cursor/rules/tasks.mdc)**: Guidelines for implementing task management operations. +- **[`tests.mdc`](mdc:.cursor/rules/tests.mdc)**: Guidelines for implementing and maintaining tests for Task Master CLI. +- **[`ui.mdc`](mdc:.cursor/rules/ui.mdc)**: Guidelines for implementing and maintaining user interface components. +- **[`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)**: Guidelines for implementing utility functions. + diff --git a/assets/rules/mcp.mdc b/assets/rules/mcp.mdc new file mode 100644 index 00000000..ebacd578 --- /dev/null +++ b/assets/rules/mcp.mdc @@ -0,0 +1,524 @@ +--- +description: Guidelines for implementing and interacting with the Task Master MCP Server +globs: mcp-server/src/**/*, scripts/modules/**/* +alwaysApply: false +--- +# Task Master MCP Server Guidelines + +This document outlines the architecture and implementation patterns for the Task Master Model Context Protocol (MCP) server, designed for integration with tools like Cursor. + +## Architecture Overview (See also: [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc)) + +The MCP server acts as a bridge between external tools (like Cursor) and the core Task Master CLI logic. It leverages FastMCP for the server framework. + +- **Flow**: `External Tool (Cursor)` <-> `FastMCP Server` <-> `MCP Tools` (`mcp-server/src/tools/*.js`) <-> `Core Logic Wrappers` (`mcp-server/src/core/direct-functions/*.js`, exported via `task-master-core.js`) <-> `Core Modules` (`scripts/modules/*.js`) +- **Goal**: Provide a performant and reliable way for external tools to interact with Task Master functionality without directly invoking the CLI for every operation. + +## Direct Function Implementation Best Practices + +When implementing a new direct function in `mcp-server/src/core/direct-functions/`, follow these critical guidelines: + +1. 
**Verify Function Dependencies**: + - ✅ **DO**: Check that all helper functions your direct function needs are properly exported from their source modules + - ✅ **DO**: Import these dependencies explicitly at the top of your file + - ❌ **DON'T**: Assume helper functions like `findTaskById` or `taskExists` are automatically available + - **Example**: + ```javascript + // At top of direct-function file + import { removeTask, findTaskById, taskExists } from '../../../../scripts/modules/task-manager.js'; + ``` + +2. **Parameter Verification and Completeness**: + - ✅ **DO**: Verify the signature of core functions you're calling and ensure all required parameters are provided + - ✅ **DO**: Pass explicit values for required parameters rather than relying on defaults + - ✅ **DO**: Double-check parameter order against function definition + - ❌ **DON'T**: Omit parameters assuming they have default values + - **Example**: + ```javascript + // Correct parameter handling in direct function + async function generateTaskFilesDirect(args, log) { + const tasksPath = findTasksJsonPath(args, log); + const outputDir = args.output || path.dirname(tasksPath); + + try { + // Pass all required parameters + const result = await generateTaskFiles(tasksPath, outputDir); + return { success: true, data: result, fromCache: false }; + } catch (error) { + // Error handling... + } + } + ``` + +3. **Consistent File Path Handling**: + - ✅ **DO**: Use `path.join()` instead of string concatenation for file paths + - ✅ **DO**: Follow established file naming conventions (`task_001.txt` not `1.md`) + - ✅ **DO**: Use `path.dirname()` and other path utilities for manipulating paths + - ✅ **DO**: When paths relate to task files, follow the standard format: `task_${id.toString().padStart(3, '0')}.txt` + - ❌ **DON'T**: Create custom file path handling logic that diverges from established patterns + - **Example**: + ```javascript + // Correct file path handling + const taskFilePath = path.join( + path.dirname(tasksPath), + `task_${taskId.toString().padStart(3, '0')}.txt` + ); + ``` + +4. **Comprehensive Error Handling**: + - ✅ **DO**: Wrap core function calls *and AI calls* in try/catch blocks + - ✅ **DO**: Log errors with appropriate severity and context + - ✅ **DO**: Return standardized error objects with code and message (`{ success: false, error: { code: '...', message: '...' } }`) + - ✅ **DO**: Handle file system errors, AI client errors, AI processing errors, and core function errors distinctly with appropriate codes. + - **Example**: + ```javascript + try { + // Core function call or AI logic + } catch (error) { + log.error(`Failed to execute direct function logic: ${error.message}`); + return { + success: false, + error: { + code: error.code || 'DIRECT_FUNCTION_ERROR', // Use specific codes like AI_CLIENT_ERROR, etc. + message: error.message, + details: error.stack // Optional: Include stack in debug mode + }, + fromCache: false // Ensure this is included if applicable + }; + } + ``` + +5. **Handling Logging Context (`mcpLog`)**: + - **Requirement**: Core functions (like those in `task-manager.js`) may accept an `options` object containing an optional `mcpLog` property. If provided, the core function expects this object to have methods like `mcpLog.info(...)`, `mcpLog.error(...)`. + - **Solution: The Logger Wrapper Pattern**: When calling a core function from a direct function, pass the `log` object provided by FastMCP *wrapped* in the standard `logWrapper` object. 
This ensures the core function receives a logger with the expected method structure. + ```javascript + // Standard logWrapper pattern within a Direct Function + const logWrapper = { + info: (message, ...args) => log.info(message, ...args), + warn: (message, ...args) => log.warn(message, ...args), + error: (message, ...args) => log.error(message, ...args), + debug: (message, ...args) => log.debug && log.debug(message, ...args), + success: (message, ...args) => log.info(message, ...args) + }; + + // ... later when calling the core function ... + await coreFunction( + // ... other arguments ... + { + mcpLog: logWrapper, // Pass the wrapper object + session // Also pass session if needed by core logic or AI service + }, + 'json' // Pass 'json' output format if supported by core function + ); + ``` + - **JSON Output**: Passing `mcpLog` (via the wrapper) often triggers the core function to use a JSON-friendly output format, suppressing spinners/boxes. + - ✅ **DO**: Implement this pattern in direct functions calling core functions that might use `mcpLog`. + +6. **Silent Mode Implementation**: + - ✅ **DO**: Import silent mode utilities: `import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js';` + - ✅ **DO**: Wrap core function calls *within direct functions* using `enableSilentMode()` / `disableSilentMode()` in a `try/finally` block if the core function might produce console output (spinners, boxes, direct `console.log`) that isn't reliably controlled by passing `{ mcpLog }` or an `outputFormat` parameter. + - ✅ **DO**: Always disable silent mode in the `finally` block. + - ❌ **DON'T**: Wrap calls to the unified AI service (`generateTextService`, `generateObjectService`) in silent mode; their logging is handled internally. + - **Example (Direct Function Guaranteeing Silence & using Log Wrapper)**: + ```javascript + export async function coreWrapperDirect(args, log, context = {}) { + const { session } = context; + const tasksPath = findTasksJsonPath(args, log); + const logWrapper = { /* ... */ }; + + enableSilentMode(); // Ensure silence for direct console output + try { + const result = await coreFunction( + tasksPath, + args.param1, + { mcpLog: logWrapper, session }, // Pass context + 'json' // Request JSON format if supported + ); + return { success: true, data: result }; + } catch (error) { + log.error(`Error: ${error.message}`); + return { success: false, error: { /* ... */ } }; + } finally { + disableSilentMode(); // Critical: Always disable in finally + } + } + ``` + +7. **Debugging MCP/Core Logic Interaction**: + - ✅ **DO**: If an MCP tool fails with unclear errors (like JSON parsing failures), run the equivalent `task-master` CLI command in the terminal. The CLI often provides more detailed error messages originating from the core logic (e.g., `ReferenceError`, stack traces) that are obscured by the MCP layer. 
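+   - For example (sketch; tool name and task ID are illustrative): if the `set_task_status` MCP tool fails with an opaque error, run the CLI equivalent to surface the full stack trace from the core logic:
+     ```bash
+     task-master set-status --id='42' --status='done'
+     ```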
+ +## Tool Definition and Execution + +### Tool Structure + +MCP tools must follow a specific structure to properly interact with the FastMCP framework: + +```javascript +server.addTool({ + name: "tool_name", // Use snake_case for tool names + description: "Description of what the tool does", + parameters: z.object({ + // Define parameters using Zod + param1: z.string().describe("Parameter description"), + param2: z.number().optional().describe("Optional parameter description"), + // IMPORTANT: For file operations, always include these optional parameters + file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z.string().optional().describe("Root directory of the project (typically derived from session)") + }), + + // The execute function is the core of the tool implementation + execute: async (args, context) => { + // Implementation goes here + // Return response in the appropriate format + } +}); +``` + +### Execute Function Signature + +The `execute` function receives validated arguments and the FastMCP context: + +```javascript +// Destructured signature (recommended) +execute: async (args, { log, session }) => { + // Tool implementation +} +``` + +- **args**: Validated parameters. +- **context**: Contains `{ log, session }` from FastMCP. (Removed `reportProgress`). + +### Standard Tool Execution Pattern with Path Normalization (Updated) + +To ensure consistent handling of project paths across different client environments (Windows, macOS, Linux, WSL) and input formats (e.g., `file:///...`, URI encoded paths), all MCP tool `execute` methods that require access to the project root **MUST** be wrapped with the `withNormalizedProjectRoot` Higher-Order Function (HOF). + +This HOF, defined in [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js), performs the following before calling the tool's core logic: + +1. **Determines the Raw Root:** It prioritizes `args.projectRoot` if provided by the client, otherwise it calls `getRawProjectRootFromSession` to extract the path from the session. +2. **Normalizes the Path:** It uses the `normalizeProjectRoot` helper to decode URIs, strip `file://` prefixes, fix potential Windows drive letter prefixes (e.g., `/C:/`), convert backslashes (`\`) to forward slashes (`/`), and resolve the path to an absolute path suitable for the server's OS. +3. **Injects Normalized Path:** It updates the `args` object by replacing the original `projectRoot` (or adding it) with the normalized, absolute path. +4. **Executes Original Logic:** It calls the original `execute` function body, passing the updated `args` object. + +**Implementation Example:** + +```javascript +// In mcp-server/src/tools/your-tool.js +import { + handleApiResult, + createErrorResponse, + withNormalizedProjectRoot // <<< Import HOF +} from './utils.js'; +import { yourDirectFunction } from '../core/task-master-core.js'; +import { findTasksJsonPath } from '../core/utils/path-utils.js'; // If needed + +export function registerYourTool(server) { + server.addTool({ + name: "your_tool", + description: "...". + parameters: z.object({ + // ... other parameters ... 
+ projectRoot: z.string().optional().describe('...') // projectRoot is optional here, HOF handles fallback + }), + // Wrap the entire execute function + execute: withNormalizedProjectRoot(async (args, { log, session }) => { + // args.projectRoot is now guaranteed to be normalized and absolute + const { /* other args */, projectRoot } = args; + + try { + log.info(`Executing your_tool with normalized root: ${projectRoot}`); + + // Resolve paths using the normalized projectRoot + let tasksPath = findTasksJsonPath({ projectRoot, file: args.file }, log); + + // Call direct function, passing normalized projectRoot if needed by direct func + const result = await yourDirectFunction( + { + /* other args */, + projectRoot // Pass it if direct function needs it + }, + log, + { session } + ); + + return handleApiResult(result, log); + } catch (error) { + log.error(`Error in your_tool: ${error.message}`); + return createErrorResponse(error.message); + } + }) // End HOF wrap + }); +} +``` + +By using this HOF, the core logic within the `execute` method and any downstream functions (like `findTasksJsonPath` or direct functions) can reliably expect `args.projectRoot` to be a clean, absolute path suitable for the server environment. + +### Project Initialization Tool + +The `initialize_project` tool allows integrated clients like Cursor to set up a new Task Master project: + +```javascript +// In initialize-project.js +import { z } from "zod"; +import { initializeProjectDirect } from "../core/task-master-core.js"; +import { handleApiResult, createErrorResponse } from "./utils.js"; + +export function registerInitializeProjectTool(server) { + server.addTool({ + name: "initialize_project", + description: "Initialize a new Task Master project", + parameters: z.object({ + projectName: z.string().optional().describe("The name for the new project"), + projectDescription: z.string().optional().describe("A brief description"), + projectVersion: z.string().optional().describe("Initial version (e.g., '0.1.0')"), + authorName: z.string().optional().describe("The author's name"), + skipInstall: z.boolean().optional().describe("Skip installing dependencies"), + addAliases: z.boolean().optional().describe("Add shell aliases"), + yes: z.boolean().optional().describe("Skip prompts and use defaults") + }), + execute: async (args, { log, reportProgress }) => { + try { + // Since we're initializing, we don't need project root + const result = await initializeProjectDirect(args, log); + return handleApiResult(result, log, 'Error initializing project'); + } catch (error) { + log.error(`Error in initialize_project: ${error.message}`); + return createErrorResponse(`Failed to initialize project: ${error.message}`); + } + } + }); +} +``` + +### Logging Convention + +The `log` object (destructured from `context`) provides standardized logging methods. Use it within both the `execute` method and the `*Direct` functions. **If progress indication is needed within a direct function, use `log.info()` instead of `reportProgress`**. + +```javascript +// Proper logging usage +log.info(`Starting ${toolName} with parameters: ${JSON.stringify(sanitizedArgs)}`); +log.debug("Detailed operation info", { data }); +log.warn("Potential issue detected"); +log.error(`Error occurred: ${error.message}`, { stack: error.stack }); +log.info('Progress: 50% - AI call initiated...'); // Example progress logging +``` + +## Session Usage Convention + +The `session` object (destructured from `context`) contains authenticated session data and client information. 
+ +- **Authentication**: Access user-specific data (`session.userId`, etc.) if authentication is implemented. +- **Project Root**: The primary use in Task Master is accessing `session.roots` to determine the client's project root directory via the `getProjectRootFromSession` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)). See the Standard Tool Execution Pattern above. +- **Environment Variables**: The `session.env` object provides access to environment variables set in the MCP client configuration (e.g., `.cursor/mcp.json`). This is the **primary mechanism** for the unified AI service layer (`ai-services-unified.js`) to securely access **API keys** when called from MCP context. +- **Capabilities**: Can be used to check client capabilities (`session.clientCapabilities`). + +## Direct Function Wrappers (`*Direct`) + +These functions, located in `mcp-server/src/core/direct-functions/`, form the core logic execution layer for MCP tools. + +- **Purpose**: Bridge MCP tools and core Task Master modules (`scripts/modules/*`). Handle AI interactions if applicable. +- **Responsibilities**: + - Receive `args` (including `projectRoot`), `log`, and optionally `{ session }` context. + - Find `tasks.json` using `findTasksJsonPath`. + - Validate arguments. + - **Implement Caching (if applicable)**: Use `getCachedOrExecute`. + - **Call Core Logic**: Invoke function from `scripts/modules/*`. + - Pass `outputFormat: 'json'` if applicable. + - Wrap with `enableSilentMode/disableSilentMode` if needed. + - Pass `{ mcpLog: logWrapper, session }` context if core logic needs it. + - Handle errors. + - Return standardized result object. + - ❌ **DON'T**: Call `reportProgress`. + - ❌ **DON'T**: Initialize AI clients or call AI services directly. + +## Key Principles + +- **Prefer Direct Function Calls**: MCP tools should always call `*Direct` wrappers instead of `executeTaskMasterCommand`. +- **Standardized Execution Flow**: Follow the pattern: MCP Tool -> `getProjectRootFromSession` -> `*Direct` Function -> Core Logic / AI Logic. +- **Path Resolution via Direct Functions**: The `*Direct` function is responsible for finding the exact `tasks.json` path using `findTasksJsonPath`, relying on the `projectRoot` passed in `args`. +- **AI Logic in Core Modules**: AI interactions (prompt building, calling unified service) reside within the core logic functions (`scripts/modules/*`), not direct functions. +- **Silent Mode in Direct Functions**: Wrap *core function* calls (from `scripts/modules`) with `enableSilentMode()` and `disableSilentMode()` if they produce console output not handled by `outputFormat`. Do not wrap AI calls. +- **Selective Async Processing**: Use `AsyncOperationManager` in the *MCP Tool layer* for operations involving multiple steps or long waits beyond a single AI call (e.g., file processing + AI call + file writing). Simple AI calls handled entirely within the `*Direct` function (like `addTaskDirect`) may not need it at the tool layer. +- **No `reportProgress` in Direct Functions**: Do not pass or use `reportProgress` within `*Direct` functions. Use `log.info()` for internal progress or report progress from the `AsyncOperationManager` callback in the MCP tool layer. +- **Output Formatting**: Ensure core functions called by `*Direct` functions can suppress CLI output, ideally via an `outputFormat` parameter. +- **Project Initialization**: Use the initialize_project tool for setting up new projects in integrated environments. 
+- **Centralized Utilities**: Use helpers from `mcp-server/src/tools/utils.js`, `mcp-server/src/core/utils/path-utils.js`, and `mcp-server/src/core/utils/ai-client-utils.js`. See [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc).
+- **Caching in Direct Functions**: Caching logic resides *within* the `*Direct` functions using `getCachedOrExecute`.
+
+## Resources and Resource Templates
+
+Resources provide LLMs with static or dynamic data without executing tools.
+
+- **Implementation**: Use `@mcp.resource()` decorator pattern or `server.addResource`/`server.addResourceTemplate` in `mcp-server/src/core/resources/`.
+- **Registration**: Register resources during server initialization in [`mcp-server/src/index.js`](mdc:mcp-server/src/index.js).
+- **Best Practices**: Organize resources, validate parameters, use consistent URIs, handle errors. See [`fastmcp-core.txt`](docs/fastmcp-core.txt) for underlying SDK details.
+
+## Implementing MCP Support for a Command
+
+Follow these steps to add MCP support for an existing Task Master command (see [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for more detail):
+
+1. **Ensure Core Logic Exists**: Verify the core functionality is implemented and exported from the relevant module in `scripts/modules/`. Ensure the core function can suppress console output (e.g., via an `outputFormat` parameter).
+
+2. **Create Direct Function File in `mcp-server/src/core/direct-functions/`**:
+   - Create a new file (e.g., `your-command.js`) using **kebab-case** naming.
+   - Import necessary core functions, `findTasksJsonPath`, silent mode utilities, and potentially AI client/prompt utilities.
+   - Implement `async function yourCommandDirect(args, log, context = {})` using **camelCase** with `Direct` suffix. **Remember `context` should only contain `{ session }` if needed (for AI keys/config).**
+   - **Path Resolution**: Obtain `tasksPath` using `findTasksJsonPath(args, log)`.
+   - Parse other `args` and perform necessary validation.
+   - **Handle AI (if applicable)**: Initialize clients using `get*ClientForMCP(session, log)`, build prompts, call AI, parse response. Handle AI-specific errors.
+   - **Implement Caching (if applicable)**: Use `getCachedOrExecute`.
+   - **Call Core Logic**:
+     - Wrap with `enableSilentMode/disableSilentMode` if necessary.
+     - Pass `outputFormat: 'json'` (or similar) if applicable.
+     - Handle errors from the core function.
+   - Format the return as `{ success: true/false, data/error, fromCache?: boolean }`.
+   - ❌ **DON'T**: Call `reportProgress`.
+   - Export the wrapper function.
+
+3. **Update `task-master-core.js` with Import/Export**: Import and re-export your `*Direct` function and add it to the `directFunctions` map.
+
+4. **Create MCP Tool (`mcp-server/src/tools/`)**:
+   - Create a new file (e.g., `your-command.js`) using **kebab-case**.
+   - Import `zod`, `handleApiResult`, `createErrorResponse`, `getProjectRootFromSession`, and your `yourCommandDirect` function. Import `AsyncOperationManager` if needed.
+   - Implement `registerYourCommandTool(server)`.
+   - Define the tool `name` using **snake_case** (e.g., `your_command`).
+   - Define the `parameters` using `zod`. Include `projectRoot: z.string().optional()`.
+   - Implement the `async execute(args, { log, session })` method (omitting `reportProgress` from destructuring). 
+ - Get `rootFolder` using `getProjectRootFromSession(session, log)`. + - **Determine Execution Strategy**: + - **If using `AsyncOperationManager`**: Create the operation, call the `*Direct` function from within the async task callback (passing `log` and `{ session }`), report progress *from the callback*, and return the initial `ACCEPTED` response. + - **If calling `*Direct` function synchronously** (like `add-task`): Call `await yourCommandDirect({ ...args, projectRoot }, log, { session });`. Handle the result with `handleApiResult`. + - ❌ **DON'T**: Pass `reportProgress` down to the direct function in either case. + +5. **Register Tool**: Import and call `registerYourCommandTool` in `mcp-server/src/tools/index.js`. + +6. **Update `mcp.json`**: Add the new tool definition to the `tools` array in `.cursor/mcp.json`. + +## Handling Responses + +- MCP tools should return the object generated by `handleApiResult`. +- `handleApiResult` uses `createContentResponse` or `createErrorResponse` internally. +- `handleApiResult` also uses `processMCPResponseData` by default to filter potentially large fields (`details`, `testStrategy`) from task data. Provide a custom processor function to `handleApiResult` if different filtering is needed. +- The final JSON response sent to the MCP client will include the `fromCache` boolean flag (obtained from the `*Direct` function's result) alongside the actual data (e.g., `{ "fromCache": true, "data": { ... } }` or `{ "fromCache": false, "data": { ... } }`). + +## Parameter Type Handling + +- **Prefer Direct Function Calls**: For optimal performance and error handling, MCP tools should utilize direct function wrappers defined in [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js). These wrappers call the underlying logic from the core modules (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)). +- **Standard Tool Execution Pattern**: + - The `execute` method within each MCP tool (in `mcp-server/src/tools/*.js`) should: + 1. Call the corresponding `*Direct` function wrapper (e.g., `listTasksDirect`) from [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js), passing necessary arguments and the logger. + 2. Receive the result object (typically `{ success, data/error, fromCache }`). + 3. Pass this result object to the `handleApiResult` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) for standardized response formatting and error handling. + 4. Return the formatted response object provided by `handleApiResult`. +- **CLI Execution as Fallback**: The `executeTaskMasterCommand` utility in [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) allows executing commands via the CLI (`task-master ...`). This should **only** be used as a fallback if a direct function wrapper is not yet implemented or if a specific command intrinsically requires CLI execution. +- **Centralized Utilities** (See also: [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)): + - Use `findTasksJsonPath` (in [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js)) *within direct function wrappers* to locate the `tasks.json` file consistently. + - **Leverage MCP Utilities**: The file [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js) contains essential helpers for MCP tool implementation: + - `getProjectRoot`: Normalizes project paths. 
+ - `handleApiResult`: Takes the raw result from a `*Direct` function and formats it into a standard MCP success or error response, automatically handling data processing via `processMCPResponseData`. This is called by the tool's `execute` method. + - `createContentResponse`/`createErrorResponse`: Used by `handleApiResult` to format successful/error MCP responses. + - `processMCPResponseData`: Filters/cleans data (e.g., removing `details`, `testStrategy`) before it's sent in the MCP response. Called by `handleApiResult`. + - `getCachedOrExecute`: **Used inside `*Direct` functions** in `task-master-core.js` to implement caching logic. + - `executeTaskMasterCommand`: Fallback for executing CLI commands. +- **Caching**: To improve performance for frequently called read operations (like `listTasks`, `showTask`, `nextTask`), a caching layer using `lru-cache` is implemented. + - **Caching logic resides *within* the direct function wrappers** in [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js) using the `getCachedOrExecute` utility from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js). + - Generate unique cache keys based on function arguments that define a distinct call (e.g., file path, filters). + - The `getCachedOrExecute` utility handles checking the cache, executing the core logic function on a cache miss, storing the result, and returning the data along with a `fromCache` flag. + - Cache statistics can be monitored using the `cacheStats` MCP tool (implemented via `getCacheStatsDirect`). + - **Caching should generally be applied to read-only operations** that don't modify the `tasks.json` state. Commands like `set-status`, `add-task`, `update-task`, `parse-prd`, `add-dependency` should *not* be cached as they change the underlying data. + +**MCP Tool Implementation Checklist**: + +1. **Core Logic Verification**: + - [ ] Confirm the core function is properly exported from its module (e.g., `task-manager.js`) + - [ ] Identify all required parameters and their types + +2. **Direct Function Wrapper**: + - [ ] Create the `*Direct` function in the appropriate file in `mcp-server/src/core/direct-functions/` + - [ ] Import silent mode utilities and implement them around core function calls + - [ ] Handle all parameter validations and type conversions + - [ ] Implement path resolving for relative paths + - [ ] Add appropriate error handling with standardized error codes + - [ ] Add to imports/exports in `task-master-core.js` + +3. **MCP Tool Implementation**: + - [ ] Create new file in `mcp-server/src/tools/` with kebab-case naming + - [ ] Define zod schema for all parameters + - [ ] Implement the `execute` method following the standard pattern + - [ ] Consider using AsyncOperationManager for long-running operations + - [ ] Register tool in `mcp-server/src/tools/index.js` + +4. 
**Testing**: + - [ ] Write unit tests for the direct function wrapper + - [ ] Write integration tests for the MCP tool + +## Standard Error Codes + +- **Standard Error Codes**: Use consistent error codes across direct function wrappers + - `INPUT_VALIDATION_ERROR`: For missing or invalid required parameters + - `FILE_NOT_FOUND_ERROR`: For file system path issues + - `CORE_FUNCTION_ERROR`: For errors thrown by the core function + - `UNEXPECTED_ERROR`: For all other unexpected errors + +- **Error Object Structure**: + ```javascript + { + success: false, + error: { + code: 'ERROR_CODE', + message: 'Human-readable error message' + }, + fromCache: false + } + ``` + +- **MCP Tool Logging Pattern**: + - ✅ DO: Log the start of execution with arguments (sanitized if sensitive) + - ✅ DO: Log successful completion with result summary + - ✅ DO: Log all error conditions with appropriate log levels + - ✅ DO: Include the cache status in result logs + - ❌ DON'T: Log entire large data structures or sensitive information + +- The MCP server integrates with Task Master core functions through three layers: + 1. Tool Definitions (`mcp-server/src/tools/*.js`) - Define parameters and validation + 2. Direct Functions (`mcp-server/src/core/direct-functions/*.js`) - Handle core logic integration + 3. Core Functions (`scripts/modules/*.js`) - Implement the actual functionality + +- This layered approach provides: + - Clear separation of concerns + - Consistent parameter validation + - Centralized error handling + - Performance optimization through caching (for read operations) + - Standardized response formatting + +## MCP Naming Conventions + +- **Files and Directories**: + - ✅ DO: Use **kebab-case** for all file names: `list-tasks.js`, `set-task-status.js` + - ✅ DO: Use consistent directory structure: `mcp-server/src/tools/` for tool definitions, `mcp-server/src/core/direct-functions/` for direct function implementations + +- **JavaScript Functions**: + - ✅ DO: Use **camelCase** with `Direct` suffix for direct function implementations: `listTasksDirect`, `setTaskStatusDirect` + - ✅ DO: Use **camelCase** with `Tool` suffix for tool registration functions: `registerListTasksTool`, `registerSetTaskStatusTool` + - ✅ DO: Use consistent action function naming inside direct functions: `coreActionFn` or similar descriptive name + +- **MCP Tool Names**: + - ✅ DO: Use **snake_case** for tool names exposed to MCP clients: `list_tasks`, `set_task_status`, `parse_prd_document` + - ✅ DO: Include the core action in the tool name without redundant words: Use `list_tasks` instead of `list_all_tasks` + +- **Examples**: + - File: `list-tasks.js` + - Direct Function: `listTasksDirect` + - Tool Registration: `registerListTasksTool` + - MCP Tool Name: `list_tasks` + +- **Mapping**: + - The `directFunctions` map in `task-master-core.js` maps the core function name (in camelCase) to its direct implementation: + ```javascript + export const directFunctions = { + list: listTasksDirect, + setStatus: setTaskStatusDirect, + // Add more functions as implemented + }; + ``` diff --git a/assets/rules/new_features.mdc b/assets/rules/new_features.mdc new file mode 100644 index 00000000..f6a696f1 --- /dev/null +++ b/assets/rules/new_features.mdc @@ -0,0 +1,630 @@ +--- +description: Guidelines for integrating new features into the Task Master CLI +globs: scripts/modules/*.js +alwaysApply: false +--- + +# Task Master Feature Integration Guidelines + +## Feature Placement Decision Process + +- **Identify Feature Type** (See 
[`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for module details): + - **Data Manipulation**: Features that create, read, update, or delete tasks belong in [`task-manager.js`](mdc:scripts/modules/task-manager.js). Follow guidelines in [`tasks.mdc`](mdc:.cursor/rules/tasks.mdc). + - **Dependency Management**: Features that handle task relationships belong in [`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js). Follow guidelines in [`dependencies.mdc`](mdc:.cursor/rules/dependencies.mdc). + - **User Interface**: Features that display information to users belong in [`ui.js`](mdc:scripts/modules/ui.js). Follow guidelines in [`ui.mdc`](mdc:.cursor/rules/ui.mdc). + - **AI Integration**: Features that use AI models belong in [`ai-services.js`](mdc:scripts/modules/ai-services.js). + - **Cross-Cutting**: Features that don't fit one category may need components in multiple modules + +- **Command-Line Interface** (See [`commands.mdc`](mdc:.cursor/rules/commands.mdc)): + - All new user-facing commands should be added to [`commands.js`](mdc:scripts/modules/commands.js) + - Use consistent patterns for option naming and help text + - Follow the Commander.js model for subcommand structure + +## Implementation Pattern + +The standard pattern for adding a feature follows this workflow: + +1. **Core Logic**: Implement the business logic in the appropriate module (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)). +2. **AI Integration (If Applicable)**: + - Import necessary service functions (e.g., `generateTextService`, `streamTextService`) from [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js). + - Prepare parameters (`role`, `session`, `systemPrompt`, `prompt`). + - Call the service function. + - Handle the response (direct text or stream object). + - **Important**: Prefer `generateTextService` for calls sending large context (like stringified JSON) where incremental display is not needed. See [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc) for detailed usage patterns and cautions. +3. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js) following [`ui.mdc`](mdc:.cursor/rules/ui.mdc). +4. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js) following [`commands.mdc`](mdc:.cursor/rules/commands.mdc). +5. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc)) +6. **Configuration**: Update configuration settings or add new ones in [`config-manager.js`](mdc:scripts/modules/config-manager.js) and ensure getters/setters are appropriate. Update documentation in [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). Update the `.taskmasterconfig` structure if needed. +7. **Documentation**: Update help text and documentation in [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). + +## Critical Checklist for New Features + +- **Comprehensive Function Exports**: + - ✅ **DO**: Export **all core functions, helper functions (like `generateSubtaskPrompt`), and utility methods** needed by your new function or command from their respective modules. + - ✅ **DO**: **Explicitly review the module's `export { ... }` block** at the bottom of the file to ensure every required dependency (even seemingly minor helpers like `findTaskById`, `taskExists`, specific prompt generators, AI call handlers, etc.) is included. 
+ - ❌ **DON'T**: Assume internal functions are already exported - **always verify**. A missing export will cause runtime errors (e.g., `ReferenceError: generateSubtaskPrompt is not defined`). + - **Example**: If implementing a feature that checks task existence, ensure the helper function is in exports: + ```javascript + // At the bottom of your module file: + export { + // ... existing exports ... + yourNewFunction, + taskExists, // Helper function used by yourNewFunction + findTaskById, // Helper function used by yourNewFunction + generateSubtaskPrompt, // Helper needed by expand/add features + getSubtasksFromAI, // Helper needed by expand/add features + }; + ``` + +- **Parameter Completeness and Matching**: + - ✅ **DO**: Pass all required parameters to functions you call within your implementation + - ✅ **DO**: Check function signatures before implementing calls to them + - ✅ **DO**: Verify that direct function parameters match their core function counterparts + - ✅ **DO**: When implementing a direct function for MCP, ensure it only accepts parameters that exist in the core function + - ✅ **DO**: Verify the expected *internal structure* of complex object parameters (like the `mcpLog` object, see mcp.mdc for the required logger wrapper pattern) + - ❌ **DON'T**: Add parameters to direct functions that don't exist in core functions + - ❌ **DON'T**: Assume default parameter values will handle missing arguments + - ❌ **DON'T**: Assume object parameters will work without verifying their required internal structure or methods. + - **Example**: When calling file generation, pass all required parameters: + ```javascript + // ✅ DO: Pass all required parameters + await generateTaskFiles(tasksPath, path.dirname(tasksPath)); + + // ❌ DON'T: Omit required parameters + await generateTaskFiles(tasksPath); // Error - missing outputDir parameter + ``` + + **Example**: Properly match direct function parameters to core function: + ```javascript + // Core function signature + async function expandTask(tasksPath, taskId, numSubtasks, useResearch = false, additionalContext = '', options = {}) { + // Implementation... + } + + // ✅ DO: Match direct function parameters to core function + export async function expandTaskDirect(args, log, context = {}) { + // Extract only parameters that exist in the core function + const taskId = parseInt(args.id, 10); + const numSubtasks = args.num ? parseInt(args.num, 10) : undefined; + const useResearch = args.research === true; + const additionalContext = args.prompt || ''; + + // Call core function with matched parameters + const result = await expandTask( + tasksPath, + taskId, + numSubtasks, + useResearch, + additionalContext, + { mcpLog: log, session: context.session } + ); + + // Return result + return { success: true, data: result, fromCache: false }; + } + + // ❌ DON'T: Use parameters that don't exist in the core function + export async function expandTaskDirect(args, log, context = {}) { + // DON'T extract parameters that don't exist in the core function! 
+ const force = args.force === true; // ❌ WRONG - 'force' doesn't exist in core function + + // DON'T pass non-existent parameters to core functions + const result = await expandTask( + tasksPath, + args.id, + args.num, + args.research, + args.prompt, + force, // ❌ WRONG - this parameter doesn't exist in the core function + { mcpLog: log } + ); + } + ``` + +- **Consistent File Path Handling**: + - ✅ DO: Use consistent file naming conventions: `task_${id.toString().padStart(3, '0')}.txt` + - ✅ DO: Use `path.join()` for composing file paths + - ✅ DO: Use appropriate file extensions (.txt for tasks, .json for data) + - ❌ DON'T: Hardcode path separators or inconsistent file extensions + - **Example**: Creating file paths for tasks: + ```javascript + // ✅ DO: Use consistent file naming and path.join + const taskFileName = path.join( + path.dirname(tasksPath), + `task_${taskId.toString().padStart(3, '0')}.txt` + ); + + // ❌ DON'T: Use inconsistent naming or string concatenation + const taskFileName = path.dirname(tasksPath) + '/' + taskId + '.md'; + ``` + +- **Error Handling and Reporting**: + - ✅ DO: Use structured error objects with code and message properties + - ✅ DO: Include clear error messages identifying the specific problem + - ✅ DO: Handle both function-specific errors and potential file system errors + - ✅ DO: Log errors at appropriate severity levels + - **Example**: Structured error handling in core functions: + ```javascript + try { + // Implementation... + } catch (error) { + log('error', `Error removing task: ${error.message}`); + throw { + code: 'REMOVE_TASK_ERROR', + message: error.message, + details: error.stack + }; + } + ``` + +- **Silent Mode Implementation**: + - ✅ **DO**: Import all silent mode utilities together: + ```javascript + import { enableSilentMode, disableSilentMode, isSilentMode } from '../../../../scripts/modules/utils.js'; + ``` + - ✅ **DO**: Always use `isSilentMode()` function to check global silent mode status, never reference global variables. + - ✅ **DO**: Wrap core function calls **within direct functions** using `enableSilentMode()` and `disableSilentMode()` in a `try/finally` block if the core function might produce console output (like banners, spinners, direct `console.log`s) that isn't reliably controlled by an `outputFormat` parameter. + ```javascript + // Direct Function Example: + try { + // Prefer passing 'json' if the core function reliably handles it + const result = await coreFunction(...args, 'json'); + // OR, if outputFormat is not enough/unreliable: + // enableSilentMode(); // Enable *before* the call + // const result = await coreFunction(...args); + // disableSilentMode(); // Disable *after* the call (typically in finally) + + return { success: true, data: result }; + } catch (error) { + log.error(`Error: ${error.message}`); + return { success: false, error: { message: error.message } }; + } finally { + // If you used enable/disable, ensure disable is called here + // disableSilentMode(); + } + ``` + - ✅ **DO**: Core functions themselves *should* ideally check `outputFormat === 'text'` before displaying UI elements (banners, spinners, boxes) and use internal logging (`log`/`report`) that respects silent mode. The `enable/disableSilentMode` wrapper in the direct function is a safety net. 
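+    *Example core-function sketch (the UI helpers and `doCoreWork` are hypothetical placeholders for whatever the module actually uses):*
+    ```javascript
+    async function coreFunction(tasksPath, options = {}, outputFormat = 'text') {
+      // Render CLI-only UI (banners, spinners, boxes) only for text output
+      if (outputFormat === 'text') {
+        displayBanner(); // hypothetical ui.js helper
+      }
+
+      log('info', 'Running core logic...'); // internal logging that respects silent mode
+      const result = await doCoreWork(tasksPath, options); // hypothetical core logic
+
+      if (outputFormat === 'text') {
+        displayResultBox(result); // hypothetical ui.js helper
+      }
+
+      return result; // structured data for MCP/JSON callers
+    }
+    ```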
+ - ✅ **DO**: Handle mixed parameter/global silent mode correctly for functions accepting both (less common now, prefer `outputFormat`): + ```javascript + // Check both the passed parameter and global silent mode + const isSilent = silentMode || (typeof silentMode === 'undefined' && isSilentMode()); + ``` + - ❌ **DON'T**: Forget to disable silent mode in a `finally` block if you enabled it. + - ❌ **DON'T**: Access the global `silentMode` flag directly. + +- **Debugging Strategy**: + - ✅ **DO**: If an MCP tool fails with vague errors (e.g., JSON parsing issues like `Unexpected token ... is not valid JSON`), **try running the equivalent CLI command directly in the terminal** (e.g., `task-master expand --all`). CLI output often provides much more specific error messages (like missing function definitions or stack traces from the core logic) that pinpoint the root cause. + - ❌ **DON'T**: Rely solely on MCP logs if the error is unclear; use the CLI as a complementary debugging tool for core logic issues. + +```javascript +// 1. CORE LOGIC: Add function to appropriate module (example in task-manager.js) +/** + * Archives completed tasks to archive.json + * @param {string} tasksPath - Path to the tasks.json file + * @param {string} archivePath - Path to the archive.json file + * @returns {number} Number of tasks archived + */ +async function archiveTasks(tasksPath, archivePath = 'tasks/archive.json') { + // Implementation... + return archivedCount; +} + +// Export from the module +export { + // ... existing exports ... + archiveTasks, +}; +``` + +```javascript +// 2. AI Integration: Add import and use necessary service functions +import { generateTextService } from './ai-services-unified.js'; + +// Example usage: +async function handleAIInteraction() { + const role = 'user'; + const session = 'exampleSession'; + const systemPrompt = 'You are a helpful assistant.'; + const prompt = 'What is the capital of France?'; + + const result = await generateTextService(role, session, systemPrompt, prompt); + console.log(result); +} + +// Export from the module +export { + // ... existing exports ... + handleAIInteraction, +}; +``` + +```javascript +// 3. UI COMPONENTS: Add display function to ui.js +/** + * Display archive operation results + * @param {string} archivePath - Path to the archive file + * @param {number} count - Number of tasks archived + */ +function displayArchiveResults(archivePath, count) { + console.log(boxen( + chalk.green(`Successfully archived ${count} tasks to ${archivePath}`), + { padding: 1, borderColor: 'green', borderStyle: 'round' } + )); +} + +// Export from the module +export { + // ... existing exports ... + displayArchiveResults, +}; +``` + +```javascript +// 4. 
COMMAND INTEGRATION: Add to commands.js +import { archiveTasks } from './task-manager.js'; +import { displayArchiveResults } from './ui.js'; + +// In registerCommands function +programInstance + .command('archive') + .description('Archive completed tasks to separate file') + .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json') + .option('-o, --output <file>', 'Archive output file', 'tasks/archive.json') + .action(async (options) => { + const tasksPath = options.file; + const archivePath = options.output; + + console.log(chalk.blue(`Archiving completed tasks from ${tasksPath} to ${archivePath}...`)); + + const archivedCount = await archiveTasks(tasksPath, archivePath); + displayArchiveResults(archivePath, archivedCount); + }); +``` + +## Cross-Module Features + +For features requiring components in multiple modules: + +- ✅ **DO**: Create a clear unidirectional flow of dependencies + ```javascript + // In task-manager.js + function analyzeTasksDifficulty(tasks) { + // Implementation... + return difficultyScores; + } + + // In ui.js - depends on task-manager.js + import { analyzeTasksDifficulty } from './task-manager.js'; + + function displayDifficultyReport(tasks) { + const scores = analyzeTasksDifficulty(tasks); + // Render the scores... + } + ``` + +- ❌ **DON'T**: Create circular dependencies between modules + ```javascript + // In task-manager.js - depends on ui.js + import { displayDifficultyReport } from './ui.js'; + + function analyzeTasks() { + // Implementation... + displayDifficultyReport(tasks); // WRONG! Don't call UI functions from task-manager + } + + // In ui.js - depends on task-manager.js + import { analyzeTasks } from './task-manager.js'; + ``` + +## Command-Line Interface Standards + +- **Naming Conventions**: + - Use kebab-case for command names (`analyze-complexity`, not `analyzeComplexity`) + - Use kebab-case for option names (`--output-format`, not `--outputFormat`) + - Use the same option names across commands when they represent the same concept + +- **Command Structure**: + ```javascript + programInstance + .command('command-name') + .description('Clear, concise description of what the command does') + .option('-s, --short-option <value>', 'Option description', 'default value') + .option('--long-option <value>', 'Option description') + .action(async (options) => { + // Command implementation + }); + ``` + +## Utility Function Guidelines + +When adding utilities to [`utils.js`](mdc:scripts/modules/utils.js): + +- Only add functions that could be used by multiple modules +- Keep utilities single-purpose and purely functional +- Document parameters and return values + +```javascript +/** + * Formats a duration in milliseconds to a human-readable string + * @param {number} ms - Duration in milliseconds + * @returns {string} Formatted duration string (e.g., "2h 30m 15s") + */ +function formatDuration(ms) { + // Implementation... 
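+  // (One possible approach — illustrative only) Build an "Xh Ym Zs" string from the millisecond total
+  const totalSeconds = Math.floor(ms / 1000);
+  const hours = Math.floor(totalSeconds / 3600);
+  const minutes = Math.floor((totalSeconds % 3600) / 60);
+  const seconds = totalSeconds % 60;
+  const parts = [];
+  if (hours > 0) parts.push(`${hours}h`);
+  if (minutes > 0) parts.push(`${minutes}m`);
+  parts.push(`${seconds}s`);
+  const formatted = parts.join(' ');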
+ return formatted; +} +``` + +## Writing Testable Code + +When implementing new features, follow these guidelines to ensure your code is testable: + +- **Dependency Injection** + - Design functions to accept dependencies as parameters + - Avoid hard-coded dependencies that are difficult to mock + ```javascript + // ✅ DO: Accept dependencies as parameters + function processTask(task, fileSystem, logger) { + fileSystem.writeFile('task.json', JSON.stringify(task)); + logger.info('Task processed'); + } + + // ❌ DON'T: Use hard-coded dependencies + function processTask(task) { + fs.writeFile('task.json', JSON.stringify(task)); + console.log('Task processed'); + } + ``` + +- **Separate Logic from Side Effects** + - Keep pure logic separate from I/O operations or UI rendering + - This allows testing the logic without mocking complex dependencies + ```javascript + // ✅ DO: Separate logic from side effects + function calculateTaskPriority(task, dependencies) { + // Pure logic that returns a value + return computedPriority; + } + + function displayTaskPriority(task, dependencies) { + const priority = calculateTaskPriority(task, dependencies); + console.log(`Task priority: ${priority}`); + } + ``` + +- **Callback Functions and Testing** + - When using callbacks (like in Commander.js commands), define them separately + - This allows testing the callback logic independently + ```javascript + // ✅ DO: Define callbacks separately for testing + function getVersionString() { + // Logic to determine version + return version; + } + + // In setupCLI + programInstance.version(getVersionString); + + // In tests + test('getVersionString returns correct version', () => { + expect(getVersionString()).toBe('1.5.0'); + }); + ``` + +- **UI Output Testing** + - For UI components, focus on testing conditional logic rather than exact output + - Use string pattern matching (like `expect(result).toContain('text')`) + - Pay attention to emojis and formatting which can make exact string matching difficult + ```javascript + // ✅ DO: Test the essence of the output, not exact formatting + test('statusFormatter shows done status correctly', () => { + const result = formatStatus('done'); + expect(result).toContain('done'); + expect(result).toContain('✅'); + }); + ``` + +## Testing Requirements + +Every new feature **must** include comprehensive tests following the guidelines in [`tests.mdc`](mdc:.cursor/rules/tests.mdc). Testing should include: + +1. **Unit Tests**: Test individual functions and components in isolation + ```javascript + // Example unit test for a new utility function + describe('newFeatureUtil', () => { + test('should perform expected operation with valid input', () => { + expect(newFeatureUtil('valid input')).toBe('expected result'); + }); + + test('should handle edge cases appropriately', () => { + expect(newFeatureUtil('')).toBeNull(); + }); + }); + ``` + +2. **Integration Tests**: Verify the feature works correctly with other components + ```javascript + // Example integration test for a new command + describe('newCommand integration', () => { + test('should call the correct service functions with parsed arguments', () => { + const mockService = jest.fn().mockResolvedValue('success'); + // Set up test with mocked dependencies + // Call the command handler + // Verify service was called with expected arguments + }); + }); + ``` + +3. **Edge Cases**: Test boundary conditions and error handling + - Invalid inputs + - Missing dependencies + - File system errors + - API failures + +4. 
**Test Coverage**: Aim for at least 80% coverage for all new code + +5. **Jest Mocking Best Practices** + - Follow the mock-first-then-import pattern as described in [`tests.mdc`](mdc:.cursor/rules/tests.mdc) + - Use jest.spyOn() to create spy functions for testing + - Clear mocks between tests to prevent interference + - See the Jest Module Mocking Best Practices section in [`tests.mdc`](mdc:.cursor/rules/tests.mdc) for details + +When submitting a new feature, always run the full test suite to ensure nothing was broken: + +```bash +npm test +``` + +## Documentation Requirements + +For each new feature: + +1. Add help text to the command definition +2. Update [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) with command reference +3. Consider updating [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) if the feature significantly changes module responsibilities. + +Follow the existing command reference format: +```markdown +- **Command Reference: your-command** + - CLI Syntax: `task-master your-command [options]` + - Description: Brief explanation of what the command does + - Parameters: + - `--option1=<value>`: Description of option1 (default: 'default') + - `--option2=<value>`: Description of option2 (required) + - Example: `task-master your-command --option1=value --option2=value2` + - Notes: Additional details, limitations, or special considerations +``` + +For more information on module structure, see [`MODULE_PLAN.md`](mdc:scripts/modules/MODULE_PLAN.md) and follow [`self_improve.mdc`](mdc:scripts/modules/self_improve.mdc) for best practices on updating documentation. + +## Adding MCP Server Support for Commands + +Integrating Task Master commands with the MCP server (for use by tools like Cursor) follows a specific pattern distinct from the CLI command implementation, prioritizing performance and reliability. + +- **Goal**: Leverage direct function calls to core logic, avoiding CLI overhead. +- **Reference**: See [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for full details. + +**MCP Integration Workflow**: + +1. **Core Logic**: Ensure the command's core logic exists and is exported from the appropriate module (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)). +2. **Direct Function Wrapper (`mcp-server/src/core/direct-functions/`)**: + - Create a new file (e.g., `your-command.js`) in `mcp-server/src/core/direct-functions/` using **kebab-case** naming. + - Import the core logic function, necessary MCP utilities like **`findTasksJsonPath` from `../utils/path-utils.js`**, and **silent mode utilities**: `import { enableSilentMode, disableSilentMode } from '../../../../scripts/modules/utils.js';` + - Implement an `async function yourCommandDirect(args, log)` using **camelCase** with `Direct` suffix. + - **Path Finding**: Inside this function, obtain the `tasksPath` by calling `const tasksPath = findTasksJsonPath(args, log);`. This relies on `args.projectRoot` (derived from the session) being passed correctly. + - Perform validation on other arguments received in `args`. + - **Implement Silent Mode**: Wrap core function calls with `enableSilentMode()` and `disableSilentMode()` to prevent logs from interfering with JSON responses. + - **If Caching**: Implement caching using `getCachedOrExecute` from `../../tools/utils.js`. + - **If Not Caching**: Directly call the core logic function within a try/catch block. + - Format the return as `{ success: true/false, data/error, fromCache: boolean }`. + - Export the wrapper function. + +3. 
**Update `task-master-core.js` with Import/Export**: Import and re-export your `*Direct` function and add it to the `directFunctions` map. + +4. **Create MCP Tool (`mcp-server/src/tools/`)**: + - Create a new file (e.g., `your-command.js`) using **kebab-case**. + - Import `zod`, `handleApiResult`, **`withNormalizedProjectRoot` HOF**, and your `yourCommandDirect` function. + - Implement `registerYourCommandTool(server)`. + - **Define parameters**: Make `projectRoot` optional (`z.string().optional().describe(...)`) as the HOF handles fallback. + - Consider if this operation should run in the background using `AsyncOperationManager`. + - Implement the standard `execute` method **wrapped with `withNormalizedProjectRoot`**: + ```javascript + execute: withNormalizedProjectRoot(async (args, { log, session }) => { + // args.projectRoot is now normalized + const { projectRoot /*, other args */ } = args; + // ... resolve tasks path if needed using normalized projectRoot ... + const result = await yourCommandDirect( + { /* other args */, projectRoot /* if needed by direct func */ }, + log, + { session } + ); + return handleApiResult(result, log); + }) + ``` + +5. **Register Tool**: Import and call `registerYourCommandTool` in `mcp-server/src/tools/index.js`. + +6. **Update `mcp.json`**: Add the new tool definition to the `tools` array in `.cursor/mcp.json`. + +## Implementing Background Operations + +For long-running operations that should not block the client, use the AsyncOperationManager: + +1. **Identify Background-Appropriate Operations**: + - ✅ **DO**: Use async operations for CPU-intensive tasks like task expansion or PRD parsing + - ✅ **DO**: Consider async operations for tasks that may take more than 1-2 seconds + - ❌ **DON'T**: Use async operations for quick read/status operations + - ❌ **DON'T**: Use async operations when immediate feedback is critical + +2. **Use AsyncOperationManager in MCP Tools**: + ```javascript + import { asyncOperationManager } from '../core/utils/async-manager.js'; + + // In execute method: + const operationId = asyncOperationManager.addOperation( + expandTaskDirect, // The direct function to run in background + { ...args, projectRoot: rootFolder }, // Args to pass to the function + { log, reportProgress, session } // Context to preserve for the operation + ); + + // Return immediate response with operation ID + return createContentResponse({ + message: "Operation started successfully", + operationId, + status: "pending" + }); + ``` + +3. **Implement Progress Reporting**: + - ✅ **DO**: Use the reportProgress function in direct functions: + ```javascript + // In your direct function: + if (reportProgress) { + await reportProgress({ progress: 50 }); // 50% complete + } + ``` + - AsyncOperationManager will forward progress updates to the client + +4. **Check Operation Status**: + - Implement a way for clients to check status using the `get_operation_status` MCP tool + - Return appropriate status codes and messages + +## Project Initialization + +When implementing project initialization commands: + +1. **Support Programmatic Initialization**: + - ✅ **DO**: Design initialization to work with both CLI and MCP + - ✅ **DO**: Support non-interactive modes with sensible defaults + - ✅ **DO**: Handle project metadata like name, description, version + - ✅ **DO**: Create necessary files and directories + +2. 
**In MCP Tool Implementation**: + ```javascript + // In initialize-project.js MCP tool: + import { z } from "zod"; + import { initializeProjectDirect } from "../core/task-master-core.js"; + + export function registerInitializeProjectTool(server) { + server.addTool({ + name: "initialize_project", + description: "Initialize a new Task Master project", + parameters: z.object({ + projectName: z.string().optional().describe("The name for the new project"), + projectDescription: z.string().optional().describe("A brief description"), + projectVersion: z.string().optional().describe("Initial version (e.g., '0.1.0')"), + // Add other parameters as needed + }), + execute: async (args, { log, reportProgress, session }) => { + try { + // No need for project root since we're creating a new project + const result = await initializeProjectDirect(args, log); + return handleApiResult(result, log, 'Error initializing project'); + } catch (error) { + log.error(`Error in initialize_project: ${error.message}`); + return createErrorResponse(`Failed to initialize project: ${error.message}`); + } + } + }); + } + ``` diff --git a/assets/rules/self_improve.mdc b/assets/rules/self_improve.mdc new file mode 100644 index 00000000..40b31b6e --- /dev/null +++ b/assets/rules/self_improve.mdc @@ -0,0 +1,72 @@ +--- +description: Guidelines for continuously improving Cursor rules based on emerging code patterns and best practices. +globs: **/* +alwaysApply: true +--- + +- **Rule Improvement Triggers:** + - New code patterns not covered by existing rules + - Repeated similar implementations across files + - Common error patterns that could be prevented + - New libraries or tools being used consistently + - Emerging best practices in the codebase + +- **Analysis Process:** + - Compare new code with existing rules + - Identify patterns that should be standardized + - Look for references to external documentation + - Check for consistent error handling patterns + - Monitor test patterns and coverage + +- **Rule Updates:** + - **Add New Rules When:** + - A new technology/pattern is used in 3+ files + - Common bugs could be prevented by a rule + - Code reviews repeatedly mention the same feedback + - New security or performance patterns emerge + + - **Modify Existing Rules When:** + - Better examples exist in the codebase + - Additional edge cases are discovered + - Related rules have been updated + - Implementation details have changed + +- **Example Pattern Recognition:** + ```typescript + // If you see repeated patterns like: + const data = await prisma.user.findMany({ + select: { id: true, email: true }, + where: { status: 'ACTIVE' } + }); + + // Consider adding to [prisma.mdc](mdc:.cursor/rules/prisma.mdc): + // - Standard select fields + // - Common where conditions + // - Performance optimization patterns + ``` + +- **Rule Quality Checks:** + - Rules should be actionable and specific + - Examples should come from actual code + - References should be up to date + - Patterns should be consistently enforced + +- **Continuous Improvement:** + - Monitor code review comments + - Track common development questions + - Update rules after major refactors + - Add links to relevant documentation + - Cross-reference related rules + +- **Rule Deprecation:** + - Mark outdated patterns as deprecated + - Remove rules that no longer apply + - Update references to deprecated rules + - Document migration paths for old patterns + +- **Documentation Updates:** + - Keep examples synchronized with code + - Update references to external docs + - 
Maintain links between related rules + - Document breaking changes +Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for proper rule formatting and structure. diff --git a/assets/rules/taskmaster.mdc b/assets/rules/taskmaster.mdc new file mode 100644 index 00000000..fd6a8384 --- /dev/null +++ b/assets/rules/taskmaster.mdc @@ -0,0 +1,382 @@ +--- +description: Comprehensive reference for Taskmaster MCP tools and CLI commands. +globs: **/* +alwaysApply: true +--- +# Taskmaster Tool & Command Reference + +This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Cursor, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback. + +**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback. + +**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`. + +--- + +## Initialization & Setup + +### 1. Initialize Project (`init`) + +* **MCP Tool:** `initialize_project` +* **CLI Command:** `task-master init [options]` +* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.` +* **Key CLI Options:** + * `--name <name>`: `Set the name for your project in Taskmaster's configuration.` + * `--description <text>`: `Provide a brief description for your project.` + * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.` + * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.` +* **Usage:** Run this once at the beginning of a new project. +* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.` +* **Key MCP Parameters/Options:** + * `projectName`: `Set the name for your project.` (CLI: `--name <name>`) + * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`) + * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`) + * `authorName`: `Author name.` (CLI: `--author <author>`) + * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`) + * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`) + * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`) +* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server. +* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in scripts/example_prd.txt. + +### 2. 
Parse PRD (`parse_prd`) + +* **MCP Tool:** `parse_prd` +* **CLI Command:** `task-master parse-prd [file] [options]` +* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.` +* **Key Parameters/Options:** + * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`) + * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to 'tasks/tasks.json'.` (CLI: `-o, --output <file>`) + * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`) + * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`) +* **Usage:** Useful for bootstrapping a project from an existing requirements document. +* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `scripts/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`. + +--- + +## AI Model Configuration + +### 2. Manage Models (`models`) +* **MCP Tool:** `models` +* **CLI Command:** `task-master models [options]` +* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.` +* **Key MCP Parameters/Options:** + * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`) + * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`) + * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`) + * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`) + * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`) + * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically) + * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically) +* **Key CLI Options:** + * `--set-main <model_id>`: `Set the primary model.` + * `--set-research <model_id>`: `Set the research model.` + * `--set-fallback <model_id>`: `Set the fallback model.` + * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).` + * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.` + * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.` +* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. 
Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`. +* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`. +* **Notes:** Configuration is stored in `.taskmasterconfig` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live. +* **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them. +* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80. +* **Warning:** DO NOT MANUALLY EDIT THE .taskmasterconfig FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback. + +--- + +## Task Listing & Viewing + +### 3. Get Tasks (`get_tasks`) + +* **MCP Tool:** `get_tasks` +* **CLI Command:** `task-master list [options]` +* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.` +* **Key Parameters/Options:** + * `status`: `Show only Taskmaster tasks matching this status, e.g., 'pending' or 'done'.` (CLI: `-s, --status <status>`) + * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Get an overview of the project status, often used at the start of a work session. + +### 4. Get Next Task (`next_task`) + +* **MCP Tool:** `next_task` +* **CLI Command:** `task-master next [options]` +* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Identify what to work on next according to the plan. + +### 5. Get Task Details (`get_task`) + +* **MCP Tool:** `get_task` +* **CLI Command:** `task-master show [id] [options]` +* **Description:** `Display detailed information for a specific Taskmaster task or subtask by its ID.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to view.` (CLI: `[id]` positional or `-i, --id <id>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Understand the full details, implementation notes, and test strategy for a specific task before starting work. + +--- + +## Task Creation & Modification + +### 6. Add Task (`add_task`) + +* **MCP Tool:** `add_task` +* **CLI Command:** `task-master add-task [options]` +* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.` +* **Key Parameters/Options:** + * `prompt`: `Required. 
Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`) + * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`) + * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`) + * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Quickly add newly identified tasks during development. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 7. Add Subtask (`add_subtask`) + +* **MCP Tool:** `add_subtask` +* **CLI Command:** `task-master add-subtask [options]` +* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.` +* **Key Parameters/Options:** + * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`) + * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`) + * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`) + * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`) + * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`) + * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`) + * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`) + * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Break down tasks manually or reorganize existing tasks. + +### 8. Update Tasks (`update`) + +* **MCP Tool:** `update` +* **CLI Command:** `task-master update [options]` +* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.` +* **Key Parameters/Options:** + * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`) + * `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. 
Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 9. Update Task (`update_task`) + +* **MCP Tool:** `update_task` +* **CLI Command:** `task-master update-task [options]` +* **Description:** `Modify a specific Taskmaster task or subtask by its ID, incorporating new information or changes.` +* **Key Parameters/Options:** + * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to update.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Refine a specific task based on new understanding or feedback. Example CLI: `task-master update-task --id='15' --prompt='Clarification: Use PostgreSQL instead of MySQL.\nUpdate schema details...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 10. Update Subtask (`update_subtask`) + +* **MCP Tool:** `update_subtask` +* **CLI Command:** `task-master update-subtask [options]` +* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.` +* **Key Parameters/Options:** + * `id`: `Required. The specific ID of the Taskmaster subtask, e.g., '15.2', you want to add information to.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. Provide the information or notes Taskmaster should append to the subtask's details. Ensure this adds *new* information not already present.` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Add implementation notes, code snippets, or clarifications to a subtask during development. Before calling, review the subtask's current details to append only fresh insights, helping to build a detailed log of the implementation journey and avoid redundancy. Example CLI: `task-master update-subtask --id='15.2' --prompt='Discovered that the API requires header X.\nImplementation needs adjustment...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 11. Set Task Status (`set_task_status`) + +* **MCP Tool:** `set_task_status` +* **CLI Command:** `task-master set-status [options]` +* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.` +* **Key Parameters/Options:** + * `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`) + * `status`: `Required. 
The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Mark progress as tasks move through the development cycle. + +### 12. Remove Task (`remove_task`) + +* **MCP Tool:** `remove_task` +* **CLI Command:** `task-master remove-task [options]` +* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`) + * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project. +* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks. + +--- + +## Task Structure & Breakdown + +### 13. Expand Task (`expand_task`) + +* **MCP Tool:** `expand_task` +* **CLI Command:** `task-master expand [options]` +* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.` +* **Key Parameters/Options:** + * `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`) + * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`) + * `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`) + * `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`) + * `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 14. Expand All Tasks (`expand_all`) + +* **MCP Tool:** `expand_all` +* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag) +* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.` +* **Key Parameters/Options:** + * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`) + * `research`: `Enable research role for more informed subtask generation. 
Requires appropriate API key.` (CLI: `-r, --research`) + * `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`) + * `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 15. Clear Subtasks (`clear_subtasks`) + +* **MCP Tool:** `clear_subtasks` +* **CLI Command:** `task-master clear-subtasks [options]` +* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.` +* **Key Parameters/Options:** + * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using `all`.) (CLI: `-i, --id <ids>`) + * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement. + +### 16. Remove Subtask (`remove_subtask`) + +* **MCP Tool:** `remove_subtask` +* **CLI Command:** `task-master remove-subtask [options]` +* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.` +* **Key Parameters/Options:** + * `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`) + * `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`) + * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task. + +--- + +## Dependency Management + +### 17. Add Dependency (`add_dependency`) + +* **MCP Tool:** `add_dependency` +* **CLI Command:** `task-master add-dependency [options]` +* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`) + * `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`) +* **Usage:** Establish the correct order of execution between tasks. + +### 18. Remove Dependency (`remove_dependency`) + +* **MCP Tool:** `remove_dependency` +* **CLI Command:** `task-master remove-dependency [options]` +* **Description:** `Remove a dependency relationship between two Taskmaster tasks.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`) + * `dependsOn`: `Required. 
The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Update task relationships when the order of execution changes. + +### 19. Validate Dependencies (`validate_dependencies`) + +* **MCP Tool:** `validate_dependencies` +* **CLI Command:** `task-master validate-dependencies [options]` +* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Audit the integrity of your task dependencies. + +### 20. Fix Dependencies (`fix_dependencies`) + +* **MCP Tool:** `fix_dependencies` +* **CLI Command:** `task-master fix-dependencies [options]` +* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Clean up dependency errors automatically. + +--- + +## Analysis & Reporting + +### 21. Analyze Project Complexity (`analyze_project_complexity`) + +* **MCP Tool:** `analyze_project_complexity` +* **CLI Command:** `task-master analyze-complexity [options]` +* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.` +* **Key Parameters/Options:** + * `output`: `Where to save the complexity analysis report (default: 'scripts/task-complexity-report.json').` (CLI: `-o, --output <file>`) + * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`) + * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Used before breaking down tasks to identify which ones need the most attention. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 22. View Complexity Report (`complexity_report`) + +* **MCP Tool:** `complexity_report` +* **CLI Command:** `task-master complexity-report [options]` +* **Description:** `Display the task complexity analysis report in a readable format.` +* **Key Parameters/Options:** + * `file`: `Path to the complexity report (default: 'scripts/task-complexity-report.json').` (CLI: `-f, --file <file>`) +* **Usage:** Review and understand the complexity analysis results after running analyze-complexity. + +--- + +## File Management + +### 23. Generate Task Files (`generate`) + +* **MCP Tool:** `generate` +* **CLI Command:** `task-master generate [options]` +* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.` +* **Key Parameters/Options:** + * `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. 
Default relies on auto-detection.` (CLI: `-f, --file <file>`)
+* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date.
+
+---
+
+## Environment Variables Configuration (Updated)
+
+Taskmaster primarily uses the **`.taskmasterconfig`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
+
+Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
+
+* **API Keys (Required for corresponding provider):**
+  * `ANTHROPIC_API_KEY`
+  * `PERPLEXITY_API_KEY`
+  * `OPENAI_API_KEY`
+  * `GOOGLE_API_KEY`
+  * `MISTRAL_API_KEY`
+  * `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
+  * `OPENROUTER_API_KEY`
+  * `XAI_API_KEY`
+  * `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
+* **Endpoints (Optional / provider-specific; these can also be set in `.taskmasterconfig`):**
+  * `AZURE_OPENAI_ENDPOINT`
+  * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)
+
+**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmasterconfig` via `task-master models` command or `models` MCP tool.
+
+---
+
+For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.cursor/rules/dev_workflow.mdc).
diff --git a/assets/rules/tasks.mdc b/assets/rules/tasks.mdc
new file mode 100644
index 00000000..dee041e9
--- /dev/null
+++ b/assets/rules/tasks.mdc
@@ -0,0 +1,331 @@
+---
+description: Guidelines for implementing task management operations
+globs: scripts/modules/task-manager.js
+alwaysApply: false
+---
+
+# Task Management Guidelines
+
+## Task Structure Standards
+
+- **Core Task Properties**:
+  - ✅ DO: Include all required properties in each task object
+  - ✅ DO: Provide default values for optional properties
+  - ❌ DON'T: Add extra properties that aren't in the standard schema
+
+  ```javascript
+  // ✅ DO: Follow this structure for task objects
+  const task = {
+    id: nextId,
+    title: "Task title",
+    description: "Brief task description",
+    status: "pending", // "pending", "in-progress", "done", etc.
+ dependencies: [], // Array of task IDs + priority: "medium", // "high", "medium", "low" + details: "Detailed implementation instructions", + testStrategy: "Verification approach", + subtasks: [] // Array of subtask objects + }; + ``` + +- **Subtask Structure**: + - ✅ DO: Use consistent properties across subtasks + - ✅ DO: Maintain simple numeric IDs within parent tasks + - ❌ DON'T: Duplicate parent task properties in subtasks + + ```javascript + // ✅ DO: Structure subtasks consistently + const subtask = { + id: nextSubtaskId, // Simple numeric ID, unique within the parent task + title: "Subtask title", + description: "Brief subtask description", + status: "pending", + dependencies: [], // Can include numeric IDs (other subtasks) or full task IDs + details: "Detailed implementation instructions" + }; + ``` + +## Task Creation and Parsing + +- **ID Management**: + - ✅ DO: Assign unique sequential IDs to tasks + - ✅ DO: Calculate the next ID based on existing tasks + - ❌ DON'T: Hardcode or reuse IDs + + ```javascript + // ✅ DO: Calculate the next available ID + const highestId = Math.max(...data.tasks.map(t => t.id)); + const nextTaskId = highestId + 1; + ``` + +- **PRD Parsing**: + - ✅ DO: Extract tasks from PRD documents using AI + - ✅ DO: Provide clear prompts to guide AI task generation + - ✅ DO: Validate and clean up AI-generated tasks + + ```javascript + // ✅ DO: Validate AI responses + try { + // Parse the JSON response + taskData = JSON.parse(jsonContent); + + // Check that we have the required fields + if (!taskData.title || !taskData.description) { + throw new Error("Missing required fields in the generated task"); + } + } catch (error) { + log('error', "Failed to parse AI's response as valid task JSON:", error); + process.exit(1); + } + ``` + +## Task Updates and Modifications + +- **Status Management**: + - ✅ DO: Provide functions for updating task status + - ✅ DO: Handle both individual tasks and subtasks + - ✅ DO: Consider subtask status when updating parent tasks + + ```javascript + // ✅ DO: Handle status updates for both tasks and subtasks + async function setTaskStatus(tasksPath, taskIdInput, newStatus) { + // Check if it's a subtask (e.g., "1.2") + if (taskIdInput.includes('.')) { + const [parentId, subtaskId] = taskIdInput.split('.').map(id => parseInt(id, 10)); + + // Find the parent task and subtask + const parentTask = data.tasks.find(t => t.id === parentId); + const subtask = parentTask.subtasks.find(st => st.id === subtaskId); + + // Update subtask status + subtask.status = newStatus; + + // Check if all subtasks are done + if (newStatus === 'done') { + const allSubtasksDone = parentTask.subtasks.every(st => st.status === 'done'); + if (allSubtasksDone) { + // Suggest updating parent task + } + } + } else { + // Handle regular task + const task = data.tasks.find(t => t.id === parseInt(taskIdInput, 10)); + task.status = newStatus; + + // If marking as done, also mark subtasks + if (newStatus === 'done' && task.subtasks && task.subtasks.length > 0) { + task.subtasks.forEach(subtask => { + subtask.status = newStatus; + }); + } + } + } + ``` + +- **Task Expansion**: + - ✅ DO: Use AI to generate detailed subtasks + - ✅ DO: Consider complexity analysis for subtask counts + - ✅ DO: Ensure proper IDs for newly created subtasks + + ```javascript + // ✅ DO: Generate appropriate subtasks based on complexity + if (taskAnalysis) { + log('info', `Found complexity analysis for task ${taskId}: Score ${taskAnalysis.complexityScore}/10`); + + // Use recommended number of subtasks if 
available + if (taskAnalysis.recommendedSubtasks && numSubtasks === CONFIG.defaultSubtasks) { + numSubtasks = taskAnalysis.recommendedSubtasks; + log('info', `Using recommended number of subtasks: ${numSubtasks}`); + } + } + ``` + +## Task File Generation + +- **File Formatting**: + - ✅ DO: Use consistent formatting for task files + - ✅ DO: Include all task properties in text files + - ✅ DO: Format dependencies with status indicators + + ```javascript + // ✅ DO: Use consistent file formatting + let content = `# Task ID: ${task.id}\n`; + content += `# Title: ${task.title}\n`; + content += `# Status: ${task.status || 'pending'}\n`; + + // Format dependencies with their status + if (task.dependencies && task.dependencies.length > 0) { + content += `# Dependencies: ${formatDependenciesWithStatus(task.dependencies, data.tasks)}\n`; + } else { + content += '# Dependencies: None\n'; + } + ``` + +- **Subtask Inclusion**: + - ✅ DO: Include subtasks in parent task files + - ✅ DO: Use consistent indentation for subtask sections + - ✅ DO: Display subtask dependencies with proper formatting + + ```javascript + // ✅ DO: Format subtasks correctly in task files + if (task.subtasks && task.subtasks.length > 0) { + content += '\n# Subtasks:\n'; + + task.subtasks.forEach(subtask => { + content += `## ${subtask.id}. ${subtask.title} [${subtask.status || 'pending'}]\n`; + + // Format subtask dependencies + if (subtask.dependencies && subtask.dependencies.length > 0) { + // Format the dependencies + content += `### Dependencies: ${formattedDeps}\n`; + } else { + content += '### Dependencies: None\n'; + } + + content += `### Description: ${subtask.description || ''}\n`; + content += '### Details:\n'; + content += (subtask.details || '').split('\n').map(line => line).join('\n'); + content += '\n\n'; + }); + } + ``` + +## Task Listing and Display + +- **Filtering and Organization**: + - ✅ DO: Allow filtering tasks by status + - ✅ DO: Handle subtask display in lists + - ✅ DO: Use consistent table formats + + ```javascript + // ✅ DO: Implement clear filtering and organization + // Filter tasks by status if specified + const filteredTasks = statusFilter + ? data.tasks.filter(task => + task.status && task.status.toLowerCase() === statusFilter.toLowerCase()) + : data.tasks; + ``` + +- **Progress Tracking**: + - ✅ DO: Calculate and display completion statistics + - ✅ DO: Track both task and subtask completion + - ✅ DO: Use visual progress indicators + + ```javascript + // ✅ DO: Track and display progress + // Calculate completion statistics + const totalTasks = data.tasks.length; + const completedTasks = data.tasks.filter(task => + task.status === 'done' || task.status === 'completed').length; + const completionPercentage = totalTasks > 0 ? 
(completedTasks / totalTasks) * 100 : 0; + + // Count subtasks + let totalSubtasks = 0; + let completedSubtasks = 0; + + data.tasks.forEach(task => { + if (task.subtasks && task.subtasks.length > 0) { + totalSubtasks += task.subtasks.length; + completedSubtasks += task.subtasks.filter(st => + st.status === 'done' || st.status === 'completed').length; + } + }); + ``` + +## Complexity Analysis + +- **Scoring System**: + - ✅ DO: Use AI to analyze task complexity + - ✅ DO: Include complexity scores (1-10) + - ✅ DO: Generate specific expansion recommendations + + ```javascript + // ✅ DO: Handle complexity analysis properly + const report = { + meta: { + generatedAt: new Date().toISOString(), + tasksAnalyzed: tasksData.tasks.length, + thresholdScore: thresholdScore, + projectName: tasksData.meta?.projectName || 'Your Project Name', + usedResearch: useResearch + }, + complexityAnalysis: complexityAnalysis + }; + ``` + +- **Analysis-Based Workflow**: + - ✅ DO: Use complexity reports to guide task expansion + - ✅ DO: Prioritize complex tasks for more detailed breakdown + - ✅ DO: Use expansion prompts from complexity analysis + + ```javascript + // ✅ DO: Apply complexity analysis to workflow + // Sort tasks by complexity if report exists, otherwise by ID + if (complexityReport && complexityReport.complexityAnalysis) { + log('info', 'Sorting tasks by complexity...'); + + // Create a map of task IDs to complexity scores + const complexityMap = new Map(); + complexityReport.complexityAnalysis.forEach(analysis => { + complexityMap.set(analysis.taskId, analysis.complexityScore); + }); + + // Sort tasks by complexity score (high to low) + tasksToExpand.sort((a, b) => { + const scoreA = complexityMap.get(a.id) || 0; + const scoreB = complexityMap.get(b.id) || 0; + return scoreB - scoreA; + }); + } + ``` + +## Next Task Selection + +- **Eligibility Criteria**: + - ✅ DO: Consider dependencies when finding next tasks + - ✅ DO: Prioritize by task priority and dependency count + - ✅ DO: Skip completed tasks + + ```javascript + // ✅ DO: Use proper task prioritization logic + function findNextTask(tasks) { + // Get all completed task IDs + const completedTaskIds = new Set( + tasks + .filter(t => t.status === 'done' || t.status === 'completed') + .map(t => t.id) + ); + + // Filter for pending tasks whose dependencies are all satisfied + const eligibleTasks = tasks.filter(task => + (task.status === 'pending' || task.status === 'in-progress') && + task.dependencies && + task.dependencies.every(depId => completedTaskIds.has(depId)) + ); + + // Sort by priority, dependency count, and ID + const priorityValues = { 'high': 3, 'medium': 2, 'low': 1 }; + + const nextTask = eligibleTasks.sort((a, b) => { + // Priority first + const priorityA = priorityValues[a.priority || 'medium'] || 2; + const priorityB = priorityValues[b.priority || 'medium'] || 2; + + if (priorityB !== priorityA) { + return priorityB - priorityA; // Higher priority first + } + + // Dependency count next + if (a.dependencies.length !== b.dependencies.length) { + return a.dependencies.length - b.dependencies.length; // Fewer dependencies first + } + + // ID last + return a.id - b.id; // Lower ID first + })[0]; + + return nextTask; + } + ``` + +Refer to [`task-manager.js`](mdc:scripts/modules/task-manager.js) for implementation examples and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines. 
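+
+As a final illustration, the selection helper above can be wired to the JSON utilities like this. This is a minimal sketch only; it assumes `readJSON` and `log` are exported from `utils.js` and `findNextTask` from `task-manager.js` in `scripts/modules/` — adjust the paths and names to the actual modules.
+
+```javascript
+// Hypothetical wiring for a "next task" lookup (illustrative sketch, not the shipped implementation)
+import { readJSON, log } from './utils.js';
+import { findNextTask } from './task-manager.js';
+
+function showNextTask(tasksPath = 'tasks/tasks.json') {
+  const data = readJSON(tasksPath); // readJSON returns null on read/parse errors
+  if (!data || !Array.isArray(data.tasks)) {
+    log('error', `Could not read tasks from ${tasksPath}`);
+    return null;
+  }
+
+  const nextTask = findNextTask(data.tasks); // Priority, then dependency count, then ID
+  if (!nextTask) {
+    log('info', 'No eligible tasks - check statuses and dependencies');
+    return null;
+  }
+
+  log('info', `Next task: #${nextTask.id} - ${nextTask.title}`);
+  return nextTask;
+}
+```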
\ No newline at end of file diff --git a/assets/rules/tests.mdc b/assets/rules/tests.mdc new file mode 100644 index 00000000..0ad87de9 --- /dev/null +++ b/assets/rules/tests.mdc @@ -0,0 +1,892 @@ +--- +description: Guidelines for implementing and maintaining tests for Task Master CLI +globs: "**/*.test.js,tests/**/*" +--- + +# Testing Guidelines for Task Master CLI + +*Note:* Never use asynchronous operations in tests. Always mock tests properly based on the way the tested functions are defined and used. Do not arbitrarily create tests. Based them on the low-level details and execution of the underlying code being tested. + +## Test Organization Structure + +- **Unit Tests** (See [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for module breakdown) + - Located in `tests/unit/` + - Test individual functions and utilities in isolation + - Mock all external dependencies + - Keep tests small, focused, and fast + - Example naming: `utils.test.js`, `task-manager.test.js` + +- **Integration Tests** + - Located in `tests/integration/` + - Test interactions between modules + - Focus on component interfaces rather than implementation details + - Use more realistic but still controlled test environments + - Example naming: `task-workflow.test.js`, `command-integration.test.js` + +- **End-to-End Tests** + - Located in `tests/e2e/` + - Test complete workflows from a user perspective + - Focus on CLI commands as they would be used by users + - Example naming: `create-task.e2e.test.js`, `expand-task.e2e.test.js` + +- **Test Fixtures** + - Located in `tests/fixtures/` + - Provide reusable test data + - Keep fixtures small and representative + - Export fixtures as named exports for reuse + +## Test File Organization + +```javascript +// 1. Imports +import { jest } from '@jest/globals'; + +// 2. Mock setup (MUST come before importing the modules under test) +jest.mock('fs'); +jest.mock('@anthropic-ai/sdk'); +jest.mock('../../scripts/modules/utils.js', () => ({ + CONFIG: { + projectVersion: '1.5.0' + }, + log: jest.fn() +})); + +// 3. Import modules AFTER all mocks are defined +import { functionToTest } from '../../scripts/modules/module-name.js'; +import { testFixture } from '../fixtures/fixture-name.js'; +import fs from 'fs'; + +// 4. Set up spies on mocked modules (if needed) +const mockReadFileSync = jest.spyOn(fs, 'readFileSync'); + +// 5. Test suite with descriptive name +describe('Feature or Function Name', () => { + // 6. Setup and teardown (if needed) + beforeEach(() => { + jest.clearAllMocks(); + // Additional setup code + }); + + afterEach(() => { + // Cleanup code + }); + + // 7. Grouped tests for related functionality + describe('specific functionality', () => { + // 8. 
Individual test cases with clear descriptions + test('should behave in expected way when given specific input', () => { + // Arrange - set up test data + const input = testFixture.sampleInput; + mockReadFileSync.mockReturnValue('mocked content'); + + // Act - call the function being tested + const result = functionToTest(input); + + // Assert - verify the result + expect(result).toBe(expectedOutput); + expect(mockReadFileSync).toHaveBeenCalledWith(expect.stringContaining('path')); + }); + }); +}); +``` + +## Commander.js Command Testing Best Practices + +When testing CLI commands built with Commander.js, several special considerations must be made to avoid common pitfalls: + +- **Direct Action Handler Testing** + - ✅ **DO**: Test the command action handlers directly rather than trying to mock the entire Commander.js chain + - ✅ **DO**: Create simplified test-specific implementations of command handlers that match the original behavior + - ✅ **DO**: Explicitly handle all options, including defaults and shorthand flags (e.g., `-p` for `--prompt`) + - ✅ **DO**: Include null/undefined checks in test implementations for parameters that might be optional + - ✅ **DO**: Use fixtures from `tests/fixtures/` for consistent sample data across tests + + ```javascript + // ✅ DO: Create a simplified test version of the command handler + const testAddTaskAction = async (options) => { + options = options || {}; // Ensure options aren't undefined + + // Validate parameters + const isManualCreation = options.title && options.description; + const prompt = options.prompt || options.p; // Handle shorthand flags + + if (!prompt && !isManualCreation) { + throw new Error('Expected error message'); + } + + // Call the mocked task manager + return mockTaskManager.addTask(/* parameters */); + }; + + test('should handle required parameters correctly', async () => { + // Call the test implementation directly + await expect(async () => { + await testAddTaskAction({ file: 'tasks.json' }); + }).rejects.toThrow('Expected error message'); + }); + ``` + +- **Commander Chain Mocking (If Necessary)** + - ✅ **DO**: Mock ALL chainable methods (`option`, `argument`, `action`, `on`, etc.) + - ✅ **DO**: Return `this` (or the mock object) from all chainable method mocks + - ✅ **DO**: Remember to mock not only the initial object but also all objects returned by methods + - ✅ **DO**: Implement a mechanism to capture the action handler for direct testing + + ```javascript + // If you must mock the Commander.js chain: + const mockCommand = { + command: jest.fn().mockReturnThis(), + description: jest.fn().mockReturnThis(), + option: jest.fn().mockReturnThis(), + argument: jest.fn().mockReturnThis(), // Don't forget this one + action: jest.fn(fn => { + actionHandler = fn; // Capture the handler for testing + return mockCommand; + }), + on: jest.fn().mockReturnThis() // Don't forget this one + }; + ``` + +- **Parameter Handling** + - ✅ **DO**: Check for both main flag and shorthand flags (e.g., `prompt` and `p`) + - ✅ **DO**: Handle parameters like Commander would (comma-separated lists, etc.) + - ✅ **DO**: Set proper default values as defined in the command + - ✅ **DO**: Validate that required parameters are actually required in tests + + ```javascript + // Parse dependencies like Commander would + const dependencies = options.dependencies + ? 
options.dependencies.split(',').map(id => id.trim()) + : []; + ``` + +- **Environment and Session Handling** + - ✅ **DO**: Properly mock session objects when required by functions + - ✅ **DO**: Reset environment variables between tests if modified + - ✅ **DO**: Use a consistent pattern for environment-dependent tests + + ```javascript + // Session parameter mock pattern + const sessionMock = { session: process.env }; + + // In test: + expect(mockAddTask).toHaveBeenCalledWith( + expect.any(String), + 'Test prompt', + [], + 'medium', + sessionMock, + false, + null, + null + ); + ``` + +- **Common Pitfalls to Avoid** + - ❌ **DON'T**: Try to use the real action implementation without proper mocking + - ❌ **DON'T**: Mock Commander partially - either mock it completely or test the action directly + - ❌ **DON'T**: Forget to handle optional parameters that may be undefined + - ❌ **DON'T**: Neglect to test shorthand flag functionality (e.g., `-p`, `-r`) + - ❌ **DON'T**: Create circular dependencies in your test mocks + - ❌ **DON'T**: Access variables before initialization in your test implementations + - ❌ **DON'T**: Include actual command execution in unit tests + - ❌ **DON'T**: Overwrite the same file path in multiple tests + + ```javascript + // ❌ DON'T: Create circular references in mocks + const badMock = { + method: jest.fn().mockImplementation(() => badMock.method()) + }; + + // ❌ DON'T: Access uninitialized variables + const badImplementation = () => { + const result = uninitialized; + let uninitialized = 'value'; + return result; + }; + ``` + +## Jest Module Mocking Best Practices + +- **Mock Hoisting Behavior** + - Jest hoists `jest.mock()` calls to the top of the file, even above imports + - Always declare mocks before importing the modules being tested + - Use the factory pattern for complex mocks that need access to other variables + + ```javascript + // ✅ DO: Place mocks before imports + jest.mock('commander'); + import { program } from 'commander'; + + // ❌ DON'T: Define variables and then try to use them in mocks + const mockFn = jest.fn(); + jest.mock('module', () => ({ + func: mockFn // This won't work due to hoisting! 
+ })); + ``` + +- **Mocking Modules with Function References** + - Use `jest.spyOn()` after imports to create spies on mock functions + - Reference these spies in test assertions + + ```javascript + // Mock the module first + jest.mock('fs'); + + // Import the mocked module + import fs from 'fs'; + + // Create spies on the mock functions + const mockExistsSync = jest.spyOn(fs, 'existsSync').mockReturnValue(true); + + test('should call existsSync', () => { + // Call function that uses fs.existsSync + const result = functionUnderTest(); + + // Verify the mock was called correctly + expect(mockExistsSync).toHaveBeenCalled(); + }); + ``` + +- **Testing Functions with Callbacks** + - Get the callback from your mock's call arguments + - Execute it directly with test inputs + - Verify the results match expectations + + ```javascript + jest.mock('commander'); + import { program } from 'commander'; + import { setupCLI } from '../../scripts/modules/commands.js'; + + const mockVersion = jest.spyOn(program, 'version').mockReturnValue(program); + + test('version callback should return correct version', () => { + // Call the function that registers the callback + setupCLI(); + + // Extract the callback function + const versionCallback = mockVersion.mock.calls[0][0]; + expect(typeof versionCallback).toBe('function'); + + // Execute the callback and verify results + const result = versionCallback(); + expect(result).toBe('1.5.0'); + }); + ``` + +## ES Module Testing Strategies + +When testing ES modules (`"type": "module"` in package.json), traditional mocking approaches require special handling to avoid reference and scoping issues. + +- **Module Import Challenges** + - Functions imported from ES modules may still reference internal module-scoped variables + - Imported functions may not use your mocked dependencies even with proper jest.mock() setup + - ES module exports are read-only properties (cannot be reassigned during tests) + +- **Mocking Modules Statically Imported** + - For modules imported with standard `import` statements at the top level: + - Use `jest.mock('path/to/module', factory)` **before** any imports. + - Jest hoists these mocks. + - Ensure the factory function returns the mocked structure correctly. + +- **Mocking Dependencies for Dynamically Imported Modules** + - **Problem**: Standard `jest.mock()` often fails for dependencies of modules loaded later using dynamic `import('path/to/module')`. The mocks aren't applied correctly when the dynamic import resolves. + - **Solution**: Use `jest.unstable_mockModule(modulePath, factory)` **before** the dynamic `import()` call. + ```javascript + // 1. Define mock function instances + const mockExistsSync = jest.fn(); + const mockReadFileSync = jest.fn(); + // ... other mocks + + // 2. Mock the dependency module *before* the dynamic import + jest.unstable_mockModule('fs', () => ({ + __esModule: true, // Important for ES module mocks + // Mock named exports + existsSync: mockExistsSync, + readFileSync: mockReadFileSync, + // Mock default export if necessary + // default: { ... } + })); + + // 3. Dynamically import the module under test (e.g., in beforeAll or test case) + let moduleUnderTest; + beforeAll(async () => { + // Ensure mocks are reset if needed before import + mockExistsSync.mockReset(); + mockReadFileSync.mockReset(); + // ... reset other mocks ... + + // Import *after* unstable_mockModule is called + moduleUnderTest = await import('../../scripts/modules/module-using-fs.js'); + }); + + // 4. 
Now tests can use moduleUnderTest, and its 'fs' calls will hit the mocks + test('should use mocked fs.readFileSync', () => { + mockReadFileSync.mockReturnValue('mock data'); + moduleUnderTest.readFileAndProcess(); + expect(mockReadFileSync).toHaveBeenCalled(); + // ... other assertions + }); + ``` + - ✅ **DO**: Call `jest.unstable_mockModule()` before `await import()`. + - ✅ **DO**: Include `__esModule: true` in the mock factory for ES modules. + - ✅ **DO**: Mock named and default exports as needed within the factory. + - ✅ **DO**: Reset mock functions (`mockFn.mockReset()`) before the dynamic import if they might have been called previously. + +- **Mocking Entire Modules (Static Import)** + ```javascript + // Mock the entire module with custom implementation for static imports + // ... (existing example remains valid) ... + ``` + +- **Direct Implementation Testing** + - Instead of calling the actual function which may have module-scope reference issues: + ```javascript + // ... (existing example remains valid) ... + ``` + +- **Avoiding Module Property Assignment** + ```javascript + // ... (existing example remains valid) ... + ``` + +- **Handling Mock Verification Failures** + - If verification like `expect(mockFn).toHaveBeenCalled()` fails: + 1. Check that your mock setup (`jest.mock` or `jest.unstable_mockModule`) is correctly placed **before** imports (static or dynamic). + 2. Ensure you're using the right mock instance and it's properly passed to the module. + 3. Verify your test invokes behavior that *should* call the mock. + 4. Use `jest.clearAllMocks()` or specific `mockFn.mockReset()` in `beforeEach` to prevent state leakage between tests. + 5. **Check Console Assertions**: If verifying `console.log`, `console.warn`, or `console.error` calls, ensure your assertion matches the *actual* arguments passed. If the code logs a single formatted string, assert against that single string (using `expect.stringContaining` or exact match), not multiple `expect.stringContaining` arguments. + ```javascript + // Example: Code logs console.error(`Error: ${message}. Details: ${details}`) + // ❌ DON'T: Assert multiple arguments if only one is logged + // expect(console.error).toHaveBeenCalledWith( + // expect.stringContaining('Error:'), + // expect.stringContaining('Details:') + // ); + // ✅ DO: Assert the single string argument + expect(console.error).toHaveBeenCalledWith( + expect.stringContaining('Error: Specific message. Details: More details') + ); + // or for exact match: + expect(console.error).toHaveBeenCalledWith( + 'Error: Specific message. Details: More details' + ); + ``` + 6. Consider implementing a simpler test that *only* verifies the mock behavior in isolation. 
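+      For example, a quick self-contained check that exercises only the mock itself (hypothetical names, shown purely to illustrate the idea):
+      ```javascript
+      // Verify the mock behaves as configured, independent of the module under test
+      const mockReadFileSync = jest.fn();
+
+      test('mockReadFileSync returns the configured value', () => {
+        mockReadFileSync.mockReturnValue('mock data');
+        expect(mockReadFileSync('any-path')).toBe('mock data');
+        expect(mockReadFileSync).toHaveBeenCalledWith('any-path');
+      });
+      ```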
+ +## Mocking Guidelines + +- **File System Operations** + ```javascript + import mockFs from 'mock-fs'; + + beforeEach(() => { + mockFs({ + 'tasks': { + 'tasks.json': JSON.stringify({ + meta: { projectName: 'Test Project' }, + tasks: [] + }) + } + }); + }); + + afterEach(() => { + mockFs.restore(); + }); + ``` + +- **API Calls (Anthropic/Claude)** + ```javascript + import { Anthropic } from '@anthropic-ai/sdk'; + + jest.mock('@anthropic-ai/sdk'); + + beforeEach(() => { + Anthropic.mockImplementation(() => ({ + messages: { + create: jest.fn().mockResolvedValue({ + content: [{ text: 'Mocked response' }] + }) + } + })); + }); + ``` + +- **Environment Variables** + ```javascript + const originalEnv = process.env; + + beforeEach(() => { + jest.resetModules(); + process.env = { ...originalEnv }; + process.env.MODEL = 'test-model'; + }); + + afterEach(() => { + process.env = originalEnv; + }); + ``` + +## Testing Common Components + +- **CLI Commands** + - Mock the action handlers (defined in [`commands.js`](mdc:scripts/modules/commands.js)) and verify they're called with correct arguments + - Test command registration and option parsing + - Use `commander` test utilities or custom mocks + +- **Task Operations** + - Use sample task fixtures for consistent test data + - Mock file system operations + - Test both success and error paths + +- **UI Functions** + - Mock console output and verify correct formatting + - Test conditional output logic + - When testing strings with emojis or formatting, use `toContain()` or `toMatch()` rather than exact `toBe()` comparisons + - For functions with different behavior modes (e.g., `forConsole`, `forTable` parameters), create separate tests for each mode + - Test the structure of formatted output (e.g., check that it's a comma-separated list with the right number of items) rather than exact string matching + - When testing chalk-formatted output, remember that strict equality comparison (`toBe()`) can fail even when the visible output looks identical + - Consider using more flexible assertions like checking for the presence of key elements when working with styled text + - Mock chalk functions to return the input text to make testing easier while still verifying correct function calls + +## Test Quality Guidelines + +- ✅ **DO**: Write tests before implementing features (TDD approach when possible) +- ✅ **DO**: Test edge cases and error conditions, not just happy paths +- ✅ **DO**: Keep tests independent and isolated from each other +- ✅ **DO**: Use descriptive test names that explain the expected behavior +- ✅ **DO**: Maintain test fixtures separate from test logic +- ✅ **DO**: Aim for 80%+ code coverage, with critical paths at 100% +- ✅ **DO**: Follow the mock-first-then-import pattern for all Jest mocks + +- ❌ **DON'T**: Test implementation details that might change +- ❌ **DON'T**: Write brittle tests that depend on specific output formatting +- ❌ **DON'T**: Skip testing error handling and validation +- ❌ **DON'T**: Duplicate test fixtures across multiple test files +- ❌ **DON'T**: Write tests that depend on execution order +- ❌ **DON'T**: Define mock variables before `jest.mock()` calls (they won't be accessible due to hoisting) + + +- **Task File Operations** + - ✅ DO: Use test-specific file paths (e.g., 'test-tasks.json') for all operations + - ✅ DO: Mock `readJSON` and `writeJSON` to avoid real file system interactions + - ✅ DO: Verify file operations use the correct paths in `expect` statements + - ✅ DO: Use different paths for each test to avoid test 
interdependence + - ✅ DO: Verify modifications on the in-memory task objects passed to `writeJSON` + - ❌ DON'T: Modify real task files (tasks.json) during tests + - ❌ DON'T: Skip testing file operations because they're "just I/O" + + ```javascript + // ✅ DO: Test file operations without real file system changes + test('should update task status in tasks.json', async () => { + // Setup mock to return sample data + readJSON.mockResolvedValue(JSON.parse(JSON.stringify(sampleTasks))); + + // Use test-specific file path + await setTaskStatus('test-tasks.json', '2', 'done'); + + // Verify correct file path was read + expect(readJSON).toHaveBeenCalledWith('test-tasks.json'); + + // Verify correct file path was written with updated content + expect(writeJSON).toHaveBeenCalledWith( + 'test-tasks.json', + expect.objectContaining({ + tasks: expect.arrayContaining([ + expect.objectContaining({ + id: 2, + status: 'done' + }) + ]) + }) + ); + }); + ``` + +## Running Tests + +```bash +# Run all tests +npm test + +# Run tests in watch mode +npm run test:watch + +# Run tests with coverage reporting +npm run test:coverage + +# Run a specific test file +npm test -- tests/unit/specific-file.test.js + +# Run tests matching a pattern +npm test -- -t "pattern to match" +``` + +## Troubleshooting Test Issues + +- **Mock Functions Not Called** + - Ensure mocks are defined before imports (Jest hoists `jest.mock()` calls) + - Check that you're referencing the correct mock instance + - Verify the import paths match exactly + +- **Unexpected Mock Behavior** + - Clear mocks between tests with `jest.clearAllMocks()` in `beforeEach` + - Check mock implementation for conditional behavior + - Ensure mock return values are correctly configured for each test + +- **Tests Affecting Each Other** + - Isolate tests by properly mocking shared resources + - Reset state in `beforeEach` and `afterEach` hooks + - Avoid global state modifications + +## Common Testing Pitfalls and Solutions + +- **Complex Library Mocking** + - **Problem**: Trying to create full mocks of complex libraries like Commander.js can be error-prone + - **Solution**: Instead of mocking the entire library, test the command handlers directly by calling your action handlers with the expected arguments + ```javascript + // ❌ DON'T: Create complex mocks of Commander.js + class MockCommand { + constructor() { /* Complex mock implementation */ } + option() { /* ... */ } + action() { /* ... 
*/ } + // Many methods to implement + } + + // ✅ DO: Test the command handlers directly + test('should use default PRD path when no arguments provided', async () => { + // Call the action handler directly with the right params + await parsePrdAction(undefined, { numTasks: '10', output: 'tasks/tasks.json' }); + + // Assert on behavior + expect(mockParsePRD).toHaveBeenCalledWith('scripts/prd.txt', 'tasks/tasks.json', 10); + }); + ``` + +- **ES Module Mocking Challenges** + - **Problem**: ES modules don't support `require()` and imports are read-only + - **Solution**: Use Jest's module factory pattern and ensure mocks are defined before imports + ```javascript + // ❌ DON'T: Try to modify imported modules + import { detectCamelCaseFlags } from '../../scripts/modules/utils.js'; + detectCamelCaseFlags = jest.fn(); // Error: Assignment to constant variable + + // ❌ DON'T: Try to use require with ES modules + const utils = require('../../scripts/modules/utils.js'); // Error in ES modules + + // ✅ DO: Use Jest module factory pattern + jest.mock('../../scripts/modules/utils.js', () => ({ + detectCamelCaseFlags: jest.fn(), + toKebabCase: jest.fn() + })); + + // Import after mocks are defined + import { detectCamelCaseFlags } from '../../scripts/modules/utils.js'; + ``` + +- **Function Redeclaration Errors** + - **Problem**: Declaring the same function twice in a test file causes errors + - **Solution**: Use different function names or create local test-specific implementations + ```javascript + // ❌ DON'T: Redefine imported functions with the same name + import { detectCamelCaseFlags } from '../../scripts/modules/utils.js'; + + function detectCamelCaseFlags() { /* Test implementation */ } + // Error: Identifier has already been declared + + // ✅ DO: Use a different name for test implementations + function testDetectCamelCaseFlags() { /* Test implementation */ } + ``` + +- **Console.log Circular References** + - **Problem**: Creating infinite recursion by spying on console.log while also allowing it to log + - **Solution**: Implement a mock that doesn't call the original function + ```javascript + // ❌ DON'T: Create circular references with console.log + const mockConsoleLog = jest.spyOn(console, 'log'); + mockConsoleLog.mockImplementation(console.log); // Creates infinite recursion + + // ✅ DO: Use a non-recursive mock implementation + const mockConsoleLog = jest.spyOn(console, 'log').mockImplementation(() => {}); + ``` + +- **Mock Function Method Issues** + - **Problem**: Trying to use jest.fn() methods on imported functions that aren't properly mocked + - **Solution**: Create explicit jest.fn() mocks for functions you need to call jest methods on + ```javascript + // ❌ DON'T: Try to use jest methods on imported functions without proper mocking + import { parsePRD } from '../../scripts/modules/task-manager.js'; + parsePRD.mockClear(); // Error: parsePRD.mockClear is not a function + + // ✅ DO: Create proper jest.fn() mocks + const mockParsePRD = jest.fn().mockResolvedValue(undefined); + jest.mock('../../scripts/modules/task-manager.js', () => ({ + parsePRD: mockParsePRD + })); + // Now you can use: + mockParsePRD.mockClear(); + ``` + +- **EventEmitter Max Listeners Warning** + - **Problem**: Commander.js adds many listeners in complex mocks, causing warnings + - **Solution**: Either increase the max listeners limit or avoid deep mocking + ```javascript + // Option 1: Increase max listeners if you must mock Commander + class MockCommand extends EventEmitter { + constructor() { + super(); + 
this.setMaxListeners(20); // Avoid MaxListenersExceededWarning + } + } + + // Option 2 (preferred): Test command handlers directly instead + // (as shown in the first example) + ``` + +- **Test Isolation Issues** + - **Problem**: Tests affecting each other due to shared mock state + - **Solution**: Reset all mocks in beforeEach and use separate test-specific mocks + ```javascript + // ❌ DON'T: Allow mock state to persist between tests + const globalMock = jest.fn().mockReturnValue('test'); + + // ✅ DO: Clear mocks before each test + beforeEach(() => { + jest.clearAllMocks(); + // Set up test-specific mock behavior + mockFunction.mockReturnValue('test-specific value'); + }); + ``` + +## Testing AI Service Integrations + +- **DO NOT import real AI service clients** + - ❌ DON'T: Import actual AI clients from their libraries + - ✅ DO: Create fully mocked versions that return predictable responses + + ```javascript + // ❌ DON'T: Import and instantiate real AI clients + import { Anthropic } from '@anthropic-ai/sdk'; + const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY }); + + // ✅ DO: Mock the entire module with controlled behavior + jest.mock('@anthropic-ai/sdk', () => ({ + Anthropic: jest.fn().mockImplementation(() => ({ + messages: { + create: jest.fn().mockResolvedValue({ + content: [{ type: 'text', text: 'Mocked AI response' }] + }) + } + })) + })); + ``` + +- **DO NOT rely on environment variables for API keys** + - ❌ DON'T: Assume environment variables are set in tests + - ✅ DO: Set mock environment variables in test setup + + ```javascript + // In tests/setup.js or at the top of test file + process.env.ANTHROPIC_API_KEY = 'test-mock-api-key-for-tests'; + process.env.PERPLEXITY_API_KEY = 'test-mock-perplexity-key-for-tests'; + ``` + +- **DO NOT use real AI client initialization logic** + - ❌ DON'T: Use code that attempts to initialize or validate real AI clients + - ✅ DO: Create test-specific paths that bypass client initialization + + ```javascript + // ❌ DON'T: Test functions that require valid AI client initialization + // This will fail without proper API keys or network access + test('should use AI client', async () => { + const result = await functionThatInitializesAIClient(); + expect(result).toBeDefined(); + }); + + // ✅ DO: Test with bypassed initialization or manual task paths + test('should handle manual task creation without AI', () => { + // Using a path that doesn't require AI client initialization + const result = addTaskDirect({ + title: 'Manual Task', + description: 'Test Description' + }, mockLogger); + + expect(result.success).toBe(true); + }); + ``` + +## Testing Asynchronous Code + +- **DO NOT rely on asynchronous operations in tests** + - ❌ DON'T: Use real async/await or Promise resolution in tests + - ✅ DO: Make all mocks return synchronous values when possible + + ```javascript + // ❌ DON'T: Use real async functions that might fail unpredictably + test('should handle async operation', async () => { + const result = await realAsyncFunction(); // Can time out or fail for external reasons + expect(result).toBe(expectedValue); + }); + + // ✅ DO: Make async operations synchronous in tests + test('should handle operation', () => { + mockAsyncFunction.mockReturnValue({ success: true, data: 'test' }); + const result = functionUnderTest(); + expect(result).toEqual({ success: true, data: 'test' }); + }); + ``` + +- **DO NOT test exact error messages** + - ❌ DON'T: Assert on exact error message text that might change + - ✅ DO: Test for error presence and 
general properties + + ```javascript + // ❌ DON'T: Test for exact error message text + expect(result.error).toBe('Could not connect to API: Network error'); + + // ✅ DO: Test for general error properties or message patterns + expect(result.success).toBe(false); + expect(result.error).toContain('Could not connect'); + // Or even better: + expect(result).toMatchObject({ + success: false, + error: expect.stringContaining('connect') + }); + ``` + +## Reliable Testing Techniques + +- **Create Simplified Test Functions** + - Create simplified versions of complex functions that focus only on core logic + - Remove file system operations, API calls, and other external dependencies + - Pass all dependencies as parameters to make testing easier + + ```javascript + // Original function (hard to test) + const setTaskStatus = async (taskId, newStatus) => { + const tasksPath = 'tasks/tasks.json'; + const data = await readJSON(tasksPath); + // [implementation] + await writeJSON(tasksPath, data); + return { success: true }; + }; + + // Test-friendly version (easier to test) + const updateTaskStatus = (tasks, taskId, newStatus) => { + // Pure logic without side effects + const updatedTasks = [...tasks]; + const taskIndex = findTaskById(updatedTasks, taskId); + if (taskIndex === -1) return { success: false, error: 'Task not found' }; + updatedTasks[taskIndex].status = newStatus; + return { success: true, tasks: updatedTasks }; + }; + ``` + +See [tests/README.md](mdc:tests/README.md) for more details on the testing approach. + +Refer to [jest.config.js](mdc:jest.config.js) for Jest configuration options. + +## Variable Hoisting and Module Initialization Issues + +When testing ES modules or working with complex module imports, you may encounter variable hoisting and initialization issues. These can be particularly tricky to debug and often appear as "Cannot access 'X' before initialization" errors. + +- **Understanding Module Initialization Order** + - ✅ **DO**: Declare and initialize global variables at the top of modules + - ✅ **DO**: Use proper function declarations to avoid hoisting issues + - ✅ **DO**: Initialize variables before they are referenced, especially in imported modules + - ✅ **DO**: Be aware that imports are hoisted to the top of the file + + ```javascript + // ✅ DO: Define global state variables at the top of the module + let silentMode = false; // Declare and initialize first + + const CONFIG = { /* configuration */ }; + + function isSilentMode() { + return silentMode; // Reference variable after it's initialized + } + + function log(level, message) { + if (isSilentMode()) return; // Use the function instead of accessing variable directly + // ... + } + ``` + +- **Testing Modules with Initialization-Dependent Functions** + - ✅ **DO**: Create test-specific implementations that initialize all variables correctly + - ✅ **DO**: Use factory functions in mocks to ensure proper initialization order + - ✅ **DO**: Be careful with how you mock or stub functions that depend on module state + + ```javascript + // ✅ DO: Test-specific implementation that avoids initialization issues + const testLog = (level, ...args) => { + // Local implementation with proper initialization + const isSilent = false; // Explicit initialization + if (isSilent) return; + // Test implementation... 
+ }; + ``` + +- **Common Hoisting-Related Errors to Avoid** + - ❌ **DON'T**: Reference variables before their declaration in module scope + - ❌ **DON'T**: Create circular dependencies between modules + - ❌ **DON'T**: Rely on variable initialization order across module boundaries + - ❌ **DON'T**: Define functions that use hoisted variables before they're initialized + + ```javascript + // ❌ DON'T: Create reference-before-initialization patterns + function badFunction() { + if (silentMode) { /* ... */ } // ReferenceError if silentMode is declared later + } + + let silentMode = false; + + // ❌ DON'T: Create cross-module references that depend on initialization order + // module-a.js + import { getSetting } from './module-b.js'; + export const config = { value: getSetting() }; + + // module-b.js + import { config } from './module-a.js'; + export function getSetting() { + return config.value; // Circular dependency causing initialization issues + } + ``` + +- **Dynamic Imports as a Solution** + - ✅ **DO**: Use dynamic imports (`import()`) to avoid initialization order issues + - ✅ **DO**: Structure modules to avoid circular dependencies that cause initialization issues + - ✅ **DO**: Consider factory functions for modules with complex state + + ```javascript + // ✅ DO: Use dynamic imports to avoid initialization issues + async function getTaskManager() { + return import('./task-manager.js'); + } + + async function someFunction() { + const taskManager = await getTaskManager(); + return taskManager.someMethod(); + } + ``` + +- **Testing Approach for Modules with Initialization Issues** + - ✅ **DO**: Create self-contained test implementations rather than using real implementations + - ✅ **DO**: Mock dependencies at module boundaries instead of trying to mock deep dependencies + - ✅ **DO**: Isolate module-specific state in tests + + ```javascript + // ✅ DO: Create isolated test implementation instead of reusing module code + test('should log messages when not in silent mode', () => { + // Local test implementation instead of importing from module + const testLog = (level, message) => { + if (false) return; // Always non-silent for this test + mockConsole(level, message); + }; + + testLog('info', 'test message'); + expect(mockConsole).toHaveBeenCalledWith('info', 'test message'); + }); + ``` \ No newline at end of file diff --git a/assets/rules/ui.mdc b/assets/rules/ui.mdc new file mode 100644 index 00000000..52be439b --- /dev/null +++ b/assets/rules/ui.mdc @@ -0,0 +1,153 @@ +--- +description: Guidelines for implementing and maintaining user interface components +globs: scripts/modules/ui.js +alwaysApply: false +--- + +# User Interface Implementation Guidelines + +## Core UI Component Principles + +- **Function Scope Separation**: + - ✅ DO: Keep display logic separate from business logic + - ✅ DO: Import data processing functions from other modules + - ❌ DON'T: Include task manipulations within UI functions + - ❌ DON'T: Create circular dependencies with other modules + +- **Standard Display Pattern**: + ```javascript + // ✅ DO: Follow this pattern for display functions + /** + * Display information about a task + * @param {Object} task - The task to display + */ + function displayTaskInfo(task) { + console.log(boxen( + chalk.white.bold(`Task: #${task.id} - ${task.title}`), + { padding: 1, borderColor: 'blue', borderStyle: 'round' } + )); + } + ``` + +## Visual Styling Standards + +- **Color Scheme**: + - Use `chalk.blue` for informational messages + - Use `chalk.green` for success messages + - Use 
`chalk.yellow` for warnings + - Use `chalk.red` for errors + - Use `chalk.cyan` for prompts and highlights + - Use `chalk.magenta` for subtask-related information + +- **Box Styling**: + ```javascript + // ✅ DO: Use consistent box styles by content type + // For success messages: + boxen(content, { + padding: 1, + borderColor: 'green', + borderStyle: 'round', + margin: { top: 1 } + }) + + // For errors: + boxen(content, { + padding: 1, + borderColor: 'red', + borderStyle: 'round' + }) + + // For information: + boxen(content, { + padding: 1, + borderColor: 'blue', + borderStyle: 'round', + margin: { top: 1, bottom: 1 } + }) + ``` + +## Table Display Guidelines + +- **Table Structure**: + - Use [`cli-table3`](mdc:node_modules/cli-table3/README.md) for consistent table rendering + - Include colored headers with bold formatting + - Use appropriate column widths for readability + + ```javascript + // ✅ DO: Create well-structured tables + const table = new Table({ + head: [ + chalk.cyan.bold('ID'), + chalk.cyan.bold('Title'), + chalk.cyan.bold('Status'), + chalk.cyan.bold('Priority'), + chalk.cyan.bold('Dependencies') + ], + colWidths: [5, 40, 15, 10, 20] + }); + + // Add content rows + table.push([ + task.id, + truncate(task.title, 37), + getStatusWithColor(task.status), + chalk.white(task.priority || 'medium'), + formatDependenciesWithStatus(task.dependencies, allTasks, true) + ]); + + console.log(table.toString()); + ``` + +## Loading Indicators + +- **Animation Standards**: + - Use [`ora`](mdc:node_modules/ora/readme.md) for spinner animations + - Create and stop loading indicators correctly + + ```javascript + // ✅ DO: Properly manage loading state + const loadingIndicator = startLoadingIndicator('Processing task data...'); + try { + // Do async work... + stopLoadingIndicator(loadingIndicator); + // Show success message + } catch (error) { + stopLoadingIndicator(loadingIndicator); + // Show error message + } + ``` + +## Helper Functions + +- **Status Formatting**: + - Use `getStatusWithColor` for consistent status display + - Use `formatDependenciesWithStatus` for dependency lists + - Use `truncate` to handle text that may overflow display + +- **Progress Reporting**: + - Use visual indicators for progress (bars, percentages) + - Include both numeric and visual representations + + ```javascript + // ✅ DO: Show clear progress indicators + console.log(`${chalk.cyan('Tasks:')} ${completedTasks}/${totalTasks} (${completionPercentage.toFixed(1)}%)`); + console.log(`${chalk.cyan('Progress:')} ${createProgressBar(completionPercentage)}`); + ``` + +## Command Suggestions + +- **Action Recommendations**: + - Provide next step suggestions after command completion + - Use a consistent format for suggested commands + + ```javascript + // ✅ DO: Show suggested next actions + console.log(boxen( + chalk.white.bold('Next Steps:') + '\n\n' + + `${chalk.cyan('1.')} Run ${chalk.yellow('task-master list')} to view all tasks\n` + + `${chalk.cyan('2.')} Run ${chalk.yellow('task-master show --id=' + newTaskId)} to view details`, + { padding: 1, borderColor: 'cyan', borderStyle: 'round', margin: { top: 1 } } + )); + ``` + +Refer to [`ui.js`](mdc:scripts/modules/ui.js) for implementation examples and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines. 
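+
+For reference, a status-coloring helper of the kind described under **Helper Functions** could look roughly like the sketch below. The status-to-color mapping shown here is illustrative only; align it with the statuses actually used in `tasks.json` and with the implementation in `ui.js`.
+
+```javascript
+// ✅ DO: Centralize status coloring so every view renders statuses consistently
+import chalk from 'chalk';
+
+function getStatusWithColor(status) {
+  const statusColors = {
+    'done': chalk.green,
+    'completed': chalk.green,
+    'in-progress': chalk.blue,
+    'pending': chalk.yellow,
+    'blocked': chalk.red,
+    'deferred': chalk.gray,
+    'cancelled': chalk.gray
+  };
+  const colorFn = statusColors[status] || chalk.white;
+  return colorFn(status || 'pending');
+}
+```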
\ No newline at end of file diff --git a/assets/rules/utilities.mdc b/assets/rules/utilities.mdc new file mode 100644 index 00000000..90b0be31 --- /dev/null +++ b/assets/rules/utilities.mdc @@ -0,0 +1,551 @@ +--- +description: Guidelines for implementing utility functions +globs: scripts/modules/utils.js, mcp-server/src/**/* +alwaysApply: false +--- +# Utility Function Guidelines + +## General Principles + +- **Function Scope**: + - ✅ DO: Create utility functions that serve multiple modules + - ✅ DO: Keep functions single-purpose and focused + - ❌ DON'T: Include business logic in utility functions + - ❌ DON'T: Create utilities with side effects + + ```javascript + // ✅ DO: Create focused, reusable utilities + /** + * Truncates text to a specified length + * @param {string} text - The text to truncate + * @param {number} maxLength - The maximum length + * @returns {string} The truncated text + */ + function truncate(text, maxLength) { + if (!text || text.length <= maxLength) { + return text; + } + return text.slice(0, maxLength - 3) + '...'; + } + ``` + + ```javascript + // ❌ DON'T: Add side effects to utilities + function truncate(text, maxLength) { + if (!text || text.length <= maxLength) { + return text; + } + + // Side effect - modifying global state or logging + console.log(`Truncating text from ${text.length} to ${maxLength} chars`); + + return text.slice(0, maxLength - 3) + '...'; + } + ``` + +- **Location**: + - **Core CLI Utilities**: Place utilities used primarily by the core `task-master` CLI logic and command modules (`scripts/modules/*`) into [`scripts/modules/utils.js`](mdc:scripts/modules/utils.js). + - **MCP Server Utilities**: Place utilities specifically designed to support the MCP server implementation into the appropriate subdirectories within `mcp-server/src/`. + - Path/Core Logic Helpers: [`mcp-server/src/core/utils/`](mdc:mcp-server/src/core/utils/) (e.g., `path-utils.js`). + - Tool Execution/Response Helpers: [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js). + +## Documentation Standards + +- **JSDoc Format**: + - ✅ DO: Document all parameters and return values + - ✅ DO: Include descriptions for complex logic + - ✅ DO: Add examples for non-obvious usage + - ❌ DON'T: Skip documentation for "simple" functions + + ```javascript + // ✅ DO: Provide complete JSDoc documentation + /** + * Reads and parses a JSON file + * @param {string} filepath - Path to the JSON file + * @returns {Object|null} Parsed JSON data or null if error occurs + */ + function readJSON(filepath) { + try { + const rawData = fs.readFileSync(filepath, 'utf8'); + return JSON.parse(rawData); + } catch (error) { + log('error', `Error reading JSON file ${filepath}:`, error.message); + if (CONFIG.debug) { + console.error(error); + } + return null; + } + } + ``` + +## Configuration Management (via `config-manager.js`) + +Taskmaster configuration (excluding API keys) is primarily managed through the `.taskmasterconfig` file located in the project root and accessed via getters in [`scripts/modules/config-manager.js`](mdc:scripts/modules/config-manager.js). + +- **`.taskmasterconfig` File**: + - ✅ DO: Use this JSON file to store settings like AI model selections (main, research, fallback), parameters (temperature, maxTokens), logging level, default priority/subtasks, etc. + - ✅ DO: Manage this file using the `task-master models --setup` CLI command or the `models` MCP tool. 
+ - ✅ DO: Rely on [`config-manager.js`](mdc:scripts/modules/config-manager.js) to load this file (using the correct project root passed from MCP or found via CLI utils), merge with defaults, and provide validated settings. + - ❌ DON'T: Store API keys in this file. + - ❌ DON'T: Manually edit this file unless necessary. + +- **Configuration Getters (`config-manager.js`)**: + - ✅ DO: Import and use specific getters from `config-manager.js` (e.g., `getMainProvider()`, `getLogLevel()`, `getMainMaxTokens()`) to access configuration values *needed for application logic* (like `getDefaultSubtasks`). + - ✅ DO: Pass the `explicitRoot` parameter to getters if calling from MCP direct functions to ensure the correct project's config is loaded. + - ❌ DON'T: Call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions (`scripts/modules/task-manager/*`). Instead, pass the `role` to the unified AI service. + - ❌ DON'T: Access configuration values directly from environment variables (except API keys). + +- **API Key Handling (`utils.js` & `ai-services-unified.js`)**: + - ✅ DO: Store API keys **only** in `.env` (for CLI, loaded by `dotenv` in `scripts/dev.js`) or `.cursor/mcp.json` (for MCP, accessed via `session.env`). + - ✅ DO: Use `isApiKeySet(providerName, session)` from `config-manager.js` to check if a provider's key is available *before* potentially attempting an AI call if needed, but note the unified service performs its own internal check. + - ✅ DO: Understand that the unified service layer (`ai-services-unified.js`) internally resolves API keys using `resolveEnvVariable(key, session)` from `utils.js`. + +- **Error Handling**: + - ✅ DO: Handle potential `ConfigurationError` if the `.taskmasterconfig` file is missing or invalid when accessed via `getConfig` (e.g., in `commands.js` or direct functions). + +## Logging Utilities (in `scripts/modules/utils.js`) + +- **Log Levels**: + - ✅ DO: Support multiple log levels (debug, info, warn, error) + - ✅ DO: Use appropriate icons for different log levels + - ✅ DO: Respect the configured log level + - ❌ DON'T: Add direct console.log calls outside the logging utility + - **Note on Passed Loggers**: When a logger object (like the FastMCP `log` object) is passed *as a parameter* (e.g., as `mcpLog`) into core Task Master functions, the receiving function often expects specific methods (`.info`, `.warn`, `.error`, etc.) to be directly callable on that object (e.g., `mcpLog[level](...)`). If the passed logger doesn't have this exact structure, a wrapper object may be needed. See the **Handling Logging Context (`mcpLog`)** section in [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for the standard pattern used in direct functions. 
+ +- **Logger Wrapper Pattern**: + - ✅ DO: Use the logger wrapper pattern when passing loggers to prevent `mcpLog[level] is not a function` errors: + ```javascript + // Standard logWrapper pattern to wrap FastMCP's log object + const logWrapper = { + info: (message, ...args) => log.info(message, ...args), + warn: (message, ...args) => log.warn(message, ...args), + error: (message, ...args) => log.error(message, ...args), + debug: (message, ...args) => log.debug && log.debug(message, ...args), + success: (message, ...args) => log.info(message, ...args) // Map success to info + }; + + // Pass this wrapper as mcpLog to ensure consistent method availability + // This also ensures output format is set to 'json' in many core functions + const options = { mcpLog: logWrapper, session }; + ``` + - ✅ DO: Implement this pattern in any direct function that calls core functions expecting `mcpLog` + - ✅ DO: Use this solution in conjunction with silent mode for complete output control + - ❌ DON'T: Pass the FastMCP `log` object directly as `mcpLog` to core functions + - **Important**: This pattern has successfully fixed multiple issues in MCP tools (e.g., `update-task`, `update-subtask`) where using or omitting `mcpLog` incorrectly led to runtime errors or JSON parsing failures. + - For complete implementation details, see the **Handling Logging Context (`mcpLog`)** section in [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc). + + ```javascript + // ✅ DO: Implement a proper logging utility + const LOG_LEVELS = { + debug: 0, + info: 1, + warn: 2, + error: 3 + }; + + function log(level, ...args) { + const icons = { + debug: chalk.gray('🔍'), + info: chalk.blue('ℹ️'), + warn: chalk.yellow('⚠️'), + error: chalk.red('❌'), + success: chalk.green('✅') + }; + + if (LOG_LEVELS[level] >= LOG_LEVELS[CONFIG.logLevel]) { + const icon = icons[level] || ''; + console.log(`${icon} ${args.join(' ')}`); + } + } + ``` + +## Silent Mode Utilities (in `scripts/modules/utils.js`) + +- **Silent Mode Control**: + - ✅ DO: Use the exported silent mode functions rather than accessing global variables + - ✅ DO: Always use `isSilentMode()` to check the current silent mode state + - ✅ DO: Ensure silent mode is disabled in a `finally` block to prevent it from staying enabled + - ❌ DON'T: Access the global `silentMode` variable directly + - ❌ DON'T: Forget to disable silent mode after enabling it + + ```javascript + // ✅ DO: Use the silent mode control functions properly + + // Example of proper implementation in utils.js: + + // Global silent mode flag (private to the module) + let silentMode = false; + + // Enable silent mode + function enableSilentMode() { + silentMode = true; + } + + // Disable silent mode + function disableSilentMode() { + silentMode = false; + } + + // Check if silent mode is enabled + function isSilentMode() { + return silentMode; + } + + // Example of proper usage in another module: + import { enableSilentMode, disableSilentMode, isSilentMode } from './utils.js'; + + // Check current status + if (!isSilentMode()) { + console.log('Silent mode is not enabled'); + } + + // Use try/finally pattern to ensure silent mode is disabled + try { + enableSilentMode(); + // Do something that should suppress console output + performOperation(); + } finally { + disableSilentMode(); + } + ``` + +- **Integration with Logging**: + - ✅ DO: Make the `log` function respect silent mode + ```javascript + function log(level, ...args) { + // Skip logging if silent mode is enabled + if (isSilentMode()) { + return; + } + + // Rest of logging logic... 
+ } + ``` + +- **Common Patterns for Silent Mode**: + - ✅ DO: In **direct functions** (`mcp-server/src/core/direct-functions/*`) that call **core functions** (`scripts/modules/*`), ensure console output from the core function is suppressed to avoid breaking MCP JSON responses. + - **Preferred Method**: Update the core function to accept an `outputFormat` parameter (e.g., `outputFormat = 'text'`) and make it check `outputFormat === 'text'` before displaying any UI elements (banners, spinners, boxes, direct `console.log`s). Pass `'json'` from the direct function. + - **Necessary Fallback/Guarantee**: If the core function *cannot* be modified or its output suppression via `outputFormat` is unreliable, **wrap the core function call within the direct function** using `enableSilentMode()` and `disableSilentMode()` in a `try/finally` block. This acts as a safety net. + ```javascript + // Example in a direct function + export async function someOperationDirect(args, log) { + let result; + const tasksPath = findTasksJsonPath(args, log); // Get path first + + // Option 1: Core function handles 'json' format (Preferred) + try { + result = await coreFunction(tasksPath, ...otherArgs, 'json'); // Pass 'json' + return { success: true, data: result, fromCache: false }; + } catch (error) { + // Handle error... + } + + // Option 2: Core function output unreliable (Fallback/Guarantee) + try { + enableSilentMode(); // Enable before call + result = await coreFunction(tasksPath, ...otherArgs); // Call without format param + } catch (error) { + // Handle error... + log.error(`Failed: ${error.message}`); + return { success: false, error: { /* ... */ } }; + } finally { + disableSilentMode(); // ALWAYS disable in finally + } + return { success: true, data: result, fromCache: false }; // Assuming success if no error caught + } + ``` + - ✅ DO: For functions that accept a silent mode parameter but also need to check global state (less common): + ```javascript + // Check both the passed parameter and global silent mode + const isSilent = options.silentMode || (typeof options.silentMode === 'undefined' && isSilentMode()); + ``` + +## File Operations (in `scripts/modules/utils.js`) + +- **Error Handling**: + - ✅ DO: Use try/catch blocks for all file operations + - ✅ DO: Return null or a default value on failure + - ✅ DO: Log detailed error information using the `log` utility + - ❌ DON'T: Allow exceptions to propagate unhandled from simple file reads/writes + + ```javascript + // ✅ DO: Handle file operation errors properly in core utils + function writeJSON(filepath, data) { + try { + // Ensure directory exists (example) + const dir = path.dirname(filepath); + if (!fs.existsSync(dir)) { + fs.mkdirSync(dir, { recursive: true }); + } + fs.writeFileSync(filepath, JSON.stringify(data, null, 2)); + } catch (error) { + log('error', `Error writing JSON file ${filepath}:`, error.message); + if (CONFIG.debug) { + console.error(error); + } + } + } + ``` + +## Task-Specific Utilities (in `scripts/modules/utils.js`) + +- **Task ID Formatting**: + - ✅ DO: Create utilities for consistent ID handling + - ✅ DO: Support different ID formats (numeric, string, dot notation) + - ❌ DON'T: Duplicate formatting logic across modules + + ```javascript + // ✅ DO: Create utilities for common operations + /** + * Formats a task ID as a string + * @param {string|number} id - The task ID to format + * @returns {string} The formatted task ID + */ + function formatTaskId(id) { + if (typeof id === 'string' && id.includes('.')) { + return id; // Already 
formatted as a string with a dot (e.g., "1.2") + } + + if (typeof id === 'number') { + return id.toString(); + } + + return id; + } + ``` + +- **Task Search**: + - ✅ DO: Implement reusable task finding utilities + - ✅ DO: Support both task and subtask lookups + - ✅ DO: Add context to subtask results + + ```javascript + // ✅ DO: Create comprehensive search utilities + /** + * Finds a task by ID in the tasks array + * @param {Array} tasks - The tasks array + * @param {string|number} taskId - The task ID to find + * @returns {Object|null} The task object or null if not found + */ + function findTaskById(tasks, taskId) { + if (!taskId || !tasks || !Array.isArray(tasks)) { + return null; + } + + // Check if it's a subtask ID (e.g., "1.2") + if (typeof taskId === 'string' && taskId.includes('.')) { + const [parentId, subtaskId] = taskId.split('.').map(id => parseInt(id, 10)); + const parentTask = tasks.find(t => t.id === parentId); + + if (!parentTask || !parentTask.subtasks) { + return null; + } + + const subtask = parentTask.subtasks.find(st => st.id === subtaskId); + if (subtask) { + // Add reference to parent task for context + subtask.parentTask = { + id: parentTask.id, + title: parentTask.title, + status: parentTask.status + }; + subtask.isSubtask = true; + } + + return subtask || null; + } + + const id = parseInt(taskId, 10); + return tasks.find(t => t.id === id) || null; + } + ``` + +## Cycle Detection (in `scripts/modules/utils.js`) + +- **Graph Algorithms**: + - ✅ DO: Implement cycle detection using graph traversal + - ✅ DO: Track visited nodes and recursion stack + - ✅ DO: Return specific information about cycles + + ```javascript + // ✅ DO: Implement proper cycle detection + /** + * Find cycles in a dependency graph using DFS + * @param {string} subtaskId - Current subtask ID + * @param {Map} dependencyMap - Map of subtask IDs to their dependencies + * @param {Set} visited - Set of visited nodes + * @param {Set} recursionStack - Set of nodes in current recursion stack + * @returns {Array} - List of dependency edges that need to be removed to break cycles + */ + function findCycles(subtaskId, dependencyMap, visited = new Set(), recursionStack = new Set(), path = []) { + // Mark the current node as visited and part of recursion stack + visited.add(subtaskId); + recursionStack.add(subtaskId); + path.push(subtaskId); + + const cyclesToBreak = []; + + // Get all dependencies of the current subtask + const dependencies = dependencyMap.get(subtaskId) || []; + + // For each dependency + for (const depId of dependencies) { + // If not visited, recursively check for cycles + if (!visited.has(depId)) { + const cycles = findCycles(depId, dependencyMap, visited, recursionStack, [...path]); + cyclesToBreak.push(...cycles); + } + // If the dependency is in the recursion stack, we found a cycle + else if (recursionStack.has(depId)) { + // The last edge in the cycle is what we want to remove + cyclesToBreak.push(depId); + } + } + + // Remove the node from recursion stack before returning + recursionStack.delete(subtaskId); + + return cyclesToBreak; + } + ``` + +## MCP Server Core Utilities (`mcp-server/src/core/utils/`) + +### Project Root and Task File Path Detection (`path-utils.js`) + +- **Purpose**: This module ([`mcp-server/src/core/utils/path-utils.js`](mdc:mcp-server/src/core/utils/path-utils.js)) provides the mechanism for locating the user's `tasks.json` file, used by direct functions. 
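+- For orientation, a minimal sketch of how a direct function typically resolves this path before calling core logic (the file name and error `code` shown are illustrative; the precedence rules and error behavior are detailed in the bullets below):
+  ```javascript
+  // In mcp-server/src/core/direct-functions/some-operation.js (illustrative)
+  import { findTasksJsonPath } from '../utils/path-utils.js';
+
+  export async function someOperationDirect(args, log) {
+    let tasksPath;
+    try {
+      // args should include projectRoot (derived from the session) when called from an MCP tool
+      tasksPath = findTasksJsonPath(args, log);
+    } catch (error) {
+      log.error(`Could not locate tasks.json: ${error.message}`);
+      return {
+        success: false,
+        error: { code: 'TASKS_FILE_NOT_FOUND', message: error.message }, // code name is illustrative
+        fromCache: false
+      };
+    }
+    // ... call the core function with tasksPath, then return { success: true, data, fromCache } ...
+  }
+  ```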
+- **`findTasksJsonPath(args, log)`**: + - ✅ **DO**: Call this function from within **direct function wrappers** (e.g., `listTasksDirect` in `mcp-server/src/core/direct-functions/`) to get the absolute path to the relevant `tasks.json`. + - Pass the *entire `args` object* received by the MCP tool (which should include `projectRoot` derived from the session) and the `log` object. + - Implements a **simplified precedence system** for finding the `tasks.json` path: + 1. Explicit `projectRoot` passed in `args` (Expected from MCP tools). + 2. Cached `lastFoundProjectRoot` (CLI fallback). + 3. Search upwards from `process.cwd()` (CLI fallback). + - Throws a specific error if the `tasks.json` file cannot be located. + - Updates the `lastFoundProjectRoot` cache on success. +- **`PROJECT_MARKERS`**: An exported array of common file/directory names used to identify a likely project root during the CLI fallback search. +- **`getPackagePath()`**: Utility to find the installation path of the `task-master-ai` package itself (potentially removable). + +## MCP Server Tool Utilities (`mcp-server/src/tools/utils.js`) + +These utilities specifically support the implementation and execution of MCP tools. + +- **`normalizeProjectRoot(rawPath, log)`**: + - **Purpose**: Takes a raw project root path (potentially URI encoded, with `file://` prefix, Windows slashes) and returns a normalized, absolute path suitable for the server's OS. + - **Logic**: Decodes URI, strips `file://`, handles Windows drive prefix (`/C:/`), replaces `\` with `/`, uses `path.resolve()`. + - **Usage**: Used internally by `withNormalizedProjectRoot` HOF. + +- **`getRawProjectRootFromSession(session, log)`**: + - **Purpose**: Extracts the *raw* project root URI string from the session object (`session.roots[0].uri` or `session.roots.roots[0].uri`) without performing normalization. + - **Usage**: Used internally by `withNormalizedProjectRoot` HOF as a fallback if `args.projectRoot` isn't provided. + +- **`withNormalizedProjectRoot(executeFn)`**: + - **Purpose**: A Higher-Order Function (HOF) designed to wrap a tool's `execute` method. + - **Logic**: + 1. Determines the raw project root (from `args.projectRoot` or `getRawProjectRootFromSession`). + 2. Normalizes the raw path using `normalizeProjectRoot`. + 3. Injects the normalized, absolute path back into the `args` object as `args.projectRoot`. + 4. Calls the original `executeFn` with the updated `args`. + - **Usage**: Should wrap the `execute` function of *every* MCP tool that needs a reliable, normalized project root path. + - **Example**: + ```javascript + // In mcp-server/src/tools/your-tool.js + import { withNormalizedProjectRoot } from './utils.js'; + + export function registerYourTool(server) { + server.addTool({ + // ... name, description, parameters ... + execute: withNormalizedProjectRoot(async (args, context) => { + // args.projectRoot is now normalized here + const { projectRoot /*, other args */ } = args; + // ... rest of tool logic using normalized projectRoot ... + }) + }); + } + ``` + +- **`handleApiResult(result, log, errorPrefix, processFunction)`**: + - **Purpose**: Standardizes the formatting of responses returned by direct functions (`{ success, data/error, fromCache }`) into the MCP response format. + - **Usage**: Call this at the end of the tool's `execute` method, passing the result from the direct function call. 
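+  - **Example** (illustrative sketch; the tool name, direct-function import path, and context destructuring are assumptions based on the patterns above):
+    ```javascript
+    // In mcp-server/src/tools/some-tool.js (illustrative)
+    import { withNormalizedProjectRoot, handleApiResult } from './utils.js';
+    import { someOperationDirect } from '../core/direct-functions/some-operation.js'; // illustrative path
+
+    export function registerSomeTool(server) {
+      server.addTool({
+        // ... name, description, parameters ...
+        execute: withNormalizedProjectRoot(async (args, { log }) => {
+          // Direct function returns { success, data/error, fromCache }
+          const result = await someOperationDirect(args, log);
+          // Convert that result into the MCP response format
+          // (processMCPResponseData is applied by default to filter large fields)
+          return handleApiResult(result, log, 'Error running some operation');
+        })
+      });
+    }
+    ```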
+ +- **`createContentResponse(content)` / `createErrorResponse(errorMessage)`**: + - **Purpose**: Helper functions to create the basic MCP response structure for success or error messages. + - **Usage**: Used internally by `handleApiResult` and potentially directly for simple responses. + +- **`createLogWrapper(log)`**: + - **Purpose**: Creates a logger object wrapper with standard methods (`info`, `warn`, `error`, `debug`, `success`) mapping to the passed MCP `log` object's methods. Ensures compatibility when passing loggers to core functions. + - **Usage**: Used within direct functions before passing the `log` object down to core logic that expects the standard method names. + +- **`getCachedOrExecute({ cacheKey, actionFn, log })`**: + - **Purpose**: Utility for implementing caching within direct functions. Checks cache for `cacheKey`; if miss, executes `actionFn`, caches successful result, and returns. + - **Usage**: Wrap the core logic execution within a direct function call. + +- **`processMCPResponseData(taskOrData, fieldsToRemove)`**: + - **Purpose**: Utility to filter potentially sensitive or large fields (like `details`, `testStrategy`) from task objects before sending the response back via MCP. + - **Usage**: Passed as the default `processFunction` to `handleApiResult`. + +- **`getProjectRootFromSession(session, log)`**: + - **Purpose**: Legacy function to extract *and normalize* the project root from the session. Replaced by the HOF pattern but potentially still used. + - **Recommendation**: Prefer using the `withNormalizedProjectRoot` HOF in tools instead of calling this directly. + +- **`executeTaskMasterCommand(...)`**: + - **Purpose**: Executes `task-master` CLI command as a fallback. + - **Recommendation**: Deprecated for most uses; prefer direct function calls. + +## Export Organization + +- **Grouping Related Functions**: + - ✅ DO: Keep utilities relevant to their location (e.g., core CLI utils in `scripts/modules/utils.js`, MCP path utils in `mcp-server/src/core/utils/path-utils.js`, MCP tool utils in `mcp-server/src/tools/utils.js`). + - ✅ DO: Export all utility functions in a single statement per file. + - ✅ DO: Group related exports together. + - ✅ DO: Export configuration constants (from `scripts/modules/utils.js`). + - ❌ DON'T: Use default exports. + - ❌ DON'T: Create circular dependencies (See [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc)). + +```javascript +// Example export from scripts/modules/utils.js +export { + // Configuration + CONFIG, + LOG_LEVELS, + + // Logging + log, + + // File operations + readJSON, + writeJSON, + + // String manipulation + sanitizePrompt, + truncate, + + // Task utilities + // ... (taskExists, formatTaskId, findTaskById, etc.) + + // Graph algorithms + findCycles, +}; + +// Example export from mcp-server/src/core/utils/path-utils.js +export { + findTasksJsonPath, + getPackagePath, + PROJECT_MARKERS, + lastFoundProjectRoot // Exporting for potential direct use/reset if needed +}; + +// Example export from mcp-server/src/tools/utils.js +export { + getProjectRoot, + getProjectRootFromSession, + handleApiResult, + executeTaskMasterCommand, + processMCPResponseData, + createContentResponse, + createErrorResponse, + getCachedOrExecute +}; +``` + +Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) and [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for more context on MCP server architecture and integration. \ No newline at end of file