Compare commits
38 Commits: better-ai-... → v0.13.0-rc

| SHA1 |
| --- |
| 8cfdf662ff |
| 0a45f4329c |
| c4b2f7e514 |
| 9684beafc3 |
| 302b916045 |
| e7f18f65b9 |
| 655c7c225a |
| e1218b3747 |
| ffa621a37c |
| cd32fd9edf |
| 590e4bd66d |
| 70d3f2f103 |
| 424aae10ed |
| a48d1f13e2 |
| 25ca1a45a0 |
| 2e17437da3 |
| 1f44ea5299 |
| d63964a10e |
| 33559e368c |
| 9f86306766 |
| 8f8a3dc45d |
| d18351dc38 |
| 9d437f8594 |
| ad89253e31 |
| 70c5097553 |
| c9e4558a19 |
| cd4d8e335f |
| 16297058bb |
| ae2d43de29 |
| f5585e6c31 |
| 303b13e3d4 |
| 1862ca2360 |
| ad1c234b4e |
| d07f8fddc5 |
| c7158d4910 |
| 2a07d366be |
| 40df57f969 |
| d4a2e34b3b |
.changeset/curvy-candies-eat.md (new file, 9 lines)

````diff
@@ -0,0 +1,9 @@
+---
+'task-master-ai': patch
+---
+
+Better support for file paths on Windows, Linux & WSL.
+
+- Standardizes handling of different path formats (URI encoded, Windows, Linux, WSL).
+- Ensures tools receive a clean, absolute path suitable for the server OS.
+- Simplifies tool implementation by centralizing normalization logic.
````
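To make the intent of this changeset concrete, here is a minimal sketch of the kind of normalization it describes: decode URI-encoded input, strip a `file://` prefix, convert Windows backslashes, and resolve to an absolute path. The helper name and exact steps are illustrative only; the project's actual implementation lives in `mcp-server/src/tools/utils.js` and is documented in the rules diff further down.

```javascript
import path from 'path';

// Illustrative only: normalize a raw project root into an absolute path for the server OS.
function normalizeRawProjectRoot(rawPath) {
	if (!rawPath) return null;
	let p = decodeURIComponent(rawPath); // handle URI-encoded paths
	p = p.replace(/^file:\/\//, ''); // strip a file:// prefix if present
	p = p.replace(/\\/g, '/'); // convert Windows backslashes to forward slashes
	// Strip a leading slash before a drive letter, e.g. /C:/Users -> C:/Users
	if (/^\/[A-Za-z]:\//.test(p)) {
		p = p.slice(1);
	}
	return path.resolve(p); // absolute path suitable for the server OS
}
```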
.changeset/fine-monkeys-eat.md (new file, 8 lines)

````diff
@@ -0,0 +1,8 @@
+---
+'task-master-ai': patch
+---
+
+Improved update-subtask
+- Now it has context about the parent task details
+- It also has context about the subtask before it and the subtask after it (if they exist)
+- Not passing all subtasks to stay token efficient
````
.changeset/pre.json (new file, 27 lines)

````diff
@@ -0,0 +1,27 @@
+{
+  "mode": "pre",
+  "tag": "rc",
+  "initialVersions": {
+    "task-master-ai": "0.12.1"
+  },
+  "changesets": [
+    "beige-rats-accept",
+    "blue-spies-kick",
+    "cuddly-zebras-matter",
+    "curvy-candies-eat",
+    "easy-toys-wash",
+    "every-stars-sell",
+    "fine-monkeys-eat",
+    "fine-signs-add",
+    "gentle-views-jump",
+    "mighty-mirrors-watch",
+    "neat-donkeys-shave",
+    "nine-rocks-sink",
+    "ninety-ghosts-relax",
+    "ninety-wombats-pull",
+    "public-cooks-fetch",
+    "tricky-papayas-hang",
+    "violet-papayas-see",
+    "violet-parrots-march"
+  ]
+}
````
````diff
@@ -65,8 +65,9 @@ alwaysApply: false
 - **[`mcp-server/`](mdc:mcp-server/): MCP Server Integration**
   - **Purpose**: Provides MCP interface using FastMCP.
   - **Responsibilities** (See also: [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)):
-    - Registers tools (`mcp-server/src/tools/*.js`).
-    - Tool `execute` methods call **direct function wrappers** (`mcp-server/src/core/direct-functions/*.js`).
+    - Registers tools (`mcp-server/src/tools/*.js`). Tool `execute` methods **should be wrapped** with the `withNormalizedProjectRoot` HOF (from `tools/utils.js`) to ensure consistent path handling.
+    - The HOF provides a normalized `args.projectRoot` to the `execute` method.
+    - Tool `execute` methods call **direct function wrappers** (`mcp-server/src/core/direct-functions/*.js`), passing the normalized `projectRoot` and other args.
     - Direct functions use path utilities (`mcp-server/src/core/utils/`) to resolve paths based on `projectRoot` from session.
     - Direct functions implement silent mode, logger wrappers, and call core logic functions from `scripts/modules/`.
     - Manages MCP caching and response formatting.
````
````diff
@@ -188,58 +188,70 @@ execute: async (args, { log, session }) => {
 - **args**: Validated parameters.
 - **context**: Contains `{ log, session }` from FastMCP. (Removed `reportProgress`).
 
-### Standard Tool Execution Pattern
+### Standard Tool Execution Pattern with Path Normalization (Updated)
 
-The `execute` method within each MCP tool (in `mcp-server/src/tools/*.js`) should follow this standard pattern:
+To ensure consistent handling of project paths across different client environments (Windows, macOS, Linux, WSL) and input formats (e.g., `file:///...`, URI encoded paths), all MCP tool `execute` methods that require access to the project root **MUST** be wrapped with the `withNormalizedProjectRoot` Higher-Order Function (HOF).
 
-1. **Log Entry**: Log the start of the tool execution with relevant arguments.
-2. **Get Project Root**: Use the `getProjectRootFromSession(session, log)` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) to extract the project root path from the client session. Fall back to `args.projectRoot` if the session doesn't provide a root.
-3. **Call Direct Function**: Invoke the corresponding `*Direct` function wrapper (e.g., `listTasksDirect` from [`task-master-core.js`](mdc:mcp-server/src/core/task-master-core.js)), passing an updated `args` object that includes the resolved `projectRoot`. Crucially, the third argument (context) passed to the direct function should **only include `{ log, session }`**. **Do NOT pass `reportProgress`**.
-   ```javascript
-   // Example call (applies to both AI and non-AI direct functions now)
-   const result = await someDirectFunction(
-     { ...args, projectRoot }, // Args including resolved root
-     log, // MCP logger
-     { session } // Context containing session
-   );
-   ```
-4. **Handle Result**: Receive the result object (`{ success, data/error, fromCache }`) from the `*Direct` function.
-5. **Format Response**: Pass this result object to the `handleApiResult` utility (from [`tools/utils.js`](mdc:mcp-server/src/tools/utils.js)) for standardized MCP response formatting and error handling.
-6. **Return**: Return the formatted response object provided by `handleApiResult`.
+This HOF, defined in [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js), performs the following before calling the tool's core logic:
+
+1. **Determines the Raw Root:** It prioritizes `args.projectRoot` if provided by the client, otherwise it calls `getRawProjectRootFromSession` to extract the path from the session.
+2. **Normalizes the Path:** It uses the `normalizeProjectRoot` helper to decode URIs, strip `file://` prefixes, fix potential Windows drive letter prefixes (e.g., `/C:/`), convert backslashes (`\`) to forward slashes (`/`), and resolve the path to an absolute path suitable for the server's OS.
+3. **Injects Normalized Path:** It updates the `args` object by replacing the original `projectRoot` (or adding it) with the normalized, absolute path.
+4. **Executes Original Logic:** It calls the original `execute` function body, passing the updated `args` object.
+
+**Implementation Example:**
 
 ```javascript
-// Example execute method structure for a tool calling an AI-based direct function
-import { getProjectRootFromSession, handleApiResult, createErrorResponse } from './utils.js';
-import { someAIDirectFunction } from '../core/task-master-core.js';
+// In mcp-server/src/tools/your-tool.js
+import {
+	handleApiResult,
+	createErrorResponse,
+	withNormalizedProjectRoot // <<< Import HOF
+} from './utils.js';
+import { yourDirectFunction } from '../core/task-master-core.js';
+import { findTasksJsonPath } from '../core/utils/path-utils.js'; // If needed
 
-// ... inside server.addTool({...})
-execute: async (args, { log, session }) => { // Note: reportProgress is omitted here
-	try {
-		log.info(`Starting AI tool execution with args: ${JSON.stringify(args)}`);
+export function registerYourTool(server) {
+	server.addTool({
+		name: "your_tool",
+		description: "...",
+		parameters: z.object({
+			// ... other parameters ...
+			projectRoot: z.string().optional().describe('...') // projectRoot is optional here, HOF handles fallback
+		}),
+		// Wrap the entire execute function
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+			// args.projectRoot is now guaranteed to be normalized and absolute
+			const { projectRoot /*, other args */ } = args;
 
-		// 1. Get Project Root
-		let rootFolder = getProjectRootFromSession(session, log);
-		if (!rootFolder && args.projectRoot) { // Fallback if needed
-			rootFolder = args.projectRoot;
-			log.info(`Using project root from args as fallback: ${rootFolder}`);
-		}
+			try {
+				log.info(`Executing your_tool with normalized root: ${projectRoot}`);
 
-		// 2. Call AI-Based Direct Function (passing only log and session in context)
-		const result = await someAIDirectFunction({
-			...args,
-			projectRoot: rootFolder // Ensure projectRoot is explicitly passed
-		}, log, { session }); // Pass session here, NO reportProgress
+				// Resolve paths using the normalized projectRoot
+				let tasksPath = findTasksJsonPath({ projectRoot, file: args.file }, log);
 
-		// 3. Handle and Format Response
-		return handleApiResult(result, log);
+				// Call direct function, passing normalized projectRoot if needed by direct func
+				const result = await yourDirectFunction(
+					{
+						// ...other args...
+						projectRoot // Pass it if direct function needs it
+					},
+					log,
+					{ session }
+				);
 
-	} catch (error) {
-		log.error(`Error during AI tool execution: ${error.message}`);
-		return createErrorResponse(error.message);
-	}
-}
+				return handleApiResult(result, log);
+			} catch (error) {
+				log.error(`Error in your_tool: ${error.message}`);
+				return createErrorResponse(error.message);
+			}
+		}) // End HOF wrap
+	});
+}
 ```
+
+By using this HOF, the core logic within the `execute` method and any downstream functions (like `findTasksJsonPath` or direct functions) can reliably expect `args.projectRoot` to be a clean, absolute path suitable for the server environment.
 
 ### Project Initialization Tool
 
 The `initialize_project` tool allows integrated clients like Cursor to set up a new Task Master project:
````
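The HOF itself is not shown in this hunk, only its usage. As a rough illustration of the four steps the updated docs list (determine the raw root, normalize it, inject it into `args`, run the original `execute`), a wrapper along these lines would do the job. This is a sketch, not the project's actual implementation; it assumes the `getRawProjectRootFromSession` and `normalizeProjectRoot` helpers described in the docs are in scope.

```javascript
// Sketch of the wrapper described above; not the actual implementation in tools/utils.js.
export function withNormalizedProjectRoot(executeFn) {
	return async (args, context) => {
		const { log, session } = context;
		// 1. Determine the raw root: prefer args.projectRoot, fall back to the session.
		const rawRoot = args.projectRoot || getRawProjectRootFromSession(session, log);
		// 2. Normalize it (decode URI, strip file://, fix slashes, resolve to absolute).
		const projectRoot = normalizeProjectRoot(rawRoot, log);
		// 3. Inject the normalized path back into args, then 4. run the original execute body.
		return executeFn({ ...args, projectRoot }, context);
	};
}
```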
````diff
@@ -523,14 +523,24 @@ Integrating Task Master commands with the MCP server (for use by tools like Curs
 
 4. **Create MCP Tool (`mcp-server/src/tools/`)**:
    - Create a new file (e.g., `your-command.js`) using **kebab-case**.
-   - Import `zod`, `handleApiResult`, `createErrorResponse`, **`getProjectRootFromSession`**, and your `yourCommandDirect` function.
+   - Import `zod`, `handleApiResult`, **`withNormalizedProjectRoot` HOF**, and your `yourCommandDirect` function.
    - Implement `registerYourCommandTool(server)`.
-   - Define the tool `name` using **snake_case** (e.g., `your_command`).
-   - Define the `parameters` using `zod`. **Crucially, define `projectRoot` as optional**: `projectRoot: z.string().optional().describe(...)`. Include `file` if applicable.
-   - Implement the standard `async execute(args, { log, reportProgress, session })` method:
-     - Get `rootFolder` using `getProjectRootFromSession` (with fallback to `args.projectRoot`).
-     - Call `yourCommandDirect({ ...args, projectRoot: rootFolder }, log)`.
-     - Pass the result to `handleApiResult(result, log, 'Error Message')`.
+   - **Define parameters**: Make `projectRoot` optional (`z.string().optional().describe(...)`) as the HOF handles fallback.
+   - Consider if this operation should run in the background using `AsyncOperationManager`.
+   - Implement the standard `execute` method **wrapped with `withNormalizedProjectRoot`**:
+     ```javascript
+     execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+       // args.projectRoot is now normalized
+       const { projectRoot /*, other args */ } = args;
+       // ... resolve tasks path if needed using normalized projectRoot ...
+       const result = await yourCommandDirect(
+         { /* other args, */ projectRoot /* if needed by direct func */ },
+         log,
+         { session }
+       );
+       return handleApiResult(result, log);
+     })
+     ```
 
 5. **Register Tool**: Import and call `registerYourCommandTool` in `mcp-server/src/tools/index.js`.
 
@@ -618,8 +628,3 @@ When implementing project initialization commands:
 	});
 }
 ```
-
-}
-});
-}
-```
````
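For step 5, the registration itself is an import plus a call from the tools index. A hedged sketch of what that typically looks like; the surrounding `registerTools` function name is assumed here, not taken from this diff.

```javascript
// In mcp-server/src/tools/index.js -- sketch only; wrapper function name is assumed.
import { registerYourCommandTool } from './your-command.js';

export function registerTools(server) {
	// ... existing tool registrations ...
	registerYourCommandTool(server);
}
```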
````diff
@@ -79,6 +79,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 * **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
 * **Notes:** Configuration is stored in `.taskmasterconfig` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
 * **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them.
+* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80.
 * **Warning:** DO NOT MANUALLY EDIT THE .taskmasterconfig FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
 
 ---
````
````diff
@@ -428,36 +428,69 @@ Taskmaster configuration (excluding API keys) is primarily managed through the `
 
 ## MCP Server Tool Utilities (`mcp-server/src/tools/utils.js`)
 
-- **Purpose**: These utilities specifically support the MCP server tools ([`mcp-server/src/tools/*.js`](mdc:mcp-server/src/tools/*.js)), handling MCP communication patterns, response formatting, caching integration, and the CLI fallback mechanism.
-- **Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)** for detailed usage patterns within the MCP tool `execute` methods and direct function wrappers.
+These utilities specifically support the implementation and execution of MCP tools.
 
-- **`getProjectRootFromSession(session, log)`**:
-  - ✅ **DO**: Call this utility **within the MCP tool's `execute` method** to extract the project root path from the `session` object.
-  - Decodes the `file://` URI and handles potential errors.
-  - Returns the project path string or `null`.
-  - The returned path should then be passed in the `args` object when calling the corresponding `*Direct` function (e.g., `yourDirectFunction({ ...args, projectRoot: rootFolder }, log)`).
+- **`normalizeProjectRoot(rawPath, log)`**:
+  - **Purpose**: Takes a raw project root path (potentially URI encoded, with `file://` prefix, Windows slashes) and returns a normalized, absolute path suitable for the server's OS.
+  - **Logic**: Decodes URI, strips `file://`, handles Windows drive prefix (`/C:/`), replaces `\` with `/`, uses `path.resolve()`.
+  - **Usage**: Used internally by `withNormalizedProjectRoot` HOF.
+
+- **`getRawProjectRootFromSession(session, log)`**:
+  - **Purpose**: Extracts the *raw* project root URI string from the session object (`session.roots[0].uri` or `session.roots.roots[0].uri`) without performing normalization.
+  - **Usage**: Used internally by `withNormalizedProjectRoot` HOF as a fallback if `args.projectRoot` isn't provided.
+
+- **`withNormalizedProjectRoot(executeFn)`**:
+  - **Purpose**: A Higher-Order Function (HOF) designed to wrap a tool's `execute` method.
+  - **Logic**:
+    1. Determines the raw project root (from `args.projectRoot` or `getRawProjectRootFromSession`).
+    2. Normalizes the raw path using `normalizeProjectRoot`.
+    3. Injects the normalized, absolute path back into the `args` object as `args.projectRoot`.
+    4. Calls the original `executeFn` with the updated `args`.
+  - **Usage**: Should wrap the `execute` function of *every* MCP tool that needs a reliable, normalized project root path.
+  - **Example**:
+    ```javascript
+    // In mcp-server/src/tools/your-tool.js
+    import { withNormalizedProjectRoot } from './utils.js';
+
+    export function registerYourTool(server) {
+      server.addTool({
+        // ... name, description, parameters ...
+        execute: withNormalizedProjectRoot(async (args, context) => {
+          // args.projectRoot is now normalized here
+          const { projectRoot /*, other args */ } = args;
+          // ... rest of tool logic using normalized projectRoot ...
+        })
+      });
+    }
+    ```
 
 - **`handleApiResult(result, log, errorPrefix, processFunction)`**:
-  - ✅ **DO**: Call this from the MCP tool's `execute` method after receiving the result from the `*Direct` function wrapper.
-  - Takes the standard `{ success, data/error, fromCache }` object.
-  - Formats the standard MCP success or error response, including the `fromCache` flag.
-  - Uses `processMCPResponseData` by default to filter response data.
-
-- **`executeTaskMasterCommand(command, log, args, projectRootRaw)`**:
-  - Executes a Task Master CLI command as a child process.
-  - Handles fallback between global `task-master` and local `node scripts/dev.js`.
-  - ❌ **DON'T**: Use this as the primary method for MCP tools. Prefer direct function calls via `*Direct` wrappers.
-
-- **`processMCPResponseData(taskOrData, fieldsToRemove)`**:
-  - Filters task data (e.g., removing `details`, `testStrategy`) before sending to the MCP client. Called by `handleApiResult`.
+  - **Purpose**: Standardizes the formatting of responses returned by direct functions (`{ success, data/error, fromCache }`) into the MCP response format.
+  - **Usage**: Call this at the end of the tool's `execute` method, passing the result from the direct function call.
 
 - **`createContentResponse(content)` / `createErrorResponse(errorMessage)`**:
-  - Formatters for standard MCP success/error responses.
+  - **Purpose**: Helper functions to create the basic MCP response structure for success or error messages.
+  - **Usage**: Used internally by `handleApiResult` and potentially directly for simple responses.
+
+- **`createLogWrapper(log)`**:
+  - **Purpose**: Creates a logger object wrapper with standard methods (`info`, `warn`, `error`, `debug`, `success`) mapping to the passed MCP `log` object's methods. Ensures compatibility when passing loggers to core functions.
+  - **Usage**: Used within direct functions before passing the `log` object down to core logic that expects the standard method names.
 
 - **`getCachedOrExecute({ cacheKey, actionFn, log })`**:
-  - ✅ **DO**: Use this utility *inside direct function wrappers* to implement caching.
-  - Checks cache, executes `actionFn` on miss, stores result.
-  - Returns standard `{ success, data/error, fromCache: boolean }`.
+  - **Purpose**: Utility for implementing caching within direct functions. Checks cache for `cacheKey`; if miss, executes `actionFn`, caches successful result, and returns.
+  - **Usage**: Wrap the core logic execution within a direct function call.
+
+- **`processMCPResponseData(taskOrData, fieldsToRemove)`**:
+  - **Purpose**: Utility to filter potentially sensitive or large fields (like `details`, `testStrategy`) from task objects before sending the response back via MCP.
+  - **Usage**: Passed as the default `processFunction` to `handleApiResult`.
+
+- **`getProjectRootFromSession(session, log)`**:
+  - **Purpose**: Legacy function to extract *and normalize* the project root from the session. Replaced by the HOF pattern but potentially still used.
+  - **Recommendation**: Prefer using the `withNormalizedProjectRoot` HOF in tools instead of calling this directly.
+
+- **`executeTaskMasterCommand(...)`**:
+  - **Purpose**: Executes `task-master` CLI command as a fallback.
+  - **Recommendation**: Deprecated for most uses; prefer direct function calls.
 
 ## Export Organization
 
````
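As a usage illustration for `getCachedOrExecute` described above: inside a direct function, the core work goes into `actionFn` and the wrapper returns the standard `{ success, data/error, fromCache }` shape. This is a minimal sketch under stated assumptions; the cache key, the hypothetical `someCoreListFunction`, and the exact return contract of `actionFn` are invented for the example.

```javascript
// Sketch of caching inside a direct function; cache key and core call are illustrative.
import { getCachedOrExecute } from '../../tools/utils.js';

export async function listSomethingDirect(args, log) {
	const cacheKey = `listSomething:${args.projectRoot}:${args.file ?? 'default'}`;

	const result = await getCachedOrExecute({
		cacheKey,
		actionFn: async () => {
			// Core logic runs only on a cache miss; a successful result is cached.
			// The { success, data } return shape here is an assumption, not confirmed by the diff.
			const data = await someCoreListFunction(args, log);
			return { success: true, data };
		},
		log
	});

	// result is the standard { success, data/error, fromCache } object.
	return result;
}
```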
````diff
@@ -7,14 +7,14 @@
     "temperature": 0.2
   },
   "research": {
-    "provider": "xai",
-    "modelId": "grok-3",
+    "provider": "perplexity",
+    "modelId": "sonar-pro",
     "maxTokens": 8700,
     "temperature": 0.1
   },
   "fallback": {
     "provider": "anthropic",
-    "modelId": "claude-3-5-sonnet-20241022",
+    "modelId": "claude-3-7-sonnet-20250219",
     "maxTokens": 120000,
     "temperature": 0.2
   }
````
CHANGELOG.md (+53)

````diff
@@ -1,5 +1,58 @@
 # task-master-ai
 
+## 0.13.0-rc.0
+
+### Minor Changes
+
+- ef782ff: feat(expand): Enhance `expand` and `expand-all` commands
+
+  - Integrate `task-complexity-report.json` to automatically determine the number of subtasks and use tailored prompts for expansion based on prior analysis. You no longer need to try copy-pasting the recommended prompt. If it exists, it will use it for you. You can just run `task-master update --id=[id of task] --research` and it will use that prompt automatically. No extra prompt needed.
+  - Change default behavior to _append_ new subtasks to existing ones. Use the `--force` flag to clear existing subtasks before expanding. This is helpful if you need to add more subtasks to a task but you want to do it by the batch from a given prompt. Use force if you want to start fresh with a task's subtasks.
+
+- 87d97bb: Adds support for the OpenRouter AI provider. Users can now configure models available through OpenRouter (requiring an `OPENROUTER_API_KEY`) via the `task-master models` command, granting access to a wide range of additional LLMs. - IMPORTANT FYI ABOUT OPENROUTER: Taskmaster relies on AI SDK, which itself relies on tool use. It looks like **free** models sometimes do not include tool use. For example, Gemini 2.5 pro (free) failed via OpenRouter (no tool use) but worked fine on the paid version of the model. Custom model support for Open Router is considered experimental and likely will not be further improved for some time.
+- 1ab836f: Adds model management and new configuration file .taskmasterconfig which houses the models used for main, research and fallback. Adds models command and setter flags. Adds a --setup flag with an interactive setup. We should be calling this during init. Shows a table of active and available models when models is called without flags. Includes SWE scores and token costs, which are manually entered into the supported_models.json, the new place where models are defined for support. Config-manager.js is the core module responsible for managing the new config."
+- c8722b0: Adds custom model ID support for Ollama and OpenRouter providers.
+  - Adds the `--ollama` and `--openrouter` flags to `task-master models --set-<role>` command to set models for those providers outside of the support models list.
+  - Updated `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
+  - Implemented live validation against OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
+  - Refined logic to prioritize explicit provider flags/choices over internal model list lookups in case of ID conflicts.
+  - Added warnings when setting custom/unvalidated models.
+  - We obviously don't recommend going with a custom, unproven model. If you do and find performance is good, please let us know so we can add it to the list of supported models.
+- 2517bc1: Integrate OpenAI as a new AI provider. - Enhance `models` command/tool to display API key status. - Implement model-specific `maxTokens` override based on `supported-models.json` to save you if you use an incorrect max token value.
+- 9a48278: Tweaks Perplexity AI calls for research mode to max out input tokens and get day-fresh information - Forces temp at 0.1 for highly deterministic output, no variations - Adds a system prompt to further improve the output - Correctly uses the maximum input tokens (8,719, used 8,700) for perplexity - Specificies to use a high degree of research across the web - Specifies to use information that is as fresh as today; this support stuff like capturing brand new announcements like new GPT models and being able to query for those in research. 🔥
+
+### Patch Changes
+
+- 842eaf7: - Add support for Google Gemini models via Vercel AI SDK integration.
+- ed79d4f: Add xAI provider and Grok models support
+- ad89253: Better support for file paths on Windows, Linux & WSL.
+
+  - Standardizes handling of different path formats (URI encoded, Windows, Linux, WSL).
+  - Ensures tools receive a clean, absolute path suitable for the server OS.
+  - Simplifies tool implementation by centralizing normalization logic.
+
+- 2acba94: Add integration for Roo Code
+- d63964a: Improved update-subtask - Now it has context about the parent task details - It also has context about the subtask before it and the subtask after it (if they exist) - Not passing all subtasks to stay token efficient
+- 5f504fa: Improve and adjust `init` command for robustness and updated dependencies.
+
+  - **Update Initialization Dependencies:** Ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
+  - **Silence `npm install` during `init`:** Prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
+  - **Improve Conditional Model Setup:** Reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
+  - **Refactor `init.js`:** Remove internal `isInteractive` flag logic.
+  - **Update `init` Instructions:** Tweak the "Getting Started" text displayed after `init`.
+  - **Fix MCP Server Launch:** Update `.cursor/mcp.json` template to use `node ./mcp-server/server.js` instead of `npx task-master-mcp`.
+  - **Update Default Model:** Change the default main model in the `.taskmasterconfig` template.
+
+- 96aeeff: Fixes an issue with add-task which did not use the manually defined properties and still needlessly hit the AI endpoint.
+- 5aea93d: Fixes an issue that prevented remove-subtask with comma separated tasks/subtasks from being deleted (only the first ID was being deleted). Closes #140
+- 66ac9ab: Improves next command to be subtask-aware - The logic for determining the "next task" (findNextTask function, used by task-master next and the next_task MCP tool) has been significantly improved. Previously, it only considered top-level tasks, making its recommendation less useful when a parent task containing subtasks was already marked 'in-progress'. - The updated logic now prioritizes finding the next available subtask within any 'in-progress' parent task, considering subtask dependencies and priority. - If no suitable subtask is found within active parent tasks, it falls back to recommending the next eligible top-level task based on the original criteria (status, dependencies, priority).
+
+  This change makes the next command much more relevant and helpful during the implementation phase of complex tasks.
+
+- ca7b045: Add `--status` flag to `show` command to filter displayed subtasks.
+- 5a2371b: Fix --task to --num-tasks in ui + related tests - issue #324
+- 6cb213e: Adds a 'models' CLI and MCP command to get the current model configuration, available models, and gives the ability to set main/research/fallback models." - In the CLI, `task-master models` shows the current models config. Using the `--setup` flag launches an interactive set up that allows you to easily select the models you want to use for each of the three roles. Use `q` during the interactive setup to cancel the setup. - In the MCP, responses are simplified in RESTful format (instead of the full CLI output). The agent can use the `models` tool with different arguments, including `listAvailableModels` to get available models. Run without arguments, it will return the current configuration. Arguments are available to set the model for each of the three roles. This allows you to manage Taskmaster AI providers and models directly from either the CLI or MCP or both. - Updated the CLI help menu when you run `task-master` to include missing commands and .taskmasterconfig information. - Adds `--research` flag to `add-task` so you can hit up Perplexity right from the add-task flow, rather than having to add a task and then update it.
+
 ## 0.12.1
 
 ### Patch Changes
````
````diff
@@ -47,7 +47,7 @@ npm install task-master-ai
 task-master init
 
 # If installed locally
-npx task-master-init
+npx task-master init
 ```
 
 This will prompt you for project details and set up a new project with the necessary files and structure.
@@ -89,7 +89,7 @@ Initialize a new project:
 task-master init
 
 # If installed locally
-npx task-master-init
+npx task-master init
 ```
 
 This will prompt you for project details and set up a new project with the necessary files and structure.
````
````diff
@@ -23,13 +23,21 @@ import { createLogWrapper } from '../../tools/utils.js';
  * @param {string} [args.priority='medium'] - Task priority (high, medium, low)
  * @param {string} [args.tasksJsonPath] - Path to the tasks.json file (resolved by tool)
  * @param {boolean} [args.research=false] - Whether to use research capabilities for task creation
+ * @param {string} [args.projectRoot] - Project root path
  * @param {Object} log - Logger object
  * @param {Object} context - Additional context (session)
  * @returns {Promise<Object>} - Result object { success: boolean, data?: any, error?: { code: string, message: string } }
  */
 export async function addTaskDirect(args, log, context = {}) {
-	// Destructure expected args (including research)
-	const { tasksJsonPath, prompt, dependencies, priority, research } = args;
+	// Destructure expected args (including research and projectRoot)
+	const {
+		tasksJsonPath,
+		prompt,
+		dependencies,
+		priority,
+		research,
+		projectRoot
+	} = args;
 	const { session } = context; // Destructure session from context
 
 	// Enable silent mode to prevent console logs from interfering with JSON response
@@ -108,11 +116,13 @@ export async function addTaskDirect(args, log, context = {}) {
 			taskPriority,
 			{
 				session,
-				mcpLog
+				mcpLog,
+				projectRoot
 			},
 			'json', // outputFormat
 			manualTaskData, // Pass the manual task data
-			false // research flag is false for manual creation
+			false, // research flag is false for manual creation
+			projectRoot // Pass projectRoot
 		);
 	} else {
 		// AI-driven task creation
@@ -128,7 +138,8 @@ export async function addTaskDirect(args, log, context = {}) {
 			taskPriority,
 			{
 				session,
-				mcpLog
+				mcpLog,
+				projectRoot
 			},
 			'json', // outputFormat
 			null, // manualTaskData is null for AI creation
````
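The JSDoc above pins down the contract of `addTaskDirect`: it takes `(args, log, context)` and resolves to `{ success, data?, error? }`. For a caller outside the `handleApiResult` path, consuming it looks roughly like this; the argument values are invented for the example and are not taken from the diff.

```javascript
// Illustrative call; argument values are made up for the example.
const result = await addTaskDirect(
	{
		tasksJsonPath: '/abs/path/to/project/tasks/tasks.json',
		prompt: 'Add a task to wire up the new MCP tool',
		dependencies: [],
		priority: 'medium',
		research: false,
		projectRoot: '/abs/path/to/project'
	},
	log,
	{ session }
);

if (result.success) {
	log.info(`Created task: ${JSON.stringify(result.data)}`);
} else {
	log.error(`${result.error.code}: ${result.error.message}`);
}
```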
````diff
@@ -18,15 +18,17 @@ import { createLogWrapper } from '../../tools/utils.js'; // Import the new utili
  * @param {string} args.outputPath - Explicit absolute path to save the report.
  * @param {string|number} [args.threshold] - Minimum complexity score to recommend expansion (1-10)
  * @param {boolean} [args.research] - Use Perplexity AI for research-backed complexity analysis
+ * @param {string} [args.projectRoot] - Project root path.
  * @param {Object} log - Logger object
  * @param {Object} [context={}] - Context object containing session data
  * @param {Object} [context.session] - MCP session object
  * @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
  */
 export async function analyzeTaskComplexityDirect(args, log, context = {}) {
-	const { session } = context; // Extract session
-	// Destructure expected args
-	const { tasksJsonPath, outputPath, model, threshold, research } = args; // Model is ignored by core function now
+	const { session } = context;
+	const { tasksJsonPath, outputPath, threshold, research, projectRoot } = args;
+
+	const logWrapper = createLogWrapper(log);
 
 	// --- Initial Checks (remain the same) ---
 	try {
@@ -60,35 +62,34 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
 			log.info('Using research role for complexity analysis');
 		}
 
-		// Prepare options for the core function
-		const options = {
-			file: tasksPath,
-			output: resolvedOutputPath,
-			// model: model, // No longer needed
+		// Prepare options for the core function - REMOVED mcpLog and session here
+		const coreOptions = {
+			file: tasksJsonPath,
+			output: outputPath,
 			threshold: threshold,
-			research: research === true // Ensure boolean
+			research: research === true, // Ensure boolean
+			projectRoot: projectRoot // Pass projectRoot here
 		};
 		// --- End Initial Checks ---
 
-		// --- Silent Mode and Logger Wrapper (remain the same) ---
+		// --- Silent Mode and Logger Wrapper ---
 		const wasSilent = isSilentMode();
 		if (!wasSilent) {
-			enableSilentMode();
+			enableSilentMode(); // Still enable silent mode as a backup
 		}
 
-		// Create logger wrapper using the utility
-		const mcpLog = createLogWrapper(log);
-
-		let report; // To store the result from the core function
+		let report;
 
 		try {
-			// --- Call Core Function (Updated Context Passing) ---
-			// Call the core function, passing options and the context object { session, mcpLog }
-			report = await analyzeTaskComplexity(options, {
-				session, // Pass the session object
-				mcpLog // Pass the logger wrapper
-			});
-			// --- End Core Function Call ---
+			// --- Call Core Function (Pass context separately) ---
+			// Pass coreOptions as the first argument
+			// Pass context object { session, mcpLog } as the second argument
+			report = await analyzeTaskComplexity(
+				coreOptions, // Pass options object
+				{ session, mcpLog: logWrapper } // Pass context object
+				// Removed the explicit 'json' format argument, assuming context handling is sufficient
+				// If issues persist, we might need to add an explicit format param to analyzeTaskComplexity
+			);
 		} catch (error) {
 			log.error(
 				`Error in analyzeTaskComplexity core function: ${error.message}`
@@ -100,7 +101,7 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
 			return {
 				success: false,
 				error: {
-					code: 'ANALYZE_CORE_ERROR', // More specific error code
+					code: 'ANALYZE_CORE_ERROR',
 					message: `Error running core complexity analysis: ${error.message}`
 				}
 			};
@@ -124,10 +125,10 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
 		};
 	}
 
-	// The core function now returns the report object directly
-	if (!report || !report.complexityAnalysis) {
+	// Added a check to ensure report is defined before accessing its properties
+	if (!report || typeof report !== 'object') {
 		log.error(
-			'Core analyzeTaskComplexity function did not return a valid report object.'
+			'Core analysis function returned an invalid or undefined response.'
 		);
 		return {
 			success: false,
@@ -139,7 +140,10 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
 	}
 
 	try {
-		const analysisArray = report.complexityAnalysis; // Already an array
+		// Ensure complexityAnalysis exists and is an array
+		const analysisArray = Array.isArray(report.complexityAnalysis)
+			? report.complexityAnalysis
+			: [];
 
 		// Count tasks by complexity (remains the same)
 		const highComplexityTasks = analysisArray.filter(
@@ -155,16 +159,15 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
 		return {
 			success: true,
 			data: {
-				message: `Task complexity analysis complete. Report saved to ${resolvedOutputPath}`,
-				reportPath: resolvedOutputPath,
+				message: `Task complexity analysis complete. Report saved to ${outputPath}`, // Use outputPath from args
+				reportPath: outputPath, // Use outputPath from args
 				reportSummary: {
 					taskCount: analysisArray.length,
 					highComplexityTasks,
 					mediumComplexityTasks,
 					lowComplexityTasks
-				}
-				// Include the full report data if needed by the client
-				// fullReport: report
+				},
+				fullReport: report // Now includes the full report
 			}
 		};
 	} catch (parseError) {
````
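`createLogWrapper(log)`, used above to build `logWrapper`, is described in the utilities hunk as mapping the standard `info`/`warn`/`error`/`debug`/`success` methods onto the MCP `log` object. A minimal sketch consistent with that description; the real implementation in `tools/utils.js` may differ in details such as how `debug` and `success` are mapped.

```javascript
// Sketch only: wrap an MCP logger so core functions can call the standard method names.
export function createLogWrapper(log) {
	return {
		info: (...args) => log.info(...args),
		warn: (...args) => log.warn(...args),
		error: (...args) => log.error(...args),
		// Assumptions: fall back to info() when the MCP logger lacks debug/success.
		debug: (...args) => (log.debug ? log.debug(...args) : log.info(...args)),
		success: (...args) => log.info(...args)
	};
}
```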
````diff
@@ -17,14 +17,15 @@ import { createLogWrapper } from '../../tools/utils.js';
  * @param {boolean} [args.research] - Enable research-backed subtask generation
  * @param {string} [args.prompt] - Additional context to guide subtask generation
  * @param {boolean} [args.force] - Force regeneration of subtasks for tasks that already have them
+ * @param {string} [args.projectRoot] - Project root path.
  * @param {Object} log - Logger object from FastMCP
  * @param {Object} context - Context object containing session
  * @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
  */
 export async function expandAllTasksDirect(args, log, context = {}) {
 	const { session } = context; // Extract session
-	// Destructure expected args
-	const { tasksJsonPath, num, research, prompt, force } = args;
+	// Destructure expected args, including projectRoot
+	const { tasksJsonPath, num, research, prompt, force, projectRoot } = args;
 
 	// Create logger wrapper using the utility
 	const mcpLog = createLogWrapper(log);
@@ -43,7 +44,7 @@ export async function expandAllTasksDirect(args, log, context = {}) {
 	enableSilentMode(); // Enable silent mode for the core function call
 	try {
 		log.info(
-			`Calling core expandAllTasks with args: ${JSON.stringify({ num, research, prompt, force })}`
+			`Calling core expandAllTasks with args: ${JSON.stringify({ num, research, prompt, force, projectRoot })}`
 		);
 
 		// Parse parameters (ensure correct types)
@@ -52,14 +53,14 @@ export async function expandAllTasksDirect(args, log, context = {}) {
 		const additionalContext = prompt || '';
 		const forceFlag = force === true;
 
-		// Call the core function, passing options and the context object { session, mcpLog }
+		// Call the core function, passing options and the context object { session, mcpLog, projectRoot }
 		const result = await expandAllTasks(
 			tasksJsonPath,
 			numSubtasks,
 			useResearch,
 			additionalContext,
 			forceFlag,
-			{ session, mcpLog }
+			{ session, mcpLog, projectRoot }
 		);
 
 		// Core function now returns a summary object
````
````diff
@@ -25,6 +25,7 @@ import { createLogWrapper } from '../../tools/utils.js';
  * @param {boolean} [args.research] - Enable research role for subtask generation.
  * @param {string} [args.prompt] - Additional context to guide subtask generation.
  * @param {boolean} [args.force] - Force expansion even if subtasks exist.
+ * @param {string} [args.projectRoot] - Project root directory.
  * @param {Object} log - Logger object
  * @param {Object} context - Context object containing session
  * @param {Object} [context.session] - MCP Session object
@@ -32,8 +33,8 @@ import { createLogWrapper } from '../../tools/utils.js';
  */
 export async function expandTaskDirect(args, log, context = {}) {
 	const { session } = context; // Extract session
-	// Destructure expected args
-	const { tasksJsonPath, id, num, research, prompt, force } = args;
+	// Destructure expected args, including projectRoot
+	const { tasksJsonPath, id, num, research, prompt, force, projectRoot } = args;
 
 	// Log session root data for debugging
 	log.info(
@@ -184,20 +185,22 @@ export async function expandTaskDirect(args, log, context = {}) {
 	// Create logger wrapper using the utility
 	const mcpLog = createLogWrapper(log);
 
+	let wasSilent; // Declare wasSilent outside the try block
 	// Process the request
 	try {
 		// Enable silent mode to prevent console logs from interfering with JSON response
-		const wasSilent = isSilentMode();
+		wasSilent = isSilentMode(); // Assign inside the try block
 		if (!wasSilent) enableSilentMode();
 
-		// Call the core expandTask function with the wrapped logger
-		const result = await expandTask(
+		// Call the core expandTask function with the wrapped logger and projectRoot
+		const updatedTaskResult = await expandTask(
 			tasksPath,
 			taskId,
 			numSubtasks,
 			useResearch,
 			additionalContext,
-			{ mcpLog, session }
+			{ mcpLog, session, projectRoot },
+			forceFlag
 		);
 
 		// Restore normal logging
````
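The `wasSilent` change above exists so the flag is visible to a `finally` block outside the `try`. The guard the direct functions follow (enable silent mode only if it was not already on, restore it regardless of how the core call exits) reduces to roughly the shape below. This is a sketch: the core call is a placeholder, and some direct functions disable silent mode unconditionally in `finally` instead of checking the flag.

```javascript
import {
	isSilentMode,
	enableSilentMode,
	disableSilentMode
} from '../../../../scripts/modules/utils.js';

// Illustrative shape of the silent-mode guard used by the direct functions.
async function someDirectFunction(args, log, context = {}) {
	let wasSilent; // declared outside try so finally can see it
	try {
		wasSilent = isSilentMode();
		if (!wasSilent) enableSilentMode(); // keep console output from corrupting the JSON response

		const result = await someCoreFunction(args, context); // placeholder for the core call
		return { success: true, data: result };
	} catch (error) {
		return { success: false, error: { code: 'CORE_ERROR', message: error.message } };
	} finally {
		// Restore logging; only disable if this function was the one that enabled silent mode.
		if (!wasSilent) disableSilentMode();
	}
}
```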
````diff
@@ -4,7 +4,6 @@ import {
 	disableSilentMode
 	// isSilentMode // Not used directly here
 } from '../../../../scripts/modules/utils.js';
-import { getProjectRootFromSession } from '../../tools/utils.js'; // Adjust path if necessary
 import os from 'os'; // Import os module for home directory check
 
 /**
@@ -16,60 +15,32 @@ import os from 'os'; // Import os module for home directory check
  * @returns {Promise<{success: boolean, data?: any, error?: {code: string, message: string}}>} - Standard result object.
  */
 export async function initializeProjectDirect(args, log, context = {}) {
-	const { session } = context;
+	const { session } = context; // Keep session if core logic needs it
 	const homeDir = os.homedir();
-	let targetDirectory = null;
 
-	log.info(
-		`CONTEXT received in direct function: ${context ? JSON.stringify(Object.keys(context)) : 'MISSING or Falsy'}`
-	);
-	log.info(
-		`SESSION extracted in direct function: ${session ? 'Exists' : 'MISSING or Falsy'}`
-	);
 	log.info(`Args received in direct function: ${JSON.stringify(args)}`);
 
 	// --- Determine Target Directory ---
-	// 1. Prioritize projectRoot passed directly in args
-	// Ensure it's not null, '/', or the home directory
-	if (
-		args.projectRoot &&
-		args.projectRoot !== '/' &&
-		args.projectRoot !== homeDir
-	) {
-		log.info(`Using projectRoot directly from args: ${args.projectRoot}`);
-		targetDirectory = args.projectRoot;
-	} else {
-		// 2. If args.projectRoot is missing or invalid, THEN try session (as a fallback)
-		log.warn(
-			`args.projectRoot ('${args.projectRoot}') is missing or invalid. Attempting to derive from session.`
-		);
-		const sessionDerivedPath = getProjectRootFromSession(session, log);
-		// Validate the session-derived path as well
-		if (
-			sessionDerivedPath &&
-			sessionDerivedPath !== '/' &&
-			sessionDerivedPath !== homeDir
-		) {
-			log.info(
-				`Using project root derived from session: ${sessionDerivedPath}`
-			);
-			targetDirectory = sessionDerivedPath;
-		} else {
-			log.error(
-				`Could not determine a valid project root. args.projectRoot='${args.projectRoot}', sessionDerivedPath='${sessionDerivedPath}'`
-			);
-		}
-	}
+	// TRUST the projectRoot passed from the tool layer via args
+	// The HOF in the tool layer already normalized and validated it came from a reliable source (args or session)
+	const targetDirectory = args.projectRoot;
 
-	// 3. Validate the final targetDirectory
-	if (!targetDirectory) {
-		// This error now covers cases where neither args.projectRoot nor session provided a valid path
+	// --- Validate the targetDirectory (basic sanity checks) ---
+	if (
+		!targetDirectory ||
+		typeof targetDirectory !== 'string' || // Ensure it's a string
+		targetDirectory === '/' ||
+		targetDirectory === homeDir
+	) {
+		log.error(
+			`Invalid target directory received from tool layer: '${targetDirectory}'`
+		);
 		return {
 			success: false,
 			error: {
 				code: 'INVALID_TARGET_DIRECTORY',
-				message: `Cannot initialize project: Could not determine a valid target directory. Please ensure a workspace/folder is open or specify projectRoot.`,
-				details: `Attempted args.projectRoot: ${args.projectRoot}`
+				message: `Cannot initialize project: Invalid target directory '${targetDirectory}' received. Please ensure a valid workspace/folder is open or specified.`,
+				details: `Received args.projectRoot: ${args.projectRoot}` // Show what was received
 			},
 			fromCache: false
 		};
@@ -86,11 +57,12 @@ export async function initializeProjectDirect(args, log, context = {}) {
 	log.info(
 		`Temporarily changing CWD to ${targetDirectory} for initialization.`
 	);
-	process.chdir(targetDirectory); // Change CWD to the *validated* targetDirectory
+	process.chdir(targetDirectory); // Change CWD to the HOF-provided root
 
-	enableSilentMode(); // Enable silent mode BEFORE calling the core function
+	enableSilentMode();
 	try {
-		// Always force yes: true when called via MCP to avoid interactive prompts
+		// Construct options ONLY from the relevant flags in args
+		// The core initializeProject operates in the current CWD, which we just set
 		const options = {
 			aliases: args.addAliases,
 			skipInstall: args.skipInstall,
@@ -100,12 +72,11 @@ export async function initializeProjectDirect(args, log, context = {}) {
 		log.info(`Initializing project with options: ${JSON.stringify(options)}`);
 		const result = await initializeProject(options); // Call core logic
 
-		// Format success result for handleApiResult
 		resultData = {
 			message: 'Project initialized successfully.',
 			next_step:
 				'Now that the project is initialized, the next step is to create the tasks by parsing a PRD. This will create the tasks folder and the initial task files (tasks folder will be created when parse-prd is run). The parse-prd tool will require a prd.txt file as input (typically found in the project root directory, scripts/ directory). You can create a prd.txt file by asking the user about their idea, and then using the scripts/example_prd.txt file as a template to genrate a prd.txt file in scripts/. You may skip all of this if the user already has a prd.txt file. You can THEN use the parse-prd tool to create the tasks. So: step 1 after initialization is to create a prd.txt file in scripts/prd.txt or confirm the user already has one. Step 2 is to use the parse-prd tool to create the tasks. Do not bother looking for tasks after initialization, just use the parse-prd tool to create the tasks after creating a prd.txt from which to parse the tasks. You do NOT need to reinitialize the project to parse-prd.',
-			...result // Include details returned by initializeProject
+			...result
 		};
 		success = true;
 		log.info(
@@ -120,12 +91,11 @@ export async function initializeProjectDirect(args, log, context = {}) {
 		};
 		success = false;
 	} finally {
 		disableSilentMode(); // ALWAYS disable silent mode in finally
````
|
disableSilentMode();
|
||||||
log.info(`Restoring original CWD: ${originalCwd}`);
|
log.info(`Restoring original CWD: ${originalCwd}`);
|
||||||
process.chdir(originalCwd); // Change back to original CWD
|
process.chdir(originalCwd);
|
||||||
}
|
}
|
||||||
|
|
||||||
// Return in format expected by handleApiResult
|
|
||||||
if (success) {
|
if (success) {
|
||||||
return { success: true, data: resultData, fromCache: false };
|
return { success: true, data: resultData, fromCache: false };
|
||||||
} else {
|
} else {
|
||||||
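As a rough standalone sketch of the pattern above: the direct function now trusts the HOF-normalized projectRoot, applies only basic sanity checks, and always restores the original CWD. The helper names below are illustrative, not part of the codebase; only Node built-ins are assumed.

import os from 'os';
import process from 'process';

// Hypothetical standalone version of the target-directory guard shown in the diff.
function validateTargetDirectory(targetDirectory) {
  const homeDir = os.homedir();
  if (
    !targetDirectory ||
    typeof targetDirectory !== 'string' ||
    targetDirectory === '/' ||
    targetDirectory === homeDir
  ) {
    return {
      success: false,
      error: {
        code: 'INVALID_TARGET_DIRECTORY',
        message: `Cannot initialize project: Invalid target directory '${targetDirectory}' received.`
      }
    };
  }
  return { success: true };
}

// Run a callback with the CWD temporarily switched, always restoring it afterwards.
async function withTemporaryCwd(targetDirectory, fn) {
  const originalCwd = process.cwd();
  process.chdir(targetDirectory);
  try {
    return await fn();
  } finally {
    process.chdir(originalCwd); // restore even if fn throws
  }
}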
@@ -71,24 +71,34 @@ export async function nextTaskDirect(args, log) {
        data: {
          message:
            'No eligible next task found. All tasks are either completed or have unsatisfied dependencies',
-         nextTask: null,
-         allTasks: data.tasks
+         nextTask: null
        }
      };
    }

+   // Check if it's a subtask
+   const isSubtask =
+     typeof nextTask.id === 'string' && nextTask.id.includes('.');
+
+   const taskOrSubtask = isSubtask ? 'subtask' : 'task';
+
+   const additionalAdvice = isSubtask
+     ? 'Subtasks can be updated with timestamped details as you implement them. This is useful for tracking progress, marking milestones and insights (of successful or successive falures in attempting to implement the subtask). Research can be used when updating the subtask to collect up-to-date information, and can be helpful to solve a repeating problem the agent is unable to solve. It is a good idea to get-task the parent task to collect the overall context of the task, and to get-task the subtask to collect the specific details of the subtask.'
+     : 'Tasks can be updated to reflect a change in the direction of the task, or to reformulate the task per your prompt. Research can be used when updating the task to collect up-to-date information. It is best to update subtasks as you work on them, and to update the task for more high-level changes that may affect pending subtasks or the general direction of the task.';

    // Restore normal logging
    disableSilentMode();

    // Return the next task data with the full tasks array for reference
    log.info(
-     `Successfully found next task ${nextTask.id}: ${nextTask.title}`
+     `Successfully found next task ${nextTask.id}: ${nextTask.title}. Is subtask: ${isSubtask}`
    );
    return {
      success: true,
      data: {
        nextTask,
-       allTasks: data.tasks
+       isSubtask,
+       nextSteps: `When ready to work on the ${taskOrSubtask}, use set-status to set the status to "in progress" ${additionalAdvice}`
      }
    };
  } catch (error) {
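A small illustrative helper mirroring the dotted-ID check introduced above; the function name and the example values are hypothetical.

// Subtask IDs use the "parent.sub" string form, so a dot distinguishes them from task IDs.
function describeNextItem(nextTask) {
  const isSubtask =
    typeof nextTask.id === 'string' && nextTask.id.includes('.');
  const taskOrSubtask = isSubtask ? 'subtask' : 'task';
  return {
    isSubtask,
    nextSteps: `When ready to work on the ${taskOrSubtask}, use set-status to set the status to "in progress".`
  };
}

// Example: a parent task has a numeric id, a subtask uses "parent.sub".
console.log(describeNextItem({ id: 5 }).isSubtask); // false
console.log(describeNextItem({ id: '5.2' }).isSubtask); // true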
@@ -8,9 +8,11 @@ import fs from 'fs';
import { parsePRD } from '../../../../scripts/modules/task-manager.js';
import {
  enableSilentMode,
  disableSilentMode,
  isSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
import { getDefaultNumTasks } from '../../../../scripts/modules/config-manager.js';

/**
 * Direct function wrapper for parsing PRD documents and generating tasks.
@@ -21,177 +23,158 @@ import { createLogWrapper } from '../../tools/utils.js';
 * @returns {Promise<Object>} - Result object with success status and data/error information.
 */
export async function parsePRDDirect(args, log, context = {}) {
  const { session } = context;
  // Extract projectRoot from args
  const {
    input: inputArg,
    output: outputArg,
    numTasks: numTasksArg,
    force,
    append,
    projectRoot
  } = args;

  // Create the standard logger wrapper
  const logWrapper = createLogWrapper(log);

  // --- Input Validation and Path Resolution ---
  if (!projectRoot) {
    logWrapper.error('parsePRDDirect requires a projectRoot argument.');
    return {
      success: false,
      error: {
        code: 'MISSING_ARGUMENT',
        message: 'projectRoot is required.'
      }
    };
  }
  if (!inputArg) {
    logWrapper.error('parsePRDDirect called without input path');
    return {
      success: false,
      error: { code: 'MISSING_ARGUMENT', message: 'Input path is required' }
    };
  }

  // Resolve input and output paths relative to projectRoot
  const inputPath = path.resolve(projectRoot, inputArg);
  const outputPath = outputArg
    ? path.resolve(projectRoot, outputArg)
    : path.resolve(projectRoot, 'tasks', 'tasks.json'); // Default output path

  // Check if input file exists
  if (!fs.existsSync(inputPath)) {
    const errorMsg = `Input PRD file not found at resolved path: ${inputPath}`;
    logWrapper.error(errorMsg);
    return {
      success: false,
      error: { code: 'FILE_NOT_FOUND', message: errorMsg }
    };
  }

  const outputDir = path.dirname(outputPath);
  try {
    if (!fs.existsSync(outputDir)) {
      logWrapper.info(`Creating output directory: ${outputDir}`);
      fs.mkdirSync(outputDir, { recursive: true });
    }
  } catch (dirError) {
    logWrapper.error(
      `Failed to create output directory ${outputDir}: ${dirError.message}`
    );
    // Return an error response immediately if dir creation fails
    return {
      success: false,
      error: {
        code: 'DIRECTORY_CREATION_ERROR',
        message: `Failed to create output directory: ${dirError.message}`
      }
    };
  }

  let numTasks = getDefaultNumTasks(projectRoot);
  if (numTasksArg) {
    numTasks =
      typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
    if (isNaN(numTasks) || numTasks <= 0) {
      // Ensure positive number
      numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
      logWrapper.warn(
        `Invalid numTasks value: ${numTasksArg}. Using default: ${numTasks}`
      );
    }
  }

  const useForce = force === true;
  const useAppend = append === true;
  if (useAppend) {
    logWrapper.info('Append mode enabled.');
    if (useForce) {
      logWrapper.warn(
        'Both --force and --append flags were provided. --force takes precedence; append mode will be ignored.'
      );
    }
  }

  logWrapper.info(
    `Parsing PRD via direct function. Input: ${inputPath}, Output: ${outputPath}, NumTasks: ${numTasks}, Force: ${useForce}, Append: ${useAppend}, ProjectRoot: ${projectRoot}`
  );

  const wasSilent = isSilentMode();
  if (!wasSilent) {
    enableSilentMode();
  }

  try {
    // Call the core parsePRD function
    const result = await parsePRD(
      inputPath,
      outputPath,
      numTasks,
      { session, mcpLog: logWrapper, projectRoot, useForce, useAppend },
      'json'
    );

    // parsePRD returns { success: true, tasks: processedTasks } on success
    if (result && result.success && Array.isArray(result.tasks)) {
      logWrapper.success(
        `Successfully parsed PRD. Generated ${result.tasks.length} tasks.`
      );
      return {
        success: true,
        data: {
          message: `Successfully parsed PRD and generated ${result.tasks.length} tasks.`,
          outputPath: outputPath,
          taskCount: result.tasks.length
        }
      };
    } else {
      // Handle case where core function didn't return expected success structure
      logWrapper.error(
        'Core parsePRD function did not return a successful structure.'
      );
      return {
        success: false,
        error: {
          code: 'CORE_FUNCTION_ERROR',
          message:
            result?.message ||
            'Core function failed to parse PRD or returned unexpected result.'
        }
      };
    }
  } catch (error) {
    logWrapper.error(`Error executing core parsePRD: ${error.message}`);
    return {
      success: false,
      error: {
        code: 'PARSE_PRD_CORE_ERROR',
        message: error.message || 'Unknown error parsing PRD'
      }
    };
  } finally {
    if (!wasSilent && isSilentMode()) {
      disableSilentMode();
    }
  }
}
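A minimal sketch of the path-resolution and numTasks coercion logic shown above, assuming a getDefaultNumTasks stub in place of the real config-manager helper; the function name and return shape here are illustrative.

import path from 'path';

// Stand-in for the project's config helper; assumed signature.
const getDefaultNumTasks = () => 10;

function resolveParsePrdArgs({ projectRoot, input, output, numTasks }) {
  // Relative paths are resolved against the (already normalized) project root.
  const inputPath = path.resolve(projectRoot, input);
  const outputPath = output
    ? path.resolve(projectRoot, output)
    : path.resolve(projectRoot, 'tasks', 'tasks.json');

  // Accept string or number; fall back to the configured default when invalid.
  let parsedNumTasks =
    typeof numTasks === 'string' ? parseInt(numTasks, 10) : numTasks;
  if (!Number.isInteger(parsedNumTasks) || parsedNumTasks <= 0) {
    parsedNumTasks = getDefaultNumTasks();
  }
  return { inputPath, outputPath, numTasks: parsedNumTasks };
}

// e.g. resolveParsePrdArgs({ projectRoot: '/repo', input: 'scripts/prd.txt', numTasks: '12' })
// -> { inputPath: '/repo/scripts/prd.txt', outputPath: '/repo/tasks/tasks.json', numTasks: 12 }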
@@ -3,151 +3,100 @@
 * Direct function implementation for showing task details
 */

import { findTaskById, readJSON } from '../../../../scripts/modules/utils.js';
import { getCachedOrExecute } from '../../tools/utils.js';
import {
  enableSilentMode,
  disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';

/**
 * Direct function wrapper for getting task details.
 *
 * @param {Object} args - Command arguments.
 * @param {string} args.id - Task ID to show.
 * @param {string} [args.file] - Optional path to the tasks file (passed to findTasksJsonPath).
 * @param {string} [args.status] - Optional status to filter subtasks by.
 * @param {string} args.projectRoot - Absolute path to the project root directory (already normalized by tool).
 * @param {Object} log - Logger object.
 * @param {Object} context - Context object containing session data.
 * @returns {Promise<Object>} - Result object with success status and data/error information.
 */
export async function showTaskDirect(args, log) {
  // Destructure session from context if needed later, otherwise ignore
  // const { session } = context;
  // Destructure projectRoot and other args. projectRoot is assumed normalized.
  const { id, file, status, projectRoot } = args;

  log.info(
    `Showing task direct function. ID: ${id}, File: ${file}, Status Filter: ${status}, ProjectRoot: ${projectRoot}`
  );

  // --- Path Resolution using the passed (already normalized) projectRoot ---
  let tasksJsonPath;
  try {
    // Use the projectRoot passed directly from args
    tasksJsonPath = findTasksJsonPath(
      { projectRoot: projectRoot, file: file },
      log
    );
    log.info(`Resolved tasks path: ${tasksJsonPath}`);
  } catch (error) {
    log.error(`Error finding tasks.json: ${error.message}`);
    return {
      success: false,
      error: {
        code: 'TASKS_FILE_NOT_FOUND',
        message: `Failed to find tasks.json: ${error.message}`
      }
    };
  }
  // --- End Path Resolution ---

  // --- Rest of the function remains the same, using tasksJsonPath ---
  try {
    const tasksData = readJSON(tasksJsonPath);
    if (!tasksData || !tasksData.tasks) {
      return {
        success: false,
        error: { code: 'INVALID_TASKS_DATA', message: 'Invalid tasks data' }
      };
    }

    const { task, originalSubtaskCount } = findTaskById(
      tasksData.tasks,
      id,
      status
    );

    if (!task) {
      return {
        success: false,
        error: {
          code: 'TASK_NOT_FOUND',
          message: `Task or subtask with ID ${id} not found`
        }
      };
    }

    log.info(`Successfully retrieved task ${id}.`);

    const returnData = { ...task };
    if (originalSubtaskCount !== null) {
      returnData._originalSubtaskCount = originalSubtaskCount;
      returnData._subtaskFilter = status;
    }

    return { success: true, data: returnData };
  } catch (error) {
    log.error(`Error showing task ${id}: ${error.message}`);
    return {
      success: false,
      error: {
        code: 'TASK_OPERATION_ERROR',
        message: error.message
      }
    };
  }
}
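An illustrative sketch of how the filtered-subtask metadata is attached to the returned task; the wrapper function is hypothetical, while the _originalSubtaskCount and _subtaskFilter field names follow the diff.

// Copy the task and attach filter metadata without mutating the original object.
function buildShowTaskResponse(task, originalSubtaskCount, statusFilter) {
  const returnData = { ...task };
  if (originalSubtaskCount !== null && originalSubtaskCount !== undefined) {
    returnData._originalSubtaskCount = originalSubtaskCount;
    returnData._subtaskFilter = statusFilter;
  }
  return { success: true, data: returnData };
}

// Example: a task whose 8 subtasks were filtered down to the "pending" ones.
const response = buildShowTaskResponse(
  { id: 3, title: 'Build CLI', subtasks: [{ id: '3.1', status: 'pending' }] },
  8,
  'pending'
);
console.log(response.data._originalSubtaskCount); // 8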
@@ -6,29 +6,40 @@
import { updateSubtaskById } from '../../../../scripts/modules/task-manager.js';
import {
  enableSilentMode,
  disableSilentMode,
  isSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';

/**
 * Direct function wrapper for updateSubtaskById with error handling.
 *
 * @param {Object} args - Command arguments containing id, prompt, useResearch, tasksJsonPath, and projectRoot.
 * @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
 * @param {string} args.id - Subtask ID in format "parent.sub".
 * @param {string} args.prompt - Information to append to the subtask.
 * @param {boolean} [args.research] - Whether to use research role.
 * @param {string} [args.projectRoot] - Project root path.
 * @param {Object} log - Logger object.
 * @param {Object} context - Context object containing session data.
 * @returns {Promise<Object>} - Result object with success status and data/error information.
 */
export async function updateSubtaskByIdDirect(args, log, context = {}) {
  const { session } = context;
  // Destructure expected args, including projectRoot
  const { tasksJsonPath, id, prompt, research, projectRoot } = args;

  const logWrapper = createLogWrapper(log);

  try {
    logWrapper.info(
      `Updating subtask by ID via direct function. ID: ${id}, ProjectRoot: ${projectRoot}`
    );

    // Check if tasksJsonPath was provided
    if (!tasksJsonPath) {
      const errorMessage = 'tasksJsonPath is required but was not provided.';
      logWrapper.error(errorMessage);
      return {
        success: false,
        error: { code: 'MISSING_ARGUMENT', message: errorMessage },
@@ -36,22 +47,22 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
      };
    }

    // Basic validation for ID format (e.g., '5.2')
    if (!id || typeof id !== 'string' || !id.includes('.')) {
      const errorMessage =
        'Invalid subtask ID format. Must be in format "parentId.subtaskId" (e.g., "5.2").';
      logWrapper.error(errorMessage);
      return {
        success: false,
        error: { code: 'INVALID_SUBTASK_ID', message: errorMessage },
        fromCache: false
      };
    }

    if (!prompt) {
      const errorMessage =
        'No prompt specified. Please provide the information to append.';
      logWrapper.error(errorMessage);
      return {
        success: false,
        error: { code: 'MISSING_PROMPT', message: errorMessage },
@@ -84,51 +95,41 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
    // Use the provided path
    const tasksPath = tasksJsonPath;

    const useResearch = research === true;

    log.info(
      `Updating subtask with ID ${subtaskIdStr} with prompt "${prompt}" and research: ${useResearch}`
    );

    const wasSilent = isSilentMode();
    if (!wasSilent) {
      enableSilentMode();
    }

    try {
      // Execute core updateSubtaskById function
      const updatedSubtask = await updateSubtaskById(
        tasksPath,
        subtaskIdStr,
        prompt,
        useResearch,
        { mcpLog: logWrapper, session, projectRoot },
        'json'
      );

      if (updatedSubtask === null) {
        const message = `Subtask ${id} or its parent task not found.`;
        logWrapper.error(message); // Log as error since it couldn't be found
        return {
          success: false,
          error: { code: 'SUBTASK_NOT_FOUND', message: message },
          fromCache: false
        };
      }

      // Subtask updated successfully
      const successMessage = `Successfully updated subtask with ID ${subtaskIdStr}`;
      logWrapper.success(successMessage);
      return {
        success: true,
        data: {
@@ -139,25 +140,35 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
          tasksPath,
          useResearch
        },
        fromCache: false
      };
    } catch (error) {
      logWrapper.error(`Error updating subtask by ID: ${error.message}`);
      return {
        success: false,
        error: {
          code: 'UPDATE_SUBTASK_CORE_ERROR',
          message: error.message || 'Unknown error updating subtask'
        },
        fromCache: false
      };
    } finally {
      if (!wasSilent && isSilentMode()) {
        disableSilentMode();
      }
    }
  } catch (error) {
    logWrapper.error(
      `Setup error in updateSubtaskByIdDirect: ${error.message}`
    );
    if (isSilentMode()) disableSilentMode();
    return {
      success: false,
      error: {
        code: 'DIRECT_FUNCTION_SETUP_ERROR',
        message: error.message || 'Unknown setup error'
      },
      fromCache: false
    };
  }
}
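The wasSilent guard recurs across these direct functions; below is a minimal standalone sketch, with the silent-mode store reimplemented inline rather than imported from scripts/modules/utils.js, so the helper names here are stand-ins only.

// Minimal stand-in for the project's silent-mode helpers (assumed semantics).
let silent = false;
const enableSilentMode = () => { silent = true; };
const disableSilentMode = () => { silent = false; };
const isSilentMode = () => silent;

// Only flip the flag if this call site turned it on, so nested callers keep their state.
async function runSilently(fn) {
  const wasSilent = isSilentMode();
  if (!wasSilent) enableSilentMode();
  try {
    return await fn();
  } finally {
    if (!wasSilent && isSilentMode()) disableSilentMode();
  }
}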
@@ -6,30 +6,40 @@
import { updateTaskById } from '../../../../scripts/modules/task-manager.js';
import {
  enableSilentMode,
  disableSilentMode,
  isSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';

/**
 * Direct function wrapper for updateTaskById with error handling.
 *
 * @param {Object} args - Command arguments containing id, prompt, useResearch, tasksJsonPath, and projectRoot.
 * @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
 * @param {string} args.id - Task ID (or subtask ID like "1.2").
 * @param {string} args.prompt - New information/context prompt.
 * @param {boolean} [args.research] - Whether to use research role.
 * @param {string} [args.projectRoot] - Project root path.
 * @param {Object} log - Logger object.
 * @param {Object} context - Context object containing session data.
 * @returns {Promise<Object>} - Result object with success status and data/error information.
 */
export async function updateTaskByIdDirect(args, log, context = {}) {
  const { session } = context;
  // Destructure expected args, including projectRoot
  const { tasksJsonPath, id, prompt, research, projectRoot } = args;

  const logWrapper = createLogWrapper(log);

  try {
    logWrapper.info(
      `Updating task by ID via direct function. ID: ${id}, ProjectRoot: ${projectRoot}`
    );

    // Check if tasksJsonPath was provided
    if (!tasksJsonPath) {
      const errorMessage = 'tasksJsonPath is required but was not provided.';
      logWrapper.error(errorMessage);
      return {
        success: false,
        error: { code: 'MISSING_ARGUMENT', message: errorMessage },
@@ -41,7 +51,7 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
    if (!id) {
      const errorMessage =
        'No task ID specified. Please provide a task ID to update.';
      logWrapper.error(errorMessage);
      return {
        success: false,
        error: { code: 'MISSING_TASK_ID', message: errorMessage },
@@ -52,7 +62,7 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
    if (!prompt) {
      const errorMessage =
        'No prompt specified. Please provide a prompt with new information for the task update.';
      logWrapper.error(errorMessage);
      return {
        success: false,
        error: { code: 'MISSING_PROMPT', message: errorMessage },
@@ -71,7 +81,7 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
      taskId = parseInt(id, 10);
      if (isNaN(taskId)) {
        const errorMessage = `Invalid task ID: ${id}. Task ID must be a positive integer or subtask ID (e.g., "5.2").`;
        logWrapper.error(errorMessage);
        return {
          success: false,
          error: { code: 'INVALID_TASK_ID', message: errorMessage },
@@ -89,66 +99,80 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
    // Get research flag
    const useResearch = research === true;

    logWrapper.info(
      `Updating task with ID ${taskId} with prompt "${prompt}" and research: ${useResearch}`
    );

    const wasSilent = isSilentMode();
    if (!wasSilent) {
      enableSilentMode();
    }

    try {
      // Execute core updateTaskById function with proper parameters
      const updatedTask = await updateTaskById(
        tasksPath,
        taskId,
        prompt,
        useResearch,
        {
          mcpLog: logWrapper,
          session,
          projectRoot
        },
        'json'
      );

      // Check if the core function indicated the task wasn't updated (e.g., status was 'done')
      if (updatedTask === null) {
        // Core function logs the reason, just return success with info
        const message = `Task ${taskId} was not updated (likely already completed).`;
        logWrapper.info(message);
        return {
          success: true,
          data: { message: message, taskId: taskId, updated: false },
          fromCache: false
        };
      }

      // Task was updated successfully
      const successMessage = `Successfully updated task with ID ${taskId} based on the prompt`;
      logWrapper.success(successMessage);
      return {
        success: true,
        data: {
          message: successMessage,
          taskId: taskId,
          tasksPath: tasksPath,
          useResearch: useResearch,
          updated: true,
          updatedTask: updatedTask
        },
        fromCache: false
      };
    } catch (error) {
      logWrapper.error(`Error updating task by ID: ${error.message}`);
      return {
        success: false,
        error: {
          code: 'UPDATE_TASK_CORE_ERROR',
          message: error.message || 'Unknown error updating task'
        },
        fromCache: false
      };
    } finally {
      if (!wasSilent && isSilentMode()) {
        disableSilentMode();
      }
    }
  } catch (error) {
    logWrapper.error(`Setup error in updateTaskByIdDirect: ${error.message}`);
    if (isSilentMode()) disableSilentMode();
    return {
      success: false,
      error: {
        code: 'DIRECT_FUNCTION_SETUP_ERROR',
        message: error.message || 'Unknown setup error'
      },
      fromCache: false
    };
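A hedged sketch of the three-way outcome handling introduced above (updated, skipped because already complete, or failed); the core update function is passed in as a stub and the wrapper name is hypothetical, while the error code follows the diff.

// Hypothetical wrapper showing the updated / skipped / failed branches from the diff.
async function runTaskUpdate(updateTaskById, taskId, prompt, logger) {
  try {
    const updatedTask = await updateTaskById(taskId, prompt);
    if (updatedTask === null) {
      // The core function signals "nothing to do" (e.g. the task is already completed).
      return {
        success: true,
        data: {
          message: `Task ${taskId} was not updated (likely already completed).`,
          taskId,
          updated: false
        }
      };
    }
    return { success: true, data: { taskId, updated: true, updatedTask } };
  } catch (error) {
    logger.error(`Error updating task by ID: ${error.message}`);
    return {
      success: false,
      error: { code: 'UPDATE_TASK_CORE_ERROR', message: error.message }
    };
  }
}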
@@ -1,128 +1,122 @@
/**
 * update-tasks.js
 * Direct function implementation for updating tasks based on new context
 */

import path from 'path';
import { updateTasks } from '../../../../scripts/modules/task-manager.js';
import { createLogWrapper } from '../../tools/utils.js';

/**
 * Direct function wrapper for updating tasks based on new context.
 *
 * @param {Object} args - Command arguments containing projectRoot, from, prompt, research options.
 * @param {Object} log - Logger object.
 * @param {Object} context - Context object containing session data.
 * @returns {Promise<Object>} - Result object with success status and data/error information.
 */
export async function updateTasksDirect(args, log, context = {}) {
  const { session } = context;
  const { from, prompt, research, file: fileArg, projectRoot } = args;

  // Create the standard logger wrapper
  const logWrapper = createLogWrapper(log);

  // --- Input Validation ---
  if (!projectRoot) {
    logWrapper.error('updateTasksDirect requires a projectRoot argument.');
    return {
      success: false,
      error: {
        code: 'MISSING_ARGUMENT',
        message: 'projectRoot is required.'
      }
    };
  }

  if (!from) {
    logWrapper.error('updateTasksDirect called without from ID');
    return {
      success: false,
      error: {
        code: 'MISSING_ARGUMENT',
        message: 'Starting task ID (from) is required'
      }
    };
  }

  if (!prompt) {
    logWrapper.error('updateTasksDirect called without prompt');
    return {
      success: false,
      error: {
        code: 'MISSING_ARGUMENT',
        message: 'Update prompt is required'
      }
    };
  }

  // Resolve tasks file path
  const tasksFile = fileArg
    ? path.resolve(projectRoot, fileArg)
    : path.resolve(projectRoot, 'tasks', 'tasks.json');

  logWrapper.info(
    `Updating tasks via direct function. From: ${from}, Research: ${research}, File: ${tasksFile}, ProjectRoot: ${projectRoot}`
  );

  enableSilentMode(); // Enable silent mode
  try {
    // Call the core updateTasks function
    const result = await updateTasks(
      tasksFile,
      from,
      prompt,
      research,
      {
        session,
        mcpLog: logWrapper,
        projectRoot
      },
      'json'
    );

    // updateTasks returns { success: true, updatedTasks: [...] } on success
    if (result && result.success && Array.isArray(result.updatedTasks)) {
      logWrapper.success(
        `Successfully updated ${result.updatedTasks.length} tasks.`
      );
      return {
        success: true,
        data: {
          message: `Successfully updated ${result.updatedTasks.length} tasks.`,
          tasksFile,
          updatedCount: result.updatedTasks.length
        }
      };
    } else {
      // Handle case where core function didn't return expected success structure
      logWrapper.error(
        'Core updateTasks function did not return a successful structure.'
      );
      return {
        success: false,
        error: {
          code: 'CORE_FUNCTION_ERROR',
          message:
            result?.message ||
            'Core function failed to update tasks or returned unexpected result.'
        }
      };
    }
  } catch (error) {
    logWrapper.error(`Error executing core updateTasks: ${error.message}`);
    return {
      success: false,
      error: {
        code: 'UPDATE_TASKS_CORE_ERROR',
        message: error.message || 'Unknown error updating tasks'
      }
    };
  } finally {
    disableSilentMode(); // Ensure silent mode is disabled
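An illustrative guard over the { success, updatedTasks } result shape checked above; the helper name and call site are hypothetical, and the error code follows the diff.

// Validate the core function's result shape before reporting success.
function summarizeUpdateResult(result, logger) {
  if (result && result.success && Array.isArray(result.updatedTasks)) {
    logger.info(`Successfully updated ${result.updatedTasks.length} tasks.`);
    return {
      success: true,
      data: { updatedCount: result.updatedTasks.length }
    };
  }
  logger.error('Core updateTasks function did not return a successful structure.');
  return {
    success: false,
    error: {
      code: 'CORE_FUNCTION_ERROR',
      message:
        result?.message ||
        'Core function failed to update tasks or returned unexpected result.'
    }
  };
}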
@@ -28,7 +28,7 @@ import { fixDependenciesDirect } from './direct-functions/fix-dependencies.js';
import { complexityReportDirect } from './direct-functions/complexity-report.js';
import { addDependencyDirect } from './direct-functions/add-dependency.js';
import { removeTaskDirect } from './direct-functions/remove-task.js';
-import { initializeProjectDirect } from './direct-functions/initialize-project-direct.js';
+import { initializeProjectDirect } from './direct-functions/initialize-project.js';
import { modelsDirect } from './direct-functions/models.js';

// Re-export utility functions
@@ -7,7 +7,8 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	getProjectRootFromSession,
+	withNormalizedProjectRoot
 } from './utils.js';
 import { addDependencyDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -35,28 +36,16 @@ export function registerAddDependencyTool(server) {
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(
 					`Adding dependency for task ${args.id} to depend on ${args.dependsOn}`
 				);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve the path to tasks.json
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -92,6 +81,6 @@ export function registerAddDependencyTool(server) {
 				log.error(`Error in addDependency tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
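The hunks in this comparison only show call sites being wrapped with `withNormalizedProjectRoot`; the wrapper itself lives in `tools/utils.js` and its body is not part of the hunks shown here. A minimal sketch of the shape such a higher-order function could take, assuming a hypothetical `normalizeProjectRoot` helper standing in for whatever cleanup the real utility performs:

```js
// Sketch only: the real implementation is in mcp-server/src/tools/utils.js.
// `normalizeProjectRoot` below is a hypothetical helper, not the project's actual code.
import path from 'path';

function normalizeProjectRoot(rawRoot) {
	if (!rawRoot) return rawRoot;
	let root = String(rawRoot);
	// Strip a file:// prefix and decode URI-encoded characters if present.
	if (root.startsWith('file://')) {
		root = decodeURIComponent(root.slice('file://'.length));
	}
	// Normalize separators so the path is usable on the server OS.
	root = root.replace(/\\/g, '/');
	return path.resolve(root);
}

export function withNormalizedProjectRoot(executeFn) {
	return async (args, context) => {
		// Normalize the incoming projectRoot before the tool body ever sees it.
		const normalizedArgs = {
			...args,
			projectRoot: normalizeProjectRoot(args.projectRoot)
		};
		return executeFn(normalizedArgs, context);
	};
}
```

Centralizing that fallback and normalization in one wrapper is what lets each tool in the hunks below delete its per-tool `rootFolder` resolution block and rely on `args.projectRoot` directly.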
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { addSubtaskDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -60,24 +60,15 @@ export function registerAddSubtaskTool(server) {
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Adding subtask with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -113,6 +104,6 @@ export function registerAddSubtaskTool(server) {
 				log.error(`Error in addSubtask tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -6,8 +6,8 @@
 import { z } from 'zod';
 import {
 	createErrorResponse,
-	getProjectRootFromSession,
-	handleApiResult
+	handleApiResult,
+	withNormalizedProjectRoot
 } from './utils.js';
 import { addTaskDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -63,26 +63,15 @@ export function registerAddTaskTool(server) {
 				.optional()
 				.describe('Whether to use research capabilities for task creation')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Starting add-task with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve the path to tasks.json
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -92,12 +81,10 @@ export function registerAddTaskTool(server) {
 					);
 				}

-				// Call the direct function
+				// Call the direct functionP
 				const result = await addTaskDirect(
 					{
-						// Pass the explicitly resolved path
 						tasksJsonPath: tasksJsonPath,
-						// Pass other relevant args
 						prompt: args.prompt,
 						title: args.title,
 						description: args.description,
@@ -105,18 +92,18 @@ export function registerAddTaskTool(server) {
 						testStrategy: args.testStrategy,
 						dependencies: args.dependencies,
 						priority: args.priority,
-						research: args.research
+						research: args.research,
+						projectRoot: args.projectRoot
 					},
 					log,
 					{ session }
 				);

-				// Return the result
 				return handleApiResult(result, log);
 			} catch (error) {
 				log.error(`Error in add-task tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -4,121 +4,128 @@
  */

 import { z } from 'zod';
-import { handleApiResult, createErrorResponse } from './utils.js';
-import { analyzeTaskComplexityDirect } from '../core/direct-functions/analyze-task-complexity.js';
-import { findTasksJsonPath } from '../core/utils/path-utils.js';
 import path from 'path';
-import fs from 'fs';
+import fs from 'fs'; // Import fs for directory check/creation
+import {
+	handleApiResult,
+	createErrorResponse,
+	withNormalizedProjectRoot
+} from './utils.js';
+import { analyzeTaskComplexityDirect } from '../core/task-master-core.js'; // Assuming core functions are exported via task-master-core.js
+import { findTasksJsonPath } from '../core/utils/path-utils.js';

 /**
- * Register the analyze tool with the MCP server
+ * Register the analyze_project_complexity tool
  * @param {Object} server - FastMCP server instance
  */
-export function registerAnalyzeTool(server) {
+export function registerAnalyzeProjectComplexityTool(server) {
 	server.addTool({
 		name: 'analyze_project_complexity',
 		description:
-			'Analyze task complexity and generate expansion recommendations',
+			'Analyze task complexity and generate expansion recommendations.',
 		parameters: z.object({
+			threshold: z.coerce // Use coerce for number conversion from string if needed
+				.number()
+				.int()
+				.min(1)
+				.max(10)
+				.optional()
+				.default(5) // Default threshold
+				.describe('Complexity score threshold (1-10) to recommend expansion.'),
+			research: z
+				.boolean()
+				.optional()
+				.default(false)
+				.describe('Use Perplexity AI for research-backed analysis.'),
 			output: z
 				.string()
 				.optional()
 				.describe(
-					'Output file path relative to project root (default: scripts/task-complexity-report.json)'
+					'Output file path relative to project root (default: scripts/task-complexity-report.json).'
 				),
-			threshold: z.coerce
-				.number()
-				.min(1)
-				.max(10)
-				.optional()
-				.describe(
-					'Minimum complexity score to recommend expansion (1-10) (default: 5)'
-				),
 			file: z
 				.string()
 				.optional()
 				.describe(
-					'Absolute path to the tasks file in the /tasks folder inside the project root (default: tasks/tasks.json)'
+					'Path to the tasks file relative to project root (default: tasks/tasks.json).'
 				),
-			research: z
-				.boolean()
-				.optional()
-				.default(false)
-				.describe('Use research role for complexity analysis'),
 			projectRoot: z
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+			const toolName = 'analyze_project_complexity'; // Define tool name for logging
 			try {
 				log.info(
-					`Executing analyze_project_complexity tool with args: ${JSON.stringify(args)}`
+					`Executing ${toolName} tool with args: ${JSON.stringify(args)}`
 				);

-				const rootFolder = args.projectRoot;
-				if (!rootFolder) {
-					return createErrorResponse('projectRoot is required.');
-				}
-				if (!path.isAbsolute(rootFolder)) {
-					return createErrorResponse('projectRoot must be an absolute path.');
-				}
-
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
+					log.info(`${toolName}: Resolved tasks path: ${tasksJsonPath}`);
 				} catch (error) {
-					log.error(`Error finding tasks.json: ${error.message}`);
+					log.error(`${toolName}: Error finding tasks.json: ${error.message}`);
 					return createErrorResponse(
-						`Failed to find tasks.json within project root '${rootFolder}': ${error.message}`
+						`Failed to find tasks.json within project root '${args.projectRoot}': ${error.message}`
 					);
 				}

 				const outputPath = args.output
-					? path.resolve(rootFolder, args.output)
-					: path.resolve(rootFolder, 'scripts', 'task-complexity-report.json');
+					? path.resolve(args.projectRoot, args.output)
+					: path.resolve(
+							args.projectRoot,
+							'scripts',
+							'task-complexity-report.json'
+						);
+
+				log.info(`${toolName}: Report output path: ${outputPath}`);
+
+				// Ensure output directory exists
 				const outputDir = path.dirname(outputPath);
 				try {
 					if (!fs.existsSync(outputDir)) {
 						fs.mkdirSync(outputDir, { recursive: true });
-						log.info(`Created output directory: ${outputDir}`);
+						log.info(`${toolName}: Created output directory: ${outputDir}`);
 					}
 				} catch (dirError) {
 					log.error(
-						`Failed to create output directory ${outputDir}: ${dirError.message}`
+						`${toolName}: Failed to create output directory ${outputDir}: ${dirError.message}`
 					);
 					return createErrorResponse(
 						`Failed to create output directory: ${dirError.message}`
 					);
 				}

+				// 3. Call Direct Function - Pass projectRoot in first arg object
 				const result = await analyzeTaskComplexityDirect(
 					{
 						tasksJsonPath: tasksJsonPath,
 						outputPath: outputPath,
 						threshold: args.threshold,
-						research: args.research
+						research: args.research,
+						projectRoot: args.projectRoot
 					},
 					log,
 					{ session }
 				);

-				if (result.success) {
-					log.info(`Tool analyze_project_complexity finished successfully.`);
-				} else {
-					log.error(
-						`Tool analyze_project_complexity failed: ${result.error?.message || 'Unknown error'}`
-					);
-				}
+				// 4. Handle Result
+				log.info(
+					`${toolName}: Direct function result: success=${result.success}`
+				);

 				return handleApiResult(result, log, 'Error analyzing task complexity');
 			} catch (error) {
-				log.error(`Critical error in analyze tool execute: ${error.message}`);
-				return createErrorResponse(`Internal tool error: ${error.message}`);
+				log.error(
+					`Critical error in ${toolName} tool execute: ${error.message}`
+				);
+				return createErrorResponse(
+					`Internal tool error (${toolName}): ${error.message}`
+				);
 			}
-		}
+		})
 	});
 }
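The rewritten `analyze_project_complexity` schema above leans on `z.coerce` so that a threshold arriving as a string over MCP still validates as a number. A small illustration of that behavior, assuming zod v3's coercion API:

```js
import { z } from 'zod';

// Mirrors the threshold schema from the hunk above.
const threshold = z.coerce.number().int().min(1).max(10).optional().default(5);

threshold.parse('7');       // 7: string input is coerced to a number before validation
threshold.parse(undefined); // 5: the default applies when the value is omitted
```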
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { clearSubtasksDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -41,26 +41,15 @@ export function registerClearSubtasksTool(server) {
 				message: "Either 'id' or 'all' parameter must be provided",
 				path: ['id', 'all']
 			}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Clearing subtasks with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve the path to tasks.json
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -72,14 +61,11 @@ export function registerClearSubtasksTool(server) {

 				const result = await clearSubtasksDirect(
 					{
-						// Pass the explicitly resolved path
 						tasksJsonPath: tasksJsonPath,
-						// Pass other relevant args
 						id: args.id,
 						all: args.all
 					},
 					log
-					// Remove context object as clearSubtasksDirect likely doesn't need session/reportProgress
 				);

 				if (result.success) {
@@ -93,6 +79,6 @@ export function registerClearSubtasksTool(server) {
 				log.error(`Error in clearSubtasks tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { complexityReportDirect } from '../core/task-master-core.js';
 import path from 'path';
@@ -31,34 +31,24 @@ export function registerComplexityReportTool(server) {
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(
 					`Getting complexity report with args: ${JSON.stringify(args)}`
 				);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve the path to the complexity report file
-				// Default to scripts/task-complexity-report.json relative to root
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				const reportPath = args.file
-					? path.resolve(rootFolder, args.file)
-					: path.resolve(rootFolder, 'scripts', 'task-complexity-report.json');
+					? path.resolve(args.projectRoot, args.file)
+					: path.resolve(
+							args.projectRoot,
+							'scripts',
+							'task-complexity-report.json'
+						);

 				const result = await complexityReportDirect(
 					{
-						// Pass the explicitly resolved path
 						reportPath: reportPath
-						// No other args specific to this tool
 					},
 					log
 				);
@@ -84,6 +74,6 @@ export function registerComplexityReportTool(server) {
 					`Failed to retrieve complexity report: ${error.message}`
 				);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { expandAllTasksDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -59,25 +59,16 @@ export function registerExpandAllTool(server) {
 					'Absolute path to the project root directory (derived from session if possible)'
 				)
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(
 					`Tool expand_all execution started with args: ${JSON.stringify(args)}`
 				);

-				const rootFolder = getProjectRootFromSession(session, log);
-				if (!rootFolder) {
-					log.error('Could not determine project root from session.');
-					return createErrorResponse(
-						'Could not determine project root from session.'
-					);
-				}
-				log.info(`Project root determined: ${rootFolder}`);
-
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 					log.info(`Resolved tasks.json path: ${tasksJsonPath}`);
@@ -94,7 +85,8 @@ export function registerExpandAllTool(server) {
 						num: args.num,
 						research: args.research,
 						prompt: args.prompt,
-						force: args.force
+						force: args.force,
+						projectRoot: args.projectRoot
 					},
 					log,
 					{ session }
@@ -112,6 +104,6 @@ export function registerExpandAllTool(server) {
 					`An unexpected error occurred: ${error.message}`
 				);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { expandTaskDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -47,28 +47,15 @@ export function registerExpandTaskTool(server) {
 				.default(false)
 				.describe('Force expansion even if subtasks exist')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Starting expand-task with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				log.info(`Project root resolved to: ${rootFolder}`);
-
-				// Resolve the path to tasks.json using the utility
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -78,29 +65,25 @@ export function registerExpandTaskTool(server) {
 					);
 				}

-				// Call direct function with only session in the context, not reportProgress
-				// Use the pattern recommended in the MCP guidelines
 				const result = await expandTaskDirect(
 					{
-						// Pass the explicitly resolved path
 						tasksJsonPath: tasksJsonPath,
-						// Pass other relevant args
 						id: args.id,
 						num: args.num,
 						research: args.research,
 						prompt: args.prompt,
-						force: args.force // Need to add force to parameters
+						force: args.force,
+						projectRoot: args.projectRoot
 					},
 					log,
 					{ session }
-				); // Only pass session, NOT reportProgress
+				);

-				// Return the result
 				return handleApiResult(result, log, 'Error expanding task');
 			} catch (error) {
-				log.error(`Error in expand task tool: ${error.message}`);
+				log.error(`Error in expand-task tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { fixDependenciesDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -26,24 +26,15 @@ export function registerFixDependenciesTool(server) {
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Fixing dependencies with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -71,6 +62,6 @@ export function registerFixDependenciesTool(server) {
 				log.error(`Error in fixDependencies tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { generateTaskFilesDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -32,26 +32,15 @@ export function registerGenerateTool(server) {
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Generating task files with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve the path to tasks.json
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -61,17 +50,14 @@ export function registerGenerateTool(server) {
 					);
 				}

-				// Determine output directory: use explicit arg or default to tasks.json directory
 				const outputDir = args.output
-					? path.resolve(rootFolder, args.output) // Resolve relative to root if needed
+					? path.resolve(args.projectRoot, args.output)
 					: path.dirname(tasksJsonPath);

 				const result = await generateTaskFilesDirect(
 					{
-						// Pass the explicitly resolved paths
 						tasksJsonPath: tasksJsonPath,
 						outputDir: outputDir
-						// No other args specific to this tool
 					},
 					log
 				);
@@ -89,6 +75,6 @@ export function registerGenerateTool(server) {
 				log.error(`Error in generate tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { showTaskDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -21,8 +21,10 @@ function processTaskResponse(data) {
 	if (!data) return data;

 	// If we have the expected structure with task and allTasks
-	if (data.task) {
-		// Return only the task object, removing the allTasks array
+	if (typeof data === 'object' && data !== null && data.id && data.title) {
+		// If the data itself looks like the task object, return it
+		return data;
+	} else if (data.task) {
 		return data.task;
 	}

@@ -44,44 +46,33 @@ export function registerShowTaskTool(server) {
 				.string()
 				.optional()
 				.describe("Filter subtasks by status (e.g., 'pending', 'done')"),
-			file: z.string().optional().describe('Absolute path to the tasks file'),
+			file: z
+				.string()
+				.optional()
+				.describe('Path to the tasks file relative to project root'),
 			projectRoot: z
 				.string()
-				.describe('The directory of the project. Must be an absolute path.')
+				.optional()
+				.describe(
+					'Absolute path to the project root directory (Optional, usually from session)'
+				)
 		}),
-		execute: async (args, { log, session }) => {
-			// Log the session right at the start of execute
-			log.info(
-				`Session object received in execute: ${JSON.stringify(session)}`
-			); // Use JSON.stringify for better visibility
+		execute: withNormalizedProjectRoot(async (args, { log }) => {
+			const { id, file, status, projectRoot } = args;

 			try {
 				log.info(
-					`Getting task details for ID: ${args.id}${args.status ? ` (filtering subtasks by status: ${args.status})` : ''}`
+					`Getting task details for ID: ${id}${status ? ` (filtering subtasks by status: ${status})` : ''} in root: ${projectRoot}`
 				);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				log.info(`Attempting to use project root: ${rootFolder}`); // Log the final resolved root
-
-				log.info(`Root folder: ${rootFolder}`); // Log the final resolved root
-
-				// Resolve the path to tasks.json
+				// Resolve the path to tasks.json using the NORMALIZED projectRoot from args
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: projectRoot, file: file },
 						log
 					);
+					log.info(`Resolved tasks path: ${tasksJsonPath}`);
 				} catch (error) {
 					log.error(`Error finding tasks.json: ${error.message}`);
 					return createErrorResponse(
@@ -89,13 +80,13 @@ export function registerShowTaskTool(server) {
 					);
 				}

-				log.info(`Attempting to use tasks file path: ${tasksJsonPath}`);
-
+				// Call the direct function, passing the normalized projectRoot
 				const result = await showTaskDirect(
 					{
 						tasksJsonPath: tasksJsonPath,
-						id: args.id,
-						status: args.status
+						id: id,
+						status: status,
+						projectRoot: projectRoot
 					},
 					log
 				);
@@ -108,7 +99,7 @@ export function registerShowTaskTool(server) {
 					log.error(`Failed to get task: ${result.error.message}`);
 				}

-				// Use our custom processor function to remove allTasks from the response
+				// Use our custom processor function
 				return handleApiResult(
 					result,
 					log,
@@ -116,9 +107,9 @@ export function registerShowTaskTool(server) {
 					processTaskResponse
 				);
 			} catch (error) {
-				log.error(`Error in get-task tool: ${error.message}\n${error.stack}`); // Add stack trace
+				log.error(`Error in get-task tool: ${error.message}\n${error.stack}`);
 				return createErrorResponse(`Failed to get task: ${error.message}`);
 			}
-		}
+		})
 	});
 }
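For reference on the `processTaskResponse` change above: the processor now returns the payload untouched when it already looks like a task object (it has `id` and `title`), and otherwise falls back to unwrapping `data.task`. A small illustration with made-up task values, assuming `processTaskResponse` is in scope:

```js
const bare = { id: 42, title: 'Set up CI' };
const wrapped = { task: bare, allTasks: [bare] };

processTaskResponse(bare);    // returns { id: 42, title: 'Set up CI' } as-is
processTaskResponse(wrapped); // returns the same object, unwrapped from .task
```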
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	createErrorResponse,
 	handleApiResult,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { listTasksDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -42,31 +42,19 @@ export function registerListTasksTool(server) {
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Getting tasks with filters: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve the path to tasks.json
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
 					log.error(`Error finding tasks.json: ${error.message}`);
-					// Use the error message from findTasksJsonPath for better context
 					return createErrorResponse(
 						`Failed to find tasks.json: ${error.message}`
 					);
@@ -89,7 +77,7 @@ export function registerListTasksTool(server) {
 				log.error(`Error getting tasks: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }

@@ -17,7 +17,7 @@ import { registerExpandTaskTool } from './expand-task.js';
 import { registerAddTaskTool } from './add-task.js';
 import { registerAddSubtaskTool } from './add-subtask.js';
 import { registerRemoveSubtaskTool } from './remove-subtask.js';
-import { registerAnalyzeTool } from './analyze.js';
+import { registerAnalyzeProjectComplexityTool } from './analyze.js';
 import { registerClearSubtasksTool } from './clear-subtasks.js';
 import { registerExpandAllTool } from './expand-all.js';
 import { registerRemoveDependencyTool } from './remove-dependency.js';
@@ -63,7 +63,7 @@ export function registerTaskMasterTools(server) {
 	registerClearSubtasksTool(server);

 	// Group 5: Task Analysis & Expansion
-	registerAnalyzeTool(server);
+	registerAnalyzeProjectComplexityTool(server);
 	registerExpandTaskTool(server);
 	registerExpandAllTool(server);

@@ -1,5 +1,9 @@
 import { z } from 'zod';
-import { createErrorResponse, handleApiResult } from './utils.js';
+import {
+	createErrorResponse,
+	handleApiResult,
+	withNormalizedProjectRoot
+} from './utils.js';
 import { initializeProjectDirect } from '../core/task-master-core.js';

 export function registerInitializeProjectTool(server) {
@@ -33,19 +37,10 @@ export function registerInitializeProjectTool(server) {
 					'The root directory for the project. ALWAYS SET THIS TO THE PROJECT ROOT DIRECTORY. IF NOT SET, THE TOOL WILL NOT WORK.'
 				)
 		}),
-		execute: async (args, context) => {
+		execute: withNormalizedProjectRoot(async (args, context) => {
 			const { log } = context;
 			const session = context.session;

-			log.info(
-				'>>> Full Context Received by Tool:',
-				JSON.stringify(context, null, 2)
-			);
-			log.info(`Context received in tool function: ${context}`);
-			log.info(
-				`Session received in tool function: ${session ? session : 'undefined'}`
-			);
-
 			try {
 				log.info(
 					`Executing initialize_project tool with args: ${JSON.stringify(args)}`
@@ -59,6 +54,6 @@ export function registerInitializeProjectTool(server) {
 				log.error(errorMessage, error);
 				return createErrorResponse(errorMessage, { details: error.stack });
 			}
-		}
+		})
 	});
 }
@@ -5,9 +5,9 @@

 import { z } from 'zod';
 import {
-	getProjectRootFromSession,
 	handleApiResult,
-	createErrorResponse
+	createErrorResponse,
+	withNormalizedProjectRoot
 } from './utils.js';
 import { modelsDirect } from '../core/task-master-core.js';

@@ -42,7 +42,9 @@ export function registerModelsTool(server) {
 			listAvailableModels: z
 				.boolean()
 				.optional()
-				.describe('List all available models not currently in use.'),
+				.describe(
+					'List all available models not currently in use. Input/output costs values are in dollars (3 is $3.00).'
+				),
 			projectRoot: z
 				.string()
 				.optional()
@@ -56,34 +58,22 @@ export function registerModelsTool(server) {
 				.optional()
 				.describe('Indicates the set model ID is a custom Ollama model.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Starting models tool with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Call the direct function
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				const result = await modelsDirect(
-					{ ...args, projectRoot: rootFolder },
+					{ ...args, projectRoot: args.projectRoot },
 					log,
 					{ session }
 				);

-				// Handle and return the result
 				return handleApiResult(result, log);
 			} catch (error) {
 				log.error(`Error in models tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
 	handleApiResult,
 	createErrorResponse,
-	getProjectRootFromSession
+	withNormalizedProjectRoot
 } from './utils.js';
 import { nextTaskDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -27,26 +27,15 @@ export function registerNextTaskTool(server) {
 				.string()
 				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
 				log.info(`Finding next task with args: ${JSON.stringify(args)}`);

-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				// Ensure project root was determined
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve the path to tasks.json
+				// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
 				let tasksJsonPath;
 				try {
 					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
+						{ projectRoot: args.projectRoot, file: args.file },
 						log
 					);
 				} catch (error) {
@@ -58,9 +47,7 @@ export function registerNextTaskTool(server) {

 				const result = await nextTaskDirect(
 					{
-						// Pass the explicitly resolved path
 						tasksJsonPath: tasksJsonPath
-						// No other args specific to this tool
 					},
 					log
 				);
@@ -80,6 +67,6 @@ export function registerNextTaskTool(server) {
 				log.error(`Error in nextTask tool: ${error.message}`);
 				return createErrorResponse(error.message);
 			}
-		}
+		})
 	});
 }
@@ -4,16 +4,16 @@
  */

 import { z } from 'zod';
+import path from 'path';
 import {
-	getProjectRootFromSession,
 	handleApiResult,
-	createErrorResponse
+	createErrorResponse,
+	withNormalizedProjectRoot
 } from './utils.js';
 import { parsePRDDirect } from '../core/task-master-core.js';
-import { resolveProjectPaths } from '../core/utils/path-utils.js';

 /**
- * Register the parsePRD tool with the MCP server
+ * Register the parse_prd tool
  * @param {Object} server - FastMCP server instance
  */
 export function registerParsePRDTool(server) {
@@ -42,72 +42,50 @@ export function registerParsePRDTool(server) {
 			force: z
 				.boolean()
 				.optional()
-				.describe('Allow overwriting an existing tasks.json file.'),
+				.default(false)
+				.describe('Overwrite existing output file without prompting.'),
 			append: z
 				.boolean()
 				.optional()
-				.describe(
-					'Append new tasks to existing tasks.json instead of overwriting'
-				),
+				.default(false)
+				.describe('Append generated tasks to existing file.'),
 			projectRoot: z
 				.string()
-				.describe('The directory of the project. Must be absolute path.')
+				.describe('The directory of the project. Must be an absolute path.')
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+			const toolName = 'parse_prd';
 			try {
-				log.info(`Parsing PRD with args: ${JSON.stringify(args)}`);
-				// Get project root from args or session
-				const rootFolder =
-					args.projectRoot || getProjectRootFromSession(session, log);
-
-				if (!rootFolder) {
-					return createErrorResponse(
-						'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-					);
-				}
-
-				// Resolve input (PRD) and output (tasks.json) paths using the utility
-				const { projectRoot, prdPath, tasksJsonPath } = resolveProjectPaths(
-					rootFolder,
-					args,
-					log
+				log.info(
+					`Executing ${toolName} tool with args: ${JSON.stringify(args)}`
 				);

-				// Check if PRD path was found (resolveProjectPaths returns null if not found and not provided)
-				if (!prdPath) {
-					return createErrorResponse(
-						'No PRD document found or provided. Please ensure a PRD file exists (e.g., PRD.md) or provide a valid input file path.'
-					);
-				}
-
-				// Call the direct function with fully resolved paths
+				// Call Direct Function - Pass relevant args including projectRoot
 				const result = await parsePRDDirect(
 					{
-						projectRoot: projectRoot,
-						input: prdPath,
-						output: tasksJsonPath,
+						input: args.input,
+						output: args.output,
 						numTasks: args.numTasks,
 						force: args.force,
-						append: args.append
+						append: args.append,
+						projectRoot: args.projectRoot
 					},
 					log,
 					{ session }
 				);

-				if (result.success) {
-					log.info(`Successfully parsed PRD: ${result.data.message}`);
-				} else {
-					log.error(
-						`Failed to parse PRD: ${result.error?.message || 'Unknown error'}`
-					);
-				}
+				log.info(
+					`${toolName}: Direct function result: success=${result.success}`
+				);

 				return handleApiResult(result, log, 'Error parsing PRD');
 			} catch (error) {
-				log.error(`Error in parse-prd tool: ${error.message}`);
-				return createErrorResponse(error.message);
+				log.error(
+					`Critical error in ${toolName} tool execute: ${error.message}`
+				);
+				return createErrorResponse(
+					`Internal tool error (${toolName}): ${error.message}`
+				);
 			}
-		}
+		})
 	});
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
|
|||||||
import {
|
import {
|
||||||
handleApiResult,
|
handleApiResult,
|
||||||
createErrorResponse,
|
createErrorResponse,
|
||||||
getProjectRootFromSession
|
withNormalizedProjectRoot
|
||||||
} from './utils.js';
|
} from './utils.js';
|
||||||
import { removeDependencyDirect } from '../core/task-master-core.js';
|
import { removeDependencyDirect } from '../core/task-master-core.js';
|
||||||
import { findTasksJsonPath } from '../core/utils/path-utils.js';
|
import { findTasksJsonPath } from '../core/utils/path-utils.js';
|
||||||
@@ -33,28 +33,17 @@ export function registerRemoveDependencyTool(server) {
|
|||||||
.string()
|
.string()
|
||||||
.describe('The directory of the project. Must be an absolute path.')
|
.describe('The directory of the project. Must be an absolute path.')
|
||||||
}),
|
}),
|
||||||
execute: async (args, { log, session }) => {
|
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||||
try {
|
try {
|
||||||
log.info(
|
log.info(
|
||||||
`Removing dependency for task ${args.id} from ${args.dependsOn} with args: ${JSON.stringify(args)}`
|
`Removing dependency for task ${args.id} from ${args.dependsOn} with args: ${JSON.stringify(args)}`
|
||||||
);
|
);
|
||||||
|
|
||||||
// Get project root from args or session
|
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||||
const rootFolder =
|
|
||||||
args.projectRoot || getProjectRootFromSession(session, log);
|
|
||||||
|
|
||||||
// Ensure project root was determined
|
|
||||||
if (!rootFolder) {
|
|
||||||
return createErrorResponse(
|
|
||||||
'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Resolve the path to tasks.json
|
|
||||||
let tasksJsonPath;
|
let tasksJsonPath;
|
||||||
try {
|
try {
|
||||||
tasksJsonPath = findTasksJsonPath(
|
tasksJsonPath = findTasksJsonPath(
|
||||||
{ projectRoot: rootFolder, file: args.file },
|
{ projectRoot: args.projectRoot, file: args.file },
|
||||||
log
|
log
|
||||||
);
|
);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -66,9 +55,7 @@ export function registerRemoveDependencyTool(server) {
|
|||||||
|
|
||||||
const result = await removeDependencyDirect(
|
const result = await removeDependencyDirect(
|
||||||
{
|
{
|
||||||
// Pass the explicitly resolved path
|
|
||||||
tasksJsonPath: tasksJsonPath,
|
tasksJsonPath: tasksJsonPath,
|
||||||
// Pass other relevant args
|
|
||||||
id: args.id,
|
id: args.id,
|
||||||
dependsOn: args.dependsOn
|
dependsOn: args.dependsOn
|
||||||
},
|
},
|
||||||
@@ -86,6 +73,6 @@ export function registerRemoveDependencyTool(server) {
|
|||||||
log.error(`Error in removeDependency tool: ${error.message}`);
|
log.error(`Error in removeDependency tool: ${error.message}`);
|
||||||
return createErrorResponse(error.message);
|
return createErrorResponse(error.message);
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
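The same conversion is applied to each tool below: the manual `getProjectRootFromSession` / `rootFolder` plumbing is dropped and `execute` is wrapped in the `withNormalizedProjectRoot` higher-order function, so the handler always receives a normalized `args.projectRoot`. A minimal sketch of the resulting registration shape, assuming a hypothetical `example_tool` and an invented schema; only `withNormalizedProjectRoot`, `findTasksJsonPath`, `handleApiResult`, and `createErrorResponse` come from the diff itself, and `server.addTool` is assumed to be the FastMCP registration call used elsewhere in the repo:

```javascript
import { z } from 'zod';
import {
	handleApiResult,
	createErrorResponse,
	withNormalizedProjectRoot
} from './utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

export function registerExampleTool(server) {
	server.addTool({
		name: 'example_tool', // hypothetical tool name for illustration
		parameters: z.object({
			file: z.string().optional().describe('Path to the tasks file'),
			projectRoot: z.string().describe('The directory of the project.')
		}),
		// The HOF resolves projectRoot from args (or the session), normalizes it,
		// and injects it back into args before the inner function runs.
		execute: withNormalizedProjectRoot(async (args, { log }) => {
			try {
				const tasksJsonPath = findTasksJsonPath(
					{ projectRoot: args.projectRoot, file: args.file },
					log
				);
				// ...call the matching direct function with tasksJsonPath here...
				return handleApiResult(
					{ success: true, data: { tasksJsonPath } },
					log,
					'Error in example tool'
				);
			} catch (error) {
				return createErrorResponse(error.message);
			}
		})
	});
}
```

The inner handler no longer needs its own root-resolution error branch; missing or invalid roots are rejected by the wrapper before it is called.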
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
   handleApiResult,
   createErrorResponse,
-  getProjectRootFromSession
+  withNormalizedProjectRoot
 } from './utils.js';
 import { removeSubtaskDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -46,26 +46,15 @@ export function registerRemoveSubtaskTool(server) {
         .string()
         .describe('The directory of the project. Must be an absolute path.')
     }),
-    execute: async (args, { log, session }) => {
+    execute: withNormalizedProjectRoot(async (args, { log }) => {
       try {
         log.info(`Removing subtask with args: ${JSON.stringify(args)}`);

-        // Get project root from args or session
-        const rootFolder =
-          args.projectRoot || getProjectRootFromSession(session, log);
-
-        // Ensure project root was determined
-        if (!rootFolder) {
-          return createErrorResponse(
-            'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-          );
-        }
-
-        // Resolve the path to tasks.json
+        // Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
         let tasksJsonPath;
         try {
           tasksJsonPath = findTasksJsonPath(
-            { projectRoot: rootFolder, file: args.file },
+            { projectRoot: args.projectRoot, file: args.file },
             log
           );
         } catch (error) {
@@ -77,9 +66,7 @@ export function registerRemoveSubtaskTool(server) {

         const result = await removeSubtaskDirect(
           {
-            // Pass the explicitly resolved path
             tasksJsonPath: tasksJsonPath,
-            // Pass other relevant args
             id: args.id,
             convert: args.convert,
             skipGenerate: args.skipGenerate
@@ -98,6 +85,6 @@ export function registerRemoveSubtaskTool(server) {
         log.error(`Error in removeSubtask tool: ${error.message}`);
         return createErrorResponse(error.message);
       }
-    }
+    })
   });
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
   handleApiResult,
   createErrorResponse,
-  getProjectRootFromSession
+  withNormalizedProjectRoot
 } from './utils.js';
 import { removeTaskDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -35,28 +35,15 @@ export function registerRemoveTaskTool(server) {
         .optional()
         .describe('Whether to skip confirmation prompt (default: false)')
     }),
-    execute: async (args, { log, session }) => {
+    execute: withNormalizedProjectRoot(async (args, { log }) => {
       try {
         log.info(`Removing task(s) with ID(s): ${args.id}`);

-        // Get project root from args or session
-        const rootFolder =
-          args.projectRoot || getProjectRootFromSession(session, log);
-
-        // Ensure project root was determined
-        if (!rootFolder) {
-          return createErrorResponse(
-            'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-          );
-        }
-
-        log.info(`Using project root: ${rootFolder}`);
-
-        // Resolve the path to tasks.json
+        // Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
         let tasksJsonPath;
         try {
           tasksJsonPath = findTasksJsonPath(
-            { projectRoot: rootFolder, file: args.file },
+            { projectRoot: args.projectRoot, file: args.file },
             log
           );
         } catch (error) {
@@ -68,7 +55,6 @@ export function registerRemoveTaskTool(server) {

         log.info(`Using tasks file path: ${tasksJsonPath}`);

-        // Assume client has already handled confirmation if needed
         const result = await removeTaskDirect(
           {
             tasksJsonPath: tasksJsonPath,
@@ -88,6 +74,6 @@ export function registerRemoveTaskTool(server) {
         log.error(`Error in remove-task tool: ${error.message}`);
         return createErrorResponse(`Failed to remove task: ${error.message}`);
       }
-    }
+    })
   });
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
   handleApiResult,
   createErrorResponse,
-  getProjectRootFromSession
+  withNormalizedProjectRoot
 } from './utils.js';
 import { setTaskStatusDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -36,26 +36,15 @@ export function registerSetTaskStatusTool(server) {
         .string()
         .describe('The directory of the project. Must be an absolute path.')
     }),
-    execute: async (args, { log, session }) => {
+    execute: withNormalizedProjectRoot(async (args, { log }) => {
       try {
         log.info(`Setting status of task(s) ${args.id} to: ${args.status}`);

-        // Get project root from args or session
-        const rootFolder =
-          args.projectRoot || getProjectRootFromSession(session, log);
-
-        // Ensure project root was determined
-        if (!rootFolder) {
-          return createErrorResponse(
-            'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-          );
-        }
-
-        // Resolve the path to tasks.json
+        // Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
         let tasksJsonPath;
         try {
           tasksJsonPath = findTasksJsonPath(
-            { projectRoot: rootFolder, file: args.file },
+            { projectRoot: args.projectRoot, file: args.file },
             log
           );
         } catch (error) {
@@ -65,19 +54,15 @@ export function registerSetTaskStatusTool(server) {
           );
         }

-        // Call the direct function with the resolved path
         const result = await setTaskStatusDirect(
           {
-            // Pass the explicitly resolved path
             tasksJsonPath: tasksJsonPath,
-            // Pass other relevant args
             id: args.id,
             status: args.status
           },
           log
         );

-        // Log the result
         if (result.success) {
           log.info(
             `Successfully updated status for task(s) ${args.id} to "${args.status}": ${result.data.message}`
@@ -88,7 +73,6 @@ export function registerSetTaskStatusTool(server) {
           );
         }

-        // Format and return the result
         return handleApiResult(result, log, 'Error setting task status');
       } catch (error) {
         log.error(`Error in setTaskStatus tool: ${error.message}`);
@@ -96,6 +80,6 @@ export function registerSetTaskStatusTool(server) {
           `Error setting task status: ${error.message}`
         );
       }
-    }
+    })
   });
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
   handleApiResult,
   createErrorResponse,
-  getProjectRootFromSession
+  withNormalizedProjectRoot
 } from './utils.js';
 import { updateSubtaskByIdDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -37,30 +37,19 @@ export function registerUpdateSubtaskTool(server) {
         .string()
         .describe('The directory of the project. Must be an absolute path.')
     }),
-    execute: async (args, { log, session }) => {
+    execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+      const toolName = 'update_subtask';
       try {
         log.info(`Updating subtask with args: ${JSON.stringify(args)}`);

-        // Get project root from args or session
-        const rootFolder =
-          args.projectRoot || getProjectRootFromSession(session, log);
-
-        // Ensure project root was determined
-        if (!rootFolder) {
-          return createErrorResponse(
-            'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-          );
-        }
-
-        // Resolve the path to tasks.json
         let tasksJsonPath;
         try {
           tasksJsonPath = findTasksJsonPath(
-            { projectRoot: rootFolder, file: args.file },
+            { projectRoot: args.projectRoot, file: args.file },
             log
           );
         } catch (error) {
-          log.error(`Error finding tasks.json: ${error.message}`);
+          log.error(`${toolName}: Error finding tasks.json: ${error.message}`);
           return createErrorResponse(
             `Failed to find tasks.json: ${error.message}`
           );
@@ -68,12 +57,11 @@ export function registerUpdateSubtaskTool(server) {

         const result = await updateSubtaskByIdDirect(
           {
-            // Pass the explicitly resolved path
             tasksJsonPath: tasksJsonPath,
-            // Pass other relevant args
             id: args.id,
             prompt: args.prompt,
-            research: args.research
+            research: args.research,
+            projectRoot: args.projectRoot
           },
           log,
           { session }
@@ -89,9 +77,13 @@ export function registerUpdateSubtaskTool(server) {

         return handleApiResult(result, log, 'Error updating subtask');
       } catch (error) {
-        log.error(`Error in update_subtask tool: ${error.message}`);
-        return createErrorResponse(error.message);
+        log.error(
+          `Critical error in ${toolName} tool execute: ${error.message}`
+        );
+        return createErrorResponse(
+          `Internal tool error (${toolName}): ${error.message}`
+        );
       }
-    }
+    })
   });
 }
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
   handleApiResult,
   createErrorResponse,
-  getProjectRootFromSession
+  withNormalizedProjectRoot
 } from './utils.js';
 import { updateTaskByIdDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -23,7 +23,7 @@ export function registerUpdateTaskTool(server) {
       'Updates a single task by ID with new information or context provided in the prompt.',
     parameters: z.object({
       id: z
-        .string()
+        .string() // ID can be number or string like "1.2"
         .describe(
           "ID of the task (e.g., '15') to update. Subtasks are supported using the update-subtask tool."
         ),
@@ -39,61 +39,53 @@ export function registerUpdateTaskTool(server) {
         .string()
         .describe('The directory of the project. Must be an absolute path.')
     }),
-    execute: async (args, { log, session }) => {
+    execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+      const toolName = 'update_task';
       try {
-        log.info(`Updating task with args: ${JSON.stringify(args)}`);
+        log.info(
+          `Executing ${toolName} tool with args: ${JSON.stringify(args)}`
+        );

-        // Get project root from args or session
-        const rootFolder =
-          args.projectRoot || getProjectRootFromSession(session, log);
-
-        // Ensure project root was determined
-        if (!rootFolder) {
-          return createErrorResponse(
-            'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-          );
-        }
-
-        // Resolve the path to tasks.json
         let tasksJsonPath;
         try {
           tasksJsonPath = findTasksJsonPath(
-            { projectRoot: rootFolder, file: args.file },
+            { projectRoot: args.projectRoot, file: args.file },
             log
           );
+          log.info(`${toolName}: Resolved tasks path: ${tasksJsonPath}`);
         } catch (error) {
-          log.error(`Error finding tasks.json: ${error.message}`);
+          log.error(`${toolName}: Error finding tasks.json: ${error.message}`);
           return createErrorResponse(
             `Failed to find tasks.json: ${error.message}`
           );
         }

+        // 3. Call Direct Function - Include projectRoot
         const result = await updateTaskByIdDirect(
           {
-            // Pass the explicitly resolved path
             tasksJsonPath: tasksJsonPath,
-            // Pass other relevant args
             id: args.id,
             prompt: args.prompt,
-            research: args.research
+            research: args.research,
+            projectRoot: args.projectRoot
           },
           log,
           { session }
         );

-        if (result.success) {
-          log.info(`Successfully updated task with ID ${args.id}`);
-        } else {
-          log.error(
-            `Failed to update task: ${result.error?.message || 'Unknown error'}`
-          );
-        }
+        // 4. Handle Result
+        log.info(
+          `${toolName}: Direct function result: success=${result.success}`
+        );

         return handleApiResult(result, log, 'Error updating task');
       } catch (error) {
-        log.error(`Error in update_task tool: ${error.message}`);
-        return createErrorResponse(error.message);
+        log.error(
+          `Critical error in ${toolName} tool execute: ${error.message}`
+        );
+        return createErrorResponse(
+          `Internal tool error (${toolName}): ${error.message}`
+        );
       }
-    }
+    })
   });
 }
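The `update_task` and `update_subtask` tools now forward `projectRoot` into their direct functions alongside the resolved `tasksJsonPath`; downstream, that root is what the AI service layer uses to locate project-level config and `.env` keys (see the `_unifiedServiceRunner` hunks further down). A hedged sketch of the call shape only; the id and prompt values here are invented for illustration:

```javascript
// Mirrors what the update_task tool now passes through; example values only.
const result = await updateTaskByIdDirect(
	{
		tasksJsonPath,                  // resolved via findTasksJsonPath(...)
		id: '15',                       // example task ID
		prompt: 'Tighten the acceptance criteria',
		research: false,
		projectRoot: args.projectRoot   // already normalized by withNormalizedProjectRoot
	},
	log,
	{ session }
);
```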
@@ -4,10 +4,13 @@
  */

 import { z } from 'zod';
-import { handleApiResult, createErrorResponse } from './utils.js';
+import {
+  handleApiResult,
+  createErrorResponse,
+  withNormalizedProjectRoot
+} from './utils.js';
 import { updateTasksDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
-import path from 'path';

 /**
  * Register the update tool with the MCP server
@@ -31,58 +34,61 @@ export function registerUpdateTool(server) {
         .boolean()
         .optional()
         .describe('Use Perplexity AI for research-backed updates'),
-      file: z.string().optional().describe('Absolute path to the tasks file'),
+      file: z
+        .string()
+        .optional()
+        .describe('Path to the tasks file relative to project root'),
       projectRoot: z
         .string()
-        .describe('The directory of the project. Must be an absolute path.')
+        .optional()
+        .describe(
+          'The directory of the project. (Optional, usually from session)'
+        )
     }),
-    execute: async (args, { log, session }) => {
+    execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+      const toolName = 'update';
+      const { from, prompt, research, file, projectRoot } = args;

       try {
-        log.info(`Executing update tool with args: ${JSON.stringify(args)}`);
+        log.info(
+          `Executing ${toolName} tool with normalized root: ${projectRoot}`
+        );
-
-        // 1. Get Project Root
-        const rootFolder = args.projectRoot;
-        if (!rootFolder || !path.isAbsolute(rootFolder)) {
-          return createErrorResponse(
-            'projectRoot is required and must be absolute.'
-          );
-        }
-        log.info(`Project root: ${rootFolder}`);

-        // 2. Resolve Path
         let tasksJsonPath;
         try {
-          tasksJsonPath = findTasksJsonPath(
-            { projectRoot: rootFolder, file: args.file },
-            log
-          );
-          log.info(`Resolved tasks path: ${tasksJsonPath}`);
+          tasksJsonPath = findTasksJsonPath({ projectRoot, file }, log);
+          log.info(`${toolName}: Resolved tasks path: ${tasksJsonPath}`);
         } catch (error) {
-          log.error(`Error finding tasks.json: ${error.message}`);
+          log.error(`${toolName}: Error finding tasks.json: ${error.message}`);
           return createErrorResponse(
-            `Failed to find tasks.json: ${error.message}`
+            `Failed to find tasks.json within project root '${projectRoot}': ${error.message}`
           );
         }

-        // 3. Call Direct Function
         const result = await updateTasksDirect(
           {
             tasksJsonPath: tasksJsonPath,
-            from: args.from,
-            prompt: args.prompt,
-            research: args.research
+            from: from,
+            prompt: prompt,
+            research: research,
+            projectRoot: projectRoot
           },
           log,
           { session }
         );

-        // 4. Handle Result
-        log.info(`updateTasksDirect result: success=${result.success}`);
+        log.info(
+          `${toolName}: Direct function result: success=${result.success}`
+        );
         return handleApiResult(result, log, 'Error updating tasks');
       } catch (error) {
-        log.error(`Critical error in update tool execute: ${error.message}`);
-        return createErrorResponse(`Internal tool error: ${error.message}`);
+        log.error(
+          `Critical error in ${toolName} tool execute: ${error.message}`
+        );
+        return createErrorResponse(
+          `Internal tool error (${toolName}): ${error.message}`
+        );
       }
-    }
+    })
   });
 }
@@ -83,10 +83,10 @@ function getProjectRoot(projectRootRaw, log) {
 }

 /**
- * Extracts the project root path from the FastMCP session object.
- * @param {Object} session - The FastMCP session object.
- * @param {Object} log - Logger object.
- * @returns {string|null} - The absolute path to the project root, or null if not found.
+ * Extracts and normalizes the project root path from the MCP session object.
+ * @param {Object} session - The MCP session object.
+ * @param {Object} log - The MCP logger object.
+ * @returns {string|null} - The normalized absolute project root path or null if not found/invalid.
  */
 function getProjectRootFromSession(session, log) {
   try {
@@ -107,68 +107,87 @@ function getProjectRootFromSession(session, log) {
       })}`
     );

-    // ALWAYS ensure we return a valid path for project root
+    let rawRootPath = null;
+    let decodedPath = null;
+    let finalPath = null;
+
+    // Check primary location
+    if (session?.roots?.[0]?.uri) {
+      rawRootPath = session.roots[0].uri;
+      log.info(`Found raw root URI in session.roots[0].uri: ${rawRootPath}`);
+    }
+    // Check alternate location
+    else if (session?.roots?.roots?.[0]?.uri) {
+      rawRootPath = session.roots.roots[0].uri;
+      log.info(
+        `Found raw root URI in session.roots.roots[0].uri: ${rawRootPath}`
+      );
+    }
+
+    if (rawRootPath) {
+      // Decode URI and strip file:// protocol
+      decodedPath = rawRootPath.startsWith('file://')
+        ? decodeURIComponent(rawRootPath.slice(7))
+        : rawRootPath; // Assume non-file URI is already decoded? Or decode anyway? Let's decode.
+      if (!rawRootPath.startsWith('file://')) {
+        decodedPath = decodeURIComponent(rawRootPath); // Decode even if no file://
+      }
+
+      // Handle potential Windows drive prefix after stripping protocol (e.g., /C:/...)
+      if (
+        decodedPath.startsWith('/') &&
+        /[A-Za-z]:/.test(decodedPath.substring(1, 3))
+      ) {
+        decodedPath = decodedPath.substring(1); // Remove leading slash if it's like /C:/...
+      }
+
+      log.info(`Decoded path: ${decodedPath}`);
+
+      // Normalize slashes and resolve
+      const normalizedSlashes = decodedPath.replace(/\\/g, '/');
+      finalPath = path.resolve(normalizedSlashes); // Resolve to absolute path for current OS
+
+      log.info(`Normalized and resolved session path: ${finalPath}`);
+      return finalPath;
+    }
+
+    // Fallback Logic (remains the same)
+    log.warn('No project root URI found in session. Attempting fallbacks...');
     const cwd = process.cwd();

-    // If we have a session with roots array
-    if (session?.roots?.[0]?.uri) {
-      const rootUri = session.roots[0].uri;
-      log.info(`Found rootUri in session.roots[0].uri: ${rootUri}`);
-      const rootPath = rootUri.startsWith('file://')
-        ? decodeURIComponent(rootUri.slice(7))
-        : rootUri;
-      log.info(`Decoded rootPath: ${rootPath}`);
-      return rootPath;
-    }
-
-    // If we have a session with roots.roots array (different structure)
-    if (session?.roots?.roots?.[0]?.uri) {
-      const rootUri = session.roots.roots[0].uri;
-      log.info(`Found rootUri in session.roots.roots[0].uri: ${rootUri}`);
-      const rootPath = rootUri.startsWith('file://')
-        ? decodeURIComponent(rootUri.slice(7))
-        : rootUri;
-      log.info(`Decoded rootPath: ${rootPath}`);
-      return rootPath;
-    }
-
-    // Get the server's location and try to find project root -- this is a fallback necessary in Cursor IDE
-    const serverPath = process.argv[1]; // This should be the path to server.js, which is in mcp-server/
+    // Fallback 1: Use server path deduction (Cursor IDE)
+    const serverPath = process.argv[1];
     if (serverPath && serverPath.includes('mcp-server')) {
-      // Find the mcp-server directory first
       const mcpServerIndex = serverPath.indexOf('mcp-server');
       if (mcpServerIndex !== -1) {
-        // Get the path up to mcp-server, which should be the project root
-        const projectRoot = serverPath.substring(0, mcpServerIndex - 1); // -1 to remove trailing slash
+        const projectRoot = path.dirname(
+          serverPath.substring(0, mcpServerIndex)
+        ); // Go up one level

-        // Verify this looks like our project root by checking for key files/directories
         if (
           fs.existsSync(path.join(projectRoot, '.cursor')) ||
           fs.existsSync(path.join(projectRoot, 'mcp-server')) ||
           fs.existsSync(path.join(projectRoot, 'package.json'))
         ) {
-          log.info(`Found project root from server path: ${projectRoot}`);
-          return projectRoot;
+          log.info(
+            `Using project root derived from server path: ${projectRoot}`
+          );
+          return projectRoot; // Already absolute
         }
       }
     }

-    // ALWAYS ensure we return a valid path as a last resort
+    // Fallback 2: Use CWD
     log.info(`Using current working directory as ultimate fallback: ${cwd}`);
-    return cwd;
+    return cwd; // Already absolute
   } catch (e) {
-    // If we have a server path, use it as a basis for project root
-    const serverPath = process.argv[1];
-    if (serverPath && serverPath.includes('mcp-server')) {
-      const mcpServerIndex = serverPath.indexOf('mcp-server');
-      return mcpServerIndex !== -1
-        ? serverPath.substring(0, mcpServerIndex - 1)
-        : process.cwd();
-    }
-
-    // Only use cwd if it's not "/"
+    log.error(`Error in getProjectRootFromSession: ${e.message}`);
+    // Attempt final fallback to CWD on error
     const cwd = process.cwd();
-    return cwd !== '/' ? cwd : '/';
+    log.warn(
+      `Returning CWD (${cwd}) due to error during session root processing.`
+    );
+    return cwd;
   }
 }

@@ -474,6 +493,148 @@ function createLogWrapper(log) {
   };
 }

+/**
+ * Resolves and normalizes a project root path from various formats.
+ * Handles URI encoding, Windows paths, and file protocols.
+ * @param {string | undefined | null} rawPath - The raw project root path.
+ * @param {object} [log] - Optional logger object.
+ * @returns {string | null} Normalized absolute path or null if input is invalid/empty.
+ */
+function normalizeProjectRoot(rawPath, log) {
+  if (!rawPath) return null;
+  try {
+    let pathString = Array.isArray(rawPath) ? rawPath[0] : String(rawPath);
+    if (!pathString) return null;
+
+    // 1. Decode URI Encoding
+    // Use try-catch for decoding as malformed URIs can throw
+    try {
+      pathString = decodeURIComponent(pathString);
+    } catch (decodeError) {
+      if (log)
+        log.warn(
+          `Could not decode URI component for path "${rawPath}": ${decodeError.message}. Proceeding with raw string.`
+        );
+      // Proceed with the original string if decoding fails
+      pathString = Array.isArray(rawPath) ? rawPath[0] : String(rawPath);
+    }
+
+    // 2. Strip file:// prefix (handle 2 or 3 slashes)
+    if (pathString.startsWith('file:///')) {
+      pathString = pathString.slice(7); // Slice 7 for file:///, may leave leading / on Windows
+    } else if (pathString.startsWith('file://')) {
+      pathString = pathString.slice(7); // Slice 7 for file://
+    }
+
+    // 3. Handle potential Windows leading slash after stripping prefix (e.g., /C:/...)
+    // This checks if it starts with / followed by a drive letter C: D: etc.
+    if (
+      pathString.startsWith('/') &&
+      /[A-Za-z]:/.test(pathString.substring(1, 3))
+    ) {
+      pathString = pathString.substring(1); // Remove the leading slash
+    }
+
+    // 4. Normalize backslashes to forward slashes
+    pathString = pathString.replace(/\\/g, '/');
+
+    // 5. Resolve to absolute path using server's OS convention
+    const resolvedPath = path.resolve(pathString);
+    return resolvedPath;
+  } catch (error) {
+    if (log) {
+      log.error(
+        `Error normalizing project root path "${rawPath}": ${error.message}`
+      );
+    }
+    return null; // Return null on error
+  }
+}
+
+/**
+ * Extracts the raw project root path from the session (without normalization).
+ * Used as a fallback within the HOF.
+ * @param {Object} session - The MCP session object.
+ * @param {Object} log - The MCP logger object.
+ * @returns {string|null} The raw path string or null.
+ */
+function getRawProjectRootFromSession(session, log) {
+  try {
+    // Check primary location
+    if (session?.roots?.[0]?.uri) {
+      return session.roots[0].uri;
+    }
+    // Check alternate location
+    else if (session?.roots?.roots?.[0]?.uri) {
+      return session.roots.roots[0].uri;
+    }
+    return null; // Not found in expected session locations
+  } catch (e) {
+    log.error(`Error accessing session roots: ${e.message}`);
+    return null;
+  }
+}
+
+/**
+ * Higher-order function to wrap MCP tool execute methods.
+ * Ensures args.projectRoot is present and normalized before execution.
+ * @param {Function} executeFn - The original async execute(args, context) function.
+ * @returns {Function} The wrapped async execute function.
+ */
+function withNormalizedProjectRoot(executeFn) {
+  return async (args, context) => {
+    const { log, session } = context;
+    let normalizedRoot = null;
+    let rootSource = 'unknown';
+
+    try {
+      // Determine raw root: prioritize args, then session
+      let rawRoot = args.projectRoot;
+      if (!rawRoot) {
+        rawRoot = getRawProjectRootFromSession(session, log);
+        rootSource = 'session';
+      } else {
+        rootSource = 'args';
+      }
+
+      if (!rawRoot) {
+        log.error('Could not determine project root from args or session.');
+        return createErrorResponse(
+          'Could not determine project root. Please provide projectRoot argument or ensure session contains root info.'
+        );
+      }
+
+      // Normalize the determined raw root
+      normalizedRoot = normalizeProjectRoot(rawRoot, log);
+
+      if (!normalizedRoot) {
+        log.error(
+          `Failed to normalize project root obtained from ${rootSource}: ${rawRoot}`
+        );
+        return createErrorResponse(
+          `Invalid project root provided or derived from ${rootSource}: ${rawRoot}`
+        );
+      }
+
+      // Inject the normalized root back into args
+      const updatedArgs = { ...args, projectRoot: normalizedRoot };
+
+      // Execute the original function with normalized root in args
+      return await executeFn(updatedArgs, context);
+    } catch (error) {
+      log.error(
+        `Error within withNormalizedProjectRoot HOF (Normalized Root: ${normalizedRoot}): ${error.message}`
+      );
+      // Add stack trace if available and debug enabled
+      if (error.stack && log.debug) {
+        log.debug(error.stack);
+      }
+      // Return a generic error or re-throw depending on desired behavior
+      return createErrorResponse(`Operation failed: ${error.message}`);
+    }
+  };
+}
+
 // Ensure all functions are exported
 export {
   getProjectRoot,
@@ -484,5 +645,8 @@ export {
   processMCPResponseData,
   createContentResponse,
   createErrorResponse,
-  createLogWrapper
+  createLogWrapper,
+  normalizeProjectRoot,
+  getRawProjectRootFromSession,
+  withNormalizedProjectRoot
 };
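The new `normalizeProjectRoot` helper is what makes the cross-platform path handling work: decode URI escapes, strip `file://`/`file:///`, drop the stray leading slash in `/C:/...`-style Windows paths, flip backslashes to forward slashes, then `path.resolve`. A minimal sketch of calling it with the path shapes it is meant to accept; the example paths are illustrative, and the exact return values depend on the OS the MCP server runs on because the helper finishes with `path.resolve()`:

```javascript
import { normalizeProjectRoot } from './utils.js'; // assumes the caller sits next to utils.js

const rawRoots = [
	'file:///C:/Users/dev/my-project',   // Windows path delivered as a file:// URI
	'file:///home/dev/my%20project',     // URI-encoded Linux path
	'C:\\Users\\dev\\my-project',        // raw Windows path with backslashes
	'/mnt/c/Users/dev/my-project'        // WSL-style mount path
];

for (const raw of rawRoots) {
	// console works as a minimal logger here; it provides warn/error like the MCP log object.
	console.log(raw, '->', normalizeProjectRoot(raw, console));
}
```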
@@ -7,7 +7,7 @@ import { z } from 'zod';
 import {
   handleApiResult,
   createErrorResponse,
-  getProjectRootFromSession
+  withNormalizedProjectRoot
 } from './utils.js';
 import { validateDependenciesDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
@@ -27,24 +27,15 @@ export function registerValidateDependenciesTool(server) {
         .string()
         .describe('The directory of the project. Must be an absolute path.')
     }),
-    execute: async (args, { log, session }) => {
+    execute: withNormalizedProjectRoot(async (args, { log, session }) => {
       try {
         log.info(`Validating dependencies with args: ${JSON.stringify(args)}`);

-        // Get project root from args or session
-        const rootFolder =
-          args.projectRoot || getProjectRootFromSession(session, log);
-
-        if (!rootFolder) {
-          return createErrorResponse(
-            'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-          );
-        }
-
+        // Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
         let tasksJsonPath;
         try {
           tasksJsonPath = findTasksJsonPath(
-            { projectRoot: rootFolder, file: args.file },
+            { projectRoot: args.projectRoot, file: args.file },
             log
           );
         } catch (error) {
@@ -74,6 +65,6 @@ export function registerValidateDependenciesTool(server) {
         log.error(`Error in validateDependencies tool: ${error.message}`);
         return createErrorResponse(error.message);
       }
-    }
+    })
   });
 }
@@ -1,6 +1,6 @@
 {
   "name": "task-master-ai",
-  "version": "0.12.1",
+  "version": "0.13.0-rc.0",
   "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
   "main": "index.js",
   "type": "module",
@@ -180,9 +180,9 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {

   // Map template names to their actual source paths
   switch (templateName) {
-    case 'scripts_README.md':
-      sourcePath = path.join(__dirname, '..', 'assets', 'scripts_README.md');
-      break;
+    // case 'scripts_README.md':
+    // sourcePath = path.join(__dirname, '..', 'assets', 'scripts_README.md');
+    // break;
     case 'dev_workflow.mdc':
       sourcePath = path.join(
         __dirname,
@@ -219,8 +219,8 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
         'self_improve.mdc'
       );
       break;
-    case 'README-task-master.md':
-      sourcePath = path.join(__dirname, '..', 'README-task-master.md');
+    // case 'README-task-master.md':
+    // sourcePath = path.join(__dirname, '..', 'README-task-master.md');
       break;
     case 'windsurfrules':
       sourcePath = path.join(__dirname, '..', 'assets', '.windsurfrules');
@@ -351,18 +351,18 @@ async function initializeProject(options = {}) {
   }

   // Debug logging only if not in silent mode
-  if (!isSilentMode()) {
-    console.log('===== DEBUG: INITIALIZE PROJECT OPTIONS RECEIVED =====');
-    console.log('Full options object:', JSON.stringify(options));
-    console.log('options.yes:', options.yes);
-    console.log('==================================================');
-  }
+  // if (!isSilentMode()) {
+  // console.log('===== DEBUG: INITIALIZE PROJECT OPTIONS RECEIVED =====');
+  // console.log('Full options object:', JSON.stringify(options));
+  // console.log('options.yes:', options.yes);
+  // console.log('==================================================');
+  // }

   const skipPrompts = options.yes || (options.name && options.description);

-  if (!isSilentMode()) {
-    console.log('Skip prompts determined:', skipPrompts);
-  }
+  // if (!isSilentMode()) {
+  // console.log('Skip prompts determined:', skipPrompts);
+  // }

   if (skipPrompts) {
     if (!isSilentMode()) {
@@ -565,12 +565,12 @@ function createProjectStructure(addAliases, dryRun) {
     path.join(targetDir, 'scripts', 'example_prd.txt')
   );

-  // Create main README.md
-  copyTemplateFile(
-    'README-task-master.md',
-    path.join(targetDir, 'README-task-master.md'),
-    replacements
-  );
+  // // Create main README.md
+  // copyTemplateFile(
+  // 'README-task-master.md',
+  // path.join(targetDir, 'README-task-master.md'),
+  // replacements
+  // );

   // Initialize git repository if git is available
   try {
@@ -761,21 +761,22 @@ function setupMCPConfiguration(targetDir) {
   const newMCPServer = {
     'task-master-ai': {
       command: 'npx',
-      args: ['-y', 'task-master-mcp'],
+      args: ['-y', '--package=task-master-ai', 'task-master-ai'],
       env: {
-        ANTHROPIC_API_KEY: 'YOUR_ANTHROPIC_API_KEY',
-        PERPLEXITY_API_KEY: 'YOUR_PERPLEXITY_API_KEY',
-        MODEL: 'claude-3-7-sonnet-20250219',
-        PERPLEXITY_MODEL: 'sonar-pro',
-        MAX_TOKENS: '64000',
-        TEMPERATURE: '0.2',
-        DEFAULT_SUBTASKS: '5',
-        DEFAULT_PRIORITY: 'medium'
+        ANTHROPIC_API_KEY: 'ANTHROPIC_API_KEY_HERE',
+        PERPLEXITY_API_KEY: 'PERPLEXITY_API_KEY_HERE',
+        OPENAI_API_KEY: 'OPENAI_API_KEY_HERE',
+        GOOGLE_API_KEY: 'GOOGLE_API_KEY_HERE',
+        XAI_API_KEY: 'XAI_API_KEY_HERE',
+        OPENROUTER_API_KEY: 'OPENROUTER_API_KEY_HERE',
+        MISTRAL_API_KEY: 'MISTRAL_API_KEY_HERE',
+        AZURE_OPENAI_API_KEY: 'AZURE_OPENAI_API_KEY_HERE',
+        OLLAMA_API_KEY: 'OLLAMA_API_KEY_HERE'
      }
     }
   };

-  // Check if mcp.json already exists
+  // Check if mcp.json already existsimage.png
   if (fs.existsSync(mcpJsonPath)) {
     log(
       'info',
@@ -795,14 +796,14 @@ function setupMCPConfiguration(targetDir) {
       (server) =>
         server.args &&
         server.args.some(
-          (arg) => typeof arg === 'string' && arg.includes('task-master-mcp')
+          (arg) => typeof arg === 'string' && arg.includes('task-master-ai')
         )
     );

     if (hasMCPString) {
       log(
         'info',
-        'Found existing task-master-mcp configuration in mcp.json, leaving untouched'
+        'Found existing task-master-ai MCP configuration in mcp.json, leaving untouched'
       );
       return; // Exit early, don't modify the existing configuration
     }
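The `setupMCPConfiguration` change above means a fresh init now writes per-provider API-key placeholders instead of model-tuning variables, and launches the published `task-master-ai` package via npx. A sketch of the resulting entry in `.cursor/mcp.json`, assuming the standard `mcpServers` wrapper key used by Cursor (the wrapper key itself is not shown in this hunk, and only a subset of the placeholder keys is repeated here; the full list matches the `env` object in the diff):

```json
{
	"mcpServers": {
		"task-master-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
				"OPENAI_API_KEY": "OPENAI_API_KEY_HERE"
			}
		}
	}
}
```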
@@ -16,7 +16,7 @@ import {
   getFallbackModelId,
   getParametersForRole
 } from './config-manager.js';
-import { log, resolveEnvVariable } from './utils.js';
+import { log, resolveEnvVariable, findProjectRoot } from './utils.js';

 import * as anthropic from '../../src/ai-providers/anthropic.js';
 import * as perplexity from '../../src/ai-providers/perplexity.js';
@@ -136,10 +136,11 @@ function _extractErrorMessage(error) {
  * Internal helper to resolve the API key for a given provider.
  * @param {string} providerName - The name of the provider (lowercase).
  * @param {object|null} session - Optional MCP session object.
+ * @param {string|null} projectRoot - Optional project root path for .env fallback.
  * @returns {string|null} The API key or null if not found/needed.
  * @throws {Error} If a required API key is missing.
  */
-function _resolveApiKey(providerName, session) {
+function _resolveApiKey(providerName, session, projectRoot = null) {
   const keyMap = {
     openai: 'OPENAI_API_KEY',
     anthropic: 'ANTHROPIC_API_KEY',
@@ -163,10 +164,10 @@ function _resolveApiKey(providerName, session) {
     );
   }

-  const apiKey = resolveEnvVariable(envVarName, session);
+  const apiKey = resolveEnvVariable(envVarName, session, projectRoot);
   if (!apiKey) {
     throw new Error(
-      `Required API key ${envVarName} for provider '${providerName}' is not set in environment or session.`
+      `Required API key ${envVarName} for provider '${providerName}' is not set in environment, session, or .env file.`
     );
   }
   return apiKey;
@@ -241,27 +242,35 @@ async function _attemptProviderCallWithRetries(
  * Base logic for unified service functions.
  * @param {string} serviceType - Type of service ('generateText', 'streamText', 'generateObject').
  * @param {object} params - Original parameters passed to the service function.
+ * @param {string} [params.projectRoot] - Optional project root path.
  * @returns {Promise<any>} Result from the underlying provider call.
  */
 async function _unifiedServiceRunner(serviceType, params) {
   const {
     role: initialRole,
     session,
+    projectRoot,
     systemPrompt,
     prompt,
     schema,
     objectName,
     ...restApiParams
   } = params;
-  log('info', `${serviceType}Service called`, { role: initialRole });
+  log('info', `${serviceType}Service called`, {
+    role: initialRole,
+    projectRoot
+  });
+
+  // Determine the effective project root (passed in or detected)
+  const effectiveProjectRoot = projectRoot || findProjectRoot();

   let sequence;
   if (initialRole === 'main') {
     sequence = ['main', 'fallback', 'research'];
-  } else if (initialRole === 'fallback') {
-    sequence = ['fallback', 'research'];
   } else if (initialRole === 'research') {
-    sequence = ['research', 'fallback'];
+    sequence = ['research', 'fallback', 'main'];
+  } else if (initialRole === 'fallback') {
+    sequence = ['fallback', 'main', 'research'];
   } else {
     log(
       'warn',
@@ -281,16 +290,16 @@ async function _unifiedServiceRunner(serviceType, params) {
     log('info', `New AI service call with role: ${currentRole}`);

     // 1. Get Config: Provider, Model, Parameters for the current role
-    // Call individual getters based on the current role
+    // Pass effectiveProjectRoot to config getters
     if (currentRole === 'main') {
-      providerName = getMainProvider();
-      modelId = getMainModelId();
+      providerName = getMainProvider(effectiveProjectRoot);
+      modelId = getMainModelId(effectiveProjectRoot);
     } else if (currentRole === 'research') {
-      providerName = getResearchProvider();
-      modelId = getResearchModelId();
+      providerName = getResearchProvider(effectiveProjectRoot);
+      modelId = getResearchModelId(effectiveProjectRoot);
     } else if (currentRole === 'fallback') {
-      providerName = getFallbackProvider();
-      modelId = getFallbackModelId();
+      providerName = getFallbackProvider(effectiveProjectRoot);
+      modelId = getFallbackModelId(effectiveProjectRoot);
     } else {
       log(
         'error',
@@ -314,7 +323,8 @@ async function _unifiedServiceRunner(serviceType, params) {
       continue;
     }

-    roleParams = getParametersForRole(currentRole);
+    // Pass effectiveProjectRoot to getParametersForRole
+    roleParams = getParametersForRole(currentRole, effectiveProjectRoot);

     // 2. Get Provider Function Set
     providerFnSet = PROVIDER_FUNCTIONS[providerName?.toLowerCase()];
@@ -345,7 +355,12 @@ async function _unifiedServiceRunner(serviceType, params) {
     }

     // 3. Resolve API Key (will throw if required and missing)
-    apiKey = _resolveApiKey(providerName?.toLowerCase(), session);
+    // Pass effectiveProjectRoot to _resolveApiKey
+    apiKey = _resolveApiKey(
+      providerName?.toLowerCase(),
+      session,
+      effectiveProjectRoot
+    );

     // 4. Construct Messages Array
     const messages = [];
@@ -443,6 +458,7 @@ async function _unifiedServiceRunner(serviceType, params) {
  * @param {object} params - Parameters for the service call.
  * @param {string} params.role - The initial client role ('main', 'research', 'fallback').
  * @param {object} [params.session=null] - Optional MCP session object.
+ * @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
  * @param {string} params.prompt - The prompt for the AI.
  * @param {string} [params.systemPrompt] - Optional system prompt.
  * // Other specific generateText params can be included here.
@@ -459,6 +475,7 @@ async function generateTextService(params) {
  * @param {object} params - Parameters for the service call.
  * @param {string} params.role - The initial client role ('main', 'research', 'fallback').
  * @param {object} [params.session=null] - Optional MCP session object.
+ * @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
  * @param {string} params.prompt - The prompt for the AI.
  * @param {string} [params.systemPrompt] - Optional system prompt.
  * // Other specific streamText params can be included here.
@@ -475,6 +492,7 @@ async function streamTextService(params) {
  * @param {object} params - Parameters for the service call.
  * @param {string} params.role - The initial client role ('main', 'research', 'fallback').
  * @param {object} [params.session=null] - Optional MCP session object.
+ * @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
  * @param {import('zod').ZodSchema} params.schema - The Zod schema for the expected object.
  * @param {string} params.prompt - The prompt for the AI.
  * @param {string} [params.systemPrompt] - Optional system prompt.
|||||||
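Note: taken together, these hunks thread an optional projectRoot from the public service wrappers down to role/parameter lookup and API key resolution, so keys can be read from a .env file at the project root when neither process.env nor the MCP session provides them. A minimal caller sketch follows; the import path and the literal argument values are assumptions for illustration, only the parameter names documented in the hunks above are taken from the diff.

// Hypothetical CLI-side caller of the unified AI service after this change.
import { generateTextService } from './scripts/modules/ai-services-unified.js';

const text = await generateTextService({
  role: 'main',                    // 'main' | 'research' | 'fallback'
  session: null,                   // no MCP session in a plain CLI run
  projectRoot: '/path/to/project', // enables the .env fallback added above
  systemPrompt: 'You are a helpful assistant.',
  prompt: 'Summarize the PRD in one sentence.'
});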
@@ -10,6 +10,7 @@ import boxen from 'boxen';
 import fs from 'fs';
 import https from 'https';
 import inquirer from 'inquirer';
+import ora from 'ora'; // Import ora

 import { log, readJSON } from './utils.js';
 import {
@@ -514,29 +515,41 @@ function registerCommands(programInstance) {
 const outputPath = options.output;
 const force = options.force || false;
 const append = options.append || false;
+let useForce = false;
+let useAppend = false;

 // Helper function to check if tasks.json exists and confirm overwrite
 async function confirmOverwriteIfNeeded() {
-if (fs.existsSync(outputPath) && !force && !append) {
-const shouldContinue = await confirmTaskOverwrite(outputPath);
-if (!shouldContinue) {
-console.log(chalk.yellow('Operation cancelled by user.'));
+if (fs.existsSync(outputPath) && !useForce && !useAppend) {
+const overwrite = await confirmTaskOverwrite(outputPath);
+if (!overwrite) {
+log('info', 'Operation cancelled.');
 return false;
 }
+// If user confirms 'y', we should set useForce = true for the parsePRD call
+// Only overwrite if not appending
+useForce = true;
 }
 return true;
 }

-// If no input file specified, check for default PRD location
+let spinner;

+try {
 if (!inputFile) {
 if (fs.existsSync(defaultPrdPath)) {
-console.log(chalk.blue(`Using default PRD file: ${defaultPrdPath}`));
-// Check for existing tasks.json before proceeding
+console.log(
+chalk.blue(`Using default PRD file path: ${defaultPrdPath}`)
+);
 if (!(await confirmOverwriteIfNeeded())) return;

 console.log(chalk.blue(`Generating ${numTasks} tasks...`));
-await parsePRD(defaultPrdPath, outputPath, numTasks, { append });
+spinner = ora('Parsing PRD and generating tasks...').start();
+await parsePRD(defaultPrdPath, outputPath, numTasks, {
+useAppend,
+useForce
+});
+spinner.succeed('Tasks generated successfully!');
 return;
 }

@@ -578,7 +591,13 @@ function registerCommands(programInstance) {
 return;
 }

-// Check for existing tasks.json before proceeding with specified input file
+if (!fs.existsSync(inputFile)) {
+console.error(
+chalk.red(`Error: Input PRD file not found: ${inputFile}`)
+);
+process.exit(1);
+}
+
 if (!(await confirmOverwriteIfNeeded())) return;

 console.log(chalk.blue(`Parsing PRD file: ${inputFile}`));
@@ -587,7 +606,20 @@ function registerCommands(programInstance) {
 console.log(chalk.blue('Appending to existing tasks...'));
 }

-await parsePRD(inputFile, outputPath, numTasks, { append });
+spinner = ora('Parsing PRD and generating tasks...').start();
+await parsePRD(inputFile, outputPath, numTasks, {
+append: useAppend,
+force: useForce
+});
+spinner.succeed('Tasks generated successfully!');
+} catch (error) {
+if (spinner) {
+spinner.fail(`Error parsing PRD: ${error.message}`);
+} else {
+console.error(chalk.red(`Error parsing PRD: ${error.message}`));
+}
+process.exit(1);
+}
 });

 // update command
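Note: the parse-prd command now wraps the long-running call in an ora spinner and routes failures through it. A stripped-down sketch of that pattern; the helper name is invented for illustration and is not part of the diff.

// Hypothetical helper mirroring the spinner + try/catch flow added above.
import ora from 'ora';

async function runWithSpinner(label, work) {
  const spinner = ora(label).start();
  try {
    const result = await work();
    spinner.succeed('Tasks generated successfully!');
    return result;
  } catch (error) {
    spinner.fail(`Error parsing PRD: ${error.message}`);
    process.exit(1);
  }
}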
@@ -345,6 +345,12 @@ function getDefaultSubtasks(explicitRoot = null) {
 return isNaN(parsedVal) ? DEFAULTS.global.defaultSubtasks : parsedVal;
 }

+function getDefaultNumTasks(explicitRoot = null) {
+const val = getGlobalConfig(explicitRoot).defaultNumTasks;
+const parsedVal = parseInt(val, 10);
+return isNaN(parsedVal) ? DEFAULTS.global.defaultNumTasks : parsedVal;
+}
+
 function getDefaultPriority(explicitRoot = null) {
 // Directly return value from config
 return getGlobalConfig(explicitRoot).defaultPriority;
@@ -424,12 +430,13 @@ function getParametersForRole(role, explicitRoot = null) {

 /**
 * Checks if the API key for a given provider is set in the environment.
-* Checks process.env first, then session.env if session is provided.
+* Checks process.env first, then session.env if session is provided, then .env file if projectRoot provided.
 * @param {string} providerName - The name of the provider (e.g., 'openai', 'anthropic').
 * @param {object|null} [session=null] - The MCP session object (optional).
+* @param {string|null} [projectRoot=null] - The project root directory (optional, for .env file check).
 * @returns {boolean} True if the API key is set, false otherwise.
 */
-function isApiKeySet(providerName, session = null) {
+function isApiKeySet(providerName, session = null, projectRoot = null) {
 // Define the expected environment variable name for each provider
 if (providerName?.toLowerCase() === 'ollama') {
 return true; // Indicate key status is effectively "OK"
@@ -454,7 +461,7 @@ function isApiKeySet(providerName, session = null) {
 }

 const envVarName = keyMap[providerKey];
-const apiKeyValue = resolveEnvVariable(envVarName, session);
+const apiKeyValue = resolveEnvVariable(envVarName, session, projectRoot);

 // Check if the key exists, is not empty, and is not a placeholder
 return (
@@ -701,6 +708,7 @@ export {
 // Global setting getters (No env var overrides)
 getLogLevel,
 getDebugFlag,
+getDefaultNumTasks,
 getDefaultSubtasks,
 getDefaultPriority,
 getProjectName,
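Note: isApiKeySet() now forwards projectRoot into resolveEnvVariable(), so a key can come from three places. The lookup order implied by the new signature is sketched below; the real resolveEnvVariable lives in utils.js and its implementation may differ, this is only an assumption-labelled illustration.

// Rough sketch of the order implied by resolveEnvVariable(envVarName, session, projectRoot):
// 1. process.env, 2. session.env (MCP), 3. <projectRoot>/.env file.
import fs from 'fs';
import path from 'path';

function resolveEnvVariableSketch(name, session = null, projectRoot = null) {
  if (process.env[name]) return process.env[name];
  if (session?.env?.[name]) return session.env[name];
  if (projectRoot) {
    const envPath = path.join(projectRoot, '.env');
    if (fs.existsSync(envPath)) {
      const match = fs
        .readFileSync(envPath, 'utf8')
        .split('\n')
        .find((line) => line.startsWith(`${name}=`));
      if (match) return match.slice(name.length + 1).trim();
    }
  }
  return undefined;
}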
@@ -195,7 +195,7 @@ async function addDependency(tasksPath, taskId, dependencyId) {
 }

 // Generate updated task files
-await generateTaskFiles(tasksPath, 'tasks');
+await generateTaskFiles(tasksPath, path.dirname(tasksPath));

 log('info', 'Task files regenerated with updated dependencies.');
 } else {
@@ -334,7 +334,7 @@ async function removeDependency(tasksPath, taskId, dependencyId) {
 }

 // Regenerate task files
-await generateTaskFiles(tasksPath, 'tasks');
+await generateTaskFiles(tasksPath, path.dirname(tasksPath));
 }

 /**
@@ -13,20 +13,6 @@
 "cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
 "allowed_roles": ["main", "fallback"],
 "max_tokens": 64000
-},
-{
-"id": "claude-3-5-haiku-20241022",
-"swe_score": 0.406,
-"cost_per_1m_tokens": { "input": 0.8, "output": 4.0 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 64000
-},
-{
-"id": "claude-3-opus-20240229",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 15, "output": 75 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 64000
 }
 ],
 "openai": [
@@ -41,7 +27,7 @@
 "id": "o1",
 "swe_score": 0.489,
 "cost_per_1m_tokens": { "input": 15.0, "output": 60.0 },
-"allowed_roles": ["main", "fallback"]
+"allowed_roles": ["main"]
 },
 {
 "id": "o3",
@@ -53,7 +39,7 @@
 "id": "o3-mini",
 "swe_score": 0.493,
 "cost_per_1m_tokens": { "input": 1.1, "output": 4.4 },
-"allowed_roles": ["main", "fallback"],
+"allowed_roles": ["main"],
 "max_tokens": 100000
 },
 {
@@ -66,49 +52,49 @@
 "id": "o1-mini",
 "swe_score": 0.4,
 "cost_per_1m_tokens": { "input": 1.1, "output": 4.4 },
-"allowed_roles": ["main", "fallback"]
+"allowed_roles": ["main"]
 },
 {
 "id": "o1-pro",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 150.0, "output": 600.0 },
-"allowed_roles": ["main", "fallback"]
+"allowed_roles": ["main"]
 },
 {
 "id": "gpt-4-5-preview",
 "swe_score": 0.38,
 "cost_per_1m_tokens": { "input": 75.0, "output": 150.0 },
-"allowed_roles": ["main", "fallback"]
+"allowed_roles": ["main"]
 },
 {
 "id": "gpt-4-1-mini",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0.4, "output": 1.6 },
-"allowed_roles": ["main", "fallback"]
+"allowed_roles": ["main"]
 },
 {
 "id": "gpt-4-1-nano",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0.1, "output": 0.4 },
-"allowed_roles": ["main", "fallback"]
+"allowed_roles": ["main"]
 },
 {
 "id": "gpt-4o-mini",
 "swe_score": 0.3,
 "cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
-"allowed_roles": ["main", "fallback"]
+"allowed_roles": ["main"]
 },
 {
 "id": "gpt-4o-search-preview",
 "swe_score": 0.33,
 "cost_per_1m_tokens": { "input": 2.5, "output": 10.0 },
-"allowed_roles": ["main", "fallback", "research"]
+"allowed_roles": ["research"]
 },
 {
 "id": "gpt-4o-mini-search-preview",
 "swe_score": 0.3,
 "cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
-"allowed_roles": ["main", "fallback", "research"]
+"allowed_roles": ["research"]
 }
 ],
 "google": [
@@ -189,14 +175,6 @@
 "allowed_roles": ["main", "fallback", "research"],
 "max_tokens": 131072
 },
-{
-"id": "grok-3-mini",
-"name": "Grok 3 Mini",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.3, "output": 0.5 },
-"allowed_roles": ["main", "fallback", "research"],
-"max_tokens": 131072
-},
 {
 "id": "grok-3-fast",
 "name": "Grok 3 Fast",
@@ -204,13 +182,6 @@
 "cost_per_1m_tokens": { "input": 5, "output": 25 },
 "allowed_roles": ["main", "fallback", "research"],
 "max_tokens": 131072
-},
-{
-"id": "grok-3-mini-fast",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.6, "output": 4 },
-"allowed_roles": ["main", "fallback", "research"],
-"max_tokens": 131072
 }
 ],
 "ollama": [
@@ -283,7 +254,7 @@
 "id": "deepseek/deepseek-chat-v3-0324",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0.27, "output": 1.1 },
-"allowed_roles": ["main", "fallback"],
+"allowed_roles": ["main"],
 "max_tokens": 64000
 },
 {
@@ -312,14 +283,14 @@
 "id": "google/gemini-2.5-flash-preview",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
-"allowed_roles": ["main", "fallback"],
+"allowed_roles": ["main"],
 "max_tokens": 65535
 },
 {
 "id": "google/gemini-2.5-flash-preview:thinking",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0.15, "output": 3.5 },
-"allowed_roles": ["main", "fallback"],
+"allowed_roles": ["main"],
 "max_tokens": 65535
 },
 {
@@ -10,7 +10,7 @@ import {
 startLoadingIndicator,
 stopLoadingIndicator
 } from '../ui.js';
-import { log, readJSON, writeJSON, truncate } from '../utils.js';
+import { readJSON, writeJSON, log as consoleLog, truncate } from '../utils.js';
 import { generateObjectService } from '../ai-services-unified.js';
 import { getDefaultPriority } from '../config-manager.js';
 import generateTaskFiles from './generate-task-files.js';
@@ -42,19 +42,41 @@ const AiTaskDataSchema = z.object({
 * @param {Object} customEnv - Custom environment variables (optional) - Note: AI params override deprecated
 * @param {Object} manualTaskData - Manual task data (optional, for direct task creation without AI)
 * @param {boolean} useResearch - Whether to use the research model (passed to unified service)
+* @param {Object} context - Context object containing session and potentially projectRoot
+* @param {string} [context.projectRoot] - Project root path (for MCP/env fallback)
 * @returns {number} The new task ID
 */
 async function addTask(
 tasksPath,
 prompt,
 dependencies = [],
-priority = getDefaultPriority(), // Keep getter for default priority
-{ reportProgress, mcpLog, session } = {},
-outputFormat = 'text',
-// customEnv = null, // Removed as AI param overrides are deprecated
+priority = null,
+context = {},
+outputFormat = 'text', // Default to text for CLI
 manualTaskData = null,
-useResearch = false // <-- Add useResearch parameter
+useResearch = false
 ) {
+const { session, mcpLog, projectRoot } = context;
+const isMCP = !!mcpLog;
+
+// Create a consistent logFn object regardless of context
+const logFn = isMCP
+? mcpLog // Use MCP logger if provided
+: {
+// Create a wrapper around consoleLog for CLI
+info: (...args) => consoleLog('info', ...args),
+warn: (...args) => consoleLog('warn', ...args),
+error: (...args) => consoleLog('error', ...args),
+debug: (...args) => consoleLog('debug', ...args),
+success: (...args) => consoleLog('success', ...args)
+};
+
+const effectivePriority = priority || getDefaultPriority(projectRoot);
+
+logFn.info(
+`Adding new task with prompt: "${prompt}", Priority: ${effectivePriority}, Dependencies: ${dependencies.join(', ') || 'None'}, Research: ${useResearch}, ProjectRoot: ${projectRoot}`
+);
+
 let loadingIndicator = null;

 // Create custom reporter that checks for MCP log
@@ -62,7 +84,7 @@ async function addTask(
 if (mcpLog) {
 mcpLog[level](message);
 } else if (outputFormat === 'text') {
-log(level, message);
+consoleLog(level, message);
 }
 };

@@ -220,11 +242,11 @@ async function addTask(
 const aiGeneratedTaskData = await generateObjectService({
 role: serviceRole, // <-- Use the determined role
 session: session, // Pass session for API key resolution
+projectRoot: projectRoot, // <<< Pass projectRoot here
 schema: AiTaskDataSchema, // Pass the Zod schema
 objectName: 'newTaskData', // Name for the object
 systemPrompt: systemPrompt,
-prompt: userPrompt,
-reportProgress // Pass progress reporter if available
+prompt: userPrompt
 });
 report('DEBUG: generateObjectService returned successfully.', 'debug');

@@ -254,7 +276,7 @@ async function addTask(
 testStrategy: taskData.testStrategy || '',
 status: 'pending',
 dependencies: numericDependencies, // Use validated numeric dependencies
-priority: priority,
+priority: effectivePriority,
 subtasks: [] // Initialize with empty subtasks array
 };

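Note: addTask() now takes a single context object instead of destructured MCP fields, and resolves the default priority per project root. A call-shape sketch based on the new signature above; the argument values are placeholders, not part of the diff.

// CLI-style call: no mcpLog, so addTask builds its own consoleLog wrapper.
const newTaskId = await addTask(
  'tasks/tasks.json',                   // tasksPath
  'Add rate limiting to the API layer', // prompt
  [3, 5],                               // dependencies
  null,                                 // priority -> falls back to getDefaultPriority(projectRoot)
  { projectRoot: '/path/to/project' },  // context ({ session, mcpLog, projectRoot })
  'text',                               // outputFormat
  null,                                 // manualTaskData
  false                                 // useResearch
);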
@@ -46,6 +46,7 @@ Do not include any explanatory text, markdown formatting, or code block markers
 * @param {string} options.output - Path to report output file
 * @param {string|number} [options.threshold] - Complexity threshold
 * @param {boolean} [options.research] - Use research role
+* @param {string} [options.projectRoot] - Project root path (for MCP/env fallback).
 * @param {Object} [options._filteredTasksData] - Pre-filtered task data (internal use)
 * @param {number} [options._originalTaskCount] - Original task count (internal use)
 * @param {Object} context - Context object, potentially containing session and mcpLog
@@ -59,6 +60,7 @@ async function analyzeTaskComplexity(options, context = {}) {
 const outputPath = options.output || 'scripts/task-complexity-report.json';
 const thresholdScore = parseFloat(options.threshold || '5');
 const useResearch = options.research || false;
+const projectRoot = options.projectRoot;

 const outputFormat = mcpLog ? 'json' : 'text';

@@ -209,15 +211,13 @@ async function analyzeTaskComplexity(options, context = {}) {
 const role = useResearch ? 'research' : 'main';
 reportLog(`Using AI service with role: ${role}`, 'info');

-// *** CHANGED: Use generateTextService ***
 fullResponse = await generateTextService({
 prompt,
 systemPrompt,
 role,
-session
-// No schema or objectName needed
+session,
+projectRoot
 });
-// *** End Service Call Change ***

 reportLog(
 'Successfully received text response via AI service',
@@ -3,7 +3,7 @@ import chalk from 'chalk';
 import boxen from 'boxen';
 import Table from 'cli-table3';

-import { log, readJSON, writeJSON, truncate } from '../utils.js';
+import { log, readJSON, writeJSON, truncate, isSilentMode } from '../utils.js';
 import { displayBanner } from '../ui.js';
 import generateTaskFiles from './generate-task-files.js';

@@ -22,6 +22,7 @@ function clearSubtasks(tasksPath, taskIds) {
 process.exit(1);
 }

+if (!isSilentMode()) {
 console.log(
 boxen(chalk.white.bold('Clearing Subtasks'), {
 padding: 1,
@@ -30,6 +31,7 @@ function clearSubtasks(tasksPath, taskIds) {
 margin: { top: 1, bottom: 1 }
 })
 );
+}

 // Handle multiple task IDs (comma-separated)
 const taskIdArray = taskIds.split(',').map((id) => id.trim());
@@ -85,6 +87,7 @@ function clearSubtasks(tasksPath, taskIds) {
 writeJSON(tasksPath, data);

 // Show summary table
+if (!isSilentMode()) {
 console.log(
 boxen(chalk.white.bold('Subtask Clearing Summary:'), {
 padding: { left: 2, right: 2, top: 0, bottom: 0 },
@@ -94,12 +97,14 @@ function clearSubtasks(tasksPath, taskIds) {
 })
 );
 console.log(summaryTable.toString());
+}

 // Regenerate task files to reflect changes
 log('info', 'Regenerating task files...');
 generateTaskFiles(tasksPath, path.dirname(tasksPath));

 // Success message
+if (!isSilentMode()) {
 console.log(
 boxen(
 chalk.green(
@@ -129,7 +134,9 @@ function clearSubtasks(tasksPath, taskIds) {
 }
 )
 );
+}
 } else {
+if (!isSilentMode()) {
 console.log(
 boxen(chalk.yellow('No subtasks were cleared'), {
 padding: 1,
@@ -140,5 +147,6 @@ function clearSubtasks(tasksPath, taskIds) {
 );
 }
 }
+}

 export default clearSubtasks;
@@ -503,7 +503,8 @@ async function expandTask(
 prompt: promptContent,
 systemPrompt: systemPrompt, // Use the determined system prompt
 role,
-session
+session,
+projectRoot
 });
 logger.info(
 'Successfully received text response from AI service',
@@ -77,7 +77,7 @@ function fetchOpenRouterModels() {
 * @returns {Object} RESTful response with current model configuration
 */
 async function getModelConfiguration(options = {}) {
-const { mcpLog, projectRoot } = options;
+const { mcpLog, projectRoot, session } = options;

 const report = (level, ...args) => {
 if (mcpLog && typeof mcpLog[level] === 'function') {
@@ -125,12 +125,16 @@ async function getModelConfiguration(options = {}) {
 const fallbackModelId = getFallbackModelId(projectRoot);

 // Check API keys
-const mainCliKeyOk = isApiKeySet(mainProvider);
+const mainCliKeyOk = isApiKeySet(mainProvider, session, projectRoot);
 const mainMcpKeyOk = getMcpApiKeyStatus(mainProvider, projectRoot);
-const researchCliKeyOk = isApiKeySet(researchProvider);
+const researchCliKeyOk = isApiKeySet(
+researchProvider,
+session,
+projectRoot
+);
 const researchMcpKeyOk = getMcpApiKeyStatus(researchProvider, projectRoot);
 const fallbackCliKeyOk = fallbackProvider
-? isApiKeySet(fallbackProvider)
+? isApiKeySet(fallbackProvider, session, projectRoot)
 : true;
 const fallbackMcpKeyOk = fallbackProvider
 ? getMcpApiKeyStatus(fallbackProvider, projectRoot)
@@ -523,7 +527,7 @@ async function getApiKeyStatusReport(options = {}) {
 ); // Ollama is not a provider, it's a service, doesn't need an api key usually
 const statusReport = providersToCheck.map((provider) => {
 // Use provided projectRoot for MCP status check
-const cliOk = isApiKeySet(provider, session); // Pass session for CLI check too
+const cliOk = isApiKeySet(provider, session, projectRoot); // Pass session and projectRoot for CLI check
 const mcpOk = getMcpApiKeyStatus(provider, projectRoot);
 return {
 provider,
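Note: every provider now gets two checks, and the CLI-side check consults the MCP session and the project's .env as well. In isolation, per provider (only the two calls shown here come from the diff; the surrounding comments are explanatory assumptions):

// CLI-side: env var, MCP session env, or <projectRoot>/.env (Ollama always reports OK).
const cliKeyOk = isApiKeySet(provider, session, projectRoot);
// MCP-side: key status read from the project's MCP configuration.
const mcpKeyOk = getMcpApiKeyStatus(provider, projectRoot);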
@@ -9,28 +9,30 @@ import {
 writeJSON,
 enableSilentMode,
 disableSilentMode,
-isSilentMode
+isSilentMode,
+readJSON,
+findTaskById
 } from '../utils.js';

 import { generateObjectService } from '../ai-services-unified.js';
 import { getDebugFlag } from '../config-manager.js';
 import generateTaskFiles from './generate-task-files.js';

-// Define Zod schema for task validation
-const TaskSchema = z.object({
-id: z.number(),
-title: z.string(),
-description: z.string(),
-status: z.string().default('pending'),
-dependencies: z.array(z.number()).default([]),
-priority: z.string().default('medium'),
-details: z.string().optional(),
-testStrategy: z.string().optional()
+// Define the Zod schema for a SINGLE task object
+const prdSingleTaskSchema = z.object({
+id: z.number().int().positive(),
+title: z.string().min(1),
+description: z.string().min(1),
+details: z.string().optional().default(''),
+testStrategy: z.string().optional().default(''),
+priority: z.enum(['high', 'medium', 'low']).default('medium'),
+dependencies: z.array(z.number().int().positive()).optional().default([]),
+status: z.string().optional().default('pending')
 });

-// Define Zod schema for the complete tasks data
-const TasksDataSchema = z.object({
-tasks: z.array(TaskSchema),
+// Define the Zod schema for the ENTIRE expected AI response object
+const prdResponseSchema = z.object({
+tasks: z.array(prdSingleTaskSchema),
 metadata: z.object({
 projectName: z.string(),
 totalTasks: z.number(),
@@ -45,35 +47,114 @@ const TasksDataSchema = z.object({
 * @param {string} tasksPath - Path to the tasks.json file
 * @param {number} numTasks - Number of tasks to generate
 * @param {Object} options - Additional options
-* @param {Object} options.reportProgress - Function to report progress to MCP server (optional)
-* @param {Object} options.mcpLog - MCP logger object (optional)
-* @param {Object} options.session - Session object from MCP server (optional)
+* @param {boolean} [options.useForce=false] - Whether to overwrite existing tasks.json.
+* @param {boolean} [options.useAppend=false] - Append to existing tasks file.
+* @param {Object} [options.reportProgress] - Function to report progress (optional, likely unused).
+* @param {Object} [options.mcpLog] - MCP logger object (optional).
+* @param {Object} [options.session] - Session object from MCP server (optional).
+* @param {string} [options.projectRoot] - Project root path (for MCP/env fallback).
+* @param {string} [outputFormat='text'] - Output format ('text' or 'json').
 */
 async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
-const { reportProgress, mcpLog, session } = options;
+const {
+reportProgress,
+mcpLog,
+session,
+projectRoot,
+useForce = false,
+useAppend = false
+} = options;
+const isMCP = !!mcpLog;
+const outputFormat = isMCP ? 'json' : 'text';

-// Determine output format based on mcpLog presence (simplification)
-const outputFormat = mcpLog ? 'json' : 'text';
+const logFn = mcpLog
+? mcpLog
+: {
+// Wrapper for CLI
+info: (...args) => log('info', ...args),
+warn: (...args) => log('warn', ...args),
+error: (...args) => log('error', ...args),
+debug: (...args) => log('debug', ...args),
+success: (...args) => log('success', ...args)
+};

-// Create custom reporter that checks for MCP log and silent mode
+// Create custom reporter using logFn
 const report = (message, level = 'info') => {
-if (mcpLog) {
-mcpLog[level](message);
+// Check logFn directly
+if (logFn && typeof logFn[level] === 'function') {
+logFn[level](message);
 } else if (!isSilentMode() && outputFormat === 'text') {
-// Only log to console if not in silent mode and outputFormat is 'text'
+// Fallback to original log only if necessary and in CLI text mode
 log(level, message);
 }
 };

-try {
-report(`Parsing PRD file: ${prdPath}`, 'info');
+report(
+`Parsing PRD file: ${prdPath}, Force: ${useForce}, Append: ${useAppend}`
+);

-// Read the PRD content
+let existingTasks = [];
+let nextId = 1;
+
+try {
+// Handle file existence and overwrite/append logic
+if (fs.existsSync(tasksPath)) {
+if (useAppend) {
+report(
+`Append mode enabled. Reading existing tasks from ${tasksPath}`,
+'info'
+);
+const existingData = readJSON(tasksPath); // Use readJSON utility
+if (existingData && Array.isArray(existingData.tasks)) {
+existingTasks = existingData.tasks;
+if (existingTasks.length > 0) {
+nextId = Math.max(...existingTasks.map((t) => t.id || 0)) + 1;
+report(
+`Found ${existingTasks.length} existing tasks. Next ID will be ${nextId}.`,
+'info'
+);
+}
+} else {
+report(
+`Could not read existing tasks from ${tasksPath} or format is invalid. Proceeding without appending.`,
+'warn'
+);
+existingTasks = []; // Reset if read fails
+}
+} else if (!useForce) {
+// Not appending and not forcing overwrite
+const overwriteError = new Error(
+`Output file ${tasksPath} already exists. Use --force to overwrite or --append.`
+);
+report(overwriteError.message, 'error');
+if (outputFormat === 'text') {
+console.error(chalk.red(overwriteError.message));
+process.exit(1);
+} else {
+throw overwriteError;
+}
+} else {
+// Force overwrite is true
+report(
+`Force flag enabled. Overwriting existing file: ${tasksPath}`,
+'info'
+);
+}
+}
+
+report(`Reading PRD content from ${prdPath}`, 'info');
 const prdContent = fs.readFileSync(prdPath, 'utf8');
+if (!prdContent) {
+throw new Error(`Input file ${prdPath} is empty or could not be read.`);
+}

 // Build system prompt for PRD parsing
-const systemPrompt = `You are an AI assistant helping to break down a Product Requirements Document (PRD) into a set of sequential development tasks.
-Your goal is to create ${numTasks} well-structured, actionable development tasks based on the PRD provided.
+const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.
+Analyze the provided PRD content and generate approximately ${numTasks} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
+Each task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.
+Assign sequential IDs starting from ${nextId}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.
+Set status to 'pending', dependencies to an empty array [], and priority to 'medium' initially for all tasks.
+Respond ONLY with a valid JSON object containing a single key "tasks", where the value is an array of task objects adhering to the provided Zod schema. Do not include any explanation or markdown formatting.

 Each task should follow this JSON structure:
 {
@@ -88,12 +169,12 @@ Each task should follow this JSON structure:
 }

 Guidelines:
-1. Create exactly ${numTasks} tasks, numbered from 1 to ${numTasks}
-2. Each task should be atomic and focused on a single responsibility
+1. Unless complexity warrants otherwise, create exactly ${numTasks} tasks, numbered sequentially starting from ${nextId}
+2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards
 3. Order tasks logically - consider dependencies and implementation sequence
 4. Early tasks should focus on setup, core functionality first, then advanced features
 5. Include clear validation/testing approach for each task
-6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs)
+6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs, potentially including existing tasks with IDs less than ${nextId} if applicable)
 7. Assign priority (high/medium/low) based on criticality and dependency order
 8. Include detailed implementation guidance in the "details" field
 9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance
@@ -101,9 +182,7 @@ Guidelines:
 11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches`;

 // Build user prompt with PRD content
-const userPrompt = `Here's the Product Requirements Document (PRD) to break down into ${numTasks} tasks:
-
-${prdContent}
+const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks} tasks, starting IDs from ${nextId}:\n\n${prdContent}\n\n

 Return your response in this format:
 {
@@ -127,15 +206,16 @@ Return your response in this format:
 // Call the unified AI service
 report('Calling AI service to generate tasks from PRD...', 'info');

-// Call generateObjectService with proper parameters
-const tasksData = await generateObjectService({
-role: 'main', // Use 'main' role to get the model from config
-session: session, // Pass session for API key resolution
-schema: TasksDataSchema, // Pass the schema for validation
-objectName: 'tasks_data', // Name the generated object
-systemPrompt: systemPrompt, // System instructions
-prompt: userPrompt, // User prompt with PRD content
-reportProgress // Progress reporting function
+// Call generateObjectService with the CORRECT schema
+const generatedData = await generateObjectService({
+role: 'main',
+session: session,
+projectRoot: projectRoot,
+schema: prdResponseSchema,
+objectName: 'tasks_data',
+systemPrompt: systemPrompt,
+prompt: userPrompt,
+reportProgress
 });

 // Create the directory if it doesn't exist
@@ -143,11 +223,58 @@ Return your response in this format:
 if (!fs.existsSync(tasksDir)) {
 fs.mkdirSync(tasksDir, { recursive: true });
 }
+logFn.success('Successfully parsed PRD via AI service.'); // Assumes generateObjectService validated
+
+// Validate and Process Tasks
+if (!generatedData || !Array.isArray(generatedData.tasks)) {
+// This error *shouldn't* happen if generateObjectService enforced prdResponseSchema
+// But keep it as a safeguard
+logFn.error(
+`Internal Error: generateObjectService returned unexpected data structure: ${JSON.stringify(generatedData)}`
+);
+throw new Error(
+'AI service returned unexpected data structure after validation.'
+);
+}
+
+let currentId = nextId;
+const taskMap = new Map();
+const processedNewTasks = generatedData.tasks.map((task) => {
+const newId = currentId++;
+taskMap.set(task.id, newId);
+return {
+...task,
+id: newId,
+status: 'pending',
+priority: task.priority || 'medium',
+dependencies: Array.isArray(task.dependencies) ? task.dependencies : [],
+subtasks: []
+};
+});
+
+// Remap dependencies for the NEWLY processed tasks
+processedNewTasks.forEach((task) => {
+task.dependencies = task.dependencies
+.map((depId) => taskMap.get(depId)) // Map old AI ID to new sequential ID
+.filter(
+(newDepId) =>
+newDepId != null && // Must exist
+newDepId < task.id && // Must be a lower ID (could be existing or newly generated)
+(findTaskById(existingTasks, newDepId) || // Check if it exists in old tasks OR
+processedNewTasks.some((t) => t.id === newDepId)) // check if it exists in new tasks
+);
+});
+
+const allTasks = useAppend
+? [...existingTasks, ...processedNewTasks]
+: processedNewTasks;
+
+const finalTaskData = { tasks: allTasks }; // Use the combined list

 // Write the tasks to the file
-writeJSON(tasksPath, tasksData);
+writeJSON(tasksPath, finalTaskData);
 report(
-`Successfully generated ${tasksData.tasks.length} tasks from PRD`,
+`Successfully wrote ${allTasks.length} total tasks to ${tasksPath} (${processedNewTasks.length} new).`,
 'success'
 );
 report(`Tasks saved to: ${tasksPath}`, 'info');
@@ -156,10 +283,10 @@ Return your response in this format:
 if (reportProgress && mcpLog) {
 // Enable silent mode when being called from MCP server
 enableSilentMode();
-await generateTaskFiles(tasksPath, tasksDir);
+await generateTaskFiles(tasksPath, path.dirname(tasksPath));
 disableSilentMode();
 } else {
-await generateTaskFiles(tasksPath, tasksDir);
+await generateTaskFiles(tasksPath, path.dirname(tasksPath));
 }

 // Only show success boxes for text output (CLI)
@@ -167,7 +294,7 @@ Return your response in this format:
 console.log(
 boxen(
 chalk.green(
-`Successfully generated ${tasksData.tasks.length} tasks from PRD`
+`Successfully generated ${processedNewTasks.length} new tasks. Total tasks in ${tasksPath}: ${allTasks.length}`
 ),
 { padding: 1, borderColor: 'green', borderStyle: 'round' }
 )
@@ -189,7 +316,7 @@ Return your response in this format:
 );
 }

-return tasksData;
+return { success: true, tasks: processedNewTasks };
 } catch (error) {
 report(`Error parsing PRD: ${error.message}`, 'error');

@@ -197,8 +324,8 @@ Return your response in this format:
 if (outputFormat === 'text') {
 console.error(chalk.red(`Error: ${error.message}`));

-if (getDebugFlag(session)) {
-// Use getter
+if (getDebugFlag(projectRoot)) {
+// Use projectRoot for debug flag check
 console.error(error);
 }

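Note: parsePRD() now owns the overwrite/append decision itself and, in append mode, continues numbering from the highest existing task ID. A sketch of the two common invocations implied by the new options; the paths and counts are placeholders.

// Overwrite an existing tasks.json outright.
await parsePRD('scripts/prd.txt', 'tasks/tasks.json', 10, {
  projectRoot: '/path/to/project',
  useForce: true
});

// Append: existing tasks are read first and new IDs continue from max(id) + 1.
await parsePRD('scripts/prd.txt', 'tasks/tasks.json', 5, {
  projectRoot: '/path/to/project',
  useAppend: true
});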
@@ -3,6 +3,7 @@ import path from 'path';
|
|||||||
import chalk from 'chalk';
|
import chalk from 'chalk';
|
||||||
import boxen from 'boxen';
|
import boxen from 'boxen';
|
||||||
import Table from 'cli-table3';
|
import Table from 'cli-table3';
|
||||||
|
import { z } from 'zod';
|
||||||
|
|
||||||
import {
|
import {
|
||||||
getStatusWithColor,
|
getStatusWithColor,
|
||||||
@@ -16,7 +17,10 @@ import {
|
|||||||
truncate,
|
truncate,
|
||||||
isSilentMode
|
isSilentMode
|
||||||
} from '../utils.js';
|
} from '../utils.js';
|
||||||
import { generateTextService } from '../ai-services-unified.js';
|
import {
|
||||||
|
generateObjectService,
|
||||||
|
generateTextService
|
||||||
|
} from '../ai-services-unified.js';
|
||||||
import { getDebugFlag } from '../config-manager.js';
|
import { getDebugFlag } from '../config-manager.js';
|
||||||
import generateTaskFiles from './generate-task-files.js';
|
import generateTaskFiles from './generate-task-files.js';
|
||||||
|
|
||||||
@@ -29,6 +33,7 @@ import generateTaskFiles from './generate-task-files.js';
|
|||||||
* @param {Object} context - Context object containing session and mcpLog.
|
* @param {Object} context - Context object containing session and mcpLog.
|
||||||
* @param {Object} [context.session] - Session object from MCP server.
|
* @param {Object} [context.session] - Session object from MCP server.
|
||||||
* @param {Object} [context.mcpLog] - MCP logger object.
|
* @param {Object} [context.mcpLog] - MCP logger object.
|
||||||
|
* @param {string} [context.projectRoot] - Project root path (needed for AI service key resolution).
|
||||||
* @param {string} [outputFormat='text'] - Output format ('text' or 'json'). Automatically 'json' if mcpLog is present.
|
* @param {string} [outputFormat='text'] - Output format ('text' or 'json'). Automatically 'json' if mcpLog is present.
|
||||||
* @returns {Promise<Object|null>} - The updated subtask or null if update failed.
|
* @returns {Promise<Object|null>} - The updated subtask or null if update failed.
|
||||||
*/
|
*/
|
||||||
@@ -40,7 +45,7 @@ async function updateSubtaskById(
|
|||||||
context = {},
|
context = {},
|
||||||
outputFormat = context.mcpLog ? 'json' : 'text'
|
outputFormat = context.mcpLog ? 'json' : 'text'
|
||||||
) {
|
) {
|
||||||
const { session, mcpLog } = context;
|
const { session, mcpLog, projectRoot } = context;
|
||||||
const logFn = mcpLog || consoleLog;
|
const logFn = mcpLog || consoleLog;
|
||||||
const isMCP = !!mcpLog;
|
const isMCP = !!mcpLog;
|
||||||
|
|
||||||
@@ -130,36 +135,16 @@ async function updateSubtaskById(
|
|||||||
|
|
||||||
const subtask = parentTask.subtasks[subtaskIndex];
|
const subtask = parentTask.subtasks[subtaskIndex];
|
||||||
|
|
||||||
// Check if subtask is already completed
|
const subtaskSchema = z.object({
|
||||||
if (subtask.status === 'done' || subtask.status === 'completed') {
|
id: z.number().int().positive(),
|
||||||
report(
|
title: z.string(),
|
||||||
'warn',
|
description: z.string().optional(),
|
||||||
`Subtask ${subtaskId} is already marked as done and cannot be updated`
|
status: z.string(),
|
||||||
);
|
dependencies: z.array(z.union([z.string(), z.number()])).optional(),
|
||||||
|
priority: z.string().optional(),
|
||||||
// Only show UI elements for text output (CLI)
|
details: z.string().optional(),
|
||||||
if (outputFormat === 'text') {
|
testStrategy: z.string().optional()
|
||||||
console.log(
|
});
|
||||||
boxen(
|
|
||||||
chalk.yellow(
|
|
||||||
`Subtask ${subtaskId} is already marked as ${subtask.status} and cannot be updated.`
|
|
||||||
) +
|
|
||||||
'\n\n' +
|
|
||||||
chalk.white(
|
|
||||||
'Completed subtasks are locked to maintain consistency. To modify a completed subtask, you must first:'
|
|
||||||
) +
|
|
||||||
'\n' +
|
|
||||||
chalk.white(
|
|
||||||
'1. Change its status to "pending" or "in-progress"'
|
|
||||||
) +
|
|
||||||
'\n' +
|
|
||||||
chalk.white('2. Then run the update-subtask command'),
|
|
||||||
{ padding: 1, borderColor: 'yellow', borderStyle: 'round' }
|
|
||||||
)
|
|
||||||
);
|
|
||||||
}
|
|
||||||
return null;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Only show UI elements for text output (CLI)
|
// Only show UI elements for text output (CLI)
|
||||||
if (outputFormat === 'text') {
|
if (outputFormat === 'text') {
|
||||||
@@ -192,101 +177,161 @@ async function updateSubtaskById(

 		// Start the loading indicator - only for text output
 		loadingIndicator = startLoadingIndicator(
-			'Generating additional information with AI...'
+			useResearch
+				? 'Updating subtask with research...'
+				: 'Updating subtask...'
 		);
 	}

-	let additionalInformation = '';
+	let parsedAIResponse;
 	try {
-		// Reverted: Keep the original system prompt
-		const systemPrompt = `You are an AI assistant helping to update software development subtasks with additional information.
-Given a subtask, you will provide additional details, implementation notes, or technical insights based on user request.
-Focus only on adding content that enhances the subtask - don't repeat existing information.
-Be technical, specific, and implementation-focused rather than general.
-Provide concrete examples, code snippets, or implementation details when relevant.`;
+		// --- GET PARENT & SIBLING CONTEXT ---
+		const parentContext = {
+			id: parentTask.id,
+			title: parentTask.title
+			// Avoid sending full parent description/details unless necessary
+		};

-		// Reverted: Use the full JSON stringification for the user message
-		const subtaskData = JSON.stringify(subtask, null, 2);
-		const userMessageContent = `Here is the subtask to enhance:\n${subtaskData}\n\nPlease provide additional information addressing this request:\n${prompt}\n\nReturn ONLY the new information to add - do not repeat existing content.`;
+		const prevSubtask =
+			subtaskIndex > 0
+				? {
+						id: `${parentTask.id}.${parentTask.subtasks[subtaskIndex - 1].id}`,
+						title: parentTask.subtasks[subtaskIndex - 1].title,
+						status: parentTask.subtasks[subtaskIndex - 1].status
+					}
+				: null;

-		const serviceRole = useResearch ? 'research' : 'main';
-		report('info', `Calling AI text service with role: ${serviceRole}`);
+		const nextSubtask =
+			subtaskIndex < parentTask.subtasks.length - 1
+				? {
+						id: `${parentTask.id}.${parentTask.subtasks[subtaskIndex + 1].id}`,
+						title: parentTask.subtasks[subtaskIndex + 1].title,
+						status: parentTask.subtasks[subtaskIndex + 1].status
+					}
+				: null;

-		const streamResult = await generateTextService({
-			role: serviceRole,
-			session: session,
+		const contextString = `
+Parent Task: ${JSON.stringify(parentContext)}
+${prevSubtask ? `Previous Subtask: ${JSON.stringify(prevSubtask)}` : ''}
+${nextSubtask ? `Next Subtask: ${JSON.stringify(nextSubtask)}` : ''}
+`;

+		const systemPrompt = `You are an AI assistant updating a parent task's subtask. This subtask will be part of a larger parent task and will be used to direct AI agents to complete the subtask. Your goal is to GENERATE new, relevant information based on the user's request (which may be high-level, mid-level or low-level) and APPEND it to the existing subtask 'details' field, wrapped in specific XML-like tags with an ISO 8601 timestamp. Intelligently determine the level of detail to include based on the user's request. Some requests are meant simply to update the subtask with some mid-implementation details, while others are meant to update the subtask with a detailed plan or strategy.
+
+Context Provided:
+- The current subtask object.
+- Basic info about the parent task (ID, title).
+- Basic info about the immediately preceding subtask (ID, title, status), if it exists.
+- Basic info about the immediately succeeding subtask (ID, title, status), if it exists.
+- A user request string.
+
+Guidelines:
+1. Analyze the user request considering the provided subtask details AND the context of the parent and sibling tasks.
+2. GENERATE new, relevant text content that should be added to the 'details' field. Focus *only* on the substance of the update based on the user request and context. Do NOT add timestamps or any special formatting yourself. Avoid over-engineering the details, provide .
+3. Update the 'details' field in the subtask object with the GENERATED text content. It's okay if this overwrites previous details in the object you return, as the calling code will handle the final appending.
+4. Return the *entire* updated subtask object (with your generated content in the 'details' field) as a valid JSON object conforming to the provided schema. Do NOT return explanations or markdown formatting.`;
+
+		const subtaskDataString = JSON.stringify(subtask, null, 2);
+		// Updated user prompt including context
+		const userPrompt = `Task Context:\n${contextString}\nCurrent Subtask:\n${subtaskDataString}\n\nUser Request: "${prompt}"\n\nPlease GENERATE new, relevant text content for the 'details' field based on the user request and the provided context. Return the entire updated subtask object as a valid JSON object matching the schema, with the newly generated text placed in the 'details' field.`;
+		// --- END UPDATED PROMPTS ---
+
+		// Call Unified AI Service using generateObjectService
+		const role = useResearch ? 'research' : 'main';
+		report('info', `Using AI object service with role: ${role}`);
+
+		parsedAIResponse = await generateObjectService({
+			prompt: userPrompt,
 			systemPrompt: systemPrompt,
-			prompt: userMessageContent
+			schema: subtaskSchema,
+			objectName: 'updatedSubtask',
+			role,
+			session,
+			projectRoot,
+			maxRetries: 2
 		});
+		report(
+			'success',
+			'Successfully received object response from AI service'
+		);

 		if (outputFormat === 'text' && loadingIndicator) {
-			// Stop indicator immediately since generateText is blocking
 			stopLoadingIndicator(loadingIndicator);
 			loadingIndicator = null;
 		}

-		// Assign the result directly (generateTextService returns the text string)
-		additionalInformation = streamResult ? streamResult.trim() : '';
-
-		if (!additionalInformation) {
-			throw new Error('AI returned empty response.'); // Changed error message slightly
+		if (!parsedAIResponse || typeof parsedAIResponse !== 'object') {
+			throw new Error('AI did not return a valid object.');
 		}

 		report(
-			// Corrected log message to reflect generateText
 			'success',
-			`Successfully generated text using AI role: ${serviceRole}.`
+			`Successfully generated object using AI role: ${role}.`
 		);
 	} catch (aiError) {
 		report('error', `AI service call failed: ${aiError.message}`);
+		if (outputFormat === 'text' && loadingIndicator) {
+			stopLoadingIndicator(loadingIndicator); // Ensure stop on error
+			loadingIndicator = null;
+		}
 		throw aiError;
-	} // Removed the inner finally block as streamingInterval is gone
+	}

-	const currentDate = new Date();
+	// --- TIMESTAMP & FORMATTING LOGIC (Handled Locally) ---
+	// Extract only the generated content from the AI's response details field.
+	const generatedContent = parsedAIResponse.details || ''; // Default to empty string

-	// Format the additional information with timestamp
-	const formattedInformation = `\n\n<info added on ${currentDate.toISOString()}>\n${additionalInformation}\n</info added on ${currentDate.toISOString()}>`;
+	if (generatedContent.trim()) {
+		// Generate timestamp locally
+		const timestamp = new Date().toISOString(); // <<< Local Timestamp
+
+		// Format the content with XML-like tags and timestamp LOCALLY
+		const formattedBlock = `<info added on ${timestamp}>\n${generatedContent.trim()}\n</info added on ${timestamp}>`; // <<< Local Formatting
+
+		// Append the formatted block to the *original* subtask details
+		subtask.details =
+			(subtask.details ? subtask.details + '\n' : '') + formattedBlock; // <<< Local Appending
+		report(
+			'info',
+			'Appended timestamped, formatted block with AI-generated content to subtask.details.'
+		);
+	} else {
+		report(
+			'warn',
+			'AI response object did not contain generated content in the "details" field. Original details remain unchanged.'
+		);
+	}
+	// --- END TIMESTAMP & FORMATTING LOGIC ---
+
+	// Get a reference to the subtask *after* its details have been updated
+	const updatedSubtask = parentTask.subtasks[subtaskIndex]; // subtask === updatedSubtask now
+
+	report('info', 'Updated subtask details locally after AI generation.');
+	// --- END UPDATE SUBTASK ---

 	// Only show debug info for text output (CLI)
 	if (outputFormat === 'text' && getDebugFlag(session)) {
 		console.log(
-			'>>> DEBUG: formattedInformation:',
-			formattedInformation.substring(0, 70) + '...'
+			'>>> DEBUG: Subtask details AFTER AI update:',
+			updatedSubtask.details // Use updatedSubtask
 		);
 	}

-	// Append to subtask details and description
-	// Only show debug info for text output (CLI)
-	if (outputFormat === 'text' && getDebugFlag(session)) {
-		console.log('>>> DEBUG: Subtask details BEFORE append:', subtask.details);
-	}
-
-	if (subtask.details) {
-		subtask.details += formattedInformation;
-	} else {
-		subtask.details = `${formattedInformation}`;
-	}
-
-	// Only show debug info for text output (CLI)
-	if (outputFormat === 'text' && getDebugFlag(session)) {
-		console.log('>>> DEBUG: Subtask details AFTER append:', subtask.details);
-	}
-
-	if (subtask.description) {
-		// Only append to description if it makes sense (for shorter updates)
-		if (additionalInformation.length < 200) {
-			// Only show debug info for text output (CLI)
+	// Description update logic (keeping as is for now)
+	if (updatedSubtask.description) {
+		// Use updatedSubtask
+		if (prompt.length < 100) {
 			if (outputFormat === 'text' && getDebugFlag(session)) {
 				console.log(
 					'>>> DEBUG: Subtask description BEFORE append:',
-					subtask.description
+					updatedSubtask.description // Use updatedSubtask
 				);
 			}
-			subtask.description += ` [Updated: ${currentDate.toLocaleDateString()}]`;
-			// Only show debug info for text output (CLI)
+			updatedSubtask.description += ` [Updated: ${new Date().toLocaleDateString()}]`; // Use updatedSubtask
 			if (outputFormat === 'text' && getDebugFlag(session)) {
 				console.log(
 					'>>> DEBUG: Subtask description AFTER append:',
-					subtask.description
+					updatedSubtask.description // Use updatedSubtask
 				);
 			}
 		}
 	}
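For reference, the timestamp-and-append step introduced in this hunk boils down to the following sketch (the standalone helper and its name are illustrative, not exports of the module):

```js
// Sketch: wrap AI-generated text in a timestamped block and append it to the
// subtask's existing details, as the hunk above now does locally.
function appendTimestampedDetails(subtask, generatedContent) {
	if (!generatedContent || !generatedContent.trim()) {
		return subtask; // nothing to append; original details stay unchanged
	}
	const timestamp = new Date().toISOString();
	const formattedBlock = `<info added on ${timestamp}>\n${generatedContent.trim()}\n</info added on ${timestamp}>`;
	subtask.details = (subtask.details ? subtask.details + '\n' : '') + formattedBlock;
	return subtask;
}

// Hypothetical usage:
// appendTimestampedDetails({ id: 1, details: 'Initial notes' }, 'Add retry logic around the API call.');
```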
@@ -297,10 +342,7 @@ Provide concrete examples, code snippets, or implementation details when relevan
 		console.log('>>> DEBUG: About to call writeJSON with updated data...');
 	}

-	// Update the subtask in the parent task's array
-	parentTask.subtasks[subtaskIndex] = subtask;
-
-	// Write the updated tasks to the file
+	// Write the updated tasks to the file (parentTask already contains the updated subtask)
 	writeJSON(tasksPath, data);

 	// Only show debug info for text output (CLI)
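The parent/sibling context gathering added earlier in this file can be summarized with a sketch like the one below (the helper name and return shape are illustrative):

```js
// Sketch: collect lightweight context about the parent task and the
// immediately adjacent subtasks, mirroring the prevSubtask/nextSubtask logic.
function buildSubtaskContext(parentTask, subtaskIndex) {
	const sibling = (offset) => {
		const s = parentTask.subtasks[subtaskIndex + offset];
		return s
			? { id: `${parentTask.id}.${s.id}`, title: s.title, status: s.status }
			: null;
	};
	return {
		parent: { id: parentTask.id, title: parentTask.title },
		previous: sibling(-1), // null when this is the first subtask
		next: sibling(1) // null when this is the last subtask
	};
}
```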
@@ -326,17 +368,18 @@ Provide concrete examples, code snippets, or implementation details when relevan
 					'\n\n' +
 					chalk.white.bold('Title:') +
 					' ' +
-					subtask.title +
+					updatedSubtask.title +
 					'\n\n' +
-					chalk.white.bold('Information Added:') +
+					// Update the display to show the new details field
+					chalk.white.bold('Updated Details:') +
 					'\n' +
-					chalk.white(truncate(additionalInformation, 300, true)),
+					chalk.white(truncate(updatedSubtask.details || '', 500, true)), // Use updatedSubtask
 				{ padding: 1, borderColor: 'green', borderStyle: 'round' }
 			)
 		);
 	}

-	return subtask;
+	return updatedSubtask; // Return the modified subtask object
 } catch (error) {
 	// Outer catch block handles final errors after loop/attempts
 	// Stop indicator on error - only for text output (CLI)
@@ -70,29 +70,80 @@ function parseUpdatedTaskFromText(text, expectedTaskId, logFn, isMCP) {

 	let cleanedResponse = text.trim();
 	const originalResponseForDebug = cleanedResponse;
+	let parseMethodUsed = 'raw'; // Keep track of which method worked

-	// Extract from Markdown code block first
+	// --- NEW Step 1: Try extracting between {} first ---
+	const firstBraceIndex = cleanedResponse.indexOf('{');
+	const lastBraceIndex = cleanedResponse.lastIndexOf('}');
+	let potentialJsonFromBraces = null;
+
+	if (firstBraceIndex !== -1 && lastBraceIndex > firstBraceIndex) {
+		potentialJsonFromBraces = cleanedResponse.substring(
+			firstBraceIndex,
+			lastBraceIndex + 1
+		);
+		if (potentialJsonFromBraces.length <= 2) {
+			potentialJsonFromBraces = null; // Ignore empty braces {}
+		}
+	}
+
+	// If {} extraction yielded something, try parsing it immediately
+	if (potentialJsonFromBraces) {
+		try {
+			const testParse = JSON.parse(potentialJsonFromBraces);
+			// It worked! Use this as the primary cleaned response.
+			cleanedResponse = potentialJsonFromBraces;
+			parseMethodUsed = 'braces';
+			report(
+				'info',
+				'Successfully parsed JSON content extracted between first { and last }.'
+			);
+		} catch (e) {
+			report(
+				'info',
+				'Content between {} looked promising but failed initial parse. Proceeding to other methods.'
+			);
+			// Reset cleanedResponse to original if brace parsing failed
+			cleanedResponse = originalResponseForDebug;
+		}
+	}
+
+	// --- Step 2: If brace parsing didn't work or wasn't applicable, try code block extraction ---
+	if (parseMethodUsed === 'raw') {
 		const codeBlockMatch = cleanedResponse.match(
-			/```(?:json)?\s*([\s\S]*?)\s*```/
+			/```(?:json|javascript)?\s*([\s\S]*?)\s*```/i
 		);
 		if (codeBlockMatch) {
 			cleanedResponse = codeBlockMatch[1].trim();
+			parseMethodUsed = 'codeblock';
 			report('info', 'Extracted JSON content from Markdown code block.');
 		} else {
-			// If no code block, find first '{' and last '}' for the object
-			const firstBrace = cleanedResponse.indexOf('{');
-			const lastBrace = cleanedResponse.lastIndexOf('}');
-			if (firstBrace !== -1 && lastBrace > firstBrace) {
-				cleanedResponse = cleanedResponse.substring(firstBrace, lastBrace + 1);
-				report('info', 'Extracted content between first { and last }.');
-			} else {
+			// --- Step 3: If code block failed, try stripping prefixes ---
+			const commonPrefixes = [
+				'json\n',
+				'javascript\n'
+				// ... other prefixes ...
+			];
+			let prefixFound = false;
+			for (const prefix of commonPrefixes) {
+				if (cleanedResponse.toLowerCase().startsWith(prefix)) {
+					cleanedResponse = cleanedResponse.substring(prefix.length).trim();
+					parseMethodUsed = 'prefix';
+					report('info', `Stripped prefix: "${prefix.trim()}"`);
+					prefixFound = true;
+					break;
+				}
+			}
+			if (!prefixFound) {
 				report(
 					'warn',
-					'Response does not appear to contain a JSON object structure. Parsing raw response.'
+					'Response does not appear to contain {}, code block, or known prefix. Attempting raw parse.'
 				);
 			}
 		}
+	}

+	// --- Step 4: Attempt final parse ---
 	let parsedTask;
 	try {
 		parsedTask = JSON.parse(cleanedResponse);
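The parsing changes above amount to a fallback chain: try the braces substring, then a fenced code block, then prefix stripping, then a raw parse. A condensed sketch of that idea (it omits the `report` logging and the `parseMethodUsed` bookkeeping, and the helper name is illustrative):

````js
// Sketch: progressively clean an AI response until JSON.parse succeeds.
function extractJsonObject(text) {
	const raw = text.trim();

	// 1. Substring between the first '{' and the last '}'.
	const first = raw.indexOf('{');
	const last = raw.lastIndexOf('}');
	if (first !== -1 && last > first) {
		try {
			return JSON.parse(raw.substring(first, last + 1));
		} catch (e) {
			// fall through to the next strategy
		}
	}

	// 2. Fenced json/javascript code block.
	const codeBlockMatch = raw.match(/```(?:json|javascript)?\s*([\s\S]*?)\s*```/i);
	if (codeBlockMatch) {
		return JSON.parse(codeBlockMatch[1].trim());
	}

	// 3. Strip known chatty prefixes, then attempt a raw parse.
	let cleaned = raw;
	for (const prefix of ['json\n', 'javascript\n']) {
		if (cleaned.toLowerCase().startsWith(prefix)) {
			cleaned = cleaned.substring(prefix.length).trim();
			break;
		}
	}
	return JSON.parse(cleaned); // may still throw; the caller handles it
}
````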
@@ -168,7 +219,7 @@ async function updateTaskById(
 	context = {},
 	outputFormat = 'text'
 ) {
-	const { session, mcpLog } = context;
+	const { session, mcpLog, projectRoot } = context;
 	const logFn = mcpLog || consoleLog;
 	const isMCP = !!mcpLog;

@@ -329,7 +380,7 @@ The changes described in the prompt should be thoughtfully applied to make the t
 	let loadingIndicator = null;
 	if (outputFormat === 'text') {
 		loadingIndicator = startLoadingIndicator(
-			useResearch ? 'Updating task with research...' : 'Updating task...'
+			useResearch ? 'Updating task with research...\n' : 'Updating task...\n'
 		);
 	}

@@ -343,7 +394,8 @@ The changes described in the prompt should be thoughtfully applied to make the t
 			prompt: userPrompt,
 			systemPrompt: systemPrompt,
 			role,
-			session
+			session,
+			projectRoot
 		});
 		report('success', 'Successfully received text response from AI service');
 		// --- End AI Service Call ---

@@ -21,6 +21,7 @@ import {
 import { getDebugFlag } from '../config-manager.js';
 import generateTaskFiles from './generate-task-files.js';
 import { generateTextService } from '../ai-services-unified.js';
+import { getModelConfiguration } from './models.js';

 // Zod schema for validating the structure of tasks AFTER parsing
 const updatedTaskSchema = z
@@ -42,13 +43,12 @@ const updatedTaskArraySchema = z.array(updatedTaskSchema);
  * Parses an array of task objects from AI's text response.
  * @param {string} text - Response text from AI.
  * @param {number} expectedCount - Expected number of tasks.
- * @param {Function | Object} logFn - The logging function (consoleLog) or MCP log object.
+ * @param {Function | Object} logFn - The logging function or MCP log object.
  * @param {boolean} isMCP - Flag indicating if logFn is MCP logger.
  * @returns {Array} Parsed and validated tasks array.
  * @throws {Error} If parsing or validation fails.
  */
 function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
-	// Helper for consistent logging inside parser
 	const report = (level, ...args) => {
 		if (isMCP) {
 			if (typeof logFn[level] === 'function') logFn[level](...args);
@@ -68,38 +68,98 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {

 	let cleanedResponse = text.trim();
 	const originalResponseForDebug = cleanedResponse;
+	let parseMethodUsed = 'raw'; // Track which method worked

-	// Extract from Markdown code block first
+	// --- NEW Step 1: Try extracting between [] first ---
+	const firstBracketIndex = cleanedResponse.indexOf('[');
+	const lastBracketIndex = cleanedResponse.lastIndexOf(']');
+	let potentialJsonFromArray = null;
+
+	if (firstBracketIndex !== -1 && lastBracketIndex > firstBracketIndex) {
+		potentialJsonFromArray = cleanedResponse.substring(
+			firstBracketIndex,
+			lastBracketIndex + 1
+		);
+		// Basic check to ensure it's not just "[]" or malformed
+		if (potentialJsonFromArray.length <= 2) {
+			potentialJsonFromArray = null; // Ignore empty array
+		}
+	}
+
+	// If [] extraction yielded something, try parsing it immediately
+	if (potentialJsonFromArray) {
+		try {
+			const testParse = JSON.parse(potentialJsonFromArray);
+			// It worked! Use this as the primary cleaned response.
+			cleanedResponse = potentialJsonFromArray;
+			parseMethodUsed = 'brackets';
+			report(
+				'info',
+				'Successfully parsed JSON content extracted between first [ and last ].'
+			);
+		} catch (e) {
+			report(
+				'info',
+				'Content between [] looked promising but failed initial parse. Proceeding to other methods.'
+			);
+			// Reset cleanedResponse to original if bracket parsing failed
+			cleanedResponse = originalResponseForDebug;
+		}
+	}
+
+	// --- Step 2: If bracket parsing didn't work or wasn't applicable, try code block extraction ---
+	if (parseMethodUsed === 'raw') {
+		// Only look for ```json blocks now
 		const codeBlockMatch = cleanedResponse.match(
-			/```(?:json)?\s*([\s\S]*?)\s*```/
+			/```json\s*([\s\S]*?)\s*```/i // Only match ```json
 		);
 		if (codeBlockMatch) {
 			cleanedResponse = codeBlockMatch[1].trim();
-			report('info', 'Extracted JSON content from Markdown code block.');
-		} else {
-			// If no code block, find first '[' and last ']' for the array
-			const firstBracket = cleanedResponse.indexOf('[');
-			const lastBracket = cleanedResponse.lastIndexOf(']');
-			if (firstBracket !== -1 && lastBracket > firstBracket) {
-				cleanedResponse = cleanedResponse.substring(
-					firstBracket,
-					lastBracket + 1
-				);
-				report('info', 'Extracted content between first [ and last ].');
+			parseMethodUsed = 'codeblock';
+			report('info', 'Extracted JSON content from JSON Markdown code block.');
 		} else {
+			report('info', 'No JSON code block found.');
+			// --- Step 3: If code block failed, try stripping prefixes ---
+			const commonPrefixes = [
+				'json\n',
+				'javascript\n', // Keep checking common prefixes just in case
+				'python\n',
+				'here are the updated tasks:',
+				'here is the updated json:',
+				'updated tasks:',
+				'updated json:',
+				'response:',
+				'output:'
+			];
+			let prefixFound = false;
+			for (const prefix of commonPrefixes) {
+				if (cleanedResponse.toLowerCase().startsWith(prefix)) {
+					cleanedResponse = cleanedResponse.substring(prefix.length).trim();
+					parseMethodUsed = 'prefix';
+					report('info', `Stripped prefix: "${prefix.trim()}"`);
+					prefixFound = true;
+					break;
+				}
+			}
+			if (!prefixFound) {
 				report(
 					'warn',
-					'Response does not appear to contain a JSON array structure. Parsing raw response.'
+					'Response does not appear to contain [], JSON code block, or known prefix. Attempting raw parse.'
 				);
 			}
 		}
+	}

-	// Attempt to parse the array
+	// --- Step 4: Attempt final parse ---
 	let parsedTasks;
 	try {
 		parsedTasks = JSON.parse(cleanedResponse);
 	} catch (parseError) {
 		report('error', `Failed to parse JSON array: ${parseError.message}`);
+		report(
+			'error',
+			`Extraction method used: ${parseMethodUsed}` // Log which method failed
+		);
 		report(
 			'error',
 			`Problematic JSON string (first 500 chars): ${cleanedResponse.substring(0, 500)}`
@@ -113,7 +173,7 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
 		);
 	}

-	// Validate Array structure
+	// --- Step 5 & 6: Validate Array structure and Zod schema ---
 	if (!Array.isArray(parsedTasks)) {
 		report(
 			'error',
@@ -134,7 +194,6 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
 		);
 	}

-	// Validate each task object using Zod
 	const validationResult = updatedTaskArraySchema.safeParse(parsedTasks);
 	if (!validationResult.success) {
 		report('error', 'Parsed task array failed Zod validation.');
@@ -147,7 +206,6 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
 	}

 	report('info', 'Successfully validated task structure.');
-	// Return the validated data, potentially filtering/adjusting length if needed
 	return validationResult.data.slice(
 		0,
 		expectedCount || validationResult.data.length
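The final validation steps shown above (array check, Zod validation, trimming to the expected count) can be summarized as follows; the schema here is abridged and illustrative, not the module's full `updatedTaskSchema`:

```js
import { z } from 'zod';

// Sketch: abridged schema; the real updatedTaskSchema validates more fields.
const updatedTaskSchema = z.object({ id: z.number(), title: z.string() }).passthrough();
const updatedTaskArraySchema = z.array(updatedTaskSchema);

function validateParsedTasks(parsedTasks, expectedCount) {
	if (!Array.isArray(parsedTasks)) {
		throw new Error('Parsed response is not an array.');
	}
	const result = updatedTaskArraySchema.safeParse(parsedTasks);
	if (!result.success) {
		throw new Error('Parsed task array failed Zod validation.');
	}
	// Trim to the expected count when one was provided.
	return result.data.slice(0, expectedCount || result.data.length);
}
```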
@@ -173,7 +231,7 @@ async function updateTasks(
 	context = {},
 	outputFormat = 'text' // Default to text for CLI
 ) {
-	const { session, mcpLog } = context;
+	const { session, mcpLog, projectRoot } = context;
 	// Use mcpLog if available, otherwise use the imported consoleLog function
 	const logFn = mcpLog || consoleLog;
 	// Flag to easily check which logger type we have
@@ -217,7 +275,7 @@ async function updateTasks(
 			chalk.cyan.bold('Title'),
 			chalk.cyan.bold('Status')
 		],
-		colWidths: [5, 60, 10]
+		colWidths: [5, 70, 20]
 	});

 	tasksToUpdate.forEach((task) => {
@@ -294,9 +352,7 @@ The changes described in the prompt should be applied to ALL tasks in the list.`

 	let loadingIndicator = null;
 	if (outputFormat === 'text') {
-		loadingIndicator = startLoadingIndicator(
-			'Calling AI service to update tasks...'
-		);
+		loadingIndicator = startLoadingIndicator('Updating tasks...\n');
 	}

 	let responseText = '';
@@ -312,7 +368,8 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
 			prompt: userPrompt,
 			systemPrompt: systemPrompt,
 			role,
-			session
+			session,
+			projectRoot
 		});
 		if (isMCP) logFn.info('Successfully received text response');
 		else

@@ -9,7 +9,13 @@ import boxen from 'boxen';
 import ora from 'ora';
 import Table from 'cli-table3';
 import gradient from 'gradient-string';
-import { log, findTaskById, readJSON, truncate } from './utils.js';
+import {
+	log,
+	findTaskById,
+	readJSON,
+	truncate,
+	isSilentMode
+} from './utils.js';
 import path from 'path';
 import fs from 'fs';
 import { findNextTask, analyzeTaskComplexity } from './task-manager.js';
@@ -23,6 +29,8 @@ const warmGradient = gradient(['#fb8b24', '#e36414', '#9a031e']);
  * Display a fancy banner for the CLI
  */
 function displayBanner() {
+	if (isSilentMode()) return;
+
 	console.clear();
 	const bannerText = figlet.textSync('Task Master', {
 		font: 'Standard',

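The `isSilentMode()` guard added to `displayBanner()` follows a simple early-return pattern; a minimal sketch (the banner rendering is reduced to a plain log here for brevity):

```js
import { isSilentMode } from './utils.js';

function displayBanner() {
	if (isSilentMode()) return; // no console output when silent mode is active
	console.clear();
	console.log('Task Master'); // the real implementation renders a figlet banner
}
```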
@@ -6,6 +6,7 @@
 import fs from 'fs';
 import path from 'path';
 import chalk from 'chalk';
+import dotenv from 'dotenv';
 // Import specific config getters needed here
 import { getLogLevel, getDebugFlag } from './config-manager.js';

@@ -14,16 +15,47 @@ let silentMode = false;

 // --- Environment Variable Resolution Utility ---
 /**
- * Resolves an environment variable by checking process.env first, then session.env.
- * @param {string} varName - The name of the environment variable.
- * @param {string|null} session - The MCP session object (optional).
+ * Resolves an environment variable's value.
+ * Precedence:
+ * 1. session.env (if session provided)
+ * 2. process.env
+ * 3. .env file at projectRoot (if projectRoot provided)
+ * @param {string} key - The environment variable key.
+ * @param {object|null} [session=null] - The MCP session object.
+ * @param {string|null} [projectRoot=null] - The project root directory (for .env fallback).
  * @returns {string|undefined} The value of the environment variable or undefined if not found.
  */
-function resolveEnvVariable(varName, session) {
-	// Ensure session and session.env exist before attempting access
-	const sessionValue =
-		session && session.env ? session.env[varName] : undefined;
-	return process.env[varName] ?? sessionValue;
+function resolveEnvVariable(key, session = null, projectRoot = null) {
+	// 1. Check session.env
+	if (session?.env?.[key]) {
+		return session.env[key];
+	}
+
+	// 2. Read .env file at projectRoot
+	if (projectRoot) {
+		const envPath = path.join(projectRoot, '.env');
+		if (fs.existsSync(envPath)) {
+			try {
+				const envFileContent = fs.readFileSync(envPath, 'utf-8');
+				const parsedEnv = dotenv.parse(envFileContent); // Use dotenv to parse
+				if (parsedEnv && parsedEnv[key]) {
+					// console.log(`DEBUG: Found key ${key} in ${envPath}`); // Optional debug log
+					return parsedEnv[key];
+				}
+			} catch (error) {
+				// Log error but don't crash, just proceed as if key wasn't found in file
+				log('warn', `Could not read or parse ${envPath}: ${error.message}`);
+			}
+		}
+	}
+
+	// 3. Fallback: Check process.env
+	if (process.env[key]) {
+		return process.env[key];
+	}
+
+	// Not found anywhere
+	return undefined;
 }

 // --- Project Root Finding Utility ---
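A sketch of how a caller might use the reworked resolver; the key name is illustrative, and the lookup order follows the implementation above (session.env, then the project's .env file, then process.env):

```js
import { resolveEnvVariable } from './utils.js';

function getApiKey(session = null, projectRoot = null) {
	const key = resolveEnvVariable('ANTHROPIC_API_KEY', session, projectRoot);
	if (!key) {
		console.warn('ANTHROPIC_API_KEY not found in session, .env file, or process.env');
	}
	return key;
}
```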
@@ -478,8 +510,6 @@ function detectCamelCaseFlags(args) {

 // Export all utility functions and configuration
 export {
-	// CONFIG, <-- Already Removed
-	// getConfig <-- Removing now
 	LOG_LEVELS,
 	log,
 	readJSON,
@@ -500,5 +530,4 @@ export {
 	resolveEnvVariable,
 	getTaskManager,
 	findProjectRoot
-	// getConfig <-- Removed
 };

@@ -1,3 +0,0 @@
-Task Master PRD
-
-Create a CLI tool for task management
@@ -1,259 +1,299 @@
 {
 	"meta": {
-		"generatedAt": "2025-04-25T02:29:42.258Z",
+		"generatedAt": "2025-05-03T04:45:36.864Z",
-		"tasksAnalyzed": 31,
+		"tasksAnalyzed": 36,
 		"thresholdScore": 5,
-		"projectName": "Task Master",
+		"projectName": "Taskmaster",
 		"usedResearch": false
 	},
 	"complexityAnalysis": [
 		{
 			"taskId": 24,
 			"taskTitle": "Implement AI-Powered Test Generation Command",
-			"complexityScore": 9,
+			"complexityScore": 8,
-			"recommendedSubtasks": 10,
+			"recommendedSubtasks": 5,
-			"expansionPrompt": "Break down the implementation of an AI-powered test generation command into granular steps, covering CLI integration, task retrieval, AI prompt construction, API integration, test file formatting, error handling, documentation, and comprehensive testing (unit, integration, error cases, and manual verification).",
+			"expansionPrompt": "Expand the 'Implement AI-Powered Test Generation Command' task by detailing the specific steps required for AI prompt engineering, including data extraction, prompt formatting, and error handling.",
-			"reasoning": "This task involves advanced CLI development, deep integration with external AI APIs, dynamic prompt engineering, file system operations, error handling, and extensive testing. It requires orchestrating multiple subsystems and ensuring robust, user-friendly output. The cognitive and technical demands are high, justifying a high complexity score and a need for further decomposition into at least 10 subtasks to manage risk and ensure quality.[1][3][4][5]"
+			"reasoning": "Requires AI integration, complex logic, and thorough testing. Prompt engineering and API interaction add significant complexity."
 		},
 		{
 			"taskId": 26,
 			"taskTitle": "Implement Context Foundation for AI Operations",
 			"complexityScore": 7,
-			"recommendedSubtasks": 8,
+			"recommendedSubtasks": 6,
-			"expansionPrompt": "Expand the context foundation implementation into detailed subtasks for CLI flag integration, file reading utilities, error handling, context formatting, command handler updates, documentation, and comprehensive testing for both functionality and error scenarios.",
+			"expansionPrompt": "Expand the 'Implement Context Foundation for AI Operations' task by detailing the specific steps for integrating file reading, cursor rules, and basic context extraction into the Claude API prompts.",
-			"reasoning": "This task introduces foundational context management across multiple commands, requiring careful CLI design, file I/O, error handling, and integration with AI prompt construction. While less complex than full AI-powered features, it still spans several modules and requires robust validation, suggesting a moderate-to-high complexity and a need for further breakdown.[1][3][4]"
+			"reasoning": "Involves modifying multiple commands and integrating different context sources. Error handling and backwards compatibility are crucial."
 		},
 		{
 			"taskId": 27,
 			"taskTitle": "Implement Context Enhancements for AI Operations",
 			"complexityScore": 8,
-			"recommendedSubtasks": 10,
+			"recommendedSubtasks": 6,
-			"expansionPrompt": "Decompose the context enhancement task into subtasks for code context extraction, task history integration, PRD summarization, context formatting, token optimization, error handling, and comprehensive testing for each new context type.",
+			"expansionPrompt": "Expand the 'Implement Context Enhancements for AI Operations' task by detailing the specific steps for code context extraction, task history integration, and PRD context integration, including parsing, summarization, and formatting.",
-			"reasoning": "This phase builds on the foundation to add sophisticated context extraction (code, history, PRD), requiring advanced parsing, summarization, and prompt engineering. The need to optimize for token limits and maintain performance across large codebases increases both technical and cognitive complexity, warranting a high score and further subtask expansion.[1][3][4][5]"
+			"reasoning": "Builds upon the previous task with more sophisticated context extraction and integration. Requires intelligent parsing and summarization."
 		},
 		{
 			"taskId": 28,
 			"taskTitle": "Implement Advanced ContextManager System",
-			"complexityScore": 10,
+			"complexityScore": 9,
-			"recommendedSubtasks": 12,
+			"recommendedSubtasks": 7,
-			"expansionPrompt": "Expand the ContextManager implementation into subtasks for class design, context source integration, optimization algorithms, caching, token management, command interface updates, AI service integration, performance monitoring, logging, and comprehensive testing (unit, integration, performance, and user experience).",
+			"expansionPrompt": "Expand the 'Implement Advanced ContextManager System' task by detailing the specific steps for creating the ContextManager class, implementing the optimization pipeline, and adding command interface enhancements, including caching and performance monitoring.",
-			"reasoning": "This is a highly complex architectural task involving advanced class design, optimization algorithms, dynamic context prioritization, caching, and integration with multiple AI services. It requires deep system knowledge, careful performance considerations, and robust error handling, making it one of the most complex tasks in the set and justifying a large number of subtasks.[1][3][4][5]"
+			"reasoning": "A comprehensive system requiring careful design, optimization, and testing. Involves complex algorithms and performance considerations."
 		},
 		{
 			"taskId": 32,
 			"taskTitle": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
 			"complexityScore": 9,
-			"recommendedSubtasks": 15,
+			"recommendedSubtasks": 10,
-			"expansionPrompt": "Break down the 'learn' command implementation into subtasks for file structure setup, path utilities, chat history analysis, rule management, AI integration, error handling, performance optimization, CLI integration, logging, and comprehensive testing.",
+			"expansionPrompt": "Expand the 'Implement \"learn\" Command for Automatic Cursor Rule Generation' task by detailing the specific steps for Cursor data analysis, rule management, and AI integration, including error handling and performance optimization.",
-			"reasoning": "This task requires orchestrating file system operations, parsing complex chat and code histories, managing rule templates, integrating with AI for pattern extraction, and ensuring robust error handling and performance. The breadth and depth of required functionality, along with the need for both automatic and manual triggers, make this a highly complex task needing extensive decomposition.[1][3][4][5]"
+			"reasoning": "Requires deep integration with Cursor's data, complex pattern analysis, and AI interaction. Significant error handling and performance optimization are needed."
-		},
-		{
-			"taskId": 35,
-			"taskTitle": "Integrate Grok3 API for Research Capabilities",
-			"complexityScore": 7,
-			"recommendedSubtasks": 8,
-			"expansionPrompt": "Expand the Grok3 API integration into subtasks for API client development, service layer updates, payload/response adaptation, error handling, configuration management, UI updates, backward compatibility, and documentation/testing.",
-			"reasoning": "This migration task involves replacing a core external API, adapting to new request/response formats, updating configuration and UI, and ensuring backward compatibility. While not as cognitively complex as some AI tasks, the risk and breadth of impact across the system justify a moderate-to-high complexity and further breakdown.[1][3][4]"
-		},
-		{
-			"taskId": 36,
-			"taskTitle": "Add Ollama Support for AI Services as Claude Alternative",
-			"complexityScore": 7,
-			"recommendedSubtasks": 8,
-			"expansionPrompt": "Decompose the Ollama integration into subtasks for service class implementation, configuration, model selection, prompt formatting, error handling, fallback logic, documentation, and comprehensive testing.",
-			"reasoning": "Adding a local AI provider requires interface compatibility, configuration management, error handling, and fallback logic, as well as user documentation. The technical complexity is moderate-to-high, especially in ensuring seamless switching and robust error handling, warranting further subtasking.[1][3][4]"
-		},
-		{
-			"taskId": 37,
-			"taskTitle": "Add Gemini Support for Main AI Services as Claude Alternative",
-			"complexityScore": 7,
-			"recommendedSubtasks": 8,
-			"expansionPrompt": "Expand Gemini integration into subtasks for service class creation, authentication, prompt/response mapping, configuration, error handling, streaming support, documentation, and comprehensive testing.",
-			"reasoning": "Integrating a new cloud AI provider involves authentication, API adaptation, configuration, and ensuring feature parity. The complexity is similar to other provider integrations, requiring careful planning and multiple subtasks for robust implementation and testing.[1][3][4]"
 		},
 		{
 			"taskId": 40,
 			"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
 			"complexityScore": 6,
-			"recommendedSubtasks": 6,
+			"recommendedSubtasks": 4,
-			"expansionPrompt": "Break down the 'plan' command implementation into subtasks for CLI integration, task/subtask retrieval, AI prompt construction, plan formatting, error handling, and testing.",
+			"expansionPrompt": "Expand the 'Implement 'plan' Command for Task Implementation Planning' task by detailing the steps for retrieving task content, generating implementation plans with AI, and formatting the plan within XML tags.",
-			"reasoning": "This task involves AI prompt engineering, CLI integration, and content formatting, but is more focused and less technically demanding than full AI service or context management features. It still requires careful error handling and testing, suggesting a moderate complexity and a handful of subtasks.[1][3][4]"
+			"reasoning": "Involves AI integration and requires careful formatting and error handling. Switching between Claude and Perplexity adds complexity."
 		},
 		{
 			"taskId": 41,
 			"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
 			"complexityScore": 8,
-			"recommendedSubtasks": 10,
+			"recommendedSubtasks": 8,
-			"expansionPrompt": "Expand the visual dependency graph implementation into subtasks for CLI command setup, graph layout algorithms, ASCII/Unicode rendering, color coding, circular dependency detection, filtering, accessibility, performance optimization, documentation, and testing.",
+			"expansionPrompt": "Expand the 'Implement Visual Task Dependency Graph in Terminal' task by detailing the steps for designing the graph rendering system, implementing layout algorithms, and handling circular dependencies and filtering options.",
-			"reasoning": "Rendering complex dependency graphs in the terminal with color coding, layout optimization, and accessibility features is technically challenging and requires careful algorithm design and robust error handling. The need for performance optimization and user-friendly output increases the complexity, justifying a high score and further subtasking.[1][3][4][5]"
+			"reasoning": "Requires complex graph algorithms and terminal rendering. Accessibility and performance are important considerations."
 		},
 		{
 			"taskId": 42,
 			"taskTitle": "Implement MCP-to-MCP Communication Protocol",
-			"complexityScore": 10,
+			"complexityScore": 8,
-			"recommendedSubtasks": 12,
+			"recommendedSubtasks": 7,
-			"expansionPrompt": "Break down the MCP-to-MCP protocol implementation into subtasks for protocol definition, adapter pattern, client module, reference integration, mode support, core module updates, configuration, documentation, error handling, security, and comprehensive testing.",
+			"expansionPrompt": "Expand the 'Implement MCP-to-MCP Communication Protocol' task by detailing the steps for defining the protocol, implementing the adapter pattern, and building the client module, including error handling and security considerations.",
-			"reasoning": "Designing and implementing a standardized communication protocol with dynamic mode switching, adapter patterns, and robust error handling is architecturally complex. It requires deep system understanding, security considerations, and extensive testing, making it one of the most complex tasks and requiring significant decomposition.[1][3][4][5]"
+			"reasoning": "Requires designing a new protocol and implementing communication with external systems. Security and error handling are critical."
 		},
 		{
 			"taskId": 43,
 			"taskTitle": "Add Research Flag to Add-Task Command",
 			"complexityScore": 5,
-			"recommendedSubtasks": 5,
+			"recommendedSubtasks": 3,
-			"expansionPrompt": "Expand the research flag implementation into subtasks for CLI parser updates, subtask generation logic, parent linking, help documentation, and testing.",
+			"expansionPrompt": "Expand the 'Add Research Flag to Add-Task Command' task by detailing the steps for updating the command parser, generating research subtasks, and linking them to the parent task.",
-			"reasoning": "This is a focused feature addition involving CLI parsing, subtask generation, and documentation. While it requires some integration with AI or templating logic, the scope is well-defined and less complex than architectural or multi-module tasks, suggesting a moderate complexity and a handful of subtasks.[1][3][4]"
+			"reasoning": "Relatively straightforward, but requires careful handling of subtask generation and linking."
 		},
 		{
 			"taskId": 44,
 			"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
-			"complexityScore": 9,
+			"complexityScore": 8,
-			"recommendedSubtasks": 10,
+			"recommendedSubtasks": 7,
-			"expansionPrompt": "Decompose the webhook and event trigger system into subtasks for event system design, webhook registration, trigger definition, incoming/outgoing webhook handling, authentication, rate limiting, CLI management, payload templating, logging, and comprehensive testing.",
+			"expansionPrompt": "Expand the 'Implement Task Automation with Webhooks and Event Triggers' task by detailing the steps for implementing the webhook registration system, event system, and trigger definition interface, including security and error handling.",
-			"reasoning": "Building a robust automation system with webhooks and event triggers involves designing an event system, secure webhook handling, trigger logic, CLI management, and error handling. The breadth and integration requirements make this a highly complex task needing extensive breakdown.[1][3][4][5]"
+			"reasoning": "Requires designing a robust event system and integrating with external services. Security and error handling are critical."
 		},
 		{
 			"taskId": 45,
 			"taskTitle": "Implement GitHub Issue Import Feature",
 			"complexityScore": 7,
-			"recommendedSubtasks": 8,
+			"recommendedSubtasks": 5,
-			"expansionPrompt": "Expand the GitHub issue import feature into subtasks for CLI flag parsing, URL extraction, API integration, data mapping, authentication, error handling, override logic, documentation, and testing.",
+			"expansionPrompt": "Expand the 'Implement GitHub Issue Import Feature' task by detailing the steps for parsing the URL, fetching issue details from the GitHub API, and generating a well-formatted task.",
-			"reasoning": "This task involves external API integration, data mapping, authentication, error handling, and user override logic. While not as complex as architectural changes, it still requires careful planning and multiple subtasks for robust implementation and testing.[1][3][4]"
+			"reasoning": "Requires interacting with the GitHub API and handling various error conditions. Authentication adds complexity."
 		},
 		{
 			"taskId": 46,
 			"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
 			"complexityScore": 7,
-			"recommendedSubtasks": 8,
+			"recommendedSubtasks": 5,
-			"expansionPrompt": "Break down the ICE analysis command into subtasks for scoring algorithm development, LLM prompt engineering, report generation, CLI rendering, integration with complexity reports, sorting/filtering, error handling, and testing.",
+			"expansionPrompt": "Expand the 'Implement ICE Analysis Command for Task Prioritization' task by detailing the steps for calculating ICE scores, generating the report file, and implementing the CLI rendering.",
-			"reasoning": "Implementing a prioritization command with LLM-based scoring, report generation, and CLI rendering involves moderate technical and cognitive complexity, especially in ensuring accurate and actionable outputs. It requires several subtasks for robust implementation and validation.[1][3][4]"
+			"reasoning": "Requires AI integration for scoring and careful formatting of the report. Integration with existing complexity reports adds complexity."
 		},
 		{
 			"taskId": 47,
 			"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
 			"complexityScore": 7,
-			"recommendedSubtasks": 8,
+			"recommendedSubtasks": 6,
-			"expansionPrompt": "Expand the workflow enhancement into subtasks for UI redesign, phase management logic, interactive elements, progress tracking, context addition, task management integration, accessibility, and comprehensive testing.",
+			"expansionPrompt": "Expand the 'Enhance Task Suggestion Actions Card Workflow' task by detailing the steps for implementing the task expansion, context addition, and task management phases, including UI/UX considerations.",
-			"reasoning": "Redesigning a multi-phase workflow with interactive UI elements, progress tracking, and context management involves both UI/UX and logic complexity. The need for seamless transitions and robust state management increases the complexity, warranting further breakdown.[1][3][4]"
+			"reasoning": "Requires significant UI/UX work and careful state management. Integration with existing functionality is crucial."
 		},
 		{
 			"taskId": 48,
 			"taskTitle": "Refactor Prompts into Centralized Structure",
-			"complexityScore": 6,
+			"complexityScore": 5,
-			"recommendedSubtasks": 6,
+			"recommendedSubtasks": 3,
-			"expansionPrompt": "Break down the prompt refactoring into subtasks for directory setup, prompt extraction, import updates, naming conventions, documentation, and regression testing.",
+			"expansionPrompt": "Expand the 'Refactor Prompts into Centralized Structure' task by detailing the steps for creating the 'prompts' directory, extracting prompts into individual files, and updating functions to import them.",
-			"reasoning": "This is a codebase refactoring task focused on maintainability and organization. While it touches many files, the technical complexity is moderate, but careful planning and testing are needed to avoid regressions, suggesting a moderate complexity and several subtasks.[1][3][4]"
+			"reasoning": "Primarily a refactoring task, but requires careful attention to detail to avoid breaking existing functionality."
 		},
 		{
 			"taskId": 49,
 			"taskTitle": "Implement Code Quality Analysis Command",
 			"complexityScore": 8,
-			"recommendedSubtasks": 10,
+			"recommendedSubtasks": 6,
-			"expansionPrompt": "Expand the code quality analysis command into subtasks for pattern recognition, best practice verification, AI integration, recommendation generation, task integration, CLI development, configuration, error handling, documentation, and comprehensive testing.",
+			"expansionPrompt": "Expand the 'Implement Code Quality Analysis Command' task by detailing the steps for pattern recognition, best practice verification, and improvement recommendations, including AI integration and task creation.",
-			"reasoning": "This task involves static code analysis, AI integration for best practice checks, recommendation generation, and task creation workflows. The technical and cognitive demands are high, requiring robust validation and integration, justifying a high complexity and multiple subtasks.[1][3][4][5]"
+			"reasoning": "Requires complex code analysis and AI integration. Generating actionable recommendations adds complexity."
 		},
 		{
 			"taskId": 50,
 			"taskTitle": "Implement Test Coverage Tracking System by Task",
 			"complexityScore": 9,
-			"recommendedSubtasks": 12,
+			"recommendedSubtasks": 7,
-			"expansionPrompt": "Break down the test coverage tracking system into subtasks for data structure design, coverage parsing, mapping algorithms, CLI commands, LLM-powered test generation, MCP integration, visualization, workflow integration, error handling, documentation, and comprehensive testing.",
+			"expansionPrompt": "Expand the 'Implement Test Coverage Tracking System by Task' task by detailing the steps for creating the tests.json file structure, developing the coverage report parser, and implementing the CLI commands and AI-powered test generation system.",
-			"reasoning": "Mapping test coverage to tasks, integrating with coverage tools, generating targeted tests, and visualizing coverage requires advanced data modeling, parsing, AI integration, and workflow design. The breadth and depth of this system make it highly complex and in need of extensive decomposition.[1][3][4][5]"
+			"reasoning": "A comprehensive system requiring deep integration with testing tools and AI. Maintaining bidirectional relationships adds complexity."
 		},
 		{
 			"taskId": 51,
 			"taskTitle": "Implement Perplexity Research Command",
 			"complexityScore": 7,
-			"recommendedSubtasks": 8,
+			"recommendedSubtasks": 5,
-			"expansionPrompt": "Expand the Perplexity research command into subtasks for API client development, context extraction, CLI interface, result formatting, caching, error handling, documentation, and comprehensive testing.",
+			"expansionPrompt": "Expand the 'Implement Perplexity Research Command' task by detailing the steps for creating the Perplexity API client, implementing task context extraction, and building the CLI interface.",
-			"reasoning": "This task involves external API integration, context extraction, CLI development, result formatting, caching, and error handling. The technical complexity is moderate-to-high, especially in ensuring robust and user-friendly output, suggesting multiple subtasks.[1][3][4]"
+			"reasoning": "Requires API integration and careful formatting of the research results. Caching adds complexity."
 		},
 		{
 			"taskId": 52,
 			"taskTitle": "Implement Task Suggestion Command for CLI",
-			"complexityScore": 6,
+			"complexityScore": 7,
-			"recommendedSubtasks": 6,
+			"recommendedSubtasks": 5,
-			"expansionPrompt": "Break down the task suggestion command into subtasks for task snapshot collection, context extraction, AI suggestion generation, interactive CLI interface, error handling, and testing.",
+			"expansionPrompt": "Expand the 'Implement Task Suggestion Command for CLI' task by detailing the steps for collecting existing task data, generating task suggestions with AI, and implementing the interactive CLI interface.",
-			"reasoning": "This is a focused feature involving AI suggestion generation and interactive CLI elements. While it requires careful context management and error handling, the scope is well-defined and less complex than architectural or multi-module tasks, suggesting a moderate complexity and several subtasks.[1][3][4]"
+			"reasoning": "Requires AI integration and careful design of the interactive interface. Handling various flag combinations adds complexity."
 		},
 		{
 			"taskId": 53,
 			"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
-			"complexityScore": 6,
+			"complexityScore": 7,
 			"recommendedSubtasks": 6,
-			"expansionPrompt": "Expand the subtask suggestion feature into subtasks for parent task validation, context gathering, AI suggestion logic, interactive CLI interface, subtask linking, and testing.",
|
"expansionPrompt": "Expand the 'Implement Subtask Suggestion Feature for Parent Tasks' task by detailing the steps for validating parent tasks, gathering context, generating subtask suggestions with AI, and implementing the interactive CLI interface.",
|
||||||
"reasoning": "Similar to the task suggestion command, this feature is focused but requires robust context management, AI integration, and interactive CLI handling. The complexity is moderate, warranting several subtasks for a robust implementation.[1][3][4]"
|
"reasoning": "Requires AI integration and careful design of the interactive interface. Linking subtasks to parent tasks adds complexity."
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 54,
|
|
||||||
"taskTitle": "Add Research Flag to Add-Task Command",
|
|
||||||
"complexityScore": 5,
|
|
||||||
"recommendedSubtasks": 5,
|
|
||||||
"expansionPrompt": "Break down the research flag enhancement into subtasks for CLI parser updates, research invocation, user interaction, task creation flow integration, and testing.",
|
|
||||||
"reasoning": "This is a focused enhancement involving CLI parsing, research invocation, and user interaction. The technical complexity is moderate, with a clear scope and integration points, suggesting a handful of subtasks.[1][3][4]"
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 55,
|
"taskId": 55,
|
||||||
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
|
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
|
||||||
"complexityScore": 6,
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand positional argument support into subtasks for parser updates, argument mapping, help documentation, error handling, backward compatibility, and comprehensive testing.",
|
"expansionPrompt": "Expand the 'Implement Positional Arguments Support for CLI Commands' task by detailing the steps for updating the argument parsing logic, defining the positional argument order, and handling edge cases.",
|
||||||
"reasoning": "Upgrading CLI parsing to support positional arguments requires careful mapping, error handling, documentation, and regression testing to maintain backward compatibility. The complexity is moderate, suggesting several subtasks.[1][3][4]"
|
"reasoning": "Requires careful modification of the command parsing logic and ensuring backward compatibility. Handling edge cases adds complexity."
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 56,
|
|
||||||
"taskTitle": "Refactor Task-Master Files into Node Module Structure",
|
|
||||||
"complexityScore": 8,
|
|
||||||
"recommendedSubtasks": 10,
|
|
||||||
"expansionPrompt": "Break down the refactoring into subtasks for directory setup, file migration, import path updates, build script adjustments, compatibility checks, documentation, regression testing, and rollback planning.",
|
|
||||||
"reasoning": "This is a high-risk, broad refactoring affecting many files and build processes. It requires careful planning, incremental changes, and extensive testing to avoid regressions, justifying a high complexity and multiple subtasks.[1][3][4][5]"
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 57,
|
"taskId": 57,
|
||||||
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
|
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
|
||||||
"complexityScore": 7,
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 8,
|
|
||||||
"expansionPrompt": "Expand the CLI UX enhancement into subtasks for log management, visual design, interactive elements, output formatting, help/documentation, accessibility, performance optimization, and comprehensive testing.",
|
|
||||||
"reasoning": "Improving CLI UX involves log management, visual enhancements, interactive elements, and accessibility, requiring both technical and design skills. The breadth of improvements and need for robust testing increase the complexity, suggesting multiple subtasks.[1][3][4]"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 58,
|
|
||||||
"taskTitle": "Implement Elegant Package Update Mechanism for Task-Master",
|
|
||||||
"complexityScore": 7,
|
|
||||||
"recommendedSubtasks": 8,
|
|
||||||
"expansionPrompt": "Break down the update mechanism into subtasks for version detection, update command implementation, file management, configuration migration, notification system, rollback logic, documentation, and comprehensive testing.",
|
|
||||||
"reasoning": "Implementing a robust update mechanism involves version management, file operations, configuration migration, rollback planning, and user communication. The technical and operational complexity is moderate-to-high, requiring multiple subtasks.[1][3][4]"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 59,
|
|
||||||
"taskTitle": "Remove Manual Package.json Modifications and Implement Automatic Dependency Management",
|
|
||||||
"complexityScore": 6,
|
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the dependency management refactor into subtasks for code audit, removal of manual modifications, npm dependency updates, initialization command updates, documentation, and regression testing.",
|
"expansionPrompt": "Expand the 'Enhance Task-Master CLI User Experience and Interface' task by detailing the steps for log management, visual enhancements, interactive elements, and output formatting.",
|
||||||
"reasoning": "This is a focused refactoring to align with npm best practices. While it touches installation and configuration logic, the technical complexity is moderate, with a clear scope and manageable risk, suggesting several subtasks.[1][3][4]"
|
"reasoning": "Requires significant UI/UX work and careful consideration of different terminal environments. Reducing verbose logging adds complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 60,
|
"taskId": 60,
|
||||||
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
|
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
|
||||||
"complexityScore": 9,
|
"complexityScore": 8,
|
||||||
"recommendedSubtasks": 12,
|
"recommendedSubtasks": 7,
|
||||||
"expansionPrompt": "Break down the mentor system implementation into subtasks for mentor management, round-table simulation, CLI integration, AI personality simulation, task integration, output formatting, error handling, documentation, and comprehensive testing.",
|
"expansionPrompt": "Expand the 'Implement Mentor System with Round-Table Discussion Feature' task by detailing the steps for mentor management, round-table discussion implementation, and integration with the task system, including LLM integration.",
|
||||||
"reasoning": "This task involves designing a new system for mentor management, simulating multi-personality AI discussions, integrating with tasks, and ensuring robust CLI and output handling. The breadth and novelty of the feature, along with the need for robust simulation and integration, make it highly complex and in need of extensive decomposition.[1][3][4][5]"
|
"reasoning": "Requires complex AI simulation and careful formatting of the discussion output. Integrating with the task system adds complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 61,
|
"taskId": 61,
|
||||||
"taskTitle": "Implement Flexible AI Model Management",
|
"taskTitle": "Implement Flexible AI Model Management",
|
||||||
"complexityScore": 10,
|
"complexityScore": 9,
|
||||||
"recommendedSubtasks": 15,
|
"recommendedSubtasks": 8,
|
||||||
"expansionPrompt": "Expand the AI model management implementation into subtasks for configuration management, CLI command parsing, provider module development, unified service abstraction, environment variable handling, documentation, integration testing, migration planning, and cleanup of legacy code.",
|
"expansionPrompt": "Expand the 'Implement Flexible AI Model Management' task by detailing the steps for creating the configuration management module, implementing the CLI command parser, and integrating the Vercel AI SDK.",
|
||||||
"reasoning": "This is a major architectural overhaul involving configuration management, CLI design, multi-provider integration, abstraction layers, environment variable handling, documentation, and migration. The technical and organizational complexity is extremely high, requiring extensive decomposition and careful coordination.[1][3][4][5]"
|
"reasoning": "Requires deep integration with multiple AI models and careful management of API keys and configuration options. Vercel AI SDK integration adds complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 62,
|
"taskId": 62,
|
||||||
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
|
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
|
||||||
"complexityScore": 5,
|
"complexityScore": 5,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "Expand the 'Add --simple Flag to Update Commands for Direct Text Input' task by detailing the steps for updating the command parsers, implementing the conditional logic, and formatting the user input with a timestamp.",
|
||||||
|
"reasoning": "Relatively straightforward, but requires careful attention to formatting and ensuring consistency with AI-processed updates."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 63,
|
||||||
|
"taskTitle": "Add pnpm Support for the Taskmaster Package",
|
||||||
|
"complexityScore": 7,
|
||||||
|
"recommendedSubtasks": 6,
|
||||||
|
"expansionPrompt": "Expand the 'Add pnpm Support for the Taskmaster Package' task by detailing the steps for updating the documentation, ensuring package scripts compatibility, and testing the installation and operation with pnpm.",
|
||||||
|
"reasoning": "Requires careful attention to detail to ensure compatibility with pnpm's execution model. Testing and documentation are crucial."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 64,
|
||||||
|
"taskTitle": "Add Yarn Support for Taskmaster Installation",
|
||||||
|
"complexityScore": 7,
|
||||||
|
"recommendedSubtasks": 6,
|
||||||
|
"expansionPrompt": "Expand the 'Add Yarn Support for Taskmaster Installation' task by detailing the steps for updating package.json, adding Yarn-specific configuration files, and testing the installation and operation with Yarn.",
|
||||||
|
"reasoning": "Requires careful attention to detail to ensure compatibility with Yarn's execution model. Testing and documentation are crucial."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 65,
|
||||||
|
"taskTitle": "Add Bun Support for Taskmaster Installation",
|
||||||
|
"complexityScore": 7,
|
||||||
|
"recommendedSubtasks": 6,
|
||||||
|
"expansionPrompt": "Expand the 'Add Bun Support for Taskmaster Installation' task by detailing the steps for updating the installation scripts, testing the installation and operation with Bun, and updating the documentation.",
|
||||||
|
"reasoning": "Requires careful attention to detail to ensure compatibility with Bun's execution model. Testing and documentation are crucial."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 66,
|
||||||
|
"taskTitle": "Support Status Filtering in Show Command for Subtasks",
|
||||||
|
"complexityScore": 5,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "Expand the 'Support Status Filtering in Show Command for Subtasks' task by detailing the steps for updating the command parser, modifying the show command handler, and updating the help documentation.",
|
||||||
|
"reasoning": "Relatively straightforward, but requires careful handling of status validation and filtering."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 67,
|
||||||
|
"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
|
||||||
|
"complexityScore": 7,
|
||||||
|
"recommendedSubtasks": 6,
|
||||||
|
"expansionPrompt": "Expand the 'Add CLI JSON output and Cursor keybindings integration' task by detailing the steps for implementing the JSON output logic, creating the install-keybindings command structure, and handling keybinding file manipulation.",
|
||||||
|
"reasoning": "Requires careful formatting of the JSON output and handling of file system operations. OS detection adds complexity."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 68,
|
||||||
|
"taskTitle": "Ability to create tasks without parsing PRD",
|
||||||
|
"complexityScore": 3,
|
||||||
|
"recommendedSubtasks": 2,
|
||||||
|
"expansionPrompt": "Expand the 'Ability to create tasks without parsing PRD' task by detailing the steps for creating tasks without a PRD.",
|
||||||
|
"reasoning": "Simple task to allow task creation without a PRD."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 69,
|
||||||
|
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
|
||||||
|
"complexityScore": 6,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "Expand the 'Enhance Analyze Complexity for Specific Task IDs' task by detailing the steps for modifying the core logic, updating the CLI, and updating the MCP tool.",
|
||||||
|
"reasoning": "Requires modifying existing functionality and ensuring compatibility with both CLI and MCP."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 70,
|
||||||
|
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
|
||||||
|
"complexityScore": 6,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "Expand the 'Implement 'diagram' command for Mermaid diagram generation' task by detailing the steps for creating the command, generating the Mermaid diagram, and handling different output options.",
|
||||||
|
"reasoning": "Requires generating Mermaid diagrams and handling different output options."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 72,
|
||||||
|
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
|
||||||
|
"complexityScore": 8,
|
||||||
|
"recommendedSubtasks": 6,
|
||||||
|
"expansionPrompt": "Expand the 'Implement PDF Generation for Project Progress and Dependency Overview' task by detailing the steps for summarizing project progress, visualizing the dependency chain, and generating the PDF document.",
|
||||||
|
"reasoning": "Requires integrating with the diagram command and using a PDF generation library. Handling large dependency chains adds complexity."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 73,
|
||||||
|
"taskTitle": "Implement Custom Model ID Support for Ollama/OpenRouter",
|
||||||
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Break down the --simple flag implementation into subtasks for CLI parser updates, update logic modification, timestamp formatting, display logic, documentation, and testing.",
|
"expansionPrompt": "Expand the 'Implement Custom Model ID Support for Ollama/OpenRouter' task by detailing the steps for modifying the CLI, implementing the interactive setup, and handling validation and warnings.",
|
||||||
"reasoning": "This is a focused feature addition involving CLI parsing, conditional logic, timestamp formatting, and display updates. The technical complexity is moderate, with a clear scope and manageable risk, suggesting a handful of subtasks.[1][3][4]"
|
"reasoning": "Requires integrating with external APIs and handling different model types. Validation and warnings are crucial."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 75,
|
||||||
|
"taskTitle": "Integrate Google Search Grounding for Research Role",
|
||||||
|
"complexityScore": 6,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "Expand the 'Integrate Google Search Grounding for Research Role' task by detailing the steps for modifying the AI service layer, implementing the conditional logic, and updating the supported models.",
|
||||||
|
"reasoning": "Requires conditional logic and integration with the Google Search Grounding API."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 76,
|
||||||
|
"taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
|
||||||
|
"complexityScore": 9,
|
||||||
|
"recommendedSubtasks": 7,
|
||||||
|
"expansionPrompt": "Expand the 'Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)' task by detailing the steps for launching the FastMCP server, implementing the message protocol handler, and developing the request/response correlation mechanism.",
|
||||||
|
"reasoning": "Requires complex system integration and robust error handling. Designing a comprehensive test framework adds complexity."
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -46,3 +46,20 @@ Generate task files from sample tasks.json data and verify the content matches t
|
|||||||
### Details:
|
### Details:
|
||||||
|
|
||||||
|
|
||||||
|
<info added on 2025-05-01T21:59:10.551Z>
|
||||||
|
{
|
||||||
|
"id": 5,
|
||||||
|
"title": "Implement Change Detection and Update Handling",
|
||||||
|
"description": "Create a system to detect changes in task files and tasks.json, and handle updates bidirectionally. This includes implementing file watching or comparison mechanisms, determining which version is newer, and applying changes in the appropriate direction. Ensure the system handles edge cases like deleted files, new tasks, and conflicting changes.",
|
||||||
|
"status": "done",
|
||||||
|
"dependencies": [
|
||||||
|
1,
|
||||||
|
3,
|
||||||
|
4,
|
||||||
|
2
|
||||||
|
],
|
||||||
|
"acceptanceCriteria": "- Detects changes in both task files and tasks.json\n- Determines which version is newer based on modification timestamps or content\n- Applies changes in the appropriate direction (file to JSON or JSON to file)\n- Handles edge cases like deleted files, new tasks, and renamed tasks\n- Provides options for manual conflict resolution when necessary\n- Maintains data integrity during the synchronization process\n- Includes a command to force synchronization in either direction\n- Logs all synchronization activities for troubleshooting\n\nEach of these subtasks addresses a specific component of the task file generation system, following a logical progression from template design to bidirectional synchronization. The dependencies ensure that prerequisites are completed before dependent work begins, and the acceptance criteria provide clear guidelines for verifying each subtask's completion.",
|
||||||
|
"details": "[2025-05-01 21:59:07] Adding another note via MCP test."
|
||||||
|
}
|
||||||
|
</info added on 2025-05-01T21:59:10.551Z>
|
||||||
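The change-detection logic described in this subtask can be sketched as a simple mtime comparison. The helper below is illustrative only: it assumes the `tasks/task_XXX.txt` naming used elsewhere in this repository, and a real implementation would also need per-task content hashing, since any edit to tasks.json bumps the whole file's timestamp.

```javascript
import fs from 'node:fs';
import path from 'node:path';

// Decide which direction a single task should sync: from the generated
// task file into tasks.json, or from tasks.json back out to the file.
// Returns 'file-to-json', 'json-to-file', or 'in-sync'.
function detectSyncDirection(taskId, tasksJsonPath = 'tasks/tasks.json') {
  const taskFilePath = path.join(
    path.dirname(tasksJsonPath),
    `task_${String(taskId).padStart(3, '0')}.txt`
  );

  // A missing task file is treated as "regenerate from JSON".
  if (!fs.existsSync(taskFilePath)) return 'json-to-file';

  const fileMtime = fs.statSync(taskFilePath).mtimeMs;
  const jsonMtime = fs.statSync(tasksJsonPath).mtimeMs;

  if (fileMtime > jsonMtime) return 'file-to-json';
  if (jsonMtime > fileMtime) return 'json-to-file';
  return 'in-sync';
}
```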
|
|
||||||
|
|||||||
@@ -37,3 +37,29 @@ Test cases should include:
|
|||||||
- Running the command on tasks with existing implementation plans to ensure proper appending
|
- Running the command on tasks with existing implementation plans to ensure proper appending
|
||||||
|
|
||||||
Manually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements.
|
Manually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements.
|
||||||
|
|
||||||
|
# Subtasks:
|
||||||
|
## 1. Retrieve Task Content [in-progress]
|
||||||
|
### Dependencies: None
|
||||||
|
### Description: Fetch the content of the specified task from the task management system. This includes the task title, description, and any associated details.
|
||||||
|
### Details:
|
||||||
|
Implement a function to retrieve task details based on a task ID. Handle cases where the task does not exist.
|
||||||
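A minimal sketch of this retrieval step, assuming the usual `tasks/tasks.json` shape of `{ "tasks": [ { "id", "title", "description", "details" } ] }`; the function name is illustrative, not an existing module export.

```javascript
import fs from 'node:fs';

function getTaskById(taskId, tasksJsonPath = 'tasks/tasks.json') {
  const { tasks } = JSON.parse(fs.readFileSync(tasksJsonPath, 'utf8'));
  const task = tasks.find((t) => t.id === Number(taskId));
  if (!task) {
    // Surface a clear error instead of letting undefined flow downstream.
    throw new Error(`Task ${taskId} not found in ${tasksJsonPath}`);
  }
  const { id, title, description, details = '' } = task;
  return { id, title, description, details };
}
```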
|
|
||||||
|
## 2. Generate Implementation Plan with AI [pending]
|
||||||
|
### Dependencies: 40.1
|
||||||
|
### Description: Use an AI model (Claude or Perplexity) to generate an implementation plan based on the retrieved task content. The plan should outline the steps required to complete the task.
|
||||||
|
### Details:
|
||||||
|
Implement logic to switch between Claude and Perplexity APIs. Handle API authentication and rate limiting. Prompt the AI model with the task content and request a detailed implementation plan.
|
||||||
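One way to sketch the provider switch, with the Claude and Perplexity clients injected as parameters since the exact wrapper modules are not shown here; the `*.complete()` calls are placeholders, not real SDK methods.

```javascript
// Sketch of the provider dispatch with a simple fallback on failure.
// `claudeClient` and `perplexityClient` stand in for whatever wrappers
// the project actually exposes (hypothetical interface).
async function generatePlan(task, { claudeClient, perplexityClient, useResearch = false }) {
  const prompt = [
    'Produce a detailed, step-by-step implementation plan for this task:',
    `Title: ${task.title}`,
    `Description: ${task.description}`,
    task.details ? `Details: ${task.details}` : ''
  ].join('\n');

  const primary = useResearch ? perplexityClient : claudeClient;
  const fallback = useResearch ? claudeClient : perplexityClient;

  try {
    return await primary.complete(prompt);
  } catch (err) {
    // Rate limits or auth failures on the primary provider fall back to
    // the secondary one rather than failing the command outright.
    return fallback.complete(prompt);
  }
}
```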
|
|
||||||
|
## 3. Format Plan in XML [pending]
|
||||||
|
### Dependencies: 40.2
|
||||||
|
### Description: Format the generated implementation plan within XML tags. Each step in the plan should be represented as an XML element with appropriate attributes.
|
||||||
|
### Details:
|
||||||
|
Define the XML schema for the implementation plan. Implement a function to convert the AI-generated plan into the defined XML format. Ensure proper XML syntax and validation.
|
||||||
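A dependency-free sketch of the XML formatting step; the `<implementationPlan>` and `<step>` element names are assumptions, since the subtask does not pin down the schema.

```javascript
// Escape the handful of characters that would break well-formed XML.
function escapeXml(text) {
  return String(text)
    .replaceAll('&', '&amp;')
    .replaceAll('<', '&lt;')
    .replaceAll('>', '&gt;')
    .replaceAll('"', '&quot;');
}

// Wrap an array of plan steps in numbered <step> elements.
function formatPlanAsXml(steps) {
  const body = steps
    .map((step, i) => `  <step number="${i + 1}">${escapeXml(step)}</step>`)
    .join('\n');
  return `<implementationPlan>\n${body}\n</implementationPlan>`;
}
```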
|
|
||||||
|
## 4. Error Handling and Output [pending]
|
||||||
|
### Dependencies: 40.3
|
||||||
|
### Description: Implement error handling for all steps, including API failures and XML formatting errors. Output the formatted XML plan to the console or a file.
|
||||||
|
### Details:
|
||||||
|
Add try-except blocks to handle potential exceptions. Log errors for debugging. Provide informative error messages to the user. Output the XML plan in a user-friendly format.
|
||||||
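Tying the previous subtask sketches together with basic error handling and output; the helper names reuse the hypothetical ones sketched above rather than existing project functions.

```javascript
import fs from 'node:fs';

async function runImplementationPlan(taskId, clients, outputPath) {
  try {
    const task = getTaskById(taskId);
    const planText = await generatePlan(task, clients);
    const xml = formatPlanAsXml(planText.split('\n').filter(Boolean));

    if (outputPath) {
      fs.writeFileSync(outputPath, xml, 'utf8');
    } else {
      console.log(xml);
    }
    return xml;
  } catch (err) {
    // Surface API failures and formatting errors with enough context to debug.
    console.error(`Failed to generate plan for task ${taskId}: ${err.message}`);
    process.exitCode = 1;
  }
}
```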
|
|
||||||
|
|||||||
@@ -1962,9 +1962,9 @@ Implementation notes:
|
|||||||
- This stricter approach enforces configuration-as-code principles, ensures reproducibility, and prevents configuration drift, aligning with modern best practices for immutable infrastructure and automated configuration management[2][4].
|
- This stricter approach enforces configuration-as-code principles, ensures reproducibility, and prevents configuration drift, aligning with modern best practices for immutable infrastructure and automated configuration management[2][4].
|
||||||
</info added on 2025-04-22T02:41:51.174Z>
|
</info added on 2025-04-22T02:41:51.174Z>
|
||||||
|
|
||||||
## 31. Implement Integration Tests for Unified AI Service [pending]
|
## 31. Implement Integration Tests for Unified AI Service [done]
|
||||||
### Dependencies: 61.18
|
### Dependencies: 61.18
|
||||||
### Description: Implement integration tests for `ai-services-unified.js`. These tests should verify the correct routing to different provider modules based on configuration and ensure the unified service functions (`generateTextService`, `generateObjectService`, etc.) work correctly when called from modules like `task-manager.js`.
|
### Description: Implement integration tests for `ai-services-unified.js`. These tests should verify the correct routing to different provider modules based on configuration and ensure the unified service functions (`generateTextService`, `generateObjectService`, etc.) work correctly when called from modules like `task-manager.js`. [Updated: 5/2/2025]

|
||||||
### Details:
|
### Details:
|
||||||
|
|
||||||
|
|
||||||
@@ -2009,6 +2009,107 @@ For the integration tests of the Unified AI Service, consider the following impl
|
|||||||
6. Include tests for configuration changes at runtime and their effect on service behavior.
|
6. Include tests for configuration changes at runtime and their effect on service behavior.
|
||||||
</info added on 2025-04-20T03:51:23.368Z>
|
</info added on 2025-04-20T03:51:23.368Z>
|
||||||
|
|
||||||
|
<info added on 2025-05-02T18:41:13.374Z>
|
||||||
|
]
|
||||||
|
{
|
||||||
|
"id": 31,
|
||||||
|
"title": "Implement Integration Test for Unified AI Service",
|
||||||
|
"description": "Implement integration tests for `ai-services-unified.js`. These tests should verify the correct routing to different provider module based on configuration and ensure the unified service function (`generateTextService`, `generateObjectService`, etc.) work correctly when called from module like `task-manager.js`.",
|
||||||
|
"details": "\n\n<info added on 2025-04-20T03:51:23.368Z>\nFor the integration test of the Unified AI Service, consider the following implementation details:\n\n1. Setup test fixture:\n - Create a mock `.taskmasterconfig` file with different provider configuration\n - Define test case with various model selection and parameter setting\n - Use environment variable mock only for API key (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)\n\n2. Test configuration resolution:\n - Verify that `ai-services-unified.js` correctly retrieve setting from `config-manager.js`\n - Test that model selection follow the hierarchy defined in `.taskmasterconfig`\n - Ensure fallback mechanism work when primary provider are unavailable\n\n3. Mock the provider module:\n ```javascript\n jest.mock('../service/openai-service.js');\n jest.mock('../service/anthropic-service.js');\n ```\n\n4. Test specific scenario:\n - Provider selection based on configured preference\n - Parameter inheritance from config (temperature, maxToken)\n - Error handling when API key are missing\n - Proper routing when specific model are requested\n\n5. Verify integration with task-manager:\n ```javascript\n test('task-manager correctly use unified AI service with config-based setting', async () => {\n // Setup mock config with specific setting\n mockConfigManager.getAIProviderPreference.mockReturnValue(['openai', 'anthropic']);\n mockConfigManager.getModelForRole.mockReturnValue('gpt-4');\n mockConfigManager.getParameterForModel.mockReturnValue({ temperature: 0.7, maxToken: 2000 });\n \n // Verify task-manager use these setting when calling the unified service\n // ...\n });\n ```\n\n6. Include test for configuration change at runtime and their effect on service behavior.\n</info added on 2025-04-20T03:51:23.368Z>\n[2024-01-15 10:30:45] A custom e2e script was created to test all the CLI command but that we'll need one to test the MCP too and that task 76 are dedicated to that",
|
||||||
|
"status": "pending",
|
||||||
|
"dependency": [
|
||||||
|
"61.18"
|
||||||
|
],
|
||||||
|
"parentTaskId": 61
|
||||||
|
}
|
||||||
|
</info added on 2025-05-02T18:41:13.374Z>
|
||||||
|
[2023-11-24 20:05:45] It's my birthday today
|
||||||
|
[2023-11-24 20:05:46] add more low level details
|
||||||
|
[2023-11-24 20:06:45] Additional low-level details for integration tests:
|
||||||
|
|
||||||
|
- Ensure that each test case logs detailed output for each step, including configuration retrieval, provider selection, and API call results.
|
||||||
|
- Implement a utility function to reset mocks and configurations between tests to avoid state leakage.
|
||||||
|
- Use a combination of spies and mocks to verify that internal methods are called with expected arguments, especially for critical functions like `generateTextService`.
|
||||||
|
- Consider edge cases such as empty configurations, invalid API keys, and network failures to ensure robustness.
|
||||||
|
- Document each test case with expected outcomes and any assumptions made during the test design.
|
||||||
|
- Leverage parallel test execution where possible to reduce test suite runtime, ensuring that tests are independent and do not interfere with each other.
|
||||||
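The mock-reset utility mentioned above can be as small as a shared setup helper. `jest.resetAllMocks()` and `jest.resetModules()` are standard Jest APIs; the file location and the idea of clearing API-key environment variables between tests are assumptions for this sketch.

```javascript
// tests/helpers/reset-state.js (hypothetical location)
// Call this at the top of each integration suite to avoid state leakage
// between test cases.
export function registerCleanState() {
  beforeEach(() => {
    jest.resetAllMocks();  // drop recorded calls and any mocked return values
    jest.resetModules();   // force config-manager to be re-imported fresh
    delete process.env.OPENAI_API_KEY;
    delete process.env.ANTHROPIC_API_KEY;
  });
}
```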
|
<info added on 2025-05-02T20:42:14.388Z>
|
||||||
|
<info added on 2025-04-20T03:51:23.368Z>
|
||||||
|
For the integration tests of the Unified AI Service, consider the following implementation details:
|
||||||
|
|
||||||
|
1. Setup test fixtures:
|
||||||
|
- Create a mock `.taskmasterconfig` file with different provider configurations
|
||||||
|
- Define test cases with various model selections and parameter settings
|
||||||
|
- Use environment variable mocks only for API keys (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
|
||||||
|
|
||||||
|
2. Test configuration resolution:
|
||||||
|
- Verify that `ai-services-unified.js` correctly retrieves settings from `config-manager.js`
|
||||||
|
- Test that model selection follows the hierarchy defined in `.taskmasterconfig`
|
||||||
|
- Ensure fallback mechanisms work when primary providers are unavailable
|
||||||
|
|
||||||
|
3. Mock the provider modules:
|
||||||
|
```javascript
|
||||||
|
jest.mock('../services/openai-service.js');
|
||||||
|
jest.mock('../services/anthropic-service.js');
|
||||||
|
```
|
||||||
|
|
||||||
|
4. Test specific scenarios:
|
||||||
|
- Provider selection based on configured preferences
|
||||||
|
- Parameter inheritance from config (temperature, maxTokens)
|
||||||
|
- Error handling when API keys are missing
|
||||||
|
- Proper routing when specific models are requested
|
||||||
|
|
||||||
|
5. Verify integration with task-manager:
|
||||||
|
```javascript
|
||||||
|
test('task-manager correctly uses unified AI service with config-based settings', async () => {
|
||||||
|
// Setup mock config with specific settings
|
||||||
|
mockConfigManager.getAIProviderPreference.mockReturnValue(['openai', 'anthropic']);
|
||||||
|
mockConfigManager.getModelForRole.mockReturnValue('gpt-4');
|
||||||
|
mockConfigManager.getParametersForModel.mockReturnValue({ temperature: 0.7, maxTokens: 2000 });
|
||||||
|
|
||||||
|
// Verify task-manager uses these settings when calling the unified service
|
||||||
|
// ...
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
6. Include tests for configuration changes at runtime and their effect on service behavior.
|
||||||
|
</info added on 2025-04-20T03:51:23.368Z>
|
||||||
|
|
||||||
|
<info added on 2025-05-02T18:41:13.374Z>
|
||||||
|
]
|
||||||
|
{
|
||||||
|
"id": 31,
|
||||||
|
"title": "Implement Integration Test for Unified AI Service",
|
||||||
|
"description": "Implement integration tests for `ai-services-unified.js`. These tests should verify the correct routing to different provider module based on configuration and ensure the unified service function (`generateTextService`, `generateObjectService`, etc.) work correctly when called from module like `task-manager.js`.",
|
||||||
|
"details": "\n\n<info added on 2025-04-20T03:51:23.368Z>\nFor the integration test of the Unified AI Service, consider the following implementation details:\n\n1. Setup test fixture:\n - Create a mock `.taskmasterconfig` file with different provider configuration\n - Define test case with various model selection and parameter setting\n - Use environment variable mock only for API key (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)\n\n2. Test configuration resolution:\n - Verify that `ai-services-unified.js` correctly retrieve setting from `config-manager.js`\n - Test that model selection follow the hierarchy defined in `.taskmasterconfig`\n - Ensure fallback mechanism work when primary provider are unavailable\n\n3. Mock the provider module:\n ```javascript\n jest.mock('../service/openai-service.js');\n jest.mock('../service/anthropic-service.js');\n ```\n\n4. Test specific scenario:\n - Provider selection based on configured preference\n - Parameter inheritance from config (temperature, maxToken)\n - Error handling when API key are missing\n - Proper routing when specific model are requested\n\n5. Verify integration with task-manager:\n ```javascript\n test('task-manager correctly use unified AI service with config-based setting', async () => {\n // Setup mock config with specific setting\n mockConfigManager.getAIProviderPreference.mockReturnValue(['openai', 'anthropic']);\n mockConfigManager.getModelForRole.mockReturnValue('gpt-4');\n mockConfigManager.getParameterForModel.mockReturnValue({ temperature: 0.7, maxToken: 2000 });\n \n // Verify task-manager use these setting when calling the unified service\n // ...\n });\n ```\n\n6. Include test for configuration change at runtime and their effect on service behavior.\n</info added on 2025-04-20T03:51:23.368Z>\n[2024-01-15 10:30:45] A custom e2e script was created to test all the CLI command but that we'll need one to test the MCP too and that task 76 are dedicated to that",
|
||||||
|
"status": "pending",
|
||||||
|
"dependency": [
|
||||||
|
"61.18"
|
||||||
|
],
|
||||||
|
"parentTaskId": 61
|
||||||
|
}
|
||||||
|
</info added on 2025-05-02T18:41:13.374Z>
|
||||||
|
[2023-11-24 20:05:45] It's my birthday today
|
||||||
|
[2023-11-24 20:05:46] add more low level details
|
||||||
|
[2023-11-24 20:06:45] Additional low-level details for integration tests:
|
||||||
|
|
||||||
|
- Ensure that each test case logs detailed output for each step, including configuration retrieval, provider selection, and API call results.
|
||||||
|
- Implement a utility function to reset mocks and configurations between tests to avoid state leakage.
|
||||||
|
- Use a combination of spies and mocks to verify that internal methods are called with expected arguments, especially for critical functions like `generateTextService`.
|
||||||
|
- Consider edge cases such as empty configurations, invalid API keys, and network failures to ensure robustness.
|
||||||
|
- Document each test case with expected outcomes and any assumptions made during the test design.
|
||||||
|
- Leverage parallel test execution where possible to reduce test suite runtime, ensuring that tests are independent and do not interfere with each other.
|
||||||
|
|
||||||
|
<info added on 2023-11-24T20:10:00.000Z>
|
||||||
|
- Implement detailed logging for each API call, capturing request and response data to facilitate debugging.
|
||||||
|
- Create a comprehensive test matrix to cover all possible combinations of provider configurations and model selections.
|
||||||
|
- Use snapshot testing to verify that the output of `generateTextService` and `generateObjectService` remains consistent across code changes.
|
||||||
|
- Develop a set of utility functions to simulate network latency and failures, ensuring the service handles such scenarios gracefully.
|
||||||
|
- Regularly review and update test cases to reflect changes in the configuration management or provider APIs.
|
||||||
|
- Ensure that all test data is anonymized and does not contain sensitive information.
|
||||||
|
</info added on 2023-11-24T20:10:00.000Z>
|
||||||
|
</info added on 2025-05-02T20:42:14.388Z>
|
||||||
|
|
||||||
## 32. Update Documentation for New AI Architecture [done]
|
## 32. Update Documentation for New AI Architecture [done]
|
||||||
### Dependencies: 61.31
|
### Dependencies: 61.31
|
||||||
### Description: Update relevant documentation files (e.g., `architecture.mdc`, `taskmaster.mdc`, environment variable guides, README) to accurately reflect the new AI service architecture using `ai-services-unified.js`, provider modules, the Vercel AI SDK, and the updated configuration approach.
|
### Description: Update relevant documentation files (e.g., `architecture.mdc`, `taskmaster.mdc`, environment variable guides, README) to accurately reflect the new AI service architecture using `ai-services-unified.js`, provider modules, the Vercel AI SDK, and the updated configuration approach.
|
||||||
@@ -2485,11 +2586,68 @@ These enhancements ensure robust validation, unified service usage, and maintain
|
|||||||
### Details:
|
### Details:
|
||||||
|
|
||||||
|
|
||||||
## 43. Remove all unnecessary console logs [pending]
|
## 43. Remove all unnecessary console logs [done]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description:
|
### Description:
|
||||||
### Details:
|
### Details:
|
||||||
|
<info added on 2025-05-02T20:47:07.566Z>
|
||||||
|
1. Identify all files within the project directory that contain console log statements.
|
||||||
|
2. Use a code editor or IDE with search functionality to locate all instances of console.log().
|
||||||
|
3. Review each console log statement to determine if it is necessary for debugging or logging purposes.
|
||||||
|
4. For each unnecessary console log, remove the statement from the code.
|
||||||
|
5. Ensure that the removal of console logs does not affect the functionality of the application.
|
||||||
|
6. Test the application thoroughly to confirm that no errors are introduced by the removal of these logs.
|
||||||
|
7. Commit the changes to the version control system with a message indicating the cleanup of console logs.
|
||||||
|
</info added on 2025-05-02T20:47:07.566Z>
|
||||||
|
<info added on 2025-05-02T20:47:56.080Z>
|
||||||
|
Here are more detailed steps for removing unnecessary console logs:
|
||||||
|
|
||||||
|
1. Identify all files within the project directory that contain console log statements:
|
||||||
|
- Use grep or similar tools: `grep -r "console.log" --include="*.js" --include="*.jsx" --include="*.ts" --include="*.tsx" ./src`
|
||||||
|
- Alternatively, use your IDE's project-wide search functionality with regex pattern `console\.(log|debug|info|warn|error)`
|
||||||
|
|
||||||
|
2. Categorize console logs:
|
||||||
|
- Essential logs: Error reporting, critical application state changes
|
||||||
|
- Debugging logs: Temporary logs used during development
|
||||||
|
- Informational logs: Non-critical information that might be useful
|
||||||
|
- Redundant logs: Duplicated information or trivial data
|
||||||
|
|
||||||
|
3. Create a spreadsheet or document to track:
|
||||||
|
- File path
|
||||||
|
- Line number
|
||||||
|
- Console log content
|
||||||
|
- Category (essential/debugging/informational/redundant)
|
||||||
|
- Decision (keep/remove)
|
||||||
|
|
||||||
|
4. Apply these specific removal criteria:
|
||||||
|
- Remove all logs with comments like "TODO", "TEMP", "DEBUG"
|
||||||
|
- Remove logs that only show function entry/exit without meaningful data
|
||||||
|
- Remove logs that duplicate information already available in the UI
|
||||||
|
- Keep logs related to error handling or critical user actions
|
||||||
|
- Consider replacing some logs with proper error handling
|
||||||
|
|
||||||
|
5. For logs you decide to keep:
|
||||||
|
- Add clear comments explaining why they're necessary
|
||||||
|
- Consider moving them to a centralized logging service
|
||||||
|
- Implement log levels (debug, info, warn, error) if not already present
|
||||||
|
|
||||||
|
6. Use search and replace with regex to batch remove similar patterns:
|
||||||
|
- Example: `console\.log\(\s*['"]Processing.*?['"]\s*\);`
|
||||||
|
|
||||||
|
7. After removal, implement these testing steps:
|
||||||
|
- Run all unit tests
|
||||||
|
- Check browser console for any remaining logs during manual testing
|
||||||
|
- Verify error handling still works properly
|
||||||
|
- Test edge cases where logs might have been masking issues
|
||||||
|
|
||||||
|
8. Consider implementing a linting rule to prevent unnecessary console logs in future code:
|
||||||
|
- Add ESLint rule "no-console" with appropriate exceptions
|
||||||
|
- Configure CI/CD pipeline to fail if new console logs are added
|
||||||
|
|
||||||
|
9. Document any logging standards for the team to follow going forward.
|
||||||
|
|
||||||
|
10. After committing changes, monitor the application in staging environment to ensure no critical information is lost.
|
||||||
|
</info added on 2025-05-02T20:47:56.080Z>
|
||||||
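For the linting step above, ESLint's built-in `no-console` rule supports an `allow` list, so a minimal config entry could look like the sketch below (shown in the classic `.eslintrc.cjs` format; adjust if the project already ships its own ESLint configuration).

```javascript
// .eslintrc.cjs (sketch – merge into the existing config if one exists)
module.exports = {
  rules: {
    // Fail on stray console.log/debug/info, but keep warn/error available
    // for legitimate error reporting.
    'no-console': ['error', { allow: ['warn', 'error'] }]
  }
};
```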
|
|
||||||
## 44. Add setters for temperature, max tokens on per role basis. [pending]
|
## 44. Add setters for temperature, max tokens on per role basis. [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
|
|||||||
11
tasks/task_075.txt
Normal file
11
tasks/task_075.txt
Normal file
@@ -0,0 +1,11 @@
|
|||||||
|
# Task ID: 75
|
||||||
|
# Title: Integrate Google Search Grounding for Research Role
|
||||||
|
# Status: pending
|
||||||
|
# Dependencies: None
|
||||||
|
# Priority: medium
|
||||||
|
# Description: Update the AI service layer to enable Google Search Grounding specifically when a Google model is used in the 'research' role.
|
||||||
|
# Details:
|
||||||
|
**Goal:** Conditionally enable Google Search Grounding based on the AI role.\n\n**Implementation Plan:**\n\n1. **Modify `ai-services-unified.js`:** Update `generateTextService`, `streamTextService`, and `generateObjectService`.\n2. **Conditional Logic:** Inside these functions, check if `providerName === 'google'` AND `role === 'research'`.\n3. **Construct `providerOptions`:** If the condition is met, create an options object:\n ```javascript\n let providerSpecificOptions = {};\n if (providerName === 'google' && role === 'research') {\n log('info', 'Enabling Google Search Grounding for research role.');\n providerSpecificOptions = {\n google: {\n useSearchGrounding: true,\n // Optional: Add dynamic retrieval for compatible models\n // dynamicRetrievalConfig: { mode: 'MODE_DYNAMIC' } \n }\n };\n }\n ```\n4. **Pass Options to SDK:** Pass `providerSpecificOptions` to the Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`) via the `providerOptions` parameter:\n ```javascript\n const { text, ... } = await generateText({\n // ... other params\n providerOptions: providerSpecificOptions \n });\n ```\n5. **Update `supported-models.json`:** Ensure Google models intended for research (e.g., `gemini-1.5-pro-latest`, `gemini-1.5-flash-latest`) include `'research'` in their `allowed_roles` array.\n\n**Rationale:** This approach maintains the clear separation between 'main' and 'research' roles, ensuring grounding is only activated when explicitly requested via the `--research` flag or when the research model is invoked.\n\n**Clarification:** The Search Grounding feature is specifically designed to provide up-to-date information from the web when using Google models. This implementation ensures that grounding is only activated in research contexts where current information is needed, while preserving normal operation for standard tasks. The `useSearchGrounding: true` flag instructs the Google API to augment the model's knowledge with recent web search results relevant to the query.
|
||||||
|
|
||||||
|
# Test Strategy:
|
||||||
|
1. Configure a Google model (e.g., gemini-1.5-flash-latest) as the 'research' model in `.taskmasterconfig`.\n2. Run a command with the `--research` flag (e.g., `task-master add-task --prompt='Latest news on AI SDK 4.2' --research`).\n3. Verify logs show 'Enabling Google Search Grounding'.\n4. Check if the task output incorporates recent information.\n5. Configure the same Google model as the 'main' model.\n6. Run a command *without* the `--research` flag.\n7. Verify logs *do not* show grounding being enabled.\n8. Add unit tests to `ai-services-unified.test.js` to verify the conditional logic for adding `providerOptions`. Ensure mocks correctly simulate different roles and providers.
|
||||||
59
tasks/task_076.txt
Normal file
59
tasks/task_076.txt
Normal file
@@ -0,0 +1,59 @@
|
|||||||
|
# Task ID: 76
|
||||||
|
# Title: Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)
|
||||||
|
# Status: pending
|
||||||
|
# Dependencies: None
|
||||||
|
# Priority: high
|
||||||
|
# Description: Design and implement an end-to-end (E2E) test framework for the Taskmaster MCP server, enabling programmatic interaction with the FastMCP server over stdio by sending and receiving JSON tool request/response messages.
|
||||||
|
# Details:
|
||||||
|
Research existing E2E testing approaches for MCP servers, referencing examples such as the MCP Server E2E Testing Example. Architect a test harness (preferably in Python or Node.js) that can launch the FastMCP server as a subprocess, establish stdio communication, and send well-formed JSON tool request messages.
|
||||||
|
|
||||||
|
Implementation details:
|
||||||
|
1. Use `subprocess.Popen` (Python) or `child_process.spawn` (Node.js) to launch the FastMCP server with appropriate stdin/stdout pipes
|
||||||
|
2. Implement a message protocol handler that formats JSON requests with proper line endings and message boundaries
|
||||||
|
3. Create a buffered reader for stdout that correctly handles chunked responses and reconstructs complete JSON objects
|
||||||
|
4. Develop a request/response correlation mechanism using unique IDs for each request
|
||||||
|
5. Implement timeout handling for requests that don't receive responses
|
||||||
|
|
||||||
|
Implement robust parsing of JSON responses, including error handling for malformed or unexpected output. The framework should support defining test cases as scripts or data files, allowing for easy addition of new scenarios.
|
||||||
|
|
||||||
|
Test case structure should include:
|
||||||
|
- Setup phase for environment preparation
|
||||||
|
- Sequence of tool requests with expected responses
|
||||||
|
- Validation functions for response verification
|
||||||
|
- Teardown phase for cleanup
|
||||||
|
|
||||||
|
Ensure the framework can assert on both the structure and content of responses, and provide clear logging for debugging. Document setup, usage, and extension instructions. Consider cross-platform compatibility and CI integration.
|
||||||
|
|
||||||
|
**Clarification:** The E2E test framework should focus on testing the FastMCP server's ability to correctly process tool requests and return appropriate responses. This includes verifying that the server properly handles different types of tool calls (e.g., file operations, web requests, task management), validates input parameters, and returns well-structured responses. The framework should be designed to be extensible, allowing new test cases to be added as the server's capabilities evolve. Tests should cover both happy paths and error conditions to ensure robust server behavior under various scenarios.
|
||||||
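A minimal Node.js sketch of the harness described above, assuming the MCP server can be launched with `node mcp-server/server.js` and that messages are newline-delimited JSON; the entry-point path and message framing are assumptions to verify against the actual FastMCP stdio transport.

```javascript
import { spawn } from 'node:child_process';
import { createInterface } from 'node:readline';
import { randomUUID } from 'node:crypto';

// Launch the FastMCP server over stdio and correlate JSON requests with
// responses by id, with a per-request timeout.
export function startMcpClient(command = 'node', args = ['mcp-server/server.js']) {
  const child = spawn(command, args, { stdio: ['pipe', 'pipe', 'inherit'] });
  const pending = new Map(); // id -> { resolve, reject, timer }

  const rl = createInterface({ input: child.stdout });
  rl.on('line', (line) => {
    let message;
    try {
      message = JSON.parse(line);
    } catch {
      return; // ignore non-JSON log lines emitted by the server
    }
    const entry = pending.get(message.id);
    if (!entry) return;
    clearTimeout(entry.timer);
    pending.delete(message.id);
    entry.resolve(message);
  });

  function send(request, timeoutMs = 10000) {
    const id = request.id ?? randomUUID();
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        pending.delete(id);
        reject(new Error(`Timed out waiting for response to request ${id}`));
      }, timeoutMs);
      pending.set(id, { resolve, reject, timer });
      child.stdin.write(JSON.stringify({ ...request, id }) + '\n');
    });
  }

  return { send, stop: () => child.kill() };
}
```

A test case would then call `send(...)` with a tool request payload, await the correlated response, and assert on both its structure and content before tearing the server down with `stop()`.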
|
|
||||||
|
# Test Strategy:
|
||||||
|
Verify the framework by implementing a suite of representative E2E tests that cover typical tool requests and edge cases. Specific test cases should include:
|
||||||
|
|
||||||
|
1. Basic tool request/response validation
|
||||||
|
- Send a simple file_read request and verify response structure
|
||||||
|
- Test with valid and invalid file paths
|
||||||
|
- Verify error handling for non-existent files
|
||||||
|
|
||||||
|
2. Concurrent request handling
|
||||||
|
- Send multiple requests in rapid succession
|
||||||
|
- Verify all responses are received and correlated correctly
|
||||||
|
|
||||||
|
3. Large payload testing
|
||||||
|
- Test with large file contents (>1MB)
|
||||||
|
- Verify correct handling of chunked responses
|
||||||
|
|
||||||
|
4. Error condition testing
|
||||||
|
- Malformed JSON requests
|
||||||
|
- Invalid tool names
|
||||||
|
- Missing required parameters
|
||||||
|
- Server crash recovery
|
||||||
|
|
||||||
|
Confirm that tests can start and stop the FastMCP server, send requests, and accurately parse and validate responses. Implement specific assertions for response timing, structure validation using JSON schema, and content verification. Intentionally introduce malformed requests and simulate server errors to ensure robust error handling.
|
||||||
|
|
||||||
|
Implement detailed logging with different verbosity levels:
|
||||||
|
- ERROR: Failed tests and critical issues
|
||||||
|
- WARNING: Unexpected but non-fatal conditions
|
||||||
|
- INFO: Test progress and results
|
||||||
|
- DEBUG: Raw request/response data
|
||||||
|
|
||||||
|
Run the test suite in a clean environment and confirm all expected assertions and logs are produced. Validate that new test cases can be added with minimal effort and that the framework integrates with CI pipelines. Create a CI configuration that runs tests on each commit.
|
||||||
File diff suppressed because one or more lines are too long
@@ -5,6 +5,47 @@ set -u
|
|||||||
# Prevent errors in pipelines from being masked.
|
# Prevent errors in pipelines from being masked.
|
||||||
set -o pipefail
|
set -o pipefail
|
||||||
|
|
||||||
|
# --- Default Settings ---
|
||||||
|
run_verification_test=true
|
||||||
|
|
||||||
|
# --- Argument Parsing ---
|
||||||
|
# Simple loop to check for the skip flag
|
||||||
|
# Note: This needs to happen *before* the main block piped to tee
|
||||||
|
# if we want the decision logged early. Or handle args inside.
|
||||||
|
# Let's handle it before for clarity.
|
||||||
|
processed_args=()
|
||||||
|
while [[ $# -gt 0 ]]; do
|
||||||
|
case "$1" in
|
||||||
|
--skip-verification)
|
||||||
|
run_verification_test=false
|
||||||
|
echo "[INFO] Argument '--skip-verification' detected. Fallback verification will be skipped."
|
||||||
|
shift # Consume the flag
|
||||||
|
;;
|
||||||
|
--analyze-log)
|
||||||
|
# Keep the analyze-log flag handling separate for now
|
||||||
|
# It exits early, so doesn't conflict with the main run flags
|
||||||
|
processed_args+=("$1")
|
||||||
|
if [[ $# -gt 1 ]]; then
|
||||||
|
processed_args+=("$2")
|
||||||
|
shift 2
|
||||||
|
else
|
||||||
|
shift 1
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
# Unknown argument, pass it along or handle error
|
||||||
|
# For now, just pass it along in case --analyze-log needs it later
|
||||||
|
processed_args+=("$1")
|
||||||
|
shift
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
done
|
||||||
|
# Restore processed arguments ONLY if the array is not empty
|
||||||
|
if [ ${#processed_args[@]} -gt 0 ]; then
|
||||||
|
set -- "${processed_args[@]}"
|
||||||
|
fi
|
||||||
|
|
||||||
|
|
||||||
# --- Configuration ---
|
# --- Configuration ---
|
||||||
# Assumes script is run from the project root (claude-task-master)
|
# Assumes script is run from the project root (claude-task-master)
|
||||||
TASKMASTER_SOURCE_DIR="." # Current directory is the source
|
TASKMASTER_SOURCE_DIR="." # Current directory is the source
|
||||||
@@ -20,9 +61,11 @@ MAIN_ENV_FILE="$TASKMASTER_SOURCE_DIR/.env"
|
|||||||
|
|
||||||
# <<< Source the helper script >>>
|
# <<< Source the helper script >>>
|
||||||
source "$TASKMASTER_SOURCE_DIR/tests/e2e/e2e_helpers.sh"
|
source "$TASKMASTER_SOURCE_DIR/tests/e2e/e2e_helpers.sh"
|
||||||
|
# <<< Export helper functions for subshells >>>
|
||||||
|
export -f log_info log_success log_error log_step _format_duration _get_elapsed_time_for_log
|
||||||
|
|
||||||
# --- Argument Parsing for Analysis-Only Mode ---
|
# --- Argument Parsing for Analysis-Only Mode ---
|
||||||
# Check if the first argument is --analyze-log
|
# This remains the same, as it exits early if matched
|
||||||
if [ "$#" -ge 1 ] && [ "$1" == "--analyze-log" ]; then
|
if [ "$#" -ge 1 ] && [ "$1" == "--analyze-log" ]; then
|
||||||
LOG_TO_ANALYZE=""
|
LOG_TO_ANALYZE=""
|
||||||
# Check if a log file path was provided as the second argument
|
# Check if a log file path was provided as the second argument
|
||||||
@@ -169,6 +212,21 @@ log_step() {
|
|||||||
# called *inside* this block depend on it. If not, it can be removed.
|
# called *inside* this block depend on it. If not, it can be removed.
|
||||||
start_time_for_helpers=$(date +%s) # Keep if needed by helpers called inside this block
|
start_time_for_helpers=$(date +%s) # Keep if needed by helpers called inside this block
|
||||||
|
|
||||||
|
# Log the verification decision
|
||||||
|
if [ "$run_verification_test" = true ]; then
|
||||||
|
log_info "Fallback verification test will be run as part of this E2E test."
|
||||||
|
else
|
||||||
|
log_info "Fallback verification test will be SKIPPED (--skip-verification flag detected)."
|
||||||
|
fi
|
||||||
|
|
||||||
|
# --- Dependency Checks ---
|
||||||
|
log_step "Checking for dependencies (jq)"
|
||||||
|
if ! command -v jq &> /dev/null; then
|
||||||
|
log_error "Dependency 'jq' is not installed or not found in PATH. Please install jq (e.g., 'brew install jq' or 'sudo apt-get install jq')."
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
log_success "Dependency 'jq' found."
|
||||||
|
|
||||||
# --- Test Setup (Output to tee) ---
|
# --- Test Setup (Output to tee) ---
|
||||||
log_step "Setting up test environment"
|
log_step "Setting up test environment"
|
||||||
|
|
||||||
@@ -241,11 +299,7 @@ log_step() {
|
|||||||
fi
|
fi
|
||||||
log_success "PRD parsed successfully."
|
log_success "PRD parsed successfully."
|
||||||
|
|
||||||
log_step "Listing tasks"
|
log_step "Expanding Task 1 (to ensure subtask 1.1 exists)"
|
||||||
task-master list > task_list_output.log
|
|
||||||
log_success "Task list saved to task_list_output.log"
|
|
||||||
|
|
||||||
log_step "Analyzing complexity"
|
|
||||||
# Add --research flag if needed and API keys support it
|
# Add --research flag if needed and API keys support it
|
||||||
task-master analyze-complexity --research --output complexity_results.json
|
task-master analyze-complexity --research --output complexity_results.json
|
||||||
if [ ! -f "complexity_results.json" ]; then
|
if [ ! -f "complexity_results.json" ]; then
|
||||||
@@ -298,7 +352,39 @@ log_step() {

# === End Model Commands Test ===

-# === Multi-Provider Add-Task Test ===
+# === Fallback Model generateObjectService Verification ===
+if [ "$run_verification_test" = true ]; then
+log_step "Starting Fallback Model (generateObjectService) Verification (Calls separate script)"
+verification_script_path="$ORIGINAL_DIR/tests/e2e/run_fallback_verification.sh"
+
+if [ -x "$verification_script_path" ]; then
+log_info "--- Executing Fallback Verification Script: $verification_script_path ---"
+# Execute the script directly, allowing output to flow to tee
+# Pass the current directory (the test run dir) as the argument
+"$verification_script_path" "$(pwd)"
+verification_exit_code=$? # Capture exit code immediately
+log_info "--- Finished Fallback Verification Script Execution (Exit Code: $verification_exit_code) ---"
+
+# Log success/failure based on captured exit code
+if [ $verification_exit_code -eq 0 ]; then
+log_success "Fallback verification script reported success."
+else
+log_error "Fallback verification script reported FAILURE (Exit Code: $verification_exit_code)."
+# Decide whether to exit the main script or just log the error
+# exit 1 # Uncomment to make verification failure fatal
+fi
+else
+log_error "Fallback verification script not found or not executable at $verification_script_path. Skipping verification."
+# Decide whether to exit or continue
+# exit 1
+fi
+else
+log_info "Skipping Fallback Verification test as requested by flag."
+fi
+# === END Verification Section ===
+
+# === Multi-Provider Add-Task Test (Keep as is) ===
log_step "Starting Multi-Provider Add-Task Test Sequence"

# Define providers, models, and flags

@@ -308,7 +394,7 @@ log_step() {
"claude-3-7-sonnet-20250219"
"gpt-4o"
"gemini-2.5-pro-exp-03-25"
-"sonar-pro"
+"sonar-pro" # Note: This is research-only, add-task might fail if not using research model
"grok-3"
"anthropic/claude-3.7-sonnet" # OpenRouter uses Claude 3.7
)

@@ -318,6 +404,7 @@ log_step() {
# Consistent prompt for all providers
add_task_prompt="Create a task to implement user authentication using OAuth 2.0 with Google as the provider. Include steps for registering the app, handling the callback, and storing user sessions."
log_info "Using consistent prompt for add-task tests: \"$add_task_prompt\""
+echo "--- Multi-Provider Add Task Summary ---" > provider_add_task_summary.log # Initialize summary log

for i in "${!providers[@]}"; do
provider="${providers[$i]}"

@@ -341,7 +428,7 @@ log_step() {

# 2. Run add-task
log_info "Running add-task with prompt..."
-add_task_output_file="add_task_raw_output_${provider}.log"
+add_task_output_file="add_task_raw_output_${provider}_${model//\//_}.log" # Sanitize ID
# Run add-task and capture ALL output (stdout & stderr) to a file AND a variable
add_task_cmd_output=$(task-master add-task --prompt "$add_task_prompt" 2>&1 | tee "$add_task_output_file")
add_task_exit_code=${PIPESTATUS[0]}
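
Aside on the exit-code capture above: ${PIPESTATUS[0]} reports the status of the first stage of the most recent foreground pipeline, which is the usual way to read task-master's own exit code instead of tee's. One subtlety worth double-checking in this hunk is that the pipeline runs inside a command substitution, so its stages execute in a subshell and the parent shell only observes the substitution's overall status. A minimal, standalone sketch of both behaviours (the fail_stage function is illustrative, not part of the repo):

    #!/usr/bin/env bash
    # A stage that fails (exit 7), piped through tee.
    fail_stage() { echo "some output"; return 7; }

    fail_stage | tee /tmp/direct.log > /dev/null
    echo "direct pipeline:      PIPESTATUS=(${PIPESTATUS[*]})"   # prints (7 0)

    captured=$(fail_stage 2>&1 | tee /tmp/captured.log)
    echo "command substitution: PIPESTATUS=(${PIPESTATUS[*]})"   # prints (0): the parent only sees the substitution's overall status (tee's 0)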
@@ -388,29 +475,30 @@ log_step() {
echo "Provider add-task summary log available at: provider_add_task_summary.log"
# === End Multi-Provider Add-Task Test ===

-log_step "Listing tasks again (final)"
-task-master list --with-subtasks > task_list_final.log
-log_success "Final task list saved to task_list_final.log"
+log_step "Listing tasks again (after multi-add)"
+task-master list --with-subtasks > task_list_after_multi_add.log
+log_success "Task list after multi-add saved to task_list_after_multi_add.log"

-# === Test Core Task Commands ===
-log_step "Listing tasks (initial)"
-task-master list > task_list_initial.log
-log_success "Initial task list saved to task_list_initial.log"
+# === Resume Core Task Commands Test ===
+log_step "Listing tasks (for core tests)"
+task-master list > task_list_core_test_start.log
+log_success "Core test initial task list saved."

log_step "Getting next task"
-task-master next > next_task_initial.log
-log_success "Initial next task saved to next_task_initial.log"
+task-master next > next_task_core_test.log
+log_success "Core test next task saved."

log_step "Showing Task 1 details"
-task-master show 1 > task_1_details.log
-log_success "Task 1 details saved to task_1_details.log"
+task-master show 1 > task_1_details_core_test.log
+log_success "Task 1 details saved."

log_step "Adding dependency (Task 2 depends on Task 1)"
task-master add-dependency --id=2 --depends-on=1
log_success "Added dependency 2->1."

log_step "Validating dependencies (after add)"
-task-master validate-dependencies > validate_dependencies_after_add.log
+task-master validate-dependencies > validate_dependencies_after_add_core.log
log_success "Dependency validation after add saved."

log_step "Removing dependency (Task 2 depends on Task 1)"

@@ -418,7 +506,7 @@ log_step() {
log_success "Removed dependency 2->1."

log_step "Fixing dependencies (should be no-op now)"
-task-master fix-dependencies > fix_dependencies_output.log
+task-master fix-dependencies > fix_dependencies_output_core.log
log_success "Fix dependencies attempted."

# === Start New Test Section: Validate/Fix Bad Dependencies ===

@@ -483,15 +571,20 @@ log_step() {

# === End New Test Section ===

-log_step "Adding Task 11 (Manual)"
-task-master add-task --title="Manual E2E Task" --description="Add basic health check endpoint" --priority=low --dependencies=3 # Depends on backend setup
-# Assuming the new task gets ID 11 (adjust if PRD parsing changes)
-log_success "Added Task 11 manually."
-
-log_step "Adding Task 12 (AI)"
+# Find the next available task ID dynamically instead of hardcoding 11, 12
+# Assuming tasks are added sequentially and we didn't remove any core tasks yet
+last_task_id=$(jq '[.tasks[].id] | max' tasks/tasks.json)
+manual_task_id=$((last_task_id + 1))
+ai_task_id=$((manual_task_id + 1))
+
+log_step "Adding Task $manual_task_id (Manual)"
+task-master add-task --title="Manual E2E Task" --description="Add basic health check endpoint" --priority=low --dependencies=3 # Depends on backend setup
+log_success "Added Task $manual_task_id manually."
+
+log_step "Adding Task $ai_task_id (AI)"
task-master add-task --prompt="Implement basic UI styling using CSS variables for colors and spacing" --priority=medium --dependencies=1 # Depends on frontend setup
-# Assuming the new task gets ID 12
-log_success "Added Task 12 via AI prompt."
+log_success "Added Task $ai_task_id via AI prompt."

log_step "Updating Task 3 (update-task AI)"
task-master update-task --id=3 --prompt="Update backend server setup: Ensure CORS is configured to allow requests from the frontend origin."
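
For reference, a self-contained sketch of the dynamic ID computation introduced above, run against a made-up tasks/tasks.json (the real file is generated by parse-prd and carries many more fields; only the id values matter to this query):

    #!/usr/bin/env bash
    # Illustrative stand-in for tasks/tasks.json.
    cat > /tmp/tasks_example.json <<'EOF'
    { "tasks": [ { "id": 1 }, { "id": 2 }, { "id": 10 } ] }
    EOF

    last_task_id=$(jq '[.tasks[].id] | max' /tmp/tasks_example.json)  # -> 10
    manual_task_id=$((last_task_id + 1))                              # -> 11
    ai_task_id=$((manual_task_id + 1))                                # -> 12
    echo "next manual id: $manual_task_id, next AI id: $ai_task_id"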
@@ -524,8 +617,8 @@ log_step() {
log_success "Set status for Task 1 to done."

log_step "Getting next task (after status change)"
-task-master next > next_task_after_change.log
-log_success "Next task after change saved to next_task_after_change.log"
+task-master next > next_task_after_change_core.log
+log_success "Next task after change saved."

# === Start New Test Section: List Filtering ===
log_step "Listing tasks filtered by status 'done'"

@@ -543,10 +636,10 @@ log_step() {
task-master clear-subtasks --id=8
log_success "Attempted to clear subtasks from Task 8."

-log_step "Removing Tasks 11 and 12 (multi-ID)"
+log_step "Removing Tasks $manual_task_id and $ai_task_id (multi-ID)"
# Remove the tasks we added earlier
-task-master remove-task --id=11,12 -y
-log_success "Removed tasks 11 and 12."
+task-master remove-task --id="$manual_task_id,$ai_task_id" -y
+log_success "Removed tasks $manual_task_id and $ai_task_id."

# === Start New Test Section: Subtasks & Dependencies ===

@@ -569,6 +662,11 @@ log_step() {
log_step "Expanding Task 1 again (to have subtasks for next test)"
task-master expand --id=1
log_success "Attempted to expand Task 1 again."
+# Verify 1.1 exists again
+if ! jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' tasks/tasks.json > /dev/null; then
+log_error "Subtask 1.1 not found in tasks.json after re-expanding Task 1."
+exit 1
+fi

log_step "Adding dependency: Task 3 depends on Subtask 1.1"
task-master add-dependency --id=3 --depends-on=1.1
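
A note on the jq -e guard added above: with --exit-status, jq exits non-zero when the filter yields no value (or only null/false), so the leading ! turns "subtask 1.1 is missing" into a hard failure. A small, self-contained illustration against made-up files whose shape mirrors what the query expects (nothing else about the real tasks.json is implied):

    #!/usr/bin/env bash
    cat > /tmp/with_subtask.json <<'EOF'
    { "tasks": [ { "id": 1, "subtasks": [ { "id": 1, "title": "First subtask" } ] } ] }
    EOF
    cat > /tmp/without_subtask.json <<'EOF'
    { "tasks": [ { "id": 1, "subtasks": [] } ] }
    EOF

    check() {
      if jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' "$1" > /dev/null; then
        echo "$1: subtask 1.1 present"
      else
        echo "$1: subtask 1.1 missing (jq -e exited non-zero)"
      fi
    }

    check /tmp/with_subtask.json      # present
    check /tmp/without_subtask.json   # missing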
@@ -593,25 +691,17 @@ log_step() {
log_success "Generated task files."
# === End Core Task Commands Test ===

-# === AI Commands (Tested earlier implicitly with add/update/expand) ===
-log_step "Analyzing complexity (AI with Research)"
-task-master analyze-complexity --research --output complexity_results.json
-if [ ! -f "complexity_results.json" ]; then log_error "Complexity analysis failed."; exit 1; fi
-log_success "Complexity analysis saved to complexity_results.json"
+# === AI Commands (Re-test some after changes) ===
+log_step "Analyzing complexity (AI with Research - Final Check)"
+task-master analyze-complexity --research --output complexity_results_final.json
+if [ ! -f "complexity_results_final.json" ]; then log_error "Final Complexity analysis failed."; exit 1; fi
+log_success "Final Complexity analysis saved."

-log_step "Generating complexity report (Non-AI)"
-task-master complexity-report --file complexity_results.json > complexity_report_formatted.log
-log_success "Formatted complexity report saved to complexity_report_formatted.log"
+log_step "Generating complexity report (Non-AI - Final Check)"
+task-master complexity-report --file complexity_results_final.json > complexity_report_formatted_final.log
+log_success "Final Formatted complexity report saved."

-# Expand All (Commented Out)
-# log_step "Expanding All Tasks (AI - Heavy Operation, Commented Out)"
-# task-master expand --all --research
-# log_success "Attempted to expand all tasks."
-
-log_step "Expanding Task 1 (AI - Note: Subtasks were removed/cleared)"
-task-master expand --id=1
-log_success "Attempted to expand Task 1 again."
-# === End AI Commands ===
+# === End AI Commands Re-test ===

log_step "Listing tasks again (final)"
task-master list --with-subtasks > task_list_final.log

@@ -623,17 +713,7 @@ log_step() {
ABS_TEST_RUN_DIR="$(pwd)"
echo "Test artifacts and logs are located in: $ABS_TEST_RUN_DIR"
echo "Key artifact files (within above dir):"
-echo " - .env (Copied from source)"
-echo " - tasks/tasks.json"
-echo " - task_list_output.log"
-echo " - complexity_results.json"
-echo " - complexity_report_formatted.log"
-echo " - task_list_after_changes.log"
-echo " - models_initial_config.log, models_final_config.log"
-echo " - task_list_final.log"
-echo " - task_list_initial.log, next_task_initial.log, task_1_details.log"
-echo " - validate_dependencies_after_add.log, fix_dependencies_output.log"
-echo " - complexity_*.log"
+ls -1 # List files in the current directory
echo ""
echo "Full script log also available at: $LOG_FILE (relative to project root)"

tests/e2e/run_fallback_verification.sh (new executable file, 270 lines added)
@@ -0,0 +1,270 @@
#!/bin/bash

# --- Fallback Model Verification Script ---
# Purpose: Tests models marked as 'fallback' in supported-models.json
#          to see if they work with generateObjectService (via update-subtask).
# Usage:   1. Run from within a prepared E2E test run directory:
#             ./path/to/script.sh .
#          2. Run from project root (or anywhere) to use the latest run dir:
#             ./tests/e2e/run_fallback_verification.sh
#          3. Run from project root (or anywhere) targeting a specific run dir:
#             ./tests/e2e/run_fallback_verification.sh /path/to/tests/e2e/_runs/run_YYYYMMDD_HHMMSS
# Output:  Prints a summary report to standard output. Errors to standard error.

# Treat unset variables as an error when substituting.
set -u
# Prevent errors in pipelines from being masked.
set -o pipefail

# --- Embedded Helper Functions ---
# Copied from e2e_helpers.sh to make this script standalone

_format_duration() {
  local total_seconds=$1
  local minutes=$((total_seconds / 60))
  local seconds=$((total_seconds % 60))
  printf "%dm%02ds" "$minutes" "$seconds"
}

_get_elapsed_time_for_log() {
  # Needs overall_start_time defined in the main script body
  local current_time=$(date +%s)
  local elapsed_seconds=$((current_time - overall_start_time))
  _format_duration "$elapsed_seconds"
}

log_info() {
  echo "[INFO] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
}

log_success() {
  echo "[SUCCESS] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
}

log_error() {
  echo "[ERROR] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" >&2
}

log_step() {
  # Needs test_step_count defined and incremented in the main script body
  test_step_count=$((test_step_count + 1))
  echo ""
  echo "============================================="
  echo " STEP ${test_step_count}: [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1"
  echo "============================================="
}

# --- Signal Handling ---
# Global variable to hold child PID
child_pid=0
# Use a persistent log file name
PROGRESS_LOG_FILE="fallback_verification_progress.log"

cleanup() {
  echo "" # Newline after ^C
  log_error "Interrupt received. Cleaning up any running child process..."
  if [ "$child_pid" -ne 0 ]; then
    log_info "Killing child process (PID: $child_pid) and its group..."
    kill -TERM -- "-$child_pid" 2>/dev/null || kill -KILL -- "-$child_pid" 2>/dev/null
    child_pid=0
  fi
  # DO NOT delete the progress log file on interrupt
  log_info "Progress saved in: $PROGRESS_LOG_FILE"
  exit 130 # Exit with code indicating interrupt
}

# Trap SIGINT (Ctrl+C) and SIGTERM
trap cleanup INT TERM

# --- Configuration ---
# Determine the project root relative to this script's location
# Use a robust method to find the script's own directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
# Assumes this script is in tests/e2e/
PROJECT_ROOT_DIR="$( cd "$SCRIPT_DIR/../.." &> /dev/null && pwd )"
SUPPORTED_MODELS_FILE="$PROJECT_ROOT_DIR/scripts/modules/supported-models.json"
BASE_RUNS_DIR="$PROJECT_ROOT_DIR/tests/e2e/_runs"

# --- Determine Target Run Directory ---
TARGET_RUN_DIR=""
if [ "$#" -ge 1 ] && [ -n "$1" ]; then
  # Use provided argument if it exists
  TARGET_RUN_DIR="$1"
  # Make path absolute if it's relative
  if [[ "$TARGET_RUN_DIR" != /* ]]; then
    TARGET_RUN_DIR="$(pwd)/$TARGET_RUN_DIR"
  fi
  echo "[INFO] Using provided target run directory: $TARGET_RUN_DIR"
else
  # Find the latest run directory
  echo "[INFO] No run directory provided, finding latest in $BASE_RUNS_DIR..."
  TARGET_RUN_DIR=$(ls -td "$BASE_RUNS_DIR"/run_* 2>/dev/null | head -n 1)
  if [ -z "$TARGET_RUN_DIR" ]; then
    echo "[ERROR] No run directories found matching 'run_*' in $BASE_RUNS_DIR. Cannot proceed." >&2
    exit 1
  fi
  echo "[INFO] Found latest run directory: $TARGET_RUN_DIR"
fi

# Validate the target directory
if [ ! -d "$TARGET_RUN_DIR" ]; then
  echo "[ERROR] Target run directory not found or is not a directory: $TARGET_RUN_DIR" >&2
  exit 1
fi

# --- Change to Target Directory ---
echo "[INFO] Changing working directory to: $TARGET_RUN_DIR"
if ! cd "$TARGET_RUN_DIR"; then
  echo "[ERROR] Failed to cd into target directory: $TARGET_RUN_DIR" >&2
  exit 1
fi
echo "[INFO] Now operating inside: $(pwd)"

# --- Now we are inside the target run directory ---
overall_start_time=$(date +%s)
test_step_count=0
log_info "Starting fallback verification script execution in $(pwd)"
log_info "Progress will be logged to: $(pwd)/$PROGRESS_LOG_FILE"

# --- Dependency Checks ---
log_step "Checking for dependencies (jq) in verification script"
if ! command -v jq &> /dev/null; then
  log_error "Dependency 'jq' is not installed or not found in PATH."
  exit 1
fi
log_success "Dependency 'jq' found."

# --- Verification Logic ---
log_step "Starting/Resuming Fallback Model (generateObjectService) Verification"
# Ensure progress log exists, create if not
touch "$PROGRESS_LOG_FILE"

# Ensure the supported models file exists (using absolute path)
if [ ! -f "$SUPPORTED_MODELS_FILE" ]; then
  log_error "supported-models.json not found at absolute path: $SUPPORTED_MODELS_FILE."
  exit 1
fi
log_info "Using supported models file: $SUPPORTED_MODELS_FILE"

# Ensure subtask 1.1 exists (basic check, main script should guarantee)
# Check for tasks.json in the current directory (which is now the run dir)
if [ ! -f "tasks/tasks.json" ]; then
  log_error "tasks/tasks.json not found in current directory ($(pwd)). Was this run directory properly initialized?"
  exit 1
fi
if ! jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' tasks/tasks.json > /dev/null 2>&1; then
  log_error "Subtask 1.1 not found in tasks.json within $(pwd). Cannot perform update-subtask tests."
  exit 1
fi
log_info "Subtask 1.1 found in $(pwd)/tasks/tasks.json, proceeding with verification."

# Read providers and models using jq
jq -c 'to_entries[] | .key as $provider | .value[] | select(.allowed_roles[]? == "fallback") | {provider: $provider, id: .id}' "$SUPPORTED_MODELS_FILE" | while IFS= read -r model_info; do
  provider=$(echo "$model_info" | jq -r '.provider')
  model_id=$(echo "$model_info" | jq -r '.id')
  flag="" # Default flag

  # Check if already tested
  # Use grep -Fq for fixed string and quiet mode
  if grep -Fq "${provider},${model_id}," "$PROGRESS_LOG_FILE"; then
    log_info "--- Skipping: $provider / $model_id (already tested, result in $PROGRESS_LOG_FILE) ---"
    continue
  fi

  log_info "--- Verifying: $provider / $model_id ---"

  # Determine provider flag
  if [ "$provider" == "openrouter" ]; then
    flag="--openrouter"
  elif [ "$provider" == "ollama" ]; then
    flag="--ollama"
  fi

  # 1. Set the main model
  if ! command -v task-master &> /dev/null; then
    log_error "task-master command not found."
    echo "[INSTRUCTION] Please run 'npm link task-master-ai' in the project root first."
    exit 1
  fi
  log_info "Setting main model to $model_id ${flag:+using flag $flag}..."
  set_model_cmd="task-master models --set-main \"$model_id\" $flag"
  model_set_status="SUCCESS"
  if ! eval $set_model_cmd > /dev/null 2>&1; then
    log_error "Failed to set main model for $provider / $model_id. Skipping test."
    echo "$provider,$model_id,SET_MODEL_FAILED" >> "$PROGRESS_LOG_FILE"
    continue # Skip the actual test if setting fails
  fi
  log_info "Set main model ok."

  # 2. Run update-subtask
  log_info "Running update-subtask --id=1.1 --prompt='Test generateObjectService' (timeout 120s)"
  update_subtask_output_file="update_subtask_raw_output_${provider}_${model_id//\//_}.log"

  timeout 120s task-master update-subtask --id=1.1 --prompt="Simple test prompt to verify generateObjectService call." > "$update_subtask_output_file" 2>&1 &
  child_pid=$!
  wait "$child_pid"
  update_subtask_exit_code=$?
  child_pid=0

  # 3. Check result and log persistently
  result_status=""
  if [ $update_subtask_exit_code -eq 0 ] && grep -q "Successfully updated subtask #1.1" "$update_subtask_output_file"; then
    log_success "update-subtask succeeded for $provider / $model_id (Verified Output)."
    result_status="SUCCESS"
  elif [ $update_subtask_exit_code -eq 124 ]; then
    log_error "update-subtask TIMED OUT for $provider / $model_id. Check $update_subtask_output_file."
    result_status="FAILED_TIMEOUT"
  elif [ $update_subtask_exit_code -eq 130 ] || [ $update_subtask_exit_code -eq 143 ]; then
    log_error "update-subtask INTERRUPTED for $provider / $model_id."
    result_status="INTERRUPTED" # Record interruption
    # Don't exit the loop, allow script to finish or be interrupted again
  else
    log_error "update-subtask FAILED for $provider / $model_id (Exit Code: $update_subtask_exit_code). Check $update_subtask_output_file."
    result_status="FAILED"
  fi

  # Append result to the persistent log file
  echo "$provider,$model_id,$result_status" >> "$PROGRESS_LOG_FILE"

done # End of fallback verification loop

# --- Generate Final Verification Report to STDOUT ---
# Report reads from the persistent PROGRESS_LOG_FILE
echo ""
echo "--- Fallback Model Verification Report (via $0) ---"
echo "Executed inside run directory: $(pwd)"
echo "Progress log: $(pwd)/$PROGRESS_LOG_FILE"
echo ""
echo "Test Command: task-master update-subtask --id=1.1 --prompt=\"...\" (tests generateObjectService)"
echo "Models were tested by setting them as the 'main' model temporarily."
echo "Results based on exit code and output verification:"
echo ""
echo "Models CONFIRMED to support generateObjectService (Keep 'fallback' role):"
awk -F',' '$3 == "SUCCESS" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
echo ""
echo "Models FAILED generateObjectService test (Suggest REMOVING 'fallback' role):"
awk -F',' '$3 == "FAILED" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
echo ""
echo "Models TIMED OUT during test (Suggest REMOVING 'fallback' role):"
awk -F',' '$3 == "FAILED_TIMEOUT" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
echo ""
echo "Models where setting the model failed (Inconclusive):"
awk -F',' '$3 == "SET_MODEL_FAILED" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
echo ""
echo "Models INTERRUPTED during test (Inconclusive - Rerun):"
awk -F',' '$3 == "INTERRUPTED" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
echo ""
echo "-------------------------------------------------------"
echo ""

# Don't clean up the progress log
# if [ -f "$PROGRESS_LOG_FILE" ]; then
#   rm "$PROGRESS_LOG_FILE"
# fi

log_info "Finished Fallback Model (generateObjectService) Verification Script"

# Remove trap before exiting normally
trap - INT TERM

exit 0 # Exit successfully after printing the report
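
Two details of this new script are easy to miss. First, the jq filter in the verification loop implies a particular shape for supported-models.json: an object keyed by provider name whose values are arrays of model entries carrying at least an id and an allowed_roles list (those two field names come straight from the filter; everything else in the snippet below is an illustrative stand-in, not the real file). Second, each run appends one provider,model_id,STATUS line to fallback_verification_progress.log, which is what both the grep skip-check and the awk report key on, so interrupted runs can resume. A sketch of the filter against a minimal made-up file:

    #!/usr/bin/env bash
    # Illustrative stand-in for scripts/modules/supported-models.json;
    # the real file lists many more providers, models, and fields.
    cat > /tmp/supported-models-example.json <<'EOF'
    {
      "anthropic": [
        { "id": "claude-3-7-sonnet-20250219", "allowed_roles": ["main", "fallback"] },
        { "id": "some-main-only-model", "allowed_roles": ["main"] }
      ],
      "openai": [
        { "id": "gpt-4o", "allowed_roles": ["main", "fallback"] }
      ]
    }
    EOF

    # Same filter as the verification loop: one {provider, id} object per
    # model whose allowed_roles contains "fallback".
    jq -c 'to_entries[] | .key as $provider | .value[]
           | select(.allowed_roles[]? == "fallback")
           | {provider: $provider, id: .id}' /tmp/supported-models-example.json
    # -> {"provider":"anthropic","id":"claude-3-7-sonnet-20250219"}
    #    {"provider":"openai","id":"gpt-4o"}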
@@ -40,12 +40,14 @@ jest.unstable_mockModule('../../src/ai-providers/perplexity.js', () => ({

// ... Mock other providers (google, openai, etc.) similarly ...

-// Mock utils logger and API key resolver
+// Mock utils logger, API key resolver, AND findProjectRoot
const mockLog = jest.fn();
const mockResolveEnvVariable = jest.fn();
+const mockFindProjectRoot = jest.fn();
jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
log: mockLog,
-resolveEnvVariable: mockResolveEnvVariable
+resolveEnvVariable: mockResolveEnvVariable,
+findProjectRoot: mockFindProjectRoot
}));

// Import the module to test (AFTER mocks)

@@ -54,6 +56,8 @@ const { generateTextService } = await import(
);

describe('Unified AI Services', () => {
+const fakeProjectRoot = '/fake/project/root'; // Define for reuse
+
beforeEach(() => {
// Clear mocks before each test
jest.clearAllMocks(); // Clears all mocks

@@ -76,6 +80,9 @@ describe('Unified AI Services', () => {
if (key === 'PERPLEXITY_API_KEY') return 'mock-perplexity-key';
return null;
});

+// Set a default behavior for the new mock
+mockFindProjectRoot.mockReturnValue(fakeProjectRoot);
});

describe('generateTextService', () => {

@@ -91,12 +98,16 @@ describe('Unified AI Services', () => {
const result = await generateTextService(params);

expect(result).toBe('Main provider response');
-expect(mockGetMainProvider).toHaveBeenCalled();
-expect(mockGetMainModelId).toHaveBeenCalled();
-expect(mockGetParametersForRole).toHaveBeenCalledWith('main');
+expect(mockGetMainProvider).toHaveBeenCalledWith(fakeProjectRoot);
+expect(mockGetMainModelId).toHaveBeenCalledWith(fakeProjectRoot);
+expect(mockGetParametersForRole).toHaveBeenCalledWith(
+'main',
+fakeProjectRoot
+);
expect(mockResolveEnvVariable).toHaveBeenCalledWith(
'ANTHROPIC_API_KEY',
-params.session
+params.session,
+fakeProjectRoot
);
expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(1);
expect(mockGenerateAnthropicText).toHaveBeenCalledWith({

@@ -109,26 +120,43 @@ describe('Unified AI Services', () => {
{ role: 'user', content: 'Test' }
]
});
-// Verify other providers NOT called
expect(mockGeneratePerplexityText).not.toHaveBeenCalled();
});

test('should fall back to fallback provider if main fails', async () => {
const mainError = new Error('Main provider failed');
mockGenerateAnthropicText
-.mockRejectedValueOnce(mainError) // Main fails first
-.mockResolvedValueOnce('Fallback provider response'); // Fallback succeeds
+.mockRejectedValueOnce(mainError)
+.mockResolvedValueOnce('Fallback provider response');

-const params = { role: 'main', prompt: 'Fallback test' };
+const explicitRoot = '/explicit/test/root';
+const params = {
+role: 'main',
+prompt: 'Fallback test',
+projectRoot: explicitRoot
+};
const result = await generateTextService(params);

expect(result).toBe('Fallback provider response');
-expect(mockGetMainProvider).toHaveBeenCalled();
-expect(mockGetFallbackProvider).toHaveBeenCalled(); // Fallback was tried
-expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(2); // Called for main (fail) and fallback (success)
-expect(mockGeneratePerplexityText).not.toHaveBeenCalled(); // Research not called
-// Check log messages for fallback attempt
+expect(mockGetMainProvider).toHaveBeenCalledWith(explicitRoot);
+expect(mockGetFallbackProvider).toHaveBeenCalledWith(explicitRoot);
+expect(mockGetParametersForRole).toHaveBeenCalledWith(
+'main',
+explicitRoot
+);
+expect(mockGetParametersForRole).toHaveBeenCalledWith(
+'fallback',
+explicitRoot
+);
+
+expect(mockResolveEnvVariable).toHaveBeenCalledWith(
+'ANTHROPIC_API_KEY',
+undefined,
+explicitRoot
+);
+
+expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(2);
+expect(mockGeneratePerplexityText).not.toHaveBeenCalled();
expect(mockLog).toHaveBeenCalledWith(
'error',
expect.stringContaining('Service call failed for role main')

@@ -153,12 +181,40 @@ describe('Unified AI Services', () => {
const result = await generateTextService(params);

expect(result).toBe('Research provider response');
-expect(mockGetMainProvider).toHaveBeenCalled();
-expect(mockGetFallbackProvider).toHaveBeenCalled();
-expect(mockGetResearchProvider).toHaveBeenCalled(); // Research was tried
-expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(2); // main, fallback
-expect(mockGeneratePerplexityText).toHaveBeenCalledTimes(1); // research
+expect(mockGetMainProvider).toHaveBeenCalledWith(fakeProjectRoot);
+expect(mockGetFallbackProvider).toHaveBeenCalledWith(fakeProjectRoot);
+expect(mockGetResearchProvider).toHaveBeenCalledWith(fakeProjectRoot);
+expect(mockGetParametersForRole).toHaveBeenCalledWith(
+'main',
+fakeProjectRoot
+);
+expect(mockGetParametersForRole).toHaveBeenCalledWith(
+'fallback',
+fakeProjectRoot
+);
+expect(mockGetParametersForRole).toHaveBeenCalledWith(
+'research',
+fakeProjectRoot
+);
+
+expect(mockResolveEnvVariable).toHaveBeenCalledWith(
+'ANTHROPIC_API_KEY',
+undefined,
+fakeProjectRoot
+);
+expect(mockResolveEnvVariable).toHaveBeenCalledWith(
+'ANTHROPIC_API_KEY',
+undefined,
+fakeProjectRoot
+);
+expect(mockResolveEnvVariable).toHaveBeenCalledWith(
+'PERPLEXITY_API_KEY',
+undefined,
+fakeProjectRoot
+);
+
+expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(2);
+expect(mockGeneratePerplexityText).toHaveBeenCalledTimes(1);
expect(mockLog).toHaveBeenCalledWith(
'error',
expect.stringContaining('Service call failed for role fallback')

@@ -204,6 +260,23 @@ describe('Unified AI Services', () => {
);
});

+test('should use default project root or handle null if findProjectRoot returns null', async () => {
+mockFindProjectRoot.mockReturnValue(null); // Simulate not finding root
+mockGenerateAnthropicText.mockResolvedValue('Response with no root');
+
+const params = { role: 'main', prompt: 'No root test' }; // No explicit root passed
+await generateTextService(params);
+
+expect(mockGetMainProvider).toHaveBeenCalledWith(null);
+expect(mockGetParametersForRole).toHaveBeenCalledWith('main', null);
+expect(mockResolveEnvVariable).toHaveBeenCalledWith(
+'ANTHROPIC_API_KEY',
+undefined,
+null
+);
+expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(1);
+});
+
// Add more tests for edge cases:
// - Missing API keys (should throw from _resolveApiKey)
// - Unsupported provider configured (should skip and log)