Compare commits
69 Commits
fix/mcp.to ... telemetry
| SHA1 |
|---|
| 60b8f5faa3 |
| cd6e42249e |
| fcd80623b6 |
| 026815353f |
| 8a3b611fc2 |
| 6ba42b53dc |
| 3e304232ab |
| 70fa5b0031 |
| 314c0de8c4 |
| 58b417a8ce |
| a8dabf4485 |
| bc19bc7927 |
| da317f2607 |
| ed17cb0e0a |
| e96734a6cc |
| 17294ff259 |
| a96215a359 |
| 0a611843b5 |
| a1f8d52474 |
| da636f6681 |
| ca5ec03cd8 |
| c47deeb869 |
| dd90c9cb5d |
| c7042845d6 |
| 79a41543d5 |
| efce37469b |
| 4117f71c18 |
| 9f4bac8d6a |
| e53d5e1577 |
| 59230c4d91 |
| 04b6a3cb21 |
| 37178ff1b9 |
| bbc8b9cc1f |
| c955431753 |
| 21c3cb8cda |
| ab84afd036 |
| f89d2aacc0 |
| 0288311965 |
| 8ae772086d |
| 2b3ae8bf89 |
| 245c3cb398 |
| 09d839fff5 |
| 90068348d3 |
| 02e347d2d7 |
| 0527c363e3 |
| 735135efe9 |
| 4fee667a05 |
| 01963af2cb |
| 0633895f3b |
| 10442c1119 |
| 734a4fdcfc |
| 8dace2186c |
| 095e373843 |
| 0bc9bac392 |
| 0a45f4329c |
| c4b2f7e514 |
| 9684beafc3 |
| 302b916045 |
| e7f18f65b9 |
| 655c7c225a |
| e1218b3747 |
| ffa621a37c |
| cd32fd9edf |
| 590e4bd66d |
| 70d3f2f103 |
| 424aae10ed |
| a48d1f13e2 |
| 25ca1a45a0 |
| 2e17437da3 |
.changeset/beige-doodles-type.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Resolve all issues related to MCP
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

- Add support for Google Gemini models via Vercel AI SDK integration.
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Add xAI provider and Grok models support
@@ -1,8 +0,0 @@
---
'task-master-ai': minor
---

feat(expand): Enhance `expand` and `expand-all` commands

- Integrate `task-complexity-report.json` to automatically determine the number of subtasks and use tailored prompts for expansion based on prior analysis. You no longer need to try copy-pasting the recommended prompt. If it exists, it will use it for you. You can just run `task-master update --id=[id of task] --research` and it will use that prompt automatically. No extra prompt needed.
- Change default behavior to *append* new subtasks to existing ones. Use the `--force` flag to clear existing subtasks before expanding. This is helpful if you need to add more subtasks to a task but you want to do it by the batch from a given prompt. Use force if you want to start fresh with a task's subtasks.
@@ -1,9 +0,0 @@
---
'task-master-ai': patch
---

Better support for file paths on Windows, Linux & WSL.

- Standardizes handling of different path formats (URI encoded, Windows, Linux, WSL).
- Ensures tools receive a clean, absolute path suitable for the server OS.
- Simplifies tool implementation by centralizing normalization logic.
@@ -1,7 +0,0 @@
---
'task-master-ai': minor
---

Adds support for the OpenRouter AI provider. Users can now configure models available through OpenRouter (requiring an `OPENROUTER_API_KEY`) via the `task-master models` command, granting access to a wide range of additional LLMs.
- IMPORTANT FYI ABOUT OPENROUTER: Taskmaster relies on AI SDK, which itself relies on tool use. It looks like **free** models sometimes do not include tool use. For example, Gemini 2.5 pro (free) failed via OpenRouter (no tool use) but worked fine on the paid version of the model. Custom model support for Open Router is considered experimental and likely will not be further improved for some time.

@@ -1,8 +0,0 @@
---
'task-master-ai': patch
---

Improved update-subtask
- Now it has context about the parent task details
- It also has context about the subtask before it and the subtask after it (if they exist)
- Not passing all subtasks to stay token efficient
@@ -1,13 +0,0 @@
---
'task-master-ai': patch
---

Improve and adjust `init` command for robustness and updated dependencies.

- **Update Initialization Dependencies:** Ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
- **Silence `npm install` during `init`:** Prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
- **Improve Conditional Model Setup:** Reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
- **Refactor `init.js`:** Remove internal `isInteractive` flag logic.
- **Update `init` Instructions:** Tweak the "Getting Started" text displayed after `init`.
- **Fix MCP Server Launch:** Update `.cursor/mcp.json` template to use `node ./mcp-server/server.js` instead of `npx task-master-mcp`.
- **Update Default Model:** Change the default main model in the `.taskmasterconfig` template.
.changeset/floppy-plants-marry.md (Normal file, 9 lines)
@@ -0,0 +1,9 @@
---
'task-master-ai': patch
---

Fix CLI --force flag for parse-prd command

Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.

- Fixes #477
.changeset/forty-plums-stay.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---

.taskmasterconfig now supports a baseUrl field per model role (main, research, fallback), allowing endpoint overrides for any provider.
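As an illustration, a per-role override in `.taskmasterconfig` might look like the sketch below (the `models` section and role names come from this changeset; the provider, model, and URL values are placeholders, not taken from this diff):

```json
{
  "models": {
    "main": {
      "provider": "openai",
      "modelId": "gpt-4o",
      "baseUrl": "https://my-gateway.example.com/v1"
    }
  }
}
```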
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Fixes an issue with add-task which did not use the manually defined properties and still needlessly hit the AI endpoint.
.changeset/many-wasps-sell.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Task Master no longer tells you to update when you're already up to date
@@ -1,5 +0,0 @@
---
'task-master-ai': minor
---

Adds model management and new configuration file .taskmasterconfig which houses the models used for main, research and fallback. Adds models command and setter flags. Adds a --setup flag with an interactive setup. We should be calling this during init. Shows a table of active and available models when models is called without flags. Includes SWE scores and token costs, which are manually entered into the supported_models.json, the new place where models are defined for support. Config-manager.js is the core module responsible for managing the new config.
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Fixes an issue that prevented remove-subtask with comma separated tasks/subtasks from being deleted (only the first ID was being deleted). Closes #140
.changeset/nice-lies-cover.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Adds costs information to AI commands using input/output tokens and model costs.
@@ -1,10 +0,0 @@
---
'task-master-ai': patch
---

Improves next command to be subtask-aware
- The logic for determining the "next task" (findNextTask function, used by task-master next and the next_task MCP tool) has been significantly improved. Previously, it only considered top-level tasks, making its recommendation less useful when a parent task containing subtasks was already marked 'in-progress'.
- The updated logic now prioritizes finding the next available subtask within any 'in-progress' parent task, considering subtask dependencies and priority.
- If no suitable subtask is found within active parent tasks, it falls back to recommending the next eligible top-level task based on the original criteria (status, dependencies, priority).

This change makes the next command much more relevant and helpful during the implementation phase of complex tasks.
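The selection logic described above could be sketched roughly as follows (illustrative only; `findNextTask` is named in the changeset, but the field names and helpers here are assumptions):

```javascript
// Order helper: high > medium > low (assumed priority values).
const rank = { high: 0, medium: 1, low: 2 };
const byPriority = (a, b) => (rank[a.priority] ?? 1) - (rank[b.priority] ?? 1);

// A task or subtask is eligible once all of its dependencies are done.
const depsDone = (item, pool) =>
  (item.dependencies || []).every(
    (depId) => pool.find((t) => t.id === depId)?.status === 'done'
  );

function findNextTask(tasks) {
  // 1. Prefer pending subtasks inside any 'in-progress' parent task.
  for (const parent of tasks.filter((t) => t.status === 'in-progress')) {
    const subtasks = parent.subtasks || [];
    const candidate = subtasks
      .filter((st) => st.status === 'pending' && depsDone(st, subtasks))
      .sort(byPriority)[0];
    if (candidate) return candidate;
  }
  // 2. Otherwise fall back to the next eligible top-level task.
  return tasks
    .filter((t) => t.status === 'pending' && depsDone(t, tasks))
    .sort(byPriority)[0];
}
```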
@@ -1,11 +0,0 @@
---
'task-master-ai': minor
---

Adds custom model ID support for Ollama and OpenRouter providers.
- Adds the `--ollama` and `--openrouter` flags to `task-master models --set-<role>` command to set models for those providers outside of the supported models list.
- Updated `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
- Implemented live validation against OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
- Refined logic to prioritize explicit provider flags/choices over internal model list lookups in case of ID conflicts.
- Added warnings when setting custom/unvalidated models.
- We obviously don't recommend going with a custom, unproven model. If you do and find performance is good, please let us know so we can add it to the list of supported models.
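The live validation mentioned above could look something like this sketch (the `/api/v1/models` path comes from the changeset; the function name and response handling are assumptions, though OpenRouter's list endpoint does return a `{ data: [...] }` payload):

```javascript
// Hypothetical helper: check a custom model ID against OpenRouter's live model list.
async function isKnownOpenRouterModel(modelId) {
  const res = await fetch('https://openrouter.ai/api/v1/models');
  if (!res.ok) return false; // treat an unreachable API as "cannot validate"
  const { data } = await res.json(); // expected shape: { data: [{ id, ... }] }
  return Array.isArray(data) && data.some((m) => m.id === modelId);
}
```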
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Add `--status` flag to `show` command to filter displayed subtasks.
.changeset/pre.json (Normal file, 23 lines)
@@ -0,0 +1,23 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.13.2"
  },
  "changesets": [
    "beige-doodles-type",
    "floppy-plants-marry",
    "forty-plums-stay",
    "many-wasps-sell",
    "red-oranges-attend",
    "red-suns-wash",
    "sharp-dingos-melt",
    "six-cloths-happen",
    "slow-singers-swim",
    "social-masks-fold",
    "soft-zoos-flow",
    "ten-ways-mate",
    "tricky-wombats-spend",
    "wide-eyes-relax"
  ]
}
@@ -1,7 +0,0 @@
---
'task-master-ai': minor
---

Integrate OpenAI as a new AI provider.
- Enhance `models` command/tool to display API key status.
- Implement model-specific `maxTokens` override based on `supported-models.json` to save you if you use an incorrect max token value.
.changeset/red-oranges-attend.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix ERR_MODULE_NOT_FOUND when trying to run MCP Server
@@ -2,4 +2,4 @@
 'task-master-ai': patch
 ---

-Add integration for Roo Code
+Add src directory to exports
.changeset/sharp-dingos-melt.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix the error handling of task status settings
.changeset/six-cloths-happen.md (Normal file, 7 lines)
@@ -0,0 +1,7 @@
---
'task-master-ai': patch
---

Remove caching layer from MCP direct functions for task listing, next task, and complexity report

- Fixes issues users were having where they were getting stale data
.changeset/slow-singers-swim.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix for issue #409 LOG_LEVEL Pydantic validation error
.changeset/small-toys-fly.md (Normal file, 7 lines)
@@ -0,0 +1,7 @@
---
'task-master-ai': patch
---

Small fixes
- `next` command no longer incorrectly suggests that subtasks be broken down into subtasks in the CLI
- fixes the `append` flag so it properly works in the CLI
.changeset/social-masks-fold.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---

Display task complexity scores in task lists, next task, and task details views.
.changeset/soft-zoos-flow.md (Normal file, 7 lines)
@@ -0,0 +1,7 @@
---
'task-master-ai': patch
---

Fix initial .env.example to work out of the box

- Closes #419
.changeset/ten-ways-mate.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix default fallback model and maxTokens in Taskmaster initialization
@@ -1,9 +0,0 @@
---
'task-master-ai': minor
---
Tweaks Perplexity AI calls for research mode to max out input tokens and get day-fresh information
- Forces temp at 0.1 for highly deterministic output, no variations
- Adds a system prompt to further improve the output
- Correctly uses the maximum input tokens (8,719, used 8,700) for perplexity
- Specifies to use a high degree of research across the web
- Specifies to use information that is as fresh as today; this supports things like capturing brand new announcements like new GPT models and being able to query for those in research. 🔥
.changeset/tricky-wombats-spend.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix bug when updating tasks on the MCP server (#412)
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Fix --task to --num-tasks in ui + related tests - issue #324
@@ -1,9 +0,0 @@
---
'task-master-ai': patch
---

Adds a 'models' CLI and MCP command to get the current model configuration, available models, and gives the ability to set main/research/fallback models.
- In the CLI, `task-master models` shows the current models config. Using the `--setup` flag launches an interactive setup that allows you to easily select the models you want to use for each of the three roles. Use `q` during the interactive setup to cancel the setup.
- In the MCP, responses are simplified into a RESTful format (instead of the full CLI output). The agent can use the `models` tool with different arguments, including `listAvailableModels` to get available models. Run without arguments, it will return the current configuration. Arguments are available to set the model for each of the three roles. This allows you to manage Taskmaster AI providers and models directly from either the CLI or MCP or both.
- Updated the CLI help menu when you run `task-master` to include missing commands and .taskmasterconfig information.
- Adds `--research` flag to `add-task` so you can hit up Perplexity right from the add-task flow, rather than having to add a task and then update it.
.changeset/wide-eyes-relax.md (Normal file, 11 lines)
@@ -0,0 +1,11 @@
---
'task-master-ai': patch
---

Fix duplicate output on CLI help screen

- Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
- Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
- Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
- Ensures a clean, branded help experience with no repeated content.
- Fixes #339
@@ -25,6 +25,7 @@ This document outlines the architecture and usage patterns for interacting with
* Implements **retry logic** for specific API errors (`_attemptProviderCallWithRetries`).
* Resolves API keys automatically via `_resolveApiKey` (using `resolveEnvVariable`).
* Maps requests to the correct provider implementation (in `src/ai-providers/`) via `PROVIDER_FUNCTIONS`.
* Returns a structured object containing the primary AI result (`mainResult`) and telemetry data (`telemetryData`). See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for details on how this telemetry data is propagated and handled.

* **Provider Implementations (`src/ai-providers/*.js`):**
  * Contain provider-specific wrappers around Vercel AI SDK functions (`generateText`, `generateObject`).
@@ -42,6 +42,7 @@ alwaysApply: false
- Resolves API keys (from `.env` or `session.env`).
- Implements fallback and retry logic.
- Orchestrates calls to provider-specific implementations (`src/ai-providers/`).
- Telemetry data generated by the AI service layer is propagated upwards through core logic, direct functions, and MCP tools. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the detailed integration pattern.

- **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations**
  - **Purpose**: Provider-specific wrappers for Vercel AI SDK functions.
@@ -116,7 +116,7 @@ Taskmaster configuration is managed through two main mechanisms:
* For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`.
* Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`).

-**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
+**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.
**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project.
@@ -3,7 +3,6 @@ description: Glossary of other Cursor rules
globs: **/*
alwaysApply: true
---

# Glossary of Task Master Cursor Rules

This file provides a quick reference to the purpose of each rule file located in the `.cursor/rules` directory.
@@ -23,4 +22,5 @@ This file provides a quick reference to the purpose of each rule file located in
- **[`tests.mdc`](mdc:.cursor/rules/tests.mdc)**: Guidelines for implementing and maintaining tests for Task Master CLI.
- **[`ui.mdc`](mdc:.cursor/rules/ui.mdc)**: Guidelines for implementing and maintaining user interface components.
- **[`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)**: Guidelines for implementing utility functions.
- **[`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc)**: Guidelines for integrating AI usage telemetry across Task Master.
||||
@@ -522,3 +522,8 @@ Follow these steps to add MCP support for an existing Task Master command (see [
|
||||
// Add more functions as implemented
|
||||
};
|
||||
```
|
||||
|
||||
## Telemetry Integration
|
||||
|
||||
- Direct functions calling core logic that involves AI should receive and pass through `telemetryData` within their successful `data` payload. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the standard pattern.
|
||||
- MCP tools use `handleApiResult`, which ensures the `data` object (potentially including `telemetryData`) from the direct function is correctly included in the final response.
|
||||
|
||||
@@ -3,7 +3,6 @@ description: Guidelines for integrating new features into the Task Master CLI
globs: scripts/modules/*.js
alwaysApply: false
---

# Task Master Feature Integration Guidelines

## Feature Placement Decision Process
@@ -196,6 +195,8 @@ The standard pattern for adding a feature follows this workflow:
- ✅ **DO**: If an MCP tool fails with vague errors (e.g., JSON parsing issues like `Unexpected token ... is not valid JSON`), **try running the equivalent CLI command directly in the terminal** (e.g., `task-master expand --all`). CLI output often provides much more specific error messages (like missing function definitions or stack traces from the core logic) that pinpoint the root cause.
- ❌ **DON'T**: Rely solely on MCP logs if the error is unclear; use the CLI as a complementary debugging tool for core logic issues.

- **Telemetry Integration**: Ensure AI calls correctly handle and propagate `telemetryData` as described in [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc).

```javascript
// 1. CORE LOGIC: Add function to appropriate module (example in task-manager.js)
/**
.cursor/rules/telemetry.mdc (Normal file, 228 lines)
@@ -0,0 +1,228 @@
---
description: Guidelines for integrating AI usage telemetry across Task Master.
globs: scripts/modules/**/*.js,mcp-server/src/**/*.js
alwaysApply: true
---

# AI Usage Telemetry Integration

This document outlines the standard pattern for capturing, propagating, and handling AI usage telemetry data (cost, tokens, model, etc.) across the Task Master stack. This ensures consistent telemetry for both CLI and MCP interactions.

## Overview

Telemetry data is generated within the unified AI service layer ([`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js)) and then passed upwards through the calling functions.

- **Data Source**: [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js) (specifically its `generateTextService`, `generateObjectService`, etc.) returns an object like `{ mainResult: AI_CALL_OUTPUT, telemetryData: TELEMETRY_OBJECT }`.
- **`telemetryData` Object Structure**:
  ```json
  {
    "timestamp": "ISO_STRING_DATE",
    "userId": "USER_ID_FROM_CONFIG",
    "commandName": "invoking_command_or_tool_name",
    "modelUsed": "ai_model_id",
    "providerName": "ai_provider_name",
    "inputTokens": NUMBER,
    "outputTokens": NUMBER,
    "totalTokens": NUMBER,
    "totalCost": NUMBER, // e.g., 0.012414
    "currency": "USD" // e.g., "USD"
  }
  ```

## Integration Pattern by Layer

The key principle is that each layer receives telemetry data from the layer below it (if applicable) and passes it to the layer above it, or handles it for display in the case of the CLI.

### 1. Core Logic Functions (e.g., in `scripts/modules/task-manager/`)

Functions in this layer that invoke AI services are responsible for handling the `telemetryData` they receive from [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js).

- **Actions**:
  1. Call the appropriate AI service function (e.g., `generateObjectService`).
     - Pass `commandName` (e.g., `add-task`, `expand-task`) and `outputType` (e.g., `cli` or `mcp`) in the `params` object to the AI service. The `outputType` can be derived from context (e.g., presence of `mcpLog`).
  2. The AI service returns an object, e.g., `aiServiceResponse = { mainResult: {/*AI output*/}, telemetryData: {/*telemetry data*/} }`.
  3. Extract `aiServiceResponse.mainResult` for the core processing.
  4. **Must return an object that includes `aiServiceResponse.telemetryData`**.
     Example: `return { operationSpecificData: /*...*/, telemetryData: aiServiceResponse.telemetryData };`

- **CLI Output Handling (If Applicable)**:
  - If the core function also handles CLI output (e.g., it has an `outputFormat` parameter that can be `'text'` or `'cli'`):
    1. Check if `outputFormat === 'text'` (or `'cli'`).
    2. If so, and if `aiServiceResponse.telemetryData` is available, call `displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli')` from [`scripts/modules/ui.js`](mdc:scripts/modules/ui.js).
  - This ensures telemetry is displayed directly to CLI users after the main command output.

- **Example Snippet (Core Logic in `scripts/modules/task-manager/someAiAction.js`)**:
  ```javascript
  import { generateObjectService } from '../ai-services-unified.js';
  import { displayAiUsageSummary } from '../ui.js';

  async function performAiRelatedAction(params, context, outputFormat = 'text') {
    const { commandNameFromContext, /* other context vars */ } = context;
    let aiServiceResponse = null;

    try {
      aiServiceResponse = await generateObjectService({
        // ... other parameters for AI service ...
        commandName: commandNameFromContext || 'default-action-name',
        outputType: context.mcpLog ? 'mcp' : 'cli' // Derive outputType
      });

      const usefulAiOutput = aiServiceResponse.mainResult.object;
      // ... do work with usefulAiOutput ...

      if (outputFormat === 'text' && aiServiceResponse.telemetryData) {
        displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
      }

      return {
        actionData: /* results of processing */,
        telemetryData: aiServiceResponse.telemetryData
      };
    } catch (error) {
      // ... handle error ...
      throw error;
    }
  }
  ```

### 2. Direct Function Wrappers (in `mcp-server/src/core/direct-functions/`)

These functions adapt core logic for the MCP server, ensuring structured responses.

- **Actions**:
  1. Call the corresponding core logic function.
     - Pass necessary context (e.g., `session`, `mcpLog`, `projectRoot`).
     - Provide the `commandName` (typically derived from the MCP tool name) and `outputType: 'mcp'` in the context object passed to the core function.
     - If the core function supports an `outputFormat` parameter, pass `'json'` to suppress CLI-specific UI.
  2. The core logic function returns an object (e.g., `coreResult = { actionData: ..., telemetryData: ... }`).
  3. Include `coreResult.telemetryData` as a field within the `data` object of the successful response returned by the direct function.

- **Example Snippet (Direct Function `someAiActionDirect.js`)**:
  ```javascript
  import { performAiRelatedAction } from '../../../../scripts/modules/task-manager/someAiAction.js'; // Core function
  import { createLogWrapper } from '../../tools/utils.js'; // MCP Log wrapper

  export async function someAiActionDirect(args, log, context = {}) {
    const { session } = context;
    // ... prepare arguments for core function from args, including args.projectRoot ...

    try {
      const coreResult = await performAiRelatedAction(
        { /* parameters for core function */ },
        { // Context for core function
          session,
          mcpLog: createLogWrapper(log),
          projectRoot: args.projectRoot,
          commandNameFromContext: 'mcp_tool_some_ai_action', // Example command name
          outputType: 'mcp'
        },
        'json' // Request 'json' output format from core function
      );

      return {
        success: true,
        data: {
          operationSpecificData: coreResult.actionData,
          telemetryData: coreResult.telemetryData // Pass telemetry through
        }
      };
    } catch (error) {
      // ... error handling, return { success: false, error: ... } ...
    }
  }
  ```

### 3. MCP Tools (in `mcp-server/src/tools/`)

These are the exposed endpoints for MCP clients.

- **Actions**:
  1. Call the corresponding direct function wrapper.
  2. The direct function returns an object structured like `{ success: true, data: { operationSpecificData: ..., telemetryData: ... } }` (or an error object).
  3. Pass this entire result object to `handleApiResult(result, log)` from [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js).
  4. `handleApiResult` ensures that the `data` field from the direct function's response (which correctly includes `telemetryData`) is part of the final MCP response.

- **Example Snippet (MCP Tool `some_ai_action.js`)**:
  ```javascript
  import { someAiActionDirect } from '../core/task-master-core.js';
  import { handleApiResult, withNormalizedProjectRoot } from './utils.js';
  // ... zod for parameters ...

  export function registerSomeAiActionTool(server) {
    server.addTool({
      name: "some_ai_action",
      // ... description, parameters ...
      execute: withNormalizedProjectRoot(async (args, { log, session }) => {
        try {
          const resultFromDirectFunction = await someAiActionDirect(
            { /* args including projectRoot */ },
            log,
            { session }
          );
          return handleApiResult(resultFromDirectFunction, log); // This passes the nested telemetryData through
        } catch (error) {
          // ... error handling ...
        }
      })
    });
  }
  ```

### 4. CLI Commands (`scripts/modules/commands.js`)

These define the command-line interface.

- **Actions**:
  1. Call the appropriate core logic function.
  2. Pass `outputFormat: 'text'` (or ensure the core function defaults to text-based output for CLI).
  3. The core logic function (as per Section 1) is responsible for calling `displayAiUsageSummary` if telemetry data is available and it's in CLI mode.
  4. The command action itself **should not** call `displayAiUsageSummary` if the core logic function already handles this. This avoids duplicate display.

- **Example Snippet (CLI Command in `commands.js`)**:
  ```javascript
  // In scripts/modules/commands.js
  import { performAiRelatedAction } from './task-manager/someAiAction.js'; // Core function

  programInstance
    .command('some-cli-ai-action')
    // ... .option() ...
    .action(async (options) => {
      try {
        const projectRoot = findProjectRoot() || '.'; // Example root finding
        // ... prepare parameters for core function from command options ...
        await performAiRelatedAction(
          { /* parameters for core function */ },
          { // Context for core function
            projectRoot,
            commandNameFromContext: 'some-cli-ai-action',
            outputType: 'cli'
          },
          'text' // Explicitly request text output format for CLI
        );
        // Core function handles displayAiUsageSummary internally for 'text' outputFormat
      } catch (error) {
        // ... error handling ...
      }
    });
  ```

## Summary Flow

The telemetry data flows as follows:

1. **[`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js)**: Generates `telemetryData` and returns `{ mainResult, telemetryData }`.
2. **Core Logic Function**:
   * Receives `{ mainResult, telemetryData }`.
   * Uses `mainResult`.
   * If CLI (`outputFormat: 'text'`), calls `displayAiUsageSummary(telemetryData)`.
   * Returns `{ operationSpecificData, telemetryData }`.
3. **Direct Function Wrapper**:
   * Receives `{ operationSpecificData, telemetryData }` from core logic.
   * Returns `{ success: true, data: { operationSpecificData, telemetryData } }`.
4. **MCP Tool**:
   * Receives direct function response.
   * `handleApiResult` ensures the final MCP response to the client is `{ success: true, data: { operationSpecificData, telemetryData } }`.
5. **CLI Command**:
   * Calls core logic with `outputFormat: 'text'`. Display is handled by core logic.

This pattern ensures telemetry is captured and appropriately handled/exposed across all interaction modes.
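The rules above reference `displayAiUsageSummary` from `scripts/modules/ui.js` without showing it. Purely as a sketch of what such a helper might do with the documented `telemetryData` fields (an assumption, not the actual implementation):

```javascript
// Hypothetical sketch: render the telemetryData object documented above for CLI users.
function displayAiUsageSummary(telemetryData, outputType = 'cli') {
  if (outputType !== 'cli' || !telemetryData) return; // only the CLI prints a summary
  const { providerName, modelUsed, inputTokens, outputTokens, totalCost, currency } =
    telemetryData;
  console.log(
    `AI usage: ${providerName}/${modelUsed} | ` +
      `tokens in/out: ${inputTokens}/${outputTokens} | ` +
      `cost: ${totalCost} ${currency}`
  );
}
```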
.github/workflows/pre-release.yml (vendored, Normal file, 62 lines)
@@ -0,0 +1,62 @@
name: Pre-Release (RC)

on:
  workflow_dispatch: # Allows manual triggering from GitHub UI/API

concurrency: pre-release-${{ github.ref }}

jobs:
  rc:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'

      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            */*/node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        timeout-minutes: 2

      - name: Enter RC mode
        run: |
          npx changeset pre exit || true
          npx changeset pre enter rc

      - name: Version RC packages
        run: npx changeset version
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Create Release Candidate Pull Request or Publish Release Candidate to npm
        uses: changesets/action@v1
        with:
          publish: npm run release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Exit RC mode
        run: npx changeset pre exit

      - name: Commit & Push changes
        uses: actions-js/push@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}
          message: 'chore: rc version bump'
.github/workflows/release.yml (vendored, 3 lines changed)
@@ -33,6 +33,9 @@ jobs:
        run: npm ci
        timeout-minutes: 2

      - name: Exit pre-release mode (safety check)
        run: npx changeset pre exit || true

      - name: Create Release Pull Request or Publish to npm
        uses: changesets/action@v1
        with:
.gitignore (vendored, 3 lines changed)
@@ -61,3 +61,6 @@ dist
*.debug
init-debug.log
dev-debug.log

# NPMRC
.npmrc
@@ -15,7 +15,7 @@
   "fallback": {
     "provider": "anthropic",
     "modelId": "claude-3-7-sonnet-20250219",
-    "maxTokens": 120000,
+    "maxTokens": 8192,
     "temperature": 0.2
   }
 },
@@ -26,6 +26,7 @@
  "defaultPriority": "medium",
  "projectName": "Taskmaster",
  "ollamaBaseUrl": "http://localhost:11434/api",
  "userId": "1234567890",
  "azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
 }
}
CHANGELOG.md (114 lines)
@@ -1,5 +1,119 @@
# task-master-ai

## 0.14.0-rc.0

### Minor Changes

- [#521](https://github.com/eyaltoledano/claude-task-master/pull/521) [`ed17cb0`](https://github.com/eyaltoledano/claude-task-master/commit/ed17cb0e0a04dedde6c616f68f24f3660f68dd04) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - .taskmasterconfig now supports a baseUrl field per model role (main, research, fallback), allowing endpoint overrides for any provider.

- [#528](https://github.com/eyaltoledano/claude-task-master/pull/528) [`58b417a`](https://github.com/eyaltoledano/claude-task-master/commit/58b417a8ce697e655f749ca4d759b1c20014c523) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Display task complexity scores in task lists, next task, and task details views.

### Patch Changes

- [#478](https://github.com/eyaltoledano/claude-task-master/pull/478) [`4117f71`](https://github.com/eyaltoledano/claude-task-master/commit/4117f71c18ee4d321a9c91308d00d5d69bfac61e) Thanks [@joedanz](https://github.com/joedanz)! - Fix CLI --force flag for parse-prd command

  Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.

  - Fixes #477

- [#511](https://github.com/eyaltoledano/claude-task-master/pull/511) [`17294ff`](https://github.com/eyaltoledano/claude-task-master/commit/17294ff25918d64278674e558698a1a9ad785098) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Task Master no longer tells you to update when you're already up to date

- [#523](https://github.com/eyaltoledano/claude-task-master/pull/523) [`da317f2`](https://github.com/eyaltoledano/claude-task-master/commit/da317f2607ca34db1be78c19954996f634c40923) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix the error handling of task status settings

- [#527](https://github.com/eyaltoledano/claude-task-master/pull/527) [`a8dabf4`](https://github.com/eyaltoledano/claude-task-master/commit/a8dabf44856713f488960224ee838761716bba26) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove caching layer from MCP direct functions for task listing, next task, and complexity report

  - Fixes issues users were having where they were getting stale data

- [#417](https://github.com/eyaltoledano/claude-task-master/pull/417) [`a1f8d52`](https://github.com/eyaltoledano/claude-task-master/commit/a1f8d52474fdbdf48e17a63e3f567a6d63010d9f) Thanks [@ksylvan](https://github.com/ksylvan)! - Fix for issue #409 LOG_LEVEL Pydantic validation error

- [#501](https://github.com/eyaltoledano/claude-task-master/pull/501) [`0a61184`](https://github.com/eyaltoledano/claude-task-master/commit/0a611843b56a856ef0a479dc34078326e05ac3a8) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix initial .env.example to work out of the box

  - Closes #419

- [#435](https://github.com/eyaltoledano/claude-task-master/pull/435) [`a96215a`](https://github.com/eyaltoledano/claude-task-master/commit/a96215a359b25061fd3b3f3c7b10e8ac0390c062) Thanks [@lebsral](https://github.com/lebsral)! - Fix default fallback model and maxTokens in Taskmaster initialization

- [#517](https://github.com/eyaltoledano/claude-task-master/pull/517) [`e96734a`](https://github.com/eyaltoledano/claude-task-master/commit/e96734a6cc6fec7731de72eb46b182a6e3743d02) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bug when updating tasks on the MCP server (#412)

- [#496](https://github.com/eyaltoledano/claude-task-master/pull/496) [`efce374`](https://github.com/eyaltoledano/claude-task-master/commit/efce37469bc58eceef46763ba32df1ed45242211) Thanks [@joedanz](https://github.com/joedanz)! - Fix duplicate output on CLI help screen

  - Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
  - Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
  - Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
  - Ensures a clean, branded help experience with no repeated content.
  - Fixes #339

## 0.13.1

### Patch Changes

- [#399](https://github.com/eyaltoledano/claude-task-master/pull/399) [`734a4fd`](https://github.com/eyaltoledano/claude-task-master/commit/734a4fdcfc89c2e089255618cf940561ad13a3c8) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix ERR_MODULE_NOT_FOUND when trying to run MCP Server

## 0.13.0

### Minor Changes

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ef782ff`](https://github.com/eyaltoledano/claude-task-master/commit/ef782ff5bd4ceb3ed0dc9ea82087aae5f79ac933) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - feat(expand): Enhance `expand` and `expand-all` commands

  - Integrate `task-complexity-report.json` to automatically determine the number of subtasks and use tailored prompts for expansion based on prior analysis. You no longer need to try copy-pasting the recommended prompt. If it exists, it will use it for you. You can just run `task-master update --id=[id of task] --research` and it will use that prompt automatically. No extra prompt needed.
  - Change default behavior to _append_ new subtasks to existing ones. Use the `--force` flag to clear existing subtasks before expanding. This is helpful if you need to add more subtasks to a task but you want to do it by the batch from a given prompt. Use force if you want to start fresh with a task's subtasks.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`87d97bb`](https://github.com/eyaltoledano/claude-task-master/commit/87d97bba00d84e905756d46ef96b2d5b984e0f38) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds support for the OpenRouter AI provider. Users can now configure models available through OpenRouter (requiring an `OPENROUTER_API_KEY`) via the `task-master models` command, granting access to a wide range of additional LLMs. - IMPORTANT FYI ABOUT OPENROUTER: Taskmaster relies on AI SDK, which itself relies on tool use. It looks like **free** models sometimes do not include tool use. For example, Gemini 2.5 pro (free) failed via OpenRouter (no tool use) but worked fine on the paid version of the model. Custom model support for Open Router is considered experimental and likely will not be further improved for some time.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`1ab836f`](https://github.com/eyaltoledano/claude-task-master/commit/1ab836f191cb8969153593a9a0bd47fc9aa4a831) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds model management and new configuration file .taskmasterconfig which houses the models used for main, research and fallback. Adds models command and setter flags. Adds a --setup flag with an interactive setup. We should be calling this during init. Shows a table of active and available models when models is called without flags. Includes SWE scores and token costs, which are manually entered into the supported_models.json, the new place where models are defined for support. Config-manager.js is the core module responsible for managing the new config.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`c8722b0`](https://github.com/eyaltoledano/claude-task-master/commit/c8722b0a7a443a73b95d1bcd4a0b68e0fce2a1cd) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds custom model ID support for Ollama and OpenRouter providers.

  - Adds the `--ollama` and `--openrouter` flags to `task-master models --set-<role>` command to set models for those providers outside of the supported models list.
  - Updated `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
  - Implemented live validation against OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
  - Refined logic to prioritize explicit provider flags/choices over internal model list lookups in case of ID conflicts.
  - Added warnings when setting custom/unvalidated models.
  - We obviously don't recommend going with a custom, unproven model. If you do and find performance is good, please let us know so we can add it to the list of supported models.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`2517bc1`](https://github.com/eyaltoledano/claude-task-master/commit/2517bc112c9a497110f3286ca4bfb4130c9addcb) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Integrate OpenAI as a new AI provider. - Enhance `models` command/tool to display API key status. - Implement model-specific `maxTokens` override based on `supported-models.json` to save you if you use an incorrect max token value.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`9a48278`](https://github.com/eyaltoledano/claude-task-master/commit/9a482789f7894f57f655fb8d30ba68542bd0df63) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Tweaks Perplexity AI calls for research mode to max out input tokens and get day-fresh information - Forces temp at 0.1 for highly deterministic output, no variations - Adds a system prompt to further improve the output - Correctly uses the maximum input tokens (8,719, used 8,700) for perplexity - Specifies to use a high degree of research across the web - Specifies to use information that is as fresh as today; this supports things like capturing brand new announcements like new GPT models and being able to query for those in research. 🔥

### Patch Changes

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`842eaf7`](https://github.com/eyaltoledano/claude-task-master/commit/842eaf722498ddf7307800b4cdcef4ac4fd7e5b0) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - - Add support for Google Gemini models via Vercel AI SDK integration.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ed79d4f`](https://github.com/eyaltoledano/claude-task-master/commit/ed79d4f4735dfab4124fa189214c0bd5e23a6860) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add xAI provider and Grok models support

- [#378](https://github.com/eyaltoledano/claude-task-master/pull/378) [`ad89253`](https://github.com/eyaltoledano/claude-task-master/commit/ad89253e313a395637aa48b9f92cc39b1ef94ad8) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Better support for file paths on Windows, Linux & WSL.

  - Standardizes handling of different path formats (URI encoded, Windows, Linux, WSL).
  - Ensures tools receive a clean, absolute path suitable for the server OS.
  - Simplifies tool implementation by centralizing normalization logic.

- [#285](https://github.com/eyaltoledano/claude-task-master/pull/285) [`2acba94`](https://github.com/eyaltoledano/claude-task-master/commit/2acba945c0afee9460d8af18814c87e80f747e9f) Thanks [@neno-is-ooo](https://github.com/neno-is-ooo)! - Add integration for Roo Code

- [#378](https://github.com/eyaltoledano/claude-task-master/pull/378) [`d63964a`](https://github.com/eyaltoledano/claude-task-master/commit/d63964a10eed9be17856757661ff817ad6bacfdc) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improved update-subtask - Now it has context about the parent task details - It also has context about the subtask before it and the subtask after it (if they exist) - Not passing all subtasks to stay token efficient

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`5f504fa`](https://github.com/eyaltoledano/claude-task-master/commit/5f504fafb8bdaa0043c2d20dee8bbb8ec2040d85) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improve and adjust `init` command for robustness and updated dependencies.

  - **Update Initialization Dependencies:** Ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
  - **Silence `npm install` during `init`:** Prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
  - **Improve Conditional Model Setup:** Reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
  - **Refactor `init.js`:** Remove internal `isInteractive` flag logic.
  - **Update `init` Instructions:** Tweak the "Getting Started" text displayed after `init`.
  - **Fix MCP Server Launch:** Update `.cursor/mcp.json` template to use `node ./mcp-server/server.js` instead of `npx task-master-mcp`.
  - **Update Default Model:** Change the default main model in the `.taskmasterconfig` template.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`96aeeff`](https://github.com/eyaltoledano/claude-task-master/commit/96aeeffc195372722c6a07370540e235bfe0e4d8) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fixes an issue with add-task which did not use the manually defined properties and still needlessly hit the AI endpoint.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`5aea93d`](https://github.com/eyaltoledano/claude-task-master/commit/5aea93d4c0490c242d7d7042a210611977848e0a) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fixes an issue that prevented remove-subtask with comma separated tasks/subtasks from being deleted (only the first ID was being deleted). Closes #140

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`66ac9ab`](https://github.com/eyaltoledano/claude-task-master/commit/66ac9ab9f66d006da518d6e8a3244e708af2764d) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improves next command to be subtask-aware - The logic for determining the "next task" (findNextTask function, used by task-master next and the next_task MCP tool) has been significantly improved. Previously, it only considered top-level tasks, making its recommendation less useful when a parent task containing subtasks was already marked 'in-progress'. - The updated logic now prioritizes finding the next available subtask within any 'in-progress' parent task, considering subtask dependencies and priority. - If no suitable subtask is found within active parent tasks, it falls back to recommending the next eligible top-level task based on the original criteria (status, dependencies, priority).

  This change makes the next command much more relevant and helpful during the implementation phase of complex tasks.

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ca7b045`](https://github.com/eyaltoledano/claude-task-master/commit/ca7b0457f1dc65fd9484e92527d9fd6d69db758d) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add `--status` flag to `show` command to filter displayed subtasks.

- [#328](https://github.com/eyaltoledano/claude-task-master/pull/328) [`5a2371b`](https://github.com/eyaltoledano/claude-task-master/commit/5a2371b7cc0c76f5e95d43921c1e8cc8081bf14e) Thanks [@knoxgraeme](https://github.com/knoxgraeme)! - Fix --task to --num-tasks in ui + related tests - issue #324

- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`6cb213e`](https://github.com/eyaltoledano/claude-task-master/commit/6cb213ebbd51116ae0688e35b575d09443d17c3b) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds a 'models' CLI and MCP command to get the current model configuration, available models, and gives the ability to set main/research/fallback models. - In the CLI, `task-master models` shows the current models config. Using the `--setup` flag launches an interactive setup that allows you to easily select the models you want to use for each of the three roles. Use `q` during the interactive setup to cancel the setup. - In the MCP, responses are simplified into a RESTful format (instead of the full CLI output). The agent can use the `models` tool with different arguments, including `listAvailableModels` to get available models. Run without arguments, it will return the current configuration. Arguments are available to set the model for each of the three roles. This allows you to manage Taskmaster AI providers and models directly from either the CLI or MCP or both. - Updated the CLI help menu when you run `task-master` to include missing commands and .taskmasterconfig information. - Adds `--research` flag to `add-task` so you can hit up Perplexity right from the add-task flow, rather than having to add a task and then update it.

## 0.12.1

### Patch Changes
@@ -47,7 +47,7 @@ npm install task-master-ai
task-master init

# If installed locally
-npx task-master-init
+npx task-master init
```

This will prompt you for project details and set up a new project with the necessary files and structure.
README.md (14 lines)
@@ -11,8 +11,20 @@ A task management system for AI-driven development with Claude, designed to work

## Requirements

Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.

You can define 3 types of models to be used: the main model, the research model, and the fallback model (in case either the main or research model fails). Whatever model you use, its provider API key must be present in either mcp.json or .env.

At least one (1) of the following is required:

- Anthropic API key (Claude API)
- OpenAI SDK (for Perplexity API integration, optional)
- OpenAI API key
- Google Gemini API key
- Perplexity API key (for research model)
- xAI API Key (for research or main model)
- OpenRouter API Key (for research or main model)

Using the research model is optional but highly recommended. You will need at least ONE API key. Adding all API keys enables you to seamlessly switch between model providers at will.

## Quick Start
@@ -14,8 +14,8 @@
   },
   "fallback": {
     "provider": "anthropic",
-    "modelId": "claude-3.5-sonnet-20240620",
-    "maxTokens": 120000,
+    "modelId": "claude-3-5-sonnet-20240620",
+    "maxTokens": 8192,
     "temperature": 0.1
   }
 },
@@ -198,7 +198,7 @@ alwaysApply: true
- **MAX_TOKENS** (Default: `"4000"`): Maximum tokens for responses (Example: `MAX_TOKENS=8000`)
- **TEMPERATURE** (Default: `"0.7"`): Temperature for model responses (Example: `TEMPERATURE=0.5`)
- **DEBUG** (Default: `"false"`): Enable debug logging (Example: `DEBUG=true`)
-- **LOG_LEVEL** (Default: `"info"`): Console output level (Example: `LOG_LEVEL=debug`)
+- **TASKMASTER_LOG_LEVEL** (Default: `"info"`): Console output level (Example: `TASKMASTER_LOG_LEVEL=debug`)
- **DEFAULT_SUBTASKS** (Default: `"3"`): Default subtask count (Example: `DEFAULT_SUBTASKS=5`)
- **DEFAULT_PRIORITY** (Default: `"medium"`): Default priority (Example: `DEFAULT_PRIORITY=high`)
- **PROJECT_NAME** (Default: `"MCP SaaS MVP"`): Project name in metadata (Example: `PROJECT_NAME=My Awesome Project`)
@@ -1,8 +1,8 @@
 # API Keys (Required to enable respective provider)
-ANTHROPIC_API_KEY=your_anthropic_api_key_here # Required: Format: sk-ant-api03-...
-PERPLEXITY_API_KEY=your_perplexity_api_key_here # Optional: Format: pplx-...
-OPENAI_API_KEY=your_openai_api_key_here # Optional, for OpenAI/OpenRouter models. Format: sk-proj-...
-GOOGLE_API_KEY=your_google_api_key_here # Optional, for Google Gemini models.
-MISTRAL_API_KEY=your_mistral_key_here # Optional, for Mistral AI models.
-XAI_API_KEY=YOUR_XAI_KEY_HERE # Optional, for xAI AI models.
-AZURE_OPENAI_API_KEY=your_azure_key_here # Optional, for Azure OpenAI models (requires endpoint in .taskmasterconfig).
+ANTHROPIC_API_KEY="your_anthropic_api_key_here" # Required: Format: sk-ant-api03-...
+PERPLEXITY_API_KEY="your_perplexity_api_key_here" # Optional: Format: pplx-...
+OPENAI_API_KEY="your_openai_api_key_here" # Optional, for OpenAI/OpenRouter models. Format: sk-proj-...
+GOOGLE_API_KEY="your_google_api_key_here" # Optional, for Google Gemini models.
+MISTRAL_API_KEY="your_mistral_key_here" # Optional, for Mistral AI models.
+XAI_API_KEY="YOUR_XAI_KEY_HERE" # Optional, for xAI AI models.
+AZURE_OPENAI_API_KEY="your_azure_key_here" # Optional, for Azure OpenAI models (requires endpoint in .taskmasterconfig).
@@ -31,7 +31,7 @@ Task Master configuration is now managed through two primary methods:

- Create a `.env` file in your project root for CLI usage.
- See `assets/env.example` for required key names.

**Important:** Settings like `MODEL`, `MAX_TOKENS`, `TEMPERATURE`, `LOG_LEVEL`, etc., are **no longer set via `.env`**. Use `task-master models --setup` instead.
**Important:** Settings like `MODEL`, `MAX_TOKENS`, `TEMPERATURE`, `TASKMASTER_LOG_LEVEL`, etc., are **no longer set via `.env`**. Use `task-master models --setup` instead.

## How It Works

@@ -200,7 +200,7 @@ Notes:

## Logging

The script supports different logging levels controlled by the `LOG_LEVEL` environment variable:
The script supports different logging levels controlled by the `TASKMASTER_LOG_LEVEL` environment variable:

- `debug`: Detailed information, typically useful for troubleshooting
- `info`: Confirmation that things are working as expected (default)

368
context/chats/max-min-tokens.txt.md
Normal file
@@ -0,0 +1,368 @@

We want to refine how Task Master handles AI model token limits to be more precise, by:

1. Distinguishing between `maxInputTokens` and `maxOutputTokens` in the configuration.
2. Dynamically adjusting the `maxOutputTokens` for an API call based on the actual prompt length, to stay within the model's total context window (or respecting separate input/output limits if the API and model support that).
3. Ensuring that `ai-services-unified.js` uses these more granular limits.

This is a good improvement for both cost control and preventing errors.

Here's a plan:

**Phase 1: Configuration and Core Logic Updates**

1. **Update `.taskmasterconfig` Structure:**
   - I'll modify the `models` section in your `.taskmasterconfig`. For each role (`main`, `research`, `fallback`), `maxTokens` will be replaced with `maxInputTokens` and `maxOutputTokens`.
   - We'll need to decide on reasonable default values for these new fields. We can look at the current `maxTokens` and the model's known limits to make an initial guess.
2. **Update `MODEL_MAP` in `ai-services-unified.js`:**
   - This array already stores cost data. We need to ensure it also stores the _absolute_ maximum input and output tokens for each model listed (e.g., `model_max_input_tokens`, `model_max_output_tokens`). If these fields are not present, they will need to be added. The values in `.taskmasterconfig` will then represent user-defined operational limits, which should ideally be validated against these absolute maximums.
3. **Update `config-manager.js`:**
   - Getter functions like `getParametersForRole` will be updated to fetch `maxInputTokens` and `maxOutputTokens` instead of the singular `maxTokens`.
   - New getters might be needed if we want to access the model's absolute limits directly from `MODEL_MAP` via `config-manager.js`.
4. **Update `ai-services-unified.js` (`_unifiedServiceRunner`):**
   - **Token Counting:** This is a crucial step. Before an API call, we need to estimate the token count of the combined `systemPrompt` and `userPrompt`.
     - The Vercel AI SDK or the individual provider SDKs might offer utilities for this. For example, some SDKs expose a `tokenizer` or a way to count tokens for a given string.
     - If a direct utility isn't available through the Vercel SDK for the specific provider, we might need to use a library like `tiktoken` for OpenAI/Anthropic models or investigate provider-specific tokenization. This could be complex, as tokenization varies between models.
     - For now, let's assume we can get a reasonable estimate.
   - **Dynamic Output Token Calculation & Validation:**
     - Retrieve `configured_max_input_tokens` and `configured_max_output_tokens` from `config-manager.js` for the current role.
     - Retrieve `model_absolute_max_input_tokens` and `model_absolute_max_output_tokens` from `MODEL_MAP`.
     - Let `prompt_tokens = countTokens(systemPrompt + userPrompt)`.
     - **Input Validation:** If `prompt_tokens > configured_max_input_tokens` OR `prompt_tokens > model_absolute_max_input_tokens`, throw an error. (We might also consider auto-truncation strategies in the future, but an error is safer to start.)
     - **Output Calculation:** The `max_tokens` parameter sent to the API (which typically refers to the _max number of tokens to generate_) should be `configured_max_output_tokens`. We also need to ensure this doesn't violate the model's absolute output limit.
     - Some models have a _total_ context window (input + output). If this is the case for a model, the `max_tokens` parameter for the API call might need to be `min(configured_max_output_tokens, model_absolute_total_tokens - prompt_tokens)`. However, many modern APIs handle separate input and output limits, simplifying this. We'll aim to pass `configured_max_output_tokens` as the API's `max_tokens` parameter, assuming the input validation has passed.
   - **API Call Parameter:** Update the `callParams` to use the correctly determined `max_tokens` for generation (which will be our `configured_max_output_tokens`).

**Phase 2: Error Handling and Refinements**

1. **Enhanced Error Handling:** Provide clear error messages if prompt tokens exceed input limits, or if API calls fail due to token issues despite our calculations.
2. **Validation:** Add validation (perhaps in `config-manager.js`, or when `task-master models --setup` is run) to ensure `configured_max_input_tokens` and `configured_max_output_tokens` in `.taskmasterconfig` do not exceed the `model_absolute_max_input_tokens` and `model_absolute_max_output_tokens` from `MODEL_MAP`.
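
To make that validation step concrete, here is a minimal sketch of such a check. It assumes `MODEL_MAP` entries carry absolute `maxInputTokens`/`maxOutputTokens` fields and that the configured per-role values come from `getParametersForRole`: both are planned changes described above, not current code.

```javascript
// Hypothetical helper (sketch only): reject configured role limits that
// exceed the model's absolute limits from MODEL_MAP.
function validateRoleTokenLimits(roleParams, modelInfo, role, modelId) {
	if (
		modelInfo?.maxInputTokens &&
		roleParams.maxInputTokens > modelInfo.maxInputTokens
	) {
		throw new Error(
			`Configured maxInputTokens (${roleParams.maxInputTokens}) for role '${role}' exceeds the absolute limit (${modelInfo.maxInputTokens}) of model ${modelId}.`
		);
	}
	if (
		modelInfo?.maxOutputTokens &&
		roleParams.maxOutputTokens > modelInfo.maxOutputTokens
	) {
		throw new Error(
			`Configured maxOutputTokens (${roleParams.maxOutputTokens}) for role '${role}' exceeds the absolute limit (${modelInfo.maxOutputTokens}) of model ${modelId}.`
		);
	}
}
```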

Let's start by inspecting `scripts/modules/config-manager.js` to see how parameters and `MODEL_MAP` are currently handled. This will inform the changes needed for `.taskmasterconfig` and `ai-services-unified.js`.

I will now read `scripts/modules/config-manager.js`.

Okay, I've reviewed `scripts/modules/config-manager.js`. Here's what I've gathered and how it impacts the plan:

- **`MODEL_MAP` Source:** `MODEL_MAP` is loaded from `supported-models.json`. This is excellent, as it means we can update the model capabilities (including new token limits) in a structured JSON file.
- **Configuration Loading:** `_loadAndValidateConfig` handles loading `.taskmasterconfig` and merging it with `DEFAULTS`.
- **Getters:** There are specific getters for `maxTokens` for each role (e.g., `getMainMaxTokens`). We will need to update these.
- **`getParametersForRole`:** This function already exists and is a good candidate for returning the new `maxInputTokens` and `maxOutputTokens`.

Here's the refined plan:

**Phase 1: Configuration and Data Structures**

1. **Update `supported-models.json`:**

   - For each model entry in `supported-models.json`, we need to add `maxInputTokens` and `maxOutputTokens` fields. These should reflect the _absolute maximums_ for that specific model.
   - Example entry change:

   ```json
   // Before
   {
   	"id": "claude-3-7-sonnet-20250219",
   	"name": "Claude 3.7 Sonnet (Preview)",
   	"context_window": 200000, // This might be total context
   	"cost_per_1m_tokens": { "input": 3, "output": 15, "currency": "USD" }
   }

   // After (assuming 200k is total, and we make a reasonable split or find official numbers)
   {
   	"id": "claude-3-7-sonnet-20250219",
   	"name": "Claude 3.7 Sonnet (Preview)",
   	"maxInputTokens": 190000, // Example
   	"maxOutputTokens": 8192, // Example, often smaller for generation
   	"cost_per_1m_tokens": { "input": 3, "output": 15, "currency": "USD" }
   }
   ```

   - I will need to find the official input/output token limits for the models currently in your `MODEL_MAP`. If you have this information handy, it would speed things up. Otherwise, I can make educated guesses or search for them.

2. **Update `.taskmasterconfig` (Defaults and User File):**
   - In `scripts/modules/config-manager.js`, modify the `DEFAULTS` object. For each role (`main`, `research`, `fallback`), replace `maxTokens` with:
     - `maxInputTokens`: A sensible default (e.g., a large portion of the model's capability, but user-configurable).
     - `maxOutputTokens`: A sensible default for generation (e.g., 4096 or 8192).
   - You will then need to manually update your existing `.taskmasterconfig` file to reflect this new structure. I can provide the snippet for you to paste (see the sketch after this list).
3. **Update `config-manager.js`:**
   - Modify `getParametersForRole(role, explicitRoot = null)`:
     - It currently fetches `maxTokens` and `temperature`.
     - Update it to fetch `maxInputTokens`, `maxOutputTokens`, and `temperature` from the loaded config for the given role.
   - Remove the role-specific `getMaxTokens` functions (e.g., `getMainMaxTokens`, `getResearchMaxTokens`, `getFallbackMaxTokens`). `getParametersForRole` will be the central way to get these.
   - (Optional, for later validation) Consider adding a new function `getModelCapabilities(providerName, modelId)` that reads from `MODEL_MAP` to return the absolute `maxInputTokens` and `maxOutputTokens` for a given model. This would be useful for validating the user's settings in `.taskmasterconfig`.
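
As a reference for that manual edit, here is a hedged sketch of what one role in `.taskmasterconfig` might look like after the change. The field names follow the plan above; the numbers are placeholders, not recommendations.

```json
{
	"models": {
		"main": {
			"provider": "anthropic",
			"modelId": "claude-3-7-sonnet-20250219",
			"maxInputTokens": 190000,
			"maxOutputTokens": 8192,
			"temperature": 0.2
		}
	}
}
```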

**Phase 2: Core Logic in `ai-services-unified.js`**

1. **Token Counting (`_unifiedServiceRunner`):**

   - This is the most complex part. We need a reliable way to count tokens for the prompts.
   - **Strategy 1 (Ideal):** Leverage the Vercel AI SDK. The SDK might provide a way to get a tokenizer for the active model or a utility function. We'll need to investigate its capabilities.
   - **Strategy 2 (Fallback):** Use a library like `tiktoken` for models compatible with OpenAI's tokenization (many are, including some Anthropic models). For other models, we might need provider-specific tokenizers or make estimations (less ideal).
   - **Initial Approach:** Let's try to find a Vercel AI SDK utility first. If not, we'll start with `tiktoken` as a common case and acknowledge that other models might need specific handling later.
   - The function `_unifiedServiceRunner` will call this token counting utility:

   ```javascript
   // Placeholder for token counting
   function countTokens(text, modelId /* or providerName */) {
   	// ... implementation using SDK or library ...
   	// This is a simplification; actual tokenization is model-specific.
   	// For now, a rough estimate might be (text.length / 4) as a placeholder
   	// if a proper tokenizer isn't immediately available.
   	if (!text) return 0;
   	return Math.ceil(text.length / 3.5); // Very rough general estimate
   }

   const promptTokens = countTokens(systemPrompt) + countTokens(prompt);
   ```

2. **Dynamic Output Token Calculation & Validation (`_unifiedServiceRunner`):**

   - Fetch parameters using the updated `getParametersForRole` from `config-manager.js`:

   ```javascript
   const roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
   // roleParams will now contain { maxInputTokens, maxOutputTokens, temperature }
   ```

   - Fetch model's absolute limits from `MODEL_MAP` (we'll need a helper in `config-manager.js` or do it directly in `ai-services-unified.js` for now):

   ```javascript
   // Simplified; ideally use a robust getter from config-manager
   const modelInfo = MODEL_MAP[providerName?.toLowerCase()]?.find(
   	(m) => m.id === modelId
   );
   const modelAbsoluteMaxInput = modelInfo?.maxInputTokens || Infinity;
   const modelAbsoluteMaxOutput =
   	modelInfo?.maxOutputTokens || roleParams.maxOutputTokens; // Fallback to configured if not in map
   ```

   - **Input Validation:**

   ```javascript
   if (promptTokens > roleParams.maxInputTokens) {
   	throw new Error(
   		`Prompt (${promptTokens} tokens) exceeds configured max input tokens (${roleParams.maxInputTokens}) for role '${currentRole}'.`
   	);
   }
   if (promptTokens > modelAbsoluteMaxInput) {
   	throw new Error(
   		`Prompt (${promptTokens} tokens) exceeds model's absolute max input tokens (${modelAbsoluteMaxInput}) for ${modelId}.`
   	);
   }
   ```

   - **API Call Parameter (`max_tokens` for generation):** The `max_tokens` parameter passed to the provider's API call should be `roleParams.maxOutputTokens`. We must also ensure this doesn't exceed the model's absolute output capability.

   ```javascript
   const apiMaxOutputTokens = Math.min(
   	roleParams.maxOutputTokens,
   	modelAbsoluteMaxOutput
   );

   const callParams = {
   	apiKey,
   	modelId,
   	// Use apiMaxOutputTokens for the actual API call
   	maxTokens: apiMaxOutputTokens, // This usually means max tokens to GENERATE
   	temperature: roleParams.temperature,
   	messages,
   	baseUrl,
   	...(serviceType === 'generateObject' && { schema, objectName }),
   	...restApiParams
   };
   ```

3. **Update `logAiUsage`:**
   - The cost calculation in `logAiUsage` seems correct as it already uses `inputCost` and `outputCost` per 1M tokens. This part should remain compatible.
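
For clarity, the cost arithmetic this relies on is just the following (illustrative variable names, not the actual `logAiUsage` signature):

```javascript
// Cost in USD, given per-1M-token rates like those in supported-models.json.
const totalCost =
	(inputTokens / 1_000_000) * inputCostPer1M +
	(outputTokens / 1_000_000) * outputCostPer1M;
```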

**Next Steps:**

1. **Modify `supported-models.json`:** I need the content of this file to propose the new `maxInputTokens` and `maxOutputTokens` fields. Please provide it.
2. **Decide on Tokenizer Strategy:** What are your thoughts on the token counting? Are you aware of any utilities within the Vercel AI SDK for this, or should we plan to integrate `tiktoken` initially?

Once I have the content of `supported-models.json`, I can draft the changes for it and then proceed to `config-manager.js` and `.taskmasterconfig`.
Okay, I have the contents of `supported-models.json`.

It seems many models have a `max_tokens` field, which I assume is the _total context window_. Some don't have it, and for others (like OpenAI `gpt-4o`), the listed `max_tokens` (16384) is known to be its _output_ token limit when using the Chat Completions API, while the context window is much larger (128k). This highlights the complexity: `max_tokens` in `supported-models.json` is used inconsistently.

**Revised Plan for `supported-models.json` and Token Definitions:**

To bring clarity, we'll introduce two new fields and ensure their meaning is consistent:

- `contextWindowTokens`: The total number of tokens the model can process (input + output). This would replace the current ambiguous `max_tokens`.
- `maxOutputTokens`: The maximum number of tokens the model can _generate_ in a single response. This is often smaller than the total context window, especially for larger models.

If a model _only_ specifies a total context window, we'll have to make a reasonable assumption for `maxOutputTokens` (e.g., 4096 or 8192, or a fraction of the total context window). If it only specifies an output token limit (like some OpenAI models in certain API modes), we'll need to find its total context window.
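
A minimal sketch of that fallback derivation; the 8192 cap and the quarter-of-the-window heuristic are assumptions taken from the paragraph above, not settled values:

```javascript
// Derive a usable output limit when a model entry only provides some fields.
function deriveMaxOutputTokens(model) {
	if (model.maxOutputTokens) return model.maxOutputTokens;
	if (model.contextWindowTokens) {
		// Assumption: cap generation at 8192 or a quarter of the window,
		// whichever is smaller.
		return Math.min(8192, Math.floor(model.contextWindowTokens / 4));
	}
	return 4096; // Last-resort default, per the discussion above.
}
```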

**Updated `supported-models.json` Structure (Example):**

```json
// For a model like Anthropic Claude 3.7 Sonnet (Preview)
{
	"id": "claude-3-7-sonnet-20250219",
	"swe_score": 0.623,
	"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
	"allowed_roles": ["main", "fallback"],
	"contextWindowTokens": 200000, // From Anthropic's documentation
	"maxOutputTokens": 8192 // Anthropic default, user can override in .taskmasterconfig
}

// For a model like OpenAI GPT-4o
{
	"id": "gpt-4o",
	"swe_score": 0.332,
	"cost_per_1m_tokens": { "input": 2.5, "output": 10.0 },
	"allowed_roles": ["main", "fallback"],
	"contextWindowTokens": 128000, // Official context window
	"maxOutputTokens": 16384 // Max output for chat completions is 16k for gpt-4o (used to be 4k/8k for older gpt-4)
	// but the model card mentions 128k total. The API call parameter for `max_tokens` sets this output cap.
}
```

I will proceed to generate the updated `supported-models.json` content. This will be a significant change. I will make my best effort to find the correct `contextWindowTokens` and `maxOutputTokens` for each model. If official numbers are ambiguous or not readily available, I'll use sensible defaults and add a comment.

**Regarding Tokenizer Strategy:**

- **Vercel AI SDK:** The `ai` package (Vercel AI SDK) itself is a lightweight wrapper. Tokenization is usually handled by the underlying provider-specific SDKs (e.g., `@anthropic-ai/sdk`, `openai`). The Vercel SDK doesn't provide a universal tokenizer.
- **Provider SDKs:**
  - `@anthropic-ai/sdk`: Does not expose a public tokenizer directly. Anthropic recommends counting based on their guidelines (roughly 3.5 chars/token for English).
  - `openai` (for Node.js): The `gpt-3-encoder` or `tiktoken` libraries are commonly used. `tiktoken` is more current.
- **`tiktoken`:** This is a good general-purpose library for OpenAI models and those that use compatible tokenization (many models fine-tuned from Llama, etc., might be close). It supports various encodings.
- **Recommendation:**
  1. Start by integrating `tiktoken`. We can select the encoding based on the provider or even specific model if needed (e.g., `cl100k_base` for gpt-4, gpt-3.5-turbo, DALL-E models).
  2. For providers where `tiktoken` is not accurate (e.g., Google's Gemini, Perplexity if they have proprietary tokenizers), we might have to rely on character-based estimations (e.g., `length / N`) as a rougher guide or investigate if their specific SDKs offer token counting.
  3. The Vercel AI SDK's `experimental_streamObject` and other functions _might_ internally handle some aspects of token counting for request preparation, but we need explicit counting for our logic _before_ the call.

For now, let's assume we'll add a `countTokens` utility that uses `tiktoken` for OpenAI and Anthropic (as Anthropic's guidance is character-based, but `cl100k_base` is often a decent proxy for their tokenization for rough estimates, though not perfect), and a character-based fallback for others.
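
A hedged sketch of that utility, assuming the `tiktoken` npm package (its `get_encoding`/`encode`/`free` API) is added as a dependency; the provider routing and the 3.5-characters-per-token fallback are the assumptions stated above:

```javascript
import { get_encoding } from 'tiktoken';

// Sketch: cl100k_base for OpenAI and (approximately) Anthropic;
// character-based estimate for everything else.
function countTokens(text, providerName) {
	if (!text) return 0;
	const provider = providerName?.toLowerCase();
	if (provider === 'openai' || provider === 'anthropic') {
		const encoder = get_encoding('cl100k_base');
		try {
			return encoder.encode(text).length;
		} finally {
			encoder.free(); // tiktoken encoders hold WASM memory; release it.
		}
	}
	return Math.ceil(text.length / 3.5); // Rough general estimate.
}
```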

I will now generate the proposed changes for `scripts/modules/supported-models.json`. This will be a large diff.

After this, we'll move to:

1. Updating `.taskmasterconfig` defaults in `config-manager.js` and providing you the snippet for your user file.
2. Updating `config-manager.js` getters.
3. Implementing the logic in `ai-services-unified.js`.
@@ -15,13 +15,15 @@ Taskmaster uses two primary methods for configuration:
	"provider": "anthropic",
	"modelId": "claude-3-7-sonnet-20250219",
	"maxTokens": 64000,
	"temperature": 0.2
	"temperature": 0.2,
	"baseUrl": "https://api.anthropic.com/v1"
},
"research": {
	"provider": "perplexity",
	"modelId": "sonar-pro",
	"maxTokens": 8700,
	"temperature": 0.1
	"temperature": 0.1,
	"baseUrl": "https://api.perplexity.ai/v1"
},
"fallback": {
	"provider": "anthropic",
@@ -56,8 +58,9 @@ Taskmaster uses two primary methods for configuration:

- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your xAI API key.
- **Optional Endpoint Overrides (in .taskmasterconfig):**
  - `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key.
- **Optional Endpoint Overrides:**
  - **Per-role `baseUrl` in `.taskmasterconfig`:** You can add a `baseUrl` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
  - `AZURE_OPENAI_ENDPOINT`: Required if using an Azure OpenAI key (can also be set as `baseUrl` for the Azure model role).
  - `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).

**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmasterconfig`**, not environment variables.

@@ -89,7 +89,7 @@ Initialize a new project:

task-master init

# If installed locally
npx task-master-init
npx task-master init
```

This will prompt you for project details and set up a new project with the necessary files and structure.

@@ -94,6 +94,7 @@ export async function addTaskDirect(args, log, context = {}) {

	let manualTaskData = null;
	let newTaskId;
	let telemetryData;

	if (isManualCreation) {
		// Create manual task data object
@@ -109,7 +110,7 @@
		);

		// Call the addTask function with manual task data
		newTaskId = await addTask(
		const result = await addTask(
			tasksPath,
			null, // prompt is null for manual creation
			taskDependencies,
@@ -117,13 +118,17 @@
			{
				session,
				mcpLog,
				projectRoot
				projectRoot,
				commandName: 'add-task',
				outputType: 'mcp'
			},
			'json', // outputFormat
			manualTaskData, // Pass the manual task data
			false, // research flag is false for manual creation
			projectRoot // Pass projectRoot
		);
		newTaskId = result.newTaskId;
		telemetryData = result.telemetryData;
	} else {
		// AI-driven task creation
		log.info(
@@ -131,7 +136,7 @@
		);

		// Call the addTask function, passing the research flag
		newTaskId = await addTask(
		const result = await addTask(
			tasksPath,
			prompt, // Use the prompt for AI creation
			taskDependencies,
@@ -139,12 +144,16 @@
			{
				session,
				mcpLog,
				projectRoot
				projectRoot,
				commandName: 'add-task',
				outputType: 'mcp'
			},
			'json', // outputFormat
			null, // manualTaskData is null for AI creation
			research // Pass the research flag
		);
		newTaskId = result.newTaskId;
		telemetryData = result.telemetryData;
	}

	// Restore normal logging
@@ -154,7 +163,8 @@
			success: true,
			data: {
				taskId: newTaskId,
				message: `Successfully added new task #${newTaskId}`
				message: `Successfully added new task #${newTaskId}`,
				telemetryData: telemetryData
			}
		};
	} catch (error) {

@@ -79,17 +79,19 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
	}

	let report;
	let coreResult;

	try {
		// --- Call Core Function (Pass context separately) ---
		// Pass coreOptions as the first argument
		// Pass context object { session, mcpLog } as the second argument
		report = await analyzeTaskComplexity(
			coreOptions, // Pass options object
			{ session, mcpLog: logWrapper } // Pass context object
			// Removed the explicit 'json' format argument, assuming context handling is sufficient
			// If issues persist, we might need to add an explicit format param to analyzeTaskComplexity
		);
		coreResult = await analyzeTaskComplexity(coreOptions, {
			session,
			mcpLog: logWrapper,
			commandName: 'analyze-complexity',
			outputType: 'mcp'
		});
		report = coreResult.report;
	} catch (error) {
		log.error(
			`Error in analyzeTaskComplexity core function: ${error.message}`
@@ -125,8 +127,11 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
		};
	}

	// Added a check to ensure report is defined before accessing its properties
	if (!report || typeof report !== 'object') {
	if (
		!coreResult ||
		!coreResult.report ||
		typeof coreResult.report !== 'object'
	) {
		log.error(
			'Core analysis function returned an invalid or undefined response.'
		);
@@ -141,8 +146,8 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {

	try {
		// Ensure complexityAnalysis exists and is an array
		const analysisArray = Array.isArray(report.complexityAnalysis)
			? report.complexityAnalysis
		const analysisArray = Array.isArray(coreResult.report.complexityAnalysis)
			? coreResult.report.complexityAnalysis
			: [];

		// Count tasks by complexity (remains the same)
@@ -159,15 +164,16 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
		return {
			success: true,
			data: {
				message: `Task complexity analysis complete. Report saved to ${outputPath}`, // Use outputPath from args
				reportPath: outputPath, // Use outputPath from args
				message: `Task complexity analysis complete. Report saved to ${outputPath}`,
				reportPath: outputPath,
				reportSummary: {
					taskCount: analysisArray.length,
					highComplexityTasks,
					mediumComplexityTasks,
					lowComplexityTasks
				},
				fullReport: report // Now includes the full report
				fullReport: coreResult.report,
				telemetryData: coreResult.telemetryData
			}
		};
	} catch (parseError) {

@@ -8,7 +8,6 @@ import {
	enableSilentMode,
	disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { getCachedOrExecute } from '../../tools/utils.js';

/**
 * Direct function wrapper for displaying the complexity report with error handling and caching.
@@ -86,30 +85,20 @@ export async function complexityReportDirect(args, log) {

	// Use the caching utility
	try {
		const result = await getCachedOrExecute({
			cacheKey,
			actionFn: coreActionFn,
			log
		});
		log.info(
			`complexityReportDirect completed. From cache: ${result.fromCache}`
		);
		return result; // Returns { success, data/error, fromCache }
		const result = await coreActionFn();
		log.info('complexityReportDirect completed');
		return result;
	} catch (error) {
		// Catch unexpected errors from getCachedOrExecute itself
		// Ensure silent mode is disabled
		disableSilentMode();

		log.error(
			`Unexpected error during getCachedOrExecute for complexityReport: ${error.message}`
		);
		log.error(`Unexpected error during complexityReport: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'UNEXPECTED_ERROR',
				message: error.message
			},
			fromCache: false
		}
		};
	}
} catch (error) {

@@ -63,12 +63,18 @@ export async function expandAllTasksDirect(args, log, context = {}) {
			{ session, mcpLog, projectRoot }
		);

		// Core function now returns a summary object
		// Core function now returns a summary object including the *aggregated* telemetryData
		return {
			success: true,
			data: {
				message: `Expand all operation completed. Expanded: ${result.expandedCount}, Failed: ${result.failedCount}, Skipped: ${result.skippedCount}`,
				details: result // Include the full result details
				details: {
					expandedCount: result.expandedCount,
					failedCount: result.failedCount,
					skippedCount: result.skippedCount,
					tasksToExpand: result.tasksToExpand
				},
				telemetryData: result.telemetryData // Pass the aggregated object
			}
		};
	} catch (error) {

@@ -193,13 +193,19 @@ export async function expandTaskDirect(args, log, context = {}) {
		if (!wasSilent) enableSilentMode();

		// Call the core expandTask function with the wrapped logger and projectRoot
		const updatedTaskResult = await expandTask(
		const coreResult = await expandTask(
			tasksPath,
			taskId,
			numSubtasks,
			useResearch,
			additionalContext,
			{ mcpLog, session, projectRoot },
			{
				mcpLog,
				session,
				projectRoot,
				commandName: 'expand-task',
				outputType: 'mcp'
			},
			forceFlag
		);

@@ -215,16 +221,17 @@
			? updatedTask.subtasks.length - subtasksCountBefore
			: 0;

		// Return the result
		// Return the result, including telemetryData
		log.info(
			`Successfully expanded task ${taskId} with ${subtasksAdded} new subtasks`
		);
		return {
			success: true,
			data: {
				task: updatedTask,
				task: coreResult.task,
				subtasksAdded,
				hasExistingSubtasks
				hasExistingSubtasks,
				telemetryData: coreResult.telemetryData
			},
			fromCache: false
		};

@@ -4,7 +4,6 @@
 */

import { listTasks } from '../../../../scripts/modules/task-manager.js';
import { getCachedOrExecute } from '../../tools/utils.js';
import {
	enableSilentMode,
	disableSilentMode
@@ -19,7 +18,7 @@ import {
 */
export async function listTasksDirect(args, log) {
	// Destructure the explicit tasksJsonPath from args
	const { tasksJsonPath, status, withSubtasks } = args;
	const { tasksJsonPath, reportPath, status, withSubtasks } = args;

	if (!tasksJsonPath) {
		log.error('listTasksDirect called without tasksJsonPath');
@@ -36,7 +35,6 @@ export async function listTasksDirect(args, log) {
	// Use the explicit tasksJsonPath for cache key
	const statusFilter = status || 'all';
	const withSubtasksFilter = withSubtasks || false;
	const cacheKey = `listTasks:${tasksJsonPath}:${statusFilter}:${withSubtasksFilter}`;

	// Define the action function to be executed on cache miss
	const coreListTasksAction = async () => {
@@ -51,6 +49,7 @@ export async function listTasksDirect(args, log) {
			const resultData = listTasks(
				tasksJsonPath,
				statusFilter,
				reportPath,
				withSubtasksFilter,
				'json'
			);
@@ -65,6 +64,7 @@ export async function listTasksDirect(args, log) {
				}
			};
		}

		log.info(
			`Core listTasks function retrieved ${resultData.tasks.length} tasks`
		);
@@ -88,25 +88,19 @@ export async function listTasksDirect(args, log) {
		}
	};

	// Use the caching utility
	try {
		const result = await getCachedOrExecute({
			cacheKey,
			actionFn: coreListTasksAction,
			log
		});
		log.info(`listTasksDirect completed. From cache: ${result.fromCache}`);
		return result; // Returns { success, data/error, fromCache }
		const result = await coreListTasksAction();
		log.info('listTasksDirect completed');
		return result;
	} catch (error) {
		// Catch unexpected errors from getCachedOrExecute itself (though unlikely)
		log.error(
			`Unexpected error during getCachedOrExecute for listTasks: ${error.message}`
		);
		log.error(`Unexpected error during listTasks: ${error.message}`);
		console.error(error.stack);
		return {
			success: false,
			error: { code: 'CACHE_UTIL_ERROR', message: error.message },
			fromCache: false
			error: {
				code: 'UNEXPECTED_ERROR',
				message: error.message
			}
		};
	}
}

@@ -4,8 +4,10 @@
 */

import { findNextTask } from '../../../../scripts/modules/task-manager.js';
import { readJSON } from '../../../../scripts/modules/utils.js';
import { getCachedOrExecute } from '../../tools/utils.js';
import {
	readJSON,
	readComplexityReport
} from '../../../../scripts/modules/utils.js';
import {
	enableSilentMode,
	disableSilentMode
@@ -21,7 +23,7 @@ import {
 */
export async function nextTaskDirect(args, log) {
	// Destructure expected args
	const { tasksJsonPath } = args;
	const { tasksJsonPath, reportPath } = args;

	if (!tasksJsonPath) {
		log.error('nextTaskDirect called without tasksJsonPath');
@@ -35,9 +37,6 @@ export async function nextTaskDirect(args, log) {
		};
	}

	// Generate cache key using the provided task path
	const cacheKey = `nextTask:${tasksJsonPath}`;

	// Define the action function to be executed on cache miss
	const coreNextTaskAction = async () => {
		try {
@@ -59,8 +58,11 @@
			};
		}

		// Read the complexity report
		const complexityReport = readComplexityReport(reportPath);

		// Find the next task
		const nextTask = findNextTask(data.tasks);
		const nextTask = findNextTask(data.tasks, complexityReport);

		if (!nextTask) {
			log.info(
@@ -71,24 +73,34 @@
				data: {
					message:
						'No eligible next task found. All tasks are either completed or have unsatisfied dependencies',
					nextTask: null,
					allTasks: data.tasks
					nextTask: null
				}
			};
		}

		// Check if it's a subtask
		const isSubtask =
			typeof nextTask.id === 'string' && nextTask.id.includes('.');

		const taskOrSubtask = isSubtask ? 'subtask' : 'task';

		const additionalAdvice = isSubtask
			? 'Subtasks can be updated with timestamped details as you implement them. This is useful for tracking progress, marking milestones and insights (of successful or successive failures in attempting to implement the subtask). Research can be used when updating the subtask to collect up-to-date information, and can be helpful to solve a repeating problem the agent is unable to solve. It is a good idea to get-task the parent task to collect the overall context of the task, and to get-task the subtask to collect the specific details of the subtask.'
			: 'Tasks can be updated to reflect a change in the direction of the task, or to reformulate the task per your prompt. Research can be used when updating the task to collect up-to-date information. It is best to update subtasks as you work on them, and to update the task for more high-level changes that may affect pending subtasks or the general direction of the task.';

		// Restore normal logging
		disableSilentMode();

		// Return the next task data with the full tasks array for reference
		log.info(
			`Successfully found next task ${nextTask.id}: ${nextTask.title}`
			`Successfully found next task ${nextTask.id}: ${nextTask.title}. Is subtask: ${isSubtask}`
		);
		return {
			success: true,
			data: {
				nextTask,
				allTasks: data.tasks
				isSubtask,
				nextSteps: `When ready to work on the ${taskOrSubtask}, use set-status to set the status to "in progress" ${additionalAdvice}`
			}
		};
	} catch (error) {
@@ -108,18 +120,11 @@ export async function nextTaskDirect(args, log) {

	// Use the caching utility
	try {
		const result = await getCachedOrExecute({
			cacheKey,
			actionFn: coreNextTaskAction,
			log
		});
		log.info(`nextTaskDirect completed. From cache: ${result.fromCache}`);
		return result; // Returns { success, data/error, fromCache }
		const result = await coreNextTaskAction();
		log.info(`nextTaskDirect completed.`);
		return result;
	} catch (error) {
		// Catch unexpected errors from getCachedOrExecute itself
		log.error(
			`Unexpected error during getCachedOrExecute for nextTask: ${error.message}`
		);
		log.error(`Unexpected error during nextTask: ${error.message}`);
		return {
			success: false,
			error: {

@@ -34,18 +34,17 @@ export async function parsePRDDirect(args, log, context = {}) {
		projectRoot
	} = args;

	// Create the standard logger wrapper
	const logWrapper = createLogWrapper(log);

	// --- Input Validation and Path Resolution ---
	if (!projectRoot || !path.isAbsolute(projectRoot)) {
		logWrapper.error(
			'parsePRDDirect requires an absolute projectRoot argument.'
		);
	if (!projectRoot) {
		logWrapper.error('parsePRDDirect requires a projectRoot argument.');
		return {
			success: false,
			error: {
				code: 'MISSING_ARGUMENT',
				message: 'projectRoot is required and must be absolute.'
				message: 'projectRoot is required.'
			}
		};
	}
@@ -57,7 +56,7 @@
		};
	}

	// Resolve input and output paths relative to projectRoot if they aren't absolute
	// Resolve input and output paths relative to projectRoot
	const inputPath = path.resolve(projectRoot, inputArg);
	const outputPath = outputArg
		? path.resolve(projectRoot, outputArg)
@@ -101,16 +100,14 @@
			// Ensure positive number
			numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
			logWrapper.warn(
				`Invalid numTasks value: ${numTasksArg}. Using default: 10`
				`Invalid numTasks value: ${numTasksArg}. Using default: ${numTasks}`
			);
		}
	}

	const useForce = force === true;
	const useAppend = append === true;
	if (useAppend) {
	if (append) {
		logWrapper.info('Append mode enabled.');
		if (useForce) {
		if (force) {
			logWrapper.warn(
				'Both --force and --append flags were provided. --force takes precedence; append mode will be ignored.'
			);
@@ -118,7 +115,7 @@
	}

	logWrapper.info(
		`Parsing PRD via direct function. Input: ${inputPath}, Output: ${outputPath}, NumTasks: ${numTasks}, Force: ${useForce}, Append: ${useAppend}, ProjectRoot: ${projectRoot}`
		`Parsing PRD via direct function. Input: ${inputPath}, Output: ${outputPath}, NumTasks: ${numTasks}, Force: ${force}, Append: ${append}, ProjectRoot: ${projectRoot}`
	);

	const wasSilent = isSilentMode();
@@ -132,22 +129,28 @@
			inputPath,
			outputPath,
			numTasks,
			{ session, mcpLog: logWrapper, projectRoot, useForce, useAppend },
			{
				session,
				mcpLog: logWrapper,
				projectRoot,
				force,
				append,
				commandName: 'parse-prd',
				outputType: 'mcp'
			},
			'json'
		);

		// parsePRD returns { success: true, tasks: processedTasks } on success
		if (result && result.success && Array.isArray(result.tasks)) {
			logWrapper.success(
				`Successfully parsed PRD. Generated ${result.tasks.length} tasks.`
			);
		// Adjust check for the new return structure
		if (result && result.success) {
			const successMsg = `Successfully parsed PRD and generated tasks in ${result.tasksPath}`;
			logWrapper.success(successMsg);
			return {
				success: true,
				data: {
					message: `Successfully parsed PRD and generated ${result.tasks.length} tasks.`,
					outputPath: outputPath,
					taskCount: result.tasks.length
					// Optionally include tasks if needed by client: tasks: result.tasks
					message: successMsg,
					outputPath: result.tasksPath,
					telemetryData: result.telemetryData
				}
			};
		} else {

@@ -3,11 +3,10 @@
 * Direct function implementation for showing task details
 */

import { findTaskById, readJSON } from '../../../../scripts/modules/utils.js';
import { getCachedOrExecute } from '../../tools/utils.js';
import {
	enableSilentMode,
	disableSilentMode
	findTaskById,
	readComplexityReport,
	readJSON
} from '../../../../scripts/modules/utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';

@@ -17,6 +16,7 @@ import { findTasksJsonPath } from '../utils/path-utils.js';
 * @param {Object} args - Command arguments.
 * @param {string} args.id - Task ID to show.
 * @param {string} [args.file] - Optional path to the tasks file (passed to findTasksJsonPath).
 * @param {string} args.reportPath - Explicit path to the complexity report file.
 * @param {string} [args.status] - Optional status to filter subtasks by.
 * @param {string} args.projectRoot - Absolute path to the project root directory (already normalized by tool).
 * @param {Object} log - Logger object.
@@ -27,7 +27,7 @@ export async function showTaskDirect(args, log) {
	// Destructure session from context if needed later, otherwise ignore
	// const { session } = context;
	// Destructure projectRoot and other args. projectRoot is assumed normalized.
	const { id, file, status, projectRoot } = args;
	const { id, file, reportPath, status, projectRoot } = args;

	log.info(
		`Showing task direct function. ID: ${id}, File: ${file}, Status Filter: ${status}, ProjectRoot: ${projectRoot}`
@@ -64,9 +64,12 @@ export async function showTaskDirect(args, log) {
		};
	}

	const complexityReport = readComplexityReport(reportPath);

	const { task, originalSubtaskCount } = findTaskById(
		tasksData.tasks,
		id,
		complexityReport,
		status
	);

@@ -108,18 +108,24 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {

	try {
		// Execute core updateSubtaskById function
		const updatedSubtask = await updateSubtaskById(
		const coreResult = await updateSubtaskById(
			tasksPath,
			subtaskIdStr,
			prompt,
			useResearch,
			{ mcpLog: logWrapper, session, projectRoot },
			{
				mcpLog: logWrapper,
				session,
				projectRoot,
				commandName: 'update-subtask',
				outputType: 'mcp'
			},
			'json'
		);

		if (updatedSubtask === null) {
		if (!coreResult || coreResult.updatedSubtask === null) {
			const message = `Subtask ${id} or its parent task not found.`;
			logWrapper.error(message); // Log as error since it couldn't be found
			logWrapper.error(message);
			return {
				success: false,
				error: { code: 'SUBTASK_NOT_FOUND', message: message },
@@ -136,9 +142,10 @@
				message: `Successfully updated subtask with ID ${subtaskIdStr}`,
				subtaskId: subtaskIdStr,
				parentId: subtaskIdStr.split('.')[0],
				subtask: updatedSubtask,
				subtask: coreResult.updatedSubtask,
				tasksPath,
				useResearch
				useResearch,
				telemetryData: coreResult.telemetryData
			},
			fromCache: false
		};

@@ -110,7 +110,7 @@ export async function updateTaskByIdDirect(args, log, context = {}) {

	try {
		// Execute core updateTaskById function with proper parameters
		const updatedTask = await updateTaskById(
		const coreResult = await updateTaskById(
			tasksPath,
			taskId,
			prompt,
@@ -118,19 +118,26 @@
			{
				mcpLog: logWrapper,
				session,
				projectRoot
				projectRoot,
				commandName: 'update-task',
				outputType: 'mcp'
			},
			'json'
		);

		// Check if the core function indicated the task wasn't updated (e.g., status was 'done')
		if (updatedTask === null) {
		// Check if the core function returned null or an object without success
		if (!coreResult || coreResult.updatedTask === null) {
			// Core function logs the reason, just return success with info
			const message = `Task ${taskId} was not updated (likely already completed).`;
			logWrapper.info(message);
			return {
				success: true,
				data: { message: message, taskId: taskId, updated: false },
				data: {
					message: message,
					taskId: taskId,
					updated: false,
					telemetryData: coreResult?.telemetryData
				},
				fromCache: false
			};
		}
@@ -146,7 +153,8 @@
				tasksPath: tasksPath,
				useResearch: useResearch,
				updated: true,
				updatedTask: updatedTask
				updatedTask: coreResult.updatedTask,
				telemetryData: coreResult.telemetryData
			},
			fromCache: false
		};

@@ -1,130 +1,126 @@
|
||||
/**
|
||||
* update-tasks.js
|
||||
* Direct function implementation for updating tasks based on new context/prompt
|
||||
* Direct function implementation for updating tasks based on new context
|
||||
*/
|
||||
|
||||
import path from 'path';
|
||||
import { updateTasks } from '../../../../scripts/modules/task-manager.js';
|
||||
import { createLogWrapper } from '../../tools/utils.js';
|
||||
import {
|
||||
enableSilentMode,
|
||||
disableSilentMode
|
||||
} from '../../../../scripts/modules/utils.js';
|
||||
import { createLogWrapper } from '../../tools/utils.js';
|
||||
|
||||
/**
|
||||
* Direct function wrapper for updating tasks based on new context/prompt.
|
||||
* Direct function wrapper for updating tasks based on new context.
|
||||
*
|
||||
 * @param {Object} args - Command arguments containing from, prompt, research and tasksJsonPath.
 * @param {Object} args - Command arguments containing projectRoot, from, prompt, research options.
 * @param {Object} log - Logger object.
 * @param {Object} context - Context object containing session data.
 * @returns {Promise<Object>} - Result object with success status and data/error information.
 */
export async function updateTasksDirect(args, log, context = {}) {
	const { session } = context; // Extract session
	const { tasksJsonPath, from, prompt, research, projectRoot } = args;
	const { session } = context;
	const { from, prompt, research, file: fileArg, projectRoot } = args;

	// Create the standard logger wrapper
	const logWrapper = {
		info: (message, ...args) => log.info(message, ...args),
		warn: (message, ...args) => log.warn(message, ...args),
		error: (message, ...args) => log.error(message, ...args),
		debug: (message, ...args) => log.debug && log.debug(message, ...args),
		success: (message, ...args) => log.info(message, ...args)
	};
	const logWrapper = createLogWrapper(log);

	// --- Input Validation (Keep existing checks) ---
	if (!tasksJsonPath) {
		log.error('updateTasksDirect called without tasksJsonPath');
		return {
			success: false,
			error: { code: 'MISSING_ARGUMENT', message: 'tasksJsonPath is required' },
			fromCache: false
		};
	}
	if (args.id !== undefined && from === undefined) {
		// Keep 'from' vs 'id' check
		const errorMessage =
			"Use 'from' parameter, not 'id', or use 'update_task' tool.";
		log.error(errorMessage);
		return {
			success: false,
			error: { code: 'PARAMETER_MISMATCH', message: errorMessage },
			fromCache: false
		};
	}
	if (!from) {
		log.error('Missing from ID.');
		return {
			success: false,
			error: { code: 'MISSING_FROM_ID', message: 'No from ID specified.' },
			fromCache: false
		};
	}
	if (!prompt) {
		log.error('Missing prompt.');
		return {
			success: false,
			error: { code: 'MISSING_PROMPT', message: 'No prompt specified.' },
			fromCache: false
		};
	}
	let fromId;
	try {
		fromId = parseInt(from, 10);
		if (isNaN(fromId) || fromId <= 0) throw new Error();
	} catch {
		log.error(`Invalid from ID: ${from}`);
	// --- Input Validation ---
	if (!projectRoot) {
		logWrapper.error('updateTasksDirect requires a projectRoot argument.');
		return {
			success: false,
			error: {
				code: 'INVALID_FROM_ID',
				message: `Invalid from ID: ${from}. Must be a positive integer.`
			},
			fromCache: false
				code: 'MISSING_ARGUMENT',
				message: 'projectRoot is required.'
			}
		};
	}
	const useResearch = research === true;
	// --- End Input Validation ---

	log.info(
		`Updating tasks from ID ${fromId}. Research: ${useResearch}. Project Root: ${projectRoot}`
	if (!from) {
		logWrapper.error('updateTasksDirect called without from ID');
		return {
			success: false,
			error: {
				code: 'MISSING_ARGUMENT',
				message: 'Starting task ID (from) is required'
			}
		};
	}

	if (!prompt) {
		logWrapper.error('updateTasksDirect called without prompt');
		return {
			success: false,
			error: {
				code: 'MISSING_ARGUMENT',
				message: 'Update prompt is required'
			}
		};
	}

	// Resolve tasks file path
	const tasksFile = fileArg
		? path.resolve(projectRoot, fileArg)
		: path.resolve(projectRoot, 'tasks', 'tasks.json');

	logWrapper.info(
		`Updating tasks via direct function. From: ${from}, Research: ${research}, File: ${tasksFile}, ProjectRoot: ${projectRoot}`
	);

	enableSilentMode(); // Enable silent mode
	try {
		// Create logger wrapper using the utility
		const mcpLog = createLogWrapper(log);

		// Execute core updateTasks function, passing session context AND projectRoot
		await updateTasks(
			tasksJsonPath,
			fromId,
		// Call the core updateTasks function
		const result = await updateTasks(
			tasksFile,
			from,
			prompt,
			useResearch,
			// Pass context with logger wrapper, session, AND projectRoot
			{ mcpLog, session, projectRoot },
			'json' // Explicitly request JSON format for MCP
			research,
			{
				session,
				mcpLog: logWrapper,
				projectRoot
			},
			'json'
		);

		// Since updateTasks modifies file and doesn't return data, create success message
		if (result && result.success && Array.isArray(result.updatedTasks)) {
			logWrapper.success(
				`Successfully updated ${result.updatedTasks.length} tasks.`
			);
			return {
				success: true,
				data: {
					message: `Successfully initiated update for tasks from ID ${fromId} based on the prompt.`,
					fromId,
					tasksPath: tasksJsonPath,
					useResearch
				},
				fromCache: false // Modifies state
					message: `Successfully updated ${result.updatedTasks.length} tasks.`,
					tasksFile,
					updatedCount: result.updatedTasks.length,
					telemetryData: result.telemetryData
				}
			};
		} else {
			// Handle case where core function didn't return expected success structure
			logWrapper.error(
				'Core updateTasks function did not return a successful structure.'
			);
			return {
				success: false,
				error: {
					code: 'CORE_FUNCTION_ERROR',
					message:
						result?.message ||
						'Core function failed to update tasks or returned unexpected result.'
				}
			};
		}
	} catch (error) {
		log.error(`Error executing core updateTasks: ${error.message}`);
		logWrapper.error(`Error executing core updateTasks: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'UPDATE_TASKS_CORE_ERROR',
				message: error.message || 'Unknown error updating tasks'
			},
			fromCache: false
			}
		};
	} finally {
		disableSilentMode(); // Ensure silent mode is disabled
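For orientation, a minimal, hypothetical sketch of how an MCP tool handler might call the updated direct function; the import path and argument names are assumptions based on the surrounding diff, not a confirmed call site:

	// Hypothetical sketch (assumed import path):
	import { updateTasksDirect } from '../core/task-master-core.js';

	async function handleUpdateTool(args, log, session) {
		// projectRoot, from and prompt are all required by the new validation above.
		const result = await updateTasksDirect(
			{
				projectRoot: args.projectRoot,
				from: args.from,
				prompt: args.prompt,
				research: args.research
			},
			log,
			{ session }
		);
		if (!result.success) {
			log.error(`update-tasks failed: ${result.error.code}`);
		}
		return result; // on success: { success: true, data: { message, updatedCount, telemetryData, ... } }
	}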
@@ -339,6 +339,49 @@ export function findPRDDocumentPath(projectRoot, explicitPath, log) {
	return null;
}

export function findComplexityReportPath(projectRoot, explicitPath, log) {
	// If explicit path is provided, check if it exists
	if (explicitPath) {
		const fullPath = path.isAbsolute(explicitPath)
			? explicitPath
			: path.resolve(projectRoot, explicitPath);

		if (fs.existsSync(fullPath)) {
			log.info(`Using provided PRD document path: ${fullPath}`);
			return fullPath;
		} else {
			log.warn(
				`Provided PRD document path not found: ${fullPath}, will search for alternatives`
			);
		}
	}

	// Common locations and file patterns for PRD documents
	const commonLocations = [
		'', // Project root
		'scripts/'
	];

	const commonFileNames = [
		'complexity-report.json',
		'task-complexity-report.json'
	];

	// Check all possible combinations
	for (const location of commonLocations) {
		for (const fileName of commonFileNames) {
			const potentialPath = path.join(projectRoot, location, fileName);
			if (fs.existsSync(potentialPath)) {
				log.info(`Found PRD document at: ${potentialPath}`);
				return potentialPath;
			}
		}
	}

	log.warn(`No PRD document found in common locations within ${projectRoot}`);
	return null;
}

/**
 * Resolves the tasks output directory path
 * @param {string} projectRoot - The project root directory
@@ -10,7 +10,10 @@ import {
	withNormalizedProjectRoot
} from './utils.js';
import { showTaskDirect } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import {
	findTasksJsonPath,
	findComplexityReportPath
} from '../core/utils/path-utils.js';

/**
 * Custom processor function that removes allTasks from the response
@@ -50,6 +53,12 @@ export function registerShowTaskTool(server) {
			.string()
			.optional()
			.describe('Path to the tasks file relative to project root'),
		complexityReport: z
			.string()
			.optional()
			.describe(
				'Path to the complexity report file (relative to project root or absolute)'
			),
		projectRoot: z
			.string()
			.optional()
@@ -81,9 +90,22 @@ export function registerShowTaskTool(server) {
			}

			// Call the direct function, passing the normalized projectRoot
			// Resolve the path to complexity report
			let complexityReportPath;
			try {
				complexityReportPath = findComplexityReportPath(
					projectRoot,
					args.complexityReport,
					log
				);
			} catch (error) {
				log.error(`Error finding complexity report: ${error.message}`);
			}
			const result = await showTaskDirect(
				{
					tasksJsonPath: tasksJsonPath,
					reportPath: complexityReportPath,
					// Pass other relevant args
					id: id,
					status: status,
					projectRoot: projectRoot
@@ -10,7 +10,10 @@ import {
	withNormalizedProjectRoot
} from './utils.js';
import { listTasksDirect } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import {
	findTasksJsonPath,
	findComplexityReportPath
} from '../core/utils/path-utils.js';

/**
 * Register the getTasks tool with the MCP server
@@ -38,6 +41,12 @@ export function registerListTasksTool(server) {
			.describe(
				'Path to the tasks file (relative to project root or absolute)'
			),
		complexityReport: z
			.string()
			.optional()
			.describe(
				'Path to the complexity report file (relative to project root or absolute)'
			),
		projectRoot: z
			.string()
			.describe('The directory of the project. Must be an absolute path.')
@@ -60,11 +69,23 @@ export function registerListTasksTool(server) {
				);
			}

			// Resolve the path to complexity report
			let complexityReportPath;
			try {
				complexityReportPath = findComplexityReportPath(
					args.projectRoot,
					args.complexityReport,
					log
				);
			} catch (error) {
				log.error(`Error finding complexity report: ${error.message}`);
			}
			const result = await listTasksDirect(
				{
					tasksJsonPath: tasksJsonPath,
					status: args.status,
					withSubtasks: args.withSubtasks
					withSubtasks: args.withSubtasks,
					reportPath: complexityReportPath
				},
				log
			);
@@ -10,7 +10,10 @@ import {
	withNormalizedProjectRoot
} from './utils.js';
import { nextTaskDirect } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import {
	findTasksJsonPath,
	findComplexityReportPath
} from '../core/utils/path-utils.js';

/**
 * Register the next-task tool with the MCP server
@@ -23,6 +26,12 @@ export function registerNextTaskTool(server) {
			'Find the next task to work on based on dependencies and status',
		parameters: z.object({
			file: z.string().optional().describe('Absolute path to the tasks file'),
			complexityReport: z
				.string()
				.optional()
				.describe(
					'Path to the complexity report file (relative to project root or absolute)'
				),
			projectRoot: z
				.string()
				.describe('The directory of the project. Must be an absolute path.')
@@ -45,9 +54,21 @@ export function registerNextTaskTool(server) {
				);
			}

			// Resolve the path to complexity report
			let complexityReportPath;
			try {
				complexityReportPath = findComplexityReportPath(
					args.projectRoot,
					args.complexityReport,
					log
				);
			} catch (error) {
				log.error(`Error finding complexity report: ${error.message}`);
			}
			const result = await nextTaskDirect(
				{
					tasksJsonPath: tasksJsonPath
					tasksJsonPath: tasksJsonPath,
					reportPath: complexityReportPath
				},
				log
			);
@@ -11,6 +11,7 @@ import {
} from './utils.js';
import { setTaskStatusDirect } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import { TASK_STATUS_OPTIONS } from '../../../src/constants/task-status.js';

/**
 * Register the setTaskStatus tool with the MCP server
@@ -27,7 +28,7 @@ export function registerSetTaskStatusTool(server) {
			"Task ID or subtask ID (e.g., '15', '15.2'). Can be comma-separated to update multiple tasks/subtasks at once."
		),
		status: z
			.string()
			.enum(TASK_STATUS_OPTIONS)
			.describe(
				"New status to set (e.g., 'pending', 'done', 'in-progress', 'review', 'deferred', 'cancelled')."
			),
34
package-lock.json
generated
@@ -1,12 +1,12 @@
{
	"name": "task-master-ai",
	"version": "0.12.1",
	"version": "0.13.2",
	"lockfileVersion": 3,
	"requires": true,
	"packages": {
		"": {
			"name": "task-master-ai",
			"version": "0.12.1",
			"version": "0.13.2",
			"license": "MIT WITH Commons-Clause",
			"dependencies": {
				"@ai-sdk/anthropic": "^1.2.10",
@@ -19,6 +19,9 @@
				"@anthropic-ai/sdk": "^0.39.0",
				"@openrouter/ai-sdk-provider": "^0.4.5",
				"ai": "^4.3.10",
				"boxen": "^8.0.1",
				"chalk": "^5.4.1",
				"cli-table3": "^0.6.5",
				"commander": "^11.1.0",
				"cors": "^2.8.5",
				"dotenv": "^16.3.1",
@@ -34,7 +37,8 @@
				"ollama-ai-provider": "^1.2.0",
				"openai": "^4.89.0",
				"ora": "^8.2.0",
				"uuid": "^11.1.0"
				"uuid": "^11.1.0",
				"zod": "^3.23.8"
			},
			"bin": {
				"task-master": "bin/task-master.js",
@@ -45,9 +49,6 @@
				"@changesets/changelog-github": "^0.5.1",
				"@changesets/cli": "^2.28.1",
				"@types/jest": "^29.5.14",
				"boxen": "^8.0.1",
				"chalk": "^5.4.1",
				"cli-table3": "^0.6.5",
				"execa": "^8.0.1",
				"ink": "^5.0.1",
				"jest": "^29.7.0",
@@ -57,8 +58,7 @@
				"prettier": "^3.5.3",
				"react": "^18.3.1",
				"supertest": "^7.1.0",
				"tsx": "^4.16.2",
				"zod": "^3.23.8"
				"tsx": "^4.16.2"
			},
			"engines": {
				"node": ">=14.0.0"
@@ -1238,7 +1238,6 @@
			"version": "1.5.0",
			"resolved": "https://registry.npmjs.org/@colors/colors/-/colors-1.5.0.tgz",
			"integrity": "sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ==",
			"dev": true,
			"license": "MIT",
			"optional": true,
			"engines": {
@@ -3307,7 +3306,6 @@
			"version": "3.0.1",
			"resolved": "https://registry.npmjs.org/ansi-align/-/ansi-align-3.0.1.tgz",
			"integrity": "sha512-IOfwwBF5iczOjp/WeY4YxyjqAFMQoZufdQWDd19SEExbVLNXqvpzSJ/M7Za4/sCPmQ0+GRquoA7bGcINcxew6w==",
			"dev": true,
			"license": "ISC",
			"dependencies": {
				"string-width": "^4.1.0"
@@ -3317,7 +3315,6 @@
			"version": "5.0.1",
			"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
			"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
			"dev": true,
			"license": "MIT",
			"engines": {
				"node": ">=8"
@@ -3327,14 +3324,12 @@
			"version": "8.0.0",
			"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
			"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
			"dev": true,
			"license": "MIT"
		},
		"node_modules/ansi-align/node_modules/string-width": {
			"version": "4.2.3",
			"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
			"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"emoji-regex": "^8.0.0",
@@ -3349,7 +3344,6 @@
			"version": "6.0.1",
			"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
			"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"ansi-regex": "^5.0.1"
@@ -3699,7 +3693,6 @@
			"version": "8.0.1",
			"resolved": "https://registry.npmjs.org/boxen/-/boxen-8.0.1.tgz",
			"integrity": "sha512-F3PH5k5juxom4xktynS7MoFY+NUWH5LC4CnH11YB8NPew+HLpmBLCybSAEyb2F+4pRXhuhWqFesoQd6DAyc2hw==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"ansi-align": "^3.0.1",
@@ -3850,7 +3843,6 @@
			"version": "8.0.0",
			"resolved": "https://registry.npmjs.org/camelcase/-/camelcase-8.0.0.tgz",
			"integrity": "sha512-8WB3Jcas3swSvjIeA2yvCJ+Miyz5l1ZmB6HFb9R1317dt9LCQoswg/BGrmAmkWVEszSrrg4RwmO46qIm2OEnSA==",
			"dev": true,
			"license": "MIT",
			"engines": {
				"node": ">=16"
@@ -3935,7 +3927,6 @@
			"version": "3.0.0",
			"resolved": "https://registry.npmjs.org/cli-boxes/-/cli-boxes-3.0.0.tgz",
			"integrity": "sha512-/lzGpEWL/8PfI0BmBOPRwp0c/wFNX1RdUML3jK/RcSBA9T8mZDdQpqYBKtCFTOfQbwPqWEOpjqW+Fnayc0969g==",
			"dev": true,
			"license": "MIT",
			"engines": {
				"node": ">=10"
@@ -3975,7 +3966,6 @@
			"version": "0.6.5",
			"resolved": "https://registry.npmjs.org/cli-table3/-/cli-table3-0.6.5.tgz",
			"integrity": "sha512-+W/5efTR7y5HRD7gACw9yQjqMVvEMLBHmboM/kPWam+H+Hmyrgjh6YncVKK122YZkXrLudzTuAukUw9FnMf7IQ==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"string-width": "^4.2.0"
@@ -3991,7 +3981,6 @@
			"version": "5.0.1",
			"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
			"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
			"dev": true,
			"license": "MIT",
			"engines": {
				"node": ">=8"
@@ -4001,14 +3990,12 @@
			"version": "8.0.0",
			"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
			"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
			"dev": true,
			"license": "MIT"
		},
		"node_modules/cli-table3/node_modules/string-width": {
			"version": "4.2.3",
			"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
			"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"emoji-regex": "^8.0.0",
@@ -4023,7 +4010,6 @@
			"version": "6.0.1",
			"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
			"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"ansi-regex": "^5.0.1"
@@ -9488,7 +9474,6 @@
			"version": "4.37.0",
			"resolved": "https://registry.npmjs.org/type-fest/-/type-fest-4.37.0.tgz",
			"integrity": "sha512-S/5/0kFftkq27FPNye0XM1e2NsnoD/3FS+pBmbjmmtLT6I+i344KoOf7pvXreaFsDamWeaJX55nczA1m5PsBDg==",
			"dev": true,
			"license": "(MIT OR CC0-1.0)",
			"engines": {
				"node": ">=16"
@@ -9698,7 +9683,6 @@
			"version": "5.0.0",
			"resolved": "https://registry.npmjs.org/widest-line/-/widest-line-5.0.0.tgz",
			"integrity": "sha512-c9bZp7b5YtRj2wOe6dlj32MK+Bx/M/d+9VB2SHM1OtsUHR0aV0tdP6DWh/iMt0kWi1t5g1Iudu6hQRNd1A4PVA==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"string-width": "^7.0.0"
@@ -9714,7 +9698,6 @@
			"version": "9.0.0",
			"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-9.0.0.tgz",
			"integrity": "sha512-G8ura3S+3Z2G+mkgNRq8dqaFZAuxfsxpBB8OCTGRTCtp+l/v9nbFNmCUP1BZMts3G1142MsZfn6eeUKrr4PD1Q==",
			"dev": true,
			"license": "MIT",
			"dependencies": {
				"ansi-styles": "^6.2.1",
@@ -9732,7 +9715,6 @@
			"version": "6.2.1",
			"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-6.2.1.tgz",
			"integrity": "sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug==",
			"dev": true,
			"license": "MIT",
			"engines": {
				"node": ">=12"
21
package.json
@@ -1,6 +1,6 @@
{
	"name": "task-master-ai",
	"version": "0.12.1",
	"version": "0.14.0-rc.0",
	"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
	"main": "index.js",
	"type": "module",
@@ -64,7 +64,11 @@
		"ollama-ai-provider": "^1.2.0",
		"openai": "^4.89.0",
		"ora": "^8.2.0",
		"uuid": "^11.1.0"
		"uuid": "^11.1.0",
		"boxen": "^8.0.1",
		"chalk": "^5.4.1",
		"cli-table3": "^0.6.5",
		"zod": "^3.23.8"
	},
	"engines": {
		"node": ">=14.0.0"
@@ -78,15 +82,14 @@
		"url": "https://github.com/eyaltoledano/claude-task-master/issues"
	},
	"files": [
		"scripts/init.js",
		"scripts/dev.js",
		"scripts/modules/**",
		"scripts/**",
		"assets/**",
		".cursor/**",
		"README-task-master.md",
		"index.js",
		"bin/**",
		"mcp-server/**"
		"mcp-server/**",
		"src/**"
	],
	"overrides": {
		"node-fetch": "^3.3.2",
@@ -96,9 +99,6 @@
		"@changesets/changelog-github": "^0.5.1",
		"@changesets/cli": "^2.28.1",
		"@types/jest": "^29.5.14",
		"boxen": "^8.0.1",
		"chalk": "^5.4.1",
		"cli-table3": "^0.6.5",
		"execa": "^8.0.1",
		"ink": "^5.0.1",
		"jest": "^29.7.0",
@@ -108,7 +108,6 @@
		"prettier": "^3.5.3",
		"react": "^18.3.1",
		"supertest": "^7.1.0",
		"tsx": "^4.16.2",
		"zod": "^3.23.8"
		"tsx": "^4.16.2"
	}
}
@@ -32,7 +32,7 @@ The script can be configured through environment variables in a `.env` file at t
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `TASKMASTER_LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
- `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium)
- `PROJECT_NAME`: Override default project name in tasks.json
@@ -225,7 +225,7 @@ To use the Perplexity integration:

## Logging

The script supports different logging levels controlled by the `LOG_LEVEL` environment variable:
The script supports different logging levels controlled by the `TASKMASTER_LOG_LEVEL` environment variable:

- `debug`: Detailed information, typically useful for troubleshooting
- `info`: Confirmation that things are working as expected (default)
@@ -38,10 +38,10 @@ const LOG_LEVELS = {
	success: 4
};

// Get log level from environment or default to info
const LOG_LEVEL = process.env.LOG_LEVEL
	? LOG_LEVELS[process.env.LOG_LEVEL.toLowerCase()]
	: LOG_LEVELS.info;
// Determine log level from environment variable or default to 'info'
const LOG_LEVEL = process.env.TASKMASTER_LOG_LEVEL
	? LOG_LEVELS[process.env.TASKMASTER_LOG_LEVEL.toLowerCase()]
	: LOG_LEVELS.info; // Default to info

// Create a color gradient for the banner
const coolGradient = gradient(['#00b4d8', '#0077b6', '#03045e']);
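One caveat the rename preserves: `LOG_LEVELS[...toLowerCase()]` evaluates to `undefined` for an unrecognized value, so a misspelled `TASKMASTER_LOG_LEVEL` silently disables level comparisons instead of falling back to `info`. A defensive lookup could read as follows (a sketch, not part of the diff):

	// Sketch only: fall back to 'info' for unknown TASKMASTER_LOG_LEVEL values.
	const requested = (process.env.TASKMASTER_LOG_LEVEL || 'info').toLowerCase();
	const LOG_LEVEL =
		requested in LOG_LEVELS ? LOG_LEVELS[requested] : LOG_LEVELS.info;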
@@ -180,9 +180,9 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {

	// Map template names to their actual source paths
	switch (templateName) {
		case 'scripts_README.md':
			sourcePath = path.join(__dirname, '..', 'assets', 'scripts_README.md');
			break;
		// case 'scripts_README.md':
		// sourcePath = path.join(__dirname, '..', 'assets', 'scripts_README.md');
		// break;
		case 'dev_workflow.mdc':
			sourcePath = path.join(
				__dirname,
@@ -219,8 +219,8 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
				'self_improve.mdc'
			);
			break;
		case 'README-task-master.md':
			sourcePath = path.join(__dirname, '..', 'README-task-master.md');
		// case 'README-task-master.md':
		// sourcePath = path.join(__dirname, '..', 'README-task-master.md');
			break;
		case 'windsurfrules':
			sourcePath = path.join(__dirname, '..', 'assets', '.windsurfrules');
@@ -351,18 +351,18 @@ async function initializeProject(options = {}) {
	}

	// Debug logging only if not in silent mode
	if (!isSilentMode()) {
		console.log('===== DEBUG: INITIALIZE PROJECT OPTIONS RECEIVED =====');
		console.log('Full options object:', JSON.stringify(options));
		console.log('options.yes:', options.yes);
		console.log('==================================================');
	}
	// if (!isSilentMode()) {
	// console.log('===== DEBUG: INITIALIZE PROJECT OPTIONS RECEIVED =====');
	// console.log('Full options object:', JSON.stringify(options));
	// console.log('options.yes:', options.yes);
	// console.log('==================================================');
	// }

	const skipPrompts = options.yes || (options.name && options.description);

	if (!isSilentMode()) {
		console.log('Skip prompts determined:', skipPrompts);
	}
	// if (!isSilentMode()) {
	// console.log('Skip prompts determined:', skipPrompts);
	// }

	if (skipPrompts) {
		if (!isSilentMode()) {
@@ -565,12 +565,12 @@ function createProjectStructure(addAliases, dryRun) {
		path.join(targetDir, 'scripts', 'example_prd.txt')
	);

	// Create main README.md
	copyTemplateFile(
		'README-task-master.md',
		path.join(targetDir, 'README-task-master.md'),
		replacements
	);
	// // Create main README.md
	// copyTemplateFile(
	// 'README-task-master.md',
	// path.join(targetDir, 'README-task-master.md'),
	// replacements
	// );

	// Initialize git repository if git is available
	try {
@@ -761,21 +761,22 @@ function setupMCPConfiguration(targetDir) {
	const newMCPServer = {
		'task-master-ai': {
			command: 'npx',
			args: ['-y', 'task-master-mcp'],
			args: ['-y', '--package=task-master-ai', 'task-master-ai'],
			env: {
				ANTHROPIC_API_KEY: 'YOUR_ANTHROPIC_API_KEY',
				PERPLEXITY_API_KEY: 'YOUR_PERPLEXITY_API_KEY',
				MODEL: 'claude-3-7-sonnet-20250219',
				PERPLEXITY_MODEL: 'sonar-pro',
				MAX_TOKENS: '64000',
				TEMPERATURE: '0.2',
				DEFAULT_SUBTASKS: '5',
				DEFAULT_PRIORITY: 'medium'
				ANTHROPIC_API_KEY: 'ANTHROPIC_API_KEY_HERE',
				PERPLEXITY_API_KEY: 'PERPLEXITY_API_KEY_HERE',
				OPENAI_API_KEY: 'OPENAI_API_KEY_HERE',
				GOOGLE_API_KEY: 'GOOGLE_API_KEY_HERE',
				XAI_API_KEY: 'XAI_API_KEY_HERE',
				OPENROUTER_API_KEY: 'OPENROUTER_API_KEY_HERE',
				MISTRAL_API_KEY: 'MISTRAL_API_KEY_HERE',
				AZURE_OPENAI_API_KEY: 'AZURE_OPENAI_API_KEY_HERE',
				OLLAMA_API_KEY: 'OLLAMA_API_KEY_HERE'
			}
		}
	};

	// Check if mcp.json already exists
	if (fs.existsSync(mcpJsonPath)) {
		log(
			'info',
@@ -795,14 +796,14 @@ function setupMCPConfiguration(targetDir) {
			(server) =>
				server.args &&
				server.args.some(
					(arg) => typeof arg === 'string' && arg.includes('task-master-mcp')
					(arg) => typeof arg === 'string' && arg.includes('task-master-ai')
				)
		);

		if (hasMCPString) {
			log(
				'info',
				'Found existing task-master-mcp configuration in mcp.json, leaving untouched'
				'Found existing task-master-ai MCP configuration in mcp.json, leaving untouched'
			);
			return; // Exit early, don't modify the existing configuration
		}
@@ -14,9 +14,13 @@ import {
	getResearchModelId,
	getFallbackProvider,
	getFallbackModelId,
	getParametersForRole
	getParametersForRole,
	getUserId,
	MODEL_MAP,
	getDebugFlag,
	getBaseUrlForRole
} from './config-manager.js';
import { log, resolveEnvVariable, findProjectRoot } from './utils.js';
import { log, resolveEnvVariable, isSilentMode } from './utils.js';

import * as anthropic from '../../src/ai-providers/anthropic.js';
import * as perplexity from '../../src/ai-providers/perplexity.js';
@@ -26,6 +30,36 @@ import * as xai from '../../src/ai-providers/xai.js';
import * as openrouter from '../../src/ai-providers/openrouter.js';
// TODO: Import other provider modules when implemented (ollama, etc.)

// Helper function to get cost for a specific model
function _getCostForModel(providerName, modelId) {
	if (!MODEL_MAP || !MODEL_MAP[providerName]) {
		log(
			'warn',
			`Provider "${providerName}" not found in MODEL_MAP. Cannot determine cost for model ${modelId}.`
		);
		return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
	}

	const modelData = MODEL_MAP[providerName].find((m) => m.id === modelId);

	if (!modelData || !modelData.cost_per_1m_tokens) {
		log(
			'debug',
			`Cost data not found for model "${modelId}" under provider "${providerName}". Assuming zero cost.`
		);
		return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
	}

	// Ensure currency is part of the returned object, defaulting if not present
	const currency = modelData.cost_per_1m_tokens.currency || 'USD';

	return {
		inputCost: modelData.cost_per_1m_tokens.input || 0,
		outputCost: modelData.cost_per_1m_tokens.output || 0,
		currency: currency
	};
}

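To make the cost math concrete, a small worked sketch of how `_getCostForModel` feeds the total-cost formula used later by `logAiUsage`; the per-token rates here are illustrative placeholders, not values from MODEL_MAP:

	// Illustrative rates: $3 per 1M input tokens, $15 per 1M output tokens.
	const { inputCost, outputCost } = { inputCost: 3, outputCost: 15 };
	const inputTokens = 12_000;
	const outputTokens = 2_500;

	// Same formula as in logAiUsage below:
	const totalCost =
		(inputTokens / 1_000_000) * inputCost +
		(outputTokens / 1_000_000) * outputCost;
	console.log(totalCost.toFixed(6)); // 0.036 + 0.0375 = "0.073500"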
// --- Provider Function Map ---
// Maps provider names (lowercase) to their respective service functions
const PROVIDER_FUNCTIONS = {
@@ -196,18 +230,22 @@ async function _attemptProviderCallWithRetries(

	while (retries <= MAX_RETRIES) {
		try {
			if (getDebugFlag()) {
				log(
					'info',
					`Attempt ${retries + 1}/${MAX_RETRIES + 1} calling ${fnName} (Provider: ${providerName}, Model: ${modelId}, Role: ${attemptRole})`
				);
			}

			// Call the specific provider function directly
			const result = await providerApiFn(callParams);

			if (getDebugFlag()) {
				log(
					'info',
					`${fnName} succeeded for role ${attemptRole} (Provider: ${providerName}) on attempt ${retries + 1}`
				);
			}
			return result;
		} catch (error) {
			log(
@@ -220,13 +258,13 @@ async function _attemptProviderCallWithRetries(
				const delay = INITIAL_RETRY_DELAY_MS * Math.pow(2, retries - 1);
				log(
					'info',
					`Retryable error detected. Retrying in ${delay / 1000}s...`
					`Something went wrong on the provider side. Retrying in ${delay / 1000}s...`
				);
				await new Promise((resolve) => setTimeout(resolve, delay));
			} else {
				log(
					'error',
					`Non-retryable error or max retries reached for role ${attemptRole} (${fnName} / ${providerName}).`
					`Something went wrong on the provider side. Max retries reached for role ${attemptRole} (${fnName} / ${providerName}).`
				);
				throw error;
			}
@@ -242,7 +280,15 @@ async function _attemptProviderCallWithRetries(
 * Base logic for unified service functions.
 * @param {string} serviceType - Type of service ('generateText', 'streamText', 'generateObject').
 * @param {object} params - Original parameters passed to the service function.
 * @param {string} params.role - The initial client role.
 * @param {object} [params.session=null] - Optional MCP session object.
 * @param {string} [params.projectRoot] - Optional project root path.
 * @param {string} params.commandName - Name of the command invoking the service.
 * @param {string} params.outputType - 'cli' or 'mcp'.
 * @param {string} [params.systemPrompt] - Optional system prompt.
 * @param {string} [params.prompt] - The prompt for the AI.
 * @param {string} [params.schema] - The Zod schema for the expected object.
 * @param {string} [params.objectName] - Name for object/tool.
 * @returns {Promise<any>} Result from the underlying provider call.
 */
async function _unifiedServiceRunner(serviceType, params) {
@@ -254,15 +300,25 @@ async function _unifiedServiceRunner(serviceType, params) {
		prompt,
		schema,
		objectName,
		commandName,
		outputType,
		...restApiParams
	} = params;
	if (getDebugFlag()) {
		log('info', `${serviceType}Service called`, {
			role: initialRole,
			commandName,
			outputType,
			projectRoot
		});
	}

	// Determine the effective project root (passed in or detected)
	const effectiveProjectRoot = projectRoot || findProjectRoot();
	// Determine the effective project root (passed in or detected if needed by config getters)
	const { findProjectRoot: detectProjectRoot } = await import('./utils.js'); // Dynamically import if needed
	const effectiveProjectRoot = projectRoot || detectProjectRoot();

	// Get userId from config - ensure effectiveProjectRoot is passed
	const userId = getUserId(effectiveProjectRoot);

	let sequence;
	if (initialRole === 'main') {
@@ -284,7 +340,15 @@ async function _unifiedServiceRunner(serviceType, params) {
		'AI service call failed for all configured roles.';

	for (const currentRole of sequence) {
		let providerName, modelId, apiKey, roleParams, providerFnSet, providerApiFn;
		let providerName,
			modelId,
			apiKey,
			roleParams,
			providerFnSet,
			providerApiFn,
			baseUrl,
			providerResponse,
			telemetryData = null;

		try {
			log('info', `New AI service call with role: ${currentRole}`);
@@ -325,6 +389,7 @@ async function _unifiedServiceRunner(serviceType, params) {

			// Pass effectiveProjectRoot to getParametersForRole
			roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
			baseUrl = getBaseUrlForRole(currentRole, effectiveProjectRoot);

			// 2. Get Provider Function Set
			providerFnSet = PROVIDER_FUNCTIONS[providerName?.toLowerCase()];
@@ -401,12 +466,13 @@ async function _unifiedServiceRunner(serviceType, params) {
				maxTokens: roleParams.maxTokens,
				temperature: roleParams.temperature,
				messages,
				baseUrl,
				...(serviceType === 'generateObject' && { schema, objectName }),
				...restApiParams
			};

			// 6. Attempt the call with retries
			const result = await _attemptProviderCallWithRetries(
			providerResponse = await _attemptProviderCallWithRetries(
				providerApiFn,
				callParams,
				providerName,
@@ -414,9 +480,53 @@ async function _unifiedServiceRunner(serviceType, params) {
				currentRole
			);

			log('info', `${serviceType}Service succeeded using role: ${currentRole}`);
			// --- Log Telemetry & Capture Data ---
			// Use providerResponse which contains the usage data directly for text/object
			if (userId && providerResponse && providerResponse.usage) {
				try {
					telemetryData = await logAiUsage({
						userId,
						commandName,
						providerName,
						modelId,
						inputTokens: providerResponse.usage.inputTokens,
						outputTokens: providerResponse.usage.outputTokens,
						outputType
					});
				} catch (telemetryError) {
					// logAiUsage already logs its own errors and returns null on failure
					// No need to log again here, telemetryData will remain null
				}
			} else if (userId && providerResponse && !providerResponse.usage) {
				log(
					'warn',
					`Cannot log telemetry for ${commandName} (${providerName}/${modelId}): AI result missing 'usage' data. (May be expected for streams)`
				);
			}
			// --- End Log Telemetry ---

			return result;
			// --- Extract the correct main result based on serviceType ---
			let finalMainResult;
			if (serviceType === 'generateText') {
				finalMainResult = providerResponse.text;
			} else if (serviceType === 'generateObject') {
				finalMainResult = providerResponse.object;
			} else if (serviceType === 'streamText') {
				finalMainResult = providerResponse; // Return the whole stream object
			} else {
				log(
					'error',
					`Unknown serviceType in _unifiedServiceRunner: ${serviceType}`
				);
				finalMainResult = providerResponse; // Default to returning the whole object as fallback
			}
			// --- End Main Result Extraction ---

			// Return a composite object including the extracted main result and telemetry data
			return {
				mainResult: finalMainResult,
				telemetryData: telemetryData
			};
		} catch (error) {
			const cleanMessage = _extractErrorMessage(error);
			log(
@@ -461,11 +571,16 @@ async function _unifiedServiceRunner(serviceType, params) {
 * @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
 * @param {string} params.prompt - The prompt for the AI.
 * @param {string} [params.systemPrompt] - Optional system prompt.
 * // Other specific generateText params can be included here.
 * @returns {Promise<string>} The generated text content.
 * @param {string} params.commandName - Name of the command invoking the service.
 * @param {string} [params.outputType='cli'] - 'cli' or 'mcp'.
 * @returns {Promise<object>} Result object containing generated text and usage data.
 */
async function generateTextService(params) {
	return _unifiedServiceRunner('generateText', params);
	// Ensure default outputType if not provided
	const defaults = { outputType: 'cli' };
	const combinedParams = { ...defaults, ...params };
	// TODO: Validate commandName exists?
	return _unifiedServiceRunner('generateText', combinedParams);
}

/**
@@ -478,11 +593,18 @@ async function generateTextService(params) {
 * @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
 * @param {string} params.prompt - The prompt for the AI.
 * @param {string} [params.systemPrompt] - Optional system prompt.
 * // Other specific streamText params can be included here.
 * @returns {Promise<ReadableStream<string>>} A readable stream of text deltas.
 * @param {string} params.commandName - Name of the command invoking the service.
 * @param {string} [params.outputType='cli'] - 'cli' or 'mcp'.
 * @returns {Promise<object>} Result object containing the stream and usage data.
 */
async function streamTextService(params) {
	return _unifiedServiceRunner('streamText', params);
	const defaults = { outputType: 'cli' };
	const combinedParams = { ...defaults, ...params };
	// TODO: Validate commandName exists?
	// NOTE: Telemetry for streaming might be tricky as usage data often comes at the end.
	// The current implementation logs *after* the stream is returned.
	// We might need to adjust how usage is captured/logged for streams.
	return _unifiedServiceRunner('streamText', combinedParams);
}

/**
@@ -498,15 +620,89 @@ async function streamTextService(params) {
 * @param {string} [params.systemPrompt] - Optional system prompt.
 * @param {string} [params.objectName='generated_object'] - Name for object/tool.
 * @param {number} [params.maxRetries=3] - Max retries for object generation.
 * @returns {Promise<object>} The generated object matching the schema.
 * @param {string} params.commandName - Name of the command invoking the service.
 * @param {string} [params.outputType='cli'] - 'cli' or 'mcp'.
 * @returns {Promise<object>} Result object containing the generated object and usage data.
 */
async function generateObjectService(params) {
	const defaults = {
		objectName: 'generated_object',
		maxRetries: 3
		maxRetries: 3,
		outputType: 'cli'
	};
	const combinedParams = { ...defaults, ...params };
	// TODO: Validate commandName exists?
	return _unifiedServiceRunner('generateObject', combinedParams);
}

export { generateTextService, streamTextService, generateObjectService };
// --- Telemetry Function ---
/**
 * Logs AI usage telemetry data.
 * For now, it just logs to the console. Sending will be implemented later.
 * @param {object} params - Telemetry parameters.
 * @param {string} params.userId - Unique user identifier.
 * @param {string} params.commandName - The command that triggered the AI call.
 * @param {string} params.providerName - The AI provider used (e.g., 'openai').
 * @param {string} params.modelId - The specific AI model ID used.
 * @param {number} params.inputTokens - Number of input tokens.
 * @param {number} params.outputTokens - Number of output tokens.
 */
async function logAiUsage({
	userId,
	commandName,
	providerName,
	modelId,
	inputTokens,
	outputTokens,
	outputType
}) {
	try {
		const isMCP = outputType === 'mcp';
		const timestamp = new Date().toISOString();
		const totalTokens = (inputTokens || 0) + (outputTokens || 0);

		// Destructure currency along with costs
		const { inputCost, outputCost, currency } = _getCostForModel(
			providerName,
			modelId
		);

		const totalCost =
			((inputTokens || 0) / 1_000_000) * inputCost +
			((outputTokens || 0) / 1_000_000) * outputCost;

		const telemetryData = {
			timestamp,
			userId,
			commandName,
			modelUsed: modelId, // Consistent field name from requirements
			providerName, // Keep provider name for context
			inputTokens: inputTokens || 0,
			outputTokens: outputTokens || 0,
			totalTokens,
			totalCost: parseFloat(totalCost.toFixed(6)),
			currency // Add currency to the telemetry data
		};

		if (getDebugFlag()) {
			log('info', 'AI Usage Telemetry:', telemetryData);
		}

		// TODO (Subtask 77.2): Send telemetryData securely to the external endpoint.

		return telemetryData;
	} catch (error) {
		log('error', `Failed to log AI usage telemetry: ${error.message}`, {
			error
		});
		// Don't re-throw; telemetry failure shouldn't block core functionality.
		return null;
	}
}

export {
	generateTextService,
	streamTextService,
	generateObjectService,
	logAiUsage
};
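Because the unified services now resolve to a composite `{ mainResult, telemetryData }` rather than the bare text, object, or stream, call sites have to destructure the result. A minimal consumer sketch under that contract (the prompt and command name are placeholders):

	// Sketch only: consuming the composite return of generateTextService.
	import { generateTextService } from './ai-services-unified.js';

	const { mainResult, telemetryData } = await generateTextService({
		role: 'main',
		commandName: 'example-command', // placeholder
		outputType: 'cli',
		prompt: 'Summarize the open tasks.' // placeholder
	});

	console.log(mainResult); // the generated text
	if (telemetryData) {
		console.log(`Cost: ${telemetryData.totalCost} ${telemetryData.currency}`);
	}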
@@ -10,6 +10,7 @@ import boxen from 'boxen';
import fs from 'fs';
import https from 'https';
import inquirer from 'inquirer';
import ora from 'ora'; // Import ora

import { log, readJSON } from './utils.js';
import {
@@ -61,7 +62,8 @@ import {
	stopLoadingIndicator,
	displayModelConfiguration,
	displayAvailableModels,
	displayApiKeyStatus
	displayApiKeyStatus,
	displayAiUsageSummary
} from './ui.js';

import { initializeProject } from '../init.js';
@@ -72,7 +74,11 @@ import {
	getApiKeyStatusReport
} from './task-manager/models.js';
import { findProjectRoot } from './utils.js';

import {
	isValidTaskStatus,
	TASK_STATUS_OPTIONS
} from '../../src/constants/task-status.js';
import { getTaskMasterVersion } from '../../src/utils/getVersion.js';
/**
 * Runs the interactive setup process for model configuration.
 * @param {string|null} projectRoot - The resolved project root directory.
@@ -485,11 +491,6 @@ function registerCommands(programInstance) {
			process.exit(1);
		});

	// Default help
	programInstance.on('--help', function () {
		displayHelp();
	});

	// parse-prd command
	programInstance
		.command('parse-prd')
@@ -514,29 +515,41 @@ function registerCommands(programInstance) {
			const outputPath = options.output;
			const force = options.force || false;
			const append = options.append || false;
			let useForce = force;
			let useAppend = append;

			// Helper function to check if tasks.json exists and confirm overwrite
			async function confirmOverwriteIfNeeded() {
				if (fs.existsSync(outputPath) && !force && !append) {
					const shouldContinue = await confirmTaskOverwrite(outputPath);
					if (!shouldContinue) {
						console.log(chalk.yellow('Operation cancelled by user.'));
				if (fs.existsSync(outputPath) && !useForce && !useAppend) {
					const overwrite = await confirmTaskOverwrite(outputPath);
					if (!overwrite) {
						log('info', 'Operation cancelled.');
						return false;
					}
					// If user confirms 'y', we should set useForce = true for the parsePRD call
					// Only overwrite if not appending
					useForce = true;
				}
				return true;
			}

			// If no input file specified, check for default PRD location
			let spinner;

			try {
				if (!inputFile) {
					if (fs.existsSync(defaultPrdPath)) {
						console.log(chalk.blue(`Using default PRD file: ${defaultPrdPath}`));

						// Check for existing tasks.json before proceeding
						console.log(
							chalk.blue(`Using default PRD file path: ${defaultPrdPath}`)
						);
						if (!(await confirmOverwriteIfNeeded())) return;

						console.log(chalk.blue(`Generating ${numTasks} tasks...`));
						await parsePRD(defaultPrdPath, outputPath, numTasks, { append });
						spinner = ora('Parsing PRD and generating tasks...\n').start();
						await parsePRD(defaultPrdPath, outputPath, numTasks, {
							append: useAppend, // Changed key from useAppend to append
							force: useForce // Changed key from useForce to force
						});
						spinner.succeed('Tasks generated successfully!');
						return;
					}

@@ -578,7 +591,13 @@ function registerCommands(programInstance) {
					return;
				}

				// Check for existing tasks.json before proceeding with specified input file
				if (!fs.existsSync(inputFile)) {
					console.error(
						chalk.red(`Error: Input PRD file not found: ${inputFile}`)
					);
					process.exit(1);
				}

				if (!(await confirmOverwriteIfNeeded())) return;

				console.log(chalk.blue(`Parsing PRD file: ${inputFile}`));
@@ -587,7 +606,20 @@ function registerCommands(programInstance) {
					console.log(chalk.blue('Appending to existing tasks...'));
				}

				await parsePRD(inputFile, outputPath, numTasks, { append });
				spinner = ora('Parsing PRD and generating tasks...\n').start();
				await parsePRD(inputFile, outputPath, numTasks, {
					append: useAppend, // Use the same option keys as the default-path call above
					force: useForce
				});
				spinner.succeed('Tasks generated successfully!');
			} catch (error) {
				if (spinner) {
					spinner.fail(`Error parsing PRD: ${error.message}`);
				} else {
					console.error(chalk.red(`Error parsing PRD: ${error.message}`));
				}
				process.exit(1);
			}
		});

// update command
|
||||
@@ -1006,7 +1038,7 @@ function registerCommands(programInstance) {
|
||||
)
|
||||
.option(
|
||||
'-s, --status <status>',
|
||||
'New status (todo, in-progress, review, done)'
|
||||
`New status (one of: ${TASK_STATUS_OPTIONS.join(', ')})`
|
||||
)
|
||||
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||
.action(async (options) => {
|
||||
@@ -1019,6 +1051,16 @@ function registerCommands(programInstance) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
if (!isValidTaskStatus(status)) {
|
||||
console.error(
|
||||
chalk.red(
|
||||
`Error: Invalid status value: ${status}. Use one of: ${TASK_STATUS_OPTIONS.join(', ')}`
|
||||
)
|
||||
);
|
||||
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
console.log(
|
||||
chalk.blue(`Setting status of task(s) ${taskId} to: ${status}`)
|
||||
);
|
||||
@@ -1031,10 +1073,16 @@ function registerCommands(programInstance) {
|
||||
.command('list')
|
||||
.description('List all tasks')
|
||||
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||
.option(
|
||||
'-r, --report <report>',
|
||||
'Path to the complexity report file',
|
||||
'scripts/task-complexity-report.json'
|
||||
)
|
||||
.option('-s, --status <status>', 'Filter by status')
|
||||
.option('--with-subtasks', 'Show subtasks for each task')
|
||||
.action(async (options) => {
|
||||
const tasksPath = options.file;
|
||||
const reportPath = options.report;
|
||||
const statusFilter = options.status;
|
||||
const withSubtasks = options.withSubtasks || false;
|
||||
|
||||
@@ -1046,7 +1094,7 @@ function registerCommands(programInstance) {
|
||||
console.log(chalk.blue('Including subtasks in listing'));
|
||||
}
|
||||
|
||||
await listTasks(tasksPath, statusFilter, withSubtasks);
|
||||
await listTasks(tasksPath, statusFilter, reportPath, withSubtasks);
|
||||
});
|
||||
|
||||
// expand command
|
||||
@@ -1096,12 +1144,6 @@ function registerCommands(programInstance) {
|
||||
{} // Pass empty context for CLI calls
|
||||
// outputFormat defaults to 'text' in expandAllTasks for CLI
|
||||
);
|
||||
// Optional: Display summary from result
|
||||
console.log(chalk.green(`Expansion Summary:`));
|
||||
console.log(chalk.green(` - Attempted: ${result.tasksToExpand}`));
|
||||
console.log(chalk.green(` - Expanded: ${result.expandedCount}`));
|
||||
console.log(chalk.yellow(` - Skipped: ${result.skippedCount}`));
|
||||
console.log(chalk.red(` - Failed: ${result.failedCount}`));
|
||||
} catch (error) {
|
||||
console.error(
|
||||
chalk.red(`Error expanding all tasks: ${error.message}`)
|
||||
@@ -1231,7 +1273,7 @@ function registerCommands(programInstance) {
|
||||
// add-task command
|
||||
programInstance
|
||||
.command('add-task')
|
||||
.description('Add a new task using AI or manual input')
|
||||
.description('Add a new task using AI, optionally providing manual details')
|
||||
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||
.option(
|
||||
'-p, --prompt <prompt>',
|
||||
@@ -1246,10 +1288,6 @@ function registerCommands(programInstance) {
|
||||
'--details <details>',
|
||||
'Implementation details (for manual task creation)'
|
||||
)
|
||||
.option(
|
||||
'--test-strategy <testStrategy>',
|
||||
'Test strategy (for manual task creation)'
|
||||
)
|
||||
.option(
|
||||
'--dependencies <dependencies>',
|
||||
'Comma-separated list of task IDs this task depends on'
|
||||
@@ -1276,16 +1314,14 @@ function registerCommands(programInstance) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
try {
|
||||
// Prepare dependencies if provided
|
||||
let dependencies = [];
|
||||
if (options.dependencies) {
|
||||
dependencies = options.dependencies
|
||||
.split(',')
|
||||
.map((id) => parseInt(id.trim(), 10));
|
||||
}
|
||||
const tasksPath =
|
||||
options.file ||
|
||||
path.join(findProjectRoot() || '.', 'tasks', 'tasks.json') || // Ensure tasksPath is also relative to a found root or current dir
|
||||
'tasks/tasks.json';
|
||||
|
||||
// Correctly determine projectRoot
|
||||
const projectRoot = findProjectRoot();
|
||||
|
||||
// Create manual task data if title and description are provided
|
||||
let manualTaskData = null;
|
||||
if (isManualCreation) {
|
||||
manualTaskData = {
|
||||
@@ -1294,56 +1330,54 @@ function registerCommands(programInstance) {
|
||||
details: options.details || '',
|
||||
testStrategy: options.testStrategy || ''
|
||||
};
|
||||
|
||||
// Restore specific logging for manual creation
|
||||
console.log(
|
||||
chalk.blue(`Creating task manually with title: "${options.title}"`)
|
||||
);
|
||||
if (dependencies.length > 0) {
|
||||
console.log(
|
||||
chalk.blue(`Dependencies: [${dependencies.join(', ')}]`)
|
||||
);
|
||||
}
|
||||
if (options.priority) {
|
||||
console.log(chalk.blue(`Priority: ${options.priority}`));
|
||||
}
|
||||
} else {
|
||||
// Restore specific logging for AI creation
|
||||
console.log(
|
||||
chalk.blue(
|
||||
`Creating task with AI using prompt: "${options.prompt}"`
|
||||
)
|
||||
chalk.blue(`Creating task with AI using prompt: "${options.prompt}"`)
|
||||
);
|
||||
if (dependencies.length > 0) {
|
||||
}
|
||||
|
||||
// Log dependencies and priority if provided (restored)
|
||||
const dependenciesArray = options.dependencies
|
||||
? options.dependencies.split(',').map((id) => id.trim())
|
||||
: [];
|
||||
if (dependenciesArray.length > 0) {
|
||||
console.log(
|
||||
chalk.blue(`Dependencies: [${dependencies.join(', ')}]`)
|
||||
chalk.blue(`Dependencies: [${dependenciesArray.join(', ')}]`)
|
||||
);
|
||||
}
|
||||
if (options.priority) {
|
||||
console.log(chalk.blue(`Priority: ${options.priority}`));
|
||||
}
|
||||
}
|
||||
|
||||
// Pass mcpLog and session for MCP mode
|
||||
const newTaskId = await addTask(
|
||||
options.file,
|
||||
options.prompt, // Pass prompt (will be null/undefined if not provided)
|
||||
dependencies,
|
||||
const context = {
|
||||
projectRoot,
|
||||
commandName: 'add-task',
|
||||
outputType: 'cli'
|
||||
};
|
||||
|
||||
try {
|
||||
const { newTaskId, telemetryData } = await addTask(
|
||||
tasksPath,
|
||||
options.prompt,
|
||||
dependenciesArray,
|
||||
options.priority,
|
||||
{
|
||||
// For CLI, session context isn't directly available like MCP
|
||||
// We don't need to pass session here for CLI API key resolution
|
||||
// as dotenv loads .env, and utils.resolveEnvVariable checks process.env
|
||||
},
|
||||
'text', // outputFormat
|
||||
manualTaskData, // Pass the potentially created manualTaskData object
|
||||
options.research || false // Pass the research flag value
|
||||
context,
|
||||
'text',
|
||||
manualTaskData,
|
||||
options.research
|
||||
);
|
||||
|
||||
console.log(chalk.green(`✓ Added new task #${newTaskId}`));
|
||||
console.log(chalk.gray('Next: Complete this task or add more tasks'));
|
||||
// addTask handles detailed CLI success logging AND telemetry display when outputFormat is 'text'
|
||||
// No need to call displayAiUsageSummary here anymore.
|
||||
} catch (error) {
|
||||
console.error(chalk.red(`Error adding task: ${error.message}`));
|
||||
if (error.stack && getDebugFlag()) {
|
||||
console.error(error.stack);
|
||||
if (error.details) {
|
||||
console.error(chalk.red(error.details));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
@@ -1356,9 +1390,15 @@ function registerCommands(programInstance) {
|
||||
`Show the next task to work on based on dependencies and status${chalk.reset('')}`
|
||||
)
|
||||
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||
.option(
|
||||
'-r, --report <report>',
|
||||
'Path to the complexity report file',
|
||||
'scripts/task-complexity-report.json'
|
||||
)
|
||||
.action(async (options) => {
|
||||
const tasksPath = options.file;
|
||||
await displayNextTask(tasksPath);
|
||||
const reportPath = options.report;
|
||||
await displayNextTask(tasksPath, reportPath);
|
||||
});
|
||||
|
||||
// show command
|
||||
@@ -1371,6 +1411,11 @@ function registerCommands(programInstance) {
|
||||
.option('-i, --id <id>', 'Task ID to show')
|
||||
.option('-s, --status <status>', 'Filter subtasks by status') // ADDED status option
|
||||
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||
.option(
|
||||
'-r, --report <report>',
|
||||
'Path to the complexity report file',
|
||||
'scripts/task-complexity-report.json'
|
||||
)
|
||||
.action(async (taskId, options) => {
|
||||
const idArg = taskId || options.id;
|
||||
const statusFilter = options.status; // ADDED: Capture status filter
|
||||
@@ -1381,8 +1426,9 @@ function registerCommands(programInstance) {
|
||||
}
|
||||
|
||||
const tasksPath = options.file;
|
||||
const reportPath = options.report;
|
||||
// PASS statusFilter to the display function
|
||||
await displayTaskById(tasksPath, idArg, statusFilter);
|
||||
await displayTaskById(tasksPath, idArg, reportPath, statusFilter);
|
||||
});
|
||||
|
||||
// add-dependency command
|
||||
@@ -1631,6 +1677,7 @@ function registerCommands(programInstance) {
}
} catch (error) {
console.error(chalk.red(`Error: ${error.message}`));
showAddSubtaskHelp();
process.exit(1);
}
})
@@ -2037,7 +2084,7 @@ function registerCommands(programInstance) {
);

// Exit with error if any removals failed
if (successfulRemovals.length === 0) {
if (result.removedTasks.length === 0) {
process.exit(1);
}
}
@@ -2334,14 +2381,7 @@ function setupCLI() {
return 'unknown'; // Default fallback if package.json fails
})
.helpOption('-h, --help', 'Display help')
.addHelpCommand(false) // Disable default help command
.on('--help', () => {
displayHelp(); // Use your custom help display instead
})
.on('-h', () => {
displayHelp();
process.exit(0);
});
.addHelpCommand(false); // Disable default help command

// Modify the help option to use your custom display
programInstance.helpInformation = () => {
@@ -2361,28 +2401,7 @@ function setupCLI() {
*/
async function checkForUpdate() {
// Get current version from package.json ONLY
let currentVersion = 'unknown'; // Initialize with a default
try {
// Try to get the version from the installed package (if applicable) or current dir
let packageJsonPath = path.join(
process.cwd(),
'node_modules',
'task-master-ai',
'package.json'
);
// Fallback to current directory package.json if not found in node_modules
if (!fs.existsSync(packageJsonPath)) {
packageJsonPath = path.join(process.cwd(), 'package.json');
}

if (fs.existsSync(packageJsonPath)) {
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
currentVersion = packageJson.version;
}
} catch (error) {
// Silently fail and use default
log('debug', `Error reading current package version: ${error.message}`);
}
const currentVersion = getTaskMasterVersion();

return new Promise((resolve) => {
// Get the latest version from npm registry

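The `checkForUpdate` hunk collapses the inline package.json probing into a single `getTaskMasterVersion()` call. The helper itself is not part of this diff; a plausible sketch of what it centralizes, assuming the same resolution order the deleted code used (installed package first, then the local package.json):

import fs from 'fs';
import path from 'path';

// Hypothetical sketch of the centralized version lookup. Returns 'unknown'
// rather than throwing, mirroring the old inline fallback behavior.
function getTaskMasterVersion() {
  const candidates = [
    path.join(process.cwd(), 'node_modules', 'task-master-ai', 'package.json'),
    path.join(process.cwd(), 'package.json')
  ];
  for (const candidate of candidates) {
    try {
      if (fs.existsSync(candidate)) {
        return JSON.parse(fs.readFileSync(candidate, 'utf8')).version ?? 'unknown';
      }
    } catch {
      // Ignore unreadable or malformed files and keep falling back.
    }
  }
  return 'unknown';
}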
@@ -669,6 +669,34 @@ function isConfigFilePresent(explicitRoot = null) {
return fs.existsSync(configPath);
}

/**
* Gets the user ID from the configuration.
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {string|null} The user ID or null if not found.
*/
function getUserId(explicitRoot = null) {
const config = getConfig(explicitRoot);
if (!config.global) {
config.global = {}; // Ensure global object exists
}
if (!config.global.userId) {
config.global.userId = '1234567890';
// Attempt to write the updated config.
// It's important that writeConfig correctly resolves the path
// using explicitRoot, similar to how getConfig does.
const success = writeConfig(config, explicitRoot);
if (!success) {
// Log an error or handle the failure to write,
// though for now, we'll proceed with the in-memory default.
log(
'warning',
'Failed to write updated configuration with new userId. Please let the developers know.'
);
}
}
return config.global.userId;
}

/**
* Gets a list of all provider names defined in the MODEL_MAP.
* @returns {string[]} An array of provider names.
@@ -677,12 +705,19 @@ function getAllProviders() {
return Object.keys(MODEL_MAP || {});
}

function getBaseUrlForRole(role, explicitRoot = null) {
const roleConfig = getModelConfigForRole(role, explicitRoot);
return roleConfig && typeof roleConfig.baseUrl === 'string'
? roleConfig.baseUrl
: undefined;
}

export {
// Core config access
getConfig,
writeConfig,
ConfigurationError, // Export custom error type
isConfigFilePresent, // Add the new function export
ConfigurationError,
isConfigFilePresent,

// Validation
validateProvider,
@@ -704,6 +739,7 @@ export {
getFallbackModelId,
getFallbackMaxTokens,
getFallbackTemperature,
getBaseUrlForRole,

// Global setting getters (No env var overrides)
getLogLevel,
@@ -714,7 +750,7 @@ export {
getProjectName,
getOllamaBaseUrl,
getParametersForRole,

getUserId,
// API Key Checkers (still relevant)
isApiKeySet,
getMcpApiKeyStatus,

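The new `getUserId` is a read-through default: when `global.userId` is missing it writes one back through `writeConfig`, and a failed write only logs a warning while the in-memory value is still returned. The same pattern in isolation (an in-memory object stands in for the real config file):

// Minimal sketch of the lazy-default-and-persist pattern used by getUserId.
// `store` stands in for the config normally handled by getConfig/writeConfig.
const store = { global: {} };

function getOrCreateUserId(config) {
  if (!config.global) {
    config.global = {}; // Ensure global object exists, as in the diff
  }
  if (!config.global.userId) {
    config.global.userId = '1234567890'; // placeholder default from the diff
    // The real code calls writeConfig here and only warns on failure,
    // so callers still get the in-memory default either way.
  }
  return config.global.userId;
}

console.log(getOrCreateUserId(store)); // '1234567890'
console.log(getOrCreateUserId(store)); // stable on subsequent calls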
@@ -195,7 +195,7 @@ async function addDependency(tasksPath, taskId, dependencyId) {
}

// Generate updated task files
await generateTaskFiles(tasksPath, 'tasks');
await generateTaskFiles(tasksPath, path.dirname(tasksPath));

log('info', 'Task files regenerated with updated dependencies.');
} else {
@@ -334,7 +334,7 @@ async function removeDependency(tasksPath, taskId, dependencyId) {
}

// Regenerate task files
await generateTaskFiles(tasksPath, 'tasks');
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}

/**

@@ -99,34 +99,39 @@
],
"google": [
{
"id": "gemini-2.5-pro-exp-03-25",
"id": "gemini-2.5-pro-preview-05-06",
"swe_score": 0.638,
"cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"]
"allowed_roles": ["main", "fallback"],
"max_tokens": 1048000
},
{
"id": "gemini-2.5-pro-preview-03-25",
"swe_score": 0.638,
"cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"],
"max_tokens": 1048000
},
{
"id": "gemini-2.5-flash-preview-04-17",
"swe_score": 0,
"cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"]
"allowed_roles": ["main", "fallback"],
"max_tokens": 1048000
},
{
"id": "gemini-2.0-flash",
"swe_score": 0.754,
"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
"allowed_roles": ["main", "fallback"]
"allowed_roles": ["main", "fallback"],
"max_tokens": 1048000
},
{
"id": "gemini-2.0-flash-thinking-experimental",
"swe_score": 0.754,
"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "gemini-2.0-pro",
"id": "gemini-2.0-flash-lite",
"swe_score": 0,
"cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"]
"allowed_roles": ["main", "fallback"],
"max_tokens": 1048000
}
],
"perplexity": [

@@ -23,7 +23,7 @@ import updateSubtaskById from './task-manager/update-subtask-by-id.js';
import removeTask from './task-manager/remove-task.js';
import taskExists from './task-manager/task-exists.js';
import isTaskDependentOn from './task-manager/is-task-dependent.js';

import { readComplexityReport } from './utils.js';
// Export task manager functions
export {
parsePRD,
@@ -45,5 +45,6 @@ export {
removeTask,
findTaskById,
taskExists,
isTaskDependentOn
isTaskDependentOn,
readComplexityReport
};

@@ -8,7 +8,8 @@ import {
displayBanner,
getStatusWithColor,
startLoadingIndicator,
stopLoadingIndicator
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';
import { readJSON, writeJSON, log as consoleLog, truncate } from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
@@ -44,7 +45,9 @@ const AiTaskDataSchema = z.object({
* @param {boolean} useResearch - Whether to use the research model (passed to unified service)
* @param {Object} context - Context object containing session and potentially projectRoot
* @param {string} [context.projectRoot] - Project root path (for MCP/env fallback)
* @returns {number} The new task ID
* @param {string} [context.commandName] - The name of the command being executed (for telemetry)
* @param {string} [context.outputType] - The output type ('cli' or 'mcp', for telemetry)
* @returns {Promise<object>} An object containing newTaskId and telemetryData
*/
async function addTask(
tasksPath,
@@ -56,7 +59,7 @@ async function addTask(
manualTaskData = null,
useResearch = false
) {
const { session, mcpLog, projectRoot } = context;
const { session, mcpLog, projectRoot, commandName, outputType } = context;
const isMCP = !!mcpLog;

// Create a consistent logFn object regardless of context
@@ -78,6 +81,7 @@ async function addTask(
);

let loadingIndicator = null;
let aiServiceResponse = null; // To store the full response from AI service

// Create custom reporter that checks for MCP log
const report = (message, level = 'info') => {
@@ -89,20 +93,6 @@ async function addTask(
};

try {
// Only display banner and UI elements for text output (CLI)
if (outputFormat === 'text') {
displayBanner();

console.log(
boxen(chalk.white.bold(`Creating New Task`), {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 1, bottom: 1 }
})
);
}

// Read the existing tasks
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
@@ -169,7 +159,7 @@ async function addTask(
} else {
report('DEBUG: Taking AI task generation path.', 'debug');
// --- Refactored AI Interaction ---
report('Generating task data with AI...', 'info');
report(`Generating task data with AI with prompt:\n${prompt}`, 'info');

// Create context string for task creation prompt
let contextTasks = '';
@@ -229,29 +219,51 @@ async function addTask(
// Start the loading indicator - only for text mode
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI...`
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI...\n`
);
}

try {
// Determine the service role based on the useResearch flag
const serviceRole = useResearch ? 'research' : 'main';

report('DEBUG: Calling generateObjectService...', 'debug');
// Call the unified AI service
const aiGeneratedTaskData = await generateObjectService({
role: serviceRole, // <-- Use the determined role
session: session, // Pass session for API key resolution
projectRoot: projectRoot, // <<< Pass projectRoot here
schema: AiTaskDataSchema, // Pass the Zod schema
objectName: 'newTaskData', // Name for the object

aiServiceResponse = await generateObjectService({
// Capture the full response
role: serviceRole,
session: session,
projectRoot: projectRoot,
schema: AiTaskDataSchema,
objectName: 'newTaskData',
systemPrompt: systemPrompt,
prompt: userPrompt
prompt: userPrompt,
commandName: commandName || 'add-task', // Use passed commandName or default
outputType: outputType || (isMCP ? 'mcp' : 'cli') // Use passed outputType or derive
});
report('DEBUG: generateObjectService returned successfully.', 'debug');

if (!aiServiceResponse || !aiServiceResponse.mainResult) {
throw new Error(
'AI service did not return the expected object structure.'
);
}

// Prefer mainResult if it looks like a valid task object, otherwise try mainResult.object
if (
aiServiceResponse.mainResult.title &&
aiServiceResponse.mainResult.description
) {
taskData = aiServiceResponse.mainResult;
} else if (
aiServiceResponse.mainResult.object &&
aiServiceResponse.mainResult.object.title &&
aiServiceResponse.mainResult.object.description
) {
taskData = aiServiceResponse.mainResult.object;
} else {
throw new Error('AI service did not return a valid task object.');
}

report('Successfully generated task data from AI.', 'success');
taskData = aiGeneratedTaskData; // Assign the validated object
} catch (error) {
report(
`DEBUG: generateObjectService caught error: ${error.message}`,
@@ -362,11 +374,25 @@ async function addTask(
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
)
);

// Display AI Usage Summary if telemetryData is available
if (
aiServiceResponse &&
aiServiceResponse.telemetryData &&
(outputType === 'cli' || outputType === 'text')
) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
}
}

// Return the new task ID
report(`DEBUG: Returning new task ID: ${newTaskId}`, 'debug');
return newTaskId;
report(
`DEBUG: Returning new task ID: ${newTaskId} and telemetry.`,
'debug'
);
return {
newTaskId: newTaskId,
telemetryData: aiServiceResponse ? aiServiceResponse.telemetryData : null
};
} catch (error) {
// Stop any loading indicator on error
if (loadingIndicator) {

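A recurring shape in these hunks: `generateObjectService` may surface the generated object either directly on `mainResult` or nested under `mainResult.object`, and the code probes both before giving up. That unwrapping, pulled out as a standalone function (the response shapes are inferred from the diff, not from the service's documentation):

// Returns the task payload from either supported response shape, or throws.
function extractTaskData(aiServiceResponse) {
  const main = aiServiceResponse?.mainResult;
  if (!main) {
    throw new Error('AI service did not return the expected object structure.');
  }
  if (main.title && main.description) {
    return main; // shape 1: the payload is mainResult itself
  }
  if (main.object?.title && main.object?.description) {
    return main.object; // shape 2: the payload is nested under .object
  }
  throw new Error('AI service did not return a valid task object.');
}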
@@ -4,7 +4,11 @@ import readline from 'readline';

import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';

import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
import {
startLoadingIndicator,
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';

import { generateTextService } from '../ai-services-unified.js';

@@ -196,35 +200,32 @@ async function analyzeTaskComplexity(options, context = {}) {
}

const prompt = generateInternalComplexityAnalysisPrompt(tasksData);
// System prompt remains simple for text generation
const systemPrompt =
'You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.';

let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator('Calling AI service...');
loadingIndicator = startLoadingIndicator(
`${useResearch ? 'Researching' : 'Analyzing'} the complexity of your tasks with AI...\n`
);
}

let fullResponse = ''; // To store the raw text response
let aiServiceResponse = null;
let complexityAnalysis = null;

try {
const role = useResearch ? 'research' : 'main';
reportLog(`Using AI service with role: ${role}`, 'info');

fullResponse = await generateTextService({
aiServiceResponse = await generateTextService({
prompt,
systemPrompt,
role,
session,
projectRoot
projectRoot,
commandName: 'analyze-complexity',
outputType: mcpLog ? 'mcp' : 'cli'
});

reportLog(
'Successfully received text response via AI service',
'success'
);

// --- Stop Loading Indicator (Unchanged) ---
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
loadingIndicator = null;
@@ -236,26 +237,18 @@ async function analyzeTaskComplexity(options, context = {}) {
chalk.green('AI service call complete. Parsing response...')
);
}
// --- End Stop Loading Indicator ---

// --- Re-introduce Manual JSON Parsing & Cleanup ---
reportLog(`Parsing complexity analysis from text response...`, 'info');
let complexityAnalysis;
try {
let cleanedResponse = fullResponse;
// Basic trim first
let cleanedResponse = aiServiceResponse.mainResult;
cleanedResponse = cleanedResponse.trim();

// Remove potential markdown code block fences
const codeBlockMatch = cleanedResponse.match(
/```(?:json)?\s*([\s\S]*?)\s*```/
);
if (codeBlockMatch) {
cleanedResponse = codeBlockMatch[1].trim(); // Trim content inside block
reportLog('Extracted JSON from code block', 'info');
cleanedResponse = codeBlockMatch[1].trim();
} else {
// If no code block, ensure it starts with '[' and ends with ']'
// This is less robust but a common fallback
const firstBracket = cleanedResponse.indexOf('[');
const lastBracket = cleanedResponse.lastIndexOf(']');
if (firstBracket !== -1 && lastBracket > firstBracket) {
@@ -263,13 +256,11 @@ async function analyzeTaskComplexity(options, context = {}) {
firstBracket,
lastBracket + 1
);
reportLog('Extracted content between first [ and last ]', 'info');
} else {
reportLog(
'Warning: Response does not appear to be a JSON array.',
'warn'
);
// Keep going, maybe JSON.parse can handle it or will fail informatively
}
}

@@ -283,48 +274,23 @@ async function analyzeTaskComplexity(options, context = {}) {
);
}

try {
complexityAnalysis = JSON.parse(cleanedResponse);
} catch (jsonError) {
} catch (parseError) {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
reportLog(
'Initial JSON parsing failed. Raw response might be malformed.',
'error'
);
reportLog(`Original JSON Error: ${jsonError.message}`, 'error');
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(chalk.red('--- Start Raw Malformed Response ---'));
console.log(chalk.gray(fullResponse));
console.log(chalk.red('--- End Raw Malformed Response ---'));
}
// Re-throw the specific JSON parsing error
throw new Error(
`Failed to parse JSON response: ${jsonError.message}`
);
}

// Ensure it's an array after parsing
if (!Array.isArray(complexityAnalysis)) {
throw new Error('Parsed response is not a valid JSON array.');
}
} catch (error) {
// Catch errors specifically from the parsing/cleanup block
if (loadingIndicator) stopLoadingIndicator(loadingIndicator); // Ensure indicator stops
reportLog(
`Error parsing complexity analysis JSON: ${error.message}`,
`Error parsing complexity analysis JSON: ${parseError.message}`,
'error'
);
if (outputFormat === 'text') {
console.error(
chalk.red(
`Error parsing complexity analysis JSON: ${error.message}`
`Error parsing complexity analysis JSON: ${parseError.message}`
)
);
}
throw error; // Re-throw parsing error
throw parseError;
}
// --- End Manual JSON Parsing & Cleanup ---

// --- Post-processing (Missing Task Check) - (Unchanged) ---
const taskIds = tasksData.tasks.map((t) => t.id);
const analysisTaskIds = complexityAnalysis.map((a) => a.taskId);
const missingTaskIds = taskIds.filter(
@@ -359,10 +325,8 @@ async function analyzeTaskComplexity(options, context = {}) {
}
}
}
// --- End Post-processing ---

// --- Report Creation & Writing (Unchanged) ---
const finalReport = {
const report = {
meta: {
generatedAt: new Date().toISOString(),
tasksAnalyzed: tasksData.tasks.length,
@@ -373,15 +337,13 @@ async function analyzeTaskComplexity(options, context = {}) {
complexityAnalysis: complexityAnalysis
};
reportLog(`Writing complexity report to ${outputPath}...`, 'info');
writeJSON(outputPath, finalReport);
writeJSON(outputPath, report);

reportLog(
`Task complexity analysis complete. Report written to ${outputPath}`,
'success'
);
// --- End Report Creation & Writing ---

// --- Display CLI Summary (Unchanged) ---
if (outputFormat === 'text') {
console.log(
chalk.green(
@@ -435,23 +397,28 @@ async function analyzeTaskComplexity(options, context = {}) {
if (getDebugFlag(session)) {
console.debug(
chalk.gray(
`Final analysis object: ${JSON.stringify(finalReport, null, 2)}`
`Final analysis object: ${JSON.stringify(report, null, 2)}`
)
);
}
}
// --- End Display CLI Summary ---

return finalReport;
} catch (error) {
// Catches errors from generateTextService call
if (aiServiceResponse.telemetryData) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
}
}

return {
report: report,
telemetryData: aiServiceResponse?.telemetryData
};
} catch (aiError) {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
reportLog(`Error during AI service call: ${error.message}`, 'error');
reportLog(`Error during AI service call: ${aiError.message}`, 'error');
if (outputFormat === 'text') {
console.error(
chalk.red(`Error during AI service call: ${error.message}`)
chalk.red(`Error during AI service call: ${aiError.message}`)
);
if (error.message.includes('API key')) {
if (aiError.message.includes('API key')) {
console.log(
chalk.yellow(
'\nPlease ensure your API keys are correctly configured in .env or ~/.taskmaster/.env'
@@ -462,10 +429,9 @@ async function analyzeTaskComplexity(options, context = {}) {
);
}
}
throw error; // Re-throw AI service error
throw aiError;
}
} catch (error) {
// Catches general errors (file read, etc.)
reportLog(`Error analyzing task complexity: ${error.message}`, 'error');
if (outputFormat === 'text') {
console.error(

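The parsing block keeps the manual cleanup strategy: strip an optional markdown fence, otherwise slice from the first `[` to the last `]`, then `JSON.parse` and verify the result is an array. The same steps as one small function (a condensed sketch of the logic above, not the exact code):

// Extracts a JSON array from raw model output that may be fenced or padded.
function parseJsonArray(raw) {
  let text = raw.trim();
  const fence = text.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  if (fence) {
    text = fence[1].trim();
  } else {
    // Fallback: slice between the outermost brackets if any exist.
    const first = text.indexOf('[');
    const last = text.lastIndexOf(']');
    if (first !== -1 && last > first) text = text.slice(first, last + 1);
  }
  const parsed = JSON.parse(text); // throws informatively on malformed input
  if (!Array.isArray(parsed)) {
    throw new Error('Parsed response is not a valid JSON array.');
  }
  return parsed;
}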
@@ -3,7 +3,7 @@ import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';

import { log, readJSON, writeJSON, truncate } from '../utils.js';
import { log, readJSON, writeJSON, truncate, isSilentMode } from '../utils.js';
import { displayBanner } from '../ui.js';
import generateTaskFiles from './generate-task-files.js';

@@ -22,6 +22,7 @@ function clearSubtasks(tasksPath, taskIds) {
process.exit(1);
}

if (!isSilentMode()) {
console.log(
boxen(chalk.white.bold('Clearing Subtasks'), {
padding: 1,
@@ -30,6 +31,7 @@ function clearSubtasks(tasksPath, taskIds) {
margin: { top: 1, bottom: 1 }
})
);
}

// Handle multiple task IDs (comma-separated)
const taskIdArray = taskIds.split(',').map((id) => id.trim());
@@ -85,6 +87,7 @@ function clearSubtasks(tasksPath, taskIds) {
writeJSON(tasksPath, data);

// Show summary table
if (!isSilentMode()) {
console.log(
boxen(chalk.white.bold('Subtask Clearing Summary:'), {
padding: { left: 2, right: 2, top: 0, bottom: 0 },
@@ -94,12 +97,14 @@ function clearSubtasks(tasksPath, taskIds) {
})
);
console.log(summaryTable.toString());
}

// Regenerate task files to reflect changes
log('info', 'Regenerating task files...');
generateTaskFiles(tasksPath, path.dirname(tasksPath));

// Success message
if (!isSilentMode()) {
console.log(
boxen(
chalk.green(
@@ -129,7 +134,9 @@ function clearSubtasks(tasksPath, taskIds) {
}
)
);
}
} else {
if (!isSilentMode()) {
console.log(
boxen(chalk.yellow('No subtasks were cleared'), {
padding: 1,
@@ -139,6 +146,7 @@ function clearSubtasks(tasksPath, taskIds) {
})
);
}
}
}

export default clearSubtasks;

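Every console block in `clearSubtasks` is now wrapped in the same `isSilentMode()` guard so MCP callers, which run in silent mode, get no stray stdout corrupting their JSON transport. Reduced to its essentials (a local toggle stands in for the real utils.js flag):

// Sketch of the silent-mode gating applied throughout clearSubtasks.
let silent = false;
const isSilentMode = () => silent;
const setSilentMode = (value) => (silent = value);

function reportCleared(count) {
  if (!isSilentMode()) {
    console.log(`Cleared subtasks from ${count} task(s)`);
  }
}

reportCleared(2); // prints normally in CLI mode
setSilentMode(true);
reportCleared(2); // suppressed: safe for MCP/JSON output paths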
@@ -1,7 +1,14 @@
import { log, readJSON, isSilentMode } from '../utils.js';
import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
import {
startLoadingIndicator,
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';
import expandTask from './expand-task.js';
import { getDebugFlag } from '../config-manager.js';
import { aggregateTelemetry } from '../utils.js';
import chalk from 'chalk';
import boxen from 'boxen';

/**
* Expand all eligible pending or in-progress tasks using the expandTask function.
@@ -14,7 +21,7 @@ import { getDebugFlag } from '../config-manager.js';
* @param {Object} [context.session] - Session object from MCP.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [outputFormat='text'] - Output format ('text' or 'json'). MCP calls should use 'json'.
* @returns {Promise<{success: boolean, expandedCount: number, failedCount: number, skippedCount: number, tasksToExpand: number, message?: string}>} - Result summary.
* @returns {Promise<{success: boolean, expandedCount: number, failedCount: number, skippedCount: number, tasksToExpand: number, telemetryData: Array<Object>}>} - Result summary.
*/
async function expandAllTasks(
tasksPath,
@@ -51,8 +58,8 @@ async function expandAllTasks(
let loadingIndicator = null;
let expandedCount = 0;
let failedCount = 0;
// No skipped count needed now as the filter handles it upfront
let tasksToExpandCount = 0; // Renamed for clarity
let tasksToExpandCount = 0;
const allTelemetryData = []; // Still collect individual data first

if (!isMCPCall && outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
@@ -90,6 +97,7 @@ async function expandAllTasks(
failedCount: 0,
skippedCount: 0,
tasksToExpand: 0,
telemetryData: allTelemetryData,
message: 'No tasks eligible for expansion.'
};
// --- End Fix ---
@@ -97,19 +105,6 @@ async function expandAllTasks(

// Iterate over the already filtered tasks
for (const task of tasksToExpand) {
// --- Remove Redundant Check ---
// The check below is no longer needed as the initial filter handles it
/*
if (task.subtasks && task.subtasks.length > 0 && !force) {
logger.info(
`Skipping task ${task.id}: Already has subtasks. Use --force to overwrite.`
);
skippedCount++;
continue;
}
*/
// --- End Removed Redundant Check ---

// Start indicator for individual task expansion in CLI mode
let taskIndicator = null;
if (!isMCPCall && outputFormat === 'text') {
@@ -117,17 +112,23 @@ async function expandAllTasks(
}

try {
// Call the refactored expandTask function
await expandTask(
// Call the refactored expandTask function AND capture result
const result = await expandTask(
tasksPath,
task.id,
numSubtasks, // Pass numSubtasks, expandTask handles defaults/complexity
numSubtasks,
useResearch,
additionalContext,
context, // Pass the whole context object { session, mcpLog }
force // Pass the force flag down
force
);
expandedCount++;

// Collect individual telemetry data
if (result && result.telemetryData) {
allTelemetryData.push(result.telemetryData);
}

if (taskIndicator) {
stopLoadingIndicator(taskIndicator, `Task ${task.id} expanded.`);
}
@@ -146,18 +147,48 @@ async function expandAllTasks(
}
}

// Log final summary (removed skipped count from message)
// --- AGGREGATION AND DISPLAY ---
logger.info(
`Expansion complete: ${expandedCount} expanded, ${failedCount} failed.`
);

// Return summary (skippedCount is now 0) - Add success: true here as well for consistency
// Aggregate the collected telemetry data
const aggregatedTelemetryData = aggregateTelemetry(
allTelemetryData,
'expand-all-tasks'
);

if (outputFormat === 'text') {
const summaryContent =
`${chalk.white.bold('Expansion Summary:')}\n\n` +
`${chalk.cyan('-')} Attempted: ${chalk.bold(tasksToExpandCount)}\n` +
`${chalk.green('-')} Expanded: ${chalk.bold(expandedCount)}\n` +
// Skipped count is always 0 now due to pre-filtering
`${chalk.gray('-')} Skipped: ${chalk.bold(0)}\n` +
`${chalk.red('-')} Failed: ${chalk.bold(failedCount)}`;

console.log(
boxen(summaryContent, {
padding: 1,
margin: { top: 1 },
borderColor: failedCount > 0 ? 'red' : 'green', // Red if failures, green otherwise
borderStyle: 'round'
})
);
}

if (outputFormat === 'text' && aggregatedTelemetryData) {
displayAiUsageSummary(aggregatedTelemetryData, 'cli');
}

// Return summary including the AGGREGATED telemetry data
return {
success: true, // Indicate overall success
success: true,
expandedCount,
failedCount,
skippedCount: 0,
tasksToExpand: tasksToExpandCount
tasksToExpand: tasksToExpandCount,
telemetryData: aggregatedTelemetryData
};
} catch (error) {
if (loadingIndicator)

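`expandAllTasks` now collects each `expandTask` result's `telemetryData` and rolls the array up with `aggregateTelemetry` before display and return. That utility is imported from utils.js but not shown in this diff; a plausible reduction over the usage fields the summaries display (the field names here are assumptions):

// Hypothetical sketch of aggregating per-call telemetry into one summary.
// inputTokens/outputTokens/totalCost are assumed field names, not confirmed.
function aggregateTelemetry(entries, commandName) {
  if (!Array.isArray(entries) || entries.length === 0) return null;
  return entries.reduce(
    (acc, t) => ({
      ...acc,
      inputTokens: acc.inputTokens + (t.inputTokens ?? 0),
      outputTokens: acc.outputTokens + (t.outputTokens ?? 0),
      totalCost: acc.totalCost + (t.totalCost ?? 0)
    }),
    { commandName, inputTokens: 0, outputTokens: 0, totalCost: 0 }
  );
}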
@@ -4,7 +4,11 @@ import { z } from 'zod';

import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';

import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
import {
startLoadingIndicator,
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';

import { generateTextService } from '../ai-services-unified.js';

@@ -142,7 +146,7 @@ function generateResearchUserPrompt(
"id": <number>, // Sequential ID starting from ${nextSubtaskId}
"title": "<string>",
"description": "<string>",
"dependencies": [<number>], // e.g., [${nextSubtaskId + 1}]
"dependencies": [<number>], // e.g., [${nextSubtaskId + 1}]. If no dependencies, use an empty array [].
"details": "<string>",
"testStrategy": "<string>" // Optional
},
@@ -162,6 +166,8 @@ ${contextPrompt}
CRITICAL: Respond ONLY with a valid JSON object containing a single key "subtasks". The value must be an array of the generated subtasks, strictly matching this structure:
${schemaDescription}

Important: For the 'dependencies' field, if a subtask has no dependencies, you MUST use an empty array, for example: "dependencies": []. Do not use null or omit the field.

Do not include ANY explanatory text, markdown, or code block markers. Just the JSON object.`;
}

@@ -182,77 +188,153 @@ function parseSubtasksFromText(
parentTaskId,
logger
) {
logger.info('Attempting to parse subtasks object from text response...');
if (typeof text !== 'string') {
logger.error(
`AI response text is not a string. Received type: ${typeof text}, Value: ${text}`
);
throw new Error('AI response text is not a string.');
}

if (!text || text.trim() === '') {
throw new Error('AI response text is empty.');
throw new Error('AI response text is empty after trimming.');
}

let cleanedResponse = text.trim();
const originalResponseForDebug = cleanedResponse;
const originalTrimmedResponse = text.trim(); // Store the original trimmed response
let jsonToParse = originalTrimmedResponse; // Initialize jsonToParse with it

// 1. Extract from Markdown code block first
const codeBlockMatch = cleanedResponse.match(
/```(?:json)?\s*([\s\S]*?)\s*```/
logger.debug(
`Original AI Response for parsing (full length: ${jsonToParse.length}): ${jsonToParse.substring(0, 1000)}...`
);
if (codeBlockMatch) {
cleanedResponse = codeBlockMatch[1].trim();
logger.info('Extracted JSON content from Markdown code block.');
} else {
// 2. If no code block, find first '{' and last '}' for the object
const firstBrace = cleanedResponse.indexOf('{');
const lastBrace = cleanedResponse.lastIndexOf('}');
if (firstBrace !== -1 && lastBrace > firstBrace) {
cleanedResponse = cleanedResponse.substring(firstBrace, lastBrace + 1);
logger.info('Extracted content between first { and last }.');
} else {
logger.warn(
'Response does not appear to contain a JSON object structure. Parsing raw response.'

// --- Pre-emptive cleanup for known AI JSON issues ---
// Fix for "dependencies": , or "dependencies":,
if (jsonToParse.includes('"dependencies":')) {
const malformedPattern = /"dependencies":\s*,/g;
if (malformedPattern.test(jsonToParse)) {
logger.warn('Attempting to fix malformed "dependencies": , issue.');
jsonToParse = jsonToParse.replace(
malformedPattern,
'"dependencies": [],'
);
logger.debug(
`JSON after fixing "dependencies": ${jsonToParse.substring(0, 500)}...`
);
}
}
// --- End pre-emptive cleanup ---

// 3. Attempt to parse the object
let parsedObject;
let primaryParseAttemptFailed = false;

// --- Attempt 1: Simple Parse (with optional Markdown cleanup) ---
logger.debug('Attempting simple parse...');
try {
parsedObject = JSON.parse(cleanedResponse);
} catch (parseError) {
logger.error(`Failed to parse JSON object: ${parseError.message}`);
logger.error(
`Problematic JSON string (first 500 chars): ${cleanedResponse.substring(0, 500)}`
);
logger.error(
`Original Raw Response (first 500 chars): ${originalResponseForDebug.substring(0, 500)}`
);
throw new Error(
`Failed to parse JSON response object: ${parseError.message}`
// Check for markdown code block
const codeBlockMatch = jsonToParse.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
let contentToParseDirectly = jsonToParse;
if (codeBlockMatch && codeBlockMatch[1]) {
contentToParseDirectly = codeBlockMatch[1].trim();
logger.debug('Simple parse: Extracted content from markdown code block.');
} else {
logger.debug(
'Simple parse: No markdown code block found, using trimmed original.'
);
}

// 4. Validate the object structure and extract the subtasks array
parsedObject = JSON.parse(contentToParseDirectly);
logger.debug('Simple parse successful!');

// Quick check if it looks like our target object
if (
!parsedObject ||
typeof parsedObject !== 'object' ||
!Array.isArray(parsedObject.subtasks)
) {
logger.warn(
'Simple parse succeeded, but result is not the expected {"subtasks": []} structure. Will proceed to advanced extraction.'
);
primaryParseAttemptFailed = true;
parsedObject = null; // Reset parsedObject so we enter the advanced logic
}
// If it IS the correct structure, we'll skip advanced extraction.
} catch (e) {
logger.warn(
`Simple parse failed: ${e.message}. Proceeding to advanced extraction logic.`
);
primaryParseAttemptFailed = true;
// jsonToParse is already originalTrimmedResponse if simple parse failed before modifying it for markdown
}

// --- Attempt 2: Advanced Extraction (if simple parse failed or produced wrong structure) ---
if (primaryParseAttemptFailed || !parsedObject) {
// Ensure we try advanced if simple parse gave wrong structure
logger.debug('Attempting advanced extraction logic...');
// Reset jsonToParse to the original full trimmed response for advanced logic
jsonToParse = originalTrimmedResponse;

// (Insert the more complex extraction logic here - the one we worked on with:
// - targetPattern = '{"subtasks":';
// - careful brace counting for that targetPattern
// - fallbacks to last '{' and '}' if targetPattern logic fails)
// This was the logic from my previous message. Let's assume it's here.
// This block should ultimately set `jsonToParse` to the best candidate string.

// Example snippet of that advanced logic's start:
const targetPattern = '{"subtasks":';
const patternStartIndex = jsonToParse.indexOf(targetPattern);

if (patternStartIndex !== -1) {
let openBraces = 0;
let firstBraceFound = false;
let extractedJsonBlock = '';
// ... (loop for brace counting as before) ...
// ... (if successful, jsonToParse = extractedJsonBlock) ...
// ... (if that fails, fallbacks as before) ...
} else {
// ... (fallback to last '{' and '}' if targetPattern not found) ...
}
// End of advanced logic excerpt

logger.debug(
`Advanced extraction: JSON string that will be parsed: ${jsonToParse.substring(0, 500)}...`
);
try {
parsedObject = JSON.parse(jsonToParse);
logger.debug('Advanced extraction parse successful!');
} catch (parseError) {
logger.error(
`Advanced extraction: Failed to parse JSON object: ${parseError.message}`
);
logger.error(
`Advanced extraction: Problematic JSON string for parse (first 500 chars): ${jsonToParse.substring(0, 500)}`
);
throw new Error( // Re-throw a more specific error if advanced also fails
`Failed to parse JSON response object after both simple and advanced attempts: ${parseError.message}`
);
}
}

// --- Validation (applies to successfully parsedObject from either attempt) ---
if (
!parsedObject ||
typeof parsedObject !== 'object' ||
!Array.isArray(parsedObject.subtasks)
) {
logger.error(
`Parsed content is not an object or missing 'subtasks' array. Content: ${JSON.stringify(parsedObject).substring(0, 200)}`
`Final parsed content is not an object or missing 'subtasks' array. Content: ${JSON.stringify(parsedObject).substring(0, 200)}`
);
throw new Error(
'Parsed AI response is not a valid object containing a "subtasks" array.'
'Parsed AI response is not a valid object containing a "subtasks" array after all attempts.'
);
}
const parsedSubtasks = parsedObject.subtasks; // Extract the array
const parsedSubtasks = parsedObject.subtasks;

logger.info(
`Successfully parsed ${parsedSubtasks.length} potential subtasks from the object.`
);
if (expectedCount && parsedSubtasks.length !== expectedCount) {
logger.warn(
`Expected ${expectedCount} subtasks, but parsed ${parsedSubtasks.length}.`
);
}

// 5. Validate and Normalize each subtask using Zod schema
let currentId = startId;
const validatedSubtasks = [];
const validationErrors = [];
@@ -260,22 +342,21 @@ function parseSubtasksFromText(
for (const rawSubtask of parsedSubtasks) {
const correctedSubtask = {
...rawSubtask,
id: currentId, // Enforce sequential ID
id: currentId,
dependencies: Array.isArray(rawSubtask.dependencies)
? rawSubtask.dependencies
.map((dep) => (typeof dep === 'string' ? parseInt(dep, 10) : dep))
.filter(
(depId) => !isNaN(depId) && depId >= startId && depId < currentId
) // Ensure deps are numbers, valid range
)
: [],
status: 'pending' // Enforce pending status
// parentTaskId can be added if needed: parentTaskId: parentTaskId
status: 'pending'
};

const result = subtaskSchema.safeParse(correctedSubtask);

if (result.success) {
validatedSubtasks.push(result.data); // Add the validated data
validatedSubtasks.push(result.data);
} else {
logger.warn(
`Subtask validation failed for raw data: ${JSON.stringify(rawSubtask).substring(0, 100)}...`
@@ -285,18 +366,14 @@ function parseSubtasksFromText(
logger.warn(errorMessage);
validationErrors.push(`Subtask ${currentId}: ${errorMessage}`);
});
// Optionally, decide whether to include partially valid tasks or skip them
// For now, we'll skip invalid ones
}
currentId++; // Increment ID for the next *potential* subtask
currentId++;
}

if (validationErrors.length > 0) {
logger.error(
`Found ${validationErrors.length} validation errors in the generated subtasks.`
);
// Optionally throw an error here if strict validation is required
// throw new Error(`Subtask validation failed:\n${validationErrors.join('\n')}`);
logger.warn('Proceeding with only the successfully validated subtasks.');
}

@@ -305,8 +382,6 @@ function parseSubtasksFromText(
'AI response contained potential subtasks, but none passed validation.'
);
}

// Ensure we don't return more than expected, preferring validated ones
return validatedSubtasks.slice(0, expectedCount || validatedSubtasks.length);
}

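The rewritten `parseSubtasksFromText` is a two-stage pipeline: a cheap direct `JSON.parse` (after optional fence stripping), then, only when that fails or yields the wrong shape, a targeted search for the `{"subtasks":` block. The brace-counting core that the comments above only excerpt might look like this (a sketch filling in the elided loop; it deliberately ignores braces inside string values, which a hardened version would track):

// Given text containing `{"subtasks": ...}` somewhere, slice out the balanced
// JSON object starting at that pattern by counting braces.
function extractSubtasksObject(text) {
  const start = text.indexOf('{"subtasks":');
  if (start === -1) return null;
  let depth = 0;
  for (let i = start; i < text.length; i++) {
    if (text[i] === '{') depth++;
    else if (text[i] === '}') {
      depth--;
      if (depth === 0) return text.slice(start, i + 1); // balanced block found
    }
  }
  return null; // braces never balanced: caller falls back to last '{'/'}'
}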
@@ -336,9 +411,13 @@ async function expandTask(
context = {},
force = false
) {
const { session, mcpLog } = context;
const { session, mcpLog, projectRoot: contextProjectRoot } = context;
const outputFormat = mcpLog ? 'json' : 'text';

// Determine projectRoot: Use from context if available, otherwise derive from tasksPath
const projectRoot =
contextProjectRoot || path.dirname(path.dirname(tasksPath));

// Use mcpLog if available, otherwise use the default console log wrapper
const logger = mcpLog || {
info: (msg) => !isSilentMode() && log('info', msg),
@@ -363,7 +442,9 @@ async function expandTask(
);
if (taskIndex === -1) throw new Error(`Task ${taskId} not found`);
const task = data.tasks[taskIndex];
logger.info(`Expanding task ${taskId}: ${task.title}`);
logger.info(
`Expanding task ${taskId}: ${task.title}${useResearch ? ' with research' : ''}`
);
// --- End Task Loading/Filtering ---

// --- Handle Force Flag: Clear existing subtasks if force=true ---
@@ -381,7 +462,6 @@ async function expandTask(
let complexityReasoningContext = '';
let systemPrompt; // Declare systemPrompt here

const projectRoot = path.dirname(path.dirname(tasksPath));
const complexityReportPath = path.join(
projectRoot,
'scripts/task-complexity-report.json'
@@ -488,28 +568,27 @@ async function expandTask(
let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
`Generating ${finalSubtaskCount} subtasks...`
`Generating ${finalSubtaskCount} subtasks...\n`
);
}

let responseText = '';
let aiServiceResponse = null;

try {
const role = useResearch ? 'research' : 'main';
logger.info(`Using AI service with role: ${role}`);

// Call generateTextService with the determined prompts
responseText = await generateTextService({
// Call generateTextService with the determined prompts and telemetry params
aiServiceResponse = await generateTextService({
prompt: promptContent,
systemPrompt: systemPrompt, // Use the determined system prompt
systemPrompt: systemPrompt,
role,
session,
projectRoot
projectRoot,
commandName: 'expand-task',
outputType: outputFormat
});
logger.info(
'Successfully received text response from AI service',
'success'
);
responseText = aiServiceResponse.mainResult;

// Parse Subtasks
generatedSubtasks = parseSubtasksFromText(
@@ -550,14 +629,23 @@ async function expandTask(
// --- End Change: Append instead of replace ---

data.tasks[taskIndex] = task; // Assign the modified task back
logger.info(`Writing updated tasks to ${tasksPath}`);
writeJSON(tasksPath, data);
logger.info(`Generating individual task files...`);
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
logger.info(`Task files generated.`);
// --- End Task Update & File Writing ---

return task; // Return the updated task object
// Display AI Usage Summary for CLI
if (
outputFormat === 'text' &&
aiServiceResponse &&
aiServiceResponse.telemetryData
) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
}

// Return the updated task object AND telemetry data
return {
task,
telemetryData: aiServiceResponse?.telemetryData
};
} catch (error) {
// Catches errors from file reading, parsing, AI call etc.
logger.error(`Error expanding task ${taskId}: ${error.message}`, 'error');

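Note the breaking return-type change in this file: `expandTask` now resolves to `{ task, telemetryData }` instead of the bare task, which is exactly why `expand-all-tasks` above switched to `const result = await expandTask(...)`. A call site migrates roughly like this (a sketch of the new shape, not code from the repo):

// Destructure the richer result and keep the telemetry for later aggregation.
async function expandAndRecord(expandTask, args, allTelemetryData) {
  const { task, telemetryData } = await expandTask(...args);
  if (telemetryData) {
    allTelemetryData.push(telemetryData);
  }
  return task; // callers that only need the task keep working with this
}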
@@ -1,3 +1,6 @@
import { log } from '../utils.js';
import { addComplexityToTask } from '../utils.js';

/**
* Return the next work item:
* • Prefer an eligible SUBTASK that belongs to any parent task
@@ -15,9 +18,10 @@
* ─ parentId → number (present only when it's a subtask)
*
* @param {Object[]} tasks – full array of top-level tasks, each may contain .subtasks[]
* @param {Object} [complexityReport=null] - Optional complexity report object
* @returns {Object|null} – next work item or null if nothing is eligible
*/
function findNextTask(tasks) {
function findNextTask(tasks, complexityReport = null) {
// ---------- helpers ----------------------------------------------------
const priorityValues = { high: 3, medium: 2, low: 1 };

@@ -91,7 +95,14 @@ function findNextTask(tasks) {
if (aPar !== bPar) return aPar - bPar;
return aSub - bSub;
});
return candidateSubtasks[0];
const nextTask = candidateSubtasks[0];

// Add complexity to the task before returning
if (nextTask && complexityReport) {
addComplexityToTask(nextTask, complexityReport);
}

return nextTask;
}

// ---------- 2) fall back to top-level tasks (original logic) ------------
@@ -116,6 +127,11 @@ function findNextTask(tasks) {
return a.id - b.id;
})[0];

// Add complexity to the task before returning
if (nextTask && complexityReport) {
addComplexityToTask(nextTask, complexityReport);
}

return nextTask;
}

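`findNextTask` now decorates whichever item it picks with data from the complexity report before returning. `addComplexityToTask` lives in utils.js and is not shown in this diff; judging from its call sites (tasks later expose a `complexityScore` read by list-tasks), it plausibly does something like:

// Hypothetical sketch: copy the matching score from the report onto the task.
// The complexityAnalysis entries are keyed by taskId, as seen in the
// analyze-task-complexity hunks; the score field name is an assumption.
function addComplexityToTask(task, complexityReport) {
  const entry = complexityReport?.complexityAnalysis?.find(
    (a) => a.taskId === task.id
  );
  if (entry) {
    task.complexityScore = entry.complexityScore;
  }
}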
@@ -19,8 +19,6 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {
// Determine if we're in MCP mode by checking for mcpLog
const isMcpMode = !!options?.mcpLog;

log('info', `Preparing to regenerate task files in ${tasksPath}`);

const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
@@ -31,7 +29,7 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {
fs.mkdirSync(outputDir, { recursive: true });
}

log('info', `Found ${data.tasks.length} tasks to regenerate`);
log('info', `Preparing to regenerate ${data.tasks.length} task files`);

// Validate and fix dependencies before generating files
log('info', `Validating and fixing dependencies`);

@@ -2,13 +2,20 @@ import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';

import { log, readJSON, truncate } from '../utils.js';
import {
log,
readJSON,
truncate,
readComplexityReport,
addComplexityToTask
} from '../utils.js';
import findNextTask from './find-next-task.js';

import {
displayBanner,
getStatusWithColor,
formatDependenciesWithStatus,
getComplexityWithColor,
createProgressBar
} from '../ui.js';

@@ -16,6 +23,7 @@ import {
* List all tasks
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} statusFilter - Filter by status
* @param {string} reportPath - Path to the complexity report
* @param {boolean} withSubtasks - Whether to show subtasks
* @param {string} outputFormat - Output format (text or json)
* @returns {Object} - Task list result for json format
@@ -23,6 +31,7 @@ import {
function listTasks(
tasksPath,
statusFilter,
reportPath = null,
withSubtasks = false,
outputFormat = 'text'
) {
@@ -37,6 +46,13 @@ function listTasks(
throw new Error(`No valid tasks found in ${tasksPath}`);
}

// Add complexity scores to tasks if report exists
const complexityReport = readComplexityReport(reportPath);
// Apply complexity scores to tasks
if (complexityReport && complexityReport.complexityAnalysis) {
data.tasks.forEach((task) => addComplexityToTask(task, complexityReport));
}

// Filter tasks by status if specified
const filteredTasks =
statusFilter && statusFilter.toLowerCase() !== 'all' // <-- Added check for 'all'
@@ -257,8 +273,8 @@ function listTasks(
);
const avgDependenciesPerTask = totalDependencies / data.tasks.length;

// Find next task to work on
const nextItem = findNextTask(data.tasks);
// Find next task to work on, passing the complexity report
const nextItem = findNextTask(data.tasks, complexityReport);

// Get terminal width - more reliable method
let terminalWidth;
@@ -301,8 +317,11 @@ function listTasks(
`${chalk.blue('•')} ${chalk.white('Avg dependencies per task:')} ${avgDependenciesPerTask.toFixed(1)}\n\n` +
chalk.cyan.bold('Next Task to Work On:') +
'\n' +
`ID: ${chalk.cyan(nextItem ? nextItem.id : 'N/A')} - ${nextItem ? chalk.white.bold(truncate(nextItem.title, 40)) : chalk.yellow('No task available')}\n` +
`Priority: ${nextItem ? chalk.white(nextItem.priority || 'medium') : ''} Dependencies: ${nextItem ? formatDependenciesWithStatus(nextItem.dependencies, data.tasks, true) : ''}`;
`ID: ${chalk.cyan(nextItem ? nextItem.id : 'N/A')} - ${nextItem ? chalk.white.bold(truncate(nextItem.title, 40)) : chalk.yellow('No task available')}
` +
`Priority: ${nextItem ? chalk.white(nextItem.priority || 'medium') : ''} Dependencies: ${nextItem ? formatDependenciesWithStatus(nextItem.dependencies, data.tasks, true, complexityReport) : ''}
` +
`Complexity: ${nextItem && nextItem.complexityScore ? getComplexityWithColor(nextItem.complexityScore) : chalk.gray('N/A')}`;

// Calculate width for side-by-side display
// Box borders, padding take approximately 4 chars on each side
@@ -412,9 +431,16 @@ function listTasks(
// Make dependencies column smaller as requested (-20%)
const depsWidthPct = 20;

const complexityWidthPct = 10;

// Calculate title/description width as remaining space (+20% from dependencies reduction)
const titleWidthPct =
100 - idWidthPct - statusWidthPct - priorityWidthPct - depsWidthPct;
100 -
idWidthPct -
statusWidthPct -
priorityWidthPct -
depsWidthPct -
complexityWidthPct;

// Allow 10 characters for borders and padding
const availableWidth = terminalWidth - 10;
@@ -424,6 +450,9 @@ function listTasks(
const statusWidth = Math.floor(availableWidth * (statusWidthPct / 100));
const priorityWidth = Math.floor(availableWidth * (priorityWidthPct / 100));
const depsWidth = Math.floor(availableWidth * (depsWidthPct / 100));
const complexityWidth = Math.floor(
availableWidth * (complexityWidthPct / 100)
);
const titleWidth = Math.floor(availableWidth * (titleWidthPct / 100));

// Create a table with correct borders and spacing
@@ -433,9 +462,17 @@ function listTasks(
chalk.cyan.bold('Title'),
chalk.cyan.bold('Status'),
chalk.cyan.bold('Priority'),
chalk.cyan.bold('Dependencies')
chalk.cyan.bold('Dependencies'),
chalk.cyan.bold('Complexity')
],
colWidths: [
idWidth,
titleWidth,
statusWidth,
priorityWidth,
depsWidth,
complexityWidth // Added complexity column width
],
colWidths: [idWidth, titleWidth, statusWidth, priorityWidth, depsWidth],
style: {
head: [], // No special styling for header
border: [], // No special styling for border
@@ -454,7 +491,8 @@ function listTasks(
depText = formatDependenciesWithStatus(
task.dependencies,
data.tasks,
true
true,
complexityReport
);
} else {
depText = chalk.gray('None');
@@ -480,7 +518,10 @@ function listTasks(
truncate(cleanTitle, titleWidth - 3),
status,
priorityColor(truncate(task.priority || 'medium', priorityWidth - 2)),
depText // No truncation for dependencies
depText,
task.complexityScore
? getComplexityWithColor(task.complexityScore)
: chalk.gray('N/A')
]);

// Add subtasks if requested
@@ -516,6 +557,8 @@ function listTasks(
// Default to regular task dependency
const depTask = data.tasks.find((t) => t.id === depId);
if (depTask) {
// Add complexity to depTask before checking status
addComplexityToTask(depTask, complexityReport);
const isDone =
depTask.status === 'done' || depTask.status === 'completed';
const isInProgress = depTask.status === 'in-progress';
@@ -541,7 +584,10 @@ function listTasks(
chalk.dim(`└─ ${truncate(subtask.title, titleWidth - 5)}`),
getStatusWithColor(subtask.status, true),
chalk.dim('-'),
subtaskDepText // No truncation for dependencies
subtaskDepText,
subtask.complexityScore
? chalk.gray(`${subtask.complexityScore}`)
: chalk.gray('N/A')
]);
});
}
@@ -597,6 +643,8 @@ function listTasks(
subtasksSection = `\n\n${chalk.white.bold('Subtasks:')}\n`;
subtasksSection += parentTaskForSubtasks.subtasks
.map((subtask) => {
// Add complexity to subtask before display
addComplexityToTask(subtask, complexityReport);
// Using a more simplified format for subtask status display
const status = subtask.status || 'pending';
const statusColors = {
@@ -625,8 +673,8 @@ function listTasks(
'\n\n' +
// Use nextItem.priority, nextItem.status, nextItem.dependencies
`${chalk.white('Priority:')} ${priorityColors[nextItem.priority || 'medium'](nextItem.priority || 'medium')} ${chalk.white('Status:')} ${getStatusWithColor(nextItem.status, true)}\n` +
`${chalk.white('Dependencies:')} ${nextItem.dependencies && nextItem.dependencies.length > 0 ? formatDependenciesWithStatus(nextItem.dependencies, data.tasks, true) : chalk.gray('None')}\n\n` +
// Use nextItem.description (Note: findNextTask doesn't return description, need to fetch original task/subtask for this)
`${chalk.white('Dependencies:')} ${nextItem.dependencies && nextItem.dependencies.length > 0 ? formatDependenciesWithStatus(nextItem.dependencies, data.tasks, true, complexityReport) : chalk.gray('None')}\n\n` +
// Use nextTask.description (Note: findNextTask doesn't return description, need to fetch original task/subtask for this)
// *** Fetching original item for description and details ***
`${chalk.white('Description:')} ${getWorkItemDescription(nextItem, data.tasks)}` +
subtasksSection + // <-- Subtasks are handled above now

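The table changes above make room for a Complexity column by carving 10% out of the title column: every column gets a fixed percentage of the usable terminal width and the title absorbs the remainder. The arithmetic in isolation (only the deps and complexity percentages appear in the diff; the id/status/priority values below are assumptions):

// Sketch of the percentage-based column sizing after adding Complexity.
function computeColumnWidths(terminalWidth) {
  const idWidthPct = 10; // assumed
  const statusWidthPct = 15; // assumed
  const priorityWidthPct = 10; // assumed
  const depsWidthPct = 20; // from the diff
  const complexityWidthPct = 10; // from the diff
  const titleWidthPct =
    100 -
    idWidthPct -
    statusWidthPct -
    priorityWidthPct -
    depsWidthPct -
    complexityWidthPct;
  const availableWidth = terminalWidth - 10; // borders and padding, as above
  const w = (pct) => Math.floor(availableWidth * (pct / 100));
  return {
    id: w(idWidthPct),
    title: w(titleWidthPct),
    status: w(statusWidthPct),
    priority: w(priorityWidthPct),
    deps: w(depsWidthPct),
    complexity: w(complexityWidthPct)
  };
}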
@@ -17,6 +17,7 @@ import {
import { generateObjectService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { displayAiUsageSummary } from '../ui.js';

// Define the Zod schema for a SINGLE task object
const prdSingleTaskSchema = z.object({
@@ -47,8 +48,8 @@ const prdResponseSchema = z.object({
* @param {string} tasksPath - Path to the tasks.json file
* @param {number} numTasks - Number of tasks to generate
* @param {Object} options - Additional options
* @param {boolean} [options.useForce=false] - Whether to overwrite existing tasks.json.
* @param {boolean} [options.useAppend=false] - Append to existing tasks file.
* @param {boolean} [options.force=false] - Whether to overwrite existing tasks.json.
* @param {boolean} [options.append=false] - Append to existing tasks file.
* @param {Object} [options.reportProgress] - Function to report progress (optional, likely unused).
* @param {Object} [options.mcpLog] - MCP logger object (optional).
* @param {Object} [options.session] - Session object from MCP server (optional).
@@ -61,8 +62,8 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
mcpLog,
session,
projectRoot,
useForce = false,
useAppend = false
force = false,
append = false
} = options;
const isMCP = !!mcpLog;
const outputFormat = isMCP ? 'json' : 'text';
@@ -89,17 +90,16 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
}
};

report(
`Parsing PRD file: ${prdPath}, Force: ${useForce}, Append: ${useAppend}`
);
report(`Parsing PRD file: ${prdPath}, Force: ${force}, Append: ${append}`);

let existingTasks = [];
let nextId = 1;
let aiServiceResponse = null;

try {
// Handle file existence and overwrite/append logic
if (fs.existsSync(tasksPath)) {
if (useAppend) {
if (append) {
report(
`Append mode enabled. Reading existing tasks from ${tasksPath}`,
'info'
@@ -121,7 +121,7 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
);
existingTasks = []; // Reset if read fails
}
} else if (!useForce) {
} else if (!force) {
// Not appending and not forcing overwrite
const overwriteError = new Error(
`Output file ${tasksPath} already exists. Use --force to overwrite or --append.`
@@ -206,8 +206,8 @@ Guidelines:
// Call the unified AI service
report('Calling AI service to generate tasks from PRD...', 'info');

// Call generateObjectService with the CORRECT schema
const generatedData = await generateObjectService({
// Call generateObjectService with the CORRECT schema and additional telemetry params
aiServiceResponse = await generateObjectService({
role: 'main',
session: session,
projectRoot: projectRoot,
@@ -215,7 +215,8 @@ Guidelines:
objectName: 'tasks_data',
systemPrompt: systemPrompt,
prompt: userPrompt,
reportProgress
commandName: 'parse-prd',
outputType: isMCP ? 'mcp' : 'cli'
});

// Create the directory if it doesn't exist
@@ -223,12 +224,32 @@ Guidelines:
if (!fs.existsSync(tasksDir)) {
fs.mkdirSync(tasksDir, { recursive: true });
}
logFn.success('Successfully parsed PRD via AI service.'); // Assumes generateObjectService validated
logFn.success('Successfully parsed PRD via AI service.\n');

// Validate and Process Tasks
// const generatedData = aiServiceResponse?.mainResult?.object;

// Robustly get the actual AI-generated object
let generatedData = null;
if (aiServiceResponse?.mainResult) {
if (
typeof aiServiceResponse.mainResult === 'object' &&
aiServiceResponse.mainResult !== null &&
'tasks' in aiServiceResponse.mainResult
) {
// If mainResult itself is the object with a 'tasks' property
generatedData = aiServiceResponse.mainResult;
} else if (
typeof aiServiceResponse.mainResult.object === 'object' &&
aiServiceResponse.mainResult.object !== null &&
'tasks' in aiServiceResponse.mainResult.object
) {
// If mainResult.object is the object with a 'tasks' property
generatedData = aiServiceResponse.mainResult.object;
}
}

if (!generatedData || !Array.isArray(generatedData.tasks)) {
// This error *shouldn't* happen if generateObjectService enforced prdResponseSchema
// But keep it as a safeguard
logFn.error(
`Internal Error: generateObjectService returned unexpected data structure: ${JSON.stringify(generatedData)}`
);
@@ -265,36 +286,27 @@ Guidelines:
);
});

const allTasks = useAppend
const finalTasks = append
? [...existingTasks, ...processedNewTasks]
: processedNewTasks;
const outputData = { tasks: finalTasks };

const finalTaskData = { tasks: allTasks }; // Use the combined list

// Write the tasks to the file
writeJSON(tasksPath, finalTaskData);
// Write the final tasks to the file
writeJSON(tasksPath, outputData);
report(
`Successfully wrote ${allTasks.length} total tasks to ${tasksPath} (${processedNewTasks.length} new).`,
`Successfully ${append ? 'appended' : 'generated'} ${processedNewTasks.length} tasks in ${tasksPath}`,
'success'
);
report(`Tasks saved to: ${tasksPath}`, 'info');

// Generate individual task files
if (reportProgress && mcpLog) {
// Enable silent mode when being called from MCP server
enableSilentMode();
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
disableSilentMode();
} else {
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}
// Generate markdown task files after writing tasks.json
await generateTaskFiles(tasksPath, path.dirname(tasksPath), { mcpLog });

// Only show success boxes for text output (CLI)
// Handle CLI output (e.g., success message)
if (outputFormat === 'text') {
console.log(
boxen(
chalk.green(
`Successfully generated ${processedNewTasks.length} new tasks. Total tasks in ${tasksPath}: ${allTasks.length}`
`Successfully generated ${processedNewTasks.length} new tasks. Total tasks in ${tasksPath}: ${finalTasks.length}`
),
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
)
@@ -314,9 +326,18 @@ Guidelines:
}
)
);

if (aiServiceResponse && aiServiceResponse.telemetryData) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
}
}

return { success: true, tasks: processedNewTasks };
// Return telemetry data
return {
success: true,
tasksPath,
telemetryData: aiServiceResponse?.telemetryData
};
} catch (error) {
report(`Error parsing PRD: ${error.message}`, 'error');

@@ -8,6 +8,10 @@ import { validateTaskDependencies } from '../dependency-manager.js';
import { getDebugFlag } from '../config-manager.js';
import updateSingleTaskStatus from './update-single-task-status.js';
import generateTaskFiles from './generate-task-files.js';
import {
isValidTaskStatus,
TASK_STATUS_OPTIONS
} from '../../../src/constants/task-status.js';

/**
* Set the status of a task
@@ -19,6 +23,11 @@ import generateTaskFiles from './generate-task-files.js';
*/
async function setTaskStatus(tasksPath, taskIdInput, newStatus, options = {}) {
try {
if (!isValidTaskStatus(newStatus)) {
throw new Error(
`Error: Invalid status value: ${newStatus}. Use one of: ${TASK_STATUS_OPTIONS.join(', ')}`
);
}
// Determine if we're in MCP mode by checking for mcpLog
const isMcpMode = !!options?.mcpLog;

@@ -1,6 +1,7 @@
import chalk from 'chalk';

import { log } from '../utils.js';
import { isValidTaskStatus } from '../../../src/constants/task-status.js';

/**
* Update the status of a single task
@@ -17,6 +18,12 @@ async function updateSingleTaskStatus(
data,
showUi = true
) {
if (!isValidTaskStatus(newStatus)) {
throw new Error(
`Error: Invalid status value: ${newStatus}. Use one of: ${TASK_STATUS_OPTIONS.join(', ')}`
);
}

// Check if it's a subtask (e.g., "1.2")
if (taskIdInput.includes('.')) {
const [parentId, subtaskId] = taskIdInput

@@ -3,12 +3,12 @@ import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import { z } from 'zod';

import {
getStatusWithColor,
startLoadingIndicator,
stopLoadingIndicator
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';
import {
log as consoleLog,
@@ -17,10 +17,7 @@ import {
truncate,
isSilentMode
} from '../utils.js';
import {
generateObjectService,
generateTextService
} from '../ai-services-unified.js';
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';

@@ -64,7 +61,6 @@ async function updateSubtaskById(
try {
report('info', `Updating subtask ${subtaskId} with prompt: "${prompt}"`);

// Validate subtask ID format
if (
!subtaskId ||
typeof subtaskId !== 'string' ||
@@ -75,19 +71,16 @@ async function updateSubtaskById(
);
}

// Validate prompt
if (!prompt || typeof prompt !== 'string' || prompt.trim() === '') {
throw new Error(
'Prompt cannot be empty. Please provide context for the subtask update.'
);
}

// Validate tasks file exists
if (!fs.existsSync(tasksPath)) {
throw new Error(`Tasks file not found at path: ${tasksPath}`);
}

// Read the tasks file
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(
@@ -95,7 +88,6 @@ async function updateSubtaskById(
);
}

// Parse parent and subtask IDs
const [parentIdStr, subtaskIdStr] = subtaskId.split('.');
const parentId = parseInt(parentIdStr, 10);
const subtaskIdNum = parseInt(subtaskIdStr, 10);
@@ -111,7 +103,6 @@ async function updateSubtaskById(
);
}

// Find the parent task
const parentTask = data.tasks.find((task) => task.id === parentId);
if (!parentTask) {
throw new Error(
@@ -119,7 +110,6 @@ async function updateSubtaskById(
);
}

// Find the subtask
if (!parentTask.subtasks || !Array.isArray(parentTask.subtasks)) {
throw new Error(`Parent task ${parentId} has no subtasks.`);
}
@@ -135,20 +125,7 @@ async function updateSubtaskById(

const subtask = parentTask.subtasks[subtaskIndex];

const subtaskSchema = z.object({
id: z.number().int().positive(),
title: z.string(),
description: z.string().optional(),
status: z.string(),
dependencies: z.array(z.union([z.string(), z.number()])).optional(),
priority: z.string().optional(),
details: z.string().optional(),
testStrategy: z.string().optional()
});

// Only show UI elements for text output (CLI)
if (outputFormat === 'text') {
// Show the subtask that will be updated
const table = new Table({
head: [
chalk.cyan.bold('ID'),
@@ -157,13 +134,11 @@ async function updateSubtaskById(
],
colWidths: [10, 55, 10]
});

table.push([
subtaskId,
truncate(subtask.title, 52),
getStatusWithColor(subtask.status)
]);

console.log(
boxen(chalk.white.bold(`Updating Subtask #${subtaskId}`), {
padding: 1,
@@ -172,10 +147,7 @@ async function updateSubtaskById(
margin: { top: 1, bottom: 0 }
})
);

console.log(table.toString());

// Start the loading indicator - only for text output
loadingIndicator = startLoadingIndicator(
useResearch
? 'Updating subtask with research...'
@@ -183,15 +155,15 @@ async function updateSubtaskById(
);
}

let parsedAIResponse;
let generatedContentString = '';
let newlyAddedSnippet = '';
let aiServiceResponse = null;

try {
// --- GET PARENT & SIBLING CONTEXT ---
const parentContext = {
id: parentTask.id,
title: parentTask.title
// Avoid sending full parent description/details unless necessary
};

const prevSubtask =
subtaskIndex > 0
? {
@@ -200,7 +172,6 @@ async function updateSubtaskById(
status: parentTask.subtasks[subtaskIndex - 1].status
}
: null;

const nextSubtask =
subtaskIndex < parentTask.subtasks.length - 1
? {
@@ -214,154 +185,123 @@ async function updateSubtaskById(
Parent Task: ${JSON.stringify(parentContext)}
${prevSubtask ? `Previous Subtask: ${JSON.stringify(prevSubtask)}` : ''}
${nextSubtask ? `Next Subtask: ${JSON.stringify(nextSubtask)}` : ''}
Current Subtask Details (for context only):\n${subtask.details || '(No existing details)'}
`;

const systemPrompt = `You are an AI assistant updating a parent task's subtask. This subtask will be part of a larger parent task and will be used to direct AI agents to complete the subtask. Your goal is to GENERATE new, relevant information based on the user's request (which may be high-level, mid-level or low-level) and APPEND it to the existing subtask 'details' field, wrapped in specific XML-like tags with an ISO 8601 timestamp. Intelligently determine the level of detail to include based on the user's request. Some requests are meant simply to update the subtask with some mid-implementation details, while others are meant to update the subtask with a detailed plan or strategy.
const systemPrompt = `You are an AI assistant helping to update a subtask. You will be provided with the subtask's existing details, context about its parent and sibling tasks, and a user request string.

Context Provided:
- The current subtask object.
- Basic info about the parent task (ID, title).
- Basic info about the immediately preceding subtask (ID, title, status), if it exists.
- Basic info about the immediately succeeding subtask (ID, title, status), if it exists.
- A user request string.
Your Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the subtask's details.
Focus *only* on generating the substance of the update.

Guidelines:
1. Analyze the user request considering the provided subtask details AND the context of the parent and sibling tasks.
2. GENERATE new, relevant text content that should be added to the 'details' field. Focus *only* on the substance of the update based on the user request and context. Do NOT add timestamps or any special formatting yourself. Avoid over-engineering the details, provide .
3. Update the 'details' field in the subtask object with the GENERATED text content. It's okay if this overwrites previous details in the object you return, as the calling code will handle the final appending.
4. Return the *entire* updated subtask object (with your generated content in the 'details' field) as a valid JSON object conforming to the provided schema. Do NOT return explanations or markdown formatting.`;
Output Requirements:
1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.
2. Your string response should NOT include any of the subtask's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.
3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.
4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with "Okay, here's the update...").`;

const subtaskDataString = JSON.stringify(subtask, null, 2);
// Updated user prompt including context
const userPrompt = `Task Context:\n${contextString}\nCurrent Subtask:\n${subtaskDataString}\n\nUser Request: "${prompt}"\n\nPlease GENERATE new, relevant text content for the 'details' field based on the user request and the provided context. Return the entire updated subtask object as a valid JSON object matching the schema, with the newly generated text placed in the 'details' field.`;
// --- END UPDATED PROMPTS ---
// Pass the existing subtask.details in the user prompt for the AI's context.
const userPrompt = `Task Context:\n${contextString}\n\nUser Request: "${prompt}"\n\nBased on the User Request and all the Task Context (including current subtask details provided above), what is the new information or text that should be appended to this subtask's details? Return ONLY this new text as a plain string.`;

// Call Unified AI Service using generateObjectService
const role = useResearch ? 'research' : 'main';
report('info', `Using AI object service with role: ${role}`);
report('info', `Using AI text service with role: ${role}`);

parsedAIResponse = await generateObjectService({
aiServiceResponse = await generateTextService({
prompt: userPrompt,
systemPrompt: systemPrompt,
schema: subtaskSchema,
objectName: 'updatedSubtask',
role,
session,
projectRoot,
maxRetries: 2
maxRetries: 2,
commandName: 'update-subtask',
outputType: isMCP ? 'mcp' : 'cli'
});

if (
aiServiceResponse &&
aiServiceResponse.mainResult &&
typeof aiServiceResponse.mainResult === 'string'
) {
generatedContentString = aiServiceResponse.mainResult;
} else {
generatedContentString = '';
report(
'success',
'Successfully received object response from AI service'
'warn',
'AI service response did not contain expected text string.'
);
}

if (outputFormat === 'text' && loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
loadingIndicator = null;
}

if (!parsedAIResponse || typeof parsedAIResponse !== 'object') {
throw new Error('AI did not return a valid object.');
}

report(
'success',
`Successfully generated object using AI role: ${role}.`
);
} catch (aiError) {
report('error', `AI service call failed: ${aiError.message}`);
if (outputFormat === 'text' && loadingIndicator) {
stopLoadingIndicator(loadingIndicator); // Ensure stop on error
stopLoadingIndicator(loadingIndicator);
loadingIndicator = null;
}
throw aiError;
}

// --- TIMESTAMP & FORMATTING LOGIC (Handled Locally) ---
// Extract only the generated content from the AI's response details field.
const generatedContent = parsedAIResponse.details || ''; // Default to empty string
if (generatedContentString && generatedContentString.trim()) {
// Check if the string is not empty
const timestamp = new Date().toISOString();
const formattedBlock = `<info added on ${timestamp}>\n${generatedContentString.trim()}\n</info added on ${timestamp}>`;
newlyAddedSnippet = formattedBlock; // <--- ADD THIS LINE: Store for display

if (generatedContent.trim()) {
// Generate timestamp locally
const timestamp = new Date().toISOString(); // <<< Local Timestamp

// Format the content with XML-like tags and timestamp LOCALLY
const formattedBlock = `<info added on ${timestamp}>\n${generatedContent.trim()}\n</info added on ${timestamp}>`; // <<< Local Formatting

// Append the formatted block to the *original* subtask details
subtask.details =
(subtask.details ? subtask.details + '\n' : '') + formattedBlock; // <<< Local Appending
report(
'info',
'Appended timestamped, formatted block with AI-generated content to subtask.details.'
);
(subtask.details ? subtask.details + '\n' : '') + formattedBlock;
} else {
report(
'warn',
'AI response object did not contain generated content in the "details" field. Original details remain unchanged.'
'AI response was empty or whitespace after trimming. Original details remain unchanged.'
);
newlyAddedSnippet = 'No new details were added by the AI.';
}
// --- END TIMESTAMP & FORMATTING LOGIC ---

// Get a reference to the subtask *after* its details have been updated
const updatedSubtask = parentTask.subtasks[subtaskIndex]; // subtask === updatedSubtask now
const updatedSubtask = parentTask.subtasks[subtaskIndex];

report('info', 'Updated subtask details locally after AI generation.');
// --- END UPDATE SUBTASK ---

// Only show debug info for text output (CLI)
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(
'>>> DEBUG: Subtask details AFTER AI update:',
updatedSubtask.details // Use updatedSubtask
updatedSubtask.details
);
}

// Description update logic (keeping as is for now)
if (updatedSubtask.description) {
// Use updatedSubtask
if (prompt.length < 100) {
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(
'>>> DEBUG: Subtask description BEFORE append:',
updatedSubtask.description // Use updatedSubtask
updatedSubtask.description
);
}
updatedSubtask.description += ` [Updated: ${new Date().toLocaleDateString()}]`; // Use updatedSubtask
updatedSubtask.description += ` [Updated: ${new Date().toLocaleDateString()}]`;
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(
'>>> DEBUG: Subtask description AFTER append:',
updatedSubtask.description // Use updatedSubtask
updatedSubtask.description
);
}
}
}

// Only show debug info for text output (CLI)
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log('>>> DEBUG: About to call writeJSON with updated data...');
}

// Write the updated tasks to the file (parentTask already contains the updated subtask)
writeJSON(tasksPath, data);

// Only show debug info for text output (CLI)
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log('>>> DEBUG: writeJSON call completed.');
}

report('success', `Successfully updated subtask ${subtaskId}`);

// Generate individual task files
await generateTaskFiles(tasksPath, path.dirname(tasksPath));

// Stop indicator before final console output - only for text output (CLI)
if (outputFormat === 'text') {
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
loadingIndicator = null;
}

console.log(
boxen(
chalk.green(`Successfully updated subtask #${subtaskId}`) +
@@ -370,31 +310,30 @@ Guidelines:
' ' +
updatedSubtask.title +
'\n\n' +
// Update the display to show the new details field
chalk.white.bold('Updated Details:') +
chalk.white.bold('Newly Added Snippet:') +
'\n' +
chalk.white(truncate(updatedSubtask.details || '', 500, true)), // Use updatedSubtask
chalk.white(newlyAddedSnippet),
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
)
);
}

return updatedSubtask; // Return the modified subtask object
if (outputFormat === 'text' && aiServiceResponse.telemetryData) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
}

return {
updatedSubtask: updatedSubtask,
telemetryData: aiServiceResponse.telemetryData
};
} catch (error) {
// Outer catch block handles final errors after loop/attempts
// Stop indicator on error - only for text output (CLI)
if (outputFormat === 'text' && loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
loadingIndicator = null;
}

report('error', `Error updating subtask: ${error.message}`);

// Only show error UI for text output (CLI)
if (outputFormat === 'text') {
console.error(chalk.red(`Error: ${error.message}`));

// Provide helpful error messages based on error type
if (error.message?.includes('ANTHROPIC_API_KEY')) {
console.log(
chalk.yellow('\nTo fix this issue, set your Anthropic API key:')
@@ -409,7 +348,6 @@ Guidelines:
' 2. Or run without the research flag: task-master update-subtask --id=<id> --prompt="..."'
);
} else if (error.message?.includes('overloaded')) {
// Catch final overload error
console.log(
chalk.yellow(
'\nAI model overloaded, and fallback failed or was unavailable:'
@@ -417,7 +355,6 @@ Guidelines:
);
console.log(' 1. Try again in a few minutes.');
console.log(' 2. Ensure PERPLEXITY_API_KEY is set for fallback.');
console.log(' 3. Consider breaking your prompt into smaller updates.');
} else if (error.message?.includes('not found')) {
console.log(chalk.yellow('\nTo fix this issue:'));
console.log(
@@ -426,22 +363,22 @@ Guidelines:
console.log(
' 2. Use a valid subtask ID with the --id parameter in format "parentId.subtaskId"'
);
} else if (error.message?.includes('empty stream response')) {
} else if (
error.message?.includes('empty stream response') ||
error.message?.includes('AI did not return a valid text string')
) {
console.log(
chalk.yellow(
'\nThe AI model returned an empty response. This might be due to the prompt or API issues. Try rephrasing or trying again later.'
'\nThe AI model returned an empty or invalid response. This might be due to the prompt or API issues. Try rephrasing or trying again later.'
)
);
}

if (getDebugFlag(session)) {
// Use getter
console.error(error);
}
} else {
throw error; // Re-throw for JSON output
throw error;
}

return null;
}
}

@@ -16,7 +16,8 @@ import {
import {
getStatusWithColor,
startLoadingIndicator,
stopLoadingIndicator
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';

import { generateTextService } from '../ai-services-unified.js';
@@ -94,10 +95,6 @@ function parseUpdatedTaskFromText(text, expectedTaskId, logFn, isMCP) {
// It worked! Use this as the primary cleaned response.
cleanedResponse = potentialJsonFromBraces;
parseMethodUsed = 'braces';
report(
'info',
'Successfully parsed JSON content extracted between first { and last }.'
);
} catch (e) {
report(
'info',
@@ -376,63 +373,37 @@ The changes described in the prompt should be thoughtfully applied to make the t
const userPrompt = `Here is the task to update:\n${taskDataString}\n\nPlease update this task based on the following new context:\n${prompt}\n\nIMPORTANT: In the task JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items.\n\nReturn only the updated task as a valid JSON object.`;
// --- End Build Prompts ---

let updatedTask;
let loadingIndicator = null;
if (outputFormat === 'text') {
let aiServiceResponse = null;

if (!isMCP && outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
useResearch ? 'Updating task with research...\n' : 'Updating task...\n'
);
}

let responseText = '';
try {
// --- Call Unified AI Service (generateTextService) ---
const role = useResearch ? 'research' : 'main';
report('info', `Using AI service with role: ${role}`);

responseText = await generateTextService({
prompt: userPrompt,
const serviceRole = useResearch ? 'research' : 'main';
aiServiceResponse = await generateTextService({
role: serviceRole,
session: session,
projectRoot: projectRoot,
systemPrompt: systemPrompt,
role,
session,
projectRoot
prompt: userPrompt,
commandName: 'update-task',
outputType: isMCP ? 'mcp' : 'cli'
});
report('success', 'Successfully received text response from AI service');
// --- End AI Service Call ---
} catch (error) {
// Catch errors from generateTextService
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
report('error', `Error during AI service call: ${error.message}`);
if (error.message.includes('API key')) {
report('error', 'Please ensure API keys are configured correctly.');
}
throw error; // Re-throw error
} finally {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
}

// --- Parse and Validate Response ---
try {
// Pass logFn and isMCP flag to the parser
updatedTask = parseUpdatedTaskFromText(
responseText,
if (loadingIndicator)
stopLoadingIndicator(loadingIndicator, 'AI update complete.');

// Use mainResult (text) for parsing
const updatedTask = parseUpdatedTaskFromText(
aiServiceResponse.mainResult,
taskId,
logFn,
isMCP
);
} catch (parseError) {
report(
'error',
`Failed to parse updated task from AI response: ${parseError.message}`
);
if (getDebugFlag(session)) {
report('error', `Raw AI Response:\n${responseText}`);
}
throw new Error(
`Failed to parse valid updated task from AI response: ${parseError.message}`
);
}
// --- End Parse/Validate ---

// --- Task Validation/Correction (Keep existing logic) ---
if (!updatedTask || typeof updatedTask !== 'object')
@@ -458,7 +429,10 @@ The changes described in the prompt should be thoughtfully applied to make the t
// Preserve completed subtasks (Keep existing logic)
if (taskToUpdate.subtasks?.length > 0) {
if (!updatedTask.subtasks) {
report('warn', 'Subtasks removed by AI. Restoring original subtasks.');
report(
'warn',
'Subtasks removed by AI. Restoring original subtasks.'
);
updatedTask.subtasks = taskToUpdate.subtasks;
} else {
const completedOriginal = taskToUpdate.subtasks.filter(
@@ -502,19 +476,31 @@ The changes described in the prompt should be thoughtfully applied to make the t
data.tasks[taskIndex] = updatedTask;
// --- End Update Task Data ---

// --- Write File and Generate (Keep existing) ---
// --- Write File and Generate (Unchanged) ---
writeJSON(tasksPath, data);
report('success', `Successfully updated task ${taskId}`);
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
// --- End Write File ---

// --- Final CLI Output (Keep existing) ---
if (outputFormat === 'text') {
/* ... success boxen ... */
// --- Display CLI Telemetry ---
if (outputFormat === 'text' && aiServiceResponse.telemetryData) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli'); // <<< ADD display
}
// --- End Final CLI Output ---

return updatedTask; // Return the updated task
// --- Return Success with Telemetry ---
return {
updatedTask: updatedTask, // Return the updated task object
telemetryData: aiServiceResponse.telemetryData // <<< ADD telemetryData
};
} catch (error) {
// Catch errors from generateTextService
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
report('error', `Error during AI service call: ${error.message}`);
if (error.message.includes('API key')) {
report('error', 'Please ensure API keys are configured correctly.');
}
throw error; // Re-throw error
}
} catch (error) {
// General error catch
// --- General Error Handling (Keep existing) ---

@@ -15,7 +15,8 @@ import {
import {
getStatusWithColor,
startLoadingIndicator,
stopLoadingIndicator
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';

import { getDebugFlag } from '../config-manager.js';
@@ -93,10 +94,6 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
// It worked! Use this as the primary cleaned response.
cleanedResponse = potentialJsonFromArray;
parseMethodUsed = 'brackets';
report(
'info',
'Successfully parsed JSON content extracted between first [ and last ].'
);
} catch (e) {
report(
'info',
@@ -275,7 +272,7 @@ async function updateTasks(
chalk.cyan.bold('Title'),
chalk.cyan.bold('Status')
],
colWidths: [5, 60, 10]
colWidths: [5, 70, 20]
});

tasksToUpdate.forEach((task) => {
@@ -350,94 +347,61 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
const userPrompt = `Here are the tasks to update:\n${taskDataString}\n\nPlease update these tasks based on the following new context:\n${prompt}\n\nIMPORTANT: In the tasks JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items.\n\nReturn only the updated tasks as a valid JSON array.`;
// --- End Build Prompts ---

// --- AI Call ---
let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator('Updating tasks...\n');
let aiServiceResponse = null;

if (!isMCP && outputFormat === 'text') {
loadingIndicator = startLoadingIndicator('Updating tasks with AI...\n');
}

let responseText = '';
let updatedTasks;

try {
// --- Call Unified AI Service ---
const role = useResearch ? 'research' : 'main';
if (isMCP) logFn.info(`Using AI service with role: ${role}`);
else logFn('info', `Using AI service with role: ${role}`);
// Determine role based on research flag
const serviceRole = useResearch ? 'research' : 'main';

responseText = await generateTextService({
prompt: userPrompt,
// Call the unified AI service
aiServiceResponse = await generateTextService({
role: serviceRole,
session: session,
projectRoot: projectRoot,
systemPrompt: systemPrompt,
role,
session,
projectRoot
prompt: userPrompt,
commandName: 'update-tasks',
outputType: isMCP ? 'mcp' : 'cli'
});
if (isMCP) logFn.info('Successfully received text response');
else
logFn('success', 'Successfully received text response via AI service');
// --- End AI Service Call ---
} catch (error) {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
if (isMCP) logFn.error(`Error during AI service call: ${error.message}`);
else logFn('error', `Error during AI service call: ${error.message}`);
if (error.message.includes('API key')) {
if (isMCP)
logFn.error(
'Please ensure API keys are configured correctly in .env or mcp.json.'
);
else
logFn(
'error',
'Please ensure API keys are configured correctly in .env or mcp.json.'
);
}
throw error; // Re-throw error
} finally {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
}

// --- Parse and Validate Response ---
try {
updatedTasks = parseUpdatedTasksFromText(
responseText,
if (loadingIndicator)
stopLoadingIndicator(loadingIndicator, 'AI update complete.');

// Use the mainResult (text) for parsing
const parsedUpdatedTasks = parseUpdatedTasksFromText(
aiServiceResponse.mainResult,
tasksToUpdate.length,
logFn,
isMCP
);
} catch (parseError) {

// --- Update Tasks Data (Unchanged) ---
if (!Array.isArray(parsedUpdatedTasks)) {
// Should be caught by parser, but extra check
throw new Error(
'Parsed AI response for updated tasks was not an array.'
);
}
if (isMCP)
logFn.error(
`Failed to parse updated tasks from AI response: ${parseError.message}`
logFn.info(
`Received ${parsedUpdatedTasks.length} updated tasks from AI.`
);
else
logFn(
'error',
`Failed to parse updated tasks from AI response: ${parseError.message}`
'info',
`Received ${parsedUpdatedTasks.length} updated tasks from AI.`
);
if (getDebugFlag(session)) {
if (isMCP) logFn.error(`Raw AI Response:\n${responseText}`);
else logFn('error', `Raw AI Response:\n${responseText}`);
}
throw new Error(
`Failed to parse valid updated tasks from AI response: ${parseError.message}`
);
}
// --- End Parse/Validate ---

// --- Update Tasks Data (Unchanged) ---
if (!Array.isArray(updatedTasks)) {
// Should be caught by parser, but extra check
throw new Error('Parsed AI response for updated tasks was not an array.');
}
if (isMCP)
logFn.info(`Received ${updatedTasks.length} updated tasks from AI.`);
else
logFn('info', `Received ${updatedTasks.length} updated tasks from AI.`);
// Create a map for efficient lookup
const updatedTasksMap = new Map(
updatedTasks.map((task) => [task.id, task])
parsedUpdatedTasks.map((task) => [task.id, task])
);

// Iterate through the original data and update based on the map
let actualUpdateCount = 0;
data.tasks.forEach((task, index) => {
if (updatedTasksMap.has(task.id)) {
@@ -455,9 +419,7 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
'info',
`Applied updates to ${actualUpdateCount} tasks in the dataset.`
);
// --- End Update Tasks Data ---

// --- Write File and Generate (Unchanged) ---
writeJSON(tasksPath, data);
if (isMCP)
logFn.info(
@@ -469,19 +431,35 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
`Successfully updated ${actualUpdateCount} tasks in ${tasksPath}`
);
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
// --- End Write File ---

// --- Final CLI Output (Unchanged) ---
if (outputFormat === 'text') {
console.log(
boxen(chalk.green(`Successfully updated ${actualUpdateCount} tasks`), {
padding: 1,
borderColor: 'green',
borderStyle: 'round'
})
if (outputFormat === 'text' && aiServiceResponse.telemetryData) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
}

return {
success: true,
updatedTasks: parsedUpdatedTasks,
telemetryData: aiServiceResponse.telemetryData
};
} catch (error) {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
if (isMCP) logFn.error(`Error during AI service call: ${error.message}`);
else logFn('error', `Error during AI service call: ${error.message}`);
if (error.message.includes('API key')) {
if (isMCP)
logFn.error(
'Please ensure API keys are configured correctly in .env or mcp.json.'
);
else
logFn(
'error',
'Please ensure API keys are configured correctly in .env or mcp.json.'
);
}
// --- End Final CLI Output ---
throw error;
} finally {
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
}
} catch (error) {
// --- General Error Handling (Unchanged) ---
if (isMCP) logFn.error(`Error updating tasks: ${error.message}`);

@@ -9,11 +9,22 @@ import boxen from 'boxen';
import ora from 'ora';
import Table from 'cli-table3';
import gradient from 'gradient-string';
import { log, findTaskById, readJSON, truncate } from './utils.js';
import path from 'path';
import {
log,
findTaskById,
readJSON,
truncate,
isSilentMode
} from './utils.js';
import fs from 'fs';
import { findNextTask, analyzeTaskComplexity } from './task-manager.js';
import {
findNextTask,
analyzeTaskComplexity,
readComplexityReport
} from './task-manager.js';
import { getProjectName, getDefaultSubtasks } from './config-manager.js';
import { TASK_STATUS_OPTIONS } from '../../src/constants/task-status.js';
import { getTaskMasterVersion } from '../../src/utils/getVersion.js';

// Create a color gradient for the banner
const coolGradient = gradient(['#00b4d8', '#0077b6', '#03045e']);
@@ -23,6 +34,8 @@ const warmGradient = gradient(['#fb8b24', '#e36414', '#9a031e']);
* Display a fancy banner for the CLI
*/
function displayBanner() {
if (isSilentMode()) return;

console.clear();
const bannerText = figlet.textSync('Task Master', {
font: 'Standard',
@@ -38,17 +51,7 @@ function displayBanner() {
);

// Read version directly from package.json
let version = 'unknown'; // Initialize with a default
try {
const packageJsonPath = path.join(process.cwd(), 'package.json');
if (fs.existsSync(packageJsonPath)) {
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
version = packageJson.version;
}
} catch (error) {
// Silently fall back to default version
log('warn', 'Could not read package.json for version info.');
}
const version = getTaskMasterVersion();

console.log(
boxen(
@@ -265,12 +268,14 @@ function getStatusWithColor(status, forTable = false) {
* @param {Array} dependencies - Array of dependency IDs
* @param {Array} allTasks - Array of all tasks
* @param {boolean} forConsole - Whether the output is for console display
* @param {Object|null} complexityReport - Optional pre-loaded complexity report
* @returns {string} Formatted dependencies string
*/
function formatDependenciesWithStatus(
dependencies,
allTasks,
forConsole = false
forConsole = false,
complexityReport = null // Add complexityReport parameter
) {
if (
!dependencies ||
@@ -334,7 +339,11 @@ function formatDependenciesWithStatus(
typeof depId === 'string' ? parseInt(depId, 10) : depId;

// Look up the task using the numeric ID
const depTaskResult = findTaskById(allTasks, numericDepId);
const depTaskResult = findTaskById(
allTasks,
numericDepId,
complexityReport
);
const depTask = depTaskResult.task; // Access the task object from the result

if (!depTask) {
@@ -450,7 +459,7 @@ function displayHelp() {
{
name: 'set-status',
args: '--id=<id> --status=<status>',
desc: 'Update task status (done, pending, etc.)'
desc: `Update task status (${TASK_STATUS_OPTIONS.join(', ')})`
},
{
name: 'update',
@@ -753,7 +762,7 @@ function truncateString(str, maxLength) {
* Display the next task to work on
* @param {string} tasksPath - Path to the tasks.json file
*/
async function displayNextTask(tasksPath) {
async function displayNextTask(tasksPath, complexityReportPath = null) {
displayBanner();

// Read the tasks file
@@ -763,8 +772,11 @@ async function displayNextTask(tasksPath) {
process.exit(1);
}

// Read complexity report once
const complexityReport = readComplexityReport(complexityReportPath);

// Find the next task
const nextTask = findNextTask(data.tasks);
const nextTask = findNextTask(data.tasks, complexityReport);

if (!nextTask) {
console.log(
@@ -801,12 +813,7 @@ async function displayNextTask(tasksPath) {
'padding-bottom': 0,
compact: true
},
chars: {
mid: '',
'left-mid': '',
'mid-mid': '',
'right-mid': ''
},
chars: { mid: '', 'left-mid': '', 'mid-mid': '', 'right-mid': '' },
colWidths: [15, Math.min(75, process.stdout.columns - 20 || 60)],
wordWrap: true
});
@@ -830,7 +837,18 @@ async function displayNextTask(tasksPath) {
],
[
chalk.cyan.bold('Dependencies:'),
formatDependenciesWithStatus(nextTask.dependencies, data.tasks, true)
formatDependenciesWithStatus(
nextTask.dependencies,
data.tasks,
true,
complexityReport
)
],
[
chalk.cyan.bold('Complexity:'),
nextTask.complexityScore
? getComplexityWithColor(nextTask.complexityScore)
: chalk.gray('N/A')
],
[chalk.cyan.bold('Description:'), nextTask.description]
);
@@ -852,8 +870,11 @@ async function displayNextTask(tasksPath) {
);
}

// Show subtasks if they exist
if (nextTask.subtasks && nextTask.subtasks.length > 0) {
// Determine if the nextTask is a subtask
const isSubtask = !!nextTask.parentId;

// Show subtasks if they exist (only for parent tasks)
if (!isSubtask && nextTask.subtasks && nextTask.subtasks.length > 0) {
console.log(
boxen(chalk.white.bold('Subtasks'), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
@@ -894,12 +915,7 @@ async function displayNextTask(tasksPath) {
'padding-bottom': 0,
compact: true
},
chars: {
mid: '',
'left-mid': '',
'mid-mid': '',
'right-mid': ''
},
chars: { mid: '', 'left-mid': '', 'mid-mid': '', 'right-mid': '' },
wordWrap: true
});

@@ -958,8 +974,10 @@ async function displayNextTask(tasksPath) {
});

console.log(subtaskTable.toString());
} else {
// Suggest expanding if no subtasks
}

// Suggest expanding if no subtasks (only for parent tasks without subtasks)
if (!isSubtask && (!nextTask.subtasks || nextTask.subtasks.length === 0)) {
console.log(
boxen(
chalk.yellow('No subtasks found. Consider breaking down this task:') +
@@ -978,22 +996,30 @@ async function displayNextTask(tasksPath) {
}

// Show action suggestions
console.log(
boxen(
chalk.white.bold('Suggested Actions:') +
'\n' +
let suggestedActionsContent = chalk.white.bold('Suggested Actions:') + '\n';
if (isSubtask) {
// Suggested actions for a subtask
suggestedActionsContent +=
`${chalk.cyan('1.')} Mark as in-progress: ${chalk.yellow(`task-master set-status --id=${nextTask.id} --status=in-progress`)}\n` +
`${chalk.cyan('2.')} Mark as done when completed: ${chalk.yellow(`task-master set-status --id=${nextTask.id} --status=done`)}\n` +
`${chalk.cyan('3.')} View parent task: ${chalk.yellow(`task-master show --id=${nextTask.parentId}`)}`;
} else {
// Suggested actions for a parent task
suggestedActionsContent +=
`${chalk.cyan('1.')} Mark as in-progress: ${chalk.yellow(`task-master set-status --id=${nextTask.id} --status=in-progress`)}\n` +
`${chalk.cyan('2.')} Mark as done when completed: ${chalk.yellow(`task-master set-status --id=${nextTask.id} --status=done`)}\n` +
(nextTask.subtasks && nextTask.subtasks.length > 0
? `${chalk.cyan('3.')} Update subtask status: ${chalk.yellow(`task-master set-status --id=${nextTask.id}.1 --status=done`)}`
: `${chalk.cyan('3.')} Break down into subtasks: ${chalk.yellow(`task-master expand --id=${nextTask.id}`)}`),
{
? `${chalk.cyan('3.')} Update subtask status: ${chalk.yellow(`task-master set-status --id=${nextTask.id}.1 --status=done`)}` // Example: first subtask
: `${chalk.cyan('3.')} Break down into subtasks: ${chalk.yellow(`task-master expand --id=${nextTask.id}`)}`);
}

console.log(
boxen(suggestedActionsContent, {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'green',
borderStyle: 'round',
margin: { top: 1 }
}
)
})
);
}

@@ -1003,7 +1029,12 @@ async function displayNextTask(tasksPath) {
* @param {string|number} taskId - The ID of the task to display
* @param {string} [statusFilter] - Optional status to filter subtasks by
*/
async function displayTaskById(tasksPath, taskId, statusFilter = null) {
async function displayTaskById(
tasksPath,
taskId,
complexityReportPath = null,
statusFilter = null
) {
displayBanner();

// Read the tasks file
@@ -1013,11 +1044,15 @@ async function displayTaskById(tasksPath, taskId, statusFilter = null) {
process.exit(1);
}

// Read complexity report once
const complexityReport = readComplexityReport(complexityReportPath);

// Find the task by ID, applying the status filter if provided
// Returns { task, originalSubtaskCount, originalSubtasks }
const { task, originalSubtaskCount, originalSubtasks } = findTaskById(
data.tasks,
taskId,
complexityReport,
statusFilter
);

@@ -1072,6 +1107,12 @@ async function displayTaskById(tasksPath, taskId, statusFilter = null) {
chalk.cyan.bold('Status:'),
getStatusWithColor(task.status || 'pending', true)
],
[
chalk.cyan.bold('Complexity:'),
task.complexityScore
? getComplexityWithColor(task.complexityScore)
: chalk.gray('N/A')
],
[
chalk.cyan.bold('Description:'),
task.description || 'No description provided.'
@@ -1150,7 +1191,18 @@ async function displayTaskById(tasksPath, taskId, statusFilter = null) {
[chalk.cyan.bold('Priority:'), priorityColor(task.priority || 'medium')],
[
chalk.cyan.bold('Dependencies:'),
formatDependenciesWithStatus(task.dependencies, data.tasks, true)
formatDependenciesWithStatus(
task.dependencies,
data.tasks,
true,
complexityReport
)
],
[
chalk.cyan.bold('Complexity:'),
task.complexityScore
? getComplexityWithColor(task.complexityScore)
: chalk.gray('N/A')
],
[chalk.cyan.bold('Description:'), task.description]
);
@@ -1966,6 +2018,51 @@ function displayAvailableModels(availableModels) {
);
}

/**
* Displays AI usage telemetry summary in the CLI.
* @param {object} telemetryData - The telemetry data object.
* @param {string} outputType - 'cli' or 'mcp' (though typically only called for 'cli').
*/
function displayAiUsageSummary(telemetryData, outputType = 'cli') {
if (
(outputType !== 'cli' && outputType !== 'text') ||
!telemetryData ||
isSilentMode()
) {
return; // Only display for CLI and if data exists and not in silent mode
}

const {
modelUsed,
providerName,
inputTokens,
outputTokens,
totalTokens,
totalCost,
commandName
} = telemetryData;

let summary = chalk.bold.blue('AI Usage Summary:') + '\n';
summary += chalk.gray(` Command: ${commandName}\n`);
summary += chalk.gray(` Provider: ${providerName}\n`);
summary += chalk.gray(` Model: ${modelUsed}\n`);
summary += chalk.gray(
` Tokens: ${totalTokens} (Input: ${inputTokens}, Output: ${outputTokens})\n`
);
summary += chalk.gray(` Est. Cost: $${totalCost.toFixed(6)}`);

console.log(
boxen(summary, {
padding: 1,
margin: { top: 1 },
borderColor: 'blue',
borderStyle: 'round',
title: '💡 Telemetry',
titleAlignment: 'center'
})
);
}

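For reference, a minimal usage sketch of the new `displayAiUsageSummary` helper — the object shape mirrors the fields it destructures above; all values here are hypothetical:

```js
// Hypothetical telemetry payload with the fields the helper expects.
const telemetryData = {
	commandName: 'parse-prd',
	providerName: 'anthropic',
	modelUsed: 'example-model', // hypothetical model id
	inputTokens: 1200,
	outputTokens: 800,
	totalTokens: 2000,
	totalCost: 0.0123
};

// Prints the boxed "AI Usage Summary" for CLI output; returns early for 'mcp'.
displayAiUsageSummary(telemetryData, 'cli');
```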
// Export UI functions
export {
displayBanner,
@@ -1983,5 +2080,6 @@ export {
confirmTaskOverwrite,
displayApiKeyStatus,
displayModelConfiguration,
displayAvailableModels
displayAvailableModels,
displayAiUsageSummary
};

@@ -275,6 +275,22 @@ function findTaskInComplexityReport(report, taskId) {
return report.complexityAnalysis.find((task) => task.taskId === taskId);
}

function addComplexityToTask(task, complexityReport) {
let taskId;
if (task.isSubtask) {
taskId = task.parentTask.id;
} else if (task.parentId) {
taskId = task.parentId;
} else {
taskId = task.id;
}

const taskAnalysis = findTaskInComplexityReport(complexityReport, taskId);
if (taskAnalysis) {
task.complexityScore = taskAnalysis.complexityScore;
}
}

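A small sketch of the lookup behaviour, assuming a report shaped the way `findTaskInComplexityReport` expects (the IDs, titles and score below are hypothetical):

```js
const complexityReport = {
	complexityAnalysis: [{ taskId: 3, complexityScore: 7 }]
};

// A plain task resolves against its own id.
const task = { id: 3, title: 'Implement telemetry' };
addComplexityToTask(task, complexityReport);
// task.complexityScore === 7

// A subtask resolves against its parent's analysis entry.
const subtask = { id: 2, parentId: 3, title: 'Wire up CLI summary' };
addComplexityToTask(subtask, complexityReport);
// subtask.complexityScore === 7
```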
/**
|
||||
* Checks if a task exists in the tasks array
|
||||
* @param {Array} tasks - The tasks array
|
||||
@@ -325,10 +341,17 @@ function formatTaskId(id) {
|
||||
* Finds a task by ID in the tasks array. Optionally filters subtasks by status.
|
||||
* @param {Array} tasks - The tasks array
|
||||
* @param {string|number} taskId - The task ID to find
|
||||
* @param {Object|null} complexityReport - Optional pre-loaded complexity report
|
||||
* @returns {Object|null} The task object or null if not found
|
||||
* @param {string} [statusFilter] - Optional status to filter subtasks by
|
||||
* @returns {{task: Object|null, originalSubtaskCount: number|null}} The task object (potentially with filtered subtasks) and the original subtask count if filtered, or nulls if not found.
|
||||
*/
|
||||
function findTaskById(tasks, taskId, statusFilter = null) {
|
||||
function findTaskById(
|
||||
tasks,
|
||||
taskId,
|
||||
complexityReport = null,
|
||||
statusFilter = null
|
||||
) {
|
||||
if (!taskId || !tasks || !Array.isArray(tasks)) {
|
||||
return { task: null, originalSubtaskCount: null };
|
||||
}
|
||||
@@ -356,10 +379,17 @@ function findTaskById(tasks, taskId, statusFilter = null) {
|
||||
subtask.isSubtask = true;
|
||||
}
|
||||
|
||||
// Return the found subtask (or null) and null for originalSubtaskCount
|
||||
// If we found a task, check for complexity data
|
||||
if (subtask && complexityReport) {
|
||||
addComplexityToTask(subtask, complexityReport);
|
||||
}
|
||||
|
||||
return { task: subtask || null, originalSubtaskCount: null };
|
||||
}
|
||||
|
||||
let taskResult = null;
|
||||
let originalSubtaskCount = null;
|
||||
|
||||
// Find the main task
|
||||
const id = parseInt(taskId, 10);
|
||||
const task = tasks.find((t) => t.id === id) || null;
|
||||
@@ -369,6 +399,8 @@ function findTaskById(tasks, taskId, statusFilter = null) {
|
||||
return { task: null, originalSubtaskCount: null };
|
||||
}
|
||||
|
||||
taskResult = task;
|
||||
|
||||
// If task found and statusFilter provided, filter its subtasks
|
||||
if (statusFilter && task.subtasks && Array.isArray(task.subtasks)) {
|
||||
const originalSubtaskCount = task.subtasks.length;
|
||||
@@ -379,12 +411,18 @@ function findTaskById(tasks, taskId, statusFilter = null) {
|
||||
subtask.status &&
|
||||
subtask.status.toLowerCase() === statusFilter.toLowerCase()
|
||||
);
|
||||
// Return the filtered task and the original count
|
||||
return { task: filteredTask, originalSubtaskCount: originalSubtaskCount };
|
||||
|
||||
taskResult = filteredTask;
|
||||
originalSubtaskCount = originalSubtaskCount;
|
||||
}
|
||||
|
||||
// Return original task and null count if no filter or no subtasks
|
||||
return { task: task, originalSubtaskCount: null };
|
||||
// If task found and complexityReport provided, add complexity data
|
||||
if (taskResult && complexityReport) {
|
||||
addComplexityToTask(taskResult, complexityReport);
|
||||
}
|
||||
|
||||
// Return the found task and original subtask count
|
||||
return { task: taskResult, originalSubtaskCount };
|
||||
}
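A sketch of calling the widened signature, assuming a tasks array already loaded from tasks.json and the complexityReport sketched earlier; the sample data is illustrative:

const tasks = [
	{
		id: 26,
		title: 'Implement Context Foundation for AI Operations',
		subtasks: [
			{ id: 1, status: 'done' },
			{ id: 2, status: 'pending' }
		]
	}
];
const { task, originalSubtaskCount } = findTaskById(
	tasks,
	'26',
	complexityReport, // optional; attaches complexityScore when an entry exists
	'pending' // optional; keeps only matching subtasks
);
// task.subtasks.length === 1, originalSubtaskCount === 2, task.complexityScore === 6.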

/**
@@ -508,6 +546,61 @@ function detectCamelCaseFlags(args) {
	return camelCaseFlags;
}

/**
 * Aggregates an array of telemetry objects into a single summary object.
 * @param {Array<Object>} telemetryArray - Array of telemetryData objects.
 * @param {string} overallCommandName - The name for the aggregated command.
 * @returns {Object|null} Aggregated telemetry object or null if input is empty.
 */
function aggregateTelemetry(telemetryArray, overallCommandName) {
	if (!telemetryArray || telemetryArray.length === 0) {
		return null;
	}

	const aggregated = {
		timestamp: new Date().toISOString(), // Use current time for aggregation time
		userId: telemetryArray[0].userId, // Assume userId is consistent
		commandName: overallCommandName,
		modelUsed: 'Multiple', // Default if models vary
		providerName: 'Multiple', // Default if providers vary
		inputTokens: 0,
		outputTokens: 0,
		totalTokens: 0,
		totalCost: 0,
		currency: telemetryArray[0].currency || 'USD' // Assume consistent currency or default
	};

	const uniqueModels = new Set();
	const uniqueProviders = new Set();
	const uniqueCurrencies = new Set();

	telemetryArray.forEach((item) => {
		aggregated.inputTokens += item.inputTokens || 0;
		aggregated.outputTokens += item.outputTokens || 0;
		aggregated.totalCost += item.totalCost || 0;
		uniqueModels.add(item.modelUsed);
		uniqueProviders.add(item.providerName);
		uniqueCurrencies.add(item.currency || 'USD');
	});

	aggregated.totalTokens = aggregated.inputTokens + aggregated.outputTokens;
	aggregated.totalCost = parseFloat(aggregated.totalCost.toFixed(6)); // Fix precision

	if (uniqueModels.size === 1) {
		aggregated.modelUsed = [...uniqueModels][0];
	}
	if (uniqueProviders.size === 1) {
		aggregated.providerName = [...uniqueProviders][0];
	}
	if (uniqueCurrencies.size > 1) {
		aggregated.currency = 'Multiple'; // Mark if currencies actually differ
	} else if (uniqueCurrencies.size === 1) {
		aggregated.currency = [...uniqueCurrencies][0];
	}

	return aggregated;
}
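A usage sketch for aggregateTelemetry with illustrative values: two per-call telemetry objects from different providers are rolled up, so modelUsed and providerName fall back to 'Multiple':

const combined = aggregateTelemetry(
	[
		{
			userId: 'user-1',
			modelUsed: 'claude-3-7-sonnet-20250219',
			providerName: 'anthropic',
			inputTokens: 1000,
			outputTokens: 500,
			totalCost: 0.01,
			currency: 'USD'
		},
		{
			userId: 'user-1',
			modelUsed: 'sonar-pro',
			providerName: 'perplexity',
			inputTokens: 400,
			outputTokens: 300,
			totalCost: 0.004,
			currency: 'USD'
		}
	],
	'expand-all'
);
// combined.totalTokens === 2200, combined.totalCost === 0.014,
// and modelUsed/providerName become 'Multiple' because they differ across calls.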

// Export all utility functions and configuration
export {
	LOG_LEVELS,
@@ -524,10 +617,12 @@ export {
	findCycles,
	toKebabCase,
	detectCamelCaseFlags,
	enableSilentMode,
	disableSilentMode,
	isSilentMode,
	resolveEnvVariable,
	enableSilentMode,
	getTaskManager,
	findProjectRoot
	isSilentMode,
	addComplexityToTask,
	resolveEnvVariable,
	findProjectRoot,
	aggregateTelemetry
};
@@ -1,3 +0,0 @@
Task Master PRD

Create a CLI tool for task management
@@ -1,7 +1,7 @@
{
	"meta": {
		"generatedAt": "2025-05-01T18:17:08.817Z",
		"tasksAnalyzed": 35,
		"generatedAt": "2025-05-17T22:29:22.179Z",
		"tasksAnalyzed": 40,
		"thresholdScore": 5,
		"projectName": "Taskmaster",
		"usedResearch": false
@@ -12,280 +12,320 @@
			"taskTitle": "Implement AI-Powered Test Generation Command",
			"complexityScore": 7,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the implementation of the 'generate-test' command into detailed subtasks covering command structure, AI prompt engineering, test file generation, and integration with existing systems.",
			"reasoning": "This task involves creating a new CLI command that leverages AI to generate test files. It requires integration with Claude API, understanding of Jest testing, file system operations, and complex prompt engineering. The task already has 3 subtasks but would benefit from further breakdown to address error handling, documentation, and test validation components."
			"expansionPrompt": "Break down the implementation of the AI-powered test generation command into detailed subtasks covering: command structure setup, AI prompt engineering, test file generation logic, integration with Claude API, and comprehensive error handling.",
			"reasoning": "This task involves complex integration with an AI service (Claude), requires sophisticated prompt engineering, and needs to generate structured code files. The existing 3 subtasks are a good start but could be expanded to include more detailed steps for AI integration, error handling, and test file formatting."
		},
		{
			"taskId": 26,
			"taskTitle": "Implement Context Foundation for AI Operations",
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of the context foundation for AI operations into detailed subtasks covering file context handling, cursor rules integration, context extraction utilities, and command handler updates.",
			"reasoning": "This task involves creating a foundation for context integration in Task Master. It requires implementing file reading functionality, cursor rules integration, and context extraction utilities. The task already has 4 subtasks but would benefit from additional subtasks for testing, documentation, and integration with existing AI operations."
			"recommendedSubtasks": 4,
			"expansionPrompt": "The current 4 subtasks for implementing the context foundation appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or integration with existing systems.",
			"reasoning": "This task involves creating a foundation for context integration with several well-defined components. The existing 4 subtasks cover the main implementation areas (context-file flag, cursor rules integration, context extraction utility, and command handler updates). The complexity is moderate as it requires careful integration with existing systems but has clear requirements."
		},
		{
			"taskId": 27,
			"taskTitle": "Implement Context Enhancements for AI Operations",
			"complexityScore": 7,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of context enhancements for AI operations into detailed subtasks covering code context extraction, task history context, PRD context integration, and context formatting improvements.",
			"reasoning": "This task builds upon the foundation from task #26 and adds more sophisticated context features. It involves implementing code context extraction, task history awareness, and PRD integration. The task already has 4 subtasks but would benefit from additional subtasks for testing, documentation, and integration with the foundation context system."
			"recommendedSubtasks": 4,
			"expansionPrompt": "The current 4 subtasks for implementing context enhancements appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or performance optimization.",
			"reasoning": "This task builds upon the foundation from Task #26 and adds more sophisticated context handling features. The 4 existing subtasks cover the main implementation areas (code context extraction, task history context, PRD context integration, and context formatting). The complexity is higher than the foundation task due to the need for intelligent context selection and optimization."
		},
		{
			"taskId": 28,
			"taskTitle": "Implement Advanced ContextManager System",
			"complexityScore": 8,
			"recommendedSubtasks": 7,
			"expansionPrompt": "Break down the implementation of the advanced ContextManager system into detailed subtasks covering class structure, optimization pipeline, command interface, AI service integration, and performance monitoring.",
			"reasoning": "This task involves creating a comprehensive ContextManager class with advanced features like context optimization, prioritization, and intelligent selection. It builds on the previous context tasks and requires sophisticated algorithms for token management and context relevance scoring. The task already has 5 subtasks but would benefit from additional subtasks for testing, documentation, and integration with existing systems."
		},
		{
			"taskId": 32,
			"taskTitle": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
			"complexityScore": 9,
			"recommendedSubtasks": 8,
			"expansionPrompt": "Break down the implementation of the 'learn' command for automatic Cursor rule generation into detailed subtasks covering chat history analysis, rule management, AI integration, and command structure.",
			"reasoning": "This task involves creating a complex system that analyzes Cursor's chat history and code changes to automatically generate rule files. It requires sophisticated data analysis, pattern recognition, and AI integration. The task already has 15 subtasks, which is appropriate given its complexity, but could benefit from reorganization into logical groupings."
			"recommendedSubtasks": 5,
			"expansionPrompt": "The current 5 subtasks for implementing the advanced ContextManager system appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or backward compatibility with previous context implementations.",
			"reasoning": "This task represents the most complex phase of the context implementation, requiring a sophisticated class design, optimization algorithms, and integration with multiple systems. The 5 existing subtasks cover the core implementation areas, but the complexity is high due to the need for intelligent context prioritization, token management, and performance monitoring."
		},
		{
			"taskId": 40,
			"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
			"complexityScore": 5,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the implementation of the 'plan' command for task implementation planning into detailed subtasks covering command structure, AI integration, plan formatting, and error handling.",
			"reasoning": "This task involves creating a new command that generates implementation plans for tasks. It requires integration with AI services, understanding of task structure, and proper formatting of generated plans. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
			"recommendedSubtasks": 4,
			"expansionPrompt": "The current 4 subtasks for implementing the 'plan' command appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or integration with existing task management workflows.",
			"reasoning": "This task involves creating a new command that leverages AI to generate implementation plans. The existing 4 subtasks cover the main implementation areas (retrieving task content, generating plans with AI, formatting in XML, and error handling). The complexity is moderate as it builds on existing patterns for task updates but requires careful AI integration."
		},
		{
			"taskId": 41,
			"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
			"complexityScore": 8,
			"recommendedSubtasks": 8,
			"expansionPrompt": "Break down the implementation of the visual task dependency graph in terminal into detailed subtasks covering graph layout algorithms, ASCII/Unicode rendering, color coding, circular dependency detection, and filtering options.",
			"reasoning": "This task involves creating a complex visualization system for task dependencies using ASCII/Unicode characters. It requires sophisticated layout algorithms, rendering logic, and user interface considerations. The task already has 10 subtasks, which is appropriate given its complexity."
			"recommendedSubtasks": 10,
			"expansionPrompt": "The current 10 subtasks for implementing the visual task dependency graph appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large graphs or additional visualization options.",
			"reasoning": "This task involves creating a sophisticated visualization system for terminal display, which is inherently complex due to layout algorithms, ASCII/Unicode rendering, and handling complex dependency relationships. The 10 existing subtasks cover all major aspects of implementation, from CLI interface to accessibility features."
		},
		{
			"taskId": 42,
			"taskTitle": "Implement MCP-to-MCP Communication Protocol",
			"complexityScore": 9,
			"recommendedSubtasks": 10,
			"expansionPrompt": "Break down the implementation of the MCP-to-MCP communication protocol into detailed subtasks covering protocol definition, adapter pattern, client module, reference implementation, and mode switching.",
			"reasoning": "This task involves designing and implementing a standardized communication protocol for Taskmaster to interact with external MCP tools. It requires sophisticated protocol design, authentication mechanisms, error handling, and support for different operational modes. The task already has 8 subtasks but would benefit from additional subtasks for security, testing, and documentation."
		},
		{
			"taskId": 43,
			"taskTitle": "Add Research Flag to Add-Task Command",
			"complexityScore": 3,
			"recommendedSubtasks": 4,
			"expansionPrompt": "Break down the implementation of the research flag for the add-task command into detailed subtasks covering command argument parsing, research subtask generation, integration with existing command, and documentation.",
			"reasoning": "This task involves modifying the add-task command to support a new flag that generates research-oriented subtasks. It's relatively straightforward as it builds on existing functionality. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
			"recommendedSubtasks": 8,
			"expansionPrompt": "The current 8 subtasks for implementing the MCP-to-MCP communication protocol appear well-structured. Consider if any additional subtasks are needed for security hardening, performance optimization, or comprehensive documentation.",
			"reasoning": "This task involves designing and implementing a complex communication protocol between different MCP tools and servers. It requires sophisticated adapter patterns, client-server architecture, and handling of multiple operational modes. The complexity is very high due to the need for standardization, security, and backward compatibility."
		},
		{
			"taskId": 44,
			"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
			"complexityScore": 8,
			"recommendedSubtasks": 7,
			"expansionPrompt": "Break down the implementation of task automation with webhooks and event triggers into detailed subtasks covering webhook registration, event system, trigger definition, authentication, and payload templating.",
			"reasoning": "This task involves creating a sophisticated automation system with webhooks and event triggers. It requires implementing webhook registration, event capturing, trigger definitions, authentication, and integration with existing systems. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this complex feature."
			"expansionPrompt": "The current 7 subtasks for implementing task automation with webhooks appear comprehensive. Consider if any additional subtasks are needed for security testing, rate limiting implementation, or webhook monitoring tools.",
			"reasoning": "This task involves creating a sophisticated event system with webhooks for integration with external services. The complexity is high due to the need for secure authentication, reliable delivery mechanisms, and handling of various webhook formats and protocols. The existing subtasks cover the main implementation areas but security and monitoring could be emphasized more."
		},
		{
			"taskId": 45,
			"taskTitle": "Implement GitHub Issue Import Feature",
			"complexityScore": 5,
			"complexityScore": 6,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the implementation of the GitHub issue import feature into detailed subtasks covering URL parsing, GitHub API integration, task generation, authentication, and error handling.",
			"reasoning": "This task involves adding a feature to import GitHub issues as tasks. It requires integration with the GitHub API, URL parsing, authentication handling, and proper error management. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
			"expansionPrompt": "The current 5 subtasks for implementing the GitHub issue import feature appear well-structured. Consider if any additional subtasks are needed for handling GitHub API rate limiting, caching, or supporting additional issue metadata.",
			"reasoning": "This task involves integrating with the GitHub API to import issues as tasks. The complexity is moderate as it requires API authentication, data mapping, and error handling. The existing 5 subtasks cover the main implementation areas from design to end-to-end implementation."
		},
		{
			"taskId": 46,
			"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of the ICE analysis command for task prioritization into detailed subtasks covering scoring algorithm, report generation, CLI rendering, and integration with existing analysis tools.",
			"reasoning": "This task involves creating a new command that analyzes and ranks tasks based on Impact, Confidence, and Ease scoring. It requires implementing scoring algorithms, report generation, CLI rendering, and integration with existing analysis tools. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path."
			"complexityScore": 7,
			"recommendedSubtasks": 5,
			"expansionPrompt": "The current 5 subtasks for implementing the ICE analysis command appear comprehensive. Consider if any additional subtasks are needed for visualization of ICE scores or integration with other prioritization methods.",
			"reasoning": "This task involves creating an AI-powered analysis system for task prioritization using the ICE methodology. The complexity is high due to the need for sophisticated scoring algorithms, AI integration, and report generation. The existing subtasks cover the main implementation areas from algorithm design to integration with existing systems."
		},
		{
			"taskId": 47,
			"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
			"complexityScore": 7,
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the enhancement of the task suggestion actions card workflow into detailed subtasks covering task expansion phase, context addition phase, task management phase, and UI/UX improvements.",
			"reasoning": "This task involves redesigning the suggestion actions card to implement a structured workflow. It requires implementing multiple phases (expansion, context addition, management) with appropriate UI/UX considerations. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path for this moderately complex feature."
			"expansionPrompt": "The current 6 subtasks for enhancing the task suggestion actions card workflow appear well-structured. Consider if any additional subtasks are needed for user testing, accessibility improvements, or performance optimization.",
			"reasoning": "This task involves redesigning the UI workflow for task expansion and management. The complexity is moderate as it requires careful UX design and state management but builds on existing components. The 6 existing subtasks cover the main implementation areas from design to testing."
		},
		{
			"taskId": 48,
			"taskTitle": "Refactor Prompts into Centralized Structure",
			"complexityScore": 4,
			"recommendedSubtasks": 4,
			"expansionPrompt": "Break down the refactoring of prompts into a centralized structure into detailed subtasks covering directory creation, prompt extraction, function modification, and documentation.",
			"reasoning": "This task involves restructuring how prompts are managed in the codebase. It's a relatively straightforward refactoring task that requires creating a new directory structure, extracting prompts from functions, and updating references. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
			"recommendedSubtasks": 3,
			"expansionPrompt": "The current 3 subtasks for refactoring prompts into a centralized structure appear appropriate. Consider if any additional subtasks are needed for prompt versioning, documentation, or testing.",
			"reasoning": "This task involves a straightforward refactoring to improve code organization. The complexity is relatively low as it primarily involves moving code rather than creating new functionality. The 3 existing subtasks cover the main implementation areas from directory structure to integration."
		},
		{
			"taskId": 49,
			"taskTitle": "Implement Code Quality Analysis Command",
			"complexityScore": 8,
			"recommendedSubtasks": 7,
			"expansionPrompt": "Break down the implementation of the code quality analysis command into detailed subtasks covering pattern recognition, best practice verification, improvement recommendations, task integration, and reporting.",
			"reasoning": "This task involves creating a sophisticated command that analyzes code quality, identifies patterns, verifies against best practices, and generates improvement recommendations. It requires complex algorithms for code analysis and integration with AI services. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this complex feature."
			"recommendedSubtasks": 6,
			"expansionPrompt": "The current 6 subtasks for implementing the code quality analysis command appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large codebases or integration with existing code quality tools.",
			"reasoning": "This task involves creating a sophisticated code analysis system with pattern recognition, best practice verification, and AI-powered recommendations. The complexity is high due to the need for code parsing, complex analysis algorithms, and integration with AI services. The existing subtasks cover the main implementation areas from algorithm design to user interface."
		},
		{
			"taskId": 50,
			"taskTitle": "Implement Test Coverage Tracking System by Task",
			"complexityScore": 9,
			"recommendedSubtasks": 8,
			"expansionPrompt": "Break down the implementation of the test coverage tracking system by task into detailed subtasks covering data structure design, coverage report parsing, tracking and update generation, CLI commands, and AI-powered test generation.",
			"reasoning": "This task involves creating a comprehensive system for tracking test coverage at the task level. It requires implementing data structures, coverage report parsing, tracking mechanisms, CLI commands, and AI integration. The task already has 5 subtasks but would benefit from additional subtasks for integration testing, documentation, and user experience."
			"recommendedSubtasks": 5,
			"expansionPrompt": "The current 5 subtasks for implementing the test coverage tracking system appear well-structured. Consider if any additional subtasks are needed for integration with CI/CD systems, performance optimization, or visualization tools.",
			"reasoning": "This task involves creating a complex system that maps test coverage to specific tasks and subtasks. The complexity is very high due to the need for sophisticated data structures, integration with coverage tools, and AI-powered test generation. The existing subtasks are comprehensive and cover the main implementation areas from data structure design to AI integration."
		},
		{
			"taskId": 51,
			"taskTitle": "Implement Perplexity Research Command",
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of the Perplexity research command into detailed subtasks covering API client service, task context extraction, CLI interface, results processing, and caching system.",
			"reasoning": "This task involves creating a command that integrates with Perplexity AI for research purposes. It requires implementing an API client, context extraction, CLI interface, results processing, and caching. The task already has 5 subtasks, which is appropriate for its complexity."
			"recommendedSubtasks": 5,
			"expansionPrompt": "The current 5 subtasks for implementing the Perplexity research command appear comprehensive. Consider if any additional subtasks are needed for caching optimization, result formatting, or integration with other research tools.",
			"reasoning": "This task involves creating a new command that integrates with the Perplexity AI API for research. The complexity is moderate as it requires API integration, context extraction, and result formatting. The 5 existing subtasks cover the main implementation areas from API client to caching system."
		},
		{
			"taskId": 52,
			"taskTitle": "Implement Task Suggestion Command for CLI",
			"complexityScore": 5,
			"complexityScore": 6,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the implementation of the task suggestion command for CLI into detailed subtasks covering task data collection, AI integration, suggestion presentation, interactive interface, and configuration options.",
			"reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions. It requires collecting existing task data, integrating with AI services, presenting suggestions, and implementing an interactive interface. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
			"expansionPrompt": "The current 5 subtasks for implementing the task suggestion command appear well-structured. Consider if any additional subtasks are needed for suggestion quality evaluation, user feedback collection, or integration with existing task workflows.",
			"reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions using AI. The complexity is moderate as it requires AI integration, context collection, and interactive CLI interfaces. The existing subtasks cover the main implementation areas from data collection to user interface."
		},
		{
			"taskId": 53,
			"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of the subtask suggestion feature for parent tasks into detailed subtasks covering parent task validation, context gathering, AI integration, interactive interface, and subtask linking.",
			"reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for existing parent tasks. It requires implementing parent task validation, context gathering, AI integration, an interactive interface, and subtask linking. The task already has 6 subtasks, which is appropriate for its complexity."
			"expansionPrompt": "The current 6 subtasks for implementing the subtask suggestion feature appear comprehensive. Consider if any additional subtasks are needed for suggestion quality metrics, user feedback collection, or performance optimization.",
			"reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for parent tasks. The complexity is moderate as it builds on existing task management systems but requires sophisticated AI integration and context analysis. The 6 existing subtasks cover the main implementation areas from validation to testing."
		},
		{
			"taskId": 55,
			"taskTitle": "Implement Positional Arguments Support for CLI Commands",
			"complexityScore": 5,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the implementation of positional arguments support for CLI commands into detailed subtasks covering argument parsing logic, command mapping, help text updates, error handling, and testing.",
			"reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. It requires updating argument parsing, mapping positional arguments to parameters, updating help text, and handling edge cases. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
			"expansionPrompt": "The current 5 subtasks for implementing positional arguments support appear well-structured. Consider if any additional subtasks are needed for backward compatibility testing, documentation updates, or user experience improvements.",
			"reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. The complexity is moderate as it requires careful handling of different argument styles and edge cases. The 5 existing subtasks cover the main implementation areas from analysis to documentation."
		},
		{
			"taskId": 57,
			"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
			"complexityScore": 7,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the enhancement of the Task-Master CLI user experience and interface into detailed subtasks covering log management, visual enhancements, interactive elements, output formatting, and help documentation.",
			"reasoning": "This task involves improving the CLI's user experience through various enhancements to logging, visuals, interactivity, and documentation. It requires implementing log levels, visual improvements, interactive elements, and better formatting. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path for this moderately complex feature."
			"expansionPrompt": "The current 6 subtasks for enhancing the CLI user experience appear comprehensive. Consider if any additional subtasks are needed for accessibility testing, internationalization, or performance optimization.",
			"reasoning": "This task involves a significant overhaul of the CLI interface to improve user experience. The complexity is high due to the breadth of changes (logging, visual elements, interactive components, etc.) and the need for consistent design across all commands. The 6 existing subtasks cover the main implementation areas from log management to help systems."
		},
		{
			"taskId": 60,
			"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
			"complexityScore": 8,
			"recommendedSubtasks": 7,
			"expansionPrompt": "Break down the implementation of the mentor system with round-table discussion feature into detailed subtasks covering mentor management, round-table discussion, task system integration, LLM integration, and documentation.",
			"reasoning": "This task involves creating a sophisticated mentor system with round-table discussions. It requires implementing mentor management, discussion simulation, task integration, and LLM integration. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this complex feature."
		},
		{
			"taskId": 61,
			"taskTitle": "Implement Flexible AI Model Management",
			"complexityScore": 10,
			"recommendedSubtasks": 10,
			"expansionPrompt": "Break down the implementation of flexible AI model management into detailed subtasks covering configuration management, CLI command parsing, AI SDK integration, service module development, environment variable handling, and documentation.",
			"reasoning": "This task involves implementing comprehensive support for multiple AI models with a unified interface. It's extremely complex, requiring configuration management, CLI commands, SDK integration, service modules, and environment handling. The task already has 45 subtasks, which is appropriate given its complexity and scope."
			"expansionPrompt": "The current 7 subtasks for implementing the mentor system appear well-structured. Consider if any additional subtasks are needed for mentor personality consistency, discussion quality evaluation, or performance optimization with multiple mentors.",
			"reasoning": "This task involves creating a sophisticated mentor simulation system with round-table discussions. The complexity is high due to the need for personality simulation, complex LLM integration, and structured discussion management. The 7 existing subtasks cover the main implementation areas from architecture to testing."
		},
		{
			"taskId": 62,
			"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
			"complexityScore": 4,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the implementation of the --simple flag for update commands into detailed subtasks covering command parser updates, AI processing bypass, timestamp formatting, visual indicators, and documentation.",
			"reasoning": "This task involves modifying update commands to accept a flag that bypasses AI processing. It requires updating command parsers, implementing conditional logic, formatting user input, and updating documentation. The task already has 8 subtasks, which is more than sufficient for its complexity."
			"recommendedSubtasks": 8,
			"expansionPrompt": "The current 8 subtasks for implementing the --simple flag appear comprehensive. Consider if any additional subtasks are needed for user experience testing or documentation updates.",
			"reasoning": "This task involves adding a simple flag option to bypass AI processing for updates. The complexity is relatively low as it primarily involves modifying existing command handlers and adding a flag. The 8 existing subtasks are very detailed and cover all aspects of implementation from command parsing to testing."
		},
		{
			"taskId": 63,
			"taskTitle": "Add pnpm Support for the Taskmaster Package",
			"complexityScore": 5,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of pnpm support for the Taskmaster package into detailed subtasks covering documentation updates, package script compatibility, lockfile generation, installation testing, CI/CD integration, and website consistency verification.",
			"reasoning": "This task involves ensuring the Taskmaster package works seamlessly with pnpm. It requires updating documentation, ensuring script compatibility, testing installation, and integrating with CI/CD. The task already has 8 subtasks, which is appropriate for its complexity."
			"recommendedSubtasks": 8,
			"expansionPrompt": "The current 8 subtasks for adding pnpm support appear comprehensive. Consider if any additional subtasks are needed for CI/CD integration, performance comparison, or documentation updates.",
			"reasoning": "This task involves ensuring the package works correctly with pnpm as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 8 existing subtasks cover all major aspects from documentation to binary verification."
		},
		{
			"taskId": 64,
			"taskTitle": "Add Yarn Support for Taskmaster Installation",
			"complexityScore": 5,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of Yarn support for Taskmaster installation into detailed subtasks covering package.json updates, Yarn-specific configuration, compatibility testing, documentation updates, package manager detection, and website consistency verification.",
			"reasoning": "This task involves ensuring the Taskmaster package works seamlessly with Yarn. It requires updating package.json, adding Yarn-specific configuration, testing compatibility, and updating documentation. The task already has 9 subtasks, which is appropriate for its complexity."
			"recommendedSubtasks": 9,
			"expansionPrompt": "The current 9 subtasks for adding Yarn support appear comprehensive. Consider if any additional subtasks are needed for performance testing, CI/CD integration, or compatibility with different Yarn versions.",
			"reasoning": "This task involves ensuring the package works correctly with Yarn as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 9 existing subtasks are very detailed and cover all aspects from configuration to testing."
		},
		{
			"taskId": 65,
			"taskTitle": "Add Bun Support for Taskmaster Installation",
			"complexityScore": 5,
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of Bun support for Taskmaster installation into detailed subtasks covering package.json updates, Bun-specific configuration, compatibility testing, documentation updates, package manager detection, and troubleshooting guidance.",
			"reasoning": "This task involves ensuring the Taskmaster package works seamlessly with Bun. It requires updating package.json, adding Bun-specific configuration, testing compatibility, and updating documentation. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path."
		},
		{
			"taskId": 66,
			"taskTitle": "Support Status Filtering in Show Command for Subtasks",
			"complexityScore": 3,
			"recommendedSubtasks": 4,
			"expansionPrompt": "Break down the implementation of status filtering in the show command for subtasks into detailed subtasks covering command parser updates, filtering logic, help documentation, and testing.",
			"reasoning": "This task involves enhancing the show command to support status-based filtering of subtasks. It's relatively straightforward, requiring updates to the command parser, filtering logic, and documentation. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
			"expansionPrompt": "The current 6 subtasks for adding Bun support appear well-structured. Consider if any additional subtasks are needed for handling Bun-specific issues, performance testing, or documentation updates.",
			"reasoning": "This task involves adding support for the newer Bun package manager. The complexity is slightly higher than the other package manager tasks due to Bun's differences from Node.js and potential compatibility issues. The 6 existing subtasks cover the main implementation areas from research to documentation."
		},
		{
			"taskId": 67,
			"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of CLI JSON output and Cursor keybindings integration into detailed subtasks covering JSON flag implementation, output formatting, keybindings command structure, OS detection, file handling, and keybinding definition.",
			"reasoning": "This task involves two main components: adding JSON output to CLI commands and creating a new command for Cursor keybindings. It requires implementing a JSON flag, formatting output, creating a new command, detecting OS, handling files, and defining keybindings. The task already has 5 subtasks, which is appropriate for its complexity."
			"complexityScore": 5,
			"recommendedSubtasks": 5,
			"expansionPrompt": "The current 5 subtasks for implementing JSON output and Cursor keybindings appear well-structured. Consider if any additional subtasks are needed for testing across different operating systems, documentation updates, or user experience improvements.",
			"reasoning": "This task involves two distinct features: adding JSON output to CLI commands and creating a keybindings installation command. The complexity is moderate as it requires careful handling of different output formats and OS-specific file paths. The 5 existing subtasks cover the main implementation areas for both features."
		},
		{
			"taskId": 68,
			"taskTitle": "Ability to create tasks without parsing PRD",
			"complexityScore": 4,
			"recommendedSubtasks": 4,
			"expansionPrompt": "Break down the implementation of creating tasks without parsing PRD into detailed subtasks covering tasks.json creation, function reuse from parse-prd, command modification, and documentation.",
			"reasoning": "This task involves modifying the task creation process to work without a PRD. It's relatively straightforward, requiring tasks.json creation, function reuse, command modification, and documentation. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
			"complexityScore": 3,
			"recommendedSubtasks": 2,
			"expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
			"reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
		},
		{
			"taskId": 69,
			"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
			"complexityScore": 5,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the enhancement of analyze-complexity for specific task IDs into detailed subtasks covering core logic modification, CLI command updates, MCP tool updates, report handling, and testing.",
			"reasoning": "This task involves modifying the analyze-complexity feature to support analyzing specific task IDs. It requires updating core logic, CLI commands, MCP tools, and report handling. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
			"recommendedSubtasks": 4,
			"expansionPrompt": "The current 4 subtasks for enhancing the analyze-complexity feature appear well-structured. Consider if any additional subtasks are needed for performance optimization with large task sets or visualization improvements.",
			"reasoning": "This task involves modifying the existing analyze-complexity feature to support analyzing specific task IDs and updating reports. The complexity is moderate as it requires careful handling of report merging and filtering logic. The 4 existing subtasks cover the main implementation areas from core logic to testing."
		},
		{
			"taskId": 70,
			"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
			"complexityScore": 6,
			"recommendedSubtasks": 6,
			"expansionPrompt": "Break down the implementation of the 'diagram' command for Mermaid diagram generation into detailed subtasks covering command structure, task data collection, diagram generation, rendering options, file export, and documentation.",
			"reasoning": "This task involves creating a new command that generates Mermaid diagrams for task dependencies. It requires implementing command structure, collecting task data, generating diagrams, providing rendering options, and supporting file export. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path."
			"recommendedSubtasks": 4,
			"expansionPrompt": "The current 4 subtasks for implementing the 'diagram' command appear well-structured. Consider if any additional subtasks are needed for handling large dependency graphs, additional output formats, or integration with existing visualization tools.",
			"reasoning": "This task involves creating a new command that generates Mermaid diagrams to visualize task dependencies. The complexity is moderate as it requires parsing task relationships, generating proper Mermaid syntax, and handling various output options. The 4 existing subtasks cover the main implementation areas from interface design to documentation."
		},
		{
			"taskId": 72,
			"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
			"complexityScore": 7,
			"recommendedSubtasks": 7,
			"expansionPrompt": "Break down the implementation of PDF generation for project progress and dependency overview into detailed subtasks covering command structure, data collection, progress summary generation, dependency visualization, PDF creation, styling, and documentation.",
			"reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependencies. It requires implementing command structure, collecting data, generating summaries, visualizing dependencies, creating PDFs, and styling the output. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this moderately complex feature."
		},
		{
			"taskId": 73,
			"taskTitle": "Implement Custom Model ID Support for Ollama/OpenRouter",
			"complexityScore": 5,
			"recommendedSubtasks": 5,
			"expansionPrompt": "Break down the implementation of custom model ID support for Ollama/OpenRouter into detailed subtasks covering CLI flag implementation, model validation, interactive setup, configuration updates, and documentation.",
			"reasoning": "This task involves allowing users to specify custom model IDs for Ollama and OpenRouter. It requires implementing CLI flags, validating models, updating the interactive setup, modifying configuration, and updating documentation. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
			"recommendedSubtasks": 6,
			"expansionPrompt": "The current 6 subtasks for implementing PDF generation appear comprehensive. Consider if any additional subtasks are needed for handling large projects, additional visualization options, or integration with existing reporting tools.",
			"reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependency visualization. The complexity is high due to the need for PDF generation, data collection, and visualization integration. The 6 existing subtasks cover the main implementation areas from library selection to export options."
		},
		{
			"taskId": 75,
			"taskTitle": "Integrate Google Search Grounding for Research Role",
			"complexityScore": 4,
			"complexityScore": 5,
			"recommendedSubtasks": 4,
			"expansionPrompt": "Break down the integration of Google Search Grounding for research role into detailed subtasks covering AI service layer modification, conditional logic implementation, model configuration updates, and testing.",
			"reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for the research role. It's relatively straightforward, requiring modifications to the AI service, implementing conditional logic, updating model configurations, and testing. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
			"expansionPrompt": "The current 4 subtasks for integrating Google Search Grounding appear well-structured. Consider if any additional subtasks are needed for testing with different query types, error handling, or performance optimization.",
			"reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for research roles. The complexity is moderate as it requires careful integration with the existing AI service architecture and conditional logic. The 4 existing subtasks cover the main implementation areas from service layer modification to testing."
		},
		{
			"taskId": 76,
			"taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
			"complexityScore": 8,
			"recommendedSubtasks": 7,
			"expansionPrompt": "The current 7 subtasks for developing the E2E test framework appear comprehensive. Consider if any additional subtasks are needed for test result reporting, CI/CD integration, or performance benchmarking.",
			"reasoning": "This task involves creating a sophisticated end-to-end testing framework for the MCP server. The complexity is high due to the need for subprocess management, protocol handling, and robust test case definition. The 7 existing subtasks cover the main implementation areas from architecture to documentation."
		},
		{
			"taskId": 77,
			"taskTitle": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
			"complexityScore": 7,
			"recommendedSubtasks": 18,
			"expansionPrompt": "The current 18 subtasks for implementing AI usage telemetry appear very comprehensive. Consider if any additional subtasks are needed for security hardening, privacy compliance, or user feedback collection.",
			"reasoning": "This task involves creating a telemetry system to track AI usage metrics. The complexity is high due to the need for secure data transmission, comprehensive data collection, and integration across multiple commands. The 18 existing subtasks are extremely detailed and cover all aspects of implementation from core utility to provider-specific updates."
		},
		{
			"taskId": 80,
			"taskTitle": "Implement Unique User ID Generation and Storage During Installation",
			"complexityScore": 4,
			"recommendedSubtasks": 5,
			"expansionPrompt": "The current 5 subtasks for implementing unique user ID generation appear well-structured. Consider if any additional subtasks are needed for privacy compliance, security auditing, or integration with the telemetry system.",
			"reasoning": "This task involves generating and storing a unique user identifier during installation. The complexity is relatively low as it primarily involves UUID generation and configuration file management. The 5 existing subtasks cover the main implementation areas from script structure to documentation."
		},
		{
			"taskId": 81,
			"taskTitle": "Task #81: Implement Comprehensive Local Telemetry System with Future Server Integration Capability",
			"complexityScore": 8,
			"recommendedSubtasks": 6,
			"expansionPrompt": "The current 6 subtasks for implementing the comprehensive local telemetry system appear well-structured. Consider if any additional subtasks are needed for data migration, storage optimization, or visualization tools.",
			"reasoning": "This task involves expanding the telemetry system to capture additional metrics and implement local storage with future server integration capability. The complexity is high due to the breadth of data collection, storage requirements, and privacy considerations. The 6 existing subtasks cover the main implementation areas from data collection to user-facing benefits."
		},
		{
			"taskId": 82,
			"taskTitle": "Update supported-models.json with token limit fields",
			"complexityScore": 3,
			"recommendedSubtasks": 1,
			"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on researching accurate token limit values for each model and ensuring backward compatibility.",
			"reasoning": "This task involves a simple update to the supported-models.json file to include new token limit fields. The complexity is low as it primarily involves research and data entry. No subtasks are necessary as the task is well-defined and focused."
		},
		{
			"taskId": 83,
			"taskTitle": "Update config-manager.js defaults and getters",
			"complexityScore": 4,
			"recommendedSubtasks": 1,
			"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on updating the DEFAULTS object and related getter functions while maintaining backward compatibility.",
			"reasoning": "This task involves updating the config-manager.js module to replace maxTokens with more specific token limit fields. The complexity is relatively low as it primarily involves modifying existing code rather than creating new functionality. No subtasks are necessary as the task is well-defined and focused."
		},
		{
			"taskId": 84,
			"taskTitle": "Implement token counting utility",
			"complexityScore": 5,
			"recommendedSubtasks": 1,
			"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms.",
			"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
		},
		{
			"taskId": 85,
			"taskTitle": "Update ai-services-unified.js for dynamic token limits",
			"complexityScore": 6,
			"recommendedSubtasks": 1,
			"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing dynamic token limit adjustment based on input length and model capabilities.",
			"reasoning": "This task involves modifying the AI service runner to use the new token counting utility and dynamically adjust output token limits. The complexity is moderate to high as it requires careful integration with existing code and handling various edge cases. No subtasks are necessary as the task is well-defined and focused."
		},
		{
			"taskId": 86,
			"taskTitle": "Update .taskmasterconfig schema and user guide",
			"complexityScore": 4,
			"recommendedSubtasks": 1,
			"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on creating clear migration guidance and updating documentation.",
			"reasoning": "This task involves creating a migration guide for users to update their configuration files and documenting the new token limit options. The complexity is relatively low as it primarily involves documentation and schema validation. No subtasks are necessary as the task is well-defined and focused."
		},
		{
			"taskId": 87,
			"taskTitle": "Implement validation and error handling",
			"complexityScore": 5,
			"recommendedSubtasks": 1,
			"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on comprehensive validation and helpful error messages throughout the system.",
			"reasoning": "This task involves adding validation and error handling for token limits throughout the system. The complexity is moderate as it requires careful integration with multiple components and creating helpful error messages. No subtasks are necessary as the task is well-defined and focused."
		}
	]
}
Some files were not shown because too many files have changed in this diff.