Compare commits
1 commit
v0.13.1 ... fix/mcp.to

| Author | SHA1 | Date |
|---|---|---|
|  | 4b540fe7da |  |
.changeset/beige-rats-accept.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+- Add support for Google Gemini models via Vercel AI SDK integration.

.changeset/blue-spies-kick.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+Add xAI provider and Grok models support
.changeset/cuddly-zebras-matter.md — new file (8 lines)
@@ -0,0 +1,8 @@
+---
+'task-master-ai': minor
+---
+
+feat(expand): Enhance `expand` and `expand-all` commands
+
+- Integrate `task-complexity-report.json` to automatically determine the number of subtasks and use tailored prompts for expansion based on prior analysis. You no longer need to try copy-pasting the recommended prompt. If it exists, it will use it for you. You can just run `task-master update --id=[id of task] --research` and it will use that prompt automatically. No extra prompt needed.
+- Change default behavior to *append* new subtasks to existing ones. Use the `--force` flag to clear existing subtasks before expanding. This is helpful if you need to add more subtasks to a task but you want to do it by the batch from a given prompt. Use force if you want to start fresh with a task's subtasks.
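To make the append-versus-force behavior in the changeset above concrete, here is a minimal sketch of how expansion results could be merged into a task; the function shape and field names are illustrative assumptions, not code from this branch.

```js
// Illustrative sketch only: how --force could clear existing subtasks before
// appending newly generated ones (names and structure assumed, not from this PR).
function applyExpansion(task, generatedSubtasks, { force = false } = {}) {
  if (force) {
    task.subtasks = []; // --force: start fresh with this task's subtasks
  }
  const existing = task.subtasks ?? [];
  // Default behavior: append, continuing the subtask ID sequence
  const nextId = existing.length + 1;
  task.subtasks = [
    ...existing,
    ...generatedSubtasks.map((st, i) => ({ ...st, id: nextId + i }))
  ];
  return task;
}
```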
.changeset/curvy-candies-eat.md — new file (9 lines)
@@ -0,0 +1,9 @@
+---
+'task-master-ai': patch
+---
+
+Better support for file paths on Windows, Linux & WSL.
+
+- Standardizes handling of different path formats (URI encoded, Windows, Linux, WSL).
+- Ensures tools receive a clean, absolute path suitable for the server OS.
+- Simplifies tool implementation by centralizing normalization logic.
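The path-handling changeset above centralizes normalization. A rough sketch of what such a normalizer could look like follows; the helper name and the exact heuristics are assumptions for illustration, not the branch's actual implementation.

```js
// Hypothetical sketch of centralized path normalization (function name and
// heuristics assumed, not taken from this PR).
import path from 'path';

export function normalizeProjectPath(rawPath) {
  // Decode URI-encoded paths such as "/c%3A/Users/me/project"
  let p = decodeURIComponent(rawPath);

  // Strip a file:// scheme if a client passed a URI instead of a path
  p = p.replace(/^file:\/\//, '');

  // Convert Windows-style "C:\Users\me" or "/c:/Users/me" into a WSL-style
  // path when running on Linux; otherwise keep the drive-letter form
  const windowsDrive = p.match(/^\/?([a-zA-Z]):[\\/](.*)$/);
  if (windowsDrive && process.platform !== 'win32') {
    p = `/mnt/${windowsDrive[1].toLowerCase()}/${windowsDrive[2].replace(/\\/g, '/')}`;
  }

  // Always hand tools an absolute path for the server OS
  return path.resolve(p);
}
```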
.changeset/easy-toys-wash.md — new file (7 lines)
@@ -0,0 +1,7 @@
+---
+'task-master-ai': minor
+---
+
+Adds support for the OpenRouter AI provider. Users can now configure models available through OpenRouter (requiring an `OPENROUTER_API_KEY`) via the `task-master models` command, granting access to a wide range of additional LLMs.
+- IMPORTANT FYI ABOUT OPENROUTER: Taskmaster relies on AI SDK, which itself relies on tool use. It looks like **free** models sometimes do not include tool use. For example, Gemini 2.5 pro (free) failed via OpenRouter (no tool use) but worked fine on the paid version of the model. Custom model support for Open Router is considered experimental and likely will not be further improved for some time.
+

.changeset/every-stars-sell.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+Add integration for Roo Code
.changeset/fine-monkeys-eat.md — new file (8 lines)
@@ -0,0 +1,8 @@
+---
+'task-master-ai': patch
+---
+
+Improved update-subtask
+- Now it has context about the parent task details
+- It also has context about the subtask before it and the subtask after it (if they exist)
+- Not passing all subtasks to stay token efficient
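For readers skimming the update-subtask changeset above, a small sketch of how that surrounding context could be assembled without sending every subtask; the function and field names are assumed, not taken from this branch.

```js
// Illustrative sketch (names assumed, not from this PR): build prompt context
// from the parent task plus only the neighboring subtasks, to stay token-efficient.
function buildSubtaskContext(parentTask, subtaskId) {
  const index = parentTask.subtasks.findIndex((st) => st.id === subtaskId);
  if (index === -1) return null;

  return {
    parent: { id: parentTask.id, title: parentTask.title, details: parentTask.details },
    previous: index > 0 ? parentTask.subtasks[index - 1] : null,
    next: index < parentTask.subtasks.length - 1 ? parentTask.subtasks[index + 1] : null,
    current: parentTask.subtasks[index]
  };
}
```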
.changeset/fine-signs-add.md — new file (13 lines)
@@ -0,0 +1,13 @@
+---
+'task-master-ai': patch
+---
+
+Improve and adjust `init` command for robustness and updated dependencies.
+
+- **Update Initialization Dependencies:** Ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
+- **Silence `npm install` during `init`:** Prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
+- **Improve Conditional Model Setup:** Reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
+- **Refactor `init.js`:** Remove internal `isInteractive` flag logic.
+- **Update `init` Instructions:** Tweak the "Getting Started" text displayed after `init`.
+- **Fix MCP Server Launch:** Update `.cursor/mcp.json` template to use `node ./mcp-server/server.js` instead of `npx task-master-mcp`.
+- **Update Default Model:** Change the default main model in the `.taskmasterconfig` template.
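The "Improve Conditional Model Setup" bullet above hinges on checking silent mode rather than passing flags around. A hedged sketch of that pattern follows; the import path and the command invocation are assumptions for illustration only.

```js
// Minimal sketch (assumed shape, not from this PR) of skipping the interactive
// model setup when init runs non-interactively (e.g. `init -y` or via MCP).
import { execSync } from 'child_process';
import { isSilentMode } from './modules/utils.js'; // path assumed

function maybeRunModelSetup(projectRoot) {
  if (isSilentMode()) {
    // Non-interactive init: keep the defaults from the .taskmasterconfig template
    return;
  }
  // Interactive init: hand control to the interactive models setup
  execSync('npx task-master models --setup', { cwd: projectRoot, stdio: 'inherit' });
}
```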
.changeset/gentle-views-jump.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+Fixes an issue with add-task which did not use the manually defined properties and still needlessly hit the AI endpoint.

.changeset/mighty-mirrors-watch.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': minor
+---
+
+Adds model management and new configuration file .taskmasterconfig which houses the models used for main, research and fallback. Adds models command and setter flags. Adds a --setup flag with an interactive setup. We should be calling this during init. Shows a table of active and available models when models is called without flags. Includes SWE scores and token costs, which are manually entered into the supported_models.json, the new place where models are defined for support. Config-manager.js is the core module responsible for managing the new config.

.changeset/neat-donkeys-shave.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+Fixes an issue that prevented remove-subtask with comma separated tasks/subtasks from being deleted (only the first ID was being deleted). Closes #140
.changeset/nine-rocks-sink.md — new file (10 lines)
@@ -0,0 +1,10 @@
+---
+'task-master-ai': patch
+---
+
+Improves next command to be subtask-aware
+- The logic for determining the "next task" (findNextTask function, used by task-master next and the next_task MCP tool) has been significantly improved. Previously, it only considered top-level tasks, making its recommendation less useful when a parent task containing subtasks was already marked 'in-progress'.
+- The updated logic now prioritizes finding the next available subtask within any 'in-progress' parent task, considering subtask dependencies and priority.
+- If no suitable subtask is found within active parent tasks, it falls back to recommending the next eligible top-level task based on the original criteria (status, dependencies, priority).
+
+This change makes the next command much more relevant and helpful during the implementation phase of complex tasks.
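A simplified sketch of the subtask-aware selection described in the changeset above; the field names and the exact priority/dependency rules are assumptions, and the repository's findNextTask may differ in detail.

```js
// Simplified, assumed logic: prefer an eligible subtask of an in-progress
// parent, then fall back to the next eligible top-level task.
function findNextItem(tasks) {
  const isDone = (id) =>
    tasks.some((t) => String(t.id) === String(id) && t.status === 'done');

  // 1. Prefer a pending subtask inside any in-progress parent task
  for (const parent of tasks.filter((t) => t.status === 'in-progress')) {
    const candidate = (parent.subtasks || []).find(
      (st) => st.status === 'pending' && (st.dependencies || []).every(isDone)
    );
    if (candidate) return { ...candidate, id: `${parent.id}.${candidate.id}` };
  }

  // 2. Otherwise recommend the next eligible top-level task by priority
  const priorityRank = { high: 0, medium: 1, low: 2 };
  return (
    tasks
      .filter((t) => t.status === 'pending' && (t.dependencies || []).every(isDone))
      .sort(
        (a, b) => (priorityRank[a.priority] ?? 1) - (priorityRank[b.priority] ?? 1)
      )[0] || null
  );
}
```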
.changeset/ninety-ghosts-relax.md — new file (11 lines)
@@ -0,0 +1,11 @@
+---
+'task-master-ai': minor
+---
+
+Adds custom model ID support for Ollama and OpenRouter providers.
+- Adds the `--ollama` and `--openrouter` flags to `task-master models --set-<role>` command to set models for those providers outside of the supported models list.
+- Updated `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
+- Implemented live validation against OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
+- Refined logic to prioritize explicit provider flags/choices over internal model list lookups in case of ID conflicts.
+- Added warnings when setting custom/unvalidated models.
+- We obviously don't recommend going with a custom, unproven model. If you do and find performance is good, please let us know so we can add it to the list of supported models.
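The live-validation bullet above checks a custom ID against the OpenRouter model list. A minimal sketch of such a check follows, assuming the standard OpenRouter endpoint shape; it is not the branch's actual code.

```js
// Hedged sketch: validate a custom OpenRouter model ID against /api/v1/models.
// Uses the global fetch available in Node 18+.
async function isValidOpenRouterModel(modelId) {
  const response = await fetch('https://openrouter.ai/api/v1/models');
  if (!response.ok) {
    throw new Error(`OpenRouter model list request failed: ${response.status}`);
  }
  const { data } = await response.json();
  return Array.isArray(data) && data.some((model) => model.id === modelId);
}
```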
.changeset/ninety-wombats-pull.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+Add `--status` flag to `show` command to filter displayed subtasks.

.changeset/public-cooks-fetch.md — new file (7 lines)
@@ -0,0 +1,7 @@
+---
+'task-master-ai': minor
+---
+
+Integrate OpenAI as a new AI provider.
+- Enhance `models` command/tool to display API key status.
+- Implement model-specific `maxTokens` override based on `supported-models.json` to save you if you use an incorrect max token value.
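As a hedged illustration of the model-specific `maxTokens` override mentioned above — the structure of `supported-models.json` and the field names are assumed here, not taken from the repository:

```js
// Assumed sketch: clamp a configured maxTokens to the model-specific limit
// declared in supported-models.json (file layout assumed for illustration).
import { readFileSync } from 'fs';

function resolveMaxTokens(provider, modelId, configuredMaxTokens) {
  const supported = JSON.parse(
    readFileSync(new URL('./supported-models.json', import.meta.url), 'utf8')
  );
  const entry = (supported[provider] || []).find((m) => m.id === modelId);
  const limit = entry?.max_tokens;
  // Fall back to the configured value when no limit is known; otherwise clamp
  return limit ? Math.min(configuredMaxTokens, limit) : configuredMaxTokens;
}
```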
.changeset/tricky-papayas-hang.md — new file (9 lines)
@@ -0,0 +1,9 @@
+---
+'task-master-ai': minor
+---
+Tweaks Perplexity AI calls for research mode to max out input tokens and get day-fresh information
+- Forces temp at 0.1 for highly deterministic output, no variations
+- Adds a system prompt to further improve the output
+- Correctly uses the maximum input tokens (8,719, used 8,700) for Perplexity
+- Specifies to use a high degree of research across the web
+- Specifies to use information that is as fresh as today; this supports things like capturing brand-new announcements (such as new GPT models) and being able to query for them in research. 🔥
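A sketch of the research-mode call shape described above, using Perplexity's OpenAI-compatible endpoint; the model name comes from elsewhere in this compare (the `sonar-pro` default), while the prompt wording and the way the token figure is applied are assumptions, not the branch's code.

```js
// Hedged sketch of the research-mode call parameters described in the changeset.
import OpenAI from 'openai';

const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: 'https://api.perplexity.ai'
});

async function research(prompt) {
  const completion = await perplexity.chat.completions.create({
    model: 'sonar-pro',
    temperature: 0.1, // highly deterministic output, no variations
    max_tokens: 8700, // the changeset cites an 8,719 limit and settles on 8,700
    messages: [
      {
        role: 'system',
        content:
          'Perform a thorough, up-to-date web search and prefer information published as recently as today.'
      },
      { role: 'user', content: prompt }
    ]
  });
  return completion.choices[0].message.content;
}
```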
.changeset/violet-papayas-see.md — new file (5 lines)
@@ -0,0 +1,5 @@
+---
+'task-master-ai': patch
+---
+
+Fix --task to --num-tasks in ui + related tests - issue #324

.changeset/violet-parrots-march.md — new file (9 lines)
@@ -0,0 +1,9 @@
+---
+'task-master-ai': patch
+---
+
+Adds a 'models' CLI and MCP command to get the current model configuration, available models, and gives the ability to set main/research/fallback models.
+- In the CLI, `task-master models` shows the current models config. Using the `--setup` flag launches an interactive setup that allows you to easily select the models you want to use for each of the three roles. Use `q` during the interactive setup to cancel the setup.
+- In the MCP, responses are simplified in RESTful format (instead of the full CLI output). The agent can use the `models` tool with different arguments, including `listAvailableModels` to get available models. Run without arguments, it will return the current configuration. Arguments are available to set the model for each of the three roles. This allows you to manage Taskmaster AI providers and models directly from either the CLI or MCP or both.
+- Updated the CLI help menu when you run `task-master` to include missing commands and .taskmasterconfig information.
+- Adds `--research` flag to `add-task` so you can hit up Perplexity right from the add-task flow, rather than having to add a task and then update it.
CHANGELOG.md — 72 changed lines
@@ -1,77 +1,5 @@
 # task-master-ai

-## 0.13.1
-
-### Patch Changes
-
-- [#399](https://github.com/eyaltoledano/claude-task-master/pull/399) [`734a4fd`](https://github.com/eyaltoledano/claude-task-master/commit/734a4fdcfc89c2e089255618cf940561ad13a3c8) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix ERR_MODULE_NOT_FOUND when trying to run MCP Server
-
-## 0.13.0
-
-### Minor Changes
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ef782ff`](https://github.com/eyaltoledano/claude-task-master/commit/ef782ff5bd4ceb3ed0dc9ea82087aae5f79ac933) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - feat(expand): Enhance `expand` and `expand-all` commands
-
-  - Integrate `task-complexity-report.json` to automatically determine the number of subtasks and use tailored prompts for expansion based on prior analysis. You no longer need to try copy-pasting the recommended prompt. If it exists, it will use it for you. You can just run `task-master update --id=[id of task] --research` and it will use that prompt automatically. No extra prompt needed.
-  - Change default behavior to _append_ new subtasks to existing ones. Use the `--force` flag to clear existing subtasks before expanding. This is helpful if you need to add more subtasks to a task but you want to do it by the batch from a given prompt. Use force if you want to start fresh with a task's subtasks.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`87d97bb`](https://github.com/eyaltoledano/claude-task-master/commit/87d97bba00d84e905756d46ef96b2d5b984e0f38) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds support for the OpenRouter AI provider. Users can now configure models available through OpenRouter (requiring an `OPENROUTER_API_KEY`) via the `task-master models` command, granting access to a wide range of additional LLMs. - IMPORTANT FYI ABOUT OPENROUTER: Taskmaster relies on AI SDK, which itself relies on tool use. It looks like **free** models sometimes do not include tool use. For example, Gemini 2.5 pro (free) failed via OpenRouter (no tool use) but worked fine on the paid version of the model. Custom model support for Open Router is considered experimental and likely will not be further improved for some time.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`1ab836f`](https://github.com/eyaltoledano/claude-task-master/commit/1ab836f191cb8969153593a9a0bd47fc9aa4a831) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds model management and new configuration file .taskmasterconfig which houses the models used for main, research and fallback. Adds models command and setter flags. Adds a --setup flag with an interactive setup. We should be calling this during init. Shows a table of active and available models when models is called without flags. Includes SWE scores and token costs, which are manually entered into the supported_models.json, the new place where models are defined for support. Config-manager.js is the core module responsible for managing the new config.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`c8722b0`](https://github.com/eyaltoledano/claude-task-master/commit/c8722b0a7a443a73b95d1bcd4a0b68e0fce2a1cd) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds custom model ID support for Ollama and OpenRouter providers.
-
-  - Adds the `--ollama` and `--openrouter` flags to `task-master models --set-<role>` command to set models for those providers outside of the supported models list.
-  - Updated `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
-  - Implemented live validation against OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
-  - Refined logic to prioritize explicit provider flags/choices over internal model list lookups in case of ID conflicts.
-  - Added warnings when setting custom/unvalidated models.
-  - We obviously don't recommend going with a custom, unproven model. If you do and find performance is good, please let us know so we can add it to the list of supported models.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`2517bc1`](https://github.com/eyaltoledano/claude-task-master/commit/2517bc112c9a497110f3286ca4bfb4130c9addcb) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Integrate OpenAI as a new AI provider. - Enhance `models` command/tool to display API key status. - Implement model-specific `maxTokens` override based on `supported-models.json` to save you if you use an incorrect max token value.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`9a48278`](https://github.com/eyaltoledano/claude-task-master/commit/9a482789f7894f57f655fb8d30ba68542bd0df63) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Tweaks Perplexity AI calls for research mode to max out input tokens and get day-fresh information - Forces temp at 0.1 for highly deterministic output, no variations - Adds a system prompt to further improve the output - Correctly uses the maximum input tokens (8,719, used 8,700) for Perplexity - Specifies to use a high degree of research across the web - Specifies to use information that is as fresh as today; this supports things like capturing brand-new announcements (such as new GPT models) and being able to query for them in research. 🔥
-
-### Patch Changes
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`842eaf7`](https://github.com/eyaltoledano/claude-task-master/commit/842eaf722498ddf7307800b4cdcef4ac4fd7e5b0) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - - Add support for Google Gemini models via Vercel AI SDK integration.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ed79d4f`](https://github.com/eyaltoledano/claude-task-master/commit/ed79d4f4735dfab4124fa189214c0bd5e23a6860) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add xAI provider and Grok models support
-
-- [#378](https://github.com/eyaltoledano/claude-task-master/pull/378) [`ad89253`](https://github.com/eyaltoledano/claude-task-master/commit/ad89253e313a395637aa48b9f92cc39b1ef94ad8) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Better support for file paths on Windows, Linux & WSL.
-
-  - Standardizes handling of different path formats (URI encoded, Windows, Linux, WSL).
-  - Ensures tools receive a clean, absolute path suitable for the server OS.
-  - Simplifies tool implementation by centralizing normalization logic.
-
-- [#285](https://github.com/eyaltoledano/claude-task-master/pull/285) [`2acba94`](https://github.com/eyaltoledano/claude-task-master/commit/2acba945c0afee9460d8af18814c87e80f747e9f) Thanks [@neno-is-ooo](https://github.com/neno-is-ooo)! - Add integration for Roo Code
-
-- [#378](https://github.com/eyaltoledano/claude-task-master/pull/378) [`d63964a`](https://github.com/eyaltoledano/claude-task-master/commit/d63964a10eed9be17856757661ff817ad6bacfdc) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improved update-subtask - Now it has context about the parent task details - It also has context about the subtask before it and the subtask after it (if they exist) - Not passing all subtasks to stay token efficient
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`5f504fa`](https://github.com/eyaltoledano/claude-task-master/commit/5f504fafb8bdaa0043c2d20dee8bbb8ec2040d85) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improve and adjust `init` command for robustness and updated dependencies.
-
-  - **Update Initialization Dependencies:** Ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
-  - **Silence `npm install` during `init`:** Prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
-  - **Improve Conditional Model Setup:** Reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
-  - **Refactor `init.js`:** Remove internal `isInteractive` flag logic.
-  - **Update `init` Instructions:** Tweak the "Getting Started" text displayed after `init`.
-  - **Fix MCP Server Launch:** Update `.cursor/mcp.json` template to use `node ./mcp-server/server.js` instead of `npx task-master-mcp`.
-  - **Update Default Model:** Change the default main model in the `.taskmasterconfig` template.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`96aeeff`](https://github.com/eyaltoledano/claude-task-master/commit/96aeeffc195372722c6a07370540e235bfe0e4d8) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fixes an issue with add-task which did not use the manually defined properties and still needlessly hit the AI endpoint.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`5aea93d`](https://github.com/eyaltoledano/claude-task-master/commit/5aea93d4c0490c242d7d7042a210611977848e0a) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fixes an issue that prevented remove-subtask with comma separated tasks/subtasks from being deleted (only the first ID was being deleted). Closes #140
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`66ac9ab`](https://github.com/eyaltoledano/claude-task-master/commit/66ac9ab9f66d006da518d6e8a3244e708af2764d) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improves next command to be subtask-aware - The logic for determining the "next task" (findNextTask function, used by task-master next and the next_task MCP tool) has been significantly improved. Previously, it only considered top-level tasks, making its recommendation less useful when a parent task containing subtasks was already marked 'in-progress'. - The updated logic now prioritizes finding the next available subtask within any 'in-progress' parent task, considering subtask dependencies and priority. - If no suitable subtask is found within active parent tasks, it falls back to recommending the next eligible top-level task based on the original criteria (status, dependencies, priority).
-
-  This change makes the next command much more relevant and helpful during the implementation phase of complex tasks.
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ca7b045`](https://github.com/eyaltoledano/claude-task-master/commit/ca7b0457f1dc65fd9484e92527d9fd6d69db758d) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add `--status` flag to `show` command to filter displayed subtasks.
-
-- [#328](https://github.com/eyaltoledano/claude-task-master/pull/328) [`5a2371b`](https://github.com/eyaltoledano/claude-task-master/commit/5a2371b7cc0c76f5e95d43921c1e8cc8081bf14e) Thanks [@knoxgraeme](https://github.com/knoxgraeme)! - Fix --task to --num-tasks in ui + related tests - issue #324
-
-- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`6cb213e`](https://github.com/eyaltoledano/claude-task-master/commit/6cb213ebbd51116ae0688e35b575d09443d17c3b) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds a 'models' CLI and MCP command to get the current model configuration, available models, and gives the ability to set main/research/fallback models. - In the CLI, `task-master models` shows the current models config. Using the `--setup` flag launches an interactive setup that allows you to easily select the models you want to use for each of the three roles. Use `q` during the interactive setup to cancel the setup. - In the MCP, responses are simplified in RESTful format (instead of the full CLI output). The agent can use the `models` tool with different arguments, including `listAvailableModels` to get available models. Run without arguments, it will return the current configuration. Arguments are available to set the model for each of the three roles. This allows you to manage Taskmaster AI providers and models directly from either the CLI or MCP or both. - Updated the CLI help menu when you run `task-master` to include missing commands and .taskmasterconfig information. - Adds `--research` flag to `add-task` so you can hit up Perplexity right from the add-task flow, rather than having to add a task and then update it.
-
 ## 0.12.1

 ### Patch Changes
@@ -47,7 +47,7 @@ npm install task-master-ai
 task-master init

 # If installed locally
-npx task-master init
+npx task-master-init
 ```

 This will prompt you for project details and set up a new project with the necessary files and structure.

@@ -89,7 +89,7 @@ Initialize a new project:
 task-master init

 # If installed locally
-npx task-master init
+npx task-master-init
 ```

 This will prompt you for project details and set up a new project with the necessary files and structure.
@@ -71,34 +71,24 @@ export async function nextTaskDirect(args, log) {
       data: {
         message:
           'No eligible next task found. All tasks are either completed or have unsatisfied dependencies',
-        nextTask: null
+        nextTask: null,
+        allTasks: data.tasks
       }
     };
   }
-
-  // Check if it's a subtask
-  const isSubtask =
-    typeof nextTask.id === 'string' && nextTask.id.includes('.');
-
-  const taskOrSubtask = isSubtask ? 'subtask' : 'task';
-
-  const additionalAdvice = isSubtask
-    ? 'Subtasks can be updated with timestamped details as you implement them. This is useful for tracking progress, marking milestones and insights (of successful or successive falures in attempting to implement the subtask). Research can be used when updating the subtask to collect up-to-date information, and can be helpful to solve a repeating problem the agent is unable to solve. It is a good idea to get-task the parent task to collect the overall context of the task, and to get-task the subtask to collect the specific details of the subtask.'
-    : 'Tasks can be updated to reflect a change in the direction of the task, or to reformulate the task per your prompt. Research can be used when updating the task to collect up-to-date information. It is best to update subtasks as you work on them, and to update the task for more high-level changes that may affect pending subtasks or the general direction of the task.';

   // Restore normal logging
   disableSilentMode();

   // Return the next task data with the full tasks array for reference
   log.info(
-    `Successfully found next task ${nextTask.id}: ${nextTask.title}. Is subtask: ${isSubtask}`
+    `Successfully found next task ${nextTask.id}: ${nextTask.title}`
   );
   return {
     success: true,
     data: {
       nextTask,
-      isSubtask,
-      nextSteps: `When ready to work on the ${taskOrSubtask}, use set-status to set the status to "in progress" ${additionalAdvice}`
+      allTasks: data.tasks
     }
   };
 } catch (error) {
@@ -34,17 +34,18 @@ export async function parsePRDDirect(args, log, context = {}) {
     projectRoot
   } = args;

-  // Create the standard logger wrapper
   const logWrapper = createLogWrapper(log);

   // --- Input Validation and Path Resolution ---
-  if (!projectRoot) {
-    logWrapper.error('parsePRDDirect requires a projectRoot argument.');
+  if (!projectRoot || !path.isAbsolute(projectRoot)) {
+    logWrapper.error(
+      'parsePRDDirect requires an absolute projectRoot argument.'
+    );
     return {
       success: false,
       error: {
         code: 'MISSING_ARGUMENT',
-        message: 'projectRoot is required.'
+        message: 'projectRoot is required and must be absolute.'
       }
     };
   }
@@ -56,7 +57,7 @@ export async function parsePRDDirect(args, log, context = {}) {
     };
   }

-  // Resolve input and output paths relative to projectRoot
+  // Resolve input and output paths relative to projectRoot if they aren't absolute
   const inputPath = path.resolve(projectRoot, inputArg);
   const outputPath = outputArg
     ? path.resolve(projectRoot, outputArg)
@@ -100,7 +101,7 @@ export async function parsePRDDirect(args, log, context = {}) {
       // Ensure positive number
       numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
       logWrapper.warn(
-        `Invalid numTasks value: ${numTasksArg}. Using default: ${numTasks}`
+        `Invalid numTasks value: ${numTasksArg}. Using default: 10`
       );
     }
   }
@@ -146,6 +147,7 @@ export async function parsePRDDirect(args, log, context = {}) {
         message: `Successfully parsed PRD and generated ${result.tasks.length} tasks.`,
         outputPath: outputPath,
         taskCount: result.tasks.length
+        // Optionally include tasks if needed by client: tasks: result.tasks
       }
     };
   } else {
@@ -1,122 +1,121 @@
 /**
  * update-tasks.js
- * Direct function implementation for updating tasks based on new context
+ * Direct function implementation for updating tasks based on new context/prompt
  */

-import path from 'path';
 import { updateTasks } from '../../../../scripts/modules/task-manager.js';
+import {
+  enableSilentMode,
+  disableSilentMode
+} from '../../../../scripts/modules/utils.js';
 import { createLogWrapper } from '../../tools/utils.js';

 /**
- * Direct function wrapper for updating tasks based on new context.
+ * Direct function wrapper for updating tasks based on new context/prompt.
  *
- * @param {Object} args - Command arguments containing projectRoot, from, prompt, research options.
+ * @param {Object} args - Command arguments containing from, prompt, research and tasksJsonPath.
  * @param {Object} log - Logger object.
  * @param {Object} context - Context object containing session data.
  * @returns {Promise<Object>} - Result object with success status and data/error information.
  */
 export async function updateTasksDirect(args, log, context = {}) {
-  const { session } = context;
-  const { from, prompt, research, file: fileArg, projectRoot } = args;
+  const { session } = context; // Extract session
+  const { tasksJsonPath, from, prompt, research, projectRoot } = args;

-  // Create the standard logger wrapper
-  const logWrapper = createLogWrapper(log);
-
-  // --- Input Validation ---
-  if (!projectRoot) {
-    logWrapper.error('updateTasksDirect requires a projectRoot argument.');
-    return {
-      success: false,
-      error: {
-        code: 'MISSING_ARGUMENT',
-        message: 'projectRoot is required.'
-      }
-    };
-  }
+  // --- Input Validation (Keep existing checks) ---
+  if (!tasksJsonPath) {
+    log.error('updateTasksDirect called without tasksJsonPath');
+    return {
+      success: false,
+      error: { code: 'MISSING_ARGUMENT', message: 'tasksJsonPath is required' },
+      fromCache: false
+    };
+  }
+  if (args.id !== undefined && from === undefined) {
+    // Keep 'from' vs 'id' check
+    const errorMessage =
+      "Use 'from' parameter, not 'id', or use 'update_task' tool.";
+    log.error(errorMessage);
+    return {
+      success: false,
+      error: { code: 'PARAMETER_MISMATCH', message: errorMessage },
+      fromCache: false
+    };
+  }

   if (!from) {
-    logWrapper.error('updateTasksDirect called without from ID');
-    return {
-      success: false,
-      error: {
-        code: 'MISSING_ARGUMENT',
-        message: 'Starting task ID (from) is required'
-      }
-    };
-  }
+    log.error('Missing from ID.');
+    return {
+      success: false,
+      error: { code: 'MISSING_FROM_ID', message: 'No from ID specified.' },
+      fromCache: false
+    };
+  }

   if (!prompt) {
-    logWrapper.error('updateTasksDirect called without prompt');
-    return {
-      success: false,
-      error: {
-        code: 'MISSING_ARGUMENT',
-        message: 'Update prompt is required'
-      }
-    };
-  }
+    log.error('Missing prompt.');
+    return {
+      success: false,
+      error: { code: 'MISSING_PROMPT', message: 'No prompt specified.' },
+      fromCache: false
+    };
+  }
+  let fromId;
+  try {
+    fromId = parseInt(from, 10);
+    if (isNaN(fromId) || fromId <= 0) throw new Error();
+  } catch {
+    log.error(`Invalid from ID: ${from}`);
+    return {
+      success: false,
+      error: {
+        code: 'INVALID_FROM_ID',
+        message: `Invalid from ID: ${from}. Must be a positive integer.`
+      },
+      fromCache: false
+    };
+  }
+  const useResearch = research === true;
+  // --- End Input Validation ---

-  // Resolve tasks file path
-  const tasksFile = fileArg
-    ? path.resolve(projectRoot, fileArg)
-    : path.resolve(projectRoot, 'tasks', 'tasks.json');
-
-  logWrapper.info(
-    `Updating tasks via direct function. From: ${from}, Research: ${research}, File: ${tasksFile}, ProjectRoot: ${projectRoot}`
-  );
+  log.info(
+    `Updating tasks from ID ${fromId}. Research: ${useResearch}. Project Root: ${projectRoot}`
+  );

   enableSilentMode(); // Enable silent mode
   try {
-    // Call the core updateTasks function
-    const result = await updateTasks(
-      tasksFile,
-      from,
-      prompt,
-      research,
-      {
-        session,
-        mcpLog: logWrapper,
-        projectRoot
-      },
-      'json'
-    );
+    // Create logger wrapper using the utility
+    const mcpLog = createLogWrapper(log);
+
+    // Execute core updateTasks function, passing session context AND projectRoot
+    await updateTasks(
+      tasksJsonPath,
+      fromId,
+      prompt,
+      useResearch,
+      // Pass context with logger wrapper, session, AND projectRoot
+      { mcpLog, session, projectRoot },
+      'json' // Explicitly request JSON format for MCP
+    );

-    // updateTasks returns { success: true, updatedTasks: [...] } on success
-    if (result && result.success && Array.isArray(result.updatedTasks)) {
-      logWrapper.success(
-        `Successfully updated ${result.updatedTasks.length} tasks.`
-      );
-      return {
-        success: true,
-        data: {
-          message: `Successfully updated ${result.updatedTasks.length} tasks.`,
-          tasksFile,
-          updatedCount: result.updatedTasks.length
-        }
-      };
-    } else {
-      // Handle case where core function didn't return expected success structure
-      logWrapper.error(
-        'Core updateTasks function did not return a successful structure.'
-      );
-      return {
-        success: false,
-        error: {
-          code: 'CORE_FUNCTION_ERROR',
-          message:
-            result?.message ||
-            'Core function failed to update tasks or returned unexpected result.'
-        }
-      };
-    }
+    // Since updateTasks modifies file and doesn't return data, create success message
+    return {
+      success: true,
+      data: {
+        message: `Successfully initiated update for tasks from ID ${fromId} based on the prompt.`,
+        fromId,
+        tasksPath: tasksJsonPath,
+        useResearch
+      },
+      fromCache: false // Modifies state
+    };
   } catch (error) {
-    logWrapper.error(`Error executing core updateTasks: ${error.message}`);
+    log.error(`Error executing core updateTasks: ${error.message}`);
     return {
       success: false,
       error: {
         code: 'UPDATE_TASKS_CORE_ERROR',
         message: error.message || 'Unknown error updating tasks'
-      }
+      },
+      fromCache: false
     };
   } finally {
     disableSilentMode(); // Ensure silent mode is disabled
package-lock.json — 34 changed lines (generated)
@@ -1,12 +1,12 @@
 {
   "name": "task-master-ai",
-  "version": "0.13.0",
+  "version": "0.12.1",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "task-master-ai",
-      "version": "0.13.0",
+      "version": "0.12.1",
       "license": "MIT WITH Commons-Clause",
       "dependencies": {
         "@ai-sdk/anthropic": "^1.2.10",
@@ -19,9 +19,6 @@
         "@anthropic-ai/sdk": "^0.39.0",
         "@openrouter/ai-sdk-provider": "^0.4.5",
         "ai": "^4.3.10",
-        "boxen": "^8.0.1",
-        "chalk": "^5.4.1",
-        "cli-table3": "^0.6.5",
         "commander": "^11.1.0",
         "cors": "^2.8.5",
         "dotenv": "^16.3.1",
@@ -37,8 +34,7 @@
         "ollama-ai-provider": "^1.2.0",
         "openai": "^4.89.0",
         "ora": "^8.2.0",
-        "uuid": "^11.1.0",
-        "zod": "^3.23.8"
+        "uuid": "^11.1.0"
       },
       "bin": {
         "task-master": "bin/task-master.js",
@@ -49,6 +45,9 @@
         "@changesets/changelog-github": "^0.5.1",
         "@changesets/cli": "^2.28.1",
         "@types/jest": "^29.5.14",
+        "boxen": "^8.0.1",
+        "chalk": "^5.4.1",
+        "cli-table3": "^0.6.5",
         "execa": "^8.0.1",
         "ink": "^5.0.1",
         "jest": "^29.7.0",
@@ -58,7 +57,8 @@
         "prettier": "^3.5.3",
         "react": "^18.3.1",
         "supertest": "^7.1.0",
-        "tsx": "^4.16.2"
+        "tsx": "^4.16.2",
+        "zod": "^3.23.8"
       },
       "engines": {
         "node": ">=14.0.0"
@@ -1238,6 +1238,7 @@
       "version": "1.5.0",
       "resolved": "https://registry.npmjs.org/@colors/colors/-/colors-1.5.0.tgz",
      "integrity": "sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ==",
+      "dev": true,
       "license": "MIT",
       "optional": true,
       "engines": {
@@ -3306,6 +3307,7 @@
       "version": "3.0.1",
       "resolved": "https://registry.npmjs.org/ansi-align/-/ansi-align-3.0.1.tgz",
       "integrity": "sha512-IOfwwBF5iczOjp/WeY4YxyjqAFMQoZufdQWDd19SEExbVLNXqvpzSJ/M7Za4/sCPmQ0+GRquoA7bGcINcxew6w==",
+      "dev": true,
       "license": "ISC",
       "dependencies": {
         "string-width": "^4.1.0"
@@ -3315,6 +3317,7 @@
       "version": "5.0.1",
       "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
       "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
+      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=8"
@@ -3324,12 +3327,14 @@
       "version": "8.0.0",
       "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
       "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
+      "dev": true,
       "license": "MIT"
     },
     "node_modules/ansi-align/node_modules/string-width": {
       "version": "4.2.3",
       "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
       "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "emoji-regex": "^8.0.0",
@@ -3344,6 +3349,7 @@
       "version": "6.0.1",
       "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
       "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "ansi-regex": "^5.0.1"
@@ -3693,6 +3699,7 @@
       "version": "8.0.1",
       "resolved": "https://registry.npmjs.org/boxen/-/boxen-8.0.1.tgz",
       "integrity": "sha512-F3PH5k5juxom4xktynS7MoFY+NUWH5LC4CnH11YB8NPew+HLpmBLCybSAEyb2F+4pRXhuhWqFesoQd6DAyc2hw==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "ansi-align": "^3.0.1",
@@ -3843,6 +3850,7 @@
       "version": "8.0.0",
       "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-8.0.0.tgz",
       "integrity": "sha512-8WB3Jcas3swSvjIeA2yvCJ+Miyz5l1ZmB6HFb9R1317dt9LCQoswg/BGrmAmkWVEszSrrg4RwmO46qIm2OEnSA==",
+      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=16"
@@ -3927,6 +3935,7 @@
       "version": "3.0.0",
       "resolved": "https://registry.npmjs.org/cli-boxes/-/cli-boxes-3.0.0.tgz",
       "integrity": "sha512-/lzGpEWL/8PfI0BmBOPRwp0c/wFNX1RdUML3jK/RcSBA9T8mZDdQpqYBKtCFTOfQbwPqWEOpjqW+Fnayc0969g==",
+      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=10"
@@ -3966,6 +3975,7 @@
       "version": "0.6.5",
       "resolved": "https://registry.npmjs.org/cli-table3/-/cli-table3-0.6.5.tgz",
       "integrity": "sha512-+W/5efTR7y5HRD7gACw9yQjqMVvEMLBHmboM/kPWam+H+Hmyrgjh6YncVKK122YZkXrLudzTuAukUw9FnMf7IQ==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "string-width": "^4.2.0"
@@ -3981,6 +3991,7 @@
       "version": "5.0.1",
       "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
       "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
+      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=8"
@@ -3990,12 +4001,14 @@
       "version": "8.0.0",
       "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
       "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
+      "dev": true,
       "license": "MIT"
     },
     "node_modules/cli-table3/node_modules/string-width": {
       "version": "4.2.3",
       "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
       "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "emoji-regex": "^8.0.0",
@@ -4010,6 +4023,7 @@
       "version": "6.0.1",
       "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
       "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "ansi-regex": "^5.0.1"
@@ -9474,6 +9488,7 @@
       "version": "4.37.0",
       "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-4.37.0.tgz",
       "integrity": "sha512-S/5/0kFftkq27FPNye0XM1e2NsnoD/3FS+pBmbjmmtLT6I+i344KoOf7pvXreaFsDamWeaJX55nczA1m5PsBDg==",
+      "dev": true,
       "license": "(MIT OR CC0-1.0)",
       "engines": {
         "node": ">=16"
@@ -9683,6 +9698,7 @@
       "version": "5.0.0",
       "resolved": "https://registry.npmjs.org/widest-line/-/widest-line-5.0.0.tgz",
       "integrity": "sha512-c9bZp7b5YtRj2wOe6dlj32MK+Bx/M/d+9VB2SHM1OtsUHR0aV0tdP6DWh/iMt0kWi1t5g1Iudu6hQRNd1A4PVA==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "string-width": "^7.0.0"
@@ -9698,6 +9714,7 @@
       "version": "9.0.0",
       "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-9.0.0.tgz",
       "integrity": "sha512-G8ura3S+3Z2G+mkgNRq8dqaFZAuxfsxpBB8OCTGRTCtp+l/v9nbFNmCUP1BZMts3G1142MsZfn6eeUKrr4PD1Q==",
+      "dev": true,
       "license": "MIT",
       "dependencies": {
         "ansi-styles": "^6.2.1",
@@ -9715,6 +9732,7 @@
       "version": "6.2.1",
       "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-6.2.1.tgz",
       "integrity": "sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug==",
+      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=12"
package.json — 14 changed lines
@@ -1,6 +1,6 @@
 {
   "name": "task-master-ai",
-  "version": "0.13.1",
+  "version": "0.12.1",
   "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
   "main": "index.js",
   "type": "module",
@@ -64,11 +64,7 @@
     "ollama-ai-provider": "^1.2.0",
     "openai": "^4.89.0",
     "ora": "^8.2.0",
-    "uuid": "^11.1.0",
-    "boxen": "^8.0.1",
-    "chalk": "^5.4.1",
-    "cli-table3": "^0.6.5",
-    "zod": "^3.23.8"
+    "uuid": "^11.1.0"
   },
   "engines": {
     "node": ">=14.0.0"
@@ -100,6 +96,9 @@
     "@changesets/changelog-github": "^0.5.1",
     "@changesets/cli": "^2.28.1",
     "@types/jest": "^29.5.14",
+    "boxen": "^8.0.1",
+    "chalk": "^5.4.1",
+    "cli-table3": "^0.6.5",
     "execa": "^8.0.1",
     "ink": "^5.0.1",
     "jest": "^29.7.0",
@@ -109,6 +108,7 @@
     "prettier": "^3.5.3",
     "react": "^18.3.1",
     "supertest": "^7.1.0",
-    "tsx": "^4.16.2"
+    "tsx": "^4.16.2",
+    "zod": "^3.23.8"
   }
 }
@@ -180,9 +180,9 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
|
|||||||
|
|
||||||
// Map template names to their actual source paths
|
// Map template names to their actual source paths
|
||||||
switch (templateName) {
|
switch (templateName) {
|
||||||
// case 'scripts_README.md':
|
case 'scripts_README.md':
|
||||||
// sourcePath = path.join(__dirname, '..', 'assets', 'scripts_README.md');
|
sourcePath = path.join(__dirname, '..', 'assets', 'scripts_README.md');
|
||||||
// break;
|
break;
|
||||||
case 'dev_workflow.mdc':
|
case 'dev_workflow.mdc':
|
||||||
sourcePath = path.join(
|
sourcePath = path.join(
|
||||||
__dirname,
|
__dirname,
|
||||||
@@ -219,8 +219,8 @@ function copyTemplateFile(templateName, targetPath, replacements = {}) {
|
|||||||
'self_improve.mdc'
|
'self_improve.mdc'
|
||||||
);
|
);
|
||||||
break;
|
break;
|
||||||
// case 'README-task-master.md':
|
case 'README-task-master.md':
|
||||||
// sourcePath = path.join(__dirname, '..', 'README-task-master.md');
|
sourcePath = path.join(__dirname, '..', 'README-task-master.md');
|
||||||
break;
|
break;
|
||||||
case 'windsurfrules':
|
case 'windsurfrules':
|
||||||
sourcePath = path.join(__dirname, '..', 'assets', '.windsurfrules');
|
sourcePath = path.join(__dirname, '..', 'assets', '.windsurfrules');
|
||||||
@@ -351,18 +351,18 @@ async function initializeProject(options = {}) {
 }

 // Debug logging only if not in silent mode
-// if (!isSilentMode()) {
+if (!isSilentMode()) {
-// console.log('===== DEBUG: INITIALIZE PROJECT OPTIONS RECEIVED =====');
+console.log('===== DEBUG: INITIALIZE PROJECT OPTIONS RECEIVED =====');
-// console.log('Full options object:', JSON.stringify(options));
+console.log('Full options object:', JSON.stringify(options));
-// console.log('options.yes:', options.yes);
+console.log('options.yes:', options.yes);
-// console.log('==================================================');
+console.log('==================================================');
-// }
+}

 const skipPrompts = options.yes || (options.name && options.description);

-// if (!isSilentMode()) {
+if (!isSilentMode()) {
-// console.log('Skip prompts determined:', skipPrompts);
+console.log('Skip prompts determined:', skipPrompts);
-// }
+}

 if (skipPrompts) {
 if (!isSilentMode()) {
@@ -565,12 +565,12 @@ function createProjectStructure(addAliases, dryRun) {
 path.join(targetDir, 'scripts', 'example_prd.txt')
 );

-// // Create main README.md
+// Create main README.md
-// copyTemplateFile(
+copyTemplateFile(
-// 'README-task-master.md',
+'README-task-master.md',
-// path.join(targetDir, 'README-task-master.md'),
+path.join(targetDir, 'README-task-master.md'),
-// replacements
+replacements
-// );
+);

 // Initialize git repository if git is available
 try {
@@ -761,22 +761,21 @@ function setupMCPConfiguration(targetDir) {
 const newMCPServer = {
 'task-master-ai': {
 command: 'npx',
-args: ['-y', '--package=task-master-ai', 'task-master-ai'],
+args: ['-y', 'task-master-mcp'],
 env: {
-ANTHROPIC_API_KEY: 'ANTHROPIC_API_KEY_HERE',
+ANTHROPIC_API_KEY: 'YOUR_ANTHROPIC_API_KEY',
-PERPLEXITY_API_KEY: 'PERPLEXITY_API_KEY_HERE',
+PERPLEXITY_API_KEY: 'YOUR_PERPLEXITY_API_KEY',
-OPENAI_API_KEY: 'OPENAI_API_KEY_HERE',
+MODEL: 'claude-3-7-sonnet-20250219',
-GOOGLE_API_KEY: 'GOOGLE_API_KEY_HERE',
+PERPLEXITY_MODEL: 'sonar-pro',
-XAI_API_KEY: 'XAI_API_KEY_HERE',
+MAX_TOKENS: '64000',
-OPENROUTER_API_KEY: 'OPENROUTER_API_KEY_HERE',
+TEMPERATURE: '0.2',
-MISTRAL_API_KEY: 'MISTRAL_API_KEY_HERE',
+DEFAULT_SUBTASKS: '5',
-AZURE_OPENAI_API_KEY: 'AZURE_OPENAI_API_KEY_HERE',
+DEFAULT_PRIORITY: 'medium'
-OLLAMA_API_KEY: 'OLLAMA_API_KEY_HERE'
 }
 }
 };

-// Check if mcp.json already existsimage.png
+// Check if mcp.json already exists
 if (fs.existsSync(mcpJsonPath)) {
 log(
 'info',
@@ -796,14 +795,14 @@ function setupMCPConfiguration(targetDir) {
 (server) =>
 server.args &&
 server.args.some(
-(arg) => typeof arg === 'string' && arg.includes('task-master-ai')
+(arg) => typeof arg === 'string' && arg.includes('task-master-mcp')
 )
 );

 if (hasMCPString) {
 log(
 'info',
-'Found existing task-master-ai MCP configuration in mcp.json, leaving untouched'
+'Found existing task-master-mcp configuration in mcp.json, leaving untouched'
 );
 return; // Exit early, don't modify the existing configuration
 }
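For context, here is a sketch of the server entry that the branch's setupMCPConfiguration would write into .cursor/mcp.json. Only the server object itself is confirmed by the diff; the top-level `mcpServers` wrapper key and the variable name are assumptions for illustration.

```js
// Sketch of the mcp.json entry produced on the fix/mcp.to side.
// Assumption: the entry sits under a top-level `mcpServers` key; only the
// inner object below comes from the diff.
const exampleMcpJson = {
	mcpServers: {
		'task-master-ai': {
			command: 'npx',
			args: ['-y', 'task-master-mcp'],
			env: {
				ANTHROPIC_API_KEY: 'YOUR_ANTHROPIC_API_KEY',
				PERPLEXITY_API_KEY: 'YOUR_PERPLEXITY_API_KEY',
				MODEL: 'claude-3-7-sonnet-20250219',
				PERPLEXITY_MODEL: 'sonar-pro',
				MAX_TOKENS: '64000',
				TEMPERATURE: '0.2',
				DEFAULT_SUBTASKS: '5',
				DEFAULT_PRIORITY: 'medium'
			}
		}
	}
};
```

The v0.13.1 side instead lists per-provider API key placeholders (OpenAI, Google, xAI, OpenRouter, Mistral, Azure, Ollama) and invokes the server via `--package=task-master-ai`.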
@@ -10,7 +10,6 @@ import boxen from 'boxen';
 import fs from 'fs';
 import https from 'https';
 import inquirer from 'inquirer';
-import ora from 'ora'; // Import ora

 import { log, readJSON } from './utils.js';
 import {
@@ -515,41 +514,29 @@ function registerCommands(programInstance) {
 const outputPath = options.output;
 const force = options.force || false;
 const append = options.append || false;
-let useForce = false;
-let useAppend = false;

 // Helper function to check if tasks.json exists and confirm overwrite
 async function confirmOverwriteIfNeeded() {
-if (fs.existsSync(outputPath) && !useForce && !useAppend) {
+if (fs.existsSync(outputPath) && !force && !append) {
-const overwrite = await confirmTaskOverwrite(outputPath);
+const shouldContinue = await confirmTaskOverwrite(outputPath);
-if (!overwrite) {
+if (!shouldContinue) {
-log('info', 'Operation cancelled.');
+console.log(chalk.yellow('Operation cancelled by user.'));
 return false;
 }
-// If user confirms 'y', we should set useForce = true for the parsePRD call
-// Only overwrite if not appending
-useForce = true;
 }
 return true;
 }

-let spinner;
+// If no input file specified, check for default PRD location

-try {
 if (!inputFile) {
 if (fs.existsSync(defaultPrdPath)) {
-console.log(
+console.log(chalk.blue(`Using default PRD file: ${defaultPrdPath}`));
-chalk.blue(`Using default PRD file path: ${defaultPrdPath}`)
-);
+// Check for existing tasks.json before proceeding
 if (!(await confirmOverwriteIfNeeded())) return;

 console.log(chalk.blue(`Generating ${numTasks} tasks...`));
-spinner = ora('Parsing PRD and generating tasks...').start();
+await parsePRD(defaultPrdPath, outputPath, numTasks, { append });
-await parsePRD(defaultPrdPath, outputPath, numTasks, {
-useAppend,
-useForce
-});
-spinner.succeed('Tasks generated successfully!');
 return;
 }

@@ -591,13 +578,7 @@ function registerCommands(programInstance) {
 return;
 }

-if (!fs.existsSync(inputFile)) {
+// Check for existing tasks.json before proceeding with specified input file
-console.error(
-chalk.red(`Error: Input PRD file not found: ${inputFile}`)
-);
-process.exit(1);
-}
-
 if (!(await confirmOverwriteIfNeeded())) return;

 console.log(chalk.blue(`Parsing PRD file: ${inputFile}`));
@@ -606,20 +587,7 @@ function registerCommands(programInstance) {
 console.log(chalk.blue('Appending to existing tasks...'));
 }

-spinner = ora('Parsing PRD and generating tasks...').start();
+await parsePRD(inputFile, outputPath, numTasks, { append });
-await parsePRD(inputFile, outputPath, numTasks, {
-append: useAppend,
-force: useForce
-});
-spinner.succeed('Tasks generated successfully!');
-} catch (error) {
-if (spinner) {
-spinner.fail(`Error parsing PRD: ${error.message}`);
-} else {
-console.error(chalk.red(`Error parsing PRD: ${error.message}`));
-}
-process.exit(1);
-}
 });

 // update command
@@ -195,7 +195,7 @@ async function addDependency(tasksPath, taskId, dependencyId) {
 }

 // Generate updated task files
-await generateTaskFiles(tasksPath, path.dirname(tasksPath));
+await generateTaskFiles(tasksPath, 'tasks');

 log('info', 'Task files regenerated with updated dependencies.');
 } else {
@@ -334,7 +334,7 @@ async function removeDependency(tasksPath, taskId, dependencyId) {
 }

 // Regenerate task files
-await generateTaskFiles(tasksPath, path.dirname(tasksPath));
+await generateTaskFiles(tasksPath, 'tasks');
 }

 /**
@@ -275,7 +275,7 @@ async function updateTasks(
 chalk.cyan.bold('Title'),
 chalk.cyan.bold('Status')
 ],
-colWidths: [5, 70, 20]
+colWidths: [5, 60, 10]
 });

 tasksToUpdate.forEach((task) => {
3
scripts/sample-prd.txt
Normal file
@@ -0,0 +1,3 @@
+Task Master PRD
+
+Create a CLI tool for task management
@@ -1,7 +1,7 @@
 {
 "meta": {
-"generatedAt": "2025-05-03T04:45:36.864Z",
+"generatedAt": "2025-05-01T18:17:08.817Z",
-"tasksAnalyzed": 36,
+"tasksAnalyzed": 35,
 "thresholdScore": 5,
 "projectName": "Taskmaster",
 "usedResearch": false
@@ -10,290 +10,282 @@
|
|||||||
{
|
{
|
||||||
"taskId": 24,
|
"taskId": 24,
|
||||||
"taskTitle": "Implement AI-Powered Test Generation Command",
|
"taskTitle": "Implement AI-Powered Test Generation Command",
|
||||||
"complexityScore": 8,
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Implement AI-Powered Test Generation Command' task by detailing the specific steps required for AI prompt engineering, including data extraction, prompt formatting, and error handling.",
|
"expansionPrompt": "Break down the implementation of the 'generate-test' command into detailed subtasks covering command structure, AI prompt engineering, test file generation, and integration with existing systems.",
|
||||||
"reasoning": "Requires AI integration, complex logic, and thorough testing. Prompt engineering and API interaction add significant complexity."
|
"reasoning": "This task involves creating a new CLI command that leverages AI to generate test files. It requires integration with Claude API, understanding of Jest testing, file system operations, and complex prompt engineering. The task already has 3 subtasks but would benefit from further breakdown to address error handling, documentation, and test validation components."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 26,
|
"taskId": 26,
|
||||||
"taskTitle": "Implement Context Foundation for AI Operations",
|
"taskTitle": "Implement Context Foundation for AI Operations",
|
||||||
"complexityScore": 7,
|
"complexityScore": 6,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Implement Context Foundation for AI Operations' task by detailing the specific steps for integrating file reading, cursor rules, and basic context extraction into the Claude API prompts.",
|
"expansionPrompt": "Break down the implementation of the context foundation for AI operations into detailed subtasks covering file context handling, cursor rules integration, context extraction utilities, and command handler updates.",
|
||||||
"reasoning": "Involves modifying multiple commands and integrating different context sources. Error handling and backwards compatibility are crucial."
|
"reasoning": "This task involves creating a foundation for context integration in Task Master. It requires implementing file reading functionality, cursor rules integration, and context extraction utilities. The task already has 4 subtasks but would benefit from additional subtasks for testing, documentation, and integration with existing AI operations."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 27,
|
"taskId": 27,
|
||||||
"taskTitle": "Implement Context Enhancements for AI Operations",
|
"taskTitle": "Implement Context Enhancements for AI Operations",
|
||||||
"complexityScore": 8,
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Implement Context Enhancements for AI Operations' task by detailing the specific steps for code context extraction, task history integration, and PRD context integration, including parsing, summarization, and formatting.",
|
"expansionPrompt": "Break down the implementation of context enhancements for AI operations into detailed subtasks covering code context extraction, task history context, PRD context integration, and context formatting improvements.",
|
||||||
"reasoning": "Builds upon the previous task with more sophisticated context extraction and integration. Requires intelligent parsing and summarization."
|
"reasoning": "This task builds upon the foundation from task #26 and adds more sophisticated context features. It involves implementing code context extraction, task history awareness, and PRD integration. The task already has 4 subtasks but would benefit from additional subtasks for testing, documentation, and integration with the foundation context system."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 28,
|
"taskId": 28,
|
||||||
"taskTitle": "Implement Advanced ContextManager System",
|
"taskTitle": "Implement Advanced ContextManager System",
|
||||||
"complexityScore": 9,
|
"complexityScore": 8,
|
||||||
"recommendedSubtasks": 7,
|
"recommendedSubtasks": 7,
|
||||||
"expansionPrompt": "Expand the 'Implement Advanced ContextManager System' task by detailing the specific steps for creating the ContextManager class, implementing the optimization pipeline, and adding command interface enhancements, including caching and performance monitoring.",
|
"expansionPrompt": "Break down the implementation of the advanced ContextManager system into detailed subtasks covering class structure, optimization pipeline, command interface, AI service integration, and performance monitoring.",
|
||||||
"reasoning": "A comprehensive system requiring careful design, optimization, and testing. Involves complex algorithms and performance considerations."
|
"reasoning": "This task involves creating a comprehensive ContextManager class with advanced features like context optimization, prioritization, and intelligent selection. It builds on the previous context tasks and requires sophisticated algorithms for token management and context relevance scoring. The task already has 5 subtasks but would benefit from additional subtasks for testing, documentation, and integration with existing systems."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 32,
|
"taskId": 32,
|
||||||
"taskTitle": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
|
"taskTitle": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
|
||||||
"complexityScore": 9,
|
"complexityScore": 9,
|
||||||
"recommendedSubtasks": 10,
|
"recommendedSubtasks": 8,
|
||||||
"expansionPrompt": "Expand the 'Implement \"learn\" Command for Automatic Cursor Rule Generation' task by detailing the specific steps for Cursor data analysis, rule management, and AI integration, including error handling and performance optimization.",
|
"expansionPrompt": "Break down the implementation of the 'learn' command for automatic Cursor rule generation into detailed subtasks covering chat history analysis, rule management, AI integration, and command structure.",
|
||||||
"reasoning": "Requires deep integration with Cursor's data, complex pattern analysis, and AI interaction. Significant error handling and performance optimization are needed."
|
"reasoning": "This task involves creating a complex system that analyzes Cursor's chat history and code changes to automatically generate rule files. It requires sophisticated data analysis, pattern recognition, and AI integration. The task already has 15 subtasks, which is appropriate given its complexity, but could benefit from reorganization into logical groupings."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 40,
|
"taskId": 40,
|
||||||
"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
|
"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
|
||||||
"complexityScore": 6,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Implement 'plan' Command for Task Implementation Planning' task by detailing the steps for retrieving task content, generating implementation plans with AI, and formatting the plan within XML tags.",
|
"expansionPrompt": "Break down the implementation of the 'plan' command for task implementation planning into detailed subtasks covering command structure, AI integration, plan formatting, and error handling.",
|
||||||
"reasoning": "Involves AI integration and requires careful formatting and error handling. Switching between Claude and Perplexity adds complexity."
|
"reasoning": "This task involves creating a new command that generates implementation plans for tasks. It requires integration with AI services, understanding of task structure, and proper formatting of generated plans. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 41,
|
"taskId": 41,
|
||||||
"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
|
"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
|
||||||
"complexityScore": 8,
|
"complexityScore": 8,
|
||||||
"recommendedSubtasks": 8,
|
"recommendedSubtasks": 8,
|
||||||
"expansionPrompt": "Expand the 'Implement Visual Task Dependency Graph in Terminal' task by detailing the steps for designing the graph rendering system, implementing layout algorithms, and handling circular dependencies and filtering options.",
|
"expansionPrompt": "Break down the implementation of the visual task dependency graph in terminal into detailed subtasks covering graph layout algorithms, ASCII/Unicode rendering, color coding, circular dependency detection, and filtering options.",
|
||||||
"reasoning": "Requires complex graph algorithms and terminal rendering. Accessibility and performance are important considerations."
|
"reasoning": "This task involves creating a complex visualization system for task dependencies using ASCII/Unicode characters. It requires sophisticated layout algorithms, rendering logic, and user interface considerations. The task already has 10 subtasks, which is appropriate given its complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 42,
|
"taskId": 42,
|
||||||
"taskTitle": "Implement MCP-to-MCP Communication Protocol",
|
"taskTitle": "Implement MCP-to-MCP Communication Protocol",
|
||||||
"complexityScore": 8,
|
"complexityScore": 9,
|
||||||
"recommendedSubtasks": 7,
|
"recommendedSubtasks": 10,
|
||||||
"expansionPrompt": "Expand the 'Implement MCP-to-MCP Communication Protocol' task by detailing the steps for defining the protocol, implementing the adapter pattern, and building the client module, including error handling and security considerations.",
|
"expansionPrompt": "Break down the implementation of the MCP-to-MCP communication protocol into detailed subtasks covering protocol definition, adapter pattern, client module, reference implementation, and mode switching.",
|
||||||
"reasoning": "Requires designing a new protocol and implementing communication with external systems. Security and error handling are critical."
|
"reasoning": "This task involves designing and implementing a standardized communication protocol for Taskmaster to interact with external MCP tools. It requires sophisticated protocol design, authentication mechanisms, error handling, and support for different operational modes. The task already has 8 subtasks but would benefit from additional subtasks for security, testing, and documentation."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 43,
|
"taskId": 43,
|
||||||
"taskTitle": "Add Research Flag to Add-Task Command",
|
"taskTitle": "Add Research Flag to Add-Task Command",
|
||||||
"complexityScore": 5,
|
"complexityScore": 3,
|
||||||
"recommendedSubtasks": 3,
|
"recommendedSubtasks": 4,
|
||||||
"expansionPrompt": "Expand the 'Add Research Flag to Add-Task Command' task by detailing the steps for updating the command parser, generating research subtasks, and linking them to the parent task.",
|
"expansionPrompt": "Break down the implementation of the research flag for the add-task command into detailed subtasks covering command argument parsing, research subtask generation, integration with existing command, and documentation.",
|
||||||
"reasoning": "Relatively straightforward, but requires careful handling of subtask generation and linking."
|
"reasoning": "This task involves modifying the add-task command to support a new flag that generates research-oriented subtasks. It's relatively straightforward as it builds on existing functionality. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 44,
|
"taskId": 44,
|
||||||
"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
|
"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
|
||||||
"complexityScore": 8,
|
"complexityScore": 8,
|
||||||
"recommendedSubtasks": 7,
|
"recommendedSubtasks": 7,
|
||||||
"expansionPrompt": "Expand the 'Implement Task Automation with Webhooks and Event Triggers' task by detailing the steps for implementing the webhook registration system, event system, and trigger definition interface, including security and error handling.",
|
"expansionPrompt": "Break down the implementation of task automation with webhooks and event triggers into detailed subtasks covering webhook registration, event system, trigger definition, authentication, and payload templating.",
|
||||||
"reasoning": "Requires designing a robust event system and integrating with external services. Security and error handling are critical."
|
"reasoning": "This task involves creating a sophisticated automation system with webhooks and event triggers. It requires implementing webhook registration, event capturing, trigger definitions, authentication, and integration with existing systems. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this complex feature."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 45,
|
"taskId": 45,
|
||||||
"taskTitle": "Implement GitHub Issue Import Feature",
|
"taskTitle": "Implement GitHub Issue Import Feature",
|
||||||
"complexityScore": 7,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Implement GitHub Issue Import Feature' task by detailing the steps for parsing the URL, fetching issue details from the GitHub API, and generating a well-formatted task.",
|
"expansionPrompt": "Break down the implementation of the GitHub issue import feature into detailed subtasks covering URL parsing, GitHub API integration, task generation, authentication, and error handling.",
|
||||||
"reasoning": "Requires interacting with the GitHub API and handling various error conditions. Authentication adds complexity."
|
"reasoning": "This task involves adding a feature to import GitHub issues as tasks. It requires integration with the GitHub API, URL parsing, authentication handling, and proper error management. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 46,
|
"taskId": 46,
|
||||||
"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
|
"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
|
||||||
"complexityScore": 7,
|
"complexityScore": 6,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Implement ICE Analysis Command for Task Prioritization' task by detailing the steps for calculating ICE scores, generating the report file, and implementing the CLI rendering.",
|
"expansionPrompt": "Break down the implementation of the ICE analysis command for task prioritization into detailed subtasks covering scoring algorithm, report generation, CLI rendering, and integration with existing analysis tools.",
|
||||||
"reasoning": "Requires AI integration for scoring and careful formatting of the report. Integration with existing complexity reports adds complexity."
|
"reasoning": "This task involves creating a new command that analyzes and ranks tasks based on Impact, Confidence, and Ease scoring. It requires implementing scoring algorithms, report generation, CLI rendering, and integration with existing analysis tools. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 47,
|
"taskId": 47,
|
||||||
"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
|
"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
|
||||||
"complexityScore": 7,
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Enhance Task Suggestion Actions Card Workflow' task by detailing the steps for implementing the task expansion, context addition, and task management phases, including UI/UX considerations.",
|
"expansionPrompt": "Break down the enhancement of the task suggestion actions card workflow into detailed subtasks covering task expansion phase, context addition phase, task management phase, and UI/UX improvements.",
|
||||||
"reasoning": "Requires significant UI/UX work and careful state management. Integration with existing functionality is crucial."
|
"reasoning": "This task involves redesigning the suggestion actions card to implement a structured workflow. It requires implementing multiple phases (expansion, context addition, management) with appropriate UI/UX considerations. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path for this moderately complex feature."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 48,
|
"taskId": 48,
|
||||||
"taskTitle": "Refactor Prompts into Centralized Structure",
|
"taskTitle": "Refactor Prompts into Centralized Structure",
|
||||||
"complexityScore": 5,
|
"complexityScore": 4,
|
||||||
"recommendedSubtasks": 3,
|
"recommendedSubtasks": 4,
|
||||||
"expansionPrompt": "Expand the 'Refactor Prompts into Centralized Structure' task by detailing the steps for creating the 'prompts' directory, extracting prompts into individual files, and updating functions to import them.",
|
"expansionPrompt": "Break down the refactoring of prompts into a centralized structure into detailed subtasks covering directory creation, prompt extraction, function modification, and documentation.",
|
||||||
"reasoning": "Primarily a refactoring task, but requires careful attention to detail to avoid breaking existing functionality."
|
"reasoning": "This task involves restructuring how prompts are managed in the codebase. It's a relatively straightforward refactoring task that requires creating a new directory structure, extracting prompts from functions, and updating references. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 49,
|
"taskId": 49,
|
||||||
"taskTitle": "Implement Code Quality Analysis Command",
|
"taskTitle": "Implement Code Quality Analysis Command",
|
||||||
"complexityScore": 8,
|
"complexityScore": 8,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 7,
|
||||||
"expansionPrompt": "Expand the 'Implement Code Quality Analysis Command' task by detailing the steps for pattern recognition, best practice verification, and improvement recommendations, including AI integration and task creation.",
|
"expansionPrompt": "Break down the implementation of the code quality analysis command into detailed subtasks covering pattern recognition, best practice verification, improvement recommendations, task integration, and reporting.",
|
||||||
"reasoning": "Requires complex code analysis and AI integration. Generating actionable recommendations adds complexity."
|
"reasoning": "This task involves creating a sophisticated command that analyzes code quality, identifies patterns, verifies against best practices, and generates improvement recommendations. It requires complex algorithms for code analysis and integration with AI services. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this complex feature."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 50,
|
"taskId": 50,
|
||||||
"taskTitle": "Implement Test Coverage Tracking System by Task",
|
"taskTitle": "Implement Test Coverage Tracking System by Task",
|
||||||
"complexityScore": 9,
|
"complexityScore": 9,
|
||||||
"recommendedSubtasks": 7,
|
"recommendedSubtasks": 8,
|
||||||
"expansionPrompt": "Expand the 'Implement Test Coverage Tracking System by Task' task by detailing the steps for creating the tests.json file structure, developing the coverage report parser, and implementing the CLI commands and AI-powered test generation system.",
|
"expansionPrompt": "Break down the implementation of the test coverage tracking system by task into detailed subtasks covering data structure design, coverage report parsing, tracking and update generation, CLI commands, and AI-powered test generation.",
|
||||||
"reasoning": "A comprehensive system requiring deep integration with testing tools and AI. Maintaining bidirectional relationships adds complexity."
|
"reasoning": "This task involves creating a comprehensive system for tracking test coverage at the task level. It requires implementing data structures, coverage report parsing, tracking mechanisms, CLI commands, and AI integration. The task already has 5 subtasks but would benefit from additional subtasks for integration testing, documentation, and user experience."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 51,
|
"taskId": 51,
|
||||||
"taskTitle": "Implement Perplexity Research Command",
|
"taskTitle": "Implement Perplexity Research Command",
|
||||||
"complexityScore": 7,
|
"complexityScore": 6,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Implement Perplexity Research Command' task by detailing the steps for creating the Perplexity API client, implementing task context extraction, and building the CLI interface.",
|
"expansionPrompt": "Break down the implementation of the Perplexity research command into detailed subtasks covering API client service, task context extraction, CLI interface, results processing, and caching system.",
|
||||||
"reasoning": "Requires API integration and careful formatting of the research results. Caching adds complexity."
|
"reasoning": "This task involves creating a command that integrates with Perplexity AI for research purposes. It requires implementing an API client, context extraction, CLI interface, results processing, and caching. The task already has 5 subtasks, which is appropriate for its complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 52,
|
"taskId": 52,
|
||||||
"taskTitle": "Implement Task Suggestion Command for CLI",
|
"taskTitle": "Implement Task Suggestion Command for CLI",
|
||||||
"complexityScore": 7,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Implement Task Suggestion Command for CLI' task by detailing the steps for collecting existing task data, generating task suggestions with AI, and implementing the interactive CLI interface.",
|
"expansionPrompt": "Break down the implementation of the task suggestion command for CLI into detailed subtasks covering task data collection, AI integration, suggestion presentation, interactive interface, and configuration options.",
|
||||||
"reasoning": "Requires AI integration and careful design of the interactive interface. Handling various flag combinations adds complexity."
|
"reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions. It requires collecting existing task data, integrating with AI services, presenting suggestions, and implementing an interactive interface. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 53,
|
"taskId": 53,
|
||||||
"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
|
"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
|
||||||
"complexityScore": 7,
|
"complexityScore": 6,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Implement Subtask Suggestion Feature for Parent Tasks' task by detailing the steps for validating parent tasks, gathering context, generating subtask suggestions with AI, and implementing the interactive CLI interface.",
|
"expansionPrompt": "Break down the implementation of the subtask suggestion feature for parent tasks into detailed subtasks covering parent task validation, context gathering, AI integration, interactive interface, and subtask linking.",
|
||||||
"reasoning": "Requires AI integration and careful design of the interactive interface. Linking subtasks to parent tasks adds complexity."
|
"reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for existing parent tasks. It requires implementing parent task validation, context gathering, AI integration, an interactive interface, and subtask linking. The task already has 6 subtasks, which is appropriate for its complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 55,
|
"taskId": 55,
|
||||||
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
|
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
|
||||||
"complexityScore": 7,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Implement Positional Arguments Support for CLI Commands' task by detailing the steps for updating the argument parsing logic, defining the positional argument order, and handling edge cases.",
|
"expansionPrompt": "Break down the implementation of positional arguments support for CLI commands into detailed subtasks covering argument parsing logic, command mapping, help text updates, error handling, and testing.",
|
||||||
"reasoning": "Requires careful modification of the command parsing logic and ensuring backward compatibility. Handling edge cases adds complexity."
|
"reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. It requires updating argument parsing, mapping positional arguments to parameters, updating help text, and handling edge cases. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 57,
|
"taskId": 57,
|
||||||
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
|
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
|
||||||
"complexityScore": 7,
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Enhance Task-Master CLI User Experience and Interface' task by detailing the steps for log management, visual enhancements, interactive elements, and output formatting.",
|
"expansionPrompt": "Break down the enhancement of the Task-Master CLI user experience and interface into detailed subtasks covering log management, visual enhancements, interactive elements, output formatting, and help documentation.",
|
||||||
"reasoning": "Requires significant UI/UX work and careful consideration of different terminal environments. Reducing verbose logging adds complexity."
|
"reasoning": "This task involves improving the CLI's user experience through various enhancements to logging, visuals, interactivity, and documentation. It requires implementing log levels, visual improvements, interactive elements, and better formatting. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path for this moderately complex feature."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 60,
|
"taskId": 60,
|
||||||
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
|
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
|
||||||
"complexityScore": 8,
|
"complexityScore": 8,
|
||||||
"recommendedSubtasks": 7,
|
"recommendedSubtasks": 7,
|
||||||
"expansionPrompt": "Expand the 'Implement Mentor System with Round-Table Discussion Feature' task by detailing the steps for mentor management, round-table discussion implementation, and integration with the task system, including LLM integration.",
|
"expansionPrompt": "Break down the implementation of the mentor system with round-table discussion feature into detailed subtasks covering mentor management, round-table discussion, task system integration, LLM integration, and documentation.",
|
||||||
"reasoning": "Requires complex AI simulation and careful formatting of the discussion output. Integrating with the task system adds complexity."
|
"reasoning": "This task involves creating a sophisticated mentor system with round-table discussions. It requires implementing mentor management, discussion simulation, task integration, and LLM integration. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this complex feature."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 61,
|
"taskId": 61,
|
||||||
"taskTitle": "Implement Flexible AI Model Management",
|
"taskTitle": "Implement Flexible AI Model Management",
|
||||||
"complexityScore": 9,
|
"complexityScore": 10,
|
||||||
"recommendedSubtasks": 8,
|
"recommendedSubtasks": 10,
|
||||||
"expansionPrompt": "Expand the 'Implement Flexible AI Model Management' task by detailing the steps for creating the configuration management module, implementing the CLI command parser, and integrating the Vercel AI SDK.",
|
"expansionPrompt": "Break down the implementation of flexible AI model management into detailed subtasks covering configuration management, CLI command parsing, AI SDK integration, service module development, environment variable handling, and documentation.",
|
||||||
"reasoning": "Requires deep integration with multiple AI models and careful management of API keys and configuration options. Vercel AI SDK integration adds complexity."
|
"reasoning": "This task involves implementing comprehensive support for multiple AI models with a unified interface. It's extremely complex, requiring configuration management, CLI commands, SDK integration, service modules, and environment handling. The task already has 45 subtasks, which is appropriate given its complexity and scope."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 62,
|
"taskId": 62,
|
||||||
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
|
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
|
||||||
"complexityScore": 5,
|
"complexityScore": 4,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Add --simple Flag to Update Commands for Direct Text Input' task by detailing the steps for updating the command parsers, implementing the conditional logic, and formatting the user input with a timestamp.",
|
"expansionPrompt": "Break down the implementation of the --simple flag for update commands into detailed subtasks covering command parser updates, AI processing bypass, timestamp formatting, visual indicators, and documentation.",
|
||||||
"reasoning": "Relatively straightforward, but requires careful attention to formatting and ensuring consistency with AI-processed updates."
|
"reasoning": "This task involves modifying update commands to accept a flag that bypasses AI processing. It requires updating command parsers, implementing conditional logic, formatting user input, and updating documentation. The task already has 8 subtasks, which is more than sufficient for its complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 63,
|
"taskId": 63,
|
||||||
"taskTitle": "Add pnpm Support for the Taskmaster Package",
|
"taskTitle": "Add pnpm Support for the Taskmaster Package",
|
||||||
"complexityScore": 7,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Add pnpm Support for the Taskmaster Package' task by detailing the steps for updating the documentation, ensuring package scripts compatibility, and testing the installation and operation with pnpm.",
|
"expansionPrompt": "Break down the implementation of pnpm support for the Taskmaster package into detailed subtasks covering documentation updates, package script compatibility, lockfile generation, installation testing, CI/CD integration, and website consistency verification.",
|
||||||
"reasoning": "Requires careful attention to detail to ensure compatibility with pnpm's execution model. Testing and documentation are crucial."
|
"reasoning": "This task involves ensuring the Taskmaster package works seamlessly with pnpm. It requires updating documentation, ensuring script compatibility, testing installation, and integrating with CI/CD. The task already has 8 subtasks, which is appropriate for its complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 64,
|
"taskId": 64,
|
||||||
"taskTitle": "Add Yarn Support for Taskmaster Installation",
|
"taskTitle": "Add Yarn Support for Taskmaster Installation",
|
||||||
"complexityScore": 7,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Add Yarn Support for Taskmaster Installation' task by detailing the steps for updating package.json, adding Yarn-specific configuration files, and testing the installation and operation with Yarn.",
|
"expansionPrompt": "Break down the implementation of Yarn support for Taskmaster installation into detailed subtasks covering package.json updates, Yarn-specific configuration, compatibility testing, documentation updates, package manager detection, and website consistency verification.",
|
||||||
"reasoning": "Requires careful attention to detail to ensure compatibility with Yarn's execution model. Testing and documentation are crucial."
|
"reasoning": "This task involves ensuring the Taskmaster package works seamlessly with Yarn. It requires updating package.json, adding Yarn-specific configuration, testing compatibility, and updating documentation. The task already has 9 subtasks, which is appropriate for its complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 65,
|
"taskId": 65,
|
||||||
"taskTitle": "Add Bun Support for Taskmaster Installation",
|
"taskTitle": "Add Bun Support for Taskmaster Installation",
|
||||||
"complexityScore": 7,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Add Bun Support for Taskmaster Installation' task by detailing the steps for updating the installation scripts, testing the installation and operation with Bun, and updating the documentation.",
|
"expansionPrompt": "Break down the implementation of Bun support for Taskmaster installation into detailed subtasks covering package.json updates, Bun-specific configuration, compatibility testing, documentation updates, package manager detection, and troubleshooting guidance.",
|
||||||
"reasoning": "Requires careful attention to detail to ensure compatibility with Bun's execution model. Testing and documentation are crucial."
|
"reasoning": "This task involves ensuring the Taskmaster package works seamlessly with Bun. It requires updating package.json, adding Bun-specific configuration, testing compatibility, and updating documentation. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 66,
|
"taskId": 66,
|
||||||
"taskTitle": "Support Status Filtering in Show Command for Subtasks",
|
"taskTitle": "Support Status Filtering in Show Command for Subtasks",
|
||||||
"complexityScore": 5,
|
"complexityScore": 3,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 4,
|
||||||
"expansionPrompt": "Expand the 'Support Status Filtering in Show Command for Subtasks' task by detailing the steps for updating the command parser, modifying the show command handler, and updating the help documentation.",
|
"expansionPrompt": "Break down the implementation of status filtering in the show command for subtasks into detailed subtasks covering command parser updates, filtering logic, help documentation, and testing.",
|
||||||
"reasoning": "Relatively straightforward, but requires careful handling of status validation and filtering."
|
"reasoning": "This task involves enhancing the show command to support status-based filtering of subtasks. It's relatively straightforward, requiring updates to the command parser, filtering logic, and documentation. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 67,
|
"taskId": 67,
|
||||||
"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
|
"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
|
||||||
"complexityScore": 7,
|
"complexityScore": 6,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Add CLI JSON output and Cursor keybindings integration' task by detailing the steps for implementing the JSON output logic, creating the install-keybindings command structure, and handling keybinding file manipulation.",
|
"expansionPrompt": "Break down the implementation of CLI JSON output and Cursor keybindings integration into detailed subtasks covering JSON flag implementation, output formatting, keybindings command structure, OS detection, file handling, and keybinding definition.",
|
||||||
"reasoning": "Requires careful formatting of the JSON output and handling of file system operations. OS detection adds complexity."
|
"reasoning": "This task involves two main components: adding JSON output to CLI commands and creating a new command for Cursor keybindings. It requires implementing a JSON flag, formatting output, creating a new command, detecting OS, handling files, and defining keybindings. The task already has 5 subtasks, which is appropriate for its complexity."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 68,
|
"taskId": 68,
|
||||||
"taskTitle": "Ability to create tasks without parsing PRD",
|
"taskTitle": "Ability to create tasks without parsing PRD",
|
||||||
"complexityScore": 3,
|
"complexityScore": 4,
|
||||||
"recommendedSubtasks": 2,
|
"recommendedSubtasks": 4,
|
||||||
"expansionPrompt": "Expand the 'Ability to create tasks without parsing PRD' task by detailing the steps for creating tasks without a PRD.",
|
"expansionPrompt": "Break down the implementation of creating tasks without parsing PRD into detailed subtasks covering tasks.json creation, function reuse from parse-prd, command modification, and documentation.",
|
||||||
"reasoning": "Simple task to allow task creation without a PRD."
|
"reasoning": "This task involves modifying the task creation process to work without a PRD. It's relatively straightforward, requiring tasks.json creation, function reuse, command modification, and documentation. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 69,
|
"taskId": 69,
|
||||||
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
|
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
|
||||||
"complexityScore": 6,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Enhance Analyze Complexity for Specific Task IDs' task by detailing the steps for modifying the core logic, updating the CLI, and updating the MCP tool.",
|
"expansionPrompt": "Break down the enhancement of analyze-complexity for specific task IDs into detailed subtasks covering core logic modification, CLI command updates, MCP tool updates, report handling, and testing.",
|
||||||
"reasoning": "Requires modifying existing functionality and ensuring compatibility with both CLI and MCP."
|
"reasoning": "This task involves modifying the analyze-complexity feature to support analyzing specific task IDs. It requires updating core logic, CLI commands, MCP tools, and report handling. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 70,
|
"taskId": 70,
|
||||||
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
|
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
|
||||||
"complexityScore": 6,
|
"complexityScore": 6,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 6,
|
||||||
"expansionPrompt": "Expand the 'Implement 'diagram' command for Mermaid diagram generation' task by detailing the steps for creating the command, generating the Mermaid diagram, and handling different output options.",
|
"expansionPrompt": "Break down the implementation of the 'diagram' command for Mermaid diagram generation into detailed subtasks covering command structure, task data collection, diagram generation, rendering options, file export, and documentation.",
|
||||||
"reasoning": "Requires generating Mermaid diagrams and handling different output options."
|
"reasoning": "This task involves creating a new command that generates Mermaid diagrams for task dependencies. It requires implementing command structure, collecting task data, generating diagrams, providing rendering options, and supporting file export. The task has no subtasks yet, so creating 6 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 72,
|
"taskId": 72,
|
||||||
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
|
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
|
||||||
"complexityScore": 8,
|
"complexityScore": 7,
|
||||||
"recommendedSubtasks": 6,
|
"recommendedSubtasks": 7,
|
||||||
"expansionPrompt": "Expand the 'Implement PDF Generation for Project Progress and Dependency Overview' task by detailing the steps for summarizing project progress, visualizing the dependency chain, and generating the PDF document.",
|
"expansionPrompt": "Break down the implementation of PDF generation for project progress and dependency overview into detailed subtasks covering command structure, data collection, progress summary generation, dependency visualization, PDF creation, styling, and documentation.",
|
||||||
"reasoning": "Requires integrating with the diagram command and using a PDF generation library. Handling large dependency chains adds complexity."
|
"reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependencies. It requires implementing command structure, collecting data, generating summaries, visualizing dependencies, creating PDFs, and styling the output. The task has no subtasks yet, so creating 7 subtasks would provide a clear implementation path for this moderately complex feature."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 73,
|
"taskId": 73,
|
||||||
"taskTitle": "Implement Custom Model ID Support for Ollama/OpenRouter",
|
"taskTitle": "Implement Custom Model ID Support for Ollama/OpenRouter",
|
||||||
"complexityScore": 7,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 5,
|
||||||
"expansionPrompt": "Expand the 'Implement Custom Model ID Support for Ollama/OpenRouter' task by detailing the steps for modifying the CLI, implementing the interactive setup, and handling validation and warnings.",
|
"expansionPrompt": "Break down the implementation of custom model ID support for Ollama/OpenRouter into detailed subtasks covering CLI flag implementation, model validation, interactive setup, configuration updates, and documentation.",
|
||||||
"reasoning": "Requires integrating with external APIs and handling different model types. Validation and warnings are crucial."
|
"reasoning": "This task involves allowing users to specify custom model IDs for Ollama and OpenRouter. It requires implementing CLI flags, validating models, updating the interactive setup, modifying configuration, and updating documentation. The task has no subtasks yet, so creating 5 subtasks would provide a clear implementation path."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 75,
|
"taskId": 75,
|
||||||
"taskTitle": "Integrate Google Search Grounding for Research Role",
|
"taskTitle": "Integrate Google Search Grounding for Research Role",
|
||||||
"complexityScore": 6,
|
"complexityScore": 4,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 4,
|
||||||
"expansionPrompt": "Expand the 'Integrate Google Search Grounding for Research Role' task by detailing the steps for modifying the AI service layer, implementing the conditional logic, and updating the supported models.",
|
"expansionPrompt": "Break down the integration of Google Search Grounding for research role into detailed subtasks covering AI service layer modification, conditional logic implementation, model configuration updates, and testing.",
|
||||||
"reasoning": "Requires conditional logic and integration with the Google Search Grounding API."
|
"reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for the research role. It's relatively straightforward, requiring modifications to the AI service, implementing conditional logic, updating model configurations, and testing. The task has no subtasks yet, so creating 4 subtasks would provide a clear implementation path."
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 76,
|
|
||||||
"taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
|
|
||||||
"complexityScore": 9,
|
|
||||||
"recommendedSubtasks": 7,
|
|
||||||
"expansionPrompt": "Expand the 'Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)' task by detailing the steps for launching the FastMCP server, implementing the message protocol handler, and developing the request/response correlation mechanism.",
|
|
||||||
"reasoning": "Requires complex system integration and robust error handling. Designing a comprehensive test framework adds complexity."
|
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
@@ -37,29 +37,3 @@ Test cases should include:
 - Running the command on tasks with existing implementation plans to ensure proper appending

 Manually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements.
-
-# Subtasks:
-## 1. Retrieve Task Content [in-progress]
-### Dependencies: None
-### Description: Fetch the content of the specified task from the task management system. This includes the task title, description, and any associated details.
-### Details:
-Implement a function to retrieve task details based on a task ID. Handle cases where the task does not exist.
-
-## 2. Generate Implementation Plan with AI [pending]
-### Dependencies: 40.1
-### Description: Use an AI model (Claude or Perplexity) to generate an implementation plan based on the retrieved task content. The plan should outline the steps required to complete the task.
-### Details:
-Implement logic to switch between Claude and Perplexity APIs. Handle API authentication and rate limiting. Prompt the AI model with the task content and request a detailed implementation plan.
-
-## 3. Format Plan in XML [pending]
-### Dependencies: 40.2, 40.2
-### Description: Format the generated implementation plan within XML tags. Each step in the plan should be represented as an XML element with appropriate attributes.
-### Details:
-Define the XML schema for the implementation plan. Implement a function to convert the AI-generated plan into the defined XML format. Ensure proper XML syntax and validation.
-
-## 4. Error Handling and Output [pending]
-### Dependencies: 40.3
-### Description: Implement error handling for all steps, including API failures and XML formatting errors. Output the formatted XML plan to the console or a file.
-### Details:
-Add try-except blocks to handle potential exceptions. Log errors for debugging. Provide informative error messages to the user. Output the XML plan in a user-friendly format.
-
@@ -1962,7 +1962,7 @@ Implementation notes:
 - This stricter approach enforces configuration-as-code principles, ensures reproducibility, and prevents configuration drift, aligning with modern best practices for immutable infrastructure and automated configuration management[2][4].
 </info added on 2025-04-22T02:41:51.174Z>
 
-## 31. Implement Integration Tests for Unified AI Service [done]
+## 31. Implement Integration Tests for Unified AI Service [pending]
 ### Dependencies: 61.18
 ### Description: Implement integration tests for `ai-services-unified.js`. These tests should verify the correct routing to different provider modules based on configuration and ensure the unified service functions (`generateTextService`, `generateObjectService`, etc.) work correctly when called from modules like `task-manager.js`. [Updated: 5/2/2025] [Updated: 5/2/2025] [Updated: 5/2/2025] [Updated: 5/2/2025]
 ### Details:
@@ -2586,7 +2586,7 @@ These enhancements ensure robust validation, unified service usage, and maintain
 ### Details:
 
 
-## 43. Remove all unnecessary console logs [done]
+## 43. Remove all unnecessary console logs [pending]
 ### Dependencies: None
 ### Description:
 ### Details:
@@ -2377,48 +2377,7 @@
       "dependencies": [],
       "priority": "medium",
       "details": "Implement a new 'plan' command that will append a structured implementation plan to existing tasks or subtasks. The implementation should:\n\n1. Accept an '--id' parameter that can reference either a task or subtask ID\n2. Determine whether the ID refers to a task or subtask and retrieve the appropriate content from tasks.json and/or individual task files\n3. Generate a step-by-step implementation plan using AI (Claude by default)\n4. Support a '--research' flag to use Perplexity instead of Claude when needed\n5. Format the generated plan within XML tags like `<implementation_plan as of timestamp>...</implementation_plan>`\n6. Append this plan to the implementation details section of the task/subtask\n7. Display a confirmation card indicating the implementation plan was successfully created\n\nThe implementation plan should be detailed and actionable, containing specific steps such as searching for files, creating new files, modifying existing files, etc. The goal is to frontload planning work into the task/subtask so execution can begin immediately.\n\nReference the existing 'update-subtask' command implementation as a starting point, as it uses a similar approach for appending content to tasks. Ensure proper error handling for cases where the specified ID doesn't exist or when API calls fail.",
-      "testStrategy": "Testing should verify:\n\n1. Command correctly identifies and retrieves content for both task and subtask IDs\n2. Implementation plans are properly generated and formatted with XML tags and timestamps\n3. Plans are correctly appended to the implementation details section without overwriting existing content\n4. The '--research' flag successfully switches the backend from Claude to Perplexity\n5. Appropriate error messages are displayed for invalid IDs or API failures\n6. Confirmation card is displayed after successful plan creation\n\nTest cases should include:\n- Running 'plan --id 123' on an existing task\n- Running 'plan --id 123.1' on an existing subtask\n- Running 'plan --id 123 --research' to test the Perplexity integration\n- Running 'plan --id 999' with a non-existent ID to verify error handling\n- Running the command on tasks with existing implementation plans to ensure proper appending\n\nManually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements.",
-      "subtasks": [
-        {
-          "id": 1,
-          "title": "Retrieve Task Content",
-          "description": "Fetch the content of the specified task from the task management system. This includes the task title, description, and any associated details.",
-          "dependencies": [],
-          "details": "Implement a function to retrieve task details based on a task ID. Handle cases where the task does not exist.",
-          "status": "in-progress"
-        },
-        {
-          "id": 2,
-          "title": "Generate Implementation Plan with AI",
-          "description": "Use an AI model (Claude or Perplexity) to generate an implementation plan based on the retrieved task content. The plan should outline the steps required to complete the task.",
-          "dependencies": [
-            1
-          ],
-          "details": "Implement logic to switch between Claude and Perplexity APIs. Handle API authentication and rate limiting. Prompt the AI model with the task content and request a detailed implementation plan.",
-          "status": "pending"
-        },
-        {
-          "id": 3,
-          "title": "Format Plan in XML",
-          "description": "Format the generated implementation plan within XML tags. Each step in the plan should be represented as an XML element with appropriate attributes.",
-          "dependencies": [
-            2,
-            "40.2"
-          ],
-          "details": "Define the XML schema for the implementation plan. Implement a function to convert the AI-generated plan into the defined XML format. Ensure proper XML syntax and validation.",
-          "status": "pending"
-        },
-        {
-          "id": 4,
-          "title": "Error Handling and Output",
-          "description": "Implement error handling for all steps, including API failures and XML formatting errors. Output the formatted XML plan to the console or a file.",
-          "dependencies": [
-            3
-          ],
-          "details": "Add try-except blocks to handle potential exceptions. Log errors for debugging. Provide informative error messages to the user. Output the XML plan in a user-friendly format.",
-          "status": "pending"
-        }
-      ]
+      "testStrategy": "Testing should verify:\n\n1. Command correctly identifies and retrieves content for both task and subtask IDs\n2. Implementation plans are properly generated and formatted with XML tags and timestamps\n3. Plans are correctly appended to the implementation details section without overwriting existing content\n4. The '--research' flag successfully switches the backend from Claude to Perplexity\n5. Appropriate error messages are displayed for invalid IDs or API failures\n6. Confirmation card is displayed after successful plan creation\n\nTest cases should include:\n- Running 'plan --id 123' on an existing task\n- Running 'plan --id 123.1' on an existing subtask\n- Running 'plan --id 123 --research' to test the Perplexity integration\n- Running 'plan --id 999' with a non-existent ID to verify error handling\n- Running the command on tasks with existing implementation plans to ensure proper appending\n\nManually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements."
     },
     {
       "id": 41,
@@ -3352,7 +3311,7 @@
       "id": 31,
       "title": "Implement Integration Tests for Unified AI Service",
       "description": "Implement integration tests for `ai-services-unified.js`. These tests should verify the correct routing to different provider modules based on configuration and ensure the unified service functions (`generateTextService`, `generateObjectService`, etc.) work correctly when called from modules like `task-manager.js`. [Updated: 5/2/2025] [Updated: 5/2/2025] [Updated: 5/2/2025] [Updated: 5/2/2025]",
-      "status": "done",
+      "status": "pending",
      "dependencies": [
        "61.18"
      ],
@@ -3467,7 +3426,7 @@
       "title": "Remove all unnecessary console logs",
       "description": "",
       "details": "<info added on 2025-05-02T20:47:07.566Z>\n1. Identify all files within the project directory that contain console log statements.\n2. Use a code editor or IDE with search functionality to locate all instances of console.log().\n3. Review each console log statement to determine if it is necessary for debugging or logging purposes.\n4. For each unnecessary console log, remove the statement from the code.\n5. Ensure that the removal of console logs does not affect the functionality of the application.\n6. Test the application thoroughly to confirm that no errors are introduced by the removal of these logs.\n7. Commit the changes to the version control system with a message indicating the cleanup of console logs.\n</info added on 2025-05-02T20:47:07.566Z>\n<info added on 2025-05-02T20:47:56.080Z>\nHere are more detailed steps for removing unnecessary console logs:\n\n1. Identify all files within the project directory that contain console log statements:\n - Use grep or similar tools: `grep -r \"console.log\" --include=\"*.js\" --include=\"*.jsx\" --include=\"*.ts\" --include=\"*.tsx\" ./src`\n - Alternatively, use your IDE's project-wide search functionality with regex pattern `console\\.(log|debug|info|warn|error)`\n\n2. Categorize console logs:\n - Essential logs: Error reporting, critical application state changes\n - Debugging logs: Temporary logs used during development\n - Informational logs: Non-critical information that might be useful\n - Redundant logs: Duplicated information or trivial data\n\n3. Create a spreadsheet or document to track:\n - File path\n - Line number\n - Console log content\n - Category (essential/debugging/informational/redundant)\n - Decision (keep/remove)\n\n4. Apply these specific removal criteria:\n - Remove all logs with comments like \"TODO\", \"TEMP\", \"DEBUG\"\n - Remove logs that only show function entry/exit without meaningful data\n - Remove logs that duplicate information already available in the UI\n - Keep logs related to error handling or critical user actions\n - Consider replacing some logs with proper error handling\n\n5. For logs you decide to keep:\n - Add clear comments explaining why they're necessary\n - Consider moving them to a centralized logging service\n - Implement log levels (debug, info, warn, error) if not already present\n\n6. Use search and replace with regex to batch remove similar patterns:\n - Example: `console\\.log\\(\\s*['\"]Processing.*?['\"]\\s*\\);`\n\n7. After removal, implement these testing steps:\n - Run all unit tests\n - Check browser console for any remaining logs during manual testing\n - Verify error handling still works properly\n - Test edge cases where logs might have been masking issues\n\n8. Consider implementing a linting rule to prevent unnecessary console logs in future code:\n - Add ESLint rule \"no-console\" with appropriate exceptions\n - Configure CI/CD pipeline to fail if new console logs are added\n\n9. Document any logging standards for the team to follow going forward.\n\n10. After committing changes, monitor the application in staging environment to ensure no critical information is lost.\n</info added on 2025-05-02T20:47:56.080Z>",
-      "status": "done",
+      "status": "pending",
      "dependencies": [],
      "parentTaskId": 61
     },
@@ -5,47 +5,6 @@ set -u
 # Prevent errors in pipelines from being masked.
 set -o pipefail
 
-# --- Default Settings ---
-run_verification_test=true
-
-# --- Argument Parsing ---
-# Simple loop to check for the skip flag
-# Note: This needs to happen *before* the main block piped to tee
-# if we want the decision logged early. Or handle args inside.
-# Let's handle it before for clarity.
-processed_args=()
-while [[ $# -gt 0 ]]; do
-  case "$1" in
-    --skip-verification)
-      run_verification_test=false
-      echo "[INFO] Argument '--skip-verification' detected. Fallback verification will be skipped."
-      shift # Consume the flag
-      ;;
-    --analyze-log)
-      # Keep the analyze-log flag handling separate for now
-      # It exits early, so doesn't conflict with the main run flags
-      processed_args+=("$1")
-      if [[ $# -gt 1 ]]; then
-        processed_args+=("$2")
-        shift 2
-      else
-        shift 1
-      fi
-      ;;
-    *)
-      # Unknown argument, pass it along or handle error
-      # For now, just pass it along in case --analyze-log needs it later
-      processed_args+=("$1")
-      shift
-      ;;
-  esac
-done
-# Restore processed arguments ONLY if the array is not empty
-if [ ${#processed_args[@]} -gt 0 ]; then
-  set -- "${processed_args[@]}"
-fi
-
-
 # --- Configuration ---
 # Assumes script is run from the project root (claude-task-master)
 TASKMASTER_SOURCE_DIR="." # Current directory is the source
@@ -65,7 +24,7 @@ source "$TASKMASTER_SOURCE_DIR/tests/e2e/e2e_helpers.sh"
 export -f log_info log_success log_error log_step _format_duration _get_elapsed_time_for_log
 
 # --- Argument Parsing for Analysis-Only Mode ---
-# This remains the same, as it exits early if matched
+# Check if the first argument is --analyze-log
 if [ "$#" -ge 1 ] && [ "$1" == "--analyze-log" ]; then
   LOG_TO_ANALYZE=""
   # Check if a log file path was provided as the second argument
@@ -212,13 +171,6 @@ log_step() {
 # called *inside* this block depend on it. If not, it can be removed.
 start_time_for_helpers=$(date +%s) # Keep if needed by helpers called inside this block
 
-# Log the verification decision
-if [ "$run_verification_test" = true ]; then
-  log_info "Fallback verification test will be run as part of this E2E test."
-else
-  log_info "Fallback verification test will be SKIPPED (--skip-verification flag detected)."
-fi
-
 # --- Dependency Checks ---
 log_step "Checking for dependencies (jq)"
 if ! command -v jq &> /dev/null; then
@@ -353,7 +305,6 @@ log_step() {
 # === End Model Commands Test ===
 
 # === Fallback Model generateObjectService Verification ===
-if [ "$run_verification_test" = true ]; then
 log_step "Starting Fallback Model (generateObjectService) Verification (Calls separate script)"
 verification_script_path="$ORIGINAL_DIR/tests/e2e/run_fallback_verification.sh"
 
@@ -378,9 +329,6 @@ log_step() {
 # Decide whether to exit or continue
 # exit 1
 fi
-else
-  log_info "Skipping Fallback Verification test as requested by flag."
-fi
 # === END Verification Section ===
 
 
@@ -57,19 +57,24 @@ log_step() {
 # --- Signal Handling ---
 # Global variable to hold child PID
 child_pid=0
-# Use a persistent log file name
-PROGRESS_LOG_FILE="fallback_verification_progress.log"
+# Keep track of the summary file for cleanup
+verification_summary_file="fallback_verification_summary.log" # Temp file in cwd
 
 cleanup() {
   echo "" # Newline after ^C
-  log_error "Interrupt received. Cleaning up any running child process..."
+  log_error "Interrupt received. Cleaning up..."
   if [ "$child_pid" -ne 0 ]; then
     log_info "Killing child process (PID: $child_pid) and its group..."
+    # Kill the process group (timeout and task-master) - TERM first, then KILL
     kill -TERM -- "-$child_pid" 2>/dev/null || kill -KILL -- "-$child_pid" 2>/dev/null
-    child_pid=0
+    child_pid=0 # Reset pid after attempting kill
   fi
-  # DO NOT delete the progress log file on interrupt
-  log_info "Progress saved in: $PROGRESS_LOG_FILE"
+  # Clean up temporary file if it exists
+  if [ -f "$verification_summary_file" ]; then
+    log_info "Removing temporary summary file: $verification_summary_file"
+    rm -f "$verification_summary_file"
+  fi
+  # Ensure script exits after cleanup
   exit 130 # Exit with code indicating interrupt
 }
 
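The hunk above swaps the persistent progress log for a temporary summary file that the interrupt handler removes before exiting. A minimal, self-contained sketch of that trap-and-cleanup pattern, with variable names taken from the diff and everything else simplified for illustration (this is not the script itself):

```bash
#!/usr/bin/env bash
# Illustrative sketch only: trap-based cleanup of a child process group and a temp file.
verification_summary_file="fallback_verification_summary.log"
child_pid=0

cleanup() {
  # Kill the child's process group if one is running, then drop the temp summary file.
  if [ "$child_pid" -ne 0 ]; then
    kill -TERM -- "-$child_pid" 2>/dev/null || kill -KILL -- "-$child_pid" 2>/dev/null
  fi
  [ -f "$verification_summary_file" ] && rm -f "$verification_summary_file"
  exit 130  # conventional exit code after SIGINT
}
trap cleanup INT TERM

# Example long-running child started in the background (stands in for the timeout/task-master pair):
sleep 300 &
child_pid=$!
wait "$child_pid"
```

The `kill -- "-$child_pid"` form targets the whole process group, which is why the diff's comment mentions terminating both the `timeout` wrapper and the `task-master` process it spawns.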
@@ -121,10 +126,13 @@ fi
 echo "[INFO] Now operating inside: $(pwd)"
 
 # --- Now we are inside the target run directory ---
+# Define overall_start_time and test_step_count *after* changing dir
 overall_start_time=$(date +%s)
-test_step_count=0
+test_step_count=0 # Local step counter for this script
 
+# Log that helpers were sourced (now that functions are available)
+# No longer sourcing, just log start
 log_info "Starting fallback verification script execution in $(pwd)"
-log_info "Progress will be logged to: $(pwd)/$PROGRESS_LOG_FILE"
 
 # --- Dependency Checks ---
 log_step "Checking for dependencies (jq) in verification script"
@@ -135,9 +143,9 @@ fi
 log_success "Dependency 'jq' found."
 
 # --- Verification Logic ---
-log_step "Starting/Resuming Fallback Model (generateObjectService) Verification"
-# Ensure progress log exists, create if not
-touch "$PROGRESS_LOG_FILE"
+log_step "Starting Fallback Model (generateObjectService) Verification"
+# Initialise summary file (path defined earlier)
+echo "--- Fallback Verification Summary ---" > "$verification_summary_file"
 
 # Ensure the supported models file exists (using absolute path)
 if [ ! -f "$SUPPORTED_MODELS_FILE" ]; then
@@ -158,41 +166,36 @@ if ! jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' tasks/
 fi
 log_info "Subtask 1.1 found in $(pwd)/tasks/tasks.json, proceeding with verification."
 
-# Read providers and models using jq
+# Read providers and models using jq (using absolute path to models file)
 jq -c 'to_entries[] | .key as $provider | .value[] | select(.allowed_roles[]? == "fallback") | {provider: $provider, id: .id}' "$SUPPORTED_MODELS_FILE" | while IFS= read -r model_info; do
   provider=$(echo "$model_info" | jq -r '.provider')
   model_id=$(echo "$model_info" | jq -r '.id')
   flag="" # Default flag
 
-  # Check if already tested
-  # Use grep -Fq for fixed string and quiet mode
-  if grep -Fq "${provider},${model_id}," "$PROGRESS_LOG_FILE"; then
-    log_info "--- Skipping: $provider / $model_id (already tested, result in $PROGRESS_LOG_FILE) ---"
-    continue
-  fi
-
-  log_info "--- Verifying: $provider / $model_id ---"
-
   # Determine provider flag
   if [ "$provider" == "openrouter" ]; then
     flag="--openrouter"
   elif [ "$provider" == "ollama" ]; then
     flag="--ollama"
+  # Add elif for other providers requiring flags
   fi
 
+  log_info "--- Verifying: $provider / $model_id ---"
 
   # 1. Set the main model
+  # Ensure task-master command is available (might need linking if run totally standalone)
   if ! command -v task-master &> /dev/null; then
-    log_error "task-master command not found."
+    log_error "task-master command not found. Ensure it's linked globally or available in PATH."
+    # Attempt to link if possible? Risky. Better to instruct user.
     echo "[INSTRUCTION] Please run 'npm link task-master-ai' in the project root first."
     exit 1
   fi
   log_info "Setting main model to $model_id ${flag:+using flag $flag}..."
   set_model_cmd="task-master models --set-main \"$model_id\" $flag"
-  model_set_status="SUCCESS"
-  if ! eval $set_model_cmd > /dev/null 2>&1; then
-    log_error "Failed to set main model for $provider / $model_id. Skipping test."
-    echo "$provider,$model_id,SET_MODEL_FAILED" >> "$PROGRESS_LOG_FILE"
-    continue # Skip the actual test if setting fails
+  if ! eval $set_model_cmd > /dev/null 2>&1; then # Hide verbose output of models cmd
+    log_error "Failed to set main model for $provider / $model_id. Skipping."
+    echo "$provider,$model_id,SET_MODEL_FAILED" >> "$verification_summary_file"
+    continue
   fi
   log_info "Set main model ok."
 
@@ -200,69 +203,69 @@ jq -c 'to_entries[] | .key as $provider | .value[] | select(.allowed_roles[]? ==
   log_info "Running update-subtask --id=1.1 --prompt='Test generateObjectService' (timeout 120s)"
   update_subtask_output_file="update_subtask_raw_output_${provider}_${model_id//\//_}.log"
 
+  # Run timeout command in the background
   timeout 120s task-master update-subtask --id=1.1 --prompt="Simple test prompt to verify generateObjectService call." > "$update_subtask_output_file" 2>&1 &
-  child_pid=$!
+  child_pid=$! # Store the PID of the background process (timeout)
 
+  # Wait specifically for the child process PID
   wait "$child_pid"
   update_subtask_exit_code=$?
-  child_pid=0
+  child_pid=0 # Reset child_pid after it finishes or is killed/interrupted
 
-  # 3. Check result and log persistently
-  result_status=""
+  # 3. Check for success
+  # SIGINT = 130 (128 + 2), SIGTERM = 143 (128 + 15)
+  # Check exit code AND grep for the success message in the output file
   if [ $update_subtask_exit_code -eq 0 ] && grep -q "Successfully updated subtask #1.1" "$update_subtask_output_file"; then
+    # Success (Exit code 0 AND success message found)
     log_success "update-subtask succeeded for $provider / $model_id (Verified Output)."
-    result_status="SUCCESS"
+    echo "$provider,$model_id,SUCCESS" >> "$verification_summary_file"
   elif [ $update_subtask_exit_code -eq 124 ]; then
+    # Timeout
     log_error "update-subtask TIMED OUT for $provider / $model_id. Check $update_subtask_output_file."
-    result_status="FAILED_TIMEOUT"
+    echo "$provider,$model_id,FAILED_TIMEOUT" >> "$verification_summary_file"
   elif [ $update_subtask_exit_code -eq 130 ] || [ $update_subtask_exit_code -eq 143 ]; then
+    # Interrupted by trap
     log_error "update-subtask INTERRUPTED for $provider / $model_id."
-    result_status="INTERRUPTED" # Record interruption
-    # Don't exit the loop, allow script to finish or be interrupted again
+    # Trap handler already exited the script. No need to write to summary.
+    # If we reach here unexpectedly, something is wrong with the trap.
-  else
+  else # Covers non-zero exit code OR zero exit code but missing success message
+    # Other failure
     log_error "update-subtask FAILED for $provider / $model_id (Exit Code: $update_subtask_exit_code). Check $update_subtask_output_file."
-    result_status="FAILED"
+    echo "$provider,$model_id,FAILED" >> "$verification_summary_file"
   fi
 
-  # Append result to the persistent log file
-  echo "$provider,$model_id,$result_status" >> "$PROGRESS_LOG_FILE"
 
 done # End of fallback verification loop
 
 # --- Generate Final Verification Report to STDOUT ---
-# Report reads from the persistent PROGRESS_LOG_FILE
 echo ""
 echo "--- Fallback Model Verification Report (via $0) ---"
 echo "Executed inside run directory: $(pwd)"
-echo "Progress log: $(pwd)/$PROGRESS_LOG_FILE"
 echo ""
 echo "Test Command: task-master update-subtask --id=1.1 --prompt=\"...\" (tests generateObjectService)"
 echo "Models were tested by setting them as the 'main' model temporarily."
-echo "Results based on exit code and output verification:"
+echo "Results based on exit code of the test command:"
 echo ""
 echo "Models CONFIRMED to support generateObjectService (Keep 'fallback' role):"
-awk -F',' '$3 == "SUCCESS" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
+awk -F',' '$3 == "SUCCESS" { print "- " $1 " / " $2 }' "$verification_summary_file" | sort
 echo ""
-echo "Models FAILED generateObjectService test (Suggest REMOVING 'fallback' role):"
+echo "Models FAILED generateObjectService test (Suggest REMOVING 'fallback' role from supported-models.json):"
-awk -F',' '$3 == "FAILED" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
+awk -F',' '$3 == "FAILED" { print "- " $1 " / " $2 }' "$verification_summary_file" | sort
 echo ""
-echo "Models TIMED OUT during test (Suggest REMOVING 'fallback' role):"
+echo "Models TIMED OUT during generateObjectService test (Likely Failure - Suggest REMOVING 'fallback' role):"
-awk -F',' '$3 == "FAILED_TIMEOUT" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
+awk -F',' '$3 == "FAILED_TIMEOUT" { print "- " $1 " / " $2 }' "$verification_summary_file" | sort
 echo ""
-echo "Models where setting the model failed (Inconclusive):"
+echo "Models where setting the model failed (Inconclusive - investigate separately):"
-awk -F',' '$3 == "SET_MODEL_FAILED" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
+awk -F',' '$3 == "SET_MODEL_FAILED" { print "- " $1 " / " $2 }' "$verification_summary_file" | sort
-echo ""
-echo "Models INTERRUPTED during test (Inconclusive - Rerun):"
-awk -F',' '$3 == "INTERRUPTED" { print "- " $1 " / " $2 }' "$PROGRESS_LOG_FILE" | sort
 echo ""
 echo "-------------------------------------------------------"
 echo ""
 
-# Don't clean up the progress log
-# if [ -f "$PROGRESS_LOG_FILE" ]; then
-#   rm "$PROGRESS_LOG_FILE"
-# fi
+# Clean up temporary summary file
+if [ -f "$verification_summary_file" ]; then
+  rm "$verification_summary_file"
+fi
 
-log_info "Finished Fallback Model (generateObjectService) Verification Script"
+log_step "Finished Fallback Model (generateObjectService) Verification Script"
 
 # Remove trap before exiting normally
 trap - INT TERM
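The report section in this hunk is driven entirely by the comma-separated summary file written inside the loop. A small sketch of that append-then-aggregate pattern, using placeholder provider and model values rather than real results:

```bash
# Sketch only: illustrates the "provider,model,STATUS" format the report section parses.
verification_summary_file="fallback_verification_summary.log"
echo "--- Fallback Verification Summary ---" > "$verification_summary_file"
echo "some-provider,some-model-id,SUCCESS" >> "$verification_summary_file"
echo "other-provider,other-model-id,FAILED" >> "$verification_summary_file"

# The report filters on the third CSV field; the header line has no third field, so it never matches.
awk -F',' '$3 == "SUCCESS" { print "- " $1 " / " $2 }' "$verification_summary_file" | sort
awk -F',' '$3 == "FAILED"  { print "- " $1 " / " $2 }' "$verification_summary_file" | sort
```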