Compare commits

85 Commits

| Author | SHA1 | Date |
|---|---|---|
| | e5d2b61297 | |
| | 0726bc966c | |
| | 7fea9968ef | |
| | f7fbdd6755 | |
| | c99df64f65 | |
| | 5eafc5ea11 | |
| | a33d6ecfeb | |
| | dd96f51179 | |
| | 2852149a47 | |
| | 43e0025f4c | |
| | 598e687067 | |
| | f38abd6843 | |
| | 24e9206da0 | |
| | 8d9fcf2064 | |
| | 56a415ef79 | |
| | f081bba83c | |
| | 6fd5e23396 | |
| | e4456b11bc | |
| | 295087a5b8 | |
| | 5f2b7323ad | |
| | 9ddc521757 | |
| | e7087cf88f | |
| | 08f86f19c3 | |
| | f272748965 | |
| | 15e15a1f17 | |
| | 3a30e9acd4 | |
| | 15286c029d | |
| | c39e5158b4 | |
| | 4bda8f4d76 | |
| | 49976e864b | |
| | 30b873a7da | |
| | ab37859a7e | |
| | e704ba12fd | |
| | 64b2d8f79e | |
| | bbb4bbcc11 | |
| | 8e38348203 | |
| | 01b651bddc | |
| | 0840ad8316 | |
| | 5c726dc542 | |
| | 21d988691b | |
| | 21839b1cd6 | |
| | 6160089b8e | |
| | 82bb50619f | |
| | 898f15e699 | |
| | 1a157567dc | |
| | eb8a3a85a1 | |
| | 59a4ec9e1a | |
| | 403d7b00ca | |
| | b78614b44e | |
| | 19d795d63f | |
| | 07ec89ab17 | |
| | eaa7f24280 | |
| | b3d43c5992 | |
| | c5de4f8b68 | |
| | b9299c5af0 | |
| | 122a0465d8 | |
| | cf2c06697a | |
| | 727f1ec4eb | |
| | 648353794e | |
| | a2a3229fd0 | |
| | b592dff8bc | |
| | e9d1bc2385 | |
| | 030694bb96 | |
| | 3e0f696c49 | |
| | 4b0c9d9af6 | |
| | 3fa91f56e5 | |
| | e69ac5d5cf | |
| | c60c9354a4 | |
| | 30b895be2c | |
| | 9995075093 | |
| | b62cb1bbe7 | |
| | 7defcba465 | |
| | 3e838ed34b | |
| | 1b8c320c57 | |
| | 5da5b59bde | |
| | 04f44a2d3d | |
| | 36fe838fd5 | |
| | 415b1835d4 | |
| | 78112277b3 | |
| | 2bb4260966 | |
| | 3a2325a963 | |
| | 1bd6d4f246 | |
| | a09a2d0967 | |
| | 02e0db09df | |
| | 3bcce8d70e | |
@@ -153,7 +153,7 @@ When users initialize Taskmaster on existing projects:
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master

-The parse-prd's `--append` flag enables the user to parse multple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
+The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
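As a rough sketch of that flow (the PRD paths and tag name here are illustrative, not taken from the diff above):

```bash
# Parse an initial PRD into its own tag context
task-master parse-prd .taskmaster/docs/refactor-api.prd.txt --tag=refactor-api

# Append a second, smaller PRD into the same tag without overwriting the existing tasks
task-master parse-prd .taskmaster/docs/api-hardening.prd.txt --tag=refactor-api --append
```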
### Workflow Transition Examples

@@ -272,7 +272,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
-  * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using `all`.) (CLI: `-i, --id <ids>`)
+  * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
  * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
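For example (the task IDs are illustrative):

```bash
# Remove subtasks from two specific parent tasks
task-master clear-subtasks --id=16,18

# Or strip subtasks from every parent task in the current tag context
task-master clear-subtasks --all
```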
@@ -29,6 +29,8 @@
    "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
    "userId": "1234567890",
    "azureBaseURL": "https://your-endpoint.azure.com/",
-    "defaultTag": "master"
-  }
+    "defaultTag": "master",
+    "responseLanguage": "English"
+  },
+  "claudeCode": {}
 }
CHANGELOG.md (231 changes)
@@ -1,11 +1,42 @@
# task-master-ai

## 0.19.0

### Minor Changes

- [#897](https://github.com/eyaltoledano/claude-task-master/pull/897) [`dd96f51`](https://github.com/eyaltoledano/claude-task-master/commit/dd96f51179d9901f6ae854b0c60f0bcc8c13ae0d) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Adds support for gemini-cli as a provider, enabling free or subscription use through Google Accounts and paid Gemini Cloud Assist (GCA) subscriptions.

- [#884](https://github.com/eyaltoledano/claude-task-master/pull/884) [`5eafc5e`](https://github.com/eyaltoledano/claude-task-master/commit/5eafc5ea112c91326bb8abda7a78d7c2a4fa16a1) Thanks [@geoh](https://github.com/geoh)! - Added option for the AI to determine the number of tasks required based entirely on complexity

- [#872](https://github.com/eyaltoledano/claude-task-master/pull/872) [`f7fbdd6`](https://github.com/eyaltoledano/claude-task-master/commit/f7fbdd6755c4a1ee3ab2a3f435961f249fa19c15) Thanks [@geoh](https://github.com/geoh)! - Add advanced settings for Claude Code AI Provider

- [#870](https://github.com/eyaltoledano/claude-task-master/pull/870) [`6fd5e23`](https://github.com/eyaltoledano/claude-task-master/commit/6fd5e23396a7e348ea2300e67cbd0c97141c081f) Thanks [@nishedcob](https://github.com/nishedcob)! - Include additional Anthropic models running on Bedrock in what is supported

- [#510](https://github.com/eyaltoledano/claude-task-master/pull/510) [`c99df64`](https://github.com/eyaltoledano/claude-task-master/commit/c99df64f651fb40bae5d7979ee2b2428586f44d3) Thanks [@shenysun](https://github.com/shenysun)! - Add support for custom response language

### Patch Changes

- [#892](https://github.com/eyaltoledano/claude-task-master/pull/892) [`56a415e`](https://github.com/eyaltoledano/claude-task-master/commit/56a415ef795c5aa0e52e7419af8d4f4862611a8c) Thanks [@joedanz](https://github.com/joedanz)! - Ensure projectRoot is a string (potential WSL fix)

- [#856](https://github.com/eyaltoledano/claude-task-master/pull/856) [`43e0025`](https://github.com/eyaltoledano/claude-task-master/commit/43e0025f4c5870a3c56682cbb8fe0348d711953b) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix bulk update tag corruption in tagged task lists

- [#857](https://github.com/eyaltoledano/claude-task-master/pull/857) [`598e687`](https://github.com/eyaltoledano/claude-task-master/commit/598e687067d1af44f1a9916266ae94af3e752067) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Fix expand-task to use tag-specific complexity reports

  The expand-task function now correctly uses complexity reports specific to the current tag context (e.g., task-complexity-report_feature-branch.json) instead of always using the default task-complexity-report.json file. This enables proper task expansion behavior when working with multiple tag contexts.
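A hedged sketch of the workflow this enables (the tag name and task ID are illustrative, and the exact expand flags may differ):

```bash
# Generate a complexity report scoped to a feature tag
task-master analyze-complexity --tag=feature-branch

# Switch to that tag; expansion then reads task-complexity-report_feature-branch.json
# instead of the default task-complexity-report.json
task-master use-tag feature-branch
task-master expand --id=5
```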
- [#855](https://github.com/eyaltoledano/claude-task-master/pull/855) [`e4456b1`](https://github.com/eyaltoledano/claude-task-master/commit/e4456b11bc3ae46e120d244fc32c1807a8a58a57) Thanks [@joedanz](https://github.com/joedanz)! - Fix .gitignore missing trailing newline during project initialization

- [#846](https://github.com/eyaltoledano/claude-task-master/pull/846) [`59a4ec9`](https://github.com/eyaltoledano/claude-task-master/commit/59a4ec9e1a452079e5c78c00428d140f13a1c8f6) Thanks [@joedanz](https://github.com/joedanz)! - Default to Cursor profile for MCP init when no rules specified

- [#852](https://github.com/eyaltoledano/claude-task-master/pull/852) [`f38abd6`](https://github.com/eyaltoledano/claude-task-master/commit/f38abd68436ea5d093b2e22c2b8520b6e6906251) Thanks [@hrmshandy](https://github.com/hrmshandy)! - Fixes a critical issue where subtask generation fails on gemini-2.5-pro unless explicitly prompted to return the 'details' field as a string, not an object

- [#908](https://github.com/eyaltoledano/claude-task-master/pull/908) [`24e9206`](https://github.com/eyaltoledano/claude-task-master/commit/24e9206da0d5d3f2f7819ed94fa0c9b459fc9f9b) Thanks [@joedanz](https://github.com/joedanz)! - Fix rules command to use reliable project root detection like other commands

## 0.18.0

### Minor Changes

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`

- For example:
- `OPENAI_BASE_URL`
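For instance, a provider's base URL could be pointed at a self-hosted gateway before configuring models; the URL below is purely illustrative:

```bash
# Route OpenAI-compatible traffic through a custom endpoint (illustrative URL)
export OPENAI_BASE_URL="https://llm-gateway.example.internal/v1"

# Subsequent model configuration and commands pick up the override
task-master models --setup
```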
@@ -20,13 +51,11 @@
**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.

This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.

- Resolves #338

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Make task-master more compatible with the "o" family models of OpenAI

Now works well with:

- o3
- o3-mini
- etc.
@@ -34,7 +63,6 @@
- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - - **Git Worktree Detection:**

- Now properly skips Git initialization when inside existing Git worktree
- Prevents accidental nested repository creation
- **Flag System Overhaul:**
@@ -48,7 +76,6 @@
- Supports both CLI and MCP interfaces with proper parameter passing

**Implementation Details:**

- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734
@@ -58,7 +85,6 @@
Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.

Key features:

- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
@@ -76,7 +102,6 @@
### Patch Changes

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption

- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
@@ -95,14 +120,12 @@
- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with task creation/update where subtasks are being created like id: <parent_task>.<subtask> instead of just id: <subtask>

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixes issue with expand CLI command "Complexity report not found"

- Closes #735
- Closes #728

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Store tasks in Git by default

- [#840](https://github.com/eyaltoledano/claude-task-master/pull/840) [`b40139c`](https://github.com/eyaltoledano/claude-task-master/commit/b40139ca0517fd76aea4f41d0ed4c10e658a5d2b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure

- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
@@ -120,7 +143,6 @@
### Minor Changes

- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`

- For example:
- `OPENAI_BASE_URL`
@@ -135,13 +157,11 @@
**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.

This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.

- Resolves #338

- [#804](https://github.com/eyaltoledano/claude-task-master/pull/804) [`1b8c320`](https://github.com/eyaltoledano/claude-task-master/commit/1b8c320c570473082f1eb4bf9628bff66e799092) Thanks [@ejones40](https://github.com/ejones40)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker

- [#743](https://github.com/eyaltoledano/claude-task-master/pull/743) [`a2a3229`](https://github.com/eyaltoledano/claude-task-master/commit/a2a3229fd01e24a5838f11a3938a77250101e184) Thanks [@joedanz](https://github.com/joedanz)! - - **Git Worktree Detection:**

- Now properly skips Git initialization when inside existing Git worktree
- Prevents accidental nested repository creation
- **Flag System Overhaul:**
@@ -155,7 +175,6 @@
- Supports both CLI and MCP interfaces with proper parameter passing

**Implementation Details:**

- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734
@@ -165,7 +184,6 @@
Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.

Key features:

- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
@@ -183,7 +201,6 @@
### Patch Changes

- [#827](https://github.com/eyaltoledano/claude-task-master/pull/827) [`5da5b59`](https://github.com/eyaltoledano/claude-task-master/commit/5da5b59bdeeb634dcb3adc7a9bc0fc37e004fa0c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption

- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
@@ -200,7 +217,103 @@
- [#835](https://github.com/eyaltoledano/claude-task-master/pull/835) [`727f1ec`](https://github.com/eyaltoledano/claude-task-master/commit/727f1ec4ebcbdd82547784c4c113b666af7e122e) Thanks [@joedanz](https://github.com/joedanz)! - Store tasks in Git by default

- [#822](https://github.com/eyaltoledano/claude-task-master/pull/822) [`1bd6d4f`](https://github.com/eyaltoledano/claude-task-master/commit/1bd6d4f2468070690e152e6e63e15a57bc550d90) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure
- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators

This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`

- [#633](https://github.com/eyaltoledano/claude-task-master/pull/633) [`3a2325a`](https://github.com/eyaltoledano/claude-task-master/commit/3a2325a963fed82377ab52546eedcbfebf507a7e) Thanks [@nmarley](https://github.com/nmarley)! - Fix weird `task-master init` bug when using in certain environments

- [#831](https://github.com/eyaltoledano/claude-task-master/pull/831) [`b592dff`](https://github.com/eyaltoledano/claude-task-master/commit/b592dff8bc5c5d7966843fceaa0adf4570934336) Thanks [@joedanz](https://github.com/joedanz)! - Rename Roo Code Boomerang role to Orchestrator

- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve mcp keys check in cursor

## 0.17.1

### Patch Changes

- [#789](https://github.com/eyaltoledano/claude-task-master/pull/789) [`8cde6c2`](https://github.com/eyaltoledano/claude-task-master/commit/8cde6c27087f401d085fe267091ae75334309d96) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix contextGatherer bug when adding a task `Cannot read properties of undefined (reading 'forEach')`

## 0.18.0-rc.0

### Minor Changes

- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
- For example:
- `OPENAI_BASE_URL`

- [#460](https://github.com/eyaltoledano/claude-task-master/pull/460) [`a09a2d0`](https://github.com/eyaltoledano/claude-task-master/commit/a09a2d0967a10276623e3f3ead3ed577c15ce62f) Thanks [@joedanz](https://github.com/joedanz)! - Added comprehensive rule profile management:

**New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
**Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
**Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
**Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
**Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
**Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.

This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.
- Resolves #338

- [#804](https://github.com/eyaltoledano/claude-task-master/pull/804) [`1b8c320`](https://github.com/eyaltoledano/claude-task-master/commit/1b8c320c570473082f1eb4bf9628bff66e799092) Thanks [@ejones40](https://github.com/ejones40)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker

- [#743](https://github.com/eyaltoledano/claude-task-master/pull/743) [`a2a3229`](https://github.com/eyaltoledano/claude-task-master/commit/a2a3229fd01e24a5838f11a3938a77250101e184) Thanks [@joedanz](https://github.com/joedanz)! - - **Git Worktree Detection:**
- Now properly skips Git initialization when inside existing Git worktree
- Prevents accidental nested repository creation
- **Flag System Overhaul:**
- `--git`/`--no-git` controls repository initialization
- `--aliases`/`--no-aliases` consistently manages shell alias creation
- `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
- `--dry-run` accurately previews all initialization behaviors
- **GitTasks Functionality:**
- New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
- New `--no-git-tasks` flag excludes task files from Git (default behavior)
- Supports both CLI and MCP interfaces with proper parameter passing
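As a rough sketch of how these init flags might combine (the profile names and flag choices are illustrative):

```bash
# Initialize with the Cursor and Roo rule profiles, keep shell aliases,
# and leave task files out of Git
task-master init -r cursor,roo --aliases --no-git-tasks

# Preview the same initialization without writing anything
task-master init -r cursor,roo --aliases --no-git-tasks --dry-run
```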
**Implementation Details:**
- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734

- [#829](https://github.com/eyaltoledano/claude-task-master/pull/829) [`4b0c9d9`](https://github.com/eyaltoledano/claude-task-master/commit/4b0c9d9af62d00359fca3f43283cf33223d410bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code provider support

Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.

Key features:
- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
- Lazy loading ensures the provider only loads when requested
- Full integration with existing Task Master commands and workflows
- Comprehensive test coverage for reliability
- New --claude-code flag for the models command

Users can now configure Claude Code models with:
task-master models --set-main sonnet --claude-code
task-master models --set-research opus --claude-code

The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.

### Patch Changes

- [#827](https://github.com/eyaltoledano/claude-task-master/pull/827) [`5da5b59`](https://github.com/eyaltoledano/claude-task-master/commit/5da5b59bdeeb634dcb3adc7a9bc0fc37e004fa0c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected

- [#833](https://github.com/eyaltoledano/claude-task-master/pull/833) [`cf2c066`](https://github.com/eyaltoledano/claude-task-master/commit/cf2c06697a0b5b952fb6ca4b3c923e9892604d08) Thanks [@joedanz](https://github.com/joedanz)! - Call rules interactive setup during init

- [#826](https://github.com/eyaltoledano/claude-task-master/pull/826) [`7811227`](https://github.com/eyaltoledano/claude-task-master/commit/78112277b3caa4539e6e29805341a944799fb0e7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improves Amazon Bedrock support

- [#834](https://github.com/eyaltoledano/claude-task-master/pull/834) [`6483537`](https://github.com/eyaltoledano/claude-task-master/commit/648353794eb60d11ffceda87370a321ad310fbd7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with task creation/update where subtasks are being created like id: <parent_task>.<subtask> instead of just id: <subtask>

- [#835](https://github.com/eyaltoledano/claude-task-master/pull/835) [`727f1ec`](https://github.com/eyaltoledano/claude-task-master/commit/727f1ec4ebcbdd82547784c4c113b666af7e122e) Thanks [@joedanz](https://github.com/joedanz)! - Store tasks in Git by default

- [#822](https://github.com/eyaltoledano/claude-task-master/pull/822) [`1bd6d4f`](https://github.com/eyaltoledano/claude-task-master/commit/1bd6d4f2468070690e152e6e63e15a57bc550d90) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure
- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
@@ -228,7 +341,6 @@
The new `research` command provides AI-powered research capabilities that automatically gather relevant project context to answer your questions. The command intelligently selects context from multiple sources and supports interactive follow-up questions in CLI mode.

**Key Features:**

- **Intelligent Task Discovery**: Automatically finds relevant tasks and subtasks using fuzzy search based on your query keywords, supplementing any explicitly provided task IDs
- **Multi-Source Context**: Gathers context from tasks, files, project structure, and custom text to provide comprehensive answers
- **Interactive Follow-ups**: CLI users can ask follow-up questions that build on the conversation history while allowing fresh context discovery for each question
@@ -253,14 +365,12 @@
**Context Sources:**

- **Tasks**: Automatically discovers relevant tasks/subtasks via fuzzy search, plus any explicitly specified via `--id`
- **Files**: Include specific files via `--files` for code-aware responses
- **Project Tree**: Add `--tree` to include project structure overview
- **Custom Context**: Provide additional context via `--context` for domain-specific information
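An illustrative invocation combining these context sources (the question, task IDs, and file path are made up):

```bash
# Ask a question with explicit task context, one source file, and the project tree included,
# saving the resulting conversation to subtask 15.2
task-master research "How should we add rate limiting to the API?" \
  --id=15,23 --files=src/api/server.js --tree --save-to=15.2
```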
**Interactive Features (CLI only):**

- Follow-up questions that maintain conversation history
- Fresh fuzzy search for each follow-up to discover newly relevant tasks
- Cumulative context building across the conversation
@@ -271,7 +381,6 @@
**Save Functionality:**

The research command now supports saving complete conversation threads to tasks or subtasks:

- Save research results and follow-up conversations to any task (e.g., "15") or subtask (e.g., "15.2")
- Automatic timestamping and formatting of conversation history
- Validation of task/subtask existence before saving
@@ -289,7 +398,6 @@
**MCP Integration:**

- `saveTo` parameter for automatic saving to specified task/subtask ID
- Structured response format with telemetry data
- Silent operation mode for programmatic usage
@@ -302,12 +410,10 @@
Adds the `--append` flag to `update-task` command, enabling it to behave like `update-subtask` with timestamped information appending. This provides more flexible task updating options:

**CLI Enhancement:**

- `task-master update-task --id=5 --prompt="New info"` - Full task update (existing behavior)
- `task-master update-task --id=5 --append --prompt="Progress update"` - Append timestamped info to task details

**Full MCP Integration:**

- MCP tool `update_task` now supports `append` parameter
- Seamless integration with Cursor and other MCP clients
- Consistent behavior between CLI and MCP interfaces
@@ -317,7 +423,6 @@
- [#779](https://github.com/eyaltoledano/claude-task-master/pull/779) [`c0b3f43`](https://github.com/eyaltoledano/claude-task-master/commit/c0b3f432a60891550b00acb113dc877bd432995f) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add --tag flag support to core commands for multi-context task management. Commands like parse-prd, analyze-complexity, and others now support targeting specific task lists, enabling rapid prototyping and parallel development workflows.

Key features:

- parse-prd --tag=feature-name: Parse PRDs into separate task contexts on the fly
- analyze-complexity --tag=branch: Generate tag-specific complexity reports
- All task operations can target specific contexts while preserving other lists
@@ -330,7 +435,6 @@
**🏷️ Tagged Task Lists Architecture:**

The new tagged system fundamentally improves how tasks are organized:

- **Legacy Format**: `{ "tasks": [...] }`
- **New Tagged Format**: `{ "master": { "tasks": [...], "metadata": {...} }, "feature-xyz": { "tasks": [...], "metadata": {...} } }`
- **Automatic Migration**: Existing projects will seamlessly migrate to tagged format with zero user intervention
@@ -342,7 +446,6 @@
**🚀 Complete Tag Management Suite:**

**Core Tag Commands:**

- `task-master tags [--show-metadata]` - List all tags with task counts, completion stats, and metadata
- `task-master add-tag <name> [options]` - Create new tag contexts with optional task copying
- `task-master delete-tag <name> [--yes]` - Delete tags (and attached tasks) with double confirmation protection
@@ -353,7 +456,6 @@
**🤖 Full MCP Integration for Tag Management:**

Task Master's multi-context capabilities are now fully exposed through the MCP server, enabling powerful agentic workflows:

- **`list_tags`**: List all available tag contexts.
- **`add_tag`**: Programmatically create new tags.
- **`delete_tag`**: Remove tag contexts.
@@ -362,7 +464,6 @@
- **`copy_tag`**: Duplicate entire task contexts for experimentation.

**Tag Creation Options:**

- `--copy-from-current` - Copy tasks from currently active tag
- `--copy-from=<tag>` - Copy tasks from specific tag
- `--from-branch` - Creates a new tag using the active git branch name (for `add-tag` only)
@@ -372,7 +473,6 @@
**🎯 Universal --tag Flag Support:**

Every task operation now supports tag-specific execution:

- `task-master list --tag=feature-branch` - View tasks in specific context
- `task-master add-task --tag=experiment --prompt="..."` - Create tasks in specific tag
- `task-master parse-prd document.txt --tag=v2-redesign` - Parse PRDs into dedicated contexts
@@ -385,20 +485,17 @@
**📊 Enhanced Workflow Features:**

**Smart Context Switching:**

- `use-tag` command shows immediate next task after switching
- Automatic tag creation when targeting non-existent tags
- Current tag persistence across terminal sessions
- Branch-tag mapping for future Git integration

**Intelligent File Management:**

- Tag-specific complexity reports: `task-complexity-report_tagname.json`
- Master tag uses default filenames: `task-complexity-report.json`
- Automatic file isolation prevents cross-tag contamination

**Advanced Confirmation Logic:**

- Commands only prompt when target tag has existing tasks
- Empty tags allow immediate operations without confirmation
- Smart append vs overwrite detection
@@ -406,14 +503,12 @@
**🔄 Seamless Migration & Compatibility:**

**Zero-Disruption Migration:**

- Existing `tasks.json` files automatically migrate on first command
- Master tag receives proper metadata (creation date, description)
- Migration notice shown once with helpful explanation
- All existing commands work identically to before

**State Management:**

- `.taskmaster/state.json` tracks current tag and migration status
- Automatic state creation and maintenance
- Branch-tag mapping foundation for Git integration
@@ -421,7 +516,6 @@
- Grounds for future context additions

**Backward Compatibility:**

- All existing workflows continue unchanged
- Legacy commands work exactly as before
- Gradual adoption - users can ignore tags entirely if desired
@@ -430,25 +524,21 @@
**💡 Real-World Use Cases:**

**Team Collaboration:**

- `task-master add-tag alice --copy-from-current` - Create teammate-specific contexts
- `task-master add-tag bob --copy-from=master` - Onboard new team members
- `task-master use-tag alice` - Switch to teammate's work context

**Feature Development:**

- `task-master parse-prd feature-spec.txt --tag=user-auth` - Dedicated feature planning
- `task-master add-tag experiment --copy-from=user-auth` - Safe experimentation
- `task-master analyze-complexity --tag=user-auth` - Feature-specific analysis

**Release Management:**

- `task-master add-tag v2.0 --description="Next major release"` - Version-specific planning
- `task-master copy-tag master v2.1` - Release branch preparation
- `task-master use-tag hotfix` - Emergency fix context

**Project Phases:**

- `task-master add-tag research --description="Discovery phase"` - Research tasks
- `task-master add-tag implementation --copy-from=research` - Development phase
- `task-master add-tag testing --copy-from=implementation` - QA phase
@@ -456,21 +546,18 @@
**🛠️ Technical Implementation:**

**Data Structure:**

- Tagged format with complete isolation between contexts
- Rich metadata per tag (creation date, description, update tracking)
- Automatic metadata enhancement for existing tags
- Clean separation of tag data and internal state

**Performance Optimizations:**

- Dynamic task counting without stored counters
- Efficient tag resolution and caching
- Minimal file I/O with smart data loading
- Responsive table layouts adapting to terminal width

**Error Handling:**

- Comprehensive validation for tag names (alphanumeric, hyphens, underscores)
- Reserved name protection (master, main, default)
- Graceful handling of missing tags and corrupted data
@@ -485,18 +572,15 @@
Added comprehensive save-to-file capability to the research command, enabling users to preserve research sessions for future reference and documentation.

**CLI Integration:**

- New `--save-file` flag for `task-master research` command
- Consistent with existing `--save` and `--save-to` flags for intuitive usage
- Interactive "Save to file" option in follow-up questions menu

**MCP Integration:**

- New `saveToFile` boolean parameter for the `research` MCP tool
- Enables programmatic research saving for AI agents and integrated tools

**File Management:**

- Automatically creates `.taskmaster/docs/research/` directory structure
- Generates timestamped, slugified filenames (e.g., `2025-01-13_what-is-typescript.md`)
- Comprehensive Markdown format with metadata headers including query, timestamp, and context sources
@@ -507,14 +591,12 @@
- [#779](https://github.com/eyaltoledano/claude-task-master/pull/779) [`c0b3f43`](https://github.com/eyaltoledano/claude-task-master/commit/c0b3f432a60891550b00acb113dc877bd432995f) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Enhanced get-task/show command to support comma-separated task IDs for efficient batch operations

**New Features:**

- **Multiple Task Retrieval**: Pass comma-separated IDs to get/show multiple tasks at once (e.g., `task-master show 1,3,5` or MCP `get_task` with `id: "1,3,5"`)
- **Smart Display Logic**: Single ID shows detailed view, multiple IDs show compact summary table with interactive options
- **Batch Action Menu**: Interactive menu for multiple tasks with copy-paste ready commands for common operations (mark as done/in-progress, expand all, view dependencies, etc.)
- **MCP Array Response**: MCP tool returns structured array of task objects for efficient AI agent context gathering

**Benefits:**

- **Faster Context Gathering**: AI agents can collect multiple tasks/subtasks in one call instead of iterating
- **Improved Workflow**: Interactive batch operations reduce repetitive command execution
- **Better UX**: Responsive layout adapts to terminal width, maintains consistency with existing UI patterns
@@ -533,7 +615,6 @@
- [#779](https://github.com/eyaltoledano/claude-task-master/pull/779) [`5ec1f61`](https://github.com/eyaltoledano/claude-task-master/commit/5ec1f61c13f468648b7fdc8fa112e95aec25f76d) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Cursor deeplink installation by providing copy-paste instructions for GitHub compatibility

- [#779](https://github.com/eyaltoledano/claude-task-master/pull/779) [`c0b3f43`](https://github.com/eyaltoledano/claude-task-master/commit/c0b3f432a60891550b00acb113dc877bd432995f) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix critical bugs in task move functionality:

- **Fixed moving tasks to become subtasks of empty parents**: When moving a task to become a subtask of a parent that had no existing subtasks (e.g., task 89 → task 98.1), the operation would fail with validation errors.
- **Fixed moving subtasks between parents**: Subtasks can now be properly moved between different parent tasks, including to parents that previously had no subtasks.
- **Improved comma-separated batch moves**: Multiple tasks can now be moved simultaneously using comma-separated IDs (e.g., "88,90" → "92,93") with proper error handling and atomic operations.
@@ -543,7 +624,6 @@
- [#779](https://github.com/eyaltoledano/claude-task-master/pull/779) [`d76bea4`](https://github.com/eyaltoledano/claude-task-master/commit/d76bea49b381c523183f39e33c2a4269371576ed) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Update o3 model price

- [#779](https://github.com/eyaltoledano/claude-task-master/pull/779) [`0849c0c`](https://github.com/eyaltoledano/claude-task-master/commit/0849c0c2cedb16ac44ba5cc2d109625a9b4efd67) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fixes issue with expand CLI command "Complexity report not found"

- Closes #735
- Closes #728

@@ -567,32 +647,27 @@
- [#699](https://github.com/eyaltoledano/claude-task-master/pull/699) [`27edbd8`](https://github.com/eyaltoledano/claude-task-master/commit/27edbd8f3fe5e2ac200b80e7f27f4c0e74a074d6) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Enhanced add-task fuzzy search intelligence and improved user experience

**Smarter Task Discovery:**

- Remove hardcoded category system that always matched "Task management"
- Eliminate arbitrary limits on fuzzy search results (5→25 high relevance, 3→10 medium relevance, 8→20 detailed tasks)
- Improve semantic weighting in Fuse.js search (details=3, description=2, title=1.5) for better relevance
- Generate context-driven task recommendations based on true semantic similarity

**Enhanced Terminal Experience:**

- Fix duplicate banner display issue that was "eating" terminal history (closes #553)
- Remove console.clear() and redundant displayBanner() calls from UI functions
- Preserve command history for better development workflow
- Streamline banner display across all commands (list, next, show, set-status, clear-subtasks, dependency commands)

**Visual Improvements:**

- Replace emoji complexity indicators with clean filled circle characters (●) for professional appearance
- Improve consistency and readability of task complexity display

**AI Provider Compatibility:**

- Change generateObject mode from 'tool' to 'auto' for better cross-provider compatibility
- Add qwen3-235n-a22b:free model support (closes #687)
- Add smart warnings for free OpenRouter models with limitations (rate limits, restricted context, no tool_use)

**Technical Improvements:**

- Enhanced context generation in add-task to rely on semantic similarity rather than rigid pattern matching
- Improved dependency analysis and common pattern detection
- Better handling of task relationships and relevance scoring
@@ -601,7 +676,6 @@
The add-task system now provides truly relevant task context based on semantic understanding rather than arbitrary categories and limits, while maintaining a cleaner and more professional terminal experience.

- [#655](https://github.com/eyaltoledano/claude-task-master/pull/655) [`edaa5fe`](https://github.com/eyaltoledano/claude-task-master/commit/edaa5fe0d56e0e4e7c4370670a7a388eebd922ac) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix double .taskmaster directory paths in file resolution utilities

- Closes #636

- [#671](https://github.com/eyaltoledano/claude-task-master/pull/671) [`86ea6d1`](https://github.com/eyaltoledano/claude-task-master/commit/86ea6d1dbc03eeb39f524f565b50b7017b1d2c9c) Thanks [@joedanz](https://github.com/joedanz)! - Add one-click MCP server installation for Cursor
@@ -611,13 +685,11 @@
Introduces a new `sync-readme` command that exports your task list to your project's README.md file.

**Features:**

- **Flexible filtering**: Supports `--status` filtering (e.g., pending, done) and `--with-subtasks` flag
- **Smart content management**: Automatically replaces existing exports or appends to new READMEs
- **Metadata display**: Shows export timestamp, subtask inclusion status, and filter settings

**Usage:**

- `task-master sync-readme` - Export tasks without subtasks
- `task-master sync-readme --with-subtasks` - Include subtasks in export
- `task-master sync-readme --status=pending` - Only export pending tasks
@@ -636,32 +708,27 @@
- [#699](https://github.com/eyaltoledano/claude-task-master/pull/699) [`27edbd8`](https://github.com/eyaltoledano/claude-task-master/commit/27edbd8f3fe5e2ac200b80e7f27f4c0e74a074d6) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Enhanced add-task fuzzy search intelligence and improved user experience

**Smarter Task Discovery:**

- Remove hardcoded category system that always matched "Task management"
- Eliminate arbitrary limits on fuzzy search results (5→25 high relevance, 3→10 medium relevance, 8→20 detailed tasks)
- Improve semantic weighting in Fuse.js search (details=3, description=2, title=1.5) for better relevance
- Generate context-driven task recommendations based on true semantic similarity

**Enhanced Terminal Experience:**

- Fix duplicate banner display issue that was "eating" terminal history (closes #553)
- Remove console.clear() and redundant displayBanner() calls from UI functions
- Preserve command history for better development workflow
- Streamline banner display across all commands (list, next, show, set-status, clear-subtasks, dependency commands)

**Visual Improvements:**

- Replace emoji complexity indicators with clean filled circle characters (●) for professional appearance
- Improve consistency and readability of task complexity display

**AI Provider Compatibility:**

- Change generateObject mode from 'tool' to 'auto' for better cross-provider compatibility
- Add qwen3-235n-a22b:free model support (closes #687)
- Add smart warnings for free OpenRouter models with limitations (rate limits, restricted context, no tool_use)

**Technical Improvements:**

- Enhanced context generation in add-task to rely on semantic similarity rather than rigid pattern matching
- Improved dependency analysis and common pattern detection
- Better handling of task relationships and relevance scoring
@@ -670,7 +737,6 @@
The add-task system now provides truly relevant task context based on semantic understanding rather than arbitrary categories and limits, while maintaining a cleaner and more professional terminal experience.

- [#655](https://github.com/eyaltoledano/claude-task-master/pull/655) [`edaa5fe`](https://github.com/eyaltoledano/claude-task-master/commit/edaa5fe0d56e0e4e7c4370670a7a388eebd922ac) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix double .taskmaster directory paths in file resolution utilities

- Closes #636

- [#671](https://github.com/eyaltoledano/claude-task-master/pull/671) [`86ea6d1`](https://github.com/eyaltoledano/claude-task-master/commit/86ea6d1dbc03eeb39f524f565b50b7017b1d2c9c) Thanks [@joedanz](https://github.com/joedanz)! - Add one-click MCP server installation for Cursor
@@ -680,13 +746,11 @@
Introduces a new `sync-readme` command that exports your task list to your project's README.md file.

**Features:**

- **Flexible filtering**: Supports `--status` filtering (e.g., pending, done) and `--with-subtasks` flag
- **Smart content management**: Automatically replaces existing exports or appends to new READMEs
- **Metadata display**: Shows export timestamp, subtask inclusion status, and filter settings

**Usage:**

- `task-master sync-readme` - Export tasks without subtasks
- `task-master sync-readme --with-subtasks` - Include subtasks in export
- `task-master sync-readme --status=pending` - Only export pending tasks
@@ -699,7 +763,6 @@
### Patch Changes

- [#655](https://github.com/eyaltoledano/claude-task-master/pull/655) [`edaa5fe`](https://github.com/eyaltoledano/claude-task-master/commit/edaa5fe0d56e0e4e7c4370670a7a388eebd922ac) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix double .taskmaster directory paths in file resolution utilities

- Closes #636

- [#671](https://github.com/eyaltoledano/claude-task-master/pull/671) [`86ea6d1`](https://github.com/eyaltoledano/claude-task-master/commit/86ea6d1dbc03eeb39f524f565b50b7017b1d2c9c) Thanks [@joedanz](https://github.com/joedanz)! - Add one-click MCP server installation for Cursor
@@ -725,7 +788,6 @@
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add AWS bedrock support

- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - # Add Google Vertex AI Provider Integration

- Implemented `VertexAIProvider` class extending BaseAIProvider
- Added authentication and configuration handling for Vertex AI
- Updated configuration manager with Vertex-specific getters
@@ -741,7 +803,6 @@
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Renamed baseUrl to baseURL

- [#604](https://github.com/eyaltoledano/claude-task-master/pull/604) [`80735f9`](https://github.com/eyaltoledano/claude-task-master/commit/80735f9e60c7dda7207e169697f8ac07b6733634) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution

- Some users were having issues where the MCP wasn't able to detect the location of their project root; you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project.
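As an illustrative sketch (the path is a placeholder), the variable can be exported for the current shell or persisted in the project's `.env`:

```bash
# Point the MCP server at the project explicitly for the current shell
export TASK_MASTER_PROJECT_ROOT="/home/dev/projects/my-app"

# Or persist it in the project's .env file
echo 'TASK_MASTER_PROJECT_ROOT=/home/dev/projects/my-app' >> .env
```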
- [#619](https://github.com/eyaltoledano/claude-task-master/pull/619) [`3f64202`](https://github.com/eyaltoledano/claude-task-master/commit/3f64202c9feef83f2bf383c79e4367d337c37e20) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Consolidate Task Master files into unified .taskmaster directory structure
@@ -749,7 +810,6 @@
This release introduces a new consolidated directory structure that organizes all Task Master files under a single `.taskmaster/` directory for better project organization and cleaner workspace management.

**New Directory Structure:**

- `.taskmaster/tasks/` - Task files (previously `tasks/`)
- `.taskmaster/docs/` - Documentation including PRD files (previously `scripts/`)
- `.taskmaster/reports/` - Complexity analysis reports (previously `scripts/`)
@@ -757,14 +817,12 @@
- `.taskmaster/config.json` - Configuration (previously `.taskmasterconfig`)
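Roughly, the consolidated layout looks like the sketch below; only the paths named in this entry are shown, and real projects may contain more:

```bash
tree .taskmaster
# .taskmaster/
# ├── config.json   # previously .taskmasterconfig
# ├── tasks/        # previously tasks/
# ├── docs/         # previously scripts/ (PRDs and other docs)
# └── reports/      # previously scripts/ (complexity reports)
```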
**Migration & Backward Compatibility:**
|
||||
|
||||
- Existing projects continue to work with legacy file locations
|
||||
- New projects use the consolidated structure automatically
|
||||
- Run `task-master migrate` to move existing projects to the new structure
|
||||
- All CLI commands and MCP tools automatically detect and use appropriate file locations
|
||||
|
||||
**Benefits:**
|
||||
|
||||
- Cleaner project root with Task Master files organized in one location
|
||||
- Reduced file scatter across multiple directories
|
||||
- Improved project navigation and maintenance
|
||||
@@ -785,7 +843,6 @@
|
||||
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add AWS bedrock support
|
||||
|
||||
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - # Add Google Vertex AI Provider Integration
|
||||
|
||||
- Implemented `VertexAIProvider` class extending BaseAIProvider
|
||||
- Added authentication and configuration handling for Vertex AI
|
||||
- Updated configuration manager with Vertex-specific getters
|
||||
@@ -801,7 +858,6 @@
|
||||
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Renamed baseUrl to baseURL
|
||||
|
||||
- [#604](https://github.com/eyaltoledano/claude-task-master/pull/604) [`80735f9`](https://github.com/eyaltoledano/claude-task-master/commit/80735f9e60c7dda7207e169697f8ac07b6733634) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution
|
||||
|
||||
- Some users were having issues where the MCP wasn't able to detect the location of their project root, you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project.
|
||||
|
||||
- [#619](https://github.com/eyaltoledano/claude-task-master/pull/619) [`3f64202`](https://github.com/eyaltoledano/claude-task-master/commit/3f64202c9feef83f2bf383c79e4367d337c37e20) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Consolidate Task Master files into unified .taskmaster directory structure
|
||||
@@ -809,7 +865,6 @@
|
||||
This release introduces a new consolidated directory structure that organizes all Task Master files under a single `.taskmaster/` directory for better project organization and cleaner workspace management.
|
||||
|
||||
**New Directory Structure:**
|
||||
|
||||
- `.taskmaster/tasks/` - Task files (previously `tasks/`)
|
||||
- `.taskmaster/docs/` - Documentation including PRD files (previously `scripts/`)
|
||||
- `.taskmaster/reports/` - Complexity analysis reports (previously `scripts/`)
|
||||
@@ -817,14 +872,12 @@
|
||||
- `.taskmaster/config.json` - Configuration (previously `.taskmasterconfig`)
|
||||
|
||||
**Migration & Backward Compatibility:**
|
||||
|
||||
- Existing projects continue to work with legacy file locations
|
||||
- New projects use the consolidated structure automatically
|
||||
- Run `task-master migrate` to move existing projects to the new structure
|
||||
- All CLI commands and MCP tools automatically detect and use appropriate file locations
|
||||
|
||||
**Benefits:**
|
||||
|
||||
- Cleaner project root with Task Master files organized in one location
|
||||
- Reduced file scatter across multiple directories
|
||||
- Improved project navigation and maintenance
|
||||
@@ -843,7 +896,6 @@
|
||||
### Minor Changes
|
||||
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`09add37`](https://github.com/eyaltoledano/claude-task-master/commit/09add37423d70b809d5c28f3cde9fccd5a7e64e7) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Added comprehensive Ollama model validation and interactive setup support
|
||||
|
||||
- **Interactive Setup Enhancement**: Added "Custom Ollama model" option to `task-master models --setup`, matching the existing OpenRouter functionality
|
||||
- **Live Model Validation**: When setting Ollama models, Taskmaster now validates against the local Ollama instance by querying `/api/tags` endpoint
|
||||
- **Configurable Endpoints**: Uses the `ollamaBaseUrl` from `.taskmasterconfig` (with role-specific `baseUrl` overrides supported)
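For reference, the validation endpoint mentioned above can be queried manually against a default local Ollama install (adjust the base URL if you have overridden `ollamaBaseUrl`):

```bash
# Lists the models available in the local Ollama instance; this is the same
# /api/tags endpoint Taskmaster checks when validating an Ollama model ID
curl http://localhost:11434/api/tags
```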
|
||||
@@ -855,14 +907,12 @@
|
||||
- **Improved User Experience**: Clear feedback during model validation with informative success/error messages
|
||||
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`4c83526`](https://github.com/eyaltoledano/claude-task-master/commit/4c835264ac6c1f74896cddabc3b3c69a5c435417) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds and updates supported AI models with costs:
|
||||
|
||||
- Added new OpenRouter models: GPT-4.1 series, O3, Codex Mini, Llama 4 Maverick, Llama 4 Scout, Qwen3-235b
|
||||
- Added Mistral models: Devstral Small, Mistral Nemo
|
||||
- Updated Ollama models with latest variants: Devstral, Qwen3, Mistral-small3.1, Llama3.3
|
||||
- Updated Gemini model to latest 2.5 Flash preview version
|
||||
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`70f4054`](https://github.com/eyaltoledano/claude-task-master/commit/70f4054f268f9f8257870e64c24070263d4e2966) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add `--research` flag to parse-prd command, enabling enhanced task generation from PRD files. When used, Taskmaster leverages the research model to:
|
||||
|
||||
- Research current technologies and best practices relevant to the project
|
||||
- Identify technical challenges and security concerns not explicitly mentioned in the PRD
|
||||
- Include specific library recommendations with version numbers
|
||||
@@ -885,7 +935,6 @@
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`04af16d`](https://github.com/eyaltoledano/claude-task-master/commit/04af16de27295452e134b17b3c7d0f44bbb84c29) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add move command to enable moving tasks and subtasks within the task hierarchy. This new command supports moving standalone tasks to become subtasks, subtasks to become standalone tasks, and moving subtasks between different parents. The implementation handles circular dependencies, validation, and proper updating of parent-child relationships.
|
||||
|
||||
**Usage:**
|
||||
|
||||
- CLI command: `task-master move --from=<id> --to=<id>`
|
||||
- MCP tool: `move_task` with parameters:
|
||||
- `from`: ID of task/subtask to move (e.g., "5" or "5.2")
|
||||
@@ -893,7 +942,6 @@
|
||||
- `file` (optional): Custom path to tasks.json
|
||||
|
||||
**Example scenarios:**
|
||||
|
||||
- Move task to become subtask: `--from="5" --to="7"`
|
||||
- Move subtask to standalone task: `--from="5.2" --to="7"`
|
||||
- Move subtask to different parent: `--from="5.2" --to="7.3"`
|
||||
@@ -905,7 +953,6 @@
|
||||
The command supports moving multiple tasks simultaneously by providing comma-separated lists for both `--from` and `--to` parameters. The number of source and destination IDs must match. This is particularly useful for resolving merge conflicts in task files when multiple team members have created tasks on different branches.
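A rough sketch of that merge-conflict scenario (the task IDs are illustrative):

```bash
# Move three tasks created on another branch to unused IDs in a single call;
# the number of --from IDs must match the number of --to IDs
task-master move --from="10,11,12" --to="16,17,18"
```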
|
||||
|
||||
**Validation Features:**
|
||||
|
||||
- Allows moving tasks to new, non-existent IDs (automatically creates placeholders)
|
||||
- Prevents moving to existing task IDs that already contain content (to avoid overwriting)
|
||||
- Validates source tasks exist before attempting to move them
|
||||
@@ -930,7 +977,6 @@
|
||||
### Minor Changes
|
||||
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`09add37`](https://github.com/eyaltoledano/claude-task-master/commit/09add37423d70b809d5c28f3cde9fccd5a7e64e7) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Added comprehensive Ollama model validation and interactive setup support
|
||||
|
||||
- **Interactive Setup Enhancement**: Added "Custom Ollama model" option to `task-master models --setup`, matching the existing OpenRouter functionality
|
||||
- **Live Model Validation**: When setting Ollama models, Taskmaster now validates against the local Ollama instance by querying `/api/tags` endpoint
|
||||
- **Configurable Endpoints**: Uses the `ollamaBaseUrl` from `.taskmasterconfig` (with role-specific `baseUrl` overrides supported)
|
||||
@@ -942,14 +988,12 @@
|
||||
- **Improved User Experience**: Clear feedback during model validation with informative success/error messages
|
||||
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`4c83526`](https://github.com/eyaltoledano/claude-task-master/commit/4c835264ac6c1f74896cddabc3b3c69a5c435417) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds and updates supported AI models with costs:
|
||||
|
||||
- Added new OpenRouter models: GPT-4.1 series, O3, Codex Mini, Llama 4 Maverick, Llama 4 Scout, Qwen3-235b
|
||||
- Added Mistral models: Devstral Small, Mistral Nemo
|
||||
- Updated Ollama models with latest variants: Devstral, Qwen3, Mistral-small3.1, Llama3.3
|
||||
- Updated Gemini model to latest 2.5 Flash preview version
|
||||
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`70f4054`](https://github.com/eyaltoledano/claude-task-master/commit/70f4054f268f9f8257870e64c24070263d4e2966) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add `--research` flag to parse-prd command, enabling enhanced task generation from PRD files. When used, Taskmaster leverages the research model to:
|
||||
|
||||
- Research current technologies and best practices relevant to the project
|
||||
- Identify technical challenges and security concerns not explicitly mentioned in the PRD
|
||||
- Include specific library recommendations with version numbers
|
||||
@@ -972,7 +1016,6 @@
|
||||
- [#567](https://github.com/eyaltoledano/claude-task-master/pull/567) [`04af16d`](https://github.com/eyaltoledano/claude-task-master/commit/04af16de27295452e134b17b3c7d0f44bbb84c29) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add move command to enable moving tasks and subtasks within the task hierarchy. This new command supports moving standalone tasks to become subtasks, subtasks to become standalone tasks, and moving subtasks between different parents. The implementation handles circular dependencies, validation, and proper updating of parent-child relationships.
|
||||
|
||||
**Usage:**
|
||||
|
||||
- CLI command: `task-master move --from=<id> --to=<id>`
|
||||
- MCP tool: `move_task` with parameters:
|
||||
- `from`: ID of task/subtask to move (e.g., "5" or "5.2")
|
||||
@@ -980,7 +1023,6 @@
|
||||
- `file` (optional): Custom path to tasks.json
|
||||
|
||||
**Example scenarios:**
|
||||
|
||||
- Move task to become subtask: `--from="5" --to="7"`
|
||||
- Move subtask to standalone task: `--from="5.2" --to="7"`
|
||||
- Move subtask to different parent: `--from="5.2" --to="7.3"`
|
||||
@@ -992,7 +1034,6 @@
|
||||
The command supports moving multiple tasks simultaneously by providing comma-separated lists for both `--from` and `--to` parameters. The number of source and destination IDs must match. This is particularly useful for resolving merge conflicts in task files when multiple team members have created tasks on different branches.
|
||||
|
||||
**Validation Features:**
|
||||
|
||||
- Allows moving tasks to new, non-existent IDs (automatically creates placeholders)
|
||||
- Prevents moving to existing task IDs that already contain content (to avoid overwriting)
|
||||
- Validates source tasks exist before attempting to move them
|
||||
@@ -1019,7 +1060,6 @@
|
||||
- [#521](https://github.com/eyaltoledano/claude-task-master/pull/521) [`ed17cb0`](https://github.com/eyaltoledano/claude-task-master/commit/ed17cb0e0a04dedde6c616f68f24f3660f68dd04) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - .taskmasterconfig now supports a baseUrl field per model role (main, research, fallback), allowing endpoint overrides for any provider.
|
||||
|
||||
- [#536](https://github.com/eyaltoledano/claude-task-master/pull/536) [`f4a83ec`](https://github.com/eyaltoledano/claude-task-master/commit/f4a83ec047b057196833e3a9b861d4bceaec805d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Ollama as a supported AI provider.
|
||||
|
||||
- You can now add it by running `task-master models --setup` and selecting it.
|
||||
- Ollama is a local model provider, so no API key is required.
|
||||
- Ollama models are available at `http://localhost:11434/api` by default.
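A brief sketch of the setup described above (the model name is illustrative and must exist in your local Ollama instance):

```bash
# Interactive setup: choose Ollama when prompted
task-master models --setup

# Or point the main role at a local Ollama model directly
task-master models --set-main llama3.3 --ollama
```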
|
||||
@@ -1035,7 +1075,6 @@
|
||||
- [#478](https://github.com/eyaltoledano/claude-task-master/pull/478) [`4117f71`](https://github.com/eyaltoledano/claude-task-master/commit/4117f71c18ee4d321a9c91308d00d5d69bfac61e) Thanks [@joedanz](https://github.com/joedanz)! - Fix CLI --force flag for parse-prd command
|
||||
|
||||
Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.
|
||||
|
||||
- Fixes #477
|
||||
|
||||
- [#511](https://github.com/eyaltoledano/claude-task-master/pull/511) [`17294ff`](https://github.com/eyaltoledano/claude-task-master/commit/17294ff25918d64278674e558698a1a9ad785098) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Task Master no longer tells you to update when you're already up to date
|
||||
@@ -1049,7 +1088,6 @@
|
||||
- [#523](https://github.com/eyaltoledano/claude-task-master/pull/523) [`da317f2`](https://github.com/eyaltoledano/claude-task-master/commit/da317f2607ca34db1be78c19954996f634c40923) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix the error handling of task status settings
|
||||
|
||||
- [#527](https://github.com/eyaltoledano/claude-task-master/pull/527) [`a8dabf4`](https://github.com/eyaltoledano/claude-task-master/commit/a8dabf44856713f488960224ee838761716bba26) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove caching layer from MCP direct functions for task listing, next task, and complexity report
|
||||
|
||||
- Fixes issues where users were getting stale data
|
||||
|
||||
- [#417](https://github.com/eyaltoledano/claude-task-master/pull/417) [`a1f8d52`](https://github.com/eyaltoledano/claude-task-master/commit/a1f8d52474fdbdf48e17a63e3f567a6d63010d9f) Thanks [@ksylvan](https://github.com/ksylvan)! - Fix for issue #409 LOG_LEVEL Pydantic validation error
|
||||
@@ -1057,7 +1095,6 @@
|
||||
- [#442](https://github.com/eyaltoledano/claude-task-master/pull/442) [`0288311`](https://github.com/eyaltoledano/claude-task-master/commit/0288311965ae2a343ebee4a0c710dde94d2ae7e7) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Small fixes: the `next` command no longer incorrectly suggests that subtasks be broken down into further subtasks in the CLI, and the `append` flag now works properly in the CLI
|
||||
|
||||
- [#501](https://github.com/eyaltoledano/claude-task-master/pull/501) [`0a61184`](https://github.com/eyaltoledano/claude-task-master/commit/0a611843b56a856ef0a479dc34078326e05ac3a8) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix initial .env.example to work out of the box
|
||||
|
||||
- Closes #419
|
||||
|
||||
- [#435](https://github.com/eyaltoledano/claude-task-master/pull/435) [`a96215a`](https://github.com/eyaltoledano/claude-task-master/commit/a96215a359b25061fd3b3f3c7b10e8ac0390c062) Thanks [@lebsral](https://github.com/lebsral)! - Fix default fallback model and maxTokens in Taskmaster initialization
|
||||
@@ -1065,7 +1102,6 @@
|
||||
- [#517](https://github.com/eyaltoledano/claude-task-master/pull/517) [`e96734a`](https://github.com/eyaltoledano/claude-task-master/commit/e96734a6cc6fec7731de72eb46b182a6e3743d02) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bug when updating tasks on the MCP server (#412)
|
||||
|
||||
- [#496](https://github.com/eyaltoledano/claude-task-master/pull/496) [`efce374`](https://github.com/eyaltoledano/claude-task-master/commit/efce37469bc58eceef46763ba32df1ed45242211) Thanks [@joedanz](https://github.com/joedanz)! - Fix duplicate output on CLI help screen
|
||||
|
||||
- Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
|
||||
- Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
|
||||
- Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
|
||||
@@ -1077,7 +1113,6 @@
|
||||
### Minor Changes
|
||||
|
||||
- [#536](https://github.com/eyaltoledano/claude-task-master/pull/536) [`f4a83ec`](https://github.com/eyaltoledano/claude-task-master/commit/f4a83ec047b057196833e3a9b861d4bceaec805d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Ollama as a supported AI provider.
|
||||
|
||||
- You can now add it by running `task-master models --setup` and selecting it.
|
||||
- Ollama is a local model provider, so no API key is required.
|
||||
- Ollama models are available at `http://localhost:11434/api` by default.
|
||||
@@ -1103,7 +1138,6 @@
|
||||
- [#478](https://github.com/eyaltoledano/claude-task-master/pull/478) [`4117f71`](https://github.com/eyaltoledano/claude-task-master/commit/4117f71c18ee4d321a9c91308d00d5d69bfac61e) Thanks [@joedanz](https://github.com/joedanz)! - Fix CLI --force flag for parse-prd command
|
||||
|
||||
Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.
|
||||
|
||||
- Fixes #477
|
||||
|
||||
- [#511](https://github.com/eyaltoledano/claude-task-master/pull/511) [`17294ff`](https://github.com/eyaltoledano/claude-task-master/commit/17294ff25918d64278674e558698a1a9ad785098) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Task Master no longer tells you to update when you're already up to date
|
||||
@@ -1111,13 +1145,11 @@
|
||||
- [#523](https://github.com/eyaltoledano/claude-task-master/pull/523) [`da317f2`](https://github.com/eyaltoledano/claude-task-master/commit/da317f2607ca34db1be78c19954996f634c40923) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix the error handling of task status settings
|
||||
|
||||
- [#527](https://github.com/eyaltoledano/claude-task-master/pull/527) [`a8dabf4`](https://github.com/eyaltoledano/claude-task-master/commit/a8dabf44856713f488960224ee838761716bba26) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove caching layer from MCP direct functions for task listing, next task, and complexity report
|
||||
|
||||
- Fixes issues where users were getting stale data
|
||||
|
||||
- [#417](https://github.com/eyaltoledano/claude-task-master/pull/417) [`a1f8d52`](https://github.com/eyaltoledano/claude-task-master/commit/a1f8d52474fdbdf48e17a63e3f567a6d63010d9f) Thanks [@ksylvan](https://github.com/ksylvan)! - Fix for issue #409 LOG_LEVEL Pydantic validation error
|
||||
|
||||
- [#501](https://github.com/eyaltoledano/claude-task-master/pull/501) [`0a61184`](https://github.com/eyaltoledano/claude-task-master/commit/0a611843b56a856ef0a479dc34078326e05ac3a8) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix initial .env.example to work out of the box
|
||||
|
||||
- Closes #419
|
||||
|
||||
- [#435](https://github.com/eyaltoledano/claude-task-master/pull/435) [`a96215a`](https://github.com/eyaltoledano/claude-task-master/commit/a96215a359b25061fd3b3f3c7b10e8ac0390c062) Thanks [@lebsral](https://github.com/lebsral)! - Fix default fallback model and maxTokens in Taskmaster initialization
|
||||
@@ -1125,7 +1157,6 @@
|
||||
- [#517](https://github.com/eyaltoledano/claude-task-master/pull/517) [`e96734a`](https://github.com/eyaltoledano/claude-task-master/commit/e96734a6cc6fec7731de72eb46b182a6e3743d02) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bug when updating tasks on the MCP server (#412)
|
||||
|
||||
- [#496](https://github.com/eyaltoledano/claude-task-master/pull/496) [`efce374`](https://github.com/eyaltoledano/claude-task-master/commit/efce37469bc58eceef46763ba32df1ed45242211) Thanks [@joedanz](https://github.com/joedanz)! - Fix duplicate output on CLI help screen
|
||||
|
||||
- Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
|
||||
- Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
|
||||
- Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
|
||||
@@ -1143,7 +1174,6 @@
|
||||
### Minor Changes
|
||||
|
||||
- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ef782ff`](https://github.com/eyaltoledano/claude-task-master/commit/ef782ff5bd4ceb3ed0dc9ea82087aae5f79ac933) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - feat(expand): Enhance `expand` and `expand-all` commands
|
||||
|
||||
- Integrate `task-complexity-report.json` to automatically determine the number of subtasks and use tailored prompts for expansion based on prior analysis. You no longer need to try copy-pasting the recommended prompt. If it exists, it will use it for you. You can just run `task-master update --id=[id of task] --research` and it will use that prompt automatically. No extra prompt needed.
|
||||
- Change default behavior to _append_ new subtasks to existing ones. Use the `--force` flag to clear existing subtasks before expanding. This is helpful if you need to add more subtasks to a task but you want to do it by the batch from a given prompt. Use force if you want to start fresh with a task's subtasks.
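As a rough example of the two behaviours described above (the task ID is illustrative):

```bash
# Default: append newly generated subtasks to the ones task 12 already has
task-master expand --id=12

# Start fresh: clear task 12's existing subtasks before expanding
task-master expand --id=12 --force
```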
|
||||
|
||||
@@ -1152,7 +1182,6 @@
|
||||
- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`1ab836f`](https://github.com/eyaltoledano/claude-task-master/commit/1ab836f191cb8969153593a9a0bd47fc9aa4a831) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds model management and new configuration file .taskmasterconfig which houses the models used for main, research and fallback. Adds models command and setter flags. Adds a --setup flag with an interactive setup. We should be calling this during init. Shows a table of active and available models when models is called without flags. Includes SWE scores and token costs, which are manually entered into the supported_models.json, the new place where models are defined for support. Config-manager.js is the core module responsible for managing the new config."
|
||||
|
||||
- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`c8722b0`](https://github.com/eyaltoledano/claude-task-master/commit/c8722b0a7a443a73b95d1bcd4a0b68e0fce2a1cd) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds custom model ID support for Ollama and OpenRouter providers.
|
||||
|
||||
- Adds the `--ollama` and `--openrouter` flags to `task-master models --set-<role>` command to set models for those providers outside of the support models list.
|
||||
- Updated `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
|
||||
- Implemented live validation against OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
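A hedged sketch of setting a custom OpenRouter model via the flag mentioned above (the model ID is illustrative and must exist on OpenRouter):

```bash
# Taskmaster validates the ID live against OpenRouter's /api/v1/models endpoint
task-master models --set-research mistralai/devstral-small --openrouter
```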
|
||||
@@ -1171,7 +1200,6 @@
|
||||
- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`ed79d4f`](https://github.com/eyaltoledano/claude-task-master/commit/ed79d4f4735dfab4124fa189214c0bd5e23a6860) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add xAI provider and Grok models support
|
||||
|
||||
- [#378](https://github.com/eyaltoledano/claude-task-master/pull/378) [`ad89253`](https://github.com/eyaltoledano/claude-task-master/commit/ad89253e313a395637aa48b9f92cc39b1ef94ad8) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Better support for file paths on Windows, Linux & WSL.
|
||||
|
||||
- Standardizes handling of different path formats (URI encoded, Windows, Linux, WSL).
|
||||
- Ensures tools receive a clean, absolute path suitable for the server OS.
|
||||
- Simplifies tool implementation by centralizing normalization logic.
|
||||
@@ -1181,7 +1209,6 @@
|
||||
- [#378](https://github.com/eyaltoledano/claude-task-master/pull/378) [`d63964a`](https://github.com/eyaltoledano/claude-task-master/commit/d63964a10eed9be17856757661ff817ad6bacfdc) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improved update-subtask: it now has context about the parent task's details as well as the subtasks immediately before and after it (if they exist), without passing all subtasks, to stay token efficient
|
||||
|
||||
- [#240](https://github.com/eyaltoledano/claude-task-master/pull/240) [`5f504fa`](https://github.com/eyaltoledano/claude-task-master/commit/5f504fafb8bdaa0043c2d20dee8bbb8ec2040d85) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Improve and adjust `init` command for robustness and updated dependencies.
|
||||
|
||||
- **Update Initialization Dependencies:** Ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
|
||||
- **Silence `npm install` during `init`:** Prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
|
||||
- **Improve Conditional Model Setup:** Reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
|
||||
@@ -1221,7 +1248,6 @@
|
||||
### Patch Changes
|
||||
|
||||
- [#243](https://github.com/eyaltoledano/claude-task-master/pull/243) [`454a1d9`](https://github.com/eyaltoledano/claude-task-master/commit/454a1d9d37439c702656eedc0702c2f7a4451517) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixes a shebang issue that prevented task-master from running on certain Windows operating systems
|
||||
|
||||
- Resolves #241 #211 #184 #193
|
||||
|
||||
- [#268](https://github.com/eyaltoledano/claude-task-master/pull/268) [`3e872f8`](https://github.com/eyaltoledano/claude-task-master/commit/3e872f8afbb46cd3978f3852b858c233450b9f33) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix remove-task command to handle multiple comma-separated task IDs
|
||||
@@ -1233,7 +1259,6 @@
|
||||
- [#264](https://github.com/eyaltoledano/claude-task-master/pull/264) [`ff8e75c`](https://github.com/eyaltoledano/claude-task-master/commit/ff8e75cded91fb677903040002626f7a82fd5f88) Thanks [@joedanz](https://github.com/joedanz)! - Add quotes around numeric env vars in mcp.json (Windsurf, etc.)
|
||||
|
||||
- [#248](https://github.com/eyaltoledano/claude-task-master/pull/248) [`d99fa00`](https://github.com/eyaltoledano/claude-task-master/commit/d99fa00980fc61695195949b33dcda7781006f90) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - - Fix `task-master init` polluting codebase with new packages inside `package.json` and modifying project `README`
|
||||
|
||||
- Now only initializes with cursor rules, windsurf rules, mcp.json, scripts/example_prd.txt, .gitignore modifications, and `README-task-master.md`
|
||||
|
||||
- [#266](https://github.com/eyaltoledano/claude-task-master/pull/266) [`41b979c`](https://github.com/eyaltoledano/claude-task-master/commit/41b979c23963483e54331015a86e7c5079f657e4) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed a bug that prevented the task-master from running in a Linux container
|
||||
|
||||
@@ -323,8 +323,11 @@ Here's a comprehensive reference of all available commands:
|
||||
# Parse a PRD file and generate tasks
|
||||
task-master parse-prd <prd-file.txt>
|
||||
|
||||
# Limit the number of tasks generated
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=10
|
||||
# Limit the number of tasks generated (default is 10)
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=5
|
||||
|
||||
# Allow task master to determine the number of tasks based on complexity
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=0
|
||||
```
|
||||
|
||||
### List Tasks
|
||||
@@ -397,6 +400,9 @@ When marking a task as "done", all of its subtasks will automatically be marked
|
||||
# Expand a specific task with subtasks
|
||||
task-master expand --id=<id> --num=<number>
|
||||
|
||||
# Expand a task with a dynamic number of subtasks (ignoring complexity report)
|
||||
task-master expand --id=<id> --num=0
|
||||
|
||||
# Expand with additional context
|
||||
task-master expand --id=<id> --prompt="<context>"
|
||||
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 120000,
|
||||
"maxTokens": 100000,
|
||||
"temperature": 0.2
|
||||
},
|
||||
"research": {
|
||||
@@ -14,9 +14,9 @@
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-5-sonnet-20240620",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 8192,
|
||||
"temperature": 0.1
|
||||
"temperature": 0.2
|
||||
}
|
||||
},
|
||||
"global": {
|
||||
@@ -28,6 +28,7 @@
|
||||
"defaultTag": "master",
|
||||
"ollamaBaseURL": "http://localhost:11434/api",
|
||||
"azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/",
|
||||
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com"
|
||||
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
|
||||
"responseLanguage": "English"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -153,7 +153,7 @@ When users initialize Taskmaster on existing projects:
|
||||
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
|
||||
5. **Master List Curation**: Keep only the most valuable initiatives in master
|
||||
|
||||
The parse-prd's `--append` flag enables the user to parse multple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
|
||||
The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
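A hedged sketch of that flow, assuming two focused PRD files already live under `.taskmaster/docs/` (the filenames and task counts are illustrative):

```bash
# Parse the first PRD into the current tag
task-master parse-prd .taskmaster/docs/prd-auth.txt --num-tasks=8

# Append tasks from a second, smaller PRD instead of overwriting the existing ones
task-master parse-prd .taskmaster/docs/prd-billing.txt --num-tasks=4 --append
```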
|
||||
|
||||
### Workflow Transition Examples
|
||||
|
||||
|
||||
@@ -271,7 +271,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
|
||||
* **CLI Command:** `task-master clear-subtasks [options]`
|
||||
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
|
||||
* **Key Parameters/Options:**
|
||||
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using `all`.) (CLI: `-i, --id <ids>`)
|
||||
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
|
||||
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
|
||||
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
|
||||
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
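A quick usage sketch of the two modes described above (the task IDs are illustrative):

```bash
# Remove the subtasks of two specific parent tasks
task-master clear-subtasks --id=16,18

# Remove subtasks from every parent task in the current tag context
task-master clear-subtasks --all
```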
|
||||
|
||||
@@ -8,8 +8,11 @@ Here's a comprehensive reference of all available commands:
|
||||
# Parse a PRD file and generate tasks
|
||||
task-master parse-prd <prd-file.txt>
|
||||
|
||||
# Limit the number of tasks generated
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=10
|
||||
# Limit the number of tasks generated (default is 10)
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=5
|
||||
|
||||
# Allow task master to determine the number of tasks based on complexity
|
||||
task-master parse-prd <prd-file.txt> --num-tasks=0
|
||||
```
|
||||
|
||||
## List Tasks
|
||||
@@ -128,6 +131,9 @@ When marking a task as "done", all of its subtasks will automatically be marked
|
||||
# Expand a specific task with subtasks
|
||||
task-master expand --id=<id> --num=<number>
|
||||
|
||||
# Expand a task with a dynamic number of subtasks (ignoring complexity report)
|
||||
task-master expand --id=<id> --num=0
|
||||
|
||||
# Expand with additional context
|
||||
task-master expand --id=<id> --prompt="<context>"
|
||||
|
||||
|
||||
@@ -36,6 +36,7 @@ Taskmaster uses two primary methods for configuration:
|
||||
"global": {
|
||||
"logLevel": "info",
|
||||
"debug": false,
|
||||
"defaultNumTasks": 10,
|
||||
"defaultSubtasks": 5,
|
||||
"defaultPriority": "medium",
|
||||
"defaultTag": "master",
|
||||
@@ -43,7 +44,8 @@ Taskmaster uses two primary methods for configuration:
|
||||
"ollamaBaseURL": "http://localhost:11434/api",
|
||||
"azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
|
||||
"vertexProjectId": "your-gcp-project-id",
|
||||
"vertexLocation": "us-central1"
|
||||
"vertexLocation": "us-central1",
|
||||
"responseLanguage": "English"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
@@ -64,100 +64,81 @@ task-master set-status --id=task-001 --status=in-progress
|
||||
```bash
|
||||
npm install @anthropic-ai/claude-code
|
||||
```
|
||||
3. No API key is required in your environment variables or MCP configuration
|
||||
3. Run Claude Code for the first time and authenticate with your Anthropic account:
|
||||
```bash
|
||||
claude
|
||||
```
|
||||
4. No API key is required in your environment variables or MCP configuration
|
||||
|
||||
## Advanced Settings
|
||||
|
||||
The Claude Code SDK supports additional settings that provide fine-grained control over Claude's behavior. While these settings are implemented in the underlying SDK (`src/ai-providers/custom-sdk/claude-code/`), they are not currently exposed through Task Master's standard API due to architectural constraints.
|
||||
The Claude Code SDK supports additional settings that provide fine-grained control over Claude's behavior. These settings are implemented in the underlying SDK (`src/ai-providers/custom-sdk/claude-code/`), and can be managed through Task Master's configuration file.
|
||||
|
||||
### Supported Settings
|
||||
### Advanced Settings Usage
|
||||
|
||||
To update settings for Claude Code, update your `.taskmaster/config.json`:
|
||||
|
||||
The Claude Code settings can be specified globally in the `claudeCode` section of the config, or on a per-command basis in the `commandSpecific` section:
|
||||
|
||||
```javascript
|
||||
const settings = {
|
||||
{
|
||||
// "models" and "global" config...
|
||||
|
||||
"claudeCode": {
|
||||
// Maximum conversation turns Claude can make in a single request
|
||||
maxTurns: 5,
|
||||
"maxTurns": 5,
|
||||
|
||||
// Custom system prompt to override Claude Code's default behavior
|
||||
customSystemPrompt: "You are a helpful assistant focused on code quality",
|
||||
"customSystemPrompt": "You are a helpful assistant focused on code quality",
|
||||
|
||||
// Append additional content to the system prompt
|
||||
"appendSystemPrompt": "Always follow coding best practices",
|
||||
|
||||
// Permission mode for file system operations
|
||||
permissionMode: 'default', // Options: 'default', 'restricted', 'permissive'
|
||||
"permissionMode": "default", // Options: "default", "acceptEdits", "plan", "bypassPermissions"
|
||||
|
||||
// Explicitly allow only certain tools
|
||||
allowedTools: ['Read', 'LS'], // Claude can only read files and list directories
|
||||
"allowedTools": ["Read", "LS"], // Claude can only read files and list directories
|
||||
|
||||
// Explicitly disallow certain tools
|
||||
disallowedTools: ['Write', 'Edit'], // Prevent Claude from modifying files
|
||||
"disallowedTools": ["Write", "Edit"], // Prevent Claude from modifying files
|
||||
|
||||
// MCP servers for additional tool integrations
|
||||
mcpServers: []
|
||||
};
|
||||
```
|
||||
"mcpServers": {
|
||||
"mcp-server-name": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "mcp-serve"],
|
||||
"env": {
|
||||
// ...
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
### Current Limitations
|
||||
|
||||
Task Master uses a standardized `BaseAIProvider` interface that only passes through common parameters (modelId, messages, maxTokens, temperature) to maintain consistency across all providers. The Claude Code advanced settings are implemented in the SDK but not accessible through Task Master's high-level commands.
|
||||
|
||||
### Future Integration Options
|
||||
|
||||
For developers who need to use these advanced settings, there are three potential approaches:
|
||||
|
||||
#### Option 1: Extend BaseAIProvider
|
||||
Modify the core Task Master architecture to support provider-specific settings:
|
||||
|
||||
```javascript
|
||||
// In BaseAIProvider
|
||||
const result = await generateText({
|
||||
model: client(params.modelId),
|
||||
messages: params.messages,
|
||||
maxTokens: params.maxTokens,
|
||||
temperature: params.temperature,
|
||||
...params.providerSettings // New: pass through provider-specific settings
|
||||
});
|
||||
```
|
||||
|
||||
#### Option 2: Override Methods in ClaudeCodeProvider
|
||||
Create custom implementations that extract and use Claude-specific settings:
|
||||
|
||||
```javascript
|
||||
// In ClaudeCodeProvider
|
||||
async generateText(params) {
|
||||
const { maxTurns, allowedTools, disallowedTools, ...baseParams } = params;
|
||||
|
||||
const client = this.getClient({
|
||||
...baseParams,
|
||||
settings: { maxTurns, allowedTools, disallowedTools }
|
||||
});
|
||||
|
||||
// Continue with generation...
|
||||
// Command-specific settings override global settings
|
||||
"commandSpecific": {
|
||||
"parse-prd": {
|
||||
// Settings specific to the 'parse-prd' command
|
||||
"maxTurns": 10,
|
||||
"customSystemPrompt": "You are a task breakdown specialist"
|
||||
},
|
||||
"analyze-complexity": {
|
||||
// Settings specific to the 'analyze-complexity' command
|
||||
"maxTurns": 3,
|
||||
"appendSystemPrompt": "Focus on identifying bottlenecks"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Option 3: Direct SDK Usage
|
||||
For immediate access to advanced features, developers can use the Claude Code SDK directly:
|
||||
|
||||
```javascript
|
||||
import { createClaudeCode } from 'task-master-ai/ai-providers/custom-sdk/claude-code';
|
||||
|
||||
const claude = createClaudeCode({
|
||||
defaultSettings: {
|
||||
maxTurns: 5,
|
||||
allowedTools: ['Read', 'LS'],
|
||||
disallowedTools: ['Write', 'Edit']
|
||||
}
|
||||
});
|
||||
|
||||
const model = claude('sonnet');
|
||||
const result = await generateText({
|
||||
model,
|
||||
messages: [{ role: 'user', content: 'Analyze this code...' }]
|
||||
});
|
||||
```
|
||||
- For a full list of Claude Code settings, see the [Claude Code Settings documentation](https://docs.anthropic.com/en/docs/claude-code/settings).
|
||||
- For a full list of AI powered command names, see this file: `src/constants/commands.js`
|
||||
|
||||
### Why These Settings Matter
|
||||
|
||||
- **maxTurns**: Useful for complex refactoring tasks that require multiple iterations
|
||||
- **customSystemPrompt**: Allows specializing Claude for specific domains or coding standards
|
||||
- **appendSystemPrompt**: Useful for enforcing coding standards or providing additional context
|
||||
- **permissionMode**: Critical for security in production environments
|
||||
- **allowedTools/disallowedTools**: Enable read-only analysis modes or restrict access to sensitive operations
|
||||
- **mcpServers**: Future extensibility for custom tool integrations
|
||||
|
||||
@@ -1,10 +1,17 @@
|
||||
# Available Models as of June 21, 2025
|
||||
# Available Models as of July 2, 2025
|
||||
|
||||
## Main Models
|
||||
|
||||
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
|
||||
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
|
||||
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
|
||||
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
|
||||
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
|
||||
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
|
||||
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
|
||||
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
|
||||
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
|
||||
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
|
||||
@@ -67,11 +74,19 @@
|
||||
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
|
||||
| claude-code | opus | 0.725 | 0 | 0 |
|
||||
| claude-code | sonnet | 0.727 | 0 | 0 |
|
||||
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
|
||||
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
|
||||
|
||||
## Research Models
|
||||
|
||||
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
|
||||
| ----------- | -------------------------- | --------- | ---------- | ----------- |
|
||||
| ----------- | -------------------------------------------- | --------- | ---------- | ----------- |
|
||||
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
|
||||
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
|
||||
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
|
||||
| bedrock | us.deepseek.r1-v1:0 | — | 1.35 | 5.4 |
|
||||
| openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
|
||||
| openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
|
||||
@@ -84,12 +99,21 @@
|
||||
| xai | grok-3-fast | — | 5 | 25 |
|
||||
| claude-code | opus | 0.725 | 0 | 0 |
|
||||
| claude-code | sonnet | 0.727 | 0 | 0 |
|
||||
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
|
||||
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
|
||||
|
||||
## Fallback Models
|
||||
|
||||
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
|
||||
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
|
||||
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
|
||||
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
|
||||
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
|
||||
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
|
||||
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
|
||||
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
|
||||
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
|
||||
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
|
||||
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
|
||||
@@ -141,3 +165,5 @@
|
||||
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
|
||||
| claude-code | opus | 0.725 | 0 | 0 |
|
||||
| claude-code | sonnet | 0.727 | 0 | 0 |
|
||||
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
|
||||
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
|
||||
|
||||
docs/providers/gemini-cli.md (new file, 169 lines)
@@ -0,0 +1,169 @@
|
||||
# Gemini CLI Provider
|
||||
|
||||
The Gemini CLI provider allows you to use Google's Gemini models through the Gemini CLI tool, leveraging your existing Gemini subscription and OAuth authentication.
|
||||
|
||||
## Why Use Gemini CLI?
|
||||
|
||||
The primary benefit of using the `gemini-cli` provider is to leverage your existing Gemini Pro subscription or OAuth authentication configured through the Gemini CLI. This is ideal for users who:
|
||||
|
||||
- Have an active Gemini subscription
|
||||
- Want to use OAuth authentication instead of managing API keys
|
||||
- Have already configured authentication via `gemini auth login`
|
||||
|
||||
## Installation
|
||||
|
||||
The provider is already included in Task Master. However, you need to install the Gemini CLI tool:
|
||||
|
||||
```bash
|
||||
# Install gemini CLI globally
|
||||
npm install -g @google/gemini-cli
|
||||
```
|
||||
|
||||
## Authentication
|
||||
|
||||
### Primary Method: CLI Authentication (Recommended)
|
||||
|
||||
The Gemini CLI provider is designed to use your pre-configured OAuth authentication:
|
||||
|
||||
```bash
|
||||
# Authenticate with your Google account
|
||||
gemini auth login
|
||||
```
|
||||
|
||||
This will open a browser window for OAuth authentication. Once authenticated, Task Master will automatically use these credentials when you select the `gemini-cli` provider.
|
||||
|
||||
### Alternative Method: API Key
|
||||
|
||||
While the primary use case is OAuth authentication, you can also use an API key if needed:
|
||||
|
||||
```bash
|
||||
export GEMINI_API_KEY="your-gemini-api-key"
|
||||
```
|
||||
|
||||
**Note:** If you want to use API keys, consider using the standard `google` provider instead, as `gemini-cli` is specifically designed for OAuth/subscription users.
|
||||
|
||||
## Configuration
|
||||
|
||||
Configure `gemini-cli` as a provider using the Task Master models command:
|
||||
|
||||
```bash
|
||||
# Set gemini-cli as your main provider with gemini-2.5-pro
|
||||
task-master models --set-main gemini-2.5-pro --gemini-cli
|
||||
|
||||
# Or use the faster gemini-2.5-flash model
|
||||
task-master models --set-main gemini-2.5-flash --gemini-cli
|
||||
```
|
||||
|
||||
You can also manually edit your `.taskmaster/config/providers.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"main": {
|
||||
"provider": "gemini-cli",
|
||||
"model": "gemini-2.5-flash"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Available Models
|
||||
|
||||
The gemini-cli provider supports only two models:
|
||||
- `gemini-2.5-pro` - High performance model (1M token context window, 65,536 max output tokens)
|
||||
- `gemini-2.5-flash` - Fast, efficient model (1M token context window, 65,536 max output tokens)
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Basic Usage
|
||||
|
||||
Once authenticated with `gemini auth login` and configured, simply use Task Master as normal:
|
||||
|
||||
```bash
|
||||
# The provider will automatically use your OAuth credentials
|
||||
task-master new "Create a hello world function"
|
||||
```
|
||||
|
||||
### With Specific Parameters
|
||||
|
||||
Configure model parameters in your providers.json:
|
||||
|
||||
```json
|
||||
{
|
||||
"main": {
|
||||
"provider": "gemini-cli",
|
||||
"model": "gemini-2.5-pro",
|
||||
"parameters": {
|
||||
"maxTokens": 65536,
|
||||
"temperature": 0.7
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### As Fallback Provider
|
||||
|
||||
Use gemini-cli as a fallback when your primary provider is unavailable:
|
||||
|
||||
```json
|
||||
{
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"model": "claude-3-5-sonnet-latest"
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "gemini-cli",
|
||||
"model": "gemini-2.5-flash"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### "Authentication failed" Error
|
||||
|
||||
If you get an authentication error:
|
||||
|
||||
1. **Primary solution**: Run `gemini auth login` to authenticate with your Google account
|
||||
2. **Check authentication status**: Run `gemini auth status` to verify you're logged in
|
||||
3. **If using API key** (not recommended): Ensure `GEMINI_API_KEY` is set correctly
|
||||
|
||||
### "Model not found" Error
|
||||
|
||||
The gemini-cli provider only supports two models:
|
||||
- `gemini-2.5-pro`
|
||||
- `gemini-2.5-flash`
|
||||
|
||||
If you need other Gemini models, use the standard `google` provider with an API key instead.
|
||||
|
||||
### Gemini CLI Not Found
|
||||
|
||||
If you get a "gemini: command not found" error:
|
||||
|
||||
```bash
|
||||
# Install the Gemini CLI globally
|
||||
npm install -g @google/gemini-cli
|
||||
|
||||
# Verify installation
|
||||
gemini --version
|
||||
```
|
||||
|
||||
### Custom Endpoints
|
||||
|
||||
Custom endpoints can be configured if needed:
|
||||
|
||||
```json
|
||||
{
|
||||
"main": {
|
||||
"provider": "gemini-cli",
|
||||
"model": "gemini-2.5-pro",
|
||||
"baseURL": "https://custom-endpoint.example.com"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Important Notes
|
||||
|
||||
- **OAuth vs API Key**: This provider is specifically designed for users who want to use OAuth authentication via `gemini auth login`. If you prefer using API keys, consider using the standard `google` provider instead.
|
||||
- **Limited Model Support**: Only `gemini-2.5-pro` and `gemini-2.5-flash` are available through gemini-cli.
|
||||
- **Subscription Benefits**: Using OAuth authentication allows you to leverage any subscription benefits associated with your Google account.
|
||||
- The provider uses the `ai-sdk-provider-gemini-cli` npm package internally.
|
||||
- Supports all standard Task Master features: text generation, streaming, and structured object generation.
|
||||
@@ -20,6 +20,8 @@ import {
|
||||
* @param {string} [args.status] - Status for new subtask (default: 'pending')
|
||||
* @param {string} [args.dependencies] - Comma-separated list of dependency IDs
|
||||
* @param {boolean} [args.skipGenerate] - Skip regenerating task files
|
||||
* @param {string} [args.projectRoot] - Project root directory
|
||||
* @param {string} [args.tag] - Tag for the task
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: string}>}
|
||||
*/
|
||||
@@ -34,7 +36,9 @@ export async function addSubtaskDirect(args, log) {
|
||||
details,
|
||||
status,
|
||||
dependencies: dependenciesStr,
|
||||
skipGenerate
|
||||
skipGenerate,
|
||||
projectRoot,
|
||||
tag
|
||||
} = args;
|
||||
try {
|
||||
log.info(`Adding subtask with args: ${JSON.stringify(args)}`);
|
||||
@@ -96,6 +100,8 @@ export async function addSubtaskDirect(args, log) {
|
||||
// Enable silent mode to prevent console logs from interfering with JSON response
|
||||
enableSilentMode();
|
||||
|
||||
const context = { projectRoot, tag };
|
||||
|
||||
// Case 1: Convert existing task to subtask
|
||||
if (existingTaskId) {
|
||||
log.info(`Converting task ${existingTaskId} to a subtask of ${parentId}`);
|
||||
@@ -104,7 +110,8 @@ export async function addSubtaskDirect(args, log) {
|
||||
parentId,
|
||||
existingTaskId,
|
||||
null,
|
||||
generateFiles
|
||||
generateFiles,
|
||||
context
|
||||
);
|
||||
|
||||
// Restore normal logging
|
||||
@@ -135,7 +142,8 @@ export async function addSubtaskDirect(args, log) {
|
||||
parentId,
|
||||
null,
|
||||
newSubtaskData,
|
||||
generateFiles
|
||||
generateFiles,
|
||||
context
|
||||
);
|
||||
|
||||
// Restore normal logging
|
||||
|
||||
@@ -171,8 +171,8 @@ export async function expandTaskDirect(args, log, context = {}) {
|
||||
task.subtasks = [];
|
||||
}
|
||||
|
||||
// Save tasks.json with potentially empty subtasks array
|
||||
writeJSON(tasksPath, data);
|
||||
// Save tasks.json with potentially empty subtasks array and proper context
|
||||
writeJSON(tasksPath, data, projectRoot, tag);
|
||||
|
||||
// Create logger wrapper using the utility
|
||||
const mcpLog = createLogWrapper(log);
|
||||
|
||||
@@ -13,12 +13,14 @@ import fs from 'fs';
|
||||
* Fix invalid dependencies in tasks.json automatically
|
||||
* @param {Object} args - Function arguments
|
||||
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
|
||||
* @param {string} args.projectRoot - Project root directory
|
||||
* @param {string} args.tag - Tag for the project
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
|
||||
*/
|
||||
export async function fixDependenciesDirect(args, log) {
|
||||
// Destructure expected args
|
||||
const { tasksJsonPath } = args;
|
||||
const { tasksJsonPath, projectRoot, tag } = args;
|
||||
try {
|
||||
log.info(`Fixing invalid dependencies in tasks: ${tasksJsonPath}`);
|
||||
|
||||
@@ -51,8 +53,10 @@ export async function fixDependenciesDirect(args, log) {
|
||||
// Enable silent mode to prevent console logs from interfering with JSON response
|
||||
enableSilentMode();
|
||||
|
||||
// Call the original command function using the provided path
|
||||
await fixDependenciesCommand(tasksPath);
|
||||
// Call the original command function using the provided path and proper context
|
||||
await fixDependenciesCommand(tasksPath, {
|
||||
context: { projectRoot, tag }
|
||||
});
|
||||
|
||||
// Restore normal logging
|
||||
disableSilentMode();
|
||||
@@ -61,7 +65,8 @@ export async function fixDependenciesDirect(args, log) {
|
||||
success: true,
|
||||
data: {
|
||||
message: 'Dependencies fixed successfully',
|
||||
tasksPath
|
||||
tasksPath,
|
||||
tag: tag || 'master'
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
|
||||
@@ -72,15 +72,16 @@ export async function initializeProjectDirect(args, log, context = {}) {
|
||||
yes: true // Force yes mode
|
||||
};
|
||||
|
||||
// Handle rules option just like CLI
|
||||
// Handle rules option with MCP-specific defaults
|
||||
if (Array.isArray(args.rules) && args.rules.length > 0) {
|
||||
options.rules = args.rules;
|
||||
options.rulesExplicitlyProvided = true;
|
||||
log.info(`Including rules: ${args.rules.join(', ')}`);
|
||||
} else {
|
||||
options.rules = RULE_PROFILES;
|
||||
log.info(
|
||||
`No rule profiles specified, defaulting to: ${RULE_PROFILES.join(', ')}`
|
||||
);
|
||||
// For MCP initialization, default to Cursor profile only
|
||||
options.rules = ['cursor'];
|
||||
options.rulesExplicitlyProvided = true;
|
||||
log.info(`No rule profiles specified, defaulting to: Cursor`);
|
||||
}
|
||||
|
||||
log.info(`Initializing project with options: ${JSON.stringify(options)}`);
|
||||
|
||||
@@ -109,7 +109,7 @@ export async function parsePRDDirect(args, log, context = {}) {
|
||||
if (numTasksArg) {
|
||||
numTasks =
|
||||
typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
|
||||
if (Number.isNaN(numTasks) || numTasks <= 0) {
|
||||
if (Number.isNaN(numTasks) || numTasks < 0) {
|
||||
// Ensure positive number
|
||||
numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
|
||||
logWrapper.warn(
|
||||
|
||||
@@ -20,12 +20,13 @@ import {
|
||||
* @param {Object} args - Command arguments
|
||||
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
|
||||
* @param {string} args.id - The ID(s) of the task(s) or subtask(s) to remove (comma-separated for multiple).
|
||||
* @param {string} [args.tag] - Tag context to operate on (defaults to current active tag).
|
||||
* @param {Object} log - Logger object
|
||||
* @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string } }
|
||||
*/
|
||||
export async function removeTaskDirect(args, log, context = {}) {
|
||||
// Destructure expected args
|
||||
const { tasksJsonPath, id, projectRoot } = args;
|
||||
const { tasksJsonPath, id, projectRoot, tag } = args;
|
||||
const { session } = context;
|
||||
try {
|
||||
// Check if tasksJsonPath was provided
|
||||
@@ -56,17 +57,17 @@ export async function removeTaskDirect(args, log, context = {}) {
|
||||
const taskIdArray = id.split(',').map((taskId) => taskId.trim());
|
||||
|
||||
log.info(
|
||||
`Removing ${taskIdArray.length} task(s) with ID(s): ${taskIdArray.join(', ')} from ${tasksJsonPath}`
|
||||
`Removing ${taskIdArray.length} task(s) with ID(s): ${taskIdArray.join(', ')} from ${tasksJsonPath}${tag ? ` in tag '${tag}'` : ''}`
|
||||
);
|
||||
|
||||
// Validate all task IDs exist before proceeding
|
||||
const data = readJSON(tasksJsonPath, projectRoot);
|
||||
const data = readJSON(tasksJsonPath, projectRoot, tag);
|
||||
if (!data || !data.tasks) {
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'INVALID_TASKS_FILE',
|
||||
message: `No valid tasks found in ${tasksJsonPath}`
|
||||
message: `No valid tasks found in ${tasksJsonPath}${tag ? ` for tag '${tag}'` : ''}`
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -80,71 +81,49 @@ export async function removeTaskDirect(args, log, context = {}) {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'INVALID_TASK_ID',
|
||||
message: `The following tasks were not found: ${invalidTasks.join(', ')}`
|
||||
message: `The following tasks were not found${tag ? ` in tag '${tag}'` : ''}: ${invalidTasks.join(', ')}`
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
// Remove tasks one by one
|
||||
const results = [];
|
||||
|
||||
// Enable silent mode to prevent console logs from interfering with JSON response
|
||||
enableSilentMode();
|
||||
|
||||
try {
|
||||
for (const taskId of taskIdArray) {
|
||||
try {
|
||||
const result = await removeTask(tasksJsonPath, taskId);
|
||||
results.push({
|
||||
taskId,
|
||||
success: true,
|
||||
message: result.message,
|
||||
removedTask: result.removedTask
|
||||
// Call removeTask with proper context including tag
|
||||
const result = await removeTask(tasksJsonPath, id, {
|
||||
projectRoot,
|
||||
tag
|
||||
});
|
||||
log.info(`Successfully removed task: ${taskId}`);
|
||||
} catch (error) {
|
||||
results.push({
|
||||
taskId,
|
||||
success: false,
|
||||
error: error.message
|
||||
});
|
||||
log.error(`Error removing task ${taskId}: ${error.message}`);
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
// Restore normal logging
|
||||
disableSilentMode();
|
||||
}
|
||||
|
||||
// Check if all tasks were successfully removed
|
||||
const successfulRemovals = results.filter((r) => r.success);
|
||||
const failedRemovals = results.filter((r) => !r.success);
|
||||
|
||||
if (successfulRemovals.length === 0) {
|
||||
// All removals failed
|
||||
if (!result.success) {
|
||||
return {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'REMOVE_TASK_ERROR',
|
||||
message: 'Failed to remove any tasks',
|
||||
details: failedRemovals
|
||||
.map((r) => `${r.taskId}: ${r.error}`)
|
||||
.join('; ')
|
||||
message: result.error || 'Failed to remove tasks'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
// At least some tasks were removed successfully
|
||||
log.info(`Successfully removed ${result.removedTasks.length} task(s)`);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: {
|
||||
totalTasks: taskIdArray.length,
|
||||
successful: successfulRemovals.length,
|
||||
failed: failedRemovals.length,
|
||||
results: results,
|
||||
tasksPath: tasksJsonPath
|
||||
successful: result.removedTasks.length,
|
||||
failed: taskIdArray.length - result.removedTasks.length,
|
||||
removedTasks: result.removedTasks,
|
||||
message: result.message,
|
||||
tasksPath: tasksJsonPath,
|
||||
tag: data.tag || tag || 'master'
|
||||
}
|
||||
};
|
||||
} finally {
|
||||
// Restore normal logging
|
||||
disableSilentMode();
|
||||
}
|
||||
} catch (error) {
|
||||
// Ensure silent mode is disabled even if an outer error occurs
|
||||
disableSilentMode();
|
||||
|
||||
mcp-server/src/core/direct-functions/response-language.js (new file, 40 lines)
@@ -0,0 +1,40 @@
/**
 * response-language.js
 * Direct function for managing response language via MCP
 */

import { setResponseLanguage } from '../../../../scripts/modules/task-manager.js';
import {
	enableSilentMode,
	disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';

export async function responseLanguageDirect(args, log, context = {}) {
	const { projectRoot, language } = args;
	const mcpLog = createLogWrapper(log);

	log.info(
		`Executing response-language_direct with args: ${JSON.stringify(args)}`
	);
	log.info(`Using project root: ${projectRoot}`);

	try {
		enableSilentMode();
		return setResponseLanguage(language, {
			mcpLog,
			projectRoot
		});
	} catch (error) {
		return {
			success: false,
			error: {
				code: 'DIRECT_FUNCTION_ERROR',
				message: error.message,
				details: error.stack
			}
		};
	} finally {
		disableSilentMode();
	}
}
|
||||
@@ -20,7 +20,8 @@ import { nextTaskDirect } from './next-task.js';
|
||||
*/
|
||||
export async function setTaskStatusDirect(args, log, context = {}) {
|
||||
// Destructure expected args, including the resolved tasksJsonPath and projectRoot
|
||||
const { tasksJsonPath, id, status, complexityReportPath, projectRoot } = args;
|
||||
const { tasksJsonPath, id, status, complexityReportPath, projectRoot, tag } =
|
||||
args;
|
||||
const { session } = context;
|
||||
try {
|
||||
log.info(`Setting task status with args: ${JSON.stringify(args)}`);
|
||||
@@ -69,11 +70,17 @@ export async function setTaskStatusDirect(args, log, context = {}) {
|
||||
enableSilentMode(); // Enable silent mode before calling core function
|
||||
try {
|
||||
// Call the core function
|
||||
await setTaskStatus(tasksPath, taskId, newStatus, {
|
||||
await setTaskStatus(
|
||||
tasksPath,
|
||||
taskId,
|
||||
newStatus,
|
||||
{
|
||||
mcpLog: log,
|
||||
projectRoot,
|
||||
session
|
||||
});
|
||||
},
|
||||
tag
|
||||
);
|
||||
|
||||
log.info(`Successfully set task ${taskId} status to ${newStatus}`);
|
||||
|
||||
|
||||
@@ -21,7 +21,7 @@ import {
|
||||
*/
|
||||
export async function updateTasksDirect(args, log, context = {}) {
|
||||
const { session } = context;
|
||||
const { from, prompt, research, tasksJsonPath, projectRoot } = args;
|
||||
const { from, prompt, research, tasksJsonPath, projectRoot, tag } = args;
|
||||
|
||||
// Create the standard logger wrapper
|
||||
const logWrapper = createLogWrapper(log);
|
||||
@@ -75,7 +75,8 @@ export async function updateTasksDirect(args, log, context = {}) {
|
||||
{
|
||||
session,
|
||||
mcpLog: logWrapper,
|
||||
projectRoot
|
||||
projectRoot,
|
||||
tag
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
@@ -52,6 +52,7 @@ export function registerAddSubtaskTool(server) {
|
||||
.describe(
|
||||
'Absolute path to the tasks file (default: tasks/tasks.json)'
|
||||
),
|
||||
tag: z.string().optional().describe('Tag context to operate on'),
|
||||
skipGenerate: z
|
||||
.boolean()
|
||||
.optional()
|
||||
@@ -89,7 +90,8 @@ export function registerAddSubtaskTool(server) {
|
||||
status: args.status,
|
||||
dependencies: args.dependencies,
|
||||
skipGenerate: args.skipGenerate,
|
||||
projectRoot: args.projectRoot
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -24,7 +24,8 @@ export function registerFixDependenciesTool(server) {
|
||||
file: z.string().optional().describe('Absolute path to the tasks file'),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
@@ -46,7 +47,9 @@ export function registerFixDependenciesTool(server) {
|
||||
|
||||
const result = await fixDependenciesDirect(
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
},
|
||||
log
|
||||
);
|
||||
|
||||
@@ -29,6 +29,7 @@ import { registerRemoveTaskTool } from './remove-task.js';
|
||||
import { registerInitializeProjectTool } from './initialize-project.js';
|
||||
import { registerModelsTool } from './models.js';
|
||||
import { registerMoveTaskTool } from './move-task.js';
|
||||
import { registerResponseLanguageTool } from './response-language.js';
|
||||
import { registerAddTagTool } from './add-tag.js';
|
||||
import { registerDeleteTagTool } from './delete-tag.js';
|
||||
import { registerListTagsTool } from './list-tags.js';
|
||||
@@ -83,6 +84,7 @@ export function registerTaskMasterTools(server) {
|
||||
registerRemoveDependencyTool(server);
|
||||
registerValidateDependenciesTool(server);
|
||||
registerFixDependenciesTool(server);
|
||||
registerResponseLanguageTool(server);
|
||||
|
||||
// Group 7: Tag Management
|
||||
registerListTagsTool(server);
|
||||
|
||||
@@ -51,7 +51,7 @@ export function registerInitializeProjectTool(server) {
|
||||
.array(z.enum(RULE_PROFILES))
|
||||
.optional()
|
||||
.describe(
|
||||
`List of rule profiles to include at initialization. If omitted, defaults to all available profiles. Available options: ${RULE_PROFILES.join(', ')}`
|
||||
`List of rule profiles to include at initialization. If omitted, defaults to Cursor profile only. Available options: ${RULE_PROFILES.join(', ')}`
|
||||
)
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, context) => {
|
||||
|
||||
@@ -43,7 +43,7 @@ export function registerParsePRDTool(server) {
|
||||
.string()
|
||||
.optional()
|
||||
.describe(
|
||||
'Approximate number of top-level tasks to generate (default: 10). As the agent, if you have enough information, ensure to enter a number of tasks that would logically scale with project complexity. Avoid entering numbers above 50 due to context window limitations.'
|
||||
'Approximate number of top-level tasks to generate (default: 10). As the agent, if you have enough information, ensure to enter a number of tasks that would logically scale with project complexity. Setting to 0 will allow Taskmaster to determine the appropriate number of tasks based on the complexity of the PRD. Avoid entering numbers above 50 due to context window limitations.'
|
||||
),
|
||||
force: z
|
||||
.boolean()
|
||||
|
||||
@@ -33,7 +33,13 @@ export function registerRemoveTaskTool(server) {
|
||||
confirm: z
|
||||
.boolean()
|
||||
.optional()
|
||||
.describe('Whether to skip confirmation prompt (default: false)')
|
||||
.describe('Whether to skip confirmation prompt (default: false)'),
|
||||
tag: z
|
||||
.string()
|
||||
.optional()
|
||||
.describe(
|
||||
'Specify which tag context to operate on. Defaults to the current active tag.'
|
||||
)
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
@@ -59,7 +65,8 @@ export function registerRemoveTaskTool(server) {
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
id: args.id,
|
||||
projectRoot: args.projectRoot
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
46
mcp-server/src/tools/response-language.js
Normal file
@@ -0,0 +1,46 @@
import { z } from 'zod';
import {
	createErrorResponse,
	handleApiResult,
	withNormalizedProjectRoot
} from './utils.js';
import { responseLanguageDirect } from '../core/direct-functions/response-language.js';

export function registerResponseLanguageTool(server) {
	server.addTool({
		name: 'response-language',
		description: 'Get or set the response language for the project',
		parameters: z.object({
			projectRoot: z
				.string()
				.describe(
					'The root directory for the project. ALWAYS SET THIS TO THE PROJECT ROOT DIRECTORY. IF NOT SET, THE TOOL WILL NOT WORK.'
				),
			language: z
				.string()
				.describe(
					'The new response language to set, e.g. "中文", "English" or "español".'
				)
		}),
		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
			try {
				log.info(
					`Executing response-language tool with args: ${JSON.stringify(args)}`
				);

				const result = await responseLanguageDirect(
					{
						...args,
						projectRoot: args.projectRoot
					},
					log,
					{ session }
				);
				return handleApiResult(result, log, 'Error setting response language');
			} catch (error) {
				log.error(`Error in response-language tool: ${error.message}`);
				return createErrorResponse(error.message);
			}
		})
	});
}
|
||||
@@ -47,7 +47,8 @@ export function registerSetTaskStatusTool(server) {
|
||||
),
|
||||
projectRoot: z
|
||||
.string()
|
||||
.describe('The directory of the project. Must be an absolute path.')
|
||||
.describe('The directory of the project. Must be an absolute path.'),
|
||||
tag: z.string().optional().describe('Optional tag context to operate on')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
@@ -86,7 +87,8 @@ export function registerSetTaskStatusTool(server) {
|
||||
id: args.id,
|
||||
status: args.status,
|
||||
complexityReportPath,
|
||||
projectRoot: args.projectRoot
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
@@ -43,11 +43,12 @@ export function registerUpdateTool(server) {
|
||||
.optional()
|
||||
.describe(
|
||||
'The directory of the project. (Optional, usually from session)'
|
||||
)
|
||||
),
|
||||
tag: z.string().optional().describe('Tag context to operate on')
|
||||
}),
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
const toolName = 'update';
|
||||
const { from, prompt, research, file, projectRoot } = args;
|
||||
const { from, prompt, research, file, projectRoot, tag } = args;
|
||||
|
||||
try {
|
||||
log.info(
|
||||
@@ -71,7 +72,8 @@ export function registerUpdateTool(server) {
|
||||
from: from,
|
||||
prompt: prompt,
|
||||
research: research,
|
||||
projectRoot: projectRoot
|
||||
projectRoot: projectRoot,
|
||||
tag: tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
|
||||
5514
package-lock.json
generated
File diff suppressed because it is too large
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "task-master-ai",
|
||||
"version": "0.18.0",
|
||||
"version": "0.19.0",
|
||||
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
|
||||
"main": "index.js",
|
||||
"type": "module",
|
||||
@@ -68,6 +68,7 @@
|
||||
"gradient-string": "^3.0.0",
|
||||
"helmet": "^8.1.0",
|
||||
"inquirer": "^12.5.0",
|
||||
"jsonc-parser": "^3.3.1",
|
||||
"jsonwebtoken": "^9.0.2",
|
||||
"lru-cache": "^10.2.0",
|
||||
"ollama-ai-provider": "^1.2.0",
|
||||
@@ -77,7 +78,8 @@
|
||||
"zod": "^3.23.8"
|
||||
},
|
||||
"optionalDependencies": {
|
||||
"@anthropic-ai/claude-code": "^1.0.25"
|
||||
"@anthropic-ai/claude-code": "^1.0.25",
|
||||
"ai-sdk-provider-gemini-cli": "^0.0.3"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18.0.0"
|
||||
|
||||
@@ -30,6 +30,7 @@ import {
|
||||
convertAllRulesToProfileRules,
|
||||
getRulesProfile
|
||||
} from '../src/utils/rule-transformer.js';
|
||||
import { updateConfigMaxTokens } from './modules/update-config-tokens.js';
|
||||
|
||||
import { execSync } from 'child_process';
|
||||
import {
|
||||
@@ -623,6 +624,14 @@ function createProjectStructure(
|
||||
}
|
||||
);
|
||||
|
||||
// Update config.json with correct maxTokens values from supported-models.json
|
||||
const configPath = path.join(targetDir, TASKMASTER_CONFIG_FILE);
|
||||
if (updateConfigMaxTokens(configPath)) {
|
||||
log('info', 'Updated config with correct maxTokens values');
|
||||
} else {
|
||||
log('warn', 'Could not update maxTokens in config');
|
||||
}
|
||||
|
||||
// Copy .gitignore with GitTasks preference
|
||||
try {
|
||||
const gitignoreTemplatePath = path.join(
|
||||
@@ -757,6 +766,44 @@ function createProjectStructure(
|
||||
}
|
||||
// =====================================
|
||||
|
||||
// === Add Response Language Step ===
|
||||
if (!isSilentMode() && !dryRun && !options?.yes) {
|
||||
console.log(
|
||||
boxen(chalk.cyan('Configuring Response Language...'), {
|
||||
padding: 0.5,
|
||||
margin: { top: 1, bottom: 0.5 },
|
||||
borderStyle: 'round',
|
||||
borderColor: 'blue'
|
||||
})
|
||||
);
|
||||
log(
|
||||
'info',
|
||||
'Running interactive response language setup. Please input your preferred language.'
|
||||
);
|
||||
try {
|
||||
execSync('npx task-master lang --setup', {
|
||||
stdio: 'inherit',
|
||||
cwd: targetDir
|
||||
});
|
||||
log('success', 'Response Language configured.');
|
||||
} catch (error) {
|
||||
log('error', 'Failed to configure response language:', error.message);
|
||||
log('warn', 'You may need to run "task-master lang --setup" manually.');
|
||||
}
|
||||
} else if (isSilentMode() && !dryRun) {
|
||||
log(
|
||||
'info',
|
||||
'Skipping interactive response language setup in silent (MCP) mode.'
|
||||
);
|
||||
log(
|
||||
'warn',
|
||||
'Please configure response language using "task-master models --set-response-language" or the "models" MCP tool.'
|
||||
);
|
||||
} else if (dryRun) {
|
||||
log('info', 'DRY RUN: Skipping interactive response language setup.');
|
||||
}
|
||||
// =====================================
|
||||
|
||||
// === Add Model Configuration Step ===
|
||||
if (!isSilentMode() && !dryRun && !options?.yes) {
|
||||
console.log(
|
||||
|
||||
@@ -15,6 +15,7 @@ import {
|
||||
getFallbackProvider,
|
||||
getFallbackModelId,
|
||||
getParametersForRole,
|
||||
getResponseLanguage,
|
||||
getUserId,
|
||||
MODEL_MAP,
|
||||
getDebugFlag,
|
||||
@@ -24,7 +25,8 @@ import {
|
||||
getAzureBaseURL,
|
||||
getBedrockBaseURL,
|
||||
getVertexProjectId,
|
||||
getVertexLocation
|
||||
getVertexLocation,
|
||||
providersWithoutApiKeys
|
||||
} from './config-manager.js';
|
||||
import {
|
||||
log,
|
||||
@@ -45,7 +47,8 @@ import {
|
||||
BedrockAIProvider,
|
||||
AzureProvider,
|
||||
VertexAIProvider,
|
||||
ClaudeCodeProvider
|
||||
ClaudeCodeProvider,
|
||||
GeminiCliProvider
|
||||
} from '../../src/ai-providers/index.js';
|
||||
|
||||
// Create provider instances
|
||||
@@ -60,7 +63,8 @@ const PROVIDERS = {
|
||||
bedrock: new BedrockAIProvider(),
|
||||
azure: new AzureProvider(),
|
||||
vertex: new VertexAIProvider(),
|
||||
'claude-code': new ClaudeCodeProvider()
|
||||
'claude-code': new ClaudeCodeProvider(),
|
||||
'gemini-cli': new GeminiCliProvider()
|
||||
};
|
||||
|
||||
// Helper function to get cost for a specific model
|
||||
@@ -232,6 +236,12 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
|
||||
return 'claude-code-no-key-required';
|
||||
}
|
||||
|
||||
// Gemini CLI can work without an API key (uses CLI auth)
|
||||
if (providerName === 'gemini-cli') {
|
||||
const apiKey = resolveEnvVariable('GEMINI_API_KEY', session, projectRoot);
|
||||
return apiKey || 'gemini-cli-no-key-required';
|
||||
}
|
||||
|
||||
const keyMap = {
|
||||
openai: 'OPENAI_API_KEY',
|
||||
anthropic: 'ANTHROPIC_API_KEY',
|
||||
@@ -244,7 +254,8 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
|
||||
ollama: 'OLLAMA_API_KEY',
|
||||
bedrock: 'AWS_ACCESS_KEY_ID',
|
||||
vertex: 'GOOGLE_API_KEY',
|
||||
'claude-code': 'CLAUDE_CODE_API_KEY' // Not actually used, but included for consistency
|
||||
'claude-code': 'CLAUDE_CODE_API_KEY', // Not actually used, but included for consistency
|
||||
'gemini-cli': 'GEMINI_API_KEY'
|
||||
};
|
||||
|
||||
const envVarName = keyMap[providerName];
|
||||
@@ -257,7 +268,7 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
|
||||
const apiKey = resolveEnvVariable(envVarName, session, projectRoot);
|
||||
|
||||
// Special handling for providers that can use alternative auth
|
||||
if (providerName === 'ollama' || providerName === 'bedrock') {
|
||||
if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
|
||||
return apiKey || null;
|
||||
}
|
||||
|
||||
@@ -457,7 +468,7 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
}
|
||||
|
||||
// Check API key if needed
|
||||
if (providerName?.toLowerCase() !== 'ollama') {
|
||||
if (!providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
|
||||
if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
|
||||
log(
|
||||
'warn',
|
||||
@@ -541,9 +552,12 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
}
|
||||
|
||||
const messages = [];
|
||||
if (systemPrompt) {
|
||||
messages.push({ role: 'system', content: systemPrompt });
|
||||
}
|
||||
const responseLanguage = getResponseLanguage(effectiveProjectRoot);
|
||||
const systemPromptWithLanguage = `${systemPrompt} \n\n Always respond in ${responseLanguage}.`;
|
||||
messages.push({
|
||||
role: 'system',
|
||||
content: systemPromptWithLanguage.trim()
|
||||
});
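For orientation, here is a minimal sketch (not part of the diff) of what the language injection above produces; the prompt text and language value are illustrative placeholders:

```js
// Illustrative only: mirrors the template used above to append the configured
// response language to the system prompt before it is pushed onto `messages`.
const systemPrompt = 'You are an AI assistant helping with task breakdown.'; // example value
const responseLanguage = 'español'; // example return value of getResponseLanguage()

const systemPromptWithLanguage = `${systemPrompt} \n\n Always respond in ${responseLanguage}.`;

const messages = [{ role: 'system', content: systemPromptWithLanguage.trim() }];

// The system message now ends with "Always respond in español." on its own line,
// which is how every AI call picks up the configured response language.
console.log(messages[0].content);
```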
|
||||
|
||||
// IN THE FUTURE WHEN DOING CONTEXT IMPROVEMENTS
|
||||
// {
|
||||
|
||||
@@ -42,7 +42,8 @@ import {
|
||||
findTaskById,
|
||||
taskExists,
|
||||
moveTask,
|
||||
migrateProject
|
||||
migrateProject,
|
||||
setResponseLanguage
|
||||
} from './task-manager.js';
|
||||
|
||||
import {
|
||||
@@ -69,7 +70,9 @@ import {
|
||||
ConfigurationError,
|
||||
isConfigFilePresent,
|
||||
getAvailableModels,
|
||||
getBaseUrlForRole
|
||||
getBaseUrlForRole,
|
||||
getDefaultNumTasks,
|
||||
getDefaultSubtasks
|
||||
} from './config-manager.js';
|
||||
|
||||
import { CUSTOM_PROVIDERS } from '../../src/constants/providers.js';
|
||||
@@ -803,7 +806,11 @@ function registerCommands(programInstance) {
|
||||
'Path to the PRD file (alternative to positional argument)'
|
||||
)
|
||||
.option('-o, --output <file>', 'Output file path', TASKMASTER_TASKS_FILE)
|
||||
.option('-n, --num-tasks <number>', 'Number of tasks to generate', '10')
|
||||
.option(
|
||||
'-n, --num-tasks <number>',
|
||||
'Number of tasks to generate',
|
||||
getDefaultNumTasks()
|
||||
)
|
||||
.option('-f, --force', 'Skip confirmation when overwriting existing tasks')
|
||||
.option(
|
||||
'--append',
|
||||
@@ -3421,6 +3428,10 @@ ${result.result}
|
||||
'--vertex',
|
||||
'Allow setting a custom Vertex AI model ID (use with --set-*) '
|
||||
)
|
||||
.option(
|
||||
'--gemini-cli',
|
||||
'Allow setting a Gemini CLI model ID (use with --set-*)'
|
||||
)
|
||||
.addHelpText(
|
||||
'after',
|
||||
`
|
||||
@@ -3435,6 +3446,7 @@ Examples:
|
||||
$ task-master models --set-main sonnet --claude-code # Set Claude Code model for main role
|
||||
$ task-master models --set-main gpt-4o --azure # Set custom Azure OpenAI model for main role
|
||||
$ task-master models --set-main claude-3-5-sonnet@20241022 --vertex # Set custom Vertex AI model for main role
|
||||
$ task-master models --set-main gemini-2.5-pro --gemini-cli # Set Gemini CLI model for main role
|
||||
$ task-master models --setup # Run interactive setup`
|
||||
)
|
||||
.action(async (options) => {
|
||||
@@ -3448,12 +3460,13 @@ Examples:
|
||||
options.openrouter,
|
||||
options.ollama,
|
||||
options.bedrock,
|
||||
options.claudeCode
|
||||
options.claudeCode,
|
||||
options.geminiCli
|
||||
].filter(Boolean).length;
|
||||
if (providerFlags > 1) {
|
||||
console.error(
|
||||
chalk.red(
|
||||
'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code) simultaneously.'
|
||||
'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code, --gemini-cli) simultaneously.'
|
||||
)
|
||||
);
|
||||
process.exit(1);
|
||||
@@ -3497,6 +3510,8 @@ Examples:
|
||||
? 'bedrock'
|
||||
: options.claudeCode
|
||||
? 'claude-code'
|
||||
: options.geminiCli
|
||||
? 'gemini-cli'
|
||||
: undefined
|
||||
});
|
||||
if (result.success) {
|
||||
@@ -3521,6 +3536,8 @@ Examples:
|
||||
? 'bedrock'
|
||||
: options.claudeCode
|
||||
? 'claude-code'
|
||||
: options.geminiCli
|
||||
? 'gemini-cli'
|
||||
: undefined
|
||||
});
|
||||
if (result.success) {
|
||||
@@ -3547,6 +3564,8 @@ Examples:
|
||||
? 'bedrock'
|
||||
: options.claudeCode
|
||||
? 'claude-code'
|
||||
: options.geminiCli
|
||||
? 'gemini-cli'
|
||||
: undefined
|
||||
});
|
||||
if (result.success) {
|
||||
@@ -3643,6 +3662,63 @@ Examples:
|
||||
return; // Stop execution here
|
||||
});
|
||||
|
||||
// response-language command
|
||||
programInstance
|
||||
.command('lang')
|
||||
.description('Manage response language settings')
|
||||
.option('--response <response_language>', 'Set the response language')
|
||||
.option('--setup', 'Run interactive setup to configure response language')
|
||||
.action(async (options) => {
|
||||
const projectRoot = findProjectRoot(); // Find project root for context
|
||||
const { response, setup } = options;
|
||||
console.log(
|
||||
chalk.blue('Response language set to:', JSON.stringify(options))
|
||||
);
|
||||
let responseLanguage = response || 'English';
|
||||
if (setup) {
|
||||
console.log(
|
||||
chalk.blue('Starting interactive response language setup...')
|
||||
);
|
||||
try {
|
||||
const userResponse = await inquirer.prompt([
|
||||
{
|
||||
type: 'input',
|
||||
name: 'responseLanguage',
|
||||
message: 'Input your preferred response language',
|
||||
default: 'English'
|
||||
}
|
||||
]);
|
||||
|
||||
console.log(
|
||||
chalk.blue(
|
||||
'Response language set to:',
|
||||
userResponse.responseLanguage
|
||||
)
|
||||
);
|
||||
responseLanguage = userResponse.responseLanguage;
|
||||
} catch (setupError) {
|
||||
console.error(
|
||||
chalk.red('\\nInteractive setup failed unexpectedly:'),
|
||||
setupError.message
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
const result = setResponseLanguage(responseLanguage, {
|
||||
projectRoot
|
||||
});
|
||||
|
||||
if (result.success) {
|
||||
console.log(chalk.green(`✅ ${result.data.message}`));
|
||||
} else {
|
||||
console.error(
|
||||
chalk.red(
|
||||
`❌ Error setting response language: ${result.error.message}`
|
||||
)
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
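For reference, the `lang` command registered above can be exercised directly from the CLI: `task-master lang --setup` walks through the interactive prompt shown here, while `task-master lang --response "español"` sets the response language in one step; both paths end in the same `setResponseLanguage()` call (illustrative invocations based only on the options defined in this changeset).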
// move-task command
|
||||
programInstance
|
||||
.command('move')
|
||||
@@ -3810,7 +3886,11 @@ Examples:
|
||||
$ task-master rules --${RULES_SETUP_ACTION} # Interactive setup to select rule profiles`
|
||||
)
|
||||
.action(async (action, profiles, options) => {
|
||||
const projectDir = process.cwd();
|
||||
const projectRoot = findProjectRoot();
|
||||
if (!projectRoot) {
|
||||
console.error(chalk.red('Error: Could not find project root.'));
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
/**
|
||||
* 'task-master rules --setup' action:
|
||||
@@ -3857,7 +3937,7 @@ Examples:
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
|
||||
const addResult = convertAllRulesToProfileRules(
|
||||
projectDir,
|
||||
projectRoot,
|
||||
profileConfig
|
||||
);
|
||||
|
||||
@@ -3903,8 +3983,8 @@ Examples:
|
||||
let confirmed = true;
|
||||
if (!options.force) {
|
||||
// Check if this removal would leave no profiles remaining
|
||||
if (wouldRemovalLeaveNoProfiles(projectDir, expandedProfiles)) {
|
||||
const installedProfiles = getInstalledProfiles(projectDir);
|
||||
if (wouldRemovalLeaveNoProfiles(projectRoot, expandedProfiles)) {
|
||||
const installedProfiles = getInstalledProfiles(projectRoot);
|
||||
confirmed = await confirmRemoveAllRemainingProfiles(
|
||||
expandedProfiles,
|
||||
installedProfiles
|
||||
@@ -3934,12 +4014,12 @@ Examples:
|
||||
if (action === RULES_ACTIONS.ADD) {
|
||||
console.log(chalk.blue(`Adding rules for profile: ${profile}...`));
|
||||
const addResult = convertAllRulesToProfileRules(
|
||||
projectDir,
|
||||
projectRoot,
|
||||
profileConfig
|
||||
);
|
||||
if (typeof profileConfig.onAddRulesProfile === 'function') {
|
||||
const assetsDir = path.join(process.cwd(), 'assets');
|
||||
profileConfig.onAddRulesProfile(projectDir, assetsDir);
|
||||
const assetsDir = path.join(projectRoot, 'assets');
|
||||
profileConfig.onAddRulesProfile(projectRoot, assetsDir);
|
||||
}
|
||||
console.log(
|
||||
chalk.blue(`Completed adding rules for profile: ${profile}`)
|
||||
@@ -3955,7 +4035,7 @@ Examples:
|
||||
console.log(chalk.green(generateProfileSummary(profile, addResult)));
|
||||
} else if (action === RULES_ACTIONS.REMOVE) {
|
||||
console.log(chalk.blue(`Removing rules for profile: ${profile}...`));
|
||||
const result = removeProfileRules(projectDir, profileConfig);
|
||||
const result = removeProfileRules(projectRoot, profileConfig);
|
||||
removalResults.push(result);
|
||||
console.log(
|
||||
chalk.green(generateProfileRemovalSummary(profile, result))
|
||||
|
||||
@@ -1,8 +1,9 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import chalk from 'chalk';
|
||||
import { z } from 'zod';
|
||||
import { fileURLToPath } from 'url';
|
||||
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
|
||||
import { log, findProjectRoot, resolveEnvVariable, isEmpty } from './utils.js';
|
||||
import { LEGACY_CONFIG_FILE } from '../../src/constants/paths.js';
|
||||
import { findConfigPath } from '../../src/utils/path-utils.js';
|
||||
import {
|
||||
@@ -11,6 +12,7 @@ import {
|
||||
CUSTOM_PROVIDERS_ARRAY,
|
||||
ALL_PROVIDERS
|
||||
} from '../../src/constants/providers.js';
|
||||
import { AI_COMMAND_NAMES } from '../../src/constants/commands.js';
|
||||
|
||||
// Calculate __dirname in ESM
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
@@ -61,12 +63,15 @@ const DEFAULTS = {
|
||||
global: {
|
||||
logLevel: 'info',
|
||||
debug: false,
|
||||
defaultNumTasks: 10,
|
||||
defaultSubtasks: 5,
|
||||
defaultPriority: 'medium',
|
||||
projectName: 'Task Master',
|
||||
ollamaBaseURL: 'http://localhost:11434/api',
|
||||
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
|
||||
}
|
||||
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com',
|
||||
responseLanguage: 'English'
|
||||
},
|
||||
claudeCode: {}
|
||||
};
|
||||
|
||||
// --- Internal Config Loading ---
|
||||
@@ -127,7 +132,8 @@ function _loadAndValidateConfig(explicitRoot = null) {
|
||||
? { ...defaults.models.fallback, ...parsedConfig.models.fallback }
|
||||
: { ...defaults.models.fallback }
|
||||
},
|
||||
global: { ...defaults.global, ...parsedConfig?.global }
|
||||
global: { ...defaults.global, ...parsedConfig?.global },
|
||||
claudeCode: { ...defaults.claudeCode, ...parsedConfig?.claudeCode }
|
||||
};
|
||||
configSource = `file (${configPath})`; // Update source info
|
||||
|
||||
@@ -170,6 +176,9 @@ function _loadAndValidateConfig(explicitRoot = null) {
|
||||
config.models.fallback.provider = undefined;
|
||||
config.models.fallback.modelId = undefined;
|
||||
}
|
||||
if (config.claudeCode && !isEmpty(config.claudeCode)) {
|
||||
config.claudeCode = validateClaudeCodeSettings(config.claudeCode);
|
||||
}
|
||||
} catch (error) {
|
||||
// Use console.error for actual errors during parsing
|
||||
console.error(
|
||||
@@ -277,6 +286,83 @@ function validateProviderModelCombination(providerName, modelId) {
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates Claude Code AI provider custom settings
|
||||
* @param {object} settings The settings to validate
|
||||
* @returns {object} The validated settings
|
||||
*/
|
||||
function validateClaudeCodeSettings(settings) {
|
||||
// Define the base settings schema without commandSpecific first
|
||||
const BaseSettingsSchema = z.object({
|
||||
maxTurns: z.number().int().positive().optional(),
|
||||
customSystemPrompt: z.string().optional(),
|
||||
appendSystemPrompt: z.string().optional(),
|
||||
permissionMode: z
|
||||
.enum(['default', 'acceptEdits', 'plan', 'bypassPermissions'])
|
||||
.optional(),
|
||||
allowedTools: z.array(z.string()).optional(),
|
||||
disallowedTools: z.array(z.string()).optional(),
|
||||
mcpServers: z
|
||||
.record(
|
||||
z.string(),
|
||||
z.object({
|
||||
type: z.enum(['stdio', 'sse']).optional(),
|
||||
command: z.string(),
|
||||
args: z.array(z.string()).optional(),
|
||||
env: z.record(z.string()).optional(),
|
||||
url: z.string().url().optional(),
|
||||
headers: z.record(z.string()).optional()
|
||||
})
|
||||
)
|
||||
.optional()
|
||||
});
|
||||
|
||||
// Define CommandSpecificSchema using the base schema
|
||||
const CommandSpecificSchema = z.record(
|
||||
z.enum(AI_COMMAND_NAMES),
|
||||
BaseSettingsSchema
|
||||
);
|
||||
|
||||
// Define the full settings schema with commandSpecific
|
||||
const SettingsSchema = BaseSettingsSchema.extend({
|
||||
commandSpecific: CommandSpecificSchema.optional()
|
||||
});
|
||||
|
||||
let validatedSettings = {};
|
||||
|
||||
try {
|
||||
validatedSettings = SettingsSchema.parse(settings);
|
||||
} catch (error) {
|
||||
console.warn(
|
||||
chalk.yellow(
|
||||
`Warning: Invalid Claude Code settings in config: ${error.message}. Falling back to default.`
|
||||
)
|
||||
);
|
||||
|
||||
validatedSettings = {};
|
||||
}
|
||||
|
||||
return validatedSettings;
|
||||
}
|
||||
|
||||
// --- Claude Code Settings Getters ---
|
||||
|
||||
function getClaudeCodeSettings(explicitRoot = null, forceReload = false) {
|
||||
const config = getConfig(explicitRoot, forceReload);
|
||||
// Ensure Claude Code defaults are applied if Claude Code section is missing
|
||||
return { ...DEFAULTS.claudeCode, ...(config?.claudeCode || {}) };
|
||||
}
|
||||
|
||||
function getClaudeCodeSettingsForCommand(
|
||||
commandName,
|
||||
explicitRoot = null,
|
||||
forceReload = false
|
||||
) {
|
||||
const settings = getClaudeCodeSettings(explicitRoot, forceReload);
|
||||
const commandSpecific = settings?.commandSpecific || {};
|
||||
return { ...settings, ...commandSpecific[commandName] };
|
||||
}
|
||||
|
||||
// --- Role-Specific Getters ---
|
||||
|
||||
function getModelConfigForRole(role, explicitRoot = null) {
|
||||
@@ -424,6 +510,11 @@ function getVertexLocation(explicitRoot = null) {
|
||||
return getGlobalConfig(explicitRoot).vertexLocation || 'us-central1';
|
||||
}
|
||||
|
||||
function getResponseLanguage(explicitRoot = null) {
|
||||
// Directly return value from config
|
||||
return getGlobalConfig(explicitRoot).responseLanguage;
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets model parameters (maxTokens, temperature) for a specific role,
|
||||
* considering model-specific overrides from supported-models.json.
|
||||
@@ -500,7 +591,8 @@ function isApiKeySet(providerName, session = null, projectRoot = null) {
|
||||
// Providers that don't require API keys for authentication
|
||||
const providersWithoutApiKeys = [
|
||||
CUSTOM_PROVIDERS.OLLAMA,
|
||||
CUSTOM_PROVIDERS.BEDROCK
|
||||
CUSTOM_PROVIDERS.BEDROCK,
|
||||
CUSTOM_PROVIDERS.GEMINI_CLI
|
||||
];
|
||||
|
||||
if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
|
||||
@@ -794,15 +886,26 @@ function getBaseUrlForRole(role, explicitRoot = null) {
|
||||
return undefined;
|
||||
}
|
||||
|
||||
// Export the providers without API keys array for use in other modules
|
||||
export const providersWithoutApiKeys = [
|
||||
CUSTOM_PROVIDERS.OLLAMA,
|
||||
CUSTOM_PROVIDERS.BEDROCK,
|
||||
CUSTOM_PROVIDERS.GEMINI_CLI
|
||||
];
|
||||
|
||||
export {
|
||||
// Core config access
|
||||
getConfig,
|
||||
writeConfig,
|
||||
ConfigurationError,
|
||||
isConfigFilePresent,
|
||||
// Claude Code settings
|
||||
getClaudeCodeSettings,
|
||||
getClaudeCodeSettingsForCommand,
|
||||
// Validation
|
||||
validateProvider,
|
||||
validateProviderModelCombination,
|
||||
validateClaudeCodeSettings,
|
||||
VALIDATED_PROVIDERS,
|
||||
CUSTOM_PROVIDERS,
|
||||
ALL_PROVIDERS,
|
||||
@@ -832,6 +935,7 @@ export {
|
||||
getOllamaBaseURL,
|
||||
getAzureBaseURL,
|
||||
getBedrockBaseURL,
|
||||
getResponseLanguage,
|
||||
getParametersForRole,
|
||||
getUserId,
|
||||
// API Key Checkers (still relevant)
|
||||
|
||||
@@ -1,16 +1,64 @@
|
||||
{
|
||||
"bedrock": [
|
||||
{
|
||||
"id": "us.anthropic.claude-3-haiku-20240307-v1:0",
|
||||
"swe_score": 0.4,
|
||||
"cost_per_1m_tokens": { "input": 0.25, "output": 1.25 },
|
||||
"allowed_roles": ["main", "fallback"]
|
||||
},
|
||||
{
|
||||
"id": "us.anthropic.claude-3-opus-20240229-v1:0",
|
||||
"swe_score": 0.725,
|
||||
"cost_per_1m_tokens": { "input": 15, "output": 75 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
},
|
||||
{
|
||||
"id": "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
|
||||
"swe_score": 0.49,
|
||||
"cost_per_1m_tokens": { "input": 3, "output": 15 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
},
|
||||
{
|
||||
"id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
"swe_score": 0.49,
|
||||
"cost_per_1m_tokens": { "input": 3, "output": 15 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
},
|
||||
{
|
||||
"id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
|
||||
"swe_score": 0.623,
|
||||
"cost_per_1m_tokens": { "input": 3, "output": 15 },
|
||||
"allowed_roles": ["main", "fallback"],
|
||||
"cost_per_1m_tokens": {
|
||||
"input": 3,
|
||||
"output": 15
|
||||
},
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 65536
|
||||
},
|
||||
{
|
||||
"id": "us.anthropic.claude-3-5-haiku-20241022-v1:0",
|
||||
"swe_score": 0.4,
|
||||
"cost_per_1m_tokens": { "input": 0.8, "output": 4 },
|
||||
"allowed_roles": ["main", "fallback"]
|
||||
},
|
||||
{
|
||||
"id": "us.anthropic.claude-opus-4-20250514-v1:0",
|
||||
"swe_score": 0.725,
|
||||
"cost_per_1m_tokens": { "input": 15, "output": 75 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
},
|
||||
{
|
||||
"id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
|
||||
"swe_score": 0.727,
|
||||
"cost_per_1m_tokens": { "input": 3, "output": 15 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
},
|
||||
{
|
||||
"id": "us.deepseek.r1-v1:0",
|
||||
"swe_score": 0,
|
||||
"cost_per_1m_tokens": { "input": 1.35, "output": 5.4 },
|
||||
"cost_per_1m_tokens": {
|
||||
"input": 1.35,
|
||||
"output": 5.4
|
||||
},
|
||||
"allowed_roles": ["research"],
|
||||
"max_tokens": 65536
|
||||
}
|
||||
@@ -648,16 +696,44 @@
|
||||
{
|
||||
"id": "opus",
|
||||
"swe_score": 0.725,
|
||||
"cost_per_1m_tokens": { "input": 0, "output": 0 },
|
||||
"cost_per_1m_tokens": {
|
||||
"input": 0,
|
||||
"output": 0
|
||||
},
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 32000
|
||||
},
|
||||
{
|
||||
"id": "sonnet",
|
||||
"swe_score": 0.727,
|
||||
"cost_per_1m_tokens": { "input": 0, "output": 0 },
|
||||
"cost_per_1m_tokens": {
|
||||
"input": 0,
|
||||
"output": 0
|
||||
},
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 64000
|
||||
}
|
||||
],
|
||||
"gemini-cli": [
|
||||
{
|
||||
"id": "gemini-2.5-pro",
|
||||
"swe_score": 0.72,
|
||||
"cost_per_1m_tokens": {
|
||||
"input": 0,
|
||||
"output": 0
|
||||
},
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 65536
|
||||
},
|
||||
{
|
||||
"id": "gemini-2.5-flash",
|
||||
"swe_score": 0.71,
|
||||
"cost_per_1m_tokens": {
|
||||
"input": 0,
|
||||
"output": 0
|
||||
},
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 65536
|
||||
}
|
||||
]
|
||||
}
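Note that both `gemini-cli` entries above are listed with zero per-token cost and a 65,536-token `max_tokens` limit; per the help text added elsewhere in this changeset, such a model can be selected with, for example, `task-master models --set-main gemini-2.5-pro --gemini-cli` (illustrative invocation).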
|
||||
|
||||
@@ -23,10 +23,12 @@ import updateSubtaskById from './task-manager/update-subtask-by-id.js';
|
||||
import removeTask from './task-manager/remove-task.js';
|
||||
import taskExists from './task-manager/task-exists.js';
|
||||
import isTaskDependentOn from './task-manager/is-task-dependent.js';
|
||||
import setResponseLanguage from './task-manager/response-language.js';
|
||||
import moveTask from './task-manager/move-task.js';
|
||||
import { migrateProject } from './task-manager/migrate.js';
|
||||
import { performResearch } from './task-manager/research.js';
|
||||
import { readComplexityReport } from './utils.js';
|
||||
|
||||
// Export task manager functions
|
||||
export {
|
||||
parsePRD,
|
||||
@@ -49,6 +51,7 @@ export {
|
||||
findTaskById,
|
||||
taskExists,
|
||||
isTaskDependentOn,
|
||||
setResponseLanguage,
|
||||
moveTask,
|
||||
readComplexityReport,
|
||||
migrateProject,
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
import path from 'path';
|
||||
|
||||
import { log, readJSON, writeJSON } from '../utils.js';
|
||||
import { log, readJSON, writeJSON, getCurrentTag } from '../utils.js';
|
||||
import { isTaskDependentOn } from '../task-manager.js';
|
||||
import generateTaskFiles from './generate-task-files.js';
|
||||
|
||||
@@ -25,8 +25,10 @@ async function addSubtask(
|
||||
try {
|
||||
log('info', `Adding subtask to parent task ${parentId}...`);
|
||||
|
||||
const currentTag =
|
||||
context.tag || getCurrentTag(context.projectRoot) || 'master';
|
||||
// Read the existing tasks with proper context
|
||||
const data = readJSON(tasksPath, context.projectRoot, context.tag);
|
||||
const data = readJSON(tasksPath, context.projectRoot, currentTag);
|
||||
if (!data || !data.tasks) {
|
||||
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
|
||||
}
|
||||
@@ -137,12 +139,12 @@ async function addSubtask(
|
||||
}
|
||||
|
||||
// Write the updated tasks back to the file with proper context
|
||||
writeJSON(tasksPath, data, context.projectRoot, context.tag);
|
||||
writeJSON(tasksPath, data, context.projectRoot, currentTag);
|
||||
|
||||
// Generate task files if requested
|
||||
if (generateFiles) {
|
||||
log('info', 'Regenerating task files...');
|
||||
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), context);
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath), context);
|
||||
}
|
||||
|
||||
return newSubtask;
|
||||
|
||||
@@ -28,6 +28,7 @@ import {
|
||||
import { generateObjectService } from '../ai-services-unified.js';
|
||||
import { getDefaultPriority } from '../config-manager.js';
|
||||
import ContextGatherer from '../utils/contextGatherer.js';
|
||||
import generateTaskFiles from './generate-task-files.js';
|
||||
|
||||
// Define Zod schema for the expected AI output object
|
||||
const AiTaskDataSchema = z.object({
|
||||
@@ -553,18 +554,18 @@ async function addTask(
|
||||
report('DEBUG: Writing tasks.json...', 'debug');
|
||||
// Write the updated raw data back to the file
|
||||
// The writeJSON function will automatically filter out _rawTaggedData
|
||||
writeJSON(tasksPath, rawData);
|
||||
writeJSON(tasksPath, rawData, projectRoot, targetTag);
|
||||
report('DEBUG: tasks.json written.', 'debug');
|
||||
|
||||
// Generate markdown task files
|
||||
// report('Generating task files...', 'info');
|
||||
// report('DEBUG: Calling generateTaskFiles...', 'debug');
|
||||
// // Pass mcpLog if available to generateTaskFiles
|
||||
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
|
||||
// projectRoot,
|
||||
// tag: targetTag
|
||||
// });
|
||||
// report('DEBUG: generateTaskFiles finished.', 'debug');
|
||||
report('Generating task files...', 'info');
|
||||
report('DEBUG: Calling generateTaskFiles...', 'debug');
|
||||
// Pass mcpLog if available to generateTaskFiles
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
|
||||
projectRoot,
|
||||
tag: targetTag
|
||||
});
|
||||
report('DEBUG: generateTaskFiles finished.', 'debug');
|
||||
|
||||
// Show success message - only for text output (CLI)
|
||||
if (outputFormat === 'text') {
|
||||
|
||||
@@ -2,7 +2,13 @@ import fs from 'fs';
|
||||
import path from 'path';
|
||||
import { z } from 'zod';
|
||||
|
||||
import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';
|
||||
import {
|
||||
log,
|
||||
readJSON,
|
||||
writeJSON,
|
||||
isSilentMode,
|
||||
getTagAwareFilePath
|
||||
} from '../utils.js';
|
||||
|
||||
import {
|
||||
startLoadingIndicator,
|
||||
@@ -61,7 +67,7 @@ const subtaskWrapperSchema = z.object({
|
||||
*/
|
||||
function generateMainSystemPrompt(subtaskCount) {
|
||||
return `You are an AI assistant helping with task breakdown for software development.
|
||||
You need to break down a high-level task into ${subtaskCount} specific subtasks that can be implemented one by one.
|
||||
You need to break down a high-level task into ${subtaskCount > 0 ? subtaskCount : 'an appropriate number of'} specific subtasks that can be implemented one by one.
|
||||
|
||||
Subtasks should:
|
||||
1. Be specific and actionable implementation steps
|
||||
@@ -76,7 +82,7 @@ For each subtask, provide:
|
||||
- title: Clear, specific title
|
||||
- description: Detailed description
|
||||
- dependencies: Array of prerequisite subtask IDs (use the new sequential IDs)
|
||||
- details: Implementation details
|
||||
- details: Implementation details; the output should be a string
|
||||
- testStrategy: Optional testing approach
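The prompt changes above mean a subtask count of 0 is now treated as "let the model decide": with `subtaskCount = 0` the system prompt asks for "an appropriate number of" subtasks, while any positive value keeps the explicit count (the same pattern is applied to the user and research prompts below).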
|
||||
|
||||
|
||||
@@ -111,11 +117,11 @@ function generateMainUserPrompt(
|
||||
"details": "Implementation guidance",
|
||||
"testStrategy": "Optional testing approach"
|
||||
},
|
||||
// ... (repeat for a total of ${subtaskCount} subtasks with sequential IDs)
|
||||
// ... (repeat for ${subtaskCount ? 'a total of ' + subtaskCount : 'each of the'} subtasks with sequential IDs)
|
||||
]
|
||||
}`;
|
||||
|
||||
return `Break down this task into exactly ${subtaskCount} specific subtasks:
|
||||
return `Break down this task into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks:
|
||||
|
||||
Task ID: ${task.id}
|
||||
Title: ${task.title}
|
||||
@@ -159,7 +165,7 @@ function generateResearchUserPrompt(
|
||||
]
|
||||
}`;
|
||||
|
||||
return `Analyze the following task and break it down into exactly ${subtaskCount} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
|
||||
return `Analyze the following task and break it down into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
|
||||
|
||||
Parent Task:
|
||||
ID: ${task.id}
|
||||
@@ -497,9 +503,18 @@ async function expandTask(
|
||||
let complexityReasoningContext = '';
|
||||
let systemPrompt; // Declare systemPrompt here
|
||||
|
||||
const complexityReportPath = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
|
||||
// Use tag-aware complexity report path
|
||||
const complexityReportPath = getTagAwareFilePath(
|
||||
COMPLEXITY_REPORT_FILE,
|
||||
tag,
|
||||
projectRoot
|
||||
);
|
||||
let taskAnalysis = null;
|
||||
|
||||
logger.info(
|
||||
`Looking for complexity report at: ${complexityReportPath}${tag && tag !== 'master' ? ` (tag-specific for '${tag}')` : ''}`
|
||||
);
|
||||
|
||||
try {
|
||||
if (fs.existsSync(complexityReportPath)) {
|
||||
const complexityReport = readJSON(complexityReportPath);
|
||||
@@ -531,7 +546,7 @@ async function expandTask(
|
||||
|
||||
// Determine final subtask count
|
||||
const explicitNumSubtasks = parseInt(numSubtasks, 10);
|
||||
if (!Number.isNaN(explicitNumSubtasks) && explicitNumSubtasks > 0) {
|
||||
if (!Number.isNaN(explicitNumSubtasks) && explicitNumSubtasks >= 0) {
|
||||
finalSubtaskCount = explicitNumSubtasks;
|
||||
logger.info(
|
||||
`Using explicitly provided subtask count: ${finalSubtaskCount}`
|
||||
@@ -545,7 +560,7 @@ async function expandTask(
|
||||
finalSubtaskCount = getDefaultSubtasks(session);
|
||||
logger.info(`Using default number of subtasks: ${finalSubtaskCount}`);
|
||||
}
|
||||
if (Number.isNaN(finalSubtaskCount) || finalSubtaskCount <= 0) {
|
||||
if (Number.isNaN(finalSubtaskCount) || finalSubtaskCount < 0) {
|
||||
logger.warn(
|
||||
`Invalid subtask count determined (${finalSubtaskCount}), defaulting to 3.`
|
||||
);
|
||||
@@ -566,7 +581,7 @@ async function expandTask(
|
||||
}
|
||||
|
||||
// --- Use Simplified System Prompt for Report Prompts ---
|
||||
systemPrompt = `You are an AI assistant helping with task breakdown. Generate exactly ${finalSubtaskCount} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
|
||||
systemPrompt = `You are an AI assistant helping with task breakdown. Generate ${finalSubtaskCount > 0 ? 'exactly ' + finalSubtaskCount : 'an appropriate number of'} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
|
||||
logger.info(
|
||||
`Using expansion prompt from complexity report and simplified system prompt for task ${task.id}.`
|
||||
);
|
||||
@@ -608,7 +623,7 @@ async function expandTask(
|
||||
let loadingIndicator = null;
|
||||
if (outputFormat === 'text') {
|
||||
loadingIndicator = startLoadingIndicator(
|
||||
`Generating ${finalSubtaskCount} subtasks...\n`
|
||||
`Generating ${finalSubtaskCount || 'appropriate number of'} subtasks...\n`
|
||||
);
|
||||
}
|
||||
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import fs from 'fs';
|
||||
import chalk from 'chalk';
|
||||
|
||||
import { log, readJSON } from '../utils.js';
|
||||
|
||||
@@ -523,6 +523,24 @@ async function setModel(role, modelId, options = {}) {
|
||||
determinedProvider = CUSTOM_PROVIDERS.VERTEX;
|
||||
warningMessage = `Warning: Custom Vertex AI model '${modelId}' set. Please ensure the model is valid and accessible in your Google Cloud project.`;
|
||||
report('warn', warningMessage);
|
||||
} else if (providerHint === CUSTOM_PROVIDERS.GEMINI_CLI) {
|
||||
// Gemini CLI provider - check if model exists in our list
|
||||
determinedProvider = CUSTOM_PROVIDERS.GEMINI_CLI;
|
||||
// Re-find modelData specifically for gemini-cli provider
|
||||
const geminiCliModels = availableModels.filter(
|
||||
(m) => m.provider === 'gemini-cli'
|
||||
);
|
||||
const geminiCliModelData = geminiCliModels.find(
|
||||
(m) => m.id === modelId
|
||||
);
|
||||
if (geminiCliModelData) {
|
||||
// Update modelData to the found gemini-cli model
|
||||
modelData = geminiCliModelData;
|
||||
report('info', `Setting Gemini CLI model '${modelId}'.`);
|
||||
} else {
|
||||
warningMessage = `Warning: Gemini CLI model '${modelId}' not found in supported models. Setting without validation.`;
|
||||
report('warn', warningMessage);
|
||||
}
|
||||
} else {
|
||||
// Invalid provider hint - should not happen with our constants
|
||||
throw new Error(`Invalid provider hint received: ${providerHint}`);
|
||||
|
||||
@@ -188,7 +188,7 @@ Your task breakdown should incorporate this research, resulting in more detailed
|
||||
// Base system prompt for PRD parsing
|
||||
const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.${researchPromptAddition}
|
||||
|
||||
Analyze the provided PRD content and generate approximately ${numTasks} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
|
||||
Analyze the provided PRD content and generate ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
|
||||
Each task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.
|
||||
Assign sequential IDs starting from ${nextId}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.
|
||||
Set status to 'pending', dependencies to an empty array [], and priority to 'medium' initially for all tasks.
|
||||
@@ -207,7 +207,7 @@ Each task should follow this JSON structure:
|
||||
}
|
||||
|
||||
Guidelines:
|
||||
1. Unless complexity warrants otherwise, create exactly ${numTasks} tasks, numbered sequentially starting from ${nextId}
|
||||
1. ${numTasks > 0 ? 'Unless complexity warrants otherwise' : 'Depending on the complexity'}, create ${numTasks > 0 ? 'exactly ' + numTasks : 'an appropriate number of'} tasks, numbered sequentially starting from ${nextId}
|
||||
2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards
|
||||
3. Order tasks logically - consider dependencies and implementation sequence
|
||||
4. Early tasks should focus on setup, core functionality first, then advanced features
|
||||
@@ -220,7 +220,7 @@ Guidelines:
|
||||
11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches${research ? '\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research' : ''}`;
|
||||
|
||||
// Build user prompt with PRD content
|
||||
const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
|
||||
const userPrompt = `Here's the Product Requirements Document (PRD) to break down into ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
|
||||
|
||||
Return your response in this format:
|
||||
{
|
||||
@@ -235,7 +235,7 @@ Guidelines:
|
||||
],
|
||||
"metadata": {
|
||||
"projectName": "PRD Implementation",
|
||||
"totalTasks": ${numTasks},
|
||||
"totalTasks": {number of tasks},
|
||||
"sourceFile": "${prdPath}",
|
||||
"generatedAt": "YYYY-MM-DD"
|
||||
}
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
import { log, readJSON, writeJSON } from '../utils.js';
|
||||
import * as fs from 'fs';
|
||||
import { readJSON, writeJSON, log, findTaskById } from '../utils.js';
|
||||
import generateTaskFiles from './generate-task-files.js';
|
||||
import taskExists from './task-exists.js';
|
||||
|
||||
@@ -172,7 +171,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
|
||||
}
|
||||
|
||||
// Save the updated raw data structure
|
||||
writeJSON(tasksPath, fullTaggedData);
|
||||
writeJSON(tasksPath, fullTaggedData, projectRoot, currentTag);
|
||||
|
||||
// Delete task files AFTER saving tasks.json
|
||||
for (const taskIdNum of tasksToDeleteFiles) {
|
||||
@@ -195,10 +194,10 @@ async function removeTask(tasksPath, taskIds, context = {}) {
|
||||
|
||||
// Generate updated task files ONCE, with context
|
||||
try {
|
||||
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
|
||||
// projectRoot,
|
||||
// tag: currentTag
|
||||
// });
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
|
||||
projectRoot,
|
||||
tag: currentTag
|
||||
});
|
||||
results.messages.push('Task files regenerated successfully.');
|
||||
} catch (genError) {
|
||||
const genErrMsg = `Failed to regenerate task files: ${genError.message}`;
|
||||
|
||||
89
scripts/modules/task-manager/response-language.js
Normal file
@@ -0,0 +1,89 @@
import {
	getConfig,
	isConfigFilePresent,
	writeConfig
} from '../config-manager.js';
import { findConfigPath } from '../../../src/utils/path-utils.js';
import { log } from '../utils.js';

function setResponseLanguage(lang, options = {}) {
	const { mcpLog, projectRoot } = options;

	const report = (level, ...args) => {
		if (mcpLog && typeof mcpLog[level] === 'function') {
			mcpLog[level](...args);
		}
	};

	// Use centralized config path finding instead of hardcoded path
	const configPath = findConfigPath(null, { projectRoot });
	const configExists = isConfigFilePresent(projectRoot);

	log(
		'debug',
		`Checking for config file using findConfigPath, found: ${configPath}`
	);
	log(
		'debug',
		`Checking config file using isConfigFilePresent(), exists: ${configExists}`
	);

	if (!configExists) {
		return {
			success: false,
			error: {
				code: 'CONFIG_MISSING',
				message:
					'The configuration file is missing. Run "task-master models --setup" to create it.'
			}
		};
	}

	// Validate response language
	if (typeof lang !== 'string' || lang.trim() === '') {
		return {
			success: false,
			error: {
				code: 'INVALID_RESPONSE_LANGUAGE',
				message: `Invalid response language: ${lang}. Must be a non-empty string.`
			}
		};
	}

	try {
		const currentConfig = getConfig(projectRoot);
		currentConfig.global.responseLanguage = lang;
		const writeResult = writeConfig(currentConfig, projectRoot);

		if (!writeResult) {
			return {
				success: false,
				error: {
					code: 'WRITE_ERROR',
					message: 'Error writing updated configuration to configuration file'
				}
			};
		}

		const successMessage = `Successfully set response language to: ${lang}`;
		report('info', successMessage);
		return {
			success: true,
			data: {
				responseLanguage: lang,
				message: successMessage
			}
		};
	} catch (error) {
		report('error', `Error setting response language: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'SET_RESPONSE_LANGUAGE_ERROR',
				message: error.message
			}
		};
	}
}

export default setResponseLanguage;
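A minimal usage sketch (not part of the diff) showing how a caller might consume the result object returned by the function above; the project root path is a placeholder:

```js
// Illustrative consumer of setResponseLanguage(); mirrors the success/error
// shapes returned by the function defined above.
import setResponseLanguage from './scripts/modules/task-manager/response-language.js';

const result = setResponseLanguage('English', {
	projectRoot: '/path/to/project' // placeholder; normally resolved via findProjectRoot()
});

if (result.success) {
	console.log(result.data.message); // e.g. "Successfully set response language to: English"
} else {
	console.error(`${result.error.code}: ${result.error.message}`);
}
```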
|
||||
@@ -132,7 +132,7 @@ async function setTaskStatus(

// Write the updated raw data back to the file
// The writeJSON function will automatically filter out _rawTaggedData
writeJSON(tasksPath, rawData);
writeJSON(tasksPath, rawData, options.projectRoot, currentTag);

// Validate dependencies after status update
log('info', 'Validating dependencies after status update...');

@@ -145,8 +145,8 @@ async function createTag(
}
}

// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);

logFn.success(`Successfully created tag "${tagName}"`);

@@ -365,8 +365,8 @@ async function deleteTag(
}
}

// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);

logFn.success(`Successfully deleted tag "${tagName}"`);

@@ -485,7 +485,7 @@ async function enhanceTagsWithMetadata(tasksPath, rawData, context = {}) {
cleanData[key] = value;
}
}
writeJSON(tasksPath, cleanData);
writeJSON(tasksPath, cleanData, context.projectRoot);
}
} catch (error) {
// Don't throw - just log and continue

@@ -905,8 +905,8 @@ async function renameTag(
}
}

// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);

// Get task count
const tasks = getTasksForTag(rawData, newName);

@@ -1062,8 +1062,8 @@ async function copyTag(
}
}

// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);

logFn.success(
`Successfully copied tag from "${sourceName}" to "${targetName}"`

@@ -9,7 +9,8 @@ import {
readJSON,
writeJSON,
truncate,
isSilentMode
isSilentMode,
getCurrentTag
} from '../utils.js';

import {

@@ -222,6 +223,7 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
* @param {Object} [context.session] - Session object from MCP server.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [outputFormat='text'] - Output format ('text' or 'json').
* @param {string} [tag=null] - Tag associated with the tasks.
*/
async function updateTasks(
tasksPath,

@@ -231,7 +233,7 @@ async function updateTasks(
context = {},
outputFormat = 'text' // Default to text for CLI
) {
const { session, mcpLog, projectRoot: providedProjectRoot } = context;
const { session, mcpLog, projectRoot: providedProjectRoot, tag } = context;
// Use mcpLog if available, otherwise use the imported consoleLog function
const logFn = mcpLog || consoleLog;
// Flag to easily check which logger type we have

@@ -255,8 +257,11 @@ async function updateTasks(
throw new Error('Could not determine project root directory');
}

// --- Task Loading/Filtering (Unchanged) ---
const data = readJSON(tasksPath, projectRoot);
// Determine the current tag - prioritize explicit tag, then context.tag, then current tag
const currentTag = tag || getCurrentTag(projectRoot) || 'master';

// --- Task Loading/Filtering (Updated to pass projectRoot and tag) ---
const data = readJSON(tasksPath, projectRoot, currentTag);
if (!data || !data.tasks)
throw new Error(`No valid tasks found in ${tasksPath}`);
const tasksToUpdate = data.tasks.filter(

@@ -428,7 +433,7 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
isMCP
);

// --- Update Tasks Data (Unchanged) ---
// --- Update Tasks Data (Updated writeJSON call) ---
if (!Array.isArray(parsedUpdatedTasks)) {
// Should be caught by parser, but extra check
throw new Error(

@@ -467,7 +472,8 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
`Applied updates to ${actualUpdateCount} tasks in the dataset.`
);

writeJSON(tasksPath, data);
// Fix: Pass projectRoot and currentTag to writeJSON
writeJSON(tasksPath, data, projectRoot, currentTag);
if (isMCP)
logFn.info(
`Successfully updated ${actualUpdateCount} tasks in ${tasksPath}`
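The hunks above converge on one pattern: resolve the working tag once, then pass projectRoot and that tag to readJSON/writeJSON. A small sketch of the resolution order in isolation (the import path is illustrative):

import { getCurrentTag } from './scripts/modules/utils.js'; // illustrative path

// Same precedence as the updateTasks hunk: explicit tag, then the project's
// currently active tag, then the 'master' fallback.
function resolveWorkingTag(tag, projectRoot) {
	return tag || getCurrentTag(projectRoot) || 'master';
}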
scripts/modules/update-config-tokens.js (new file, 57 lines)
@@ -0,0 +1,57 @@
/**
 * update-config-tokens.js
 * Updates config.json with correct maxTokens values from supported-models.json
 */

import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { dirname } from 'path';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

/**
 * Updates the config file with correct maxTokens values from supported-models.json
 * @param {string} configPath - Path to the config.json file to update
 * @returns {boolean} True if successful, false otherwise
 */
export function updateConfigMaxTokens(configPath) {
	try {
		// Load supported models
		const supportedModelsPath = path.join(__dirname, 'supported-models.json');
		const supportedModels = JSON.parse(
			fs.readFileSync(supportedModelsPath, 'utf-8')
		);

		// Load config
		const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));

		// Update each role's maxTokens if the model exists in supported-models.json
		const roles = ['main', 'research', 'fallback'];

		for (const role of roles) {
			if (config.models && config.models[role]) {
				const provider = config.models[role].provider;
				const modelId = config.models[role].modelId;

				// Find the model in supported models
				if (supportedModels[provider]) {
					const modelData = supportedModels[provider].find(
						(m) => m.id === modelId
					);
					if (modelData && modelData.max_tokens) {
						config.models[role].maxTokens = modelData.max_tokens;
					}
				}
			}
		}

		// Write back the updated config
		fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
		return true;
	} catch (error) {
		console.error('Error updating config maxTokens:', error.message);
		return false;
	}
}
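A brief usage sketch for the new helper (the config path below is illustrative; point it at the project's actual config.json):

import { updateConfigMaxTokens } from './scripts/modules/update-config-tokens.js'; // illustrative path

const ok = updateConfigMaxTokens('.taskmaster/config.json'); // path is an assumption
if (!ok) {
	console.error('Could not sync maxTokens from supported-models.json');
}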
@@ -64,6 +64,51 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {
	return undefined;
}

// --- Tag-Aware Path Resolution Utility ---

/**
 * Slugifies a tag name to be filesystem-safe
 * @param {string} tagName - The tag name to slugify
 * @returns {string} Slugified tag name safe for filesystem use
 */
function slugifyTagForFilePath(tagName) {
	if (!tagName || typeof tagName !== 'string') {
		return 'unknown-tag';
	}

	// Replace invalid filesystem characters with hyphens and clean up
	return tagName
		.replace(/[^a-zA-Z0-9_-]/g, '-') // Replace invalid chars with hyphens
		.replace(/^-+|-+$/g, '') // Remove leading/trailing hyphens
		.replace(/-+/g, '-') // Collapse multiple hyphens
		.toLowerCase() // Convert to lowercase
		.substring(0, 50); // Limit length to prevent overly long filenames
}

/**
 * Resolves a file path to be tag-aware, following the pattern used by other commands.
 * For non-master tags, appends _slugified-tagname before the file extension.
 * @param {string} basePath - The base file path (e.g., '.taskmaster/reports/task-complexity-report.json')
 * @param {string|null} tag - The tag name (null, undefined, or 'master' uses base path)
 * @param {string} [projectRoot='.'] - The project root directory
 * @returns {string} The resolved file path
 */
function getTagAwareFilePath(basePath, tag, projectRoot = '.') {
	// Use path.parse and format for clean tag insertion
	const parsedPath = path.parse(basePath);
	if (!tag || tag === 'master') {
		return path.join(projectRoot, basePath);
	}

	// Slugify the tag for filesystem safety
	const slugifiedTag = slugifyTagForFilePath(tag);

	// Append slugified tag before file extension
	parsedPath.base = `${parsedPath.name}_${slugifiedTag}${parsedPath.ext}`;
	const relativePath = path.format(parsedPath);
	return path.join(projectRoot, relativePath);
}

// --- Project Root Finding Utility ---
/**
 * Recursively searches upwards for project root starting from a given directory.

@@ -967,6 +1012,21 @@ function truncate(text, maxLength) {
	return `${text.slice(0, maxLength - 3)}...`;
}

/**
 * Checks if array or object are empty
 * @param {*} value - The value to check
 * @returns {boolean} True if empty, false otherwise
 */
function isEmpty(value) {
	if (Array.isArray(value)) {
		return value.length === 0;
	} else if (typeof value === 'object' && value !== null) {
		return Object.keys(value).length === 0;
	}

	return false; // Not an array or object, or is null
}

/**
 * Find cycles in a dependency graph using DFS
 * @param {string} subtaskId - Current subtask ID

@@ -1328,6 +1388,7 @@ export {
	formatTaskId,
	findTaskById,
	truncate,
	isEmpty,
	findCycles,
	toKebabCase,
	detectCamelCaseFlags,

@@ -1338,6 +1399,8 @@ export {
	addComplexityToTask,
	resolveEnvVariable,
	findProjectRoot,
	getTagAwareFilePath,
	slugifyTagForFilePath,
	aggregateTelemetry,
	getCurrentTag,
	resolveTag,
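To make the new path convention concrete, a worked sketch of what the helper above returns (the tag name and project root are illustrative; the outputs follow directly from the code):

import { getTagAwareFilePath } from './scripts/modules/utils.js'; // illustrative path

const report = '.taskmaster/reports/task-complexity-report.json';

getTagAwareFilePath(report, 'master', '/repo');
// -> '/repo/.taskmaster/reports/task-complexity-report.json' (master keeps the base path)

getTagAwareFilePath(report, 'Feature/Dashboard!', '/repo');
// -> '/repo/.taskmaster/reports/task-complexity-report_feature-dashboard.json' (tag is slugified)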
@@ -1,4 +1,4 @@
import { generateText, streamText, generateObject } from 'ai';
import { generateObject, generateText, streamText } from 'ai';
import { log } from '../../scripts/modules/index.js';

/**

@@ -109,7 +109,7 @@ export class BaseAIProvider {
`Generating ${this.name} text with model: ${params.modelId}`
);

const client = this.getClient(params);
const client = await this.getClient(params);
const result = await generateText({
model: client(params.modelId),
messages: params.messages,

@@ -145,7 +145,7 @@ export class BaseAIProvider {

log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);

const client = this.getClient(params);
const client = await this.getClient(params);
const stream = await streamText({
model: client(params.modelId),
messages: params.messages,

@@ -184,7 +184,7 @@ export class BaseAIProvider {
`Generating ${this.name} object ('${params.objectName}') with model: ${params.modelId}`
);

const client = this.getClient(params);
const client = await this.getClient(params);
const result = await generateObject({
model: client(params.modelId),
messages: params.messages,

@@ -7,6 +7,7 @@

import { createClaudeCode } from './custom-sdk/claude-code/index.js';
import { BaseAIProvider } from './base-provider.js';
import { getClaudeCodeSettingsForCommand } from '../../scripts/modules/config-manager.js';

export class ClaudeCodeProvider extends BaseAIProvider {
constructor() {

@@ -26,6 +27,7 @@ export class ClaudeCodeProvider extends BaseAIProvider {
/**
 * Creates and returns a Claude Code client instance.
 * @param {object} params - Parameters for client initialization
 * @param {string} [params.commandName] - Name of the command invoking the service
 * @param {string} [params.baseURL] - Optional custom API endpoint (not used by Claude Code)
 * @returns {Function} Claude Code client function
 * @throws {Error} If initialization fails

@@ -35,10 +37,7 @@ export class ClaudeCodeProvider extends BaseAIProvider {
// Claude Code doesn't use API keys or base URLs
// Just return the provider factory
return createClaudeCode({
defaultSettings: {
// Add any default settings if needed
// These can be overridden per request
}
defaultSettings: getClaudeCodeSettingsForCommand(params?.commandName)
});
} catch (error) {
this.handleError('client initialization', error);
src/ai-providers/gemini-cli.js (new file, 656 lines)
@@ -0,0 +1,656 @@
/**
 * src/ai-providers/gemini-cli.js
 *
 * Implementation for interacting with Gemini models via Gemini CLI
 * using the ai-sdk-provider-gemini-cli package.
 */

import { generateObject, generateText, streamText } from 'ai';
import { parse } from 'jsonc-parser';
import { BaseAIProvider } from './base-provider.js';
import { log } from '../../scripts/modules/index.js';

let createGeminiProvider;

async function loadGeminiCliModule() {
	if (!createGeminiProvider) {
		try {
			const mod = await import('ai-sdk-provider-gemini-cli');
			createGeminiProvider = mod.createGeminiProvider;
		} catch (err) {
			throw new Error(
				"Gemini CLI SDK is not installed. Please install 'ai-sdk-provider-gemini-cli' to use the gemini-cli provider."
			);
		}
	}
}

export class GeminiCliProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Gemini CLI';
	}

	/**
	 * Override validateAuth to handle Gemini CLI authentication options
	 * @param {object} params - Parameters to validate
	 */
	validateAuth(params) {
		// Gemini CLI is designed to use pre-configured OAuth authentication
		// Users choose gemini-cli specifically to leverage their existing
		// gemini auth login credentials, not to use API keys.
		// We support API keys for compatibility, but the expected usage
		// is through CLI authentication (no API key required).
		// No validation needed - the SDK will handle auth internally
	}

	/**
	 * Creates and returns a Gemini CLI client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} [params.apiKey] - Optional Gemini API key (rarely used with gemini-cli)
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Promise<Function>} Gemini CLI client function
	 * @throws {Error} If initialization fails
	 */
	async getClient(params) {
		try {
			// Load the Gemini CLI module dynamically
			await loadGeminiCliModule();
			// Primary use case: Use existing gemini CLI authentication
			// Secondary use case: Direct API key (for compatibility)
			let authOptions = {};

			if (params.apiKey && params.apiKey !== 'gemini-cli-no-key-required') {
				// API key provided - use it for compatibility
				authOptions = {
					authType: 'api-key',
					apiKey: params.apiKey
				};
			} else {
				// Expected case: Use gemini CLI authentication
				// Requires: gemini auth login (pre-configured)
				authOptions = {
					authType: 'oauth-personal'
				};
			}

			// Add baseURL if provided (for custom endpoints)
			if (params.baseURL) {
				authOptions.baseURL = params.baseURL;
			}

			// Create and return the provider
			return createGeminiProvider(authOptions);
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}
|
||||
|
||||
/**
|
||||
* Extracts system messages from the messages array and returns them separately.
|
||||
* This is needed because ai-sdk-provider-gemini-cli expects system prompts as a separate parameter.
|
||||
* @param {Array} messages - Array of message objects
|
||||
* @param {Object} options - Options for system prompt enhancement
|
||||
* @param {boolean} options.enforceJsonOutput - Whether to add JSON enforcement to system prompt
|
||||
* @returns {Object} - {systemPrompt: string|undefined, messages: Array}
|
||||
*/
|
||||
_extractSystemMessage(messages, options = {}) {
|
||||
if (!messages || !Array.isArray(messages)) {
|
||||
return { systemPrompt: undefined, messages: messages || [] };
|
||||
}
|
||||
|
||||
const systemMessages = messages.filter((msg) => msg.role === 'system');
|
||||
const nonSystemMessages = messages.filter((msg) => msg.role !== 'system');
|
||||
|
||||
// Combine multiple system messages if present
|
||||
let systemPrompt =
|
||||
systemMessages.length > 0
|
||||
? systemMessages.map((msg) => msg.content).join('\n\n')
|
||||
: undefined;
|
||||
|
||||
// Add Gemini CLI specific JSON enforcement if requested
|
||||
if (options.enforceJsonOutput) {
|
||||
const jsonEnforcement = this._getJsonEnforcementPrompt();
|
||||
systemPrompt = systemPrompt
|
||||
? `${systemPrompt}\n\n${jsonEnforcement}`
|
||||
: jsonEnforcement;
|
||||
}
|
||||
|
||||
return { systemPrompt, messages: nonSystemMessages };
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets a Gemini CLI specific system prompt to enforce strict JSON output
|
||||
* @returns {string} JSON enforcement system prompt
|
||||
*/
|
||||
_getJsonEnforcementPrompt() {
|
||||
return `CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown formatting, code block markers, or conversational phrases like "Here is" or "Of course". Your entire response must be parseable JSON that starts with { or [ and ends with } or ]. No exceptions.`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if a string is valid JSON
|
||||
* @param {string} text - Text to validate
|
||||
* @returns {boolean} True if valid JSON
|
||||
*/
|
||||
_isValidJson(text) {
|
||||
if (!text || typeof text !== 'string') {
|
||||
return false;
|
||||
}
|
||||
|
||||
try {
|
||||
JSON.parse(text.trim());
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Detects if the user prompt is requesting JSON output
|
||||
* @param {Array} messages - Array of message objects
|
||||
* @returns {boolean} True if JSON output is likely expected
|
||||
*/
|
||||
_detectJsonRequest(messages) {
|
||||
const userMessages = messages.filter((msg) => msg.role === 'user');
|
||||
const combinedText = userMessages
|
||||
.map((msg) => msg.content)
|
||||
.join(' ')
|
||||
.toLowerCase();
|
||||
|
||||
// Look for indicators that JSON output is expected
|
||||
const jsonIndicators = [
|
||||
'json',
|
||||
'respond only with',
|
||||
'return only',
|
||||
'output only',
|
||||
'format:',
|
||||
'structure:',
|
||||
'schema:',
|
||||
'{"',
|
||||
'[{',
|
||||
'subtasks',
|
||||
'array',
|
||||
'object'
|
||||
];
|
||||
|
||||
return jsonIndicators.some((indicator) => combinedText.includes(indicator));
|
||||
}
|
||||
|
||||
/**
|
||||
* Simplifies complex prompts for gemini-cli to improve JSON output compliance
|
||||
* @param {Array} messages - Array of message objects
|
||||
* @returns {Array} Simplified messages array
|
||||
*/
|
||||
_simplifyJsonPrompts(messages) {
|
||||
// First, check if this is an expand-task operation by looking at the system message
|
||||
const systemMsg = messages.find((m) => m.role === 'system');
|
||||
const isExpandTask =
|
||||
systemMsg &&
|
||||
systemMsg.content.includes(
|
||||
'You are an AI assistant helping with task breakdown. Generate exactly'
|
||||
);
|
||||
|
||||
if (!isExpandTask) {
|
||||
return messages; // Not an expand task, return unchanged
|
||||
}
|
||||
|
||||
// Extract subtask count from system message
|
||||
const subtaskCountMatch = systemMsg.content.match(
|
||||
/Generate exactly (\d+) subtasks/
|
||||
);
|
||||
const subtaskCount = subtaskCountMatch ? subtaskCountMatch[1] : '10';
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} detected expand-task operation, simplifying for ${subtaskCount} subtasks`
|
||||
);
|
||||
|
||||
return messages.map((msg) => {
|
||||
if (msg.role !== 'user') {
|
||||
return msg;
|
||||
}
|
||||
|
||||
// For expand-task user messages, create a much simpler, more direct prompt
|
||||
// that doesn't depend on specific task content
|
||||
const simplifiedPrompt = `Generate exactly ${subtaskCount} subtasks in the following JSON format.
|
||||
|
||||
CRITICAL INSTRUCTION: You must respond with ONLY valid JSON. No explanatory text, no "Here is", no "Of course", no markdown - just the JSON object.
|
||||
|
||||
Required JSON structure:
|
||||
{
|
||||
"subtasks": [
|
||||
{
|
||||
"id": 1,
|
||||
"title": "Specific actionable task title",
|
||||
"description": "Clear task description",
|
||||
"dependencies": [],
|
||||
"details": "Implementation details and guidance",
|
||||
"testStrategy": "Testing approach"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
Generate ${subtaskCount} subtasks based on the original task context. Return ONLY the JSON object.`;
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} simplified user prompt for better JSON compliance`
|
||||
);
|
||||
return { ...msg, content: simplifiedPrompt };
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract JSON from Gemini's response using a tolerant parser.
|
||||
*
|
||||
* Optimized approach that progressively tries different parsing strategies:
|
||||
* 1. Direct parsing after cleanup
|
||||
* 2. Smart boundary detection with single-pass analysis
|
||||
* 3. Limited character-by-character fallback for edge cases
|
||||
*
|
||||
* @param {string} text - Raw text which may contain JSON
|
||||
* @returns {string} A valid JSON string if extraction succeeds, otherwise the original text
|
||||
*/
|
||||
extractJson(text) {
|
||||
if (!text || typeof text !== 'string') {
|
||||
return text;
|
||||
}
|
||||
|
||||
let content = text.trim();
|
||||
|
||||
// Early exit for very short content
|
||||
if (content.length < 2) {
|
||||
return text;
|
||||
}
|
||||
|
||||
// Strip common wrappers in a single pass
|
||||
content = content
|
||||
// Remove markdown fences
|
||||
.replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
|
||||
// Remove variable declarations
|
||||
.replace(/^\s*(?:const|let|var)\s+\w+\s*=\s*([\s\S]*?)(?:;|\s*)$/i, '$1')
|
||||
// Remove common prefixes
|
||||
.replace(/^(?:Here's|The)\s+(?:the\s+)?JSON.*?[:]\s*/i, '')
|
||||
.trim();
|
||||
|
||||
// Find the first JSON-like structure
|
||||
const firstObj = content.indexOf('{');
|
||||
const firstArr = content.indexOf('[');
|
||||
|
||||
if (firstObj === -1 && firstArr === -1) {
|
||||
return text;
|
||||
}
|
||||
|
||||
const start =
|
||||
firstArr === -1
|
||||
? firstObj
|
||||
: firstObj === -1
|
||||
? firstArr
|
||||
: Math.min(firstObj, firstArr);
|
||||
content = content.slice(start);
|
||||
|
||||
// Optimized parsing function with error collection
|
||||
const tryParse = (value) => {
|
||||
if (!value || value.length < 2) return undefined;
|
||||
|
||||
const errors = [];
|
||||
try {
|
||||
const result = parse(value, errors, {
|
||||
allowTrailingComma: true,
|
||||
allowEmptyContent: false
|
||||
});
|
||||
if (errors.length === 0 && result !== undefined) {
|
||||
return JSON.stringify(result, null, 2);
|
||||
}
|
||||
} catch {
|
||||
// Parsing failed completely
|
||||
}
|
||||
return undefined;
|
||||
};
|
||||
|
||||
// Try parsing the full content first
|
||||
const fullParse = tryParse(content);
|
||||
if (fullParse !== undefined) {
|
||||
return fullParse;
|
||||
}
|
||||
|
||||
// Smart boundary detection - single pass with optimizations
|
||||
const openChar = content[0];
|
||||
const closeChar = openChar === '{' ? '}' : ']';
|
||||
|
||||
let depth = 0;
|
||||
let inString = false;
|
||||
let escapeNext = false;
|
||||
let lastValidEnd = -1;
|
||||
|
||||
// Single-pass boundary detection with early termination
|
||||
for (let i = 0; i < content.length && i < 10000; i++) {
|
||||
// Limit scan for performance
|
||||
const char = content[i];
|
||||
|
||||
if (escapeNext) {
|
||||
escapeNext = false;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (char === '\\') {
|
||||
escapeNext = true;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (char === '"') {
|
||||
inString = !inString;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (inString) continue;
|
||||
|
||||
if (char === openChar) {
|
||||
depth++;
|
||||
} else if (char === closeChar) {
|
||||
depth--;
|
||||
if (depth === 0) {
|
||||
lastValidEnd = i + 1;
|
||||
// Try parsing immediately on first valid boundary
|
||||
const candidate = content.slice(0, lastValidEnd);
|
||||
const parsed = tryParse(candidate);
|
||||
if (parsed !== undefined) {
|
||||
return parsed;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If we found valid boundaries but parsing failed, try limited fallback
|
||||
if (lastValidEnd > 0) {
|
||||
const maxAttempts = Math.min(5, Math.floor(lastValidEnd / 100)); // Limit attempts
|
||||
for (let i = 0; i < maxAttempts; i++) {
|
||||
const testEnd = Math.max(
|
||||
lastValidEnd - i * 50,
|
||||
Math.floor(lastValidEnd * 0.8)
|
||||
);
|
||||
const candidate = content.slice(0, testEnd);
|
||||
const parsed = tryParse(candidate);
|
||||
if (parsed !== undefined) {
|
||||
return parsed;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return text;
|
||||
}
|
||||
|
||||
/**
|
||||
* Generates text using Gemini CLI model
|
||||
* Overrides base implementation to properly handle system messages and enforce JSON output when needed
|
||||
*/
|
||||
async generateText(params) {
|
||||
try {
|
||||
this.validateParams(params);
|
||||
this.validateMessages(params.messages);
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`Generating ${this.name} text with model: ${params.modelId}`
|
||||
);
|
||||
|
||||
// Detect if JSON output is expected and enforce it for better gemini-cli compatibility
|
||||
const enforceJsonOutput = this._detectJsonRequest(params.messages);
|
||||
|
||||
// Debug logging to understand what's happening
|
||||
log('debug', `${this.name} JSON detection analysis:`, {
|
||||
enforceJsonOutput,
|
||||
messageCount: params.messages.length,
|
||||
messages: params.messages.map((msg) => ({
|
||||
role: msg.role,
|
||||
contentPreview: msg.content
|
||||
? msg.content.substring(0, 200) + '...'
|
||||
: 'empty'
|
||||
}))
|
||||
});
|
||||
|
||||
if (enforceJsonOutput) {
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} detected JSON request - applying strict JSON enforcement system prompt`
|
||||
);
|
||||
}
|
||||
|
||||
// For gemini-cli, simplify complex prompts before processing
|
||||
let processedMessages = params.messages;
|
||||
if (enforceJsonOutput) {
|
||||
processedMessages = this._simplifyJsonPrompts(params.messages);
|
||||
}
|
||||
|
||||
// Extract system messages for separate handling with optional JSON enforcement
|
||||
const { systemPrompt, messages } = this._extractSystemMessage(
|
||||
processedMessages,
|
||||
{ enforceJsonOutput }
|
||||
);
|
||||
|
||||
// Debug the final system prompt being sent
|
||||
log('debug', `${this.name} final system prompt:`, {
|
||||
systemPromptLength: systemPrompt ? systemPrompt.length : 0,
|
||||
systemPromptPreview: systemPrompt
|
||||
? systemPrompt.substring(0, 300) + '...'
|
||||
: 'none',
|
||||
finalMessageCount: messages.length
|
||||
});
|
||||
|
||||
const client = await this.getClient(params);
|
||||
const result = await generateText({
|
||||
model: client(params.modelId),
|
||||
system: systemPrompt,
|
||||
messages: messages,
|
||||
maxTokens: params.maxTokens,
|
||||
temperature: params.temperature
|
||||
});
|
||||
|
||||
// If we detected a JSON request and gemini-cli returned conversational text,
|
||||
// attempt to extract JSON from the response
|
||||
let finalText = result.text;
|
||||
if (enforceJsonOutput && result.text && !this._isValidJson(result.text)) {
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} response appears conversational, attempting JSON extraction`
|
||||
);
|
||||
|
||||
// Log first 1000 chars of the response to see what Gemini actually returned
|
||||
log('debug', `${this.name} raw response preview:`, {
|
||||
responseLength: result.text.length,
|
||||
responseStart: result.text.substring(0, 1000)
|
||||
});
|
||||
|
||||
const extractedJson = this.extractJson(result.text);
|
||||
if (this._isValidJson(extractedJson)) {
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} successfully extracted JSON from conversational response`
|
||||
);
|
||||
finalText = extractedJson;
|
||||
} else {
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} JSON extraction failed, returning original response`
|
||||
);
|
||||
|
||||
// Log what extraction returned to debug why it failed
|
||||
log('debug', `${this.name} extraction result preview:`, {
|
||||
extractedLength: extractedJson ? extractedJson.length : 0,
|
||||
extractedStart: extractedJson
|
||||
? extractedJson.substring(0, 500)
|
||||
: 'null'
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} generateText completed successfully for model: ${params.modelId}`
|
||||
);
|
||||
|
||||
return {
|
||||
text: finalText,
|
||||
usage: {
|
||||
inputTokens: result.usage?.promptTokens,
|
||||
outputTokens: result.usage?.completionTokens,
|
||||
totalTokens: result.usage?.totalTokens
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
this.handleError('text generation', error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Streams text using Gemini CLI model
|
||||
* Overrides base implementation to properly handle system messages and enforce JSON output when needed
|
||||
*/
|
||||
async streamText(params) {
|
||||
try {
|
||||
this.validateParams(params);
|
||||
this.validateMessages(params.messages);
|
||||
|
||||
log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);
|
||||
|
||||
// Detect if JSON output is expected and enforce it for better gemini-cli compatibility
|
||||
const enforceJsonOutput = this._detectJsonRequest(params.messages);
|
||||
|
||||
// Debug logging to understand what's happening
|
||||
log('debug', `${this.name} JSON detection analysis:`, {
|
||||
enforceJsonOutput,
|
||||
messageCount: params.messages.length,
|
||||
messages: params.messages.map((msg) => ({
|
||||
role: msg.role,
|
||||
contentPreview: msg.content
|
||||
? msg.content.substring(0, 200) + '...'
|
||||
: 'empty'
|
||||
}))
|
||||
});
|
||||
|
||||
if (enforceJsonOutput) {
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} detected JSON request - applying strict JSON enforcement system prompt`
|
||||
);
|
||||
}
|
||||
|
||||
// Extract system messages for separate handling with optional JSON enforcement
|
||||
const { systemPrompt, messages } = this._extractSystemMessage(
|
||||
params.messages,
|
||||
{ enforceJsonOutput }
|
||||
);
|
||||
|
||||
const client = await this.getClient(params);
|
||||
const stream = await streamText({
|
||||
model: client(params.modelId),
|
||||
system: systemPrompt,
|
||||
messages: messages,
|
||||
maxTokens: params.maxTokens,
|
||||
temperature: params.temperature
|
||||
});
|
||||
|
||||
log(
|
||||
'debug',
|
||||
`${this.name} streamText initiated successfully for model: ${params.modelId}`
|
||||
);
|
||||
|
||||
// Note: For streaming, we can't intercept and modify the response in real-time
|
||||
// The JSON extraction would need to happen on the consuming side
|
||||
return stream;
|
||||
} catch (error) {
|
||||
this.handleError('text streaming', error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Generates a structured object using Gemini CLI model
|
||||
* Overrides base implementation to handle Gemini-specific JSON formatting issues and system messages
|
||||
*/
|
||||
async generateObject(params) {
|
||||
try {
|
||||
// First try the standard generateObject from base class
|
||||
return await super.generateObject(params);
|
||||
} catch (error) {
|
||||
// If it's a JSON parsing error, try to extract and parse JSON manually
|
||||
if (error.message?.includes('JSON') || error.message?.includes('parse')) {
|
||||
log(
|
||||
'debug',
|
||||
`Gemini CLI generateObject failed with parsing error, attempting manual extraction`
|
||||
);
|
||||
|
||||
try {
|
||||
// Validate params first
|
||||
this.validateParams(params);
|
||||
this.validateMessages(params.messages);
|
||||
|
||||
if (!params.schema) {
|
||||
throw new Error('Schema is required for object generation');
|
||||
}
|
||||
if (!params.objectName) {
|
||||
throw new Error('Object name is required for object generation');
|
||||
}
|
||||
|
||||
// Extract system messages for separate handling with JSON enforcement
|
||||
const { systemPrompt, messages } = this._extractSystemMessage(
|
||||
params.messages,
|
||||
{ enforceJsonOutput: true }
|
||||
);
|
||||
|
||||
// Call generateObject directly with our client
|
||||
const client = await this.getClient(params);
|
||||
const result = await generateObject({
|
||||
model: client(params.modelId),
|
||||
system: systemPrompt,
|
||||
messages: messages,
|
||||
schema: params.schema,
|
||||
mode: 'json', // Use json mode instead of auto for Gemini
|
||||
maxTokens: params.maxTokens,
|
||||
temperature: params.temperature
|
||||
});
|
||||
|
||||
// If we get rawResponse text, try to extract JSON from it
|
||||
if (result.rawResponse?.text && !result.object) {
|
||||
const extractedJson = this.extractJson(result.rawResponse.text);
|
||||
try {
|
||||
result.object = JSON.parse(extractedJson);
|
||||
} catch (parseError) {
|
||||
log(
|
||||
'error',
|
||||
`Failed to parse extracted JSON: ${parseError.message}`
|
||||
);
|
||||
log(
|
||||
'debug',
|
||||
`Extracted JSON: ${extractedJson.substring(0, 500)}...`
|
||||
);
|
||||
throw new Error(
|
||||
`Gemini CLI returned invalid JSON that could not be parsed: ${parseError.message}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
object: result.object,
|
||||
usage: {
|
||||
inputTokens: result.usage?.promptTokens,
|
||||
outputTokens: result.usage?.completionTokens,
|
||||
totalTokens: result.usage?.totalTokens
|
||||
}
|
||||
};
|
||||
} catch (retryError) {
|
||||
log(
|
||||
'error',
|
||||
`Gemini CLI manual JSON extraction failed: ${retryError.message}`
|
||||
);
|
||||
// Re-throw the original error with more context
|
||||
throw new Error(
|
||||
`${this.name} failed to generate valid JSON object: ${error.message}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// For non-parsing errors, just re-throw
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -14,3 +14,4 @@ export { BedrockAIProvider } from './bedrock.js';
export { AzureProvider } from './azure.js';
export { VertexAIProvider } from './google-vertex.js';
export { ClaudeCodeProvider } from './claude-code.js';
export { GeminiCliProvider } from './gemini-cli.js';
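With the provider exported above, it can be constructed and used like the other providers; a minimal sketch (the import path and environment variable name are illustrative, and the OAuth branch assumes `gemini auth login` has already been run):

import { GeminiCliProvider } from './src/ai-providers/gemini-cli.js'; // illustrative path

const provider = new GeminiCliProvider();

// No API key: getClient() falls back to { authType: 'oauth-personal' },
// reusing credentials from a prior `gemini auth login`.
const oauthClient = await provider.getClient({});

// Compatibility path: an explicit key switches the client to { authType: 'api-key' }.
const keyedClient = await provider.getClient({ apiKey: process.env.GEMINI_API_KEY });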
src/constants/commands.js (new file, 17 lines)
@@ -0,0 +1,17 @@
/**
 * Command related constants
 * Defines which commands trigger AI processing
 */

// Command names that trigger AI processing
export const AI_COMMAND_NAMES = [
	'add-task',
	'analyze-complexity',
	'expand-task',
	'parse-prd',
	'research',
	'research-save',
	'update-subtask',
	'update-task',
	'update-tasks'
];
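The constant above is presumably consulted wherever Taskmaster needs to know whether a command will invoke an AI provider; that consumer is not part of this diff, so the helper below is only a hypothetical sketch of such a check:

import { AI_COMMAND_NAMES } from './src/constants/commands.js'; // path as created above

// Hypothetical helper; the real consumer of AI_COMMAND_NAMES is not shown in this diff.
function isAiCommand(commandName) {
	return AI_COMMAND_NAMES.includes(commandName);
}

// isAiCommand('expand-task') === true; isAiCommand('list') === false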
@@ -20,7 +20,8 @@ export const CUSTOM_PROVIDERS = {
	BEDROCK: 'bedrock',
	OPENROUTER: 'openrouter',
	OLLAMA: 'ollama',
	CLAUDE_CODE: 'claude-code'
	CLAUDE_CODE: 'claude-code',
	GEMINI_CLI: 'gemini-cli'
};

// Custom providers array (for backward compatibility and iteration)

@@ -25,7 +25,7 @@ function formatJSONWithTabs(obj) {
|
||||
}
|
||||
|
||||
// Structure matches project conventions (see scripts/init.js)
|
||||
export function setupMCPConfiguration(projectDir, mcpConfigPath) {
|
||||
export function setupMCPConfiguration(projectRoot, mcpConfigPath) {
|
||||
// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
|
||||
if (!mcpConfigPath) {
|
||||
log(
|
||||
@@ -36,7 +36,7 @@ export function setupMCPConfiguration(projectDir, mcpConfigPath) {
|
||||
}
|
||||
|
||||
// Build the full path to the MCP config file
|
||||
const mcpPath = path.join(projectDir, mcpConfigPath);
|
||||
const mcpPath = path.join(projectRoot, mcpConfigPath);
|
||||
const configDir = path.dirname(mcpPath);
|
||||
|
||||
log('info', `Setting up MCP configuration at ${mcpPath}...`);
|
||||
@@ -140,11 +140,11 @@ export function setupMCPConfiguration(projectDir, mcpConfigPath) {
|
||||
/**
|
||||
* Remove Task Master MCP server configuration from an existing mcp.json file
|
||||
* Only removes Task Master entries, preserving other MCP servers
|
||||
* @param {string} projectDir - Target project directory
|
||||
* @param {string} projectRoot - Target project directory
|
||||
* @param {string} mcpConfigPath - Relative path to MCP config file (e.g., '.cursor/mcp.json')
|
||||
* @returns {Object} Result object with success status and details
|
||||
*/
|
||||
export function removeTaskMasterMCPConfiguration(projectDir, mcpConfigPath) {
|
||||
export function removeTaskMasterMCPConfiguration(projectRoot, mcpConfigPath) {
|
||||
// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
|
||||
if (!mcpConfigPath) {
|
||||
return {
|
||||
@@ -156,7 +156,7 @@ export function removeTaskMasterMCPConfiguration(projectDir, mcpConfigPath) {
|
||||
};
|
||||
}
|
||||
|
||||
const mcpPath = path.join(projectDir, mcpConfigPath);
|
||||
const mcpPath = path.join(projectRoot, mcpConfigPath);
|
||||
|
||||
let result = {
|
||||
success: false,
|
||||
|
||||
@@ -170,7 +170,7 @@ function validateInputs(targetPath, content, storeTasksInGit) {
|
||||
*/
|
||||
function createNewGitignoreFile(targetPath, templateLines, log) {
|
||||
try {
|
||||
fs.writeFileSync(targetPath, templateLines.join('\n'));
|
||||
fs.writeFileSync(targetPath, templateLines.join('\n') + '\n');
|
||||
if (typeof log === 'function') {
|
||||
log('success', `Created ${targetPath} with full template`);
|
||||
}
|
||||
@@ -223,7 +223,7 @@ function mergeWithExistingFile(
|
||||
finalLines.push(...buildTaskFilesSection(storeTasksInGit));
|
||||
|
||||
// Write result
|
||||
fs.writeFileSync(targetPath, finalLines.join('\n'));
|
||||
fs.writeFileSync(targetPath, finalLines.join('\n') + '\n');
|
||||
|
||||
if (typeof log === 'function') {
|
||||
const hasNewContent =
|
||||
|
||||
@@ -25,6 +25,9 @@ import { getLoggerOrDefault } from './logger-utils.js';
|
||||
export function normalizeProjectRoot(projectRoot) {
|
||||
if (!projectRoot) return projectRoot;
|
||||
|
||||
// Ensure it's a string
|
||||
projectRoot = String(projectRoot);
|
||||
|
||||
// Split the path into segments
|
||||
const segments = projectRoot.split(path.sep);
|
||||
|
||||
|
||||
@@ -198,7 +198,7 @@ export function convertRuleToProfileRule(sourcePath, targetPath, profile) {
|
||||
/**
|
||||
* Convert all Cursor rules to profile rules for a specific profile
|
||||
*/
|
||||
export function convertAllRulesToProfileRules(projectDir, profile) {
|
||||
export function convertAllRulesToProfileRules(projectRoot, profile) {
|
||||
// Handle simple profiles (Claude, Codex) that just copy files to root
|
||||
const isSimpleProfile = Object.keys(profile.fileMap).length === 0;
|
||||
if (isSimpleProfile) {
|
||||
@@ -208,7 +208,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
|
||||
const assetsDir = path.join(__dirname, '..', '..', 'assets');
|
||||
|
||||
if (typeof profile.onPostConvertRulesProfile === 'function') {
|
||||
profile.onPostConvertRulesProfile(projectDir, assetsDir);
|
||||
profile.onPostConvertRulesProfile(projectRoot, assetsDir);
|
||||
}
|
||||
return { success: 1, failed: 0 };
|
||||
}
|
||||
@@ -216,7 +216,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = path.dirname(__filename);
|
||||
const sourceDir = path.join(__dirname, '..', '..', 'assets', 'rules');
|
||||
const targetDir = path.join(projectDir, profile.rulesDir);
|
||||
const targetDir = path.join(projectRoot, profile.rulesDir);
|
||||
|
||||
// Ensure target directory exists
|
||||
if (!fs.existsSync(targetDir)) {
|
||||
@@ -225,7 +225,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
|
||||
|
||||
// Setup MCP configuration if enabled
|
||||
if (profile.mcpConfig !== false) {
|
||||
setupMCPConfiguration(projectDir, profile.mcpConfigPath);
|
||||
setupMCPConfiguration(projectRoot, profile.mcpConfigPath);
|
||||
}
|
||||
|
||||
let success = 0;
|
||||
@@ -286,7 +286,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
|
||||
// Call post-processing hook if defined (e.g., for Roo's rules-*mode* folders)
|
||||
if (typeof profile.onPostConvertRulesProfile === 'function') {
|
||||
const assetsDir = path.join(__dirname, '..', '..', 'assets');
|
||||
profile.onPostConvertRulesProfile(projectDir, assetsDir);
|
||||
profile.onPostConvertRulesProfile(projectRoot, assetsDir);
|
||||
}
|
||||
|
||||
return { success, failed };
|
||||
@@ -294,13 +294,13 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
|
||||
|
||||
/**
|
||||
* Remove only Task Master specific files from a profile, leaving other existing rules intact
|
||||
* @param {string} projectDir - Target project directory
|
||||
* @param {string} projectRoot - Target project directory
|
||||
* @param {Object} profile - Profile configuration
|
||||
* @returns {Object} Result object
|
||||
*/
|
||||
export function removeProfileRules(projectDir, profile) {
|
||||
const targetDir = path.join(projectDir, profile.rulesDir);
|
||||
const profileDir = path.join(projectDir, profile.profileDir);
|
||||
export function removeProfileRules(projectRoot, profile) {
|
||||
const targetDir = path.join(projectRoot, profile.rulesDir);
|
||||
const profileDir = path.join(projectRoot, profile.profileDir);
|
||||
|
||||
const result = {
|
||||
profileName: profile.profileName,
|
||||
@@ -320,12 +320,12 @@ export function removeProfileRules(projectDir, profile) {
|
||||
if (isSimpleProfile) {
|
||||
// For simple profiles, just call their removal hook and return
|
||||
if (typeof profile.onRemoveRulesProfile === 'function') {
|
||||
profile.onRemoveRulesProfile(projectDir);
|
||||
profile.onRemoveRulesProfile(projectRoot);
|
||||
}
|
||||
result.success = true;
|
||||
log(
|
||||
'debug',
|
||||
`[Rule Transformer] Successfully removed ${profile.profileName} files from ${projectDir}`
|
||||
`[Rule Transformer] Successfully removed ${profile.profileName} files from ${projectRoot}`
|
||||
);
|
||||
return result;
|
||||
}
|
||||
@@ -418,7 +418,7 @@ export function removeProfileRules(projectDir, profile) {
|
||||
// 2. Handle MCP configuration - only remove Task Master, preserve other servers
|
||||
if (profile.mcpConfig !== false) {
|
||||
result.mcpResult = removeTaskMasterMCPConfiguration(
|
||||
projectDir,
|
||||
projectRoot,
|
||||
profile.mcpConfigPath
|
||||
);
|
||||
if (result.mcpResult.hasOtherServers) {
|
||||
@@ -432,7 +432,7 @@ export function removeProfileRules(projectDir, profile) {
|
||||
|
||||
// 3. Call removal hook if defined (e.g., Roo's custom cleanup)
|
||||
if (typeof profile.onRemoveRulesProfile === 'function') {
|
||||
profile.onRemoveRulesProfile(projectDir);
|
||||
profile.onRemoveRulesProfile(projectRoot);
|
||||
}
|
||||
|
||||
// 4. Only remove profile directory if:
|
||||
@@ -490,7 +490,7 @@ export function removeProfileRules(projectDir, profile) {
|
||||
result.success = true;
|
||||
log(
|
||||
'debug',
|
||||
`[Rule Transformer] Successfully removed ${profile.profileName} Task Master files from ${projectDir}`
|
||||
`[Rule Transformer] Successfully removed ${profile.profileName} Task Master files from ${projectRoot}`
|
||||
);
|
||||
} catch (error) {
|
||||
result.error = error.message;
|
||||
|
||||
tests/unit/ai-providers/gemini-cli.test.js (new file, 649 lines)
@@ -0,0 +1,649 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock the ai module
|
||||
jest.unstable_mockModule('ai', () => ({
|
||||
generateObject: jest.fn(),
|
||||
generateText: jest.fn(),
|
||||
streamText: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock the gemini-cli SDK module
|
||||
jest.unstable_mockModule('ai-sdk-provider-gemini-cli', () => ({
|
||||
createGeminiProvider: jest.fn((options) => {
|
||||
const provider = (modelId, settings) => ({
|
||||
// Mock language model
|
||||
id: modelId,
|
||||
settings,
|
||||
authOptions: options
|
||||
});
|
||||
provider.languageModel = jest.fn((id, settings) => ({ id, settings }));
|
||||
provider.chat = provider.languageModel;
|
||||
return provider;
|
||||
})
|
||||
}));
|
||||
|
||||
// Mock the base provider
|
||||
jest.unstable_mockModule('../../../src/ai-providers/base-provider.js', () => ({
|
||||
BaseAIProvider: class {
|
||||
constructor() {
|
||||
this.name = 'Base Provider';
|
||||
}
|
||||
handleError(context, error) {
|
||||
throw error;
|
||||
}
|
||||
validateParams(params) {
|
||||
// Basic validation
|
||||
if (!params.modelId) {
|
||||
throw new Error('Model ID is required');
|
||||
}
|
||||
}
|
||||
validateMessages(messages) {
|
||||
if (!messages || !Array.isArray(messages)) {
|
||||
throw new Error('Invalid messages array');
|
||||
}
|
||||
}
|
||||
async generateObject(params) {
|
||||
// Mock implementation that can be overridden
|
||||
throw new Error('Mock base generateObject error');
|
||||
}
|
||||
}
|
||||
}));
|
||||
|
||||
// Mock the log module
|
||||
jest.unstable_mockModule('../../../scripts/modules/index.js', () => ({
|
||||
log: jest.fn()
|
||||
}));
|
||||
|
||||
// Import after mocking
|
||||
const { GeminiCliProvider } = await import(
|
||||
'../../../src/ai-providers/gemini-cli.js'
|
||||
);
|
||||
const { createGeminiProvider } = await import('ai-sdk-provider-gemini-cli');
|
||||
const { generateObject, generateText, streamText } = await import('ai');
|
||||
const { log } = await import('../../../scripts/modules/index.js');
|
||||
|
||||
describe('GeminiCliProvider', () => {
|
||||
let provider;
|
||||
let consoleLogSpy;
|
||||
|
||||
beforeEach(() => {
|
||||
provider = new GeminiCliProvider();
|
||||
jest.clearAllMocks();
|
||||
consoleLogSpy = jest.spyOn(console, 'log').mockImplementation();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
consoleLogSpy.mockRestore();
|
||||
});
|
||||
|
||||
describe('constructor', () => {
|
||||
it('should set the provider name to Gemini CLI', () => {
|
||||
expect(provider.name).toBe('Gemini CLI');
|
||||
});
|
||||
});
|
||||
|
||||
describe('validateAuth', () => {
|
||||
it('should not throw an error when API key is provided', () => {
|
||||
expect(() => provider.validateAuth({ apiKey: 'test-key' })).not.toThrow();
|
||||
expect(consoleLogSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should not require API key and should not log messages', () => {
|
||||
expect(() => provider.validateAuth({})).not.toThrow();
|
||||
expect(consoleLogSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should not require any parameters', () => {
|
||||
expect(() => provider.validateAuth()).not.toThrow();
|
||||
expect(consoleLogSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('getClient', () => {
|
||||
it('should return a gemini client with API key auth when apiKey is provided', async () => {
|
||||
const client = await provider.getClient({ apiKey: 'test-api-key' });
|
||||
|
||||
expect(client).toBeDefined();
|
||||
expect(typeof client).toBe('function');
|
||||
expect(createGeminiProvider).toHaveBeenCalledWith({
|
||||
authType: 'api-key',
|
||||
apiKey: 'test-api-key'
|
||||
});
|
||||
});
|
||||
|
||||
it('should return a gemini client with OAuth auth when no apiKey is provided', async () => {
|
||||
const client = await provider.getClient({});
|
||||
|
||||
expect(client).toBeDefined();
|
||||
expect(typeof client).toBe('function');
|
||||
expect(createGeminiProvider).toHaveBeenCalledWith({
|
||||
authType: 'oauth-personal'
|
||||
});
|
||||
});
|
||||
|
||||
it('should include baseURL when provided', async () => {
|
||||
const client = await provider.getClient({
|
||||
apiKey: 'test-key',
|
||||
baseURL: 'https://custom-endpoint.com'
|
||||
});
|
||||
|
||||
expect(client).toBeDefined();
|
||||
expect(createGeminiProvider).toHaveBeenCalledWith({
|
||||
authType: 'api-key',
|
||||
apiKey: 'test-key',
|
||||
baseURL: 'https://custom-endpoint.com'
|
||||
});
|
||||
});
|
||||
|
||||
it('should have languageModel and chat methods', async () => {
|
||||
const client = await provider.getClient({ apiKey: 'test-key' });
|
||||
expect(client.languageModel).toBeDefined();
|
||||
expect(client.chat).toBeDefined();
|
||||
expect(client.chat).toBe(client.languageModel);
|
||||
});
|
||||
});
|
||||
|
||||
describe('_extractSystemMessage', () => {
|
||||
it('should extract single system message', () => {
|
||||
const messages = [
|
||||
{ role: 'system', content: 'You are a helpful assistant' },
|
||||
{ role: 'user', content: 'Hello' }
|
||||
];
|
||||
const result = provider._extractSystemMessage(messages);
|
||||
expect(result.systemPrompt).toBe('You are a helpful assistant');
|
||||
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
|
||||
});
|
||||
|
||||
it('should combine multiple system messages', () => {
|
||||
const messages = [
|
||||
{ role: 'system', content: 'You are helpful' },
|
||||
{ role: 'system', content: 'Be concise' },
|
||||
{ role: 'user', content: 'Hello' }
|
||||
];
|
||||
const result = provider._extractSystemMessage(messages);
|
||||
expect(result.systemPrompt).toBe('You are helpful\n\nBe concise');
|
||||
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
|
||||
});
|
||||
|
||||
it('should handle messages without system prompts', () => {
|
||||
const messages = [
|
||||
{ role: 'user', content: 'Hello' },
|
||||
{ role: 'assistant', content: 'Hi there' }
|
||||
];
|
||||
const result = provider._extractSystemMessage(messages);
|
||||
expect(result.systemPrompt).toBeUndefined();
|
||||
expect(result.messages).toEqual(messages);
|
||||
});
|
||||
|
||||
it('should handle empty or invalid input', () => {
|
||||
expect(provider._extractSystemMessage([])).toEqual({
|
||||
systemPrompt: undefined,
|
||||
messages: []
|
||||
});
|
||||
expect(provider._extractSystemMessage(null)).toEqual({
|
||||
systemPrompt: undefined,
|
||||
messages: []
|
||||
});
|
||||
expect(provider._extractSystemMessage(undefined)).toEqual({
|
||||
systemPrompt: undefined,
|
||||
messages: []
|
||||
});
|
||||
});
|
||||
|
||||
it('should add JSON enforcement when enforceJsonOutput is true', () => {
|
||||
const messages = [
|
||||
{ role: 'system', content: 'You are a helpful assistant' },
|
||||
{ role: 'user', content: 'Hello' }
|
||||
];
|
||||
const result = provider._extractSystemMessage(messages, {
|
||||
enforceJsonOutput: true
|
||||
});
|
||||
expect(result.systemPrompt).toContain('You are a helpful assistant');
|
||||
expect(result.systemPrompt).toContain(
|
||||
'CRITICAL: You MUST respond with ONLY valid JSON'
|
||||
);
|
||||
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
|
||||
});
|
||||
|
||||
it('should add JSON enforcement with no existing system message', () => {
|
||||
const messages = [{ role: 'user', content: 'Return JSON format' }];
|
||||
const result = provider._extractSystemMessage(messages, {
|
||||
enforceJsonOutput: true
|
||||
});
|
||||
expect(result.systemPrompt).toBe(
|
||||
'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown formatting, code block markers, or conversational phrases like "Here is" or "Of course". Your entire response must be parseable JSON that starts with { or [ and ends with } or ]. No exceptions.'
|
||||
);
|
||||
expect(result.messages).toEqual([
|
||||
{ role: 'user', content: 'Return JSON format' }
|
||||
]);
|
||||
});
|
||||
});
|
||||
|
||||
describe('_detectJsonRequest', () => {
|
||||
it('should detect JSON requests from user messages', () => {
|
||||
const messages = [
|
||||
{
|
||||
role: 'user',
|
||||
content: 'Please return JSON format with subtasks array'
|
||||
}
|
||||
];
|
||||
expect(provider._detectJsonRequest(messages)).toBe(true);
|
||||
});
|
||||
|
||||
it('should detect various JSON indicators', () => {
|
||||
const testCases = [
|
||||
'respond only with valid JSON',
|
||||
'return JSON format',
|
||||
'output schema: {"test": true}',
|
||||
'format: [{"id": 1}]',
|
||||
'Please return subtasks in array format',
|
||||
'Return an object with properties'
|
||||
];
|
||||
|
||||
testCases.forEach((content) => {
|
||||
const messages = [{ role: 'user', content }];
|
||||
expect(provider._detectJsonRequest(messages)).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
it('should not detect JSON requests for regular conversation', () => {
|
||||
const messages = [{ role: 'user', content: 'Hello, how are you today?' }];
|
||||
expect(provider._detectJsonRequest(messages)).toBe(false);
|
||||
});
|
||||
|
||||
it('should handle multiple user messages', () => {
|
||||
const messages = [
|
||||
{ role: 'user', content: 'Hello' },
|
||||
{ role: 'assistant', content: 'Hi there' },
|
||||
{ role: 'user', content: 'Now please return JSON format' }
|
||||
];
|
||||
expect(provider._detectJsonRequest(messages)).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('_getJsonEnforcementPrompt', () => {
|
||||
it('should return strict JSON enforcement prompt', () => {
|
||||
const prompt = provider._getJsonEnforcementPrompt();
|
||||
expect(prompt).toContain('CRITICAL');
|
||||
expect(prompt).toContain('ONLY valid JSON');
|
||||
expect(prompt).toContain('No exceptions');
|
||||
});
|
||||
});
|
||||
|
||||
describe('_isValidJson', () => {
|
||||
it('should return true for valid JSON objects', () => {
|
||||
expect(provider._isValidJson('{"test": true}')).toBe(true);
|
||||
expect(provider._isValidJson('{"subtasks": [{"id": 1}]}')).toBe(true);
|
||||
});
|
||||
|
||||
it('should return true for valid JSON arrays', () => {
|
||||
expect(provider._isValidJson('[1, 2, 3]')).toBe(true);
|
||||
expect(provider._isValidJson('[{"id": 1}, {"id": 2}]')).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false for invalid JSON', () => {
|
||||
expect(provider._isValidJson('Of course. Here is...')).toBe(false);
|
||||
expect(provider._isValidJson('{"invalid": json}')).toBe(false);
|
||||
expect(provider._isValidJson('not json at all')).toBe(false);
|
||||
});
|
||||
|
||||
it('should handle edge cases', () => {
|
||||
expect(provider._isValidJson('')).toBe(false);
|
||||
expect(provider._isValidJson(null)).toBe(false);
|
||||
expect(provider._isValidJson(undefined)).toBe(false);
|
||||
expect(provider._isValidJson(' {"test": true} ')).toBe(true); // with whitespace
|
||||
});
|
||||
});
|
||||
|
||||
describe('extractJson', () => {
|
||||
it('should extract JSON from markdown code blocks', () => {
|
||||
const input = '```json\n{"subtasks": [{"id": 1}]}\n```';
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
|
||||
});
|
||||
|
||||
it('should extract JSON with explanatory text', () => {
|
||||
const input = 'Here\'s the JSON response:\n{"subtasks": [{"id": 1}]}';
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
|
||||
});
|
||||
|
||||
it('should handle variable declarations', () => {
|
||||
const input = 'const result = {"subtasks": [{"id": 1}]};';
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
|
||||
});
|
||||
|
||||
it('should handle trailing commas with jsonc-parser', () => {
|
||||
const input = '{"subtasks": [{"id": 1,}],}';
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
|
||||
});
|
||||
|
||||
it('should handle arrays', () => {
|
||||
const input = 'The result is: [1, 2, 3]';
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual([1, 2, 3]);
|
||||
});
|
||||
|
||||
it('should handle nested objects with proper bracket matching', () => {
|
||||
const input =
|
||||
'Response: {"outer": {"inner": {"value": "test"}}} extra text';
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual({ outer: { inner: { value: 'test' } } });
|
||||
});
|
||||
|
||||
it('should handle escaped quotes in strings', () => {
|
||||
const input = '{"message": "He said \\"hello\\" to me"}';
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual({ message: 'He said "hello" to me' });
|
||||
});
|
||||
|
||||
it('should return original text if no JSON found', () => {
|
||||
const input = 'No JSON here';
|
||||
expect(provider.extractJson(input)).toBe(input);
|
||||
});
|
||||
|
||||
it('should handle null or non-string input', () => {
|
||||
expect(provider.extractJson(null)).toBe(null);
|
||||
expect(provider.extractJson(undefined)).toBe(undefined);
|
||||
expect(provider.extractJson(123)).toBe(123);
|
||||
});
|
||||
|
||||
it('should handle partial JSON by finding valid boundaries', () => {
|
||||
const input = '{"valid": true, "partial": "incomplete';
|
||||
// Should return original text since no valid JSON can be extracted
|
||||
expect(provider.extractJson(input)).toBe(input);
|
||||
});
|
||||
|
||||
it('should handle performance edge cases with large text', () => {
|
||||
// Test with large text that has JSON at the end
|
||||
const largePrefix = 'This is a very long explanation. '.repeat(1000);
|
||||
const json = '{"result": "success"}';
|
||||
const input = largePrefix + json;
|
||||
|
||||
const result = provider.extractJson(input);
|
||||
const parsed = JSON.parse(result);
|
||||
expect(parsed).toEqual({ result: 'success' });
|
||||
});
|
||||
|
||||
it('should handle early termination for very large invalid content', () => {
|
||||
// Test that it doesn't hang on very large content without JSON
|
||||
const largeText = 'No JSON here. '.repeat(2000);
|
||||
const result = provider.extractJson(largeText);
|
||||
expect(result).toBe(largeText);
|
||||
});
|
||||
});
|
||||
|
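The extractJson cases above pin down fairly specific behaviour: strip markdown code fences, tolerate explanatory prefixes and variable declarations, accept trailing commas, and fall back to returning the input untouched. A minimal sketch of that behaviour is shown below; the function name and the regex-based trailing-comma cleanup are illustrative assumptions (the real provider reportedly relies on jsonc-parser), not the provider's actual implementation.

function extractJsonSketch(text) {
	if (typeof text !== 'string') return text;

	// Drop markdown code fences and a leading variable declaration, if present.
	const candidate = text
		.replace(/```(?:json)?/g, '')
		.replace(/^\s*(?:const|let|var)\s+\w+\s*=\s*/m, '');

	// Start at the first '{' or '[' and shrink the slice until it parses.
	const start = candidate.search(/[{[]/);
	if (start === -1) return text;

	for (let end = candidate.length; end > start; end--) {
		// Tolerate trailing commas the way a jsonc parser would.
		const slice = candidate.slice(start, end).replace(/,\s*([}\]])/g, '$1');
		try {
			JSON.parse(slice);
			return slice;
		} catch {
			// Not valid yet; keep shrinking toward a valid JSON boundary.
		}
	}
	return text; // Nothing parseable was found; hand back the original input.
}

// extractJsonSketch('Here is the JSON:\n{"subtasks": [{"id": 1}]}') -> '{"subtasks": [{"id": 1}]}'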
||||
describe('generateObject', () => {
|
||||
const mockParams = {
|
||||
modelId: 'gemini-2.0-flash-exp',
|
||||
apiKey: 'test-key',
|
||||
messages: [{ role: 'user', content: 'Test message' }],
|
||||
schema: { type: 'object', properties: {} },
|
||||
objectName: 'testObject'
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
});
|
||||
|
||||
it('should handle JSON parsing errors by attempting manual extraction', async () => {
|
||||
// Mock the parent generateObject to throw a JSON parsing error
|
||||
jest
|
||||
.spyOn(
|
||||
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
|
||||
'generateObject'
|
||||
)
|
||||
.mockRejectedValueOnce(new Error('Failed to parse JSON response'));
|
||||
|
||||
// Mock generateObject from ai module to return text with JSON
|
||||
generateObject.mockResolvedValueOnce({
|
||||
rawResponse: {
|
||||
text: 'Here is the JSON:\n```json\n{"subtasks": [{"id": 1}]}\n```'
|
||||
},
|
||||
object: null,
|
||||
usage: { promptTokens: 10, completionTokens: 20, totalTokens: 30 }
|
||||
});
|
||||
|
||||
const result = await provider.generateObject(mockParams);
|
||||
|
||||
expect(log).toHaveBeenCalledWith(
|
||||
'debug',
|
||||
expect.stringContaining('attempting manual extraction')
|
||||
);
|
||||
expect(generateObject).toHaveBeenCalledWith({
|
||||
model: expect.objectContaining({
|
||||
id: 'gemini-2.0-flash-exp',
|
||||
authOptions: expect.objectContaining({
|
||||
authType: 'api-key',
|
||||
apiKey: 'test-key'
|
||||
})
|
||||
}),
|
||||
messages: mockParams.messages,
|
||||
schema: mockParams.schema,
|
||||
mode: 'json', // Should use json mode for Gemini
|
||||
system: expect.stringContaining(
|
||||
'CRITICAL: You MUST respond with ONLY valid JSON'
|
||||
),
|
||||
maxTokens: undefined,
|
||||
temperature: undefined
|
||||
});
|
||||
expect(result.object).toEqual({ subtasks: [{ id: 1 }] });
|
||||
});
|
||||
|
||||
it('should throw error if manual extraction also fails', async () => {
|
||||
// Mock parent to throw JSON error
|
||||
jest
|
||||
.spyOn(
|
||||
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
|
||||
'generateObject'
|
||||
)
|
||||
.mockRejectedValueOnce(new Error('Failed to parse JSON'));
|
||||
|
||||
// Mock generateObject to return unparseable text
|
||||
generateObject.mockResolvedValueOnce({
|
||||
rawResponse: { text: 'Not valid JSON at all' },
|
||||
object: null
|
||||
});
|
||||
|
||||
await expect(provider.generateObject(mockParams)).rejects.toThrow(
|
||||
'Gemini CLI failed to generate valid JSON object: Failed to parse JSON'
|
||||
);
|
||||
});
|
||||
|
||||
it('should pass through non-JSON errors unchanged', async () => {
|
||||
const otherError = new Error('Network error');
|
||||
jest
|
||||
.spyOn(
|
||||
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
|
||||
'generateObject'
|
||||
)
|
||||
.mockRejectedValueOnce(otherError);
|
||||
|
||||
await expect(provider.generateObject(mockParams)).rejects.toThrow(
|
||||
'Network error'
|
||||
);
|
||||
expect(generateObject).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should handle successful response from parent', async () => {
|
||||
const mockResult = {
|
||||
object: { test: 'data' },
|
||||
usage: { inputTokens: 5, outputTokens: 10, totalTokens: 15 }
|
||||
};
|
||||
jest
|
||||
.spyOn(
|
||||
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
|
||||
'generateObject'
|
||||
)
|
||||
.mockResolvedValueOnce(mockResult);
|
||||
|
||||
const result = await provider.generateObject(mockParams);
|
||||
expect(result).toEqual(mockResult);
|
||||
expect(generateObject).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
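Taken together, the generateObject tests describe a two-step flow: delegate to the parent implementation first, and only on a JSON-parse failure retry through the AI SDK's generateObject in JSON mode, then salvage the object from the raw text. A rough sketch under those assumptions follows; the parentGenerateObject delegate and the exact system string are illustrative, not the provider's real internals.

async function generateObjectWithFallback(provider, params, generateObject) {
	try {
		// Happy path: the base class already returns a parsed object.
		return await provider.parentGenerateObject(params); // assumed delegate
	} catch (error) {
		// Anything that is not a JSON-parsing failure is passed through untouched.
		if (!/json/i.test(error.message)) throw error;

		try {
			const result = await generateObject({
				model: await provider.getClient(params),
				messages: params.messages,
				schema: params.schema,
				mode: 'json',
				system: 'CRITICAL: You MUST respond with ONLY valid JSON.'
			});
			const raw = result.rawResponse?.text ?? '';
			return { ...result, object: JSON.parse(provider.extractJson(raw)) };
		} catch {
			throw new Error(
				`Gemini CLI failed to generate valid JSON object: ${error.message}`
			);
		}
	}
}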
||||
describe('system message support', () => {
|
||||
const mockParams = {
|
||||
modelId: 'gemini-2.0-flash-exp',
|
||||
apiKey: 'test-key',
|
||||
messages: [
|
||||
{ role: 'system', content: 'You are a helpful assistant' },
|
||||
{ role: 'user', content: 'Hello' }
|
||||
],
|
||||
maxTokens: 100,
|
||||
temperature: 0.7
|
||||
};
|
||||
|
||||
describe('generateText with system messages', () => {
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
});
|
||||
|
||||
it('should pass system prompt separately to AI SDK', async () => {
|
||||
const { generateText } = await import('ai');
|
||||
generateText.mockResolvedValueOnce({
|
||||
text: 'Hello! How can I help you?',
|
||||
usage: { promptTokens: 10, completionTokens: 8, totalTokens: 18 }
|
||||
});
|
||||
|
||||
const result = await provider.generateText(mockParams);
|
||||
|
||||
expect(generateText).toHaveBeenCalledWith({
|
||||
model: expect.objectContaining({
|
||||
id: 'gemini-2.0-flash-exp'
|
||||
}),
|
||||
system: 'You are a helpful assistant',
|
||||
messages: [{ role: 'user', content: 'Hello' }],
|
||||
maxTokens: 100,
|
||||
temperature: 0.7
|
||||
});
|
||||
expect(result.text).toBe('Hello! How can I help you?');
|
||||
});
|
||||
|
||||
it('should handle messages without system prompt', async () => {
|
||||
const { generateText } = await import('ai');
|
||||
const paramsNoSystem = {
|
||||
...mockParams,
|
||||
messages: [{ role: 'user', content: 'Hello' }]
|
||||
};
|
||||
|
||||
generateText.mockResolvedValueOnce({
|
||||
text: 'Hi there!',
|
||||
usage: { promptTokens: 5, completionTokens: 3, totalTokens: 8 }
|
||||
});
|
||||
|
||||
await provider.generateText(paramsNoSystem);
|
||||
|
||||
expect(generateText).toHaveBeenCalledWith({
|
||||
model: expect.objectContaining({
|
||||
id: 'gemini-2.0-flash-exp'
|
||||
}),
|
||||
system: undefined,
|
||||
messages: [{ role: 'user', content: 'Hello' }],
|
||||
maxTokens: 100,
|
||||
temperature: 0.7
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('streamText with system messages', () => {
|
||||
it('should pass system prompt separately to AI SDK', async () => {
|
||||
const { streamText } = await import('ai');
|
||||
const mockStream = { stream: 'mock-stream' };
|
||||
streamText.mockResolvedValueOnce(mockStream);
|
||||
|
||||
const result = await provider.streamText(mockParams);
|
||||
|
||||
expect(streamText).toHaveBeenCalledWith({
|
||||
model: expect.objectContaining({
|
||||
id: 'gemini-2.0-flash-exp'
|
||||
}),
|
||||
system: 'You are a helpful assistant',
|
||||
messages: [{ role: 'user', content: 'Hello' }],
|
||||
maxTokens: 100,
|
||||
temperature: 0.7
|
||||
});
|
||||
expect(result).toBe(mockStream);
|
||||
});
|
||||
});
|
||||
|
||||
describe('generateObject with system messages', () => {
|
||||
const mockObjectParams = {
|
||||
...mockParams,
|
||||
schema: { type: 'object', properties: {} },
|
||||
objectName: 'testObject'
|
||||
};
|
||||
|
||||
it('should include system prompt in fallback generateObject call', async () => {
|
||||
// Mock parent to throw JSON error
|
||||
jest
|
||||
.spyOn(
|
||||
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
|
||||
'generateObject'
|
||||
)
|
||||
.mockRejectedValueOnce(new Error('Failed to parse JSON'));
|
||||
|
||||
// Mock direct generateObject call
|
||||
generateObject.mockResolvedValueOnce({
|
||||
object: { result: 'success' },
|
||||
usage: { promptTokens: 15, completionTokens: 10, totalTokens: 25 }
|
||||
});
|
||||
|
||||
const result = await provider.generateObject(mockObjectParams);
|
||||
|
||||
expect(generateObject).toHaveBeenCalledWith({
|
||||
model: expect.objectContaining({
|
||||
id: 'gemini-2.0-flash-exp'
|
||||
}),
|
||||
system: expect.stringContaining('You are a helpful assistant'),
|
||||
messages: [{ role: 'user', content: 'Hello' }],
|
||||
schema: mockObjectParams.schema,
|
||||
mode: 'json',
|
||||
maxTokens: 100,
|
||||
temperature: 0.7
|
||||
});
|
||||
expect(result.object).toEqual({ result: 'success' });
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
// Note: Error handling for module loading is tested in integration tests
|
||||
// since dynamic imports are difficult to mock properly in unit tests
|
||||
|
||||
describe('authentication scenarios', () => {
|
||||
it('should use api-key auth type with API key', async () => {
|
||||
await provider.getClient({ apiKey: 'gemini-test-key' });
|
||||
|
||||
expect(createGeminiProvider).toHaveBeenCalledWith({
|
||||
authType: 'api-key',
|
||||
apiKey: 'gemini-test-key'
|
||||
});
|
||||
});
|
||||
|
||||
it('should use oauth-personal auth type without API key', async () => {
|
||||
await provider.getClient({});
|
||||
|
||||
expect(createGeminiProvider).toHaveBeenCalledWith({
|
||||
authType: 'oauth-personal'
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle empty string API key as no API key', async () => {
|
||||
await provider.getClient({ apiKey: '' });
|
||||
|
||||
expect(createGeminiProvider).toHaveBeenCalledWith({
|
||||
authType: 'oauth-personal'
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -8,6 +8,7 @@ const mockGetResearchModelId = jest.fn();
|
||||
const mockGetFallbackProvider = jest.fn();
|
||||
const mockGetFallbackModelId = jest.fn();
|
||||
const mockGetParametersForRole = jest.fn();
|
||||
const mockGetResponseLanguage = jest.fn();
|
||||
const mockGetUserId = jest.fn();
|
||||
const mockGetDebugFlag = jest.fn();
|
||||
const mockIsApiKeySet = jest.fn();
|
||||
@@ -98,6 +99,7 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
|
||||
getFallbackMaxTokens: mockGetFallbackMaxTokens,
|
||||
getFallbackTemperature: mockGetFallbackTemperature,
|
||||
getParametersForRole: mockGetParametersForRole,
|
||||
getResponseLanguage: mockGetResponseLanguage,
|
||||
getUserId: mockGetUserId,
|
||||
getDebugFlag: mockGetDebugFlag,
|
||||
getBaseUrlForRole: mockGetBaseUrlForRole,
|
||||
@@ -117,7 +119,10 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
|
||||
getBedrockBaseURL: mockGetBedrockBaseURL,
|
||||
getVertexProjectId: mockGetVertexProjectId,
|
||||
getVertexLocation: mockGetVertexLocation,
|
||||
getMcpApiKeyStatus: mockGetMcpApiKeyStatus
|
||||
getMcpApiKeyStatus: mockGetMcpApiKeyStatus,
|
||||
|
||||
// Providers without API keys
|
||||
providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli']
|
||||
}));
|
||||
|
||||
// Mock AI Provider Classes with proper methods
|
||||
@@ -185,6 +190,11 @@ jest.unstable_mockModule('../../src/ai-providers/index.js', () => ({
|
||||
generateText: jest.fn(),
|
||||
streamText: jest.fn(),
|
||||
generateObject: jest.fn()
|
||||
})),
|
||||
GeminiCliProvider: jest.fn(() => ({
|
||||
generateText: jest.fn(),
|
||||
streamText: jest.fn(),
|
||||
generateObject: jest.fn()
|
||||
}))
|
||||
}));
|
||||
|
||||
@@ -269,6 +279,7 @@ describe('Unified AI Services', () => {
|
||||
if (role === 'fallback') return { maxTokens: 150, temperature: 0.6 };
|
||||
return { maxTokens: 100, temperature: 0.5 }; // Default
|
||||
});
|
||||
mockGetResponseLanguage.mockReturnValue('English');
|
||||
mockResolveEnvVariable.mockImplementation((key) => {
|
||||
if (key === 'ANTHROPIC_API_KEY') return 'mock-anthropic-key';
|
||||
if (key === 'PERPLEXITY_API_KEY') return 'mock-perplexity-key';
|
||||
@@ -455,6 +466,68 @@ describe('Unified AI Services', () => {
|
||||
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
test('should use configured responseLanguage in system prompt', async () => {
|
||||
mockGetResponseLanguage.mockReturnValue('中文');
|
||||
mockAnthropicProvider.generateText.mockResolvedValue('中文回复');
|
||||
|
||||
const params = {
|
||||
role: 'main',
|
||||
systemPrompt: 'You are an assistant',
|
||||
prompt: 'Hello'
|
||||
};
|
||||
await generateTextService(params);
|
||||
|
||||
expect(mockAnthropicProvider.generateText).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
messages: [
|
||||
{
|
||||
role: 'system',
|
||||
content: expect.stringContaining('Always respond in 中文')
|
||||
},
|
||||
{ role: 'user', content: 'Hello' }
|
||||
]
|
||||
})
|
||||
);
|
||||
expect(mockGetResponseLanguage).toHaveBeenCalledWith(fakeProjectRoot);
|
||||
});
|
||||
|
||||
test('should pass custom projectRoot to getResponseLanguage', async () => {
|
||||
const customRoot = '/custom/project/root';
|
||||
mockGetResponseLanguage.mockReturnValue('Español');
|
||||
mockAnthropicProvider.generateText.mockResolvedValue(
|
||||
'Respuesta en Español'
|
||||
);
|
||||
|
||||
const params = {
|
||||
role: 'main',
|
||||
systemPrompt: 'You are an assistant',
|
||||
prompt: 'Hello',
|
||||
projectRoot: customRoot
|
||||
};
|
||||
await generateTextService(params);
|
||||
|
||||
expect(mockGetResponseLanguage).toHaveBeenCalledWith(customRoot);
|
||||
expect(mockAnthropicProvider.generateText).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
messages: [
|
||||
{
|
||||
role: 'system',
|
||||
content: expect.stringContaining('Always respond in Español')
|
||||
},
|
||||
{ role: 'user', content: 'Hello' }
|
||||
]
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
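What these two tests effectively require is that the service folds the configured responseLanguage into the system message before calling the provider. A minimal sketch of that composition, with illustrative names:

function buildMessagesSketch({ systemPrompt, prompt, responseLanguage }) {
	// Fold the configured language into the system message, as the assertions
	// above require ('Always respond in 中文' / 'Always respond in Español').
	const system = `${systemPrompt}\n\nAlways respond in ${responseLanguage}.`;
	return [
		{ role: 'system', content: system },
		{ role: 'user', content: prompt }
	];
}

// buildMessagesSketch({ systemPrompt: 'You are an assistant', prompt: 'Hello', responseLanguage: '中文' })
// yields a system message containing 'Always respond in 中文'.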
// Add more tests for edge cases:
|
||||
// - Missing API keys (should throw from _resolveApiKey)
|
||||
// - Unsupported provider configured (should skip and log)
|
||||
// - Missing provider/model config for a role (should skip and log)
|
||||
// - Missing prompt
|
||||
// - Different initial roles (research, fallback)
|
||||
// - generateObjectService (mock schema, check object result)
|
||||
// - streamTextService (more complex to test, might need stream helpers)
|
||||
test('should skip provider with missing API key and try next in fallback sequence', async () => {
|
||||
// Setup isApiKeySet to return false for anthropic but true for perplexity
|
||||
mockIsApiKeySet.mockImplementation((provider, session, root) => {
|
||||
|
||||
@@ -48,11 +48,14 @@ const mockConsole = {
|
||||
};
|
||||
global.console = mockConsole;
|
||||
|
||||
// --- Define Mock Function Instances ---
|
||||
const mockFindConfigPath = jest.fn(() => null); // Default to null, can be overridden in tests
|
||||
|
||||
// Mock path-utils to prevent config file path discovery and logging
|
||||
jest.mock('../../src/utils/path-utils.js', () => ({
|
||||
__esModule: true,
|
||||
findProjectRoot: jest.fn(() => '/mock/project'),
|
||||
findConfigPath: jest.fn(() => null), // Always return null to prevent config discovery
|
||||
findConfigPath: mockFindConfigPath, // Use the mock function instance
|
||||
findTasksPath: jest.fn(() => '/mock/tasks.json'),
|
||||
findComplexityReportPath: jest.fn(() => null),
|
||||
resolveTasksOutputPath: jest.fn(() => '/mock/tasks.json'),
|
||||
@@ -136,12 +139,15 @@ const DEFAULT_CONFIG = {
|
||||
global: {
|
||||
logLevel: 'info',
|
||||
debug: false,
|
||||
defaultNumTasks: 10,
|
||||
defaultSubtasks: 5,
|
||||
defaultPriority: 'medium',
|
||||
projectName: 'Task Master',
|
||||
ollamaBaseURL: 'http://localhost:11434/api',
|
||||
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
|
||||
}
|
||||
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com',
|
||||
responseLanguage: 'English'
|
||||
},
|
||||
claudeCode: {}
|
||||
};
|
||||
|
||||
// Other test data (VALID_CUSTOM_CONFIG, PARTIAL_CONFIG, INVALID_PROVIDER_CONFIG)
|
||||
@@ -195,6 +201,61 @@ const INVALID_PROVIDER_CONFIG = {
|
||||
}
|
||||
};
|
||||
|
||||
// Claude Code test data
|
||||
const VALID_CLAUDE_CODE_CONFIG = {
|
||||
maxTurns: 5,
|
||||
customSystemPrompt: 'You are a helpful coding assistant',
|
||||
appendSystemPrompt: 'Always follow best practices',
|
||||
permissionMode: 'acceptEdits',
|
||||
allowedTools: ['Read', 'LS', 'Edit'],
|
||||
disallowedTools: ['Write'],
|
||||
mcpServers: {
|
||||
'test-server': {
|
||||
type: 'stdio',
|
||||
command: 'node',
|
||||
args: ['server.js'],
|
||||
env: { NODE_ENV: 'test' }
|
||||
}
|
||||
},
|
||||
commandSpecific: {
|
||||
'add-task': {
|
||||
maxTurns: 3,
|
||||
permissionMode: 'plan'
|
||||
},
|
||||
research: {
|
||||
customSystemPrompt: 'You are a research assistant'
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const INVALID_CLAUDE_CODE_CONFIG = {
|
||||
maxTurns: 'invalid', // Should be number
|
||||
permissionMode: 'invalid-mode', // Invalid enum value
|
||||
allowedTools: 'not-an-array', // Should be array
|
||||
mcpServers: {
|
||||
'invalid-server': {
|
||||
type: 'invalid-type', // Invalid enum value
|
||||
url: 'not-a-valid-url' // Invalid URL format
|
||||
}
|
||||
},
|
||||
commandSpecific: {
|
||||
'invalid-command': {
|
||||
// Invalid command name
|
||||
maxTurns: -1 // Invalid negative number
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const PARTIAL_CLAUDE_CODE_CONFIG = {
|
||||
maxTurns: 10,
|
||||
permissionMode: 'default',
|
||||
commandSpecific: {
|
||||
'expand-task': {
|
||||
customSystemPrompt: 'Focus on task breakdown'
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
// Define spies globally to be restored in afterAll
|
||||
let consoleErrorSpy;
|
||||
let consoleWarnSpy;
|
||||
@@ -220,6 +281,7 @@ beforeEach(() => {
|
||||
// Reset the external mock instances for utils
|
||||
mockFindProjectRoot.mockReset();
|
||||
mockLog.mockReset();
|
||||
mockFindConfigPath.mockReset();
|
||||
|
||||
// --- Set up spies ON the imported 'fs' mock ---
|
||||
fsExistsSyncSpy = jest.spyOn(fsMocked, 'existsSync');
|
||||
@@ -228,6 +290,7 @@ beforeEach(() => {
|
||||
|
||||
// --- Default Mock Implementations ---
|
||||
mockFindProjectRoot.mockReturnValue(MOCK_PROJECT_ROOT); // Default for utils.findProjectRoot
|
||||
mockFindConfigPath.mockReturnValue(null); // Default to no config file found
|
||||
fsExistsSyncSpy.mockReturnValue(true); // Assume files exist by default
|
||||
|
||||
// Default readFileSync: Return REAL models content, mocked config, or throw error
|
||||
@@ -325,6 +388,162 @@ describe('Validation Functions', () => {
|
||||
});
|
||||
});
|
||||
|
||||
// --- Claude Code Validation Tests ---
|
||||
describe('Claude Code Validation', () => {
|
||||
test('validateClaudeCodeSettings should return valid settings for correct input', () => {
|
||||
const result = configManager.validateClaudeCodeSettings(
|
||||
VALID_CLAUDE_CODE_CONFIG
|
||||
);
|
||||
|
||||
expect(result).toEqual(VALID_CLAUDE_CODE_CONFIG);
|
||||
expect(consoleWarnSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('validateClaudeCodeSettings should return empty object for invalid input', () => {
|
||||
const result = configManager.validateClaudeCodeSettings(
|
||||
INVALID_CLAUDE_CODE_CONFIG
|
||||
);
|
||||
|
||||
expect(result).toEqual({});
|
||||
expect(consoleWarnSpy).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Warning: Invalid Claude Code settings in config')
|
||||
);
|
||||
});
|
||||
|
||||
test('validateClaudeCodeSettings should handle partial valid configuration', () => {
|
||||
const result = configManager.validateClaudeCodeSettings(
|
||||
PARTIAL_CLAUDE_CODE_CONFIG
|
||||
);
|
||||
|
||||
expect(result).toEqual(PARTIAL_CLAUDE_CODE_CONFIG);
|
||||
expect(consoleWarnSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('validateClaudeCodeSettings should return empty object for empty input', () => {
|
||||
const result = configManager.validateClaudeCodeSettings({});
|
||||
|
||||
expect(result).toEqual({});
|
||||
expect(consoleWarnSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('validateClaudeCodeSettings should handle null/undefined input', () => {
|
||||
expect(configManager.validateClaudeCodeSettings(null)).toEqual({});
|
||||
expect(configManager.validateClaudeCodeSettings(undefined)).toEqual({});
|
||||
expect(consoleWarnSpy).toHaveBeenCalledTimes(2);
|
||||
});
|
||||
});
|
||||
|
||||
// --- Claude Code Getter Tests ---
|
||||
describe('Claude Code Getter Functions', () => {
|
||||
test('getClaudeCodeSettings should return default empty object when no config exists', () => {
|
||||
// No config file exists, should return empty object
|
||||
fsExistsSyncSpy.mockReturnValue(false);
|
||||
const settings = configManager.getClaudeCodeSettings(MOCK_PROJECT_ROOT);
|
||||
|
||||
expect(settings).toEqual({});
|
||||
});
|
||||
|
||||
test('getClaudeCodeSettings should return merged settings from config file', () => {
|
||||
// Config file with Claude Code settings
|
||||
const configWithClaudeCode = {
|
||||
...VALID_CUSTOM_CONFIG,
|
||||
claudeCode: VALID_CLAUDE_CODE_CONFIG
|
||||
};
|
||||
|
||||
// Mock findConfigPath to return the mock config path
|
||||
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
|
||||
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (filePath === MOCK_CONFIG_PATH)
|
||||
return JSON.stringify(configWithClaudeCode);
|
||||
if (path.basename(filePath) === 'supported-models.json') {
|
||||
return JSON.stringify({
|
||||
openai: [{ id: 'gpt-4o' }],
|
||||
google: [{ id: 'gemini-1.5-pro-latest' }],
|
||||
anthropic: [
|
||||
{ id: 'claude-3-opus-20240229' },
|
||||
{ id: 'claude-3-7-sonnet-20250219' },
|
||||
{ id: 'claude-3-5-sonnet' }
|
||||
],
|
||||
perplexity: [{ id: 'sonar-pro' }],
|
||||
ollama: [],
|
||||
openrouter: []
|
||||
});
|
||||
}
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
const settings = configManager.getClaudeCodeSettings(
|
||||
MOCK_PROJECT_ROOT,
|
||||
true
|
||||
); // Force reload
|
||||
|
||||
expect(settings).toEqual(VALID_CLAUDE_CODE_CONFIG);
|
||||
});
|
||||
|
||||
test('getClaudeCodeSettingsForCommand should return command-specific settings', () => {
|
||||
// Config with command-specific settings
|
||||
const configWithClaudeCode = {
|
||||
...VALID_CUSTOM_CONFIG,
|
||||
claudeCode: VALID_CLAUDE_CODE_CONFIG
|
||||
};
|
||||
|
||||
// Mock findConfigPath to return the mock config path
|
||||
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
|
||||
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (path.basename(filePath) === 'supported-models.json') return '{}';
|
||||
if (filePath === MOCK_CONFIG_PATH)
|
||||
return JSON.stringify(configWithClaudeCode);
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
const settings = configManager.getClaudeCodeSettingsForCommand(
|
||||
'add-task',
|
||||
MOCK_PROJECT_ROOT,
|
||||
true
|
||||
); // Force reload
|
||||
|
||||
// Should merge global settings with command-specific settings
|
||||
const expectedSettings = {
|
||||
...VALID_CLAUDE_CODE_CONFIG,
|
||||
...VALID_CLAUDE_CODE_CONFIG.commandSpecific['add-task']
|
||||
};
|
||||
expect(settings).toEqual(expectedSettings);
|
||||
});
|
||||
|
||||
test('getClaudeCodeSettingsForCommand should return global settings for unknown command', () => {
|
||||
// Config with Claude Code settings
|
||||
const configWithClaudeCode = {
|
||||
...VALID_CUSTOM_CONFIG,
|
||||
claudeCode: PARTIAL_CLAUDE_CODE_CONFIG
|
||||
};
|
||||
|
||||
// Mock findConfigPath to return the mock config path
|
||||
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
|
||||
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (path.basename(filePath) === 'supported-models.json') return '{}';
|
||||
if (filePath === MOCK_CONFIG_PATH)
|
||||
return JSON.stringify(configWithClaudeCode);
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
const settings = configManager.getClaudeCodeSettingsForCommand(
|
||||
'unknown-command',
|
||||
MOCK_PROJECT_ROOT,
|
||||
true
|
||||
); // Force reload
|
||||
|
||||
// Should return global settings only
|
||||
expect(settings).toEqual(PARTIAL_CLAUDE_CODE_CONFIG);
|
||||
});
|
||||
});
|
||||
|
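The merge semantics asserted above are a shallow spread of the command-specific overrides on top of the global claudeCode settings, with unknown commands falling back to the globals unchanged. A small sketch of that behaviour, with an assumed function name:

function getClaudeCodeSettingsForCommandSketch(claudeCode, commandName) {
	const settings = claudeCode ?? {};
	const overrides = settings.commandSpecific?.[commandName];
	// Known command: shallow-merge its overrides over the global settings;
	// unknown command: return the global settings unchanged.
	return overrides ? { ...settings, ...overrides } : settings;
}

// getClaudeCodeSettingsForCommandSketch(VALID_CLAUDE_CODE_CONFIG, 'add-task')
// -> { ...VALID_CLAUDE_CODE_CONFIG, maxTurns: 3, permissionMode: 'plan' }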
||||
// --- getConfig Tests ---
|
||||
describe('getConfig Tests', () => {
|
||||
test('should return default config if .taskmasterconfig does not exist', () => {
|
||||
@@ -409,7 +628,11 @@ describe('getConfig Tests', () => {
|
||||
...VALID_CUSTOM_CONFIG.models.fallback
|
||||
}
|
||||
},
|
||||
global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global }
|
||||
global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global },
|
||||
claudeCode: {
|
||||
...DEFAULT_CONFIG.claudeCode,
|
||||
...VALID_CUSTOM_CONFIG.claudeCode
|
||||
}
|
||||
};
|
||||
expect(config).toEqual(expectedMergedConfig);
|
||||
expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
|
||||
@@ -447,7 +670,11 @@ describe('getConfig Tests', () => {
|
||||
research: { ...DEFAULT_CONFIG.models.research },
|
||||
fallback: { ...DEFAULT_CONFIG.models.fallback }
|
||||
},
|
||||
global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global }
|
||||
global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global },
|
||||
claudeCode: {
|
||||
...DEFAULT_CONFIG.claudeCode,
|
||||
...VALID_CUSTOM_CONFIG.claudeCode
|
||||
}
|
||||
};
|
||||
expect(config).toEqual(expectedMergedConfig);
|
||||
expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8');
|
||||
@@ -551,7 +778,11 @@ describe('getConfig Tests', () => {
|
||||
},
|
||||
fallback: { ...DEFAULT_CONFIG.models.fallback }
|
||||
},
|
||||
global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global }
|
||||
global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global },
|
||||
claudeCode: {
|
||||
...DEFAULT_CONFIG.claudeCode,
|
||||
...VALID_CUSTOM_CONFIG.claudeCode
|
||||
}
|
||||
};
|
||||
expect(config).toEqual(expectedMergedConfig);
|
||||
});
|
||||
@@ -684,6 +915,82 @@ describe('Getter Functions', () => {
|
||||
expect(logLevel).toBe(VALID_CUSTOM_CONFIG.global.logLevel);
|
||||
});
|
||||
|
||||
test('getResponseLanguage should return responseLanguage from config', () => {
|
||||
// Arrange
|
||||
// Prepare a config object with responseLanguage property for this test
|
||||
const configWithLanguage = JSON.stringify({
|
||||
models: {
|
||||
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
|
||||
},
|
||||
global: {
|
||||
projectName: 'Test Project',
|
||||
responseLanguage: '中文'
|
||||
}
|
||||
});
|
||||
|
||||
// Set up fs.readFileSync to return our test config
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (filePath === MOCK_CONFIG_PATH) {
|
||||
return configWithLanguage;
|
||||
}
|
||||
if (path.basename(filePath) === 'supported-models.json') {
|
||||
return JSON.stringify({
|
||||
openai: [{ id: 'gpt-4-turbo' }]
|
||||
});
|
||||
}
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
// Ensure getConfig returns new values instead of cached ones
|
||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
||||
|
||||
// Act
|
||||
const responseLanguage =
|
||||
configManager.getResponseLanguage(MOCK_PROJECT_ROOT);
|
||||
|
||||
// Assert
|
||||
expect(responseLanguage).toBe('中文');
|
||||
});
|
||||
|
||||
test('getResponseLanguage should return undefined when responseLanguage is not in config', () => {
|
||||
// Arrange
|
||||
const configWithoutLanguage = JSON.stringify({
|
||||
models: {
|
||||
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
|
||||
},
|
||||
global: {
|
||||
projectName: 'Test Project'
|
||||
// No responseLanguage property
|
||||
}
|
||||
});
|
||||
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (filePath === MOCK_CONFIG_PATH) {
|
||||
return configWithoutLanguage;
|
||||
}
|
||||
if (path.basename(filePath) === 'supported-models.json') {
|
||||
return JSON.stringify({
|
||||
openai: [{ id: 'gpt-4-turbo' }]
|
||||
});
|
||||
}
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
// Ensure getConfig returns new values instead of cached ones
|
||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
||||
|
||||
// Act
|
||||
const responseLanguage =
|
||||
configManager.getResponseLanguage(MOCK_PROJECT_ROOT);
|
||||
|
||||
// Assert
|
||||
expect(responseLanguage).toBe('English');
|
||||
});
|
||||
|
||||
// Add more tests for other getters (getResearchProvider, getProjectName, etc.)
|
||||
});
|
||||
|
||||
@@ -738,5 +1045,116 @@ describe('getAllProviders', () => {
|
||||
|
||||
// Add tests for getParametersForRole if needed
|
||||
|
||||
// --- defaultNumTasks Tests ---
|
||||
describe('Configuration Getters', () => {
|
||||
test('getDefaultNumTasks should return default value when config is valid', () => {
|
||||
// Arrange: Mock fs.readFileSync to return valid config when called with the expected path
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (filePath === MOCK_CONFIG_PATH) {
|
||||
return JSON.stringify({
|
||||
global: {
|
||||
defaultNumTasks: 15
|
||||
}
|
||||
});
|
||||
}
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
// Force reload to clear cache
|
||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
||||
|
||||
// Act: Call getDefaultNumTasks with explicit root
|
||||
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(15);
|
||||
});
|
||||
|
||||
test('getDefaultNumTasks should return fallback when config value is invalid', () => {
|
||||
// Arrange: Mock fs.readFileSync to return invalid config
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (filePath === MOCK_CONFIG_PATH) {
|
||||
return JSON.stringify({
|
||||
global: {
|
||||
defaultNumTasks: 'invalid'
|
||||
}
|
||||
});
|
||||
}
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
// Force reload to clear cache
|
||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
||||
|
||||
// Act: Call getDefaultNumTasks with explicit root
|
||||
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
|
||||
});
|
||||
|
||||
test('getDefaultNumTasks should return fallback when config value is missing', () => {
|
||||
// Arrange: Mock fs.readFileSync to return config without defaultNumTasks
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (filePath === MOCK_CONFIG_PATH) {
|
||||
return JSON.stringify({
|
||||
global: {}
|
||||
});
|
||||
}
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
// Force reload to clear cache
|
||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
||||
|
||||
// Act: Call getDefaultNumTasks with explicit root
|
||||
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
|
||||
});
|
||||
|
||||
test('getDefaultNumTasks should handle non-existent config file', () => {
|
||||
// Arrange: Mock file not existing
|
||||
fsExistsSyncSpy.mockReturnValue(false);
|
||||
|
||||
// Force reload to clear cache
|
||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
||||
|
||||
// Act: Call getDefaultNumTasks with explicit root
|
||||
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
|
||||
});
|
||||
|
||||
test('getDefaultNumTasks should accept explicit project root', () => {
|
||||
// Arrange: Mock fs.readFileSync to return valid config
|
||||
fsReadFileSyncSpy.mockImplementation((filePath) => {
|
||||
if (filePath === MOCK_CONFIG_PATH) {
|
||||
return JSON.stringify({
|
||||
global: {
|
||||
defaultNumTasks: 20
|
||||
}
|
||||
});
|
||||
}
|
||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
||||
});
|
||||
fsExistsSyncSpy.mockReturnValue(true);
|
||||
|
||||
// Force reload to clear cache
|
||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
||||
|
||||
// Act: Call getDefaultNumTasks with explicit project root
|
||||
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(20);
|
||||
});
|
||||
});
|
||||
|
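These getter tests amount to a simple contract: return the configured value when it is usable, otherwise fall back to the default of 10. A sketch of that contract (the positive-integer validation rule is an assumption; the real getter may validate differently):

function getDefaultNumTasksSketch(config, fallback = 10) {
	const value = config?.global?.defaultNumTasks;
	// Accept only a usable positive integer; anything else falls back.
	return Number.isInteger(value) && value > 0 ? value : fallback;
}

// getDefaultNumTasksSketch({ global: { defaultNumTasks: 15 } })        -> 15
// getDefaultNumTasksSketch({ global: { defaultNumTasks: 'invalid' } }) -> 10
// getDefaultNumTasksSketch({ global: {} })                             -> 10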
||||
// Note: Tests for setMainModel, setResearchModel were removed as the functions were removed in the implementation.
|
||||
// If similar setter functions exist, add tests for them following the writeConfig pattern.
|
||||
|
||||
@@ -179,7 +179,8 @@ logs
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/ `
|
||||
# tasks/
|
||||
`
|
||||
);
|
||||
expect(mockLog).toHaveBeenCalledWith(
|
||||
'success',
|
||||
@@ -200,7 +201,8 @@ logs
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `
|
||||
tasks/
|
||||
`
|
||||
);
|
||||
expect(mockLog).toHaveBeenCalledWith(
|
||||
'success',
|
||||
@@ -432,7 +434,8 @@ tasks/ `;
|
||||
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
|
||||
expect(writtenContent).toBe(`# Task files
|
||||
# tasks.json
|
||||
# tasks/ `);
|
||||
# tasks/
|
||||
`);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
528
tests/unit/mcp/tools/remove-task.test.js
Normal file
@@ -0,0 +1,528 @@
|
||||
/**
|
||||
* Tests for the remove-task MCP tool
|
||||
*
|
||||
* Note: This test does NOT test the actual implementation. It tests that:
|
||||
* 1. The tool is registered correctly with the correct parameters
|
||||
* 2. Arguments are passed correctly to removeTaskDirect
|
||||
* 3. Error handling works as expected
|
||||
* 4. Tag parameter is properly handled and passed through
|
||||
*
|
||||
* We do NOT import the real implementation - everything is mocked
|
||||
*/
|
||||
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock EVERYTHING
|
||||
const mockRemoveTaskDirect = jest.fn();
|
||||
jest.mock('../../../../mcp-server/src/core/task-master-core.js', () => ({
|
||||
removeTaskDirect: mockRemoveTaskDirect
|
||||
}));
|
||||
|
||||
const mockHandleApiResult = jest.fn((result) => result);
|
||||
const mockWithNormalizedProjectRoot = jest.fn((fn) => fn);
|
||||
const mockCreateErrorResponse = jest.fn((msg) => ({
|
||||
success: false,
|
||||
error: { code: 'ERROR', message: msg }
|
||||
}));
|
||||
const mockFindTasksPath = jest.fn(() => '/mock/project/tasks.json');
|
||||
|
||||
jest.mock('../../../../mcp-server/src/tools/utils.js', () => ({
|
||||
handleApiResult: mockHandleApiResult,
|
||||
createErrorResponse: mockCreateErrorResponse,
|
||||
withNormalizedProjectRoot: mockWithNormalizedProjectRoot
|
||||
}));
|
||||
|
||||
jest.mock('../../../../mcp-server/src/core/utils/path-utils.js', () => ({
|
||||
findTasksPath: mockFindTasksPath
|
||||
}));
|
||||
|
||||
// Mock the z object from zod
|
||||
const mockZod = {
|
||||
object: jest.fn(() => mockZod),
|
||||
string: jest.fn(() => mockZod),
|
||||
boolean: jest.fn(() => mockZod),
|
||||
optional: jest.fn(() => mockZod),
|
||||
describe: jest.fn(() => mockZod),
|
||||
_def: {
|
||||
shape: () => ({
|
||||
id: {},
|
||||
file: {},
|
||||
projectRoot: {},
|
||||
confirm: {},
|
||||
tag: {}
|
||||
})
|
||||
}
|
||||
};
|
||||
|
||||
jest.mock('zod', () => ({
|
||||
z: mockZod
|
||||
}));
|
||||
|
||||
// DO NOT import the real module - create a fake implementation
|
||||
// This is the fake implementation of registerRemoveTaskTool
|
||||
const registerRemoveTaskTool = (server) => {
|
||||
// Create simplified version of the tool config
|
||||
const toolConfig = {
|
||||
name: 'remove_task',
|
||||
description: 'Remove a task or subtask permanently from the tasks list',
|
||||
parameters: mockZod,
|
||||
|
||||
// Create a simplified mock of the execute function
|
||||
execute: mockWithNormalizedProjectRoot(async (args, context) => {
|
||||
const { log, session } = context;
|
||||
|
||||
try {
|
||||
log.info && log.info(`Removing task(s) with ID(s): ${args.id}`);
|
||||
|
||||
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
|
||||
let tasksJsonPath;
|
||||
try {
|
||||
tasksJsonPath = mockFindTasksPath(
|
||||
{ projectRoot: args.projectRoot, file: args.file },
|
||||
log
|
||||
);
|
||||
} catch (error) {
|
||||
log.error && log.error(`Error finding tasks.json: ${error.message}`);
|
||||
return mockCreateErrorResponse(
|
||||
`Failed to find tasks.json: ${error.message}`
|
||||
);
|
||||
}
|
||||
|
||||
log.info && log.info(`Using tasks file path: ${tasksJsonPath}`);
|
||||
|
||||
const result = await mockRemoveTaskDirect(
|
||||
{
|
||||
tasksJsonPath: tasksJsonPath,
|
||||
id: args.id,
|
||||
projectRoot: args.projectRoot,
|
||||
tag: args.tag
|
||||
},
|
||||
log,
|
||||
{ session }
|
||||
);
|
||||
|
||||
if (result.success) {
|
||||
log.info && log.info(`Successfully removed task: ${args.id}`);
|
||||
} else {
|
||||
log.error &&
|
||||
log.error(`Failed to remove task: ${result.error.message}`);
|
||||
}
|
||||
|
||||
return mockHandleApiResult(
|
||||
result,
|
||||
log,
|
||||
'Error removing task',
|
||||
undefined,
|
||||
args.projectRoot
|
||||
);
|
||||
} catch (error) {
|
||||
log.error && log.error(`Error in remove-task tool: ${error.message}`);
|
||||
return mockCreateErrorResponse(error.message);
|
||||
}
|
||||
})
|
||||
};
|
||||
|
||||
// Register the tool with the server
|
||||
server.addTool(toolConfig);
|
||||
};
|
||||
|
||||
describe('MCP Tool: remove-task', () => {
|
||||
// Create mock server
|
||||
let mockServer;
|
||||
let executeFunction;
|
||||
|
||||
// Create mock logger
|
||||
const mockLogger = {
|
||||
debug: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn()
|
||||
};
|
||||
|
||||
// Test data
|
||||
const validArgs = {
|
||||
id: '5',
|
||||
projectRoot: '/mock/project/root',
|
||||
file: '/mock/project/tasks.json',
|
||||
confirm: true,
|
||||
tag: 'feature-branch'
|
||||
};
|
||||
|
||||
const multipleTaskArgs = {
|
||||
id: '5,6.1,7',
|
||||
projectRoot: '/mock/project/root',
|
||||
tag: 'master'
|
||||
};
|
||||
|
||||
// Standard responses
|
||||
const successResponse = {
|
||||
success: true,
|
||||
data: {
|
||||
totalTasks: 1,
|
||||
successful: 1,
|
||||
failed: 0,
|
||||
removedTasks: [
|
||||
{
|
||||
id: 5,
|
||||
title: 'Removed Task',
|
||||
status: 'pending'
|
||||
}
|
||||
],
|
||||
messages: ["Successfully removed task 5 from tag 'feature-branch'"],
|
||||
errors: [],
|
||||
tasksPath: '/mock/project/tasks.json',
|
||||
tag: 'feature-branch'
|
||||
}
|
||||
};
|
||||
|
||||
const multipleTasksSuccessResponse = {
|
||||
success: true,
|
||||
data: {
|
||||
totalTasks: 3,
|
||||
successful: 3,
|
||||
failed: 0,
|
||||
removedTasks: [
|
||||
{ id: 5, title: 'Task 5', status: 'pending' },
|
||||
{ id: 1, title: 'Subtask 6.1', status: 'done', parentTaskId: 6 },
|
||||
{ id: 7, title: 'Task 7', status: 'in-progress' }
|
||||
],
|
||||
messages: [
|
||||
"Successfully removed task 5 from tag 'master'",
|
||||
"Successfully removed subtask 6.1 from tag 'master'",
|
||||
"Successfully removed task 7 from tag 'master'"
|
||||
],
|
||||
errors: [],
|
||||
tasksPath: '/mock/project/tasks.json',
|
||||
tag: 'master'
|
||||
}
|
||||
};
|
||||
|
||||
const errorResponse = {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'INVALID_TASK_ID',
|
||||
message: "The following tasks were not found in tag 'feature-branch': 999"
|
||||
}
|
||||
};
|
||||
|
||||
const pathErrorResponse = {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'PATH_ERROR',
|
||||
message: 'Failed to find tasks.json: No tasks.json found'
|
||||
}
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
// Reset all mocks
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create mock server
|
||||
mockServer = {
|
||||
addTool: jest.fn((config) => {
|
||||
executeFunction = config.execute;
|
||||
})
|
||||
};
|
||||
|
||||
// Setup default successful response
|
||||
mockRemoveTaskDirect.mockResolvedValue(successResponse);
|
||||
mockFindTasksPath.mockReturnValue('/mock/project/tasks.json');
|
||||
|
||||
// Register the tool
|
||||
registerRemoveTaskTool(mockServer);
|
||||
});
|
||||
|
||||
test('should register the tool correctly', () => {
|
||||
// Verify tool was registered
|
||||
expect(mockServer.addTool).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
name: 'remove_task',
|
||||
description: 'Remove a task or subtask permanently from the tasks list',
|
||||
parameters: expect.any(Object),
|
||||
execute: expect.any(Function)
|
||||
})
|
||||
);
|
||||
|
||||
// Verify the tool config was passed
|
||||
const toolConfig = mockServer.addTool.mock.calls[0][0];
|
||||
expect(toolConfig).toHaveProperty('parameters');
|
||||
expect(toolConfig).toHaveProperty('execute');
|
||||
});
|
||||
|
||||
test('should execute the tool with valid parameters including tag', async () => {
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
await executeFunction(validArgs, mockContext);
|
||||
|
||||
// Verify findTasksPath was called with correct arguments
|
||||
expect(mockFindTasksPath).toHaveBeenCalledWith(
|
||||
{
|
||||
projectRoot: validArgs.projectRoot,
|
||||
file: validArgs.file
|
||||
},
|
||||
mockLogger
|
||||
);
|
||||
|
||||
// Verify removeTaskDirect was called with correct arguments including tag
|
||||
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
tasksJsonPath: '/mock/project/tasks.json',
|
||||
id: validArgs.id,
|
||||
projectRoot: validArgs.projectRoot,
|
||||
tag: validArgs.tag // This is the key test - tag parameter should be passed through
|
||||
}),
|
||||
mockLogger,
|
||||
{
|
||||
session: mockContext.session
|
||||
}
|
||||
);
|
||||
|
||||
// Verify handleApiResult was called
|
||||
expect(mockHandleApiResult).toHaveBeenCalledWith(
|
||||
successResponse,
|
||||
mockLogger,
|
||||
'Error removing task',
|
||||
undefined,
|
||||
validArgs.projectRoot
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle multiple task IDs with tag context', async () => {
|
||||
// Setup multiple tasks response
|
||||
mockRemoveTaskDirect.mockResolvedValueOnce(multipleTasksSuccessResponse);
|
||||
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
await executeFunction(multipleTaskArgs, mockContext);
|
||||
|
||||
// Verify removeTaskDirect was called with comma-separated IDs and tag
|
||||
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
id: '5,6.1,7',
|
||||
tag: 'master'
|
||||
}),
|
||||
mockLogger,
|
||||
expect.any(Object)
|
||||
);
|
||||
|
||||
// Verify successful handling of multiple tasks
|
||||
expect(mockHandleApiResult).toHaveBeenCalledWith(
|
||||
multipleTasksSuccessResponse,
|
||||
mockLogger,
|
||||
'Error removing task',
|
||||
undefined,
|
||||
multipleTaskArgs.projectRoot
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle missing tag parameter (defaults to current tag)', async () => {
|
||||
const argsWithoutTag = {
|
||||
id: '5',
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
await executeFunction(argsWithoutTag, mockContext);
|
||||
|
||||
// Verify removeTaskDirect was called with undefined tag (should default to current tag)
|
||||
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
id: '5',
|
||||
projectRoot: '/mock/project/root',
|
||||
tag: undefined // Should be undefined when not provided
|
||||
}),
|
||||
mockLogger,
|
||||
expect.any(Object)
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle errors from removeTaskDirect', async () => {
|
||||
// Setup error response
|
||||
mockRemoveTaskDirect.mockResolvedValueOnce(errorResponse);
|
||||
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
await executeFunction(validArgs, mockContext);
|
||||
|
||||
// Verify removeTaskDirect was called
|
||||
expect(mockRemoveTaskDirect).toHaveBeenCalled();
|
||||
|
||||
// Verify error logging
|
||||
expect(mockLogger.error).toHaveBeenCalledWith(
|
||||
"Failed to remove task: The following tasks were not found in tag 'feature-branch': 999"
|
||||
);
|
||||
|
||||
// Verify handleApiResult was called with error response
|
||||
expect(mockHandleApiResult).toHaveBeenCalledWith(
|
||||
errorResponse,
|
||||
mockLogger,
|
||||
'Error removing task',
|
||||
undefined,
|
||||
validArgs.projectRoot
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle path finding errors', async () => {
|
||||
// Setup path finding error
|
||||
mockFindTasksPath.mockImplementationOnce(() => {
|
||||
throw new Error('No tasks.json found');
|
||||
});
|
||||
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
const result = await executeFunction(validArgs, mockContext);
|
||||
|
||||
// Verify error logging
|
||||
expect(mockLogger.error).toHaveBeenCalledWith(
|
||||
'Error finding tasks.json: No tasks.json found'
|
||||
);
|
||||
|
||||
// Verify error response was returned
|
||||
expect(mockCreateErrorResponse).toHaveBeenCalledWith(
|
||||
'Failed to find tasks.json: No tasks.json found'
|
||||
);
|
||||
|
||||
// Verify removeTaskDirect was NOT called
|
||||
expect(mockRemoveTaskDirect).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should handle unexpected errors in execute function', async () => {
|
||||
// Setup unexpected error
|
||||
mockRemoveTaskDirect.mockImplementationOnce(() => {
|
||||
throw new Error('Unexpected error');
|
||||
});
|
||||
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
await executeFunction(validArgs, mockContext);
|
||||
|
||||
// Verify error logging
|
||||
expect(mockLogger.error).toHaveBeenCalledWith(
|
||||
'Error in remove-task tool: Unexpected error'
|
||||
);
|
||||
|
||||
// Verify error response was returned
|
||||
expect(mockCreateErrorResponse).toHaveBeenCalledWith('Unexpected error');
|
||||
});
|
||||
|
||||
test('should properly handle withNormalizedProjectRoot wrapper', () => {
|
||||
// Verify that withNormalizedProjectRoot was called with the execute function
|
||||
expect(mockWithNormalizedProjectRoot).toHaveBeenCalledWith(
|
||||
expect.any(Function)
|
||||
);
|
||||
});
|
||||
|
||||
test('should log appropriate info messages for successful operations', async () => {
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
await executeFunction(validArgs, mockContext);
|
||||
|
||||
// Verify appropriate logging
|
||||
expect(mockLogger.info).toHaveBeenCalledWith(
|
||||
'Removing task(s) with ID(s): 5'
|
||||
);
|
||||
expect(mockLogger.info).toHaveBeenCalledWith(
|
||||
'Using tasks file path: /mock/project/tasks.json'
|
||||
);
|
||||
expect(mockLogger.info).toHaveBeenCalledWith(
|
||||
'Successfully removed task: 5'
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle subtask removal with proper tag context', async () => {
|
||||
const subtaskArgs = {
|
||||
id: '5.2',
|
||||
projectRoot: '/mock/project/root',
|
||||
tag: 'feature-branch'
|
||||
};
|
||||
|
||||
const subtaskSuccessResponse = {
|
||||
success: true,
|
||||
data: {
|
||||
totalTasks: 1,
|
||||
successful: 1,
|
||||
failed: 0,
|
||||
removedTasks: [
|
||||
{
|
||||
id: 2,
|
||||
title: 'Removed Subtask',
|
||||
status: 'pending',
|
||||
parentTaskId: 5
|
||||
}
|
||||
],
|
||||
messages: [
|
||||
"Successfully removed subtask 5.2 from tag 'feature-branch'"
|
||||
],
|
||||
errors: [],
|
||||
tasksPath: '/mock/project/tasks.json',
|
||||
tag: 'feature-branch'
|
||||
}
|
||||
};
|
||||
|
||||
mockRemoveTaskDirect.mockResolvedValueOnce(subtaskSuccessResponse);
|
||||
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
await executeFunction(subtaskArgs, mockContext);
|
||||
|
||||
// Verify removeTaskDirect was called with subtask ID and tag
|
||||
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
id: '5.2',
|
||||
tag: 'feature-branch'
|
||||
}),
|
||||
mockLogger,
|
||||
expect.any(Object)
|
||||
);
|
||||
|
||||
// Verify successful handling
|
||||
expect(mockHandleApiResult).toHaveBeenCalledWith(
|
||||
subtaskSuccessResponse,
|
||||
mockLogger,
|
||||
'Error removing task',
|
||||
undefined,
|
||||
subtaskArgs.projectRoot
|
||||
);
|
||||
});
|
||||
});
|
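The wrapper test only checks that withNormalizedProjectRoot received the execute function, so the sketch below is an assumption about what such a wrapper typically does: normalize args.projectRoot before delegating. The real utility likely also consults the session and validates the path.

import path from 'node:path';

// Assumed shape: wrap an execute(args, context) handler so that projectRoot is
// always an absolute, normalized path by the time the handler runs.
const withNormalizedProjectRootSketch =
	(execute) =>
	async (args, context) =>
		execute(
			{ ...args, projectRoot: path.resolve(args.projectRoot ?? process.cwd()) },
			context
		);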
||||
@@ -0,0 +1,190 @@
|
||||
/**
|
||||
* Unit test to ensure fixDependenciesCommand writes JSON with the correct
|
||||
* projectRoot and tag arguments so that tag data is preserved.
|
||||
*/
|
||||
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock process.exit to prevent test termination
|
||||
const mockProcessExit = jest.fn();
|
||||
const originalExit = process.exit;
|
||||
process.exit = mockProcessExit;
|
||||
|
||||
// Mock utils.js BEFORE importing the module under test
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
|
||||
readJSON: jest.fn(),
|
||||
writeJSON: jest.fn(),
|
||||
log: jest.fn(),
|
||||
findProjectRoot: jest.fn(() => '/mock/project/root'),
|
||||
getCurrentTag: jest.fn(() => 'master'),
|
||||
taskExists: jest.fn(() => true),
|
||||
formatTaskId: jest.fn((id) => id),
|
||||
findCycles: jest.fn(() => []),
|
||||
isSilentMode: jest.fn(() => true),
|
||||
resolveTag: jest.fn(() => 'master'),
|
||||
getTasksForTag: jest.fn(() => []),
|
||||
setTasksForTag: jest.fn(),
|
||||
enableSilentMode: jest.fn(),
|
||||
disableSilentMode: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock ui.js
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
|
||||
displayBanner: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock task-manager.js
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/task-manager.js',
|
||||
() => ({
|
||||
generateTaskFiles: jest.fn()
|
||||
})
|
||||
);
|
||||
|
||||
// Mock external libraries
|
||||
jest.unstable_mockModule('chalk', () => ({
|
||||
default: {
|
||||
green: jest.fn((text) => text),
|
||||
cyan: jest.fn((text) => text),
|
||||
bold: jest.fn((text) => text)
|
||||
}
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('boxen', () => ({
|
||||
default: jest.fn((text) => text)
|
||||
}));
|
||||
|
||||
// Import the mocked modules
|
||||
const { readJSON, writeJSON, log, taskExists } = await import(
|
||||
'../../../../../scripts/modules/utils.js'
|
||||
);
|
||||
|
||||
// Import the module under test
|
||||
const { fixDependenciesCommand } = await import(
|
||||
'../../../../../scripts/modules/dependency-manager.js'
|
||||
);
|
||||
|
||||
describe('fixDependenciesCommand tag preservation', () => {
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
mockProcessExit.mockClear();
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
// Restore original process.exit
|
||||
process.exit = originalExit;
|
||||
});
|
||||
|
||||
it('calls writeJSON with projectRoot and tag parameters when changes are made', async () => {
|
||||
const tasksPath = '/mock/tasks.json';
|
||||
const projectRoot = '/mock/project/root';
|
||||
const tag = 'master';
|
||||
|
||||
// Mock data WITH dependency issues to trigger writeJSON
|
||||
const tasksDataWithIssues = {
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Task 1',
|
||||
dependencies: [999] // Non-existent dependency to trigger fix
|
||||
},
|
||||
{
|
||||
id: 2,
|
||||
title: 'Task 2',
|
||||
dependencies: []
|
||||
}
|
||||
],
|
||||
tag: 'master',
|
||||
_rawTaggedData: {
|
||||
master: {
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Task 1',
|
||||
dependencies: [999]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
readJSON.mockReturnValue(tasksDataWithIssues);
|
||||
taskExists.mockReturnValue(false); // Make dependency invalid to trigger fix
|
||||
|
||||
await fixDependenciesCommand(tasksPath, {
|
||||
context: { projectRoot, tag }
|
||||
});
|
||||
|
||||
// Verify readJSON was called with correct parameters
|
||||
expect(readJSON).toHaveBeenCalledWith(tasksPath, projectRoot, tag);
|
||||
|
||||
// Verify writeJSON was called (should be triggered by removing invalid dependency)
|
||||
expect(writeJSON).toHaveBeenCalled();
|
||||
|
||||
// Check the writeJSON call parameters
|
||||
const writeJSONCalls = writeJSON.mock.calls;
|
||||
const lastWriteCall = writeJSONCalls[writeJSONCalls.length - 1];
|
||||
const [calledPath, _data, calledProjectRoot, calledTag] = lastWriteCall;
|
||||
|
||||
expect(calledPath).toBe(tasksPath);
|
||||
expect(calledProjectRoot).toBe(projectRoot);
|
||||
expect(calledTag).toBe(tag);
|
||||
|
||||
// Verify process.exit was NOT called (meaning the function succeeded)
|
||||
expect(mockProcessExit).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('does not call writeJSON when no changes are needed', async () => {
|
||||
const tasksPath = '/mock/tasks.json';
|
||||
const projectRoot = '/mock/project/root';
|
||||
const tag = 'master';
|
||||
|
||||
// Mock data WITHOUT dependency issues (no changes needed)
|
||||
const cleanTasksData = {
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Task 1',
|
||||
dependencies: [] // Clean, no issues
|
||||
}
|
||||
],
|
||||
tag: 'master'
|
||||
};
|
||||
|
||||
readJSON.mockReturnValue(cleanTasksData);
|
||||
taskExists.mockReturnValue(true); // All dependencies exist
|
||||
|
||||
await fixDependenciesCommand(tasksPath, {
|
||||
context: { projectRoot, tag }
|
||||
});
|
||||
|
||||
// Verify readJSON was called
|
||||
expect(readJSON).toHaveBeenCalledWith(tasksPath, projectRoot, tag);
|
||||
|
||||
// Verify writeJSON was NOT called (no changes needed)
|
||||
expect(writeJSON).not.toHaveBeenCalled();
|
||||
|
||||
// Verify process.exit was NOT called
|
||||
expect(mockProcessExit).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('handles early exit when no valid tasks found', async () => {
|
||||
const tasksPath = '/mock/tasks.json';
|
||||
|
||||
// Mock invalid data to trigger early exit
|
||||
readJSON.mockReturnValue(null);
|
||||
|
||||
await fixDependenciesCommand(tasksPath, {
|
||||
context: { projectRoot: '/mock', tag: 'master' }
|
||||
});
|
||||
|
||||
// Verify readJSON was called
|
||||
expect(readJSON).toHaveBeenCalled();
|
||||
|
||||
// Verify writeJSON was NOT called (early exit)
|
||||
expect(writeJSON).not.toHaveBeenCalled();
|
||||
|
||||
// Verify process.exit WAS called due to invalid data
|
||||
expect(mockProcessExit).toHaveBeenCalledWith(1);
|
||||
});
|
||||
});
|
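The contract these tests enforce is narrow: read with (path, projectRoot, tag), write back with the same projectRoot and tag only when something actually changed, and exit with code 1 on unreadable data. A sketch of that contract, assuming taskExists takes (tasks, id) and that the fix consists of dropping dangling dependency references:

async function fixDependenciesSketch(tasksPath, { projectRoot, tag }, utils) {
	// Read with the tag-aware signature the test asserts: (path, projectRoot, tag).
	const data = utils.readJSON(tasksPath, projectRoot, tag);
	if (!data || !Array.isArray(data.tasks)) {
		process.exit(1); // matches the early-exit expectation for invalid data
		return;
	}

	let changed = false;
	for (const task of data.tasks) {
		const deps = task.dependencies ?? [];
		// taskExists is assumed here to take (tasks, id); drop dangling references.
		const validDeps = deps.filter((depId) => utils.taskExists(data.tasks, depId));
		if (validDeps.length !== deps.length) changed = true;
		task.dependencies = validDeps;
	}

	// Only write when something was fixed, passing projectRoot and tag through
	// so tagged task data is preserved.
	if (changed) {
		utils.writeJSON(tasksPath, data, projectRoot, tag);
	}
}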
||||
@@ -2,308 +2,171 @@
* Tests for the addSubtask function
*/
import { jest } from '@jest/globals';
import path from 'path';

// Mock dependencies
const mockReadJSON = jest.fn();
const mockWriteJSON = jest.fn();
const mockGenerateTaskFiles = jest.fn();
const mockIsTaskDependentOn = jest.fn().mockReturnValue(false);

// Mock path module
jest.mock('path', () => ({
dirname: jest.fn()
}));

// Define test version of the addSubtask function
const testAddSubtask = (
tasksPath,
parentId,
existingTaskId,
newSubtaskData,
generateFiles = true
) => {
// Read the existing tasks
const data = mockReadJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
}

// Convert parent ID to number
const parentIdNum = parseInt(parentId, 10);

// Find the parent task
const parentTask = data.tasks.find((t) => t.id === parentIdNum);
if (!parentTask) {
throw new Error(`Parent task with ID ${parentIdNum} not found`);
}

// Initialize subtasks array if it doesn't exist
if (!parentTask.subtasks) {
parentTask.subtasks = [];
}

let newSubtask;

// Case 1: Convert an existing task to a subtask
if (existingTaskId !== null) {
const existingTaskIdNum = parseInt(existingTaskId, 10);

// Find the existing task
const existingTaskIndex = data.tasks.findIndex(
(t) => t.id === existingTaskIdNum
);
if (existingTaskIndex === -1) {
throw new Error(`Task with ID ${existingTaskIdNum} not found`);
}

const existingTask = data.tasks[existingTaskIndex];

// Check if task is already a subtask
if (existingTask.parentTaskId) {
throw new Error(
`Task ${existingTaskIdNum} is already a subtask of task ${existingTask.parentTaskId}`
);
}

// Check for circular dependency
if (existingTaskIdNum === parentIdNum) {
throw new Error(`Cannot make a task a subtask of itself`);
}

// Check for circular dependency using mockIsTaskDependentOn
if (mockIsTaskDependentOn()) {
throw new Error(
`Cannot create circular dependency: task ${parentIdNum} is already a subtask or dependent of task ${existingTaskIdNum}`
);
}

// Find the highest subtask ID to determine the next ID
const highestSubtaskId =
parentTask.subtasks.length > 0
? Math.max(...parentTask.subtasks.map((st) => st.id))
: 0;
const newSubtaskId = highestSubtaskId + 1;

// Clone the existing task to be converted to a subtask
newSubtask = {
...existingTask,
id: newSubtaskId,
parentTaskId: parentIdNum
};

// Add to parent's subtasks
parentTask.subtasks.push(newSubtask);

// Remove the task from the main tasks array
data.tasks.splice(existingTaskIndex, 1);
}
// Case 2: Create a new subtask
else if (newSubtaskData) {
// Find the highest subtask ID to determine the next ID
const highestSubtaskId =
parentTask.subtasks.length > 0
? Math.max(...parentTask.subtasks.map((st) => st.id))
: 0;
const newSubtaskId = highestSubtaskId + 1;

// Create the new subtask object
newSubtask = {
id: newSubtaskId,
title: newSubtaskData.title,
description: newSubtaskData.description || '',
details: newSubtaskData.details || '',
status: newSubtaskData.status || 'pending',
dependencies: newSubtaskData.dependencies || [],
parentTaskId: parentIdNum
};

// Add to parent's subtasks
parentTask.subtasks.push(newSubtask);
} else {
throw new Error('Either existingTaskId or newSubtaskData must be provided');
}

// Write the updated tasks back to the file
mockWriteJSON(tasksPath, data);

// Generate task files if requested
if (generateFiles) {
mockGenerateTaskFiles(tasksPath, path.dirname(tasksPath));
}

return newSubtask;
// Mock dependencies before importing the module
const mockUtils = {
readJSON: jest.fn(),
writeJSON: jest.fn(),
log: jest.fn(),
getCurrentTag: jest.fn()
};
const mockTaskManager = {
isTaskDependentOn: jest.fn()
};
const mockGenerateTaskFiles = jest.fn();

jest.unstable_mockModule(
'../../../../../scripts/modules/utils.js',
() => mockUtils
);
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager.js',
() => mockTaskManager
);
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager/generate-task-files.js',
() => ({
default: mockGenerateTaskFiles
})
);

const addSubtask = (
await import('../../../../../scripts/modules/task-manager/add-subtask.js')
).default;

describe('addSubtask function', () => {
// Reset mocks before each test
const multiTagData = {
master: {
tasks: [{ id: 1, title: 'Master Task', subtasks: [] }],
metadata: { description: 'Master tasks' }
},
'feature-branch': {
tasks: [{ id: 1, title: 'Feature Task', subtasks: [] }],
metadata: { description: 'Feature tasks' }
}
};

beforeEach(() => {
jest.clearAllMocks();

// Default mock implementations
mockReadJSON.mockImplementation(() => ({
tasks: [
{
id: 1,
title: 'Parent Task',
description: 'This is a parent task',
status: 'pending',
dependencies: []
},
{
id: 2,
title: 'Existing Task',
description: 'This is an existing task',
status: 'pending',
dependencies: []
},
{
id: 3,
title: 'Another Task',
description: 'This is another task',
status: 'pending',
dependencies: [1]
}
]
}));

// Setup success write response
mockWriteJSON.mockImplementation((path, data) => {
return data;
mockTaskManager.isTaskDependentOn.mockReturnValue(false);
});

// Set up default behavior for dependency check
mockIsTaskDependentOn.mockReturnValue(false);
test('should add a new subtask and preserve other tags', async () => {
const context = { projectRoot: '/fake/root', tag: 'feature-branch' };
const newSubtaskData = { title: 'My New Subtask' };
mockUtils.readJSON.mockReturnValueOnce({
tasks: [{ id: 1, title: 'Feature Task', subtasks: [] }],
metadata: { description: 'Feature tasks' }
});

await addSubtask('tasks.json', '1', null, newSubtaskData, true, context);

expect(mockUtils.writeJSON).toHaveBeenCalledWith(
'tasks.json',
expect.any(Object),
'/fake/root',
'feature-branch'
);
const writtenData = mockUtils.writeJSON.mock.calls[0][1];
const parentTask = writtenData.tasks.find((t) => t.id === 1);
expect(parentTask.subtasks).toHaveLength(1);
expect(parentTask.subtasks[0].title).toBe('My New Subtask');
});

test('should add a new subtask to a parent task', async () => {
// Create new subtask data
const newSubtaskData = {
title: 'New Subtask',
description: 'This is a new subtask',
details: 'Implementation details for the subtask',
status: 'pending',
dependencies: []
};

// Execute the test version of addSubtask
const newSubtask = testAddSubtask(
'tasks/tasks.json',
1,
mockUtils.readJSON.mockReturnValueOnce({
tasks: [{ id: 1, title: 'Parent Task', subtasks: [] }]
});
const context = {};
const newSubtask = await addSubtask(
'tasks.json',
'1',
null,
newSubtaskData,
true
{ title: 'New Subtask' },
true,
context
);

// Verify readJSON was called with the correct path
expect(mockReadJSON).toHaveBeenCalledWith('tasks/tasks.json');

// Verify writeJSON was called with the correct path
expect(mockWriteJSON).toHaveBeenCalledWith(
'tasks/tasks.json',
expect.any(Object)
);

// Verify the subtask was created with correct data
expect(newSubtask).toBeDefined();
expect(newSubtask.id).toBe(1);
expect(newSubtask.title).toBe('New Subtask');
expect(newSubtask.parentTaskId).toBe(1);

// Verify generateTaskFiles was called
expect(mockUtils.writeJSON).toHaveBeenCalled();
const writeCallArgs = mockUtils.writeJSON.mock.calls[0][1]; // data is the second arg now
const parentTask = writeCallArgs.tasks.find((t) => t.id === 1);
expect(parentTask.subtasks).toHaveLength(1);
expect(parentTask.subtasks[0].title).toBe('New Subtask');
expect(mockGenerateTaskFiles).toHaveBeenCalled();
});

test('should convert an existing task to a subtask', async () => {
// Execute the test version of addSubtask to convert task 2 to a subtask of task 1
const convertedSubtask = testAddSubtask(
'tasks/tasks.json',
1,
2,
mockUtils.readJSON.mockReturnValueOnce({
tasks: [
{ id: 1, title: 'Parent Task', subtasks: [] },
{ id: 2, title: 'Existing Task 2', subtasks: [] }
]
});
const context = {};
const convertedSubtask = await addSubtask(
'tasks.json',
'1',
'2',
null,
true
true,
context
);

// Verify readJSON was called with the correct path
expect(mockReadJSON).toHaveBeenCalledWith('tasks/tasks.json');

// Verify writeJSON was called
expect(mockWriteJSON).toHaveBeenCalled();

// Verify the subtask was created with correct data
expect(convertedSubtask).toBeDefined();
expect(convertedSubtask.id).toBe(1);
expect(convertedSubtask.title).toBe('Existing Task');
expect(convertedSubtask.parentTaskId).toBe(1);

// Verify generateTaskFiles was called
expect(mockGenerateTaskFiles).toHaveBeenCalled();
expect(convertedSubtask.title).toBe('Existing Task 2');
expect(mockUtils.writeJSON).toHaveBeenCalled();
const writeCallArgs = mockUtils.writeJSON.mock.calls[0][1];
const parentTask = writeCallArgs.tasks.find((t) => t.id === 1);
expect(parentTask.subtasks).toHaveLength(1);
expect(parentTask.subtasks[0].title).toBe('Existing Task 2');
});

test('should throw an error if parent task does not exist', async () => {
// Create new subtask data
const newSubtaskData = {
title: 'New Subtask',
description: 'This is a new subtask'
};
mockUtils.readJSON.mockReturnValueOnce({
tasks: [{ id: 1, title: 'Task 1', subtasks: [] }]
});
const context = {};
await expect(
addSubtask(
'tasks.json',
'99',
null,
{ title: 'New Subtask' },
true,
context
)
).rejects.toThrow('Parent task with ID 99 not found');
});

// Override mockReadJSON for this specific test case
mockReadJSON.mockImplementationOnce(() => ({
test('should throw an error if trying to convert a non-existent task', async () => {
mockUtils.readJSON.mockReturnValueOnce({
tasks: [{ id: 1, title: 'Parent Task', subtasks: [] }]
});
const context = {};
await expect(
addSubtask('tasks.json', '1', '99', null, true, context)
).rejects.toThrow('Task with ID 99 not found');
});

test('should throw an error for circular dependency', async () => {
mockUtils.readJSON.mockReturnValueOnce({
tasks: [
{
id: 1,
title: 'Task 1',
status: 'pending'
}
{ id: 1, title: 'Parent Task', subtasks: [] },
{ id: 2, title: 'Child Task', subtasks: [] }
]
}));

// Expect an error when trying to add a subtask to a non-existent parent
expect(() =>
testAddSubtask('tasks/tasks.json', 999, null, newSubtaskData)
).toThrow(/Parent task with ID 999 not found/);

// Verify writeJSON was not called
expect(mockWriteJSON).not.toHaveBeenCalled();
});

test('should throw an error if existing task does not exist', async () => {
// Expect an error when trying to convert a non-existent task
expect(() => testAddSubtask('tasks/tasks.json', 1, 999, null)).toThrow(
/Task with ID 999 not found/
mockTaskManager.isTaskDependentOn.mockImplementation(
(tasks, parentTask, existingTaskIdNum) => {
return parentTask.id === 1 && existingTaskIdNum === 2;
}
);

// Verify writeJSON was not called
expect(mockWriteJSON).not.toHaveBeenCalled();
});

test('should throw an error if trying to create a circular dependency', async () => {
// Force the isTaskDependentOn mock to return true for this test only
mockIsTaskDependentOn.mockReturnValueOnce(true);

// Expect an error when trying to create a circular dependency
expect(() => testAddSubtask('tasks/tasks.json', 3, 1, null)).toThrow(
/circular dependency/
const context = {};
await expect(
addSubtask('tasks.json', '1', '2', null, true, context)
).rejects.toThrow(
'Cannot create circular dependency: task 1 is already a subtask or dependent of task 2'
);

// Verify writeJSON was not called
expect(mockWriteJSON).not.toHaveBeenCalled();
});

test('should not regenerate task files if generateFiles is false', async () => {
// Create new subtask data
const newSubtaskData = {
title: 'New Subtask',
description: 'This is a new subtask'
};

// Execute the test version of addSubtask with generateFiles = false
testAddSubtask('tasks/tasks.json', 1, null, newSubtaskData, false);

// Verify writeJSON was called
expect(mockWriteJSON).toHaveBeenCalled();

// Verify task files were not regenerated
expect(mockGenerateTaskFiles).not.toHaveBeenCalled();
});
});
@@ -258,7 +258,9 @@ describe('addTask', () => {
})
])
})
})
}),
'/mock/project/root', // projectRoot parameter
'master' // tag parameter
);
expect(result).toEqual(
expect.objectContaining({
@@ -299,7 +301,9 @@ describe('addTask', () => {
})
])
})
})
}),
'/mock/project/root', // projectRoot parameter
'master' // tag parameter
);
});

@@ -334,7 +338,9 @@ describe('addTask', () => {
})
])
})
})
}),
'/mock/project/root', // projectRoot parameter
'master' // tag parameter
);
expect(context.mcpLog.warn).toHaveBeenCalledWith(
expect.stringContaining(
@@ -366,7 +372,9 @@ describe('addTask', () => {
})
])
})
})
}),
'/mock/project/root', // projectRoot parameter
'master' // tag parameter
);
});

@@ -401,7 +409,9 @@ describe('addTask', () => {
})
])
})
})
}),
'/mock/project/root', // projectRoot parameter
'master' // tag parameter
);
});
@@ -2,6 +2,10 @@
* Tests for the analyze-task-complexity.js module
*/
import { jest } from '@jest/globals';
import {
createGetTagAwareFilePathMock,
createSlugifyTagForFilePathMock
} from './setup.js';

// Mock the dependencies before importing the module under test
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
@@ -32,6 +36,8 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
ensureTagMetadata: jest.fn((tagObj) => tagObj),
getCurrentTag: jest.fn(() => 'master'),
flattenTasksWithSubtasks: jest.fn((tasks) => tasks),
getTagAwareFilePath: createGetTagAwareFilePathMock(),
slugifyTagForFilePath: createSlugifyTagForFilePathMock(),
markMigrationForNotice: jest.fn(),
performCompleteTagMigration: jest.fn(),
setTasksForTag: jest.fn(),

@@ -3,6 +3,10 @@
*/
import { jest } from '@jest/globals';
import fs from 'fs';
import {
createGetTagAwareFilePathMock,
createSlugifyTagForFilePathMock
} from './setup.js';

// Mock the dependencies before importing the module under test
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
@@ -36,6 +40,8 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
}
return allTasks;
}),
getTagAwareFilePath: createGetTagAwareFilePathMock(),
slugifyTagForFilePath: createSlugifyTagForFilePathMock(),
readComplexityReport: jest.fn(),
markMigrationForNotice: jest.fn(),
performCompleteTagMigration: jest.fn(),
@@ -116,7 +122,8 @@ jest.unstable_mockModule(
'../../../../../scripts/modules/config-manager.js',
() => ({
getDefaultSubtasks: jest.fn(() => 3),
getDebugFlag: jest.fn(() => false)
getDebugFlag: jest.fn(() => false),
getDefaultNumTasks: jest.fn(() => 10)
})
);
@@ -193,6 +200,10 @@ const generateTaskFiles = (
)
).default;

const { getDefaultSubtasks } = await import(
'../../../../../scripts/modules/config-manager.js'
);

// Import the module under test
const { default: expandTask } = await import(
'../../../../../scripts/modules/task-manager/expand-task.js'
@@ -649,6 +660,61 @@ describe('expandTask', () => {
});
});

describe('Complexity Report Integration (Tag-Specific)', () => {
test('should use tag-specific complexity report when available', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '1'; // Task in feature-branch
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root',
tag: 'feature-branch'
};

// Stub fs.existsSync to simulate complexity report exists for this tag
const existsSpy = jest
.spyOn(fs, 'existsSync')
.mockImplementation((filepath) =>
filepath.endsWith('task-complexity-report_feature-branch.json')
);

// Stub readJSON to return complexity report when reading the report path
readJSON.mockImplementation((filepath, projectRootParam, tagParam) => {
if (filepath.includes('task-complexity-report_feature-branch.json')) {
return {
complexityAnalysis: [
{
taskId: 1,
complexityScore: 8,
recommendedSubtasks: 5,
reasoning: 'Needs five detailed steps',
expansionPrompt: 'Please break this task into 5 parts'
}
]
};
}
// Default tasks data for tasks.json
const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
const selectedTag = tagParam || 'master';
return {
...sampleTasksCopy[selectedTag],
tag: selectedTag,
_rawTaggedData: sampleTasksCopy
};
});

// Act
await expandTask(tasksPath, taskId, undefined, false, '', context, false);

// Assert - generateTextService called with systemPrompt for 5 subtasks
const callArg = generateTextService.mock.calls[0][0];
expect(callArg.systemPrompt).toContain('Generate exactly 5 subtasks');

// Clean up stub
existsSpy.mockRestore();
});
});

describe('Error Handling', () => {
test('should handle non-existent task ID', async () => {
// Arrange
@@ -885,4 +951,120 @@ describe('expandTask', () => {
);
});
});

describe('Dynamic Subtask Generation', () => {
const tasksPath = 'tasks/tasks.json';
const taskId = 1;
const context = { session: null, mcpLog: null };

beforeEach(() => {
// Reset all mocks
jest.clearAllMocks();

// Setup default mocks
readJSON.mockReturnValue({
tasks: [
{
id: 1,
title: 'Test Task',
description: 'A test task',
status: 'pending',
subtasks: []
}
]
});

findTaskById.mockReturnValue({
id: 1,
title: 'Test Task',
description: 'A test task',
status: 'pending',
subtasks: []
});

findProjectRoot.mockReturnValue('/mock/project/root');
});

test('should accept 0 as valid numSubtasks value for dynamic generation', async () => {
// Act - Call with numSubtasks=0 (should not throw error)
const result = await expandTask(
tasksPath,
taskId,
0,
false,
'',
context,
false
);

// Assert - Should complete successfully
expect(result).toBeDefined();
expect(generateTextService).toHaveBeenCalled();
});

test('should use dynamic prompting when numSubtasks is 0', async () => {
// Act
await expandTask(tasksPath, taskId, 0, false, '', context, false);

// Assert - Verify generateTextService was called
expect(generateTextService).toHaveBeenCalled();

// Get the call arguments to verify the system prompt
const callArgs = generateTextService.mock.calls[0][0];
expect(callArgs.systemPrompt).toContain(
'an appropriate number of specific subtasks'
);
});

test('should use specific count prompting when numSubtasks is positive', async () => {
// Act
await expandTask(tasksPath, taskId, 5, false, '', context, false);

// Assert - Verify generateTextService was called
expect(generateTextService).toHaveBeenCalled();

// Get the call arguments to verify the system prompt
const callArgs = generateTextService.mock.calls[0][0];
expect(callArgs.systemPrompt).toContain('5 specific subtasks');
});

test('should reject negative numSubtasks values and fallback to default', async () => {
// Mock getDefaultSubtasks to return a specific value
getDefaultSubtasks.mockReturnValue(4);

// Act
await expandTask(tasksPath, taskId, -3, false, '', context, false);

// Assert - Should use default value instead of negative
expect(generateTextService).toHaveBeenCalled();
const callArgs = generateTextService.mock.calls[0][0];
expect(callArgs.systemPrompt).toContain('4 specific subtasks');
});

test('should use getDefaultSubtasks when numSubtasks is undefined', async () => {
// Mock getDefaultSubtasks to return a specific value
getDefaultSubtasks.mockReturnValue(6);

// Act - Call without specifying numSubtasks (undefined)
await expandTask(tasksPath, taskId, undefined, false, '', context, false);

// Assert - Should use default value
expect(generateTextService).toHaveBeenCalled();
const callArgs = generateTextService.mock.calls[0][0];
expect(callArgs.systemPrompt).toContain('6 specific subtasks');
});

test('should use getDefaultSubtasks when numSubtasks is null', async () => {
// Mock getDefaultSubtasks to return a specific value
getDefaultSubtasks.mockReturnValue(7);

// Act - Call with null numSubtasks
await expandTask(tasksPath, taskId, null, false, '', context, false);

// Assert - Should use default value
expect(generateTextService).toHaveBeenCalled();
const callArgs = generateTextService.mock.calls[0][0];
expect(callArgs.systemPrompt).toContain('7 specific subtasks');
});
});
});
@@ -47,7 +47,8 @@ jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
jest.unstable_mockModule(
'../../../../../scripts/modules/config-manager.js',
() => ({
getDebugFlag: jest.fn(() => false)
getDebugFlag: jest.fn(() => false),
getDefaultNumTasks: jest.fn(() => 10)
})
);

@@ -94,13 +95,15 @@ jest.unstable_mockModule('path', () => ({
}));

// Import the mocked modules
const { readJSON, writeJSON, log, promptYesNo } = await import(
const { readJSON, promptYesNo } = await import(
'../../../../../scripts/modules/utils.js'
);

const { generateObjectService } = await import(
'../../../../../scripts/modules/ai-services-unified.js'
);

// Note: getDefaultNumTasks validation happens at CLI/MCP level, not in the main parse-prd module
const generateTaskFiles = (
await import(
'../../../../../scripts/modules/task-manager/generate-task-files.js'
@@ -433,4 +436,123 @@ describe('parsePRD', () => {
// Verify prompt was NOT called with append flag
expect(promptYesNo).not.toHaveBeenCalled();
});

describe('Dynamic Task Generation', () => {
test('should use dynamic prompting when numTasks is 0', async () => {
// Setup mocks to simulate normal conditions (no existing output file)
fs.default.existsSync.mockImplementation((p) => {
if (p === 'tasks/tasks.json') return false; // Output file doesn't exist
if (p === 'tasks') return true; // Directory exists
return false;
});

// Call the function with numTasks=0 for dynamic generation
await parsePRD('path/to/prd.txt', 'tasks/tasks.json', 0);

// Verify generateObjectService was called
expect(generateObjectService).toHaveBeenCalled();

// Get the call arguments to verify the prompt
const callArgs = generateObjectService.mock.calls[0][0];
expect(callArgs.prompt).toContain('an appropriate number of');
expect(callArgs.prompt).not.toContain('approximately 0');
});

test('should use specific count prompting when numTasks is positive', async () => {
// Setup mocks to simulate normal conditions (no existing output file)
fs.default.existsSync.mockImplementation((p) => {
if (p === 'tasks/tasks.json') return false; // Output file doesn't exist
if (p === 'tasks') return true; // Directory exists
return false;
});

// Call the function with specific numTasks
await parsePRD('path/to/prd.txt', 'tasks/tasks.json', 5);

// Verify generateObjectService was called
expect(generateObjectService).toHaveBeenCalled();

// Get the call arguments to verify the prompt
const callArgs = generateObjectService.mock.calls[0][0];
expect(callArgs.prompt).toContain('approximately 5');
expect(callArgs.prompt).not.toContain('an appropriate number of');
});

test('should accept 0 as valid numTasks value', async () => {
// Setup mocks to simulate normal conditions (no existing output file)
fs.default.existsSync.mockImplementation((p) => {
if (p === 'tasks/tasks.json') return false; // Output file doesn't exist
if (p === 'tasks') return true; // Directory exists
return false;
});

// Call the function with numTasks=0 - should not throw error
const result = await parsePRD('path/to/prd.txt', 'tasks/tasks.json', 0);

// Verify it completed successfully
expect(result).toEqual({
success: true,
tasksPath: 'tasks/tasks.json',
telemetryData: {}
});
});

test('should use dynamic prompting when numTasks is negative (no validation in main module)', async () => {
// Setup mocks to simulate normal conditions (no existing output file)
fs.default.existsSync.mockImplementation((p) => {
if (p === 'tasks/tasks.json') return false; // Output file doesn't exist
if (p === 'tasks') return true; // Directory exists
return false;
});

// Call the function with negative numTasks
// Note: The main parse-prd.js module doesn't validate numTasks - validation happens at CLI/MCP level
await parsePRD('path/to/prd.txt', 'tasks/tasks.json', -5);

// Verify generateObjectService was called
expect(generateObjectService).toHaveBeenCalled();
const callArgs = generateObjectService.mock.calls[0][0];
// Negative values are treated as <= 0, so should use dynamic prompting
expect(callArgs.prompt).toContain('an appropriate number of');
expect(callArgs.prompt).not.toContain('approximately -5');
});
});

describe('Configuration Integration', () => {
test('should use dynamic prompting when numTasks is null', async () => {
// Setup mocks to simulate normal conditions (no existing output file)
fs.default.existsSync.mockImplementation((p) => {
if (p === 'tasks/tasks.json') return false; // Output file doesn't exist
if (p === 'tasks') return true; // Directory exists
return false;
});

// Call the function with null numTasks
await parsePRD('path/to/prd.txt', 'tasks/tasks.json', null);

// Verify generateObjectService was called with dynamic prompting
expect(generateObjectService).toHaveBeenCalled();
const callArgs = generateObjectService.mock.calls[0][0];
expect(callArgs.prompt).toContain('an appropriate number of');
});

test('should use dynamic prompting when numTasks is invalid string', async () => {
// Setup mocks to simulate normal conditions (no existing output file)
fs.default.existsSync.mockImplementation((p) => {
if (p === 'tasks/tasks.json') return false; // Output file doesn't exist
if (p === 'tasks') return true; // Directory exists
return false;
});

// Call the function with invalid numTasks (string that's not a number)
await parsePRD('path/to/prd.txt', 'tasks/tasks.json', 'invalid');

// Verify generateObjectService was called with dynamic prompting
// Note: The main module doesn't validate - it just uses the value as-is
// Since 'invalid' > 0 is false, it uses dynamic prompting
expect(generateObjectService).toHaveBeenCalled();
const callArgs = generateObjectService.mock.calls[0][0];
expect(callArgs.prompt).toContain('an appropriate number of');
});
});
});
@@ -247,7 +247,9 @@ describe('setTaskStatus', () => {
expect.objectContaining({ id: 2, status: 'done' })
])
})
})
}),
undefined,
'master'
);
// expect(generateTaskFiles).toHaveBeenCalledWith(
// tasksPath,
@@ -287,7 +289,9 @@ describe('setTaskStatus', () => {
})
])
})
})
}),
undefined,
'master'
);
});

@@ -318,7 +322,9 @@ describe('setTaskStatus', () => {
expect.objectContaining({ id: 2, status: 'done' })
])
})
})
}),
undefined,
'master'
);
});

@@ -354,7 +360,9 @@ describe('setTaskStatus', () => {
})
])
})
})
}),
undefined,
'master'
);
});

@@ -524,4 +532,45 @@ describe('setTaskStatus', () => {
);
expect(result).toBeDefined();
});

// Regression test to ensure tag preservation when updating in multi-tag environment
test('should preserve other tags when updating task status', async () => {
// Arrange
const multiTagData = {
master: JSON.parse(JSON.stringify(sampleTasks.master)),
'feature-branch': {
tasks: [
{ id: 10, title: 'FB Task', status: 'pending', dependencies: [] }
],
metadata: { description: 'Feature branch tasks' }
}
};
const tasksPath = '/mock/path/tasks.json';

readJSON.mockReturnValue({
...multiTagData.master, // resolved view not used
tag: 'master',
_rawTaggedData: multiTagData
});

// Act
await setTaskStatus(tasksPath, '1', 'done', {
mcpLog: { info: jest.fn() }
});

// Assert: writeJSON should be called with data containing both tags intact
const writeArgs = writeJSON.mock.calls[0];
expect(writeArgs[0]).toBe(tasksPath);
const writtenData = writeArgs[1];
expect(writtenData).toHaveProperty('master');
expect(writtenData).toHaveProperty('feature-branch');
// master task updated
const updatedTask = writtenData.master.tasks.find((t) => t.id === 1);
expect(updatedTask.status).toBe('done');
// feature-branch untouched
expect(writtenData['feature-branch'].tasks[0].status).toBe('pending');
// ensure additional args (projectRoot undefined, tag 'master') present
expect(writeArgs[2]).toBeUndefined();
expect(writeArgs[3]).toBe('master');
});
});
@@ -119,3 +119,45 @@ export const setupCommonMocks = () => {

// Helper to create a deep copy of objects to avoid test pollution
export const cloneData = (data) => JSON.parse(JSON.stringify(data));

/**
* Shared mock implementation for getTagAwareFilePath that matches the actual implementation.
* This ensures consistent behavior across all test files, particularly regarding projectRoot handling.
*
* The key difference from previous inconsistent implementations was that some tests were not
* properly handling the projectRoot parameter, leading to different behaviors between test files.
*
* @param {string} basePath - The base file path
* @param {string|null} tag - The tag name (null, undefined, or 'master' uses base path)
* @param {string} [projectRoot='.'] - The project root directory
* @returns {string} The resolved file path
*/
export const createGetTagAwareFilePathMock = () => {
return jest.fn((basePath, tag, projectRoot = '.') => {
// Handle projectRoot consistently - this was the key fix
const fullPath = projectRoot ? `${projectRoot}/${basePath}` : basePath;

if (!tag || tag === 'master') {
return fullPath;
}

// Mock the slugification behavior (matches actual implementation)
const slugifiedTag = tag.replace(/[^a-zA-Z0-9_-]/g, '-').toLowerCase();
const idx = fullPath.lastIndexOf('.');
return `${fullPath.slice(0, idx)}_${slugifiedTag}${fullPath.slice(idx)}`;
});
};
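// Illustrative usage of the shared mock above (not part of the diff itself; the example
// paths are made up, but the mappings follow the mock code directly):
//   createGetTagAwareFilePathMock()('.taskmaster/reports/report.json', 'master', '/repo')
//     -> '/repo/.taskmaster/reports/report.json'
//   createGetTagAwareFilePathMock()('.taskmaster/reports/report.json', 'feature/auth', '/repo')
//     -> '/repo/.taskmaster/reports/report_feature-auth.json'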
/**
* Shared mock implementation for slugifyTagForFilePath that matches the actual implementation
* @param {string} tagName - The tag name to slugify
* @returns {string} Slugified tag name safe for filesystem use
*/
export const createSlugifyTagForFilePathMock = () => {
return jest.fn((tagName) => {
if (!tagName || typeof tagName !== 'string') {
return 'unknown-tag';
}
return tagName.replace(/[^a-zA-Z0-9_-]/g, '-').toLowerCase();
});
};
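// Illustrative mappings for the slugify mock above (not part of the diff itself):
//   createSlugifyTagForFilePathMock()('feature/user-auth') -> 'feature-user-auth'
//   createSlugifyTagForFilePathMock()(null)                -> 'unknown-tag'
// Note that, unlike the real slugifyTagForFilePath exercised later in utils.test.js, this
// mock does not collapse repeated hyphens; it only replaces disallowed characters and lowercases.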
@@ -165,7 +165,11 @@ describe('updateTasks', () => {

// Assert
// 1. Read JSON called
expect(readJSON).toHaveBeenCalledWith(mockTasksPath, '/mock/path');
expect(readJSON).toHaveBeenCalledWith(
mockTasksPath,
'/mock/path',
'master'
);

// 2. AI Service called with correct args
expect(generateTextService).toHaveBeenCalledWith(expect.any(Object));
@@ -183,7 +187,9 @@ describe('updateTasks', () => {
])
})
})
})
}),
'/mock/path',
'master'
);

// 4. Check return value
@@ -228,7 +234,11 @@ describe('updateTasks', () => {
);

// Assert
expect(readJSON).toHaveBeenCalledWith(mockTasksPath, '/mock/path');
expect(readJSON).toHaveBeenCalledWith(
mockTasksPath,
'/mock/path',
'master'
);
expect(generateTextService).not.toHaveBeenCalled();
expect(writeJSON).not.toHaveBeenCalled();
expect(log).toHaveBeenCalledWith(
@@ -239,4 +249,113 @@ describe('updateTasks', () => {
// Should return early with no updates
expect(result).toBeUndefined();
});

test('should preserve all tags when updating tasks in tagged context', async () => {
// Arrange - Simple 2-tag structure to test tag corruption fix
const mockTasksPath = '/mock/path/tasks.json';
const mockFromId = 1;
const mockPrompt = 'Update master tag tasks';

const mockTaggedData = {
master: {
tasks: [
{
id: 1,
title: 'Master Task',
status: 'pending',
details: 'Old details'
},
{
id: 2,
title: 'Master Task 2',
status: 'done',
details: 'Done task'
}
],
metadata: {
created: '2024-01-01T00:00:00.000Z',
description: 'Master tag tasks'
}
},
'feature-branch': {
tasks: [
{
id: 1,
title: 'Feature Task',
status: 'pending',
details: 'Feature work'
}
],
metadata: {
created: '2024-01-02T00:00:00.000Z',
description: 'Feature branch tasks'
}
}
};

const mockUpdatedTasks = [
{
id: 1,
title: 'Updated Master Task',
status: 'pending',
details: 'Updated details',
description: 'Updated description',
dependencies: [],
priority: 'medium',
testStrategy: 'Test the updated functionality',
subtasks: []
}
];

// Configure mocks - readJSON returns resolved view for master tag
readJSON.mockReturnValue({
...mockTaggedData.master,
tag: 'master',
_rawTaggedData: mockTaggedData
});

generateTextService.mockResolvedValue({
mainResult: JSON.stringify(mockUpdatedTasks),
telemetryData: { commandName: 'update-tasks', totalCost: 0.05 }
});

// Act
const result = await updateTasks(
mockTasksPath,
mockFromId,
mockPrompt,
false, // research
{ projectRoot: '/mock/project/root', tag: 'master' },
'json'
);

// Assert - CRITICAL: Both tags must be preserved (this would fail before the fix)
expect(writeJSON).toHaveBeenCalledWith(
mockTasksPath,
expect.objectContaining({
_rawTaggedData: expect.objectContaining({
master: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 1, title: 'Updated Master Task' }),
expect.objectContaining({ id: 2, title: 'Master Task 2' }) // Unchanged done task
])
}),
// CRITICAL: This tag would be missing/corrupted if the bug existed
'feature-branch': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 1, title: 'Feature Task' })
]),
metadata: expect.objectContaining({
description: 'Feature branch tasks'
})
})
})
}),
'/mock/project/root',
'master'
);

expect(result.success).toBe(true);
expect(result.updatedTasks).toEqual(mockUpdatedTasks);
});
});
tests/unit/scripts/modules/utils-tag-aware-paths.test.js (new file, 83 lines)
@@ -0,0 +1,83 @@
/**
* Test for getTagAwareFilePath utility function
* Tests the fix for Issue #850
*/

import { getTagAwareFilePath } from '../../../../scripts/modules/utils.js';
import path from 'path';

describe('getTagAwareFilePath utility function', () => {
const projectRoot = '/test/project';
const basePath = '.taskmaster/reports/task-complexity-report.json';

it('should return base path for master tag', () => {
const result = getTagAwareFilePath(basePath, 'master', projectRoot);
const expected = path.join(projectRoot, basePath);
expect(result).toBe(expected);
});

it('should return base path for null tag', () => {
const result = getTagAwareFilePath(basePath, null, projectRoot);
const expected = path.join(projectRoot, basePath);
expect(result).toBe(expected);
});

it('should return base path for undefined tag', () => {
const result = getTagAwareFilePath(basePath, undefined, projectRoot);
const expected = path.join(projectRoot, basePath);
expect(result).toBe(expected);
});

it('should return tag-specific path for non-master tag', () => {
const tag = 'feature-branch';
const result = getTagAwareFilePath(basePath, tag, projectRoot);
const expected = path.join(
projectRoot,
'.taskmaster/reports/task-complexity-report_feature-branch.json'
);
expect(result).toBe(expected);
});

it('should handle different file extensions', () => {
const csvBasePath = '.taskmaster/reports/export.csv';
const tag = 'dev-branch';
const result = getTagAwareFilePath(csvBasePath, tag, projectRoot);
const expected = path.join(
projectRoot,
'.taskmaster/reports/export_dev-branch.csv'
);
expect(result).toBe(expected);
});

it('should handle paths without extensions', () => {
const noExtPath = '.taskmaster/reports/summary';
const tag = 'test-tag';
const result = getTagAwareFilePath(noExtPath, tag, projectRoot);
// Since there's no extension, it should append the tag
const expected = path.join(
projectRoot,
'.taskmaster/reports/summary_test-tag'
);
expect(result).toBe(expected);
});

it('should use default project root when not provided', () => {
const tag = 'feature-tag';
const result = getTagAwareFilePath(basePath, tag);
const expected = path.join(
'.',
'.taskmaster/reports/task-complexity-report_feature-tag.json'
);
expect(result).toBe(expected);
});

it('should handle complex tag names with special characters', () => {
const tag = 'feature-user-auth-v2';
const result = getTagAwareFilePath(basePath, tag, projectRoot);
const expected = path.join(
projectRoot,
'.taskmaster/reports/task-complexity-report_feature-user-auth-v2.json'
);
expect(result).toBe(expected);
});
});
tests/unit/task-manager/tag-management.test.js (new file, 115 lines)
@@ -0,0 +1,115 @@
import fs from 'fs';
import path from 'path';
import {
createTag,
deleteTag,
renameTag,
copyTag,
tags as listTags
} from '../../../scripts/modules/task-manager/tag-management.js';

const TEMP_DIR = path.join(process.cwd(), '.tmp_tag_management_tests');
const TASKS_PATH = path.join(TEMP_DIR, 'tasks.json');

/**
* Helper to write an initial tagged tasks.json structure
*/
function writeInitialFile() {
const initialData = {
master: {
tasks: [{ id: 1, title: 'Initial Task', status: 'pending' }],
metadata: {
created: new Date().toISOString(),
description: 'Master tag'
}
}
};
fs.mkdirSync(TEMP_DIR, { recursive: true });
fs.writeFileSync(TASKS_PATH, JSON.stringify(initialData, null, 2));
}

describe('Tag Management – writeJSON context preservation', () => {
beforeEach(() => {
writeInitialFile();
});

afterEach(() => {
fs.rmSync(TEMP_DIR, { recursive: true, force: true });
});

it('createTag should not corrupt other tags', async () => {
await createTag(
TASKS_PATH,
'feature',
{ copyFromCurrent: true },
{ projectRoot: TEMP_DIR },
'json'
);

const data = JSON.parse(fs.readFileSync(TASKS_PATH, 'utf8'));
expect(data.master).toBeDefined();
expect(data.feature).toBeDefined();
});

it('renameTag should keep overall structure intact', async () => {
await createTag(
TASKS_PATH,
'oldtag',
{},
{ projectRoot: TEMP_DIR },
'json'
);

await renameTag(
TASKS_PATH,
'oldtag',
'newtag',
{},
{ projectRoot: TEMP_DIR },
'json'
);

const data = JSON.parse(fs.readFileSync(TASKS_PATH, 'utf8'));
expect(data.newtag).toBeDefined();
expect(data.oldtag).toBeUndefined();
});

it('copyTag then deleteTag preserves other tags', async () => {
await createTag(
TASKS_PATH,
'source',
{},
{ projectRoot: TEMP_DIR },
'json'
);

await copyTag(
TASKS_PATH,
'source',
'copy',
{},
{ projectRoot: TEMP_DIR },
'json'
);

await deleteTag(
TASKS_PATH,
'copy',
{ yes: true },
{ projectRoot: TEMP_DIR },
'json'
);

const tagsList = await listTags(
TASKS_PATH,
{},
{ projectRoot: TEMP_DIR },
'json'
);

const tagNames = tagsList.tags.map((t) => t.name);
expect(tagNames).toContain('master');
expect(tagNames).toContain('source');
expect(tagNames).not.toContain('copy');
});
});
@@ -22,10 +22,26 @@ jest.mock('fs', () => ({
}));

jest.mock('path', () => ({
join: jest.fn((dir, file) => `${dir}/${file}`),
join: jest.fn((...paths) => paths.join('/')),
dirname: jest.fn((filePath) => filePath.split('/').slice(0, -1).join('/')),
resolve: jest.fn((...paths) => paths.join('/')),
basename: jest.fn((filePath) => filePath.split('/').pop())
basename: jest.fn((filePath) => filePath.split('/').pop()),
parse: jest.fn((filePath) => {
const parts = filePath.split('/');
const fileName = parts[parts.length - 1];
const extIndex = fileName.lastIndexOf('.');
return {
dir: parts.length > 1 ? parts.slice(0, -1).join('/') : '',
name: extIndex > 0 ? fileName.substring(0, extIndex) : fileName,
ext: extIndex > 0 ? fileName.substring(extIndex) : '',
base: fileName
};
}),
format: jest.fn((pathObj) => {
const dir = pathObj.dir || '';
const base = pathObj.base || `${pathObj.name || ''}${pathObj.ext || ''}`;
return dir ? `${dir}/${base}` : base;
})
}));
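// Illustrative behaviour of the path.parse/path.format mocks above (not part of the diff
// itself; the sample path is made up, but the mapping follows the mock implementations):
//   parse('/a/b/c.json')                     -> { dir: '/a/b', name: 'c', ext: '.json', base: 'c.json' }
//   format({ dir: '/a/b', base: 'c.json' })  -> '/a/b/c.json'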
jest.mock('chalk', () => ({
@@ -72,7 +88,9 @@ import {
taskExists,
formatTaskId,
findCycles,
toKebabCase
toKebabCase,
slugifyTagForFilePath,
getTagAwareFilePath
} from '../../scripts/modules/utils.js';

// Import the mocked modules for use in tests
@@ -119,6 +137,8 @@ describe('Utils Module', () => {
beforeEach(() => {
// Clear all mocks before each test
jest.clearAllMocks();
// Restore the original path.join mock
jest.spyOn(path, 'join').mockImplementation((...paths) => paths.join('/'));
});

describe('truncate function', () => {
@@ -677,3 +697,51 @@ describe('CLI Flag Format Validation', () => {
});
});
});

test('slugifyTagForFilePath should create filesystem-safe tag names', () => {
expect(slugifyTagForFilePath('feature/user-auth')).toBe('feature-user-auth');
expect(slugifyTagForFilePath('Feature Branch')).toBe('feature-branch');
expect(slugifyTagForFilePath('test@special#chars')).toBe(
'test-special-chars'
);
expect(slugifyTagForFilePath('UPPERCASE')).toBe('uppercase');
expect(slugifyTagForFilePath('multiple---hyphens')).toBe('multiple-hyphens');
expect(slugifyTagForFilePath('--leading-trailing--')).toBe(
'leading-trailing'
);
expect(slugifyTagForFilePath('')).toBe('unknown-tag');
expect(slugifyTagForFilePath(null)).toBe('unknown-tag');
expect(slugifyTagForFilePath(undefined)).toBe('unknown-tag');
});

test('getTagAwareFilePath should use slugified tags in file paths', () => {
const basePath = '.taskmaster/reports/complexity-report.json';
const projectRoot = '/test/project';

// Master tag should not be slugified
expect(getTagAwareFilePath(basePath, 'master', projectRoot)).toBe(
'/test/project/.taskmaster/reports/complexity-report.json'
);

// Null/undefined tags should use base path
expect(getTagAwareFilePath(basePath, null, projectRoot)).toBe(
'/test/project/.taskmaster/reports/complexity-report.json'
);

// Regular tag should be slugified
expect(getTagAwareFilePath(basePath, 'feature-branch', projectRoot)).toBe(
'/test/project/.taskmaster/reports/complexity-report_feature-branch.json'
);

// Tag with special characters should be slugified
expect(getTagAwareFilePath(basePath, 'feature/user-auth', projectRoot)).toBe(
'/test/project/.taskmaster/reports/complexity-report_feature-user-auth.json'
);

// Tag with spaces and special characters
expect(
getTagAwareFilePath(basePath, 'Feature Branch @Test', projectRoot)
).toBe(
'/test/project/.taskmaster/reports/complexity-report_feature-branch-test.json'
);
});