Merge pull request #917 from eyaltoledano/next

This commit is contained in:
Ralph Khreish
2025-07-04 07:23:21 +03:00
committed by GitHub
111 changed files with 8827 additions and 2344 deletions


@@ -0,0 +1,12 @@
---
"task-master-ai": patch
---
Fix expand command preserving tagged task structure and preventing data corruption
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Ensure projectRoot is a string (potential WSL fix)


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix Cursor deeplink installation by providing copy-paste instructions for GitHub compatibility


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix bulk update tag corruption in tagged task lists


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Add support for additional Anthropic models running on Bedrock


@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---
Fix expand-task to use tag-specific complexity reports
The expand-task function now correctly uses complexity reports specific to the current tag context (e.g., task-complexity-report_feature-branch.json) instead of always using the default task-complexity-report.json file. This enables proper task expansion behavior when working with multiple tag contexts.
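The naming convention described above can be sketched as follows (a hypothetical helper for illustration only, not the project's actual code):

```javascript
// Illustrative sketch: derive the complexity-report filename for a tag.
// The default tag uses the plain report name; other tags get a suffix.
function complexityReportPath(tag) {
  return tag && tag !== 'master'
    ? `task-complexity-report_${tag}.json`
    : 'task-complexity-report.json';
}

console.log(complexityReportPath('feature-branch'));
// → task-complexity-report_feature-branch.json
```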


@@ -0,0 +1,8 @@
---
"task-master-ai": minor
---
Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
- For example:
- `OPENAI_BASE_URL`
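A hedged sketch of how such an override can be consumed (an illustrative helper, not Task Master's actual implementation):

```javascript
// Resolve a provider's base URL from the environment, falling back to a
// default. The env var name is derived from the provider name, e.g.
// OPENAI_BASE_URL for the "openai" provider.
function resolveBaseURL(provider, fallback) {
  const envKey = `${provider.toUpperCase()}_BASE_URL`;
  return process.env[envKey] || fallback;
}

process.env.OPENAI_BASE_URL = 'https://my-proxy.example.com/v1';
console.log(resolveBaseURL('openai', 'https://api.openai.com/v1'));
// → https://my-proxy.example.com/v1
```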


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Call rules interactive setup during init


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Update o3 model price


@@ -0,0 +1,17 @@
---
'task-master-ai': minor
---
Added comprehensive rule profile management:
**New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
**Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
**Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
**Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
**Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
**Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.
This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.
- Resolves #338


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix .gitignore missing trailing newline during project initialization


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Default to Cursor profile for MCP init when no rules specified


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Improves Amazon Bedrock support


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Adds support for gemini-cli as a provider, enabling free or subscription use through Google Accounts and paid Gemini Cloud Assist (GCA) subscriptions.


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix issues with task creation/update where subtasks are created with `id: <parent_task>.<subtask>` instead of just `id: <subtask>`


@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---
Fixes issue with expand CLI command "Complexity report not found"
- Closes #735
- Closes #728


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix data corruption issues by ensuring project root and tag information is properly passed through all command operations


@@ -0,0 +1,10 @@
---
"task-master-ai": minor
---
Make task-master more compatible with the "o" family models of OpenAI
Now works well with:
- o3
- o3-mini
- etc.

.changeset/pre.json Normal file

@@ -0,0 +1,23 @@
{
"mode": "exit",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.17.1"
},
"changesets": [
"bright-llamas-enter",
"huge-moose-prove",
"icy-dryers-hunt",
"lemon-deer-hide",
"modern-cats-pick",
"nasty-berries-tan",
"shy-groups-fly",
"sour-lions-check",
"spicy-teams-travel",
"stale-cameras-sin",
"swift-squids-sip",
"tiny-dogs-change",
"vast-plants-exist",
"wet-berries-dress"
]
}


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Add better support for python projects by adding `pyproject.toml` as a projectRoot marker


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Added option for the AI to determine the number of tasks required based entirely on complexity


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Store tasks in Git by default


@@ -0,0 +1,11 @@
---
"task-master-ai": patch
---
Improve provider validation system with clean constants structure
- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix a `task-master init` bug that occurred in certain environments


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Add advanced settings for Claude Code AI Provider


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Rename Roo Code Boomerang role to Orchestrator


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix a critical issue where subtask generation fails on gemini-2.5-pro unless the model is explicitly prompted to return the 'details' field as a string rather than an object


@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Support custom response language
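Based on the config diffs later in this PR, the new setting lives under `global` in `.taskmaster/config.json`; a minimal fragment:

```json
{
  "global": {
    "responseLanguage": "English"
  }
}
```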


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Improve MCP keys check in Cursor


@@ -0,0 +1,22 @@
---
"task-master-ai": minor
---
- **Git Worktree Detection:**
- Now properly skips Git initialization when inside existing Git worktree
- Prevents accidental nested repository creation
- **Flag System Overhaul:**
- `--git`/`--no-git` controls repository initialization
- `--aliases`/`--no-aliases` consistently manages shell alias creation
- `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
- `--dry-run` accurately previews all initialization behaviors
- **GitTasks Functionality:**
- New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
- New `--no-git-tasks` flag excludes task files from Git (default behavior)
- Supports both CLI and MCP interfaces with proper parameter passing
**Implementation Details:**
- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734
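The `.gitignore` handling described above can be sketched as follows (a hypothetical helper with assumed pattern names, not the project's actual code): with `--git-tasks` the task-file patterns are commented out so the files are tracked, and with `--no-git-tasks` they stay active so the files are ignored.

```javascript
// Toggle task-file patterns in a .gitignore body. Commented-out patterns
// mean the files ARE stored in Git; active patterns mean they are ignored.
const TASK_PATTERNS = ['tasks.json', 'tasks/'];

function applyGitTasks(gitignore, storeTasksInGit) {
  return gitignore
    .split('\n')
    .map((line) => {
      const bare = line.replace(/^#\s*/, '');
      if (!TASK_PATTERNS.includes(bare.trim())) return line; // untouched
      return storeTasksInGit ? `# ${bare}` : bare;
    })
    .join('\n');
}
```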


@@ -0,0 +1,22 @@
---
"task-master-ai": minor
---
Add Claude Code provider support
Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.
Key features:
- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
- Lazy loading ensures the provider only loads when requested
- Full integration with existing Task Master commands and workflows
- Comprehensive test coverage for reliability
- New --claude-code flag for the models command
Users can now configure Claude Code models with:
task-master models --set-main sonnet --claude-code
task-master models --set-research opus --claude-code
The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix rules command to use reliable project root detection like other commands


@@ -153,7 +153,7 @@ When users initialize Taskmaster on existing projects:
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master
The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
### Workflow Transition Examples


@@ -272,7 +272,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)


@@ -29,6 +29,8 @@
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com", "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"userId": "1234567890", "userId": "1234567890",
"azureBaseURL": "https://your-endpoint.azure.com/", "azureBaseURL": "https://your-endpoint.azure.com/",
"defaultTag": "master" "defaultTag": "master",
} "responseLanguage": "English"
},
"claudeCode": {}
} }


@@ -219,6 +219,110 @@
- [#789](https://github.com/eyaltoledano/claude-task-master/pull/789) [`8cde6c2`](https://github.com/eyaltoledano/claude-task-master/commit/8cde6c27087f401d085fe267091ae75334309d96) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix contextGatherer bug when adding a task `Cannot read properties of undefined (reading 'forEach')`
## 0.18.0-rc.0
### Minor Changes
- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
- For example:
- `OPENAI_BASE_URL`
- [#460](https://github.com/eyaltoledano/claude-task-master/pull/460) [`a09a2d0`](https://github.com/eyaltoledano/claude-task-master/commit/a09a2d0967a10276623e3f3ead3ed577c15ce62f) Thanks [@joedanz](https://github.com/joedanz)! - Added comprehensive rule profile management:
**New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
**Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
**Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
**Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
**Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
**Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.
This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.
- Resolves #338
- [#804](https://github.com/eyaltoledano/claude-task-master/pull/804) [`1b8c320`](https://github.com/eyaltoledano/claude-task-master/commit/1b8c320c570473082f1eb4bf9628bff66e799092) Thanks [@ejones40](https://github.com/ejones40)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker
- [#743](https://github.com/eyaltoledano/claude-task-master/pull/743) [`a2a3229`](https://github.com/eyaltoledano/claude-task-master/commit/a2a3229fd01e24a5838f11a3938a77250101e184) Thanks [@joedanz](https://github.com/joedanz)! - - **Git Worktree Detection:**
- Now properly skips Git initialization when inside existing Git worktree
- Prevents accidental nested repository creation
- **Flag System Overhaul:**
- `--git`/`--no-git` controls repository initialization
- `--aliases`/`--no-aliases` consistently manages shell alias creation
- `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
- `--dry-run` accurately previews all initialization behaviors
- **GitTasks Functionality:**
- New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
- New `--no-git-tasks` flag excludes task files from Git (default behavior)
- Supports both CLI and MCP interfaces with proper parameter passing
**Implementation Details:**
- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734
- [#829](https://github.com/eyaltoledano/claude-task-master/pull/829) [`4b0c9d9`](https://github.com/eyaltoledano/claude-task-master/commit/4b0c9d9af62d00359fca3f43283cf33223d410bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code provider support
Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.
Key features:
- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
- Lazy loading ensures the provider only loads when requested
- Full integration with existing Task Master commands and workflows
- Comprehensive test coverage for reliability
- New --claude-code flag for the models command
Users can now configure Claude Code models with:
task-master models --set-main sonnet --claude-code
task-master models --set-research opus --claude-code
The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.
### Patch Changes
- [#827](https://github.com/eyaltoledano/claude-task-master/pull/827) [`5da5b59`](https://github.com/eyaltoledano/claude-task-master/commit/5da5b59bdeeb634dcb3adc7a9bc0fc37e004fa0c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected
- [#833](https://github.com/eyaltoledano/claude-task-master/pull/833) [`cf2c066`](https://github.com/eyaltoledano/claude-task-master/commit/cf2c06697a0b5b952fb6ca4b3c923e9892604d08) Thanks [@joedanz](https://github.com/joedanz)! - Call rules interactive setup during init
- [#826](https://github.com/eyaltoledano/claude-task-master/pull/826) [`7811227`](https://github.com/eyaltoledano/claude-task-master/commit/78112277b3caa4539e6e29805341a944799fb0e7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improves Amazon Bedrock support
- [#834](https://github.com/eyaltoledano/claude-task-master/pull/834) [`6483537`](https://github.com/eyaltoledano/claude-task-master/commit/648353794eb60d11ffceda87370a321ad310fbd7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with task creation/update where subtasks are created with `id: <parent_task>.<subtask>` instead of just `id: <subtask>`
- [#835](https://github.com/eyaltoledano/claude-task-master/pull/835) [`727f1ec`](https://github.com/eyaltoledano/claude-task-master/commit/727f1ec4ebcbdd82547784c4c113b666af7e122e) Thanks [@joedanz](https://github.com/joedanz)! - Store tasks in Git by default
- [#822](https://github.com/eyaltoledano/claude-task-master/pull/822) [`1bd6d4f`](https://github.com/eyaltoledano/claude-task-master/commit/1bd6d4f2468070690e152e6e63e15a57bc550d90) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure
- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`
- [#633](https://github.com/eyaltoledano/claude-task-master/pull/633) [`3a2325a`](https://github.com/eyaltoledano/claude-task-master/commit/3a2325a963fed82377ab52546eedcbfebf507a7e) Thanks [@nmarley](https://github.com/nmarley)! - Fix weird `task-master init` bug when using in certain environments
- [#831](https://github.com/eyaltoledano/claude-task-master/pull/831) [`b592dff`](https://github.com/eyaltoledano/claude-task-master/commit/b592dff8bc5c5d7966843fceaa0adf4570934336) Thanks [@joedanz](https://github.com/joedanz)! - Rename Roo Code Boomerang role to Orchestrator
- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve MCP keys check in Cursor
## 0.17.1
### Patch Changes
- [#789](https://github.com/eyaltoledano/claude-task-master/pull/789) [`8cde6c2`](https://github.com/eyaltoledano/claude-task-master/commit/8cde6c27087f401d085fe267091ae75334309d96) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix contextGatherer bug when adding a task `Cannot read properties of undefined (reading 'forEach')`
## 0.17.0
### Minor Changes


@@ -323,8 +323,11 @@ Here's a comprehensive reference of all available commands:
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>
# Limit the number of tasks generated (default is 10)
task-master parse-prd <prd-file.txt> --num-tasks=5
# Allow task master to determine the number of tasks based on complexity
task-master parse-prd <prd-file.txt> --num-tasks=0
```
### List Tasks
@@ -397,6 +400,9 @@ When marking a task as "done", all of its subtasks will automatically be marked
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>
# Expand a task with a dynamic number of subtasks (ignoring complexity report)
task-master expand --id=<id> --num=0
# Expand with additional context
task-master expand --id=<id> --prompt="<context>"


@@ -3,7 +3,7 @@
"main": { "main": {
"provider": "anthropic", "provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219", "modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000, "maxTokens": 100000,
"temperature": 0.2 "temperature": 0.2
}, },
"research": { "research": {
@@ -14,9 +14,9 @@
}, },
"fallback": { "fallback": {
"provider": "anthropic", "provider": "anthropic",
"modelId": "claude-3-5-sonnet-20240620", "modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 8192, "maxTokens": 8192,
"temperature": 0.1 "temperature": 0.2
} }
}, },
"global": { "global": {
@@ -28,6 +28,7 @@
"defaultTag": "master", "defaultTag": "master",
"ollamaBaseURL": "http://localhost:11434/api", "ollamaBaseURL": "http://localhost:11434/api",
"azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/", "azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com" "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"responseLanguage": "English"
} }
} }


@@ -153,7 +153,7 @@ When users initialize Taskmaster on existing projects:
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master
The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
### Workflow Transition Examples


@@ -271,7 +271,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)


@@ -8,8 +8,11 @@ Here's a comprehensive reference of all available commands:
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>
# Limit the number of tasks generated (default is 10)
task-master parse-prd <prd-file.txt> --num-tasks=5
# Allow task master to determine the number of tasks based on complexity
task-master parse-prd <prd-file.txt> --num-tasks=0
```
## List Tasks
@@ -128,6 +131,9 @@ When marking a task as "done", all of its subtasks will automatically be marked
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>
# Expand a task with a dynamic number of subtasks (ignoring complexity report)
task-master expand --id=<id> --num=0
# Expand with additional context
task-master expand --id=<id> --prompt="<context>"


@@ -36,6 +36,7 @@ Taskmaster uses two primary methods for configuration:
"global": { "global": {
"logLevel": "info", "logLevel": "info",
"debug": false, "debug": false,
"defaultNumTasks": 10,
"defaultSubtasks": 5, "defaultSubtasks": 5,
"defaultPriority": "medium", "defaultPriority": "medium",
"defaultTag": "master", "defaultTag": "master",
@@ -43,7 +44,8 @@ Taskmaster uses two primary methods for configuration:
"ollamaBaseURL": "http://localhost:11434/api", "ollamaBaseURL": "http://localhost:11434/api",
"azureBaseURL": "https://your-endpoint.azure.com/openai/deployments", "azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
"vertexProjectId": "your-gcp-project-id", "vertexProjectId": "your-gcp-project-id",
"vertexLocation": "us-central1" "vertexLocation": "us-central1",
"responseLanguage": "English"
} }
} }
``` ```


@@ -64,100 +64,81 @@ task-master set-status --id=task-001 --status=in-progress
```bash ```bash
npm install @anthropic-ai/claude-code npm install @anthropic-ai/claude-code
``` ```
3. No API key is required in your environment variables or MCP configuration 3. Run Claude Code for the first time and authenticate with your Anthropic account:
```bash
claude
```
4. No API key is required in your environment variables or MCP configuration
## Advanced Settings ## Advanced Settings
The Claude Code SDK supports additional settings that provide fine-grained control over Claude's behavior. While these settings are implemented in the underlying SDK (`src/ai-providers/custom-sdk/claude-code/`), they are not currently exposed through Task Master's standard API due to architectural constraints. The Claude Code SDK supports additional settings that provide fine-grained control over Claude's behavior. These settings are implemented in the underlying SDK (`src/ai-providers/custom-sdk/claude-code/`), and can be managed through Task Master's configuration file.
### Supported Settings ### Advanced Settings Usage
To update settings for Claude Code, update your `.taskmaster/config.json`:
The Claude Code settings can be specified globally in the `claudeCode` section of the config, or on a per-command basis in the `commandSpecific` section:
```javascript
{
	// "models" and "global" config...
	"claudeCode": {
		// Maximum conversation turns Claude can make in a single request
		"maxTurns": 5,

		// Custom system prompt to override Claude Code's default behavior
		"customSystemPrompt": "You are a helpful assistant focused on code quality",

		// Append additional content to the system prompt
		"appendSystemPrompt": "Always follow coding best practices",

		// Permission mode for file system operations
		"permissionMode": "default", // Options: "default", "acceptEdits", "plan", "bypassPermissions"

		// Explicitly allow only certain tools
		"allowedTools": ["Read", "LS"], // Claude can only read files and list directories

		// Explicitly disallow certain tools
		"disallowedTools": ["Write", "Edit"], // Prevent Claude from modifying files

		// MCP servers for additional tool integrations
		"mcpServers": {
			"mcp-server-name": {
				"command": "npx",
				"args": ["-y", "mcp-serve"],
				"env": {
					// ...
				}
			}
		}
	},

	// Command-specific settings override global settings
	"commandSpecific": {
		"parse-prd": {
			// Settings specific to the 'parse-prd' command
			"maxTurns": 10,
			"customSystemPrompt": "You are a task breakdown specialist"
		},
		"analyze-complexity": {
			// Settings specific to the 'analyze-complexity' command
			"maxTurns": 3,
			"appendSystemPrompt": "Focus on identifying bottlenecks"
		}
	}
}
```

- For a full list of Claude Code settings, see the [Claude Code Settings documentation](https://docs.anthropic.com/en/docs/claude-code/settings).
- For a full list of AI-powered command names, see `src/constants/commands.js`.
### Why These Settings Matter

- **maxTurns**: Useful for complex refactoring tasks that require multiple iterations
- **customSystemPrompt**: Allows specializing Claude for specific domains or coding standards
- **appendSystemPrompt**: Useful for enforcing coding standards or providing additional context
- **permissionMode**: Critical for security in production environments
- **allowedTools/disallowedTools**: Enable read-only analysis modes or restrict access to sensitive operations
- **mcpServers**: Future extensibility for custom tool integrations
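As a concrete illustration, a read-only analysis setup is a minimal sketch that uses only keys from the `claudeCode` section documented above (no other configuration is assumed):

```javascript
{
	"claudeCode": {
		// Read-only mode: Claude may inspect the project but not modify it
		"allowedTools": ["Read", "LS"],
		"disallowedTools": ["Write", "Edit"]
	}
}
```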
View File
@@ -1,10 +1,17 @@
# Available Models as of July 2, 2025

## Main Models

| Provider    | Model Name                                     | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock     | us.anthropic.claude-3-haiku-20240307-v1:0      | 0.4       | 0.25       | 1.25        |
| bedrock     | us.anthropic.claude-3-opus-20240229-v1:0       | 0.725     | 15         | 75          |
| bedrock     | us.anthropic.claude-3-5-sonnet-20240620-v1:0   | 0.49      | 3          | 15          |
| bedrock     | us.anthropic.claude-3-5-sonnet-20241022-v2:0   | 0.49      | 3          | 15          |
| bedrock     | us.anthropic.claude-3-7-sonnet-20250219-v1:0   | 0.623     | 3          | 15          |
| bedrock     | us.anthropic.claude-3-5-haiku-20241022-v1:0    | 0.4       | 0.8        | 4           |
| bedrock     | us.anthropic.claude-opus-4-20250514-v1:0       | 0.725     | 15         | 75          |
| bedrock     | us.anthropic.claude-sonnet-4-20250514-v1:0     | 0.727     | 3          | 15          |
| anthropic   | claude-sonnet-4-20250514                       | 0.727     | 3          | 15          |
| anthropic   | claude-opus-4-20250514                         | 0.725     | 15         | 75          |
| anthropic   | claude-3-7-sonnet-20250219                     | 0.623     | 3          | 15          |
@@ -67,11 +74,19 @@
| openrouter  | thudm/glm-4-32b:free                           | —         | 0          | 0           |
| claude-code | opus                                           | 0.725     | 0          | 0           |
| claude-code | sonnet                                         | 0.727     | 0          | 0           |
| gemini-cli  | gemini-2.5-pro                                 | 0.72      | 0          | 0           |
| gemini-cli  | gemini-2.5-flash                               | 0.71      | 0          | 0           |

## Research Models

| Provider    | Model Name                                    | SWE Score | Input Cost | Output Cost |
| ----------- | --------------------------------------------- | --------- | ---------- | ----------- |
| bedrock     | us.anthropic.claude-3-opus-20240229-v1:0      | 0.725     | 15         | 75          |
| bedrock     | us.anthropic.claude-3-5-sonnet-20240620-v1:0  | 0.49      | 3          | 15          |
| bedrock     | us.anthropic.claude-3-5-sonnet-20241022-v2:0  | 0.49      | 3          | 15          |
| bedrock     | us.anthropic.claude-3-7-sonnet-20250219-v1:0  | 0.623     | 3          | 15          |
| bedrock     | us.anthropic.claude-opus-4-20250514-v1:0      | 0.725     | 15         | 75          |
| bedrock     | us.anthropic.claude-sonnet-4-20250514-v1:0    | 0.727     | 3          | 15          |
| bedrock     | us.deepseek.r1-v1:0                           | —         | 1.35       | 5.4         |
| openai      | gpt-4o-search-preview                         | 0.33      | 2.5        | 10          |
| openai      | gpt-4o-mini-search-preview                    | 0.3       | 0.15       | 0.6         |
@@ -84,12 +99,21 @@
| xai         | grok-3-fast                                    | —         | 5          | 25          |
| claude-code | opus                                           | 0.725     | 0          | 0           |
| claude-code | sonnet                                         | 0.727     | 0          | 0           |
| gemini-cli  | gemini-2.5-pro                                 | 0.72      | 0          | 0           |
| gemini-cli  | gemini-2.5-flash                               | 0.71      | 0          | 0           |

## Fallback Models

| Provider    | Model Name                                     | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock     | us.anthropic.claude-3-haiku-20240307-v1:0      | 0.4       | 0.25       | 1.25        |
| bedrock     | us.anthropic.claude-3-opus-20240229-v1:0       | 0.725     | 15         | 75          |
| bedrock     | us.anthropic.claude-3-5-sonnet-20240620-v1:0   | 0.49      | 3          | 15          |
| bedrock     | us.anthropic.claude-3-5-sonnet-20241022-v2:0   | 0.49      | 3          | 15          |
| bedrock     | us.anthropic.claude-3-7-sonnet-20250219-v1:0   | 0.623     | 3          | 15          |
| bedrock     | us.anthropic.claude-3-5-haiku-20241022-v1:0    | 0.4       | 0.8        | 4           |
| bedrock     | us.anthropic.claude-opus-4-20250514-v1:0       | 0.725     | 15         | 75          |
| bedrock     | us.anthropic.claude-sonnet-4-20250514-v1:0     | 0.727     | 3          | 15          |
| anthropic   | claude-sonnet-4-20250514                       | 0.727     | 3          | 15          |
| anthropic   | claude-opus-4-20250514                         | 0.725     | 15         | 75          |
| anthropic   | claude-3-7-sonnet-20250219                     | 0.623     | 3          | 15          |
@@ -141,3 +165,5 @@
| openrouter  | thudm/glm-4-32b:free                           | —         | 0          | 0           |
| claude-code | opus                                           | 0.725     | 0          | 0           |
| claude-code | sonnet                                         | 0.727     | 0          | 0           |
| gemini-cli  | gemini-2.5-pro                                 | 0.72      | 0          | 0           |
| gemini-cli  | gemini-2.5-flash                               | 0.71      | 0          | 0           |
View File
@@ -0,0 +1,169 @@
# Gemini CLI Provider
The Gemini CLI provider allows you to use Google's Gemini models through the Gemini CLI tool, leveraging your existing Gemini subscription and OAuth authentication.
## Why Use Gemini CLI?
The primary benefit of using the `gemini-cli` provider is to leverage your existing Gemini Pro subscription or OAuth authentication configured through the Gemini CLI. This is ideal for users who:
- Have an active Gemini subscription
- Want to use OAuth authentication instead of managing API keys
- Have already configured authentication via `gemini auth login`
## Installation
The provider is already included in Task Master. However, you need to install the Gemini CLI tool:
```bash
# Install gemini CLI globally
npm install -g @google/gemini-cli
```
## Authentication
### Primary Method: CLI Authentication (Recommended)
The Gemini CLI provider is designed to use your pre-configured OAuth authentication:
```bash
# Authenticate with your Google account
gemini auth login
```
This will open a browser window for OAuth authentication. Once authenticated, Task Master will automatically use these credentials when you select the `gemini-cli` provider.
### Alternative Method: API Key
While the primary use case is OAuth authentication, you can also use an API key if needed:
```bash
export GEMINI_API_KEY="your-gemini-api-key"
```
**Note:** If you want to use API keys, consider using the standard `google` provider instead, as `gemini-cli` is specifically designed for OAuth/subscription users.
## Configuration
Configure `gemini-cli` as a provider using the Task Master models command:
```bash
# Set gemini-cli as your main provider with gemini-2.5-pro
task-master models --set-main gemini-2.5-pro --gemini-cli
# Or use the faster gemini-2.5-flash model
task-master models --set-main gemini-2.5-flash --gemini-cli
```
You can also manually edit your `.taskmaster/config/providers.json`:
```json
{
"main": {
"provider": "gemini-cli",
"model": "gemini-2.5-flash"
}
}
```
### Available Models
The gemini-cli provider supports only two models:
- `gemini-2.5-pro` - High performance model (1M token context window, 65,536 max output tokens)
- `gemini-2.5-flash` - Fast, efficient model (1M token context window, 65,536 max output tokens)
## Usage Examples
### Basic Usage
Once authenticated with `gemini auth login` and configured, simply use Task Master as normal:
```bash
# The provider will automatically use your OAuth credentials
task-master new "Create a hello world function"
```
### With Specific Parameters
Configure model parameters in your providers.json:
```json
{
"main": {
"provider": "gemini-cli",
"model": "gemini-2.5-pro",
"parameters": {
"maxTokens": 65536,
"temperature": 0.7
}
}
}
```
### As Fallback Provider
Use gemini-cli as a fallback when your primary provider is unavailable:
```json
{
"main": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-latest"
},
"fallback": {
"provider": "gemini-cli",
"model": "gemini-2.5-flash"
}
}
```
## Troubleshooting
### "Authentication failed" Error
If you get an authentication error:
1. **Primary solution**: Run `gemini auth login` to authenticate with your Google account
2. **Check authentication status**: Run `gemini auth status` to verify you're logged in
3. **If using API key** (not recommended): Ensure `GEMINI_API_KEY` is set correctly
### "Model not found" Error
The gemini-cli provider only supports two models:
- `gemini-2.5-pro`
- `gemini-2.5-flash`
If you need other Gemini models, use the standard `google` provider with an API key instead.
### Gemini CLI Not Found
If you get a "gemini: command not found" error:
```bash
# Install the Gemini CLI globally
npm install -g @google/gemini-cli
# Verify installation
gemini --version
```
### Custom Endpoints
Custom endpoints can be configured if needed:
```json
{
"main": {
"provider": "gemini-cli",
"model": "gemini-2.5-pro",
"baseURL": "https://custom-endpoint.example.com"
}
}
```
## Important Notes
- **OAuth vs API Key**: This provider is specifically designed for users who want to use OAuth authentication via `gemini auth login`. If you prefer using API keys, consider using the standard `google` provider instead.
- **Limited Model Support**: Only `gemini-2.5-pro` and `gemini-2.5-flash` are available through gemini-cli.
- **Subscription Benefits**: Using OAuth authentication allows you to leverage any subscription benefits associated with your Google account.
- The provider uses the `ai-sdk-provider-gemini-cli` npm package internally.
- Supports all standard Task Master features: text generation, streaming, and structured object generation.
View File
@@ -20,6 +20,8 @@ import {
 * @param {string} [args.status] - Status for new subtask (default: 'pending')
 * @param {string} [args.dependencies] - Comma-separated list of dependency IDs
 * @param {boolean} [args.skipGenerate] - Skip regenerating task files
 * @param {string} [args.projectRoot] - Project root directory
 * @param {string} [args.tag] - Tag for the task
 * @param {Object} log - Logger object
 * @returns {Promise<{success: boolean, data?: Object, error?: string}>}
 */
@@ -34,7 +36,9 @@ export async function addSubtaskDirect(args, log) {
		details,
		status,
		dependencies: dependenciesStr,
		skipGenerate,
		projectRoot,
		tag
	} = args;

	try {
		log.info(`Adding subtask with args: ${JSON.stringify(args)}`);
@@ -96,6 +100,8 @@ export async function addSubtaskDirect(args, log) {
		// Enable silent mode to prevent console logs from interfering with JSON response
		enableSilentMode();

		const context = { projectRoot, tag };

		// Case 1: Convert existing task to subtask
		if (existingTaskId) {
			log.info(`Converting task ${existingTaskId} to a subtask of ${parentId}`);
@@ -104,7 +110,8 @@ export async function addSubtaskDirect(args, log) {
				parentId,
				existingTaskId,
				null,
				generateFiles,
				context
			);

			// Restore normal logging
@@ -135,7 +142,8 @@ export async function addSubtaskDirect(args, log) {
				parentId,
				null,
				newSubtaskData,
				generateFiles,
				context
			);

			// Restore normal logging
View File
@@ -171,8 +171,8 @@ export async function expandTaskDirect(args, log, context = {}) {
			task.subtasks = [];
		}

		// Save tasks.json with potentially empty subtasks array and proper context
		writeJSON(tasksPath, data, projectRoot, tag);

		// Create logger wrapper using the utility
		const mcpLog = createLogWrapper(log);
View File
@@ -13,12 +13,14 @@ import fs from 'fs';
 * Fix invalid dependencies in tasks.json automatically
 * @param {Object} args - Function arguments
 * @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
 * @param {string} args.projectRoot - Project root directory
 * @param {string} args.tag - Tag for the project
 * @param {Object} log - Logger object
 * @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
 */
export async function fixDependenciesDirect(args, log) {
	// Destructure expected args
	const { tasksJsonPath, projectRoot, tag } = args;

	try {
		log.info(`Fixing invalid dependencies in tasks: ${tasksJsonPath}`);
@@ -51,8 +53,10 @@ export async function fixDependenciesDirect(args, log) {
		// Enable silent mode to prevent console logs from interfering with JSON response
		enableSilentMode();

		// Call the original command function using the provided path and proper context
		await fixDependenciesCommand(tasksPath, {
			context: { projectRoot, tag }
		});

		// Restore normal logging
		disableSilentMode();
@@ -61,7 +65,8 @@ export async function fixDependenciesDirect(args, log) {
			success: true,
			data: {
				message: 'Dependencies fixed successfully',
				tasksPath,
				tag: tag || 'master'
			}
		};
	} catch (error) {
View File
@@ -72,15 +72,16 @@ export async function initializeProjectDirect(args, log, context = {}) {
		yes: true // Force yes mode
	};

	// Handle rules option with MCP-specific defaults
	if (Array.isArray(args.rules) && args.rules.length > 0) {
		options.rules = args.rules;
		options.rulesExplicitlyProvided = true;
		log.info(`Including rules: ${args.rules.join(', ')}`);
	} else {
		// For MCP initialization, default to Cursor profile only
		options.rules = ['cursor'];
		options.rulesExplicitlyProvided = true;
		log.info(`No rule profiles specified, defaulting to: Cursor`);
	}

	log.info(`Initializing project with options: ${JSON.stringify(options)}`);
View File
@@ -109,7 +109,7 @@ export async function parsePRDDirect(args, log, context = {}) {
	if (numTasksArg) {
		numTasks =
			typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
		if (Number.isNaN(numTasks) || numTasks < 0) {
			// Ensure a valid, non-negative number (0 lets Taskmaster decide)
			numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
			logWrapper.warn(
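The effect of relaxing the check from `numTasks <= 0` to `numTasks < 0` can be sketched as a small standalone helper (hypothetical `normalizeNumTasks`; the real code inlines this logic): `0` now passes validation and signals Taskmaster to pick the task count itself, while negative or unparsable input still falls back to the default.

```javascript
// Hypothetical sketch of the numTasks handling above: 0 is now accepted
// (it tells Taskmaster to choose the task count from PRD complexity),
// while negative or unparsable values fall back to the project default.
function normalizeNumTasks(numTasksArg, defaultNumTasks = 10) {
	const numTasks =
		typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
	if (Number.isNaN(numTasks) || numTasks < 0) {
		return defaultNumTasks; // fallback when parsing fails or value is negative
	}
	return numTasks;
}

console.log(normalizeNumTasks('0')); // 0 — let Taskmaster decide
console.log(normalizeNumTasks('-3')); // 10 — falls back to the default
console.log(normalizeNumTasks(12)); // 12
```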
View File
@@ -20,12 +20,13 @@ import {
 * @param {Object} args - Command arguments
 * @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
 * @param {string} args.id - The ID(s) of the task(s) or subtask(s) to remove (comma-separated for multiple).
 * @param {string} [args.tag] - Tag context to operate on (defaults to current active tag).
 * @param {Object} log - Logger object
 * @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string } }
 */
export async function removeTaskDirect(args, log, context = {}) {
	// Destructure expected args
	const { tasksJsonPath, id, projectRoot, tag } = args;
	const { session } = context;

	try {
		// Check if tasksJsonPath was provided
@@ -56,17 +57,17 @@ export async function removeTaskDirect(args, log, context = {}) {
		const taskIdArray = id.split(',').map((taskId) => taskId.trim());

		log.info(
			`Removing ${taskIdArray.length} task(s) with ID(s): ${taskIdArray.join(', ')} from ${tasksJsonPath}${tag ? ` in tag '${tag}'` : ''}`
		);

		// Validate all task IDs exist before proceeding
		const data = readJSON(tasksJsonPath, projectRoot, tag);
		if (!data || !data.tasks) {
			return {
				success: false,
				error: {
					code: 'INVALID_TASKS_FILE',
					message: `No valid tasks found in ${tasksJsonPath}${tag ? ` for tag '${tag}'` : ''}`
				}
			};
		}
@@ -80,71 +81,49 @@ export async function removeTaskDirect(args, log, context = {}) {
				success: false,
				error: {
					code: 'INVALID_TASK_ID',
					message: `The following tasks were not found${tag ? ` in tag '${tag}'` : ''}: ${invalidTasks.join(', ')}`
				}
			};
		}

		// Enable silent mode to prevent console logs from interfering with JSON response
		enableSilentMode();

		try {
			// Call removeTask with proper context including tag
			const result = await removeTask(tasksJsonPath, id, {
				projectRoot,
				tag
			});

			if (!result.success) {
				return {
					success: false,
					error: {
						code: 'REMOVE_TASK_ERROR',
						message: result.error || 'Failed to remove tasks'
					}
				};
			}

			log.info(`Successfully removed ${result.removedTasks.length} task(s)`);

			return {
				success: true,
				data: {
					totalTasks: taskIdArray.length,
					successful: result.removedTasks.length,
					failed: taskIdArray.length - result.removedTasks.length,
					removedTasks: result.removedTasks,
					message: result.message,
					tasksPath: tasksJsonPath,
					tag: data.tag || tag || 'master'
				}
			};
		} finally {
			// Restore normal logging
			disableSilentMode();
		}
	} catch (error) {
		// Ensure silent mode is disabled even if an outer error occurs
		disableSilentMode();
View File
@@ -0,0 +1,40 @@
/**
* response-language.js
* Direct function for managing response language via MCP
*/
import { setResponseLanguage } from '../../../../scripts/modules/task-manager.js';
import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
export async function responseLanguageDirect(args, log, context = {}) {
const { projectRoot, language } = args;
const mcpLog = createLogWrapper(log);
log.info(
`Executing response-language_direct with args: ${JSON.stringify(args)}`
);
log.info(`Using project root: ${projectRoot}`);
try {
enableSilentMode();
return setResponseLanguage(language, {
mcpLog,
projectRoot
});
} catch (error) {
return {
success: false,
error: {
code: 'DIRECT_FUNCTION_ERROR',
message: error.message,
details: error.stack
}
};
} finally {
disableSilentMode();
}
}
View File
@@ -20,7 +20,8 @@ import { nextTaskDirect } from './next-task.js';
 */
export async function setTaskStatusDirect(args, log, context = {}) {
	// Destructure expected args, including the resolved tasksJsonPath and projectRoot
	const { tasksJsonPath, id, status, complexityReportPath, projectRoot, tag } =
		args;
	const { session } = context;

	try {
		log.info(`Setting task status with args: ${JSON.stringify(args)}`);
@@ -69,11 +70,17 @@ export async function setTaskStatusDirect(args, log, context = {}) {
		enableSilentMode(); // Enable silent mode before calling core function
		try {
			// Call the core function
			await setTaskStatus(
				tasksPath,
				taskId,
				newStatus,
				{
					mcpLog: log,
					projectRoot,
					session
				},
				tag
			);

			log.info(`Successfully set task ${taskId} status to ${newStatus}`);
View File
@@ -21,7 +21,7 @@ import {
 */
export async function updateTasksDirect(args, log, context = {}) {
	const { session } = context;
	const { from, prompt, research, tasksJsonPath, projectRoot, tag } = args;

	// Create the standard logger wrapper
	const logWrapper = createLogWrapper(log);
@@ -75,7 +75,8 @@ export async function updateTasksDirect(args, log, context = {}) {
			{
				session,
				mcpLog: logWrapper,
				projectRoot,
				tag
			},
			'json'
		);
View File
@@ -52,6 +52,7 @@ export function registerAddSubtaskTool(server) {
				.describe(
					'Absolute path to the tasks file (default: tasks/tasks.json)'
				),
			tag: z.string().optional().describe('Tag context to operate on'),
			skipGenerate: z
				.boolean()
				.optional()
@@ -89,7 +90,8 @@ export function registerAddSubtaskTool(server) {
					status: args.status,
					dependencies: args.dependencies,
					skipGenerate: args.skipGenerate,
					projectRoot: args.projectRoot,
					tag: args.tag
				},
				log,
				{ session }
View File
@@ -24,7 +24,8 @@ export function registerFixDependenciesTool(server) {
			file: z.string().optional().describe('Absolute path to the tasks file'),
			projectRoot: z
				.string()
				.describe('The directory of the project. Must be an absolute path.'),
			tag: z.string().optional().describe('Tag context to operate on')
		}),
		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
			try {
@@ -46,7 +47,9 @@ export function registerFixDependenciesTool(server) {
				const result = await fixDependenciesDirect(
					{
						tasksJsonPath: tasksJsonPath,
						projectRoot: args.projectRoot,
						tag: args.tag
					},
					log
				);
View File
@@ -29,6 +29,7 @@ import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResponseLanguageTool } from './response-language.js';
import { registerAddTagTool } from './add-tag.js';
import { registerDeleteTagTool } from './delete-tag.js';
import { registerListTagsTool } from './list-tags.js';
@@ -83,6 +84,7 @@ export function registerTaskMasterTools(server) {
		registerRemoveDependencyTool(server);
		registerValidateDependenciesTool(server);
		registerFixDependenciesTool(server);
		registerResponseLanguageTool(server);

		// Group 7: Tag Management
		registerListTagsTool(server);
View File
@@ -51,7 +51,7 @@ export function registerInitializeProjectTool(server) {
				.array(z.enum(RULE_PROFILES))
				.optional()
				.describe(
					`List of rule profiles to include at initialization. If omitted, defaults to Cursor profile only. Available options: ${RULE_PROFILES.join(', ')}`
				)
		}),
		execute: withNormalizedProjectRoot(async (args, context) => {
View File
@@ -43,7 +43,7 @@ export function registerParsePRDTool(server) {
				.string()
				.optional()
				.describe(
					'Approximate number of top-level tasks to generate (default: 10). As the agent, if you have enough information, ensure to enter a number of tasks that would logically scale with project complexity. Setting to 0 will allow Taskmaster to determine the appropriate number of tasks based on the complexity of the PRD. Avoid entering numbers above 50 due to context window limitations.'
				),
			force: z
				.boolean()

View File

@@ -33,7 +33,13 @@ export function registerRemoveTaskTool(server) {
confirm: z
.boolean()
.optional()
-.describe('Whether to skip confirmation prompt (default: false)')
+.describe('Whether to skip confirmation prompt (default: false)'),
tag: z
.string()
.optional()
.describe(
'Specify which tag context to operate on. Defaults to the current active tag.'
)
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
@@ -59,7 +65,8 @@ export function registerRemoveTaskTool(server) {
{
tasksJsonPath: tasksJsonPath,
id: args.id,
-projectRoot: args.projectRoot
+projectRoot: args.projectRoot,
tag: args.tag
},
log,
{ session }

View File

@@ -0,0 +1,46 @@
import { z } from 'zod';
import {
createErrorResponse,
handleApiResult,
withNormalizedProjectRoot
} from './utils.js';
import { responseLanguageDirect } from '../core/direct-functions/response-language.js';
export function registerResponseLanguageTool(server) {
server.addTool({
name: 'response-language',
description: 'Get or set the response language for the project',
parameters: z.object({
projectRoot: z
.string()
.describe(
'The root directory for the project. ALWAYS SET THIS TO THE PROJECT ROOT DIRECTORY. IF NOT SET, THE TOOL WILL NOT WORK.'
),
language: z
.string()
.describe(
'The new response language to set, e.g. "中文", "English", or "español".'
)
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
log.info(
`Executing response-language tool with args: ${JSON.stringify(args)}`
);
const result = await responseLanguageDirect(
{
...args,
projectRoot: args.projectRoot
},
log,
{ session }
);
return handleApiResult(result, log, 'Error setting response language');
} catch (error) {
log.error(`Error in response-language tool: ${error.message}`);
return createErrorResponse(error.message);
}
})
});
}
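The new tool above follows the repo's execute/try-catch pattern: the direct function does the work, and any thrown error is logged and converted into an error response. A minimal standalone sketch of that pattern (the stubbed `responseLanguageDirect` here is an assumption; the real one lives in `../core/direct-functions/response-language.js`):

```javascript
// Hedged sketch of the tool's execute wrapper; the direct function is stubbed.
async function responseLanguageDirect(args) {
	// The real function validates and persists the language; this stub only
	// mimics the success/error shape.
	if (!args.projectRoot) throw new Error('projectRoot is required');
	return { success: true, data: { message: `Language set to ${args.language}` } };
}

async function executeResponseLanguage(args, log) {
	try {
		return await responseLanguageDirect(args);
	} catch (error) {
		// Errors are logged, then surfaced as a structured failure response.
		log.error(`Error in response-language tool: ${error.message}`);
		return { success: false, error: { message: error.message } };
	}
}
```

The wrapper never rethrows, so the MCP server always receives a well-formed result object.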

View File

@@ -47,7 +47,8 @@ export function registerSetTaskStatusTool(server) {
),
projectRoot: z
.string()
-.describe('The directory of the project. Must be an absolute path.')
+.describe('The directory of the project. Must be an absolute path.'),
tag: z.string().optional().describe('Optional tag context to operate on')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
@@ -86,7 +87,8 @@ export function registerSetTaskStatusTool(server) {
id: args.id,
status: args.status,
complexityReportPath,
-projectRoot: args.projectRoot
+projectRoot: args.projectRoot,
tag: args.tag
},
log,
{ session }

View File

@@ -43,11 +43,12 @@ export function registerUpdateTool(server) {
.optional()
.describe(
'The directory of the project. (Optional, usually from session)'
-)
+),
tag: z.string().optional().describe('Tag context to operate on')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
const toolName = 'update';
-const { from, prompt, research, file, projectRoot } = args;
+const { from, prompt, research, file, projectRoot, tag } = args;
try {
log.info(
@@ -71,7 +72,8 @@ export function registerUpdateTool(server) {
from: from,
prompt: prompt,
research: research,
-projectRoot: projectRoot
+projectRoot: projectRoot,
tag: tag
},
log,
{ session }

package-lock.json (generated): diff suppressed because it is too large.

View File

@@ -68,6 +68,7 @@
"gradient-string": "^3.0.0",
"helmet": "^8.1.0",
"inquirer": "^12.5.0",
"jsonc-parser": "^3.3.1",
"jsonwebtoken": "^9.0.2",
"lru-cache": "^10.2.0",
"ollama-ai-provider": "^1.2.0",
@@ -77,7 +78,8 @@
"zod": "^3.23.8"
},
"optionalDependencies": {
-"@anthropic-ai/claude-code": "^1.0.25"
+"@anthropic-ai/claude-code": "^1.0.25",
"ai-sdk-provider-gemini-cli": "^0.0.3"
},
"engines": {
"node": ">=18.0.0"

View File

@@ -30,6 +30,7 @@ import {
convertAllRulesToProfileRules,
getRulesProfile
} from '../src/utils/rule-transformer.js';
import { updateConfigMaxTokens } from './modules/update-config-tokens.js';
import { execSync } from 'child_process';
import {
@@ -623,6 +624,14 @@ function createProjectStructure(
}
);
// Update config.json with correct maxTokens values from supported-models.json
const configPath = path.join(targetDir, TASKMASTER_CONFIG_FILE);
if (updateConfigMaxTokens(configPath)) {
log('info', 'Updated config with correct maxTokens values');
} else {
log('warn', 'Could not update maxTokens in config');
}
// Copy .gitignore with GitTasks preference
try {
const gitignoreTemplatePath = path.join(
@@ -757,6 +766,44 @@ function createProjectStructure(
}
// =====================================
// === Add Response Language Step ===
if (!isSilentMode() && !dryRun && !options?.yes) {
console.log(
boxen(chalk.cyan('Configuring Response Language...'), {
padding: 0.5,
margin: { top: 1, bottom: 0.5 },
borderStyle: 'round',
borderColor: 'blue'
})
);
log(
'info',
'Running interactive response language setup. Please input your preferred language.'
);
try {
execSync('npx task-master lang --setup', {
stdio: 'inherit',
cwd: targetDir
});
log('success', 'Response Language configured.');
} catch (error) {
log('error', 'Failed to configure response language:', error.message);
log('warn', 'You may need to run "task-master lang --setup" manually.');
}
} else if (isSilentMode() && !dryRun) {
log(
'info',
'Skipping interactive response language setup in silent (MCP) mode.'
);
log(
'warn',
'Please configure response language using "task-master models --set-response-language" or the "models" MCP tool.'
);
} else if (dryRun) {
log('info', 'DRY RUN: Skipping interactive response language setup.');
}
// =====================================
// === Add Model Configuration Step ===
if (!isSilentMode() && !dryRun && !options?.yes) {
console.log(

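The new response-language step in `init.js` above branches three ways: interactive setup, a skip-with-warning in silent (MCP) mode, and a skip in dry-run mode. A standalone sketch of that gating (the function name and return labels are illustrative, not part of the codebase):

```javascript
// Hedged re-implementation of the setup-step gating shown in the diff above.
function languageSetupMode({ silent, dryRun, yes }) {
	// Interactive only when not silent, not a dry run, and --yes was not passed.
	if (!silent && !dryRun && !yes) return 'interactive';
	// Silent (MCP) mode skips the prompt but warns the user to configure later.
	if (silent && !dryRun) return 'skip-silent';
	// Dry runs never prompt.
	if (dryRun) return 'skip-dry-run';
	// e.g. --yes in a normal run: fall through without prompting.
	return 'skip';
}
```

Note the ordering matters: `silent && dryRun` resolves as a dry run rather than a silent skip, matching the `else if` chain in the diff.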
View File

@@ -15,6 +15,7 @@ import {
getFallbackProvider,
getFallbackModelId,
getParametersForRole,
getResponseLanguage,
getUserId,
MODEL_MAP,
getDebugFlag,
@@ -24,7 +25,8 @@ import {
getAzureBaseURL,
getBedrockBaseURL,
getVertexProjectId,
-getVertexLocation
+getVertexLocation,
providersWithoutApiKeys
} from './config-manager.js';
import {
log,
@@ -45,7 +47,8 @@ import {
BedrockAIProvider,
AzureProvider,
VertexAIProvider,
-ClaudeCodeProvider
+ClaudeCodeProvider,
GeminiCliProvider
} from '../../src/ai-providers/index.js';
// Create provider instances
@@ -60,7 +63,8 @@ const PROVIDERS = {
bedrock: new BedrockAIProvider(),
azure: new AzureProvider(),
vertex: new VertexAIProvider(),
-'claude-code': new ClaudeCodeProvider()
+'claude-code': new ClaudeCodeProvider(),
'gemini-cli': new GeminiCliProvider()
};
// Helper function to get cost for a specific model
@@ -232,6 +236,12 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
return 'claude-code-no-key-required';
}
// Gemini CLI can work without an API key (uses CLI auth)
if (providerName === 'gemini-cli') {
const apiKey = resolveEnvVariable('GEMINI_API_KEY', session, projectRoot);
return apiKey || 'gemini-cli-no-key-required';
}
const keyMap = {
openai: 'OPENAI_API_KEY',
anthropic: 'ANTHROPIC_API_KEY',
@@ -244,7 +254,8 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
ollama: 'OLLAMA_API_KEY',
bedrock: 'AWS_ACCESS_KEY_ID',
vertex: 'GOOGLE_API_KEY',
-'claude-code': 'CLAUDE_CODE_API_KEY' // Not actually used, but included for consistency
+'claude-code': 'CLAUDE_CODE_API_KEY', // Not actually used, but included for consistency
'gemini-cli': 'GEMINI_API_KEY'
};
const envVarName = keyMap[providerName];
@@ -257,7 +268,7 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
const apiKey = resolveEnvVariable(envVarName, session, projectRoot);
// Special handling for providers that can use alternative auth
-if (providerName === 'ollama' || providerName === 'bedrock') {
+if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
return apiKey || null;
}
@@ -457,7 +468,7 @@ async function _unifiedServiceRunner(serviceType, params) {
}
// Check API key if needed
-if (providerName?.toLowerCase() !== 'ollama') {
+if (!providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
log(
'warn',
@@ -541,9 +552,12 @@ async function _unifiedServiceRunner(serviceType, params) {
}
const messages = [];
-if (systemPrompt) {
-messages.push({ role: 'system', content: systemPrompt });
-}
+const responseLanguage = getResponseLanguage(effectiveProjectRoot);
+const systemPromptWithLanguage = `${systemPrompt} \n\n Always respond in ${responseLanguage}.`;
+messages.push({
+role: 'system',
+content: systemPromptWithLanguage.trim()
+});
// IN THE FUTURE WHEN DOING CONTEXT IMPROVEMENTS
// {

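The hunk above changes how the system message is built: instead of pushing the system prompt only when present, the configured response language is always appended. A minimal sketch of that transformation (standalone re-implementation; `buildSystemMessage` is an illustrative name, not a function in the codebase):

```javascript
// Hedged sketch of the language injection added in ai-services-unified.js.
function buildSystemMessage(systemPrompt, responseLanguage = 'English') {
	// The directive is appended with the same separator used in the diff.
	const systemPromptWithLanguage = `${systemPrompt} \n\n Always respond in ${responseLanguage}.`;
	// trim() removes the leading/trailing whitespace left when systemPrompt is empty.
	return { role: 'system', content: systemPromptWithLanguage.trim() };
}
```

One consequence worth noting: even an empty system prompt now yields a system message containing only the language directive, whereas the old code would have pushed no system message at all.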
View File

@@ -42,7 +42,8 @@ import {
findTaskById,
taskExists,
moveTask,
-migrateProject
+migrateProject,
setResponseLanguage
} from './task-manager.js';
import {
@@ -69,7 +70,9 @@ import {
ConfigurationError,
isConfigFilePresent,
getAvailableModels,
-getBaseUrlForRole
+getBaseUrlForRole,
getDefaultNumTasks,
getDefaultSubtasks
} from './config-manager.js';
import { CUSTOM_PROVIDERS } from '../../src/constants/providers.js';
@@ -803,7 +806,11 @@ function registerCommands(programInstance) {
'Path to the PRD file (alternative to positional argument)'
)
.option('-o, --output <file>', 'Output file path', TASKMASTER_TASKS_FILE)
-.option('-n, --num-tasks <number>', 'Number of tasks to generate', '10')
+.option(
+'-n, --num-tasks <number>',
+'Number of tasks to generate',
+getDefaultNumTasks()
+)
.option('-f, --force', 'Skip confirmation when overwriting existing tasks')
.option(
'--append',
@@ -3421,6 +3428,10 @@ ${result.result}
'--vertex',
'Allow setting a custom Vertex AI model ID (use with --set-*) '
)
.option(
'--gemini-cli',
'Allow setting a Gemini CLI model ID (use with --set-*)'
)
.addHelpText(
'after',
`
@@ -3435,6 +3446,7 @@ Examples:
$ task-master models --set-main sonnet --claude-code # Set Claude Code model for main role
$ task-master models --set-main gpt-4o --azure # Set custom Azure OpenAI model for main role
$ task-master models --set-main claude-3-5-sonnet@20241022 --vertex # Set custom Vertex AI model for main role
$ task-master models --set-main gemini-2.5-pro --gemini-cli # Set Gemini CLI model for main role
$ task-master models --setup # Run interactive setup`
)
.action(async (options) => {
@@ -3448,12 +3460,13 @@ Examples:
options.openrouter,
options.ollama,
options.bedrock,
-options.claudeCode
+options.claudeCode,
options.geminiCli
].filter(Boolean).length;
if (providerFlags > 1) {
console.error(
chalk.red(
-'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code) simultaneously.'
+'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code, --gemini-cli) simultaneously.'
)
);
process.exit(1);
@@ -3497,6 +3510,8 @@ Examples:
? 'bedrock'
: options.claudeCode
? 'claude-code'
: options.geminiCli
? 'gemini-cli'
: undefined
});
if (result.success) {
@@ -3521,6 +3536,8 @@ Examples:
? 'bedrock'
: options.claudeCode
? 'claude-code'
: options.geminiCli
? 'gemini-cli'
: undefined
});
if (result.success) {
@@ -3547,6 +3564,8 @@ Examples:
? 'bedrock'
: options.claudeCode
? 'claude-code'
: options.geminiCli
? 'gemini-cli'
: undefined
});
if (result.success) {
@@ -3643,6 +3662,63 @@ Examples:
return; // Stop execution here
});
// response-language command
programInstance
.command('lang')
.description('Manage response language settings')
.option('--response <response_language>', 'Set the response language')
.option('--setup', 'Run interactive setup to configure response language')
.action(async (options) => {
const projectRoot = findProjectRoot(); // Find project root for context
const { response, setup } = options;
console.log(
chalk.blue('Response language options:', JSON.stringify(options))
);
let responseLanguage = response || 'English';
if (setup) {
console.log(
chalk.blue('Starting interactive response language setup...')
);
try {
const userResponse = await inquirer.prompt([
{
type: 'input',
name: 'responseLanguage',
message: 'Input your preferred response language',
default: 'English'
}
]);
console.log(
chalk.blue(
'Response language set to:',
userResponse.responseLanguage
)
);
responseLanguage = userResponse.responseLanguage;
} catch (setupError) {
console.error(
chalk.red('\nInteractive setup failed unexpectedly:'),
setupError.message
);
}
}
const result = setResponseLanguage(responseLanguage, {
projectRoot
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
} else {
console.error(
chalk.red(
`❌ Error setting response language: ${result.error.message}`
)
);
}
});
// move-task command
programInstance
.command('move')
@@ -3810,7 +3886,11 @@ Examples:
$ task-master rules --${RULES_SETUP_ACTION} # Interactive setup to select rule profiles`
)
.action(async (action, profiles, options) => {
-const projectDir = process.cwd();
+const projectRoot = findProjectRoot();
if (!projectRoot) {
console.error(chalk.red('Error: Could not find project root.'));
process.exit(1);
}
/**
* 'task-master rules --setup' action:
@@ -3857,7 +3937,7 @@ Examples:
const profileConfig = getRulesProfile(profile);
const addResult = convertAllRulesToProfileRules(
-projectDir,
+projectRoot,
profileConfig
);
@@ -3903,8 +3983,8 @@ Examples:
let confirmed = true;
if (!options.force) {
// Check if this removal would leave no profiles remaining
-if (wouldRemovalLeaveNoProfiles(projectDir, expandedProfiles)) {
-const installedProfiles = getInstalledProfiles(projectDir);
+if (wouldRemovalLeaveNoProfiles(projectRoot, expandedProfiles)) {
+const installedProfiles = getInstalledProfiles(projectRoot);
confirmed = await confirmRemoveAllRemainingProfiles(
expandedProfiles,
installedProfiles
@@ -3934,12 +4014,12 @@ Examples:
if (action === RULES_ACTIONS.ADD) {
console.log(chalk.blue(`Adding rules for profile: ${profile}...`));
const addResult = convertAllRulesToProfileRules(
-projectDir,
+projectRoot,
profileConfig
);
if (typeof profileConfig.onAddRulesProfile === 'function') {
-const assetsDir = path.join(process.cwd(), 'assets');
-profileConfig.onAddRulesProfile(projectDir, assetsDir);
+const assetsDir = path.join(projectRoot, 'assets');
+profileConfig.onAddRulesProfile(projectRoot, assetsDir);
}
console.log(
chalk.blue(`Completed adding rules for profile: ${profile}`)
@@ -3955,7 +4035,7 @@ Examples:
console.log(chalk.green(generateProfileSummary(profile, addResult)));
} else if (action === RULES_ACTIONS.REMOVE) {
console.log(chalk.blue(`Removing rules for profile: ${profile}...`));
-const result = removeProfileRules(projectDir, profileConfig);
+const result = removeProfileRules(projectRoot, profileConfig);
removalResults.push(result);
console.log(
chalk.green(generateProfileRemovalSummary(profile, result))

View File

@@ -1,8 +1,9 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { z } from 'zod';
import { fileURLToPath } from 'url';
-import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
+import { log, findProjectRoot, resolveEnvVariable, isEmpty } from './utils.js';
import { LEGACY_CONFIG_FILE } from '../../src/constants/paths.js';
import { findConfigPath } from '../../src/utils/path-utils.js';
import {
@@ -11,6 +12,7 @@ import {
CUSTOM_PROVIDERS_ARRAY,
ALL_PROVIDERS
} from '../../src/constants/providers.js';
import { AI_COMMAND_NAMES } from '../../src/constants/commands.js';
// Calculate __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
global: {
logLevel: 'info',
debug: false,
defaultNumTasks: 10,
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseURL: 'http://localhost:11434/api',
-bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
-}
+bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com',
+responseLanguage: 'English'
+},
+claudeCode: {}
};
// --- Internal Config Loading ---
@@ -127,7 +132,8 @@ function _loadAndValidateConfig(explicitRoot = null) {
? { ...defaults.models.fallback, ...parsedConfig.models.fallback }
: { ...defaults.models.fallback }
},
-global: { ...defaults.global, ...parsedConfig?.global }
+global: { ...defaults.global, ...parsedConfig?.global },
+claudeCode: { ...defaults.claudeCode, ...parsedConfig?.claudeCode }
};
configSource = `file (${configPath})`; // Update source info
@@ -170,6 +176,9 @@ function _loadAndValidateConfig(explicitRoot = null) {
config.models.fallback.provider = undefined;
config.models.fallback.modelId = undefined;
}
if (config.claudeCode && !isEmpty(config.claudeCode)) {
config.claudeCode = validateClaudeCodeSettings(config.claudeCode);
}
} catch (error) {
// Use console.error for actual errors during parsing
console.error(
@@ -277,6 +286,83 @@ function validateProviderModelCombination(providerName, modelId) {
);
}
/**
* Validates Claude Code AI provider custom settings
* @param {object} settings The settings to validate
* @returns {object} The validated settings
*/
function validateClaudeCodeSettings(settings) {
// Define the base settings schema without commandSpecific first
const BaseSettingsSchema = z.object({
maxTurns: z.number().int().positive().optional(),
customSystemPrompt: z.string().optional(),
appendSystemPrompt: z.string().optional(),
permissionMode: z
.enum(['default', 'acceptEdits', 'plan', 'bypassPermissions'])
.optional(),
allowedTools: z.array(z.string()).optional(),
disallowedTools: z.array(z.string()).optional(),
mcpServers: z
.record(
z.string(),
z.object({
type: z.enum(['stdio', 'sse']).optional(),
command: z.string(),
args: z.array(z.string()).optional(),
env: z.record(z.string()).optional(),
url: z.string().url().optional(),
headers: z.record(z.string()).optional()
})
)
.optional()
});
// Define CommandSpecificSchema using the base schema
const CommandSpecificSchema = z.record(
z.enum(AI_COMMAND_NAMES),
BaseSettingsSchema
);
// Define the full settings schema with commandSpecific
const SettingsSchema = BaseSettingsSchema.extend({
commandSpecific: CommandSpecificSchema.optional()
});
let validatedSettings = {};
try {
validatedSettings = SettingsSchema.parse(settings);
} catch (error) {
console.warn(
chalk.yellow(
`Warning: Invalid Claude Code settings in config: ${error.message}. Falling back to default.`
)
);
validatedSettings = {};
}
return validatedSettings;
}
// --- Claude Code Settings Getters ---
function getClaudeCodeSettings(explicitRoot = null, forceReload = false) {
const config = getConfig(explicitRoot, forceReload);
// Ensure Claude Code defaults are applied if Claude Code section is missing
return { ...DEFAULTS.claudeCode, ...(config?.claudeCode || {}) };
}
function getClaudeCodeSettingsForCommand(
commandName,
explicitRoot = null,
forceReload = false
) {
const settings = getClaudeCodeSettings(explicitRoot, forceReload);
const commandSpecific = settings?.commandSpecific || {};
return { ...settings, ...commandSpecific[commandName] };
}
// --- Role-Specific Getters ---
function getModelConfigForRole(role, explicitRoot = null) {
@@ -424,6 +510,11 @@ function getVertexLocation(explicitRoot = null) {
return getGlobalConfig(explicitRoot).vertexLocation || 'us-central1';
}
function getResponseLanguage(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).responseLanguage;
}
/**
* Gets model parameters (maxTokens, temperature) for a specific role,
* considering model-specific overrides from supported-models.json.
@@ -500,7 +591,8 @@ function isApiKeySet(providerName, session = null, projectRoot = null) {
// Providers that don't require API keys for authentication
const providersWithoutApiKeys = [
CUSTOM_PROVIDERS.OLLAMA,
-CUSTOM_PROVIDERS.BEDROCK
+CUSTOM_PROVIDERS.BEDROCK,
CUSTOM_PROVIDERS.GEMINI_CLI
];
if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
@@ -794,15 +886,26 @@ function getBaseUrlForRole(role, explicitRoot = null) {
return undefined;
}
// Export the providers without API keys array for use in other modules
export const providersWithoutApiKeys = [
CUSTOM_PROVIDERS.OLLAMA,
CUSTOM_PROVIDERS.BEDROCK,
CUSTOM_PROVIDERS.GEMINI_CLI
];
export {
// Core config access
getConfig,
writeConfig,
ConfigurationError,
isConfigFilePresent,
// Claude Code settings
getClaudeCodeSettings,
getClaudeCodeSettingsForCommand,
// Validation
validateProvider,
validateProviderModelCombination,
validateClaudeCodeSettings,
VALIDATED_PROVIDERS,
CUSTOM_PROVIDERS,
ALL_PROVIDERS,
@@ -832,6 +935,7 @@ export {
getOllamaBaseURL,
getAzureBaseURL,
getBedrockBaseURL,
getResponseLanguage,
getParametersForRole,
getUserId,
// API Key Checkers (still relevant)

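Two of the additions above are easy to misread from the diff alone: `getClaudeCodeSettingsForCommand` does a shallow merge where per-command keys override the base settings. A standalone sketch of the merge semantics (assumption: re-implemented here without the config-loading layer):

```javascript
// Hedged sketch of getClaudeCodeSettingsForCommand's merge behavior.
function settingsForCommand(settings, commandName) {
	const commandSpecific = settings?.commandSpecific || {};
	// Spread of undefined is a no-op, so unknown commands fall back to base settings.
	return { ...settings, ...commandSpecific[commandName] };
}
```

Because the merge is shallow, a command-specific entry replaces a whole key (e.g. an `allowedTools` array) rather than deep-merging into it; the `commandSpecific` map itself also survives in the merged result.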
View File

@@ -1,16 +1,64 @@
{
"bedrock": [
{
"id": "us.anthropic.claude-3-haiku-20240307-v1:0",
"swe_score": 0.4,
"cost_per_1m_tokens": { "input": 0.25, "output": 1.25 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "us.anthropic.claude-3-opus-20240229-v1:0",
"swe_score": 0.725,
"cost_per_1m_tokens": { "input": 15, "output": 75 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
"swe_score": 0.49,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
"swe_score": 0.49,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
"swe_score": 0.623,
-"cost_per_1m_tokens": { "input": 3, "output": 15 },
-"allowed_roles": ["main", "fallback"],
+"cost_per_1m_tokens": {
+"input": 3,
+"output": 15
+},
+"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
},
{
"id": "us.anthropic.claude-3-5-haiku-20241022-v1:0",
"swe_score": 0.4,
"cost_per_1m_tokens": { "input": 0.8, "output": 4 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "us.anthropic.claude-opus-4-20250514-v1:0",
"swe_score": 0.725,
"cost_per_1m_tokens": { "input": 15, "output": 75 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
"swe_score": 0.727,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.deepseek.r1-v1:0",
"swe_score": 0,
-"cost_per_1m_tokens": { "input": 1.35, "output": 5.4 },
+"cost_per_1m_tokens": {
+"input": 1.35,
+"output": 5.4
+},
"allowed_roles": ["research"],
"max_tokens": 65536
}
@@ -648,16 +696,44 @@
{
"id": "opus",
"swe_score": 0.725,
-"cost_per_1m_tokens": { "input": 0, "output": 0 },
+"cost_per_1m_tokens": {
+"input": 0,
+"output": 0
+},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32000
},
{
"id": "sonnet",
"swe_score": 0.727,
-"cost_per_1m_tokens": { "input": 0, "output": 0 },
+"cost_per_1m_tokens": {
+"input": 0,
+"output": 0
+},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 64000
}
-]
+],
"gemini-cli": [
{
"id": "gemini-2.5-pro",
"swe_score": 0.72,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
},
{
"id": "gemini-2.5-flash",
"swe_score": 0.71,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
}
]
}

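The `cost_per_1m_tokens` entries added above are priced per one million tokens; the CLI-backed providers (claude-code, gemini-cli) are listed at 0 because billing happens outside the API. A hypothetical helper (not part of the codebase) showing how such an entry would be applied:

```javascript
// Illustrative cost estimate from a supported-models.json entry.
function estimateCostUSD(model, inputTokens, outputTokens) {
	const { input, output } = model.cost_per_1m_tokens;
	// Prices are USD per 1M tokens, so scale token counts down by 1e6.
	return (inputTokens / 1e6) * input + (outputTokens / 1e6) * output;
}
```

For example, a Bedrock Sonnet entry at `{ "input": 3, "output": 15 }` prices 1M input tokens plus 100k output tokens at 3 + 1.5 = 4.5 USD.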
View File

@@ -23,10 +23,12 @@ import updateSubtaskById from './task-manager/update-subtask-by-id.js';
import removeTask from './task-manager/remove-task.js';
import taskExists from './task-manager/task-exists.js';
import isTaskDependentOn from './task-manager/is-task-dependent.js';
import setResponseLanguage from './task-manager/response-language.js';
import moveTask from './task-manager/move-task.js';
import { migrateProject } from './task-manager/migrate.js';
import { performResearch } from './task-manager/research.js';
import { readComplexityReport } from './utils.js';
// Export task manager functions
export {
parsePRD,
@@ -49,6 +51,7 @@ export {
findTaskById,
taskExists,
isTaskDependentOn,
setResponseLanguage,
moveTask,
readComplexityReport,
migrateProject,

View File

@@ -1,6 +1,6 @@
import path from 'path';
-import { log, readJSON, writeJSON } from '../utils.js';
+import { log, readJSON, writeJSON, getCurrentTag } from '../utils.js';
import { isTaskDependentOn } from '../task-manager.js';
import generateTaskFiles from './generate-task-files.js';
@@ -25,8 +25,10 @@ async function addSubtask(
try {
log('info', `Adding subtask to parent task ${parentId}...`);
const currentTag =
context.tag || getCurrentTag(context.projectRoot) || 'master';
// Read the existing tasks with proper context
-const data = readJSON(tasksPath, context.projectRoot, context.tag);
+const data = readJSON(tasksPath, context.projectRoot, currentTag);
if (!data || !data.tasks) {
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
}
@@ -137,12 +139,12 @@ async function addSubtask(
} }
// Write the updated tasks back to the file with proper context // Write the updated tasks back to the file with proper context
writeJSON(tasksPath, data, context.projectRoot, context.tag); writeJSON(tasksPath, data, context.projectRoot, currentTag);
// Generate task files if requested // Generate task files if requested
if (generateFiles) { if (generateFiles) {
log('info', 'Regenerating task files...'); log('info', 'Regenerating task files...');
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), context); await generateTaskFiles(tasksPath, path.dirname(tasksPath), context);
} }
return newSubtask; return newSubtask;
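The tag-resolution fallback introduced here (`context.tag || getCurrentTag(...) || 'master'`) is the pattern this PR applies across several commands. A minimal standalone sketch, with the helper name and the `getCurrentTag` stub being mine for illustration only:

```javascript
// Hypothetical isolation of the fallback chain used in add-subtask above.
// `getCurrentTag` is stubbed here; the real one reads project state from disk.
function resolveCurrentTag(context, getCurrentTag) {
	return context.tag || getCurrentTag(context.projectRoot) || 'master';
}

const stubNoState = () => null; // simulates a project with no current tag set

console.log(resolveCurrentTag({ tag: 'feature-x' }, stubNoState)); // "feature-x"
console.log(resolveCurrentTag({}, stubNoState)); // "master"
console.log(resolveCurrentTag({}, () => 'my-tag')); // "my-tag"
```

An explicit tag always wins, the stored current tag is the second choice, and `'master'` is the final fallback — which is why passing `currentTag` instead of a possibly-undefined `context.tag` into `readJSON`/`writeJSON` avoids the tag corruption this PR fixes.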

View File

@@ -28,6 +28,7 @@ import {
 import { generateObjectService } from '../ai-services-unified.js';
 import { getDefaultPriority } from '../config-manager.js';
 import ContextGatherer from '../utils/contextGatherer.js';
+import generateTaskFiles from './generate-task-files.js';
 // Define Zod schema for the expected AI output object
 const AiTaskDataSchema = z.object({
@@ -553,18 +554,18 @@ async function addTask(
 		report('DEBUG: Writing tasks.json...', 'debug');
 		// Write the updated raw data back to the file
 		// The writeJSON function will automatically filter out _rawTaggedData
-		writeJSON(tasksPath, rawData);
+		writeJSON(tasksPath, rawData, projectRoot, targetTag);
 		report('DEBUG: tasks.json written.', 'debug');
 		// Generate markdown task files
-		// report('Generating task files...', 'info');
-		// report('DEBUG: Calling generateTaskFiles...', 'debug');
-		// // Pass mcpLog if available to generateTaskFiles
-		// await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
-		// 	projectRoot,
-		// 	tag: targetTag
-		// });
-		// report('DEBUG: generateTaskFiles finished.', 'debug');
+		report('Generating task files...', 'info');
+		report('DEBUG: Calling generateTaskFiles...', 'debug');
+		// Pass mcpLog if available to generateTaskFiles
+		await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
+			projectRoot,
+			tag: targetTag
+		});
+		report('DEBUG: generateTaskFiles finished.', 'debug');
 		// Show success message - only for text output (CLI)
 		if (outputFormat === 'text') {

View File

@@ -2,7 +2,13 @@ import fs from 'fs';
 import path from 'path';
 import { z } from 'zod';
-import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';
+import {
+	log,
+	readJSON,
+	writeJSON,
+	isSilentMode,
+	getTagAwareFilePath
+} from '../utils.js';
 import {
 	startLoadingIndicator,
@@ -61,7 +67,7 @@ const subtaskWrapperSchema = z.object({
  */
 function generateMainSystemPrompt(subtaskCount) {
 	return `You are an AI assistant helping with task breakdown for software development.
-You need to break down a high-level task into ${subtaskCount} specific subtasks that can be implemented one by one.
+You need to break down a high-level task into ${subtaskCount > 0 ? subtaskCount : 'an appropriate number of'} specific subtasks that can be implemented one by one.
 Subtasks should:
 1. Be specific and actionable implementation steps
@@ -76,7 +82,7 @@ For each subtask, provide:
 - title: Clear, specific title
 - description: Detailed description
 - dependencies: Array of prerequisite subtask IDs (use the new sequential IDs)
-- details: Implementation details
+- details: Implementation details, the output should be in string
 - testStrategy: Optional testing approach
@@ -111,11 +117,11 @@ function generateMainUserPrompt(
       "details": "Implementation guidance",
       "testStrategy": "Optional testing approach"
     },
-    // ... (repeat for a total of ${subtaskCount} subtasks with sequential IDs)
+    // ... (repeat for ${subtaskCount ? 'a total of ' + subtaskCount : 'each of the'} subtasks with sequential IDs)
   ]
 }`;
-	return `Break down this task into exactly ${subtaskCount} specific subtasks:
+	return `Break down this task into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks:
 Task ID: ${task.id}
 Title: ${task.title}
@@ -159,7 +165,7 @@ function generateResearchUserPrompt(
   ]
 }`;
-	return `Analyze the following task and break it down into exactly ${subtaskCount} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
+	return `Analyze the following task and break it down into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
 Parent Task:
 ID: ${task.id}
@@ -497,9 +503,18 @@ async function expandTask(
 	let complexityReasoningContext = '';
 	let systemPrompt; // Declare systemPrompt here
-	const complexityReportPath = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
+	// Use tag-aware complexity report path
+	const complexityReportPath = getTagAwareFilePath(
+		COMPLEXITY_REPORT_FILE,
+		tag,
+		projectRoot
+	);
 	let taskAnalysis = null;
+	logger.info(
+		`Looking for complexity report at: ${complexityReportPath}${tag && tag !== 'master' ? ` (tag-specific for '${tag}')` : ''}`
+	);
 	try {
 		if (fs.existsSync(complexityReportPath)) {
 			const complexityReport = readJSON(complexityReportPath);
@@ -531,7 +546,7 @@ async function expandTask(
 	// Determine final subtask count
 	const explicitNumSubtasks = parseInt(numSubtasks, 10);
-	if (!Number.isNaN(explicitNumSubtasks) && explicitNumSubtasks > 0) {
+	if (!Number.isNaN(explicitNumSubtasks) && explicitNumSubtasks >= 0) {
 		finalSubtaskCount = explicitNumSubtasks;
 		logger.info(
 			`Using explicitly provided subtask count: ${finalSubtaskCount}`
@@ -545,7 +560,7 @@ async function expandTask(
 		finalSubtaskCount = getDefaultSubtasks(session);
 		logger.info(`Using default number of subtasks: ${finalSubtaskCount}`);
 	}
-	if (Number.isNaN(finalSubtaskCount) || finalSubtaskCount <= 0) {
+	if (Number.isNaN(finalSubtaskCount) || finalSubtaskCount < 0) {
 		logger.warn(
 			`Invalid subtask count determined (${finalSubtaskCount}), defaulting to 3.`
 		);
@@ -566,7 +581,7 @@ async function expandTask(
 	}
 	// --- Use Simplified System Prompt for Report Prompts ---
-	systemPrompt = `You are an AI assistant helping with task breakdown. Generate exactly ${finalSubtaskCount} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
+	systemPrompt = `You are an AI assistant helping with task breakdown. Generate ${finalSubtaskCount > 0 ? 'exactly ' + finalSubtaskCount : 'an appropriate number of'} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
 	logger.info(
 		`Using expansion prompt from complexity report and simplified system prompt for task ${task.id}.`
 	);
@@ -608,7 +623,7 @@ async function expandTask(
 	let loadingIndicator = null;
 	if (outputFormat === 'text') {
 		loadingIndicator = startLoadingIndicator(
-			`Generating ${finalSubtaskCount} subtasks...\n`
+			`Generating ${finalSubtaskCount || 'appropriate number of'} subtasks...\n`
 		);
 	}
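The ternaries repeated through these prompt changes all encode one rule: a count of 0 (now accepted by the relaxed `>= 0` check) means "let the model decide", while a positive count is demanded exactly. A tiny illustration — the helper name is mine, not from the codebase:

```javascript
// Sketch of the phrasing rule used by the prompt templates above.
function subtaskCountPhrase(count) {
	return count > 0 ? `exactly ${count}` : 'an appropriate number of';
}

console.log(subtaskCountPhrase(5)); // "exactly 5"
console.log(subtaskCountPhrase(0)); // "an appropriate number of"
```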

View File

@@ -1,5 +1,5 @@
-import fs from 'fs';
 import path from 'path';
+import fs from 'fs';
 import chalk from 'chalk';
 import { log, readJSON } from '../utils.js';

View File

@@ -523,6 +523,24 @@ async function setModel(role, modelId, options = {}) {
 			determinedProvider = CUSTOM_PROVIDERS.VERTEX;
 			warningMessage = `Warning: Custom Vertex AI model '${modelId}' set. Please ensure the model is valid and accessible in your Google Cloud project.`;
 			report('warn', warningMessage);
+		} else if (providerHint === CUSTOM_PROVIDERS.GEMINI_CLI) {
+			// Gemini CLI provider - check if model exists in our list
+			determinedProvider = CUSTOM_PROVIDERS.GEMINI_CLI;
+			// Re-find modelData specifically for gemini-cli provider
+			const geminiCliModels = availableModels.filter(
+				(m) => m.provider === 'gemini-cli'
+			);
+			const geminiCliModelData = geminiCliModels.find(
+				(m) => m.id === modelId
+			);
+			if (geminiCliModelData) {
+				// Update modelData to the found gemini-cli model
+				modelData = geminiCliModelData;
+				report('info', `Setting Gemini CLI model '${modelId}'.`);
+			} else {
+				warningMessage = `Warning: Gemini CLI model '${modelId}' not found in supported models. Setting without validation.`;
+				report('warn', warningMessage);
+			}
 		} else {
 			// Invalid provider hint - should not happen with our constants
 			throw new Error(`Invalid provider hint received: ${providerHint}`);
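The "re-find" step in the branch above matters because the same model id can appear under more than one provider in the supported-models list. A minimal sketch with sample data (model entries here are illustrative, not the real list):

```javascript
// Sample data: the same id registered under two providers.
const availableModels = [
	{ id: 'gemini-2.5-pro', provider: 'gemini-cli' },
	{ id: 'gemini-2.5-pro', provider: 'google' }
];

function findGeminiCliModel(modelId) {
	// Filtering by provider first guarantees we never match another
	// provider's entry that happens to share the model id.
	return availableModels
		.filter((m) => m.provider === 'gemini-cli')
		.find((m) => m.id === modelId);
}

console.log(findGeminiCliModel('gemini-2.5-pro').provider); // "gemini-cli"
console.log(findGeminiCliModel('missing-model')); // undefined
```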

View File

@@ -188,7 +188,7 @@ Your task breakdown should incorporate this research, resulting in more detailed
 	// Base system prompt for PRD parsing
 	const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.${researchPromptAddition}
-Analyze the provided PRD content and generate approximately ${numTasks} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
+Analyze the provided PRD content and generate ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
 Each task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.
 Assign sequential IDs starting from ${nextId}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.
 Set status to 'pending', dependencies to an empty array [], and priority to 'medium' initially for all tasks.
@@ -207,7 +207,7 @@ Each task should follow this JSON structure:
 }
 Guidelines:
-1. Unless complexity warrants otherwise, create exactly ${numTasks} tasks, numbered sequentially starting from ${nextId}
+1. ${numTasks > 0 ? 'Unless complexity warrants otherwise' : 'Depending on the complexity'}, create ${numTasks > 0 ? 'exactly ' + numTasks : 'an appropriate number of'} tasks, numbered sequentially starting from ${nextId}
 2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards
 3. Order tasks logically - consider dependencies and implementation sequence
 4. Early tasks should focus on setup, core functionality first, then advanced features
@@ -220,7 +220,7 @@ Guidelines:
 11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches${research ? '\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research' : ''}`;
 	// Build user prompt with PRD content
-	const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
+	const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
 Return your response in this format:
 {
@@ -235,7 +235,7 @@ Guidelines:
 	],
 	"metadata": {
 		"projectName": "PRD Implementation",
-		"totalTasks": ${numTasks},
+		"totalTasks": {number of tasks},
 		"sourceFile": "${prdPath}",
 		"generatedAt": "YYYY-MM-DD"
 	}

View File

@@ -1,7 +1,6 @@
-import fs from 'fs';
 import path from 'path';
+import * as fs from 'fs';
-import { log, readJSON, writeJSON } from '../utils.js';
+import { readJSON, writeJSON, log, findTaskById } from '../utils.js';
 import generateTaskFiles from './generate-task-files.js';
 import taskExists from './task-exists.js';
@@ -172,7 +171,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
 	}
 	// Save the updated raw data structure
-	writeJSON(tasksPath, fullTaggedData);
+	writeJSON(tasksPath, fullTaggedData, projectRoot, currentTag);
 	// Delete task files AFTER saving tasks.json
 	for (const taskIdNum of tasksToDeleteFiles) {
@@ -195,10 +194,10 @@ async function removeTask(tasksPath, taskIds, context = {}) {
 	// Generate updated task files ONCE, with context
 	try {
-		// await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
-		// 	projectRoot,
-		// 	tag: currentTag
-		// });
+		await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
+			projectRoot,
+			tag: currentTag
+		});
 		results.messages.push('Task files regenerated successfully.');
 	} catch (genError) {
 		const genErrMsg = `Failed to regenerate task files: ${genError.message}`;

View File

@@ -0,0 +1,89 @@
import {
	getConfig,
	isConfigFilePresent,
	writeConfig
} from '../config-manager.js';
import { findConfigPath } from '../../../src/utils/path-utils.js';
import { log } from '../utils.js';

function setResponseLanguage(lang, options = {}) {
	const { mcpLog, projectRoot } = options;
	const report = (level, ...args) => {
		if (mcpLog && typeof mcpLog[level] === 'function') {
			mcpLog[level](...args);
		}
	};
	// Use centralized config path finding instead of hardcoded path
	const configPath = findConfigPath(null, { projectRoot });
	const configExists = isConfigFilePresent(projectRoot);
	log(
		'debug',
		`Checking for config file using findConfigPath, found: ${configPath}`
	);
	log(
		'debug',
		`Checking config file using isConfigFilePresent(), exists: ${configExists}`
	);
	if (!configExists) {
		return {
			success: false,
			error: {
				code: 'CONFIG_MISSING',
				message:
					'The configuration file is missing. Run "task-master models --setup" to create it.'
			}
		};
	}
	// Validate response language
	if (typeof lang !== 'string' || lang.trim() === '') {
		return {
			success: false,
			error: {
				code: 'INVALID_RESPONSE_LANGUAGE',
				message: `Invalid response language: ${lang}. Must be a non-empty string.`
			}
		};
	}
	try {
		const currentConfig = getConfig(projectRoot);
		currentConfig.global.responseLanguage = lang;
		const writeResult = writeConfig(currentConfig, projectRoot);
		if (!writeResult) {
			return {
				success: false,
				error: {
					code: 'WRITE_ERROR',
					message: 'Error writing updated configuration to configuration file'
				}
			};
		}
		const successMessage = `Successfully set response language to: ${lang}`;
		report('info', successMessage);
		return {
			success: true,
			data: {
				responseLanguage: lang,
				message: successMessage
			}
		};
	} catch (error) {
		report('error', `Error setting response language: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'SET_RESPONSE_LANGUAGE_ERROR',
				message: error.message
			}
		};
	}
}

export default setResponseLanguage;
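The input check in this new module reduces to one guard plus a structured error shape. A standalone mirror of just that validation — the `validateLanguage` name is mine and the function is detached from the config machinery for illustration:

```javascript
// Hypothetical standalone mirror of the validation in setResponseLanguage.
// Returns the same { success, error } / { success, data } shape on each path.
function validateLanguage(lang) {
	if (typeof lang !== 'string' || lang.trim() === '') {
		return {
			success: false,
			error: {
				code: 'INVALID_RESPONSE_LANGUAGE',
				message: `Invalid response language: ${lang}. Must be a non-empty string.`
			}
		};
	}
	return { success: true, data: { responseLanguage: lang } };
}

console.log(validateLanguage('Spanish').success); // true
console.log(validateLanguage('   ').success); // false (whitespace-only)
console.log(validateLanguage(42).error.code); // "INVALID_RESPONSE_LANGUAGE"
```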

View File

@@ -132,7 +132,7 @@ async function setTaskStatus(
 	// Write the updated raw data back to the file
 	// The writeJSON function will automatically filter out _rawTaggedData
-	writeJSON(tasksPath, rawData);
+	writeJSON(tasksPath, rawData, options.projectRoot, currentTag);
 	// Validate dependencies after status update
 	log('info', 'Validating dependencies after status update...');

View File

@@ -145,8 +145,8 @@ async function createTag(
 			}
 		}
-		// Write the clean data back to file
-		writeJSON(tasksPath, cleanData);
+		// Write the clean data back to file with proper context to avoid tag corruption
+		writeJSON(tasksPath, cleanData, projectRoot);
 		logFn.success(`Successfully created tag "${tagName}"`);
@@ -365,8 +365,8 @@ async function deleteTag(
 			}
 		}
-		// Write the clean data back to file
-		writeJSON(tasksPath, cleanData);
+		// Write the clean data back to file with proper context to avoid tag corruption
+		writeJSON(tasksPath, cleanData, projectRoot);
 		logFn.success(`Successfully deleted tag "${tagName}"`);
@@ -485,7 +485,7 @@ async function enhanceTagsWithMetadata(tasksPath, rawData, context = {}) {
 				cleanData[key] = value;
 			}
 		}
-		writeJSON(tasksPath, cleanData);
+		writeJSON(tasksPath, cleanData, context.projectRoot);
 		}
 	} catch (error) {
 		// Don't throw - just log and continue
@@ -905,8 +905,8 @@ async function renameTag(
 			}
 		}
-		// Write the clean data back to file
-		writeJSON(tasksPath, cleanData);
+		// Write the clean data back to file with proper context to avoid tag corruption
+		writeJSON(tasksPath, cleanData, projectRoot);
 		// Get task count
 		const tasks = getTasksForTag(rawData, newName);
@@ -1062,8 +1062,8 @@ async function copyTag(
 			}
 		}
-		// Write the clean data back to file
-		writeJSON(tasksPath, cleanData);
+		// Write the clean data back to file with proper context to avoid tag corruption
+		writeJSON(tasksPath, cleanData, projectRoot);
 		logFn.success(
 			`Successfully copied tag from "${sourceName}" to "${targetName}"`

View File

@@ -9,7 +9,8 @@ import {
 	readJSON,
 	writeJSON,
 	truncate,
-	isSilentMode
+	isSilentMode,
+	getCurrentTag
 } from '../utils.js';
 import {
@@ -222,6 +223,7 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
  * @param {Object} [context.session] - Session object from MCP server.
  * @param {Object} [context.mcpLog] - MCP logger object.
  * @param {string} [outputFormat='text'] - Output format ('text' or 'json').
+ * @param {string} [tag=null] - Tag associated with the tasks.
  */
 async function updateTasks(
 	tasksPath,
@@ -231,7 +233,7 @@ async function updateTasks(
 	context = {},
 	outputFormat = 'text' // Default to text for CLI
 ) {
-	const { session, mcpLog, projectRoot: providedProjectRoot } = context;
+	const { session, mcpLog, projectRoot: providedProjectRoot, tag } = context;
 	// Use mcpLog if available, otherwise use the imported consoleLog function
 	const logFn = mcpLog || consoleLog;
 	// Flag to easily check which logger type we have
@@ -255,8 +257,11 @@ async function updateTasks(
 		throw new Error('Could not determine project root directory');
 	}
-	// --- Task Loading/Filtering (Unchanged) ---
-	const data = readJSON(tasksPath, projectRoot);
+	// Determine the current tag - prioritize explicit tag, then context.tag, then current tag
+	const currentTag = tag || getCurrentTag(projectRoot) || 'master';
+	// --- Task Loading/Filtering (Updated to pass projectRoot and tag) ---
+	const data = readJSON(tasksPath, projectRoot, currentTag);
 	if (!data || !data.tasks)
 		throw new Error(`No valid tasks found in ${tasksPath}`);
 	const tasksToUpdate = data.tasks.filter(
@@ -428,7 +433,7 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
 		isMCP
 	);
-	// --- Update Tasks Data (Unchanged) ---
+	// --- Update Tasks Data (Updated writeJSON call) ---
 	if (!Array.isArray(parsedUpdatedTasks)) {
 		// Should be caught by parser, but extra check
 		throw new Error(
@@ -467,7 +472,8 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
 			`Applied updates to ${actualUpdateCount} tasks in the dataset.`
 		);
-		writeJSON(tasksPath, data);
+		// Fix: Pass projectRoot and currentTag to writeJSON
+		writeJSON(tasksPath, data, projectRoot, currentTag);
 		if (isMCP)
 			logFn.info(
 				`Successfully updated ${actualUpdateCount} tasks in ${tasksPath}`

View File

@@ -0,0 +1,57 @@
/**
 * update-config-tokens.js
 * Updates config.json with correct maxTokens values from supported-models.json
 */

import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { dirname } from 'path';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

/**
 * Updates the config file with correct maxTokens values from supported-models.json
 * @param {string} configPath - Path to the config.json file to update
 * @returns {boolean} True if successful, false otherwise
 */
export function updateConfigMaxTokens(configPath) {
	try {
		// Load supported models
		const supportedModelsPath = path.join(__dirname, 'supported-models.json');
		const supportedModels = JSON.parse(
			fs.readFileSync(supportedModelsPath, 'utf-8')
		);

		// Load config
		const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));

		// Update each role's maxTokens if the model exists in supported-models.json
		const roles = ['main', 'research', 'fallback'];
		for (const role of roles) {
			if (config.models && config.models[role]) {
				const provider = config.models[role].provider;
				const modelId = config.models[role].modelId;

				// Find the model in supported models
				if (supportedModels[provider]) {
					const modelData = supportedModels[provider].find(
						(m) => m.id === modelId
					);
					if (modelData && modelData.max_tokens) {
						config.models[role].maxTokens = modelData.max_tokens;
					}
				}
			}
		}

		// Write back the updated config
		fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
		return true;
	} catch (error) {
		console.error('Error updating config maxTokens:', error.message);
		return false;
	}
}
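Stripped of file I/O, the sync performed by this script is a small in-memory transform. A sketch under that assumption (the `syncMaxTokens` name and the sample model data are mine):

```javascript
// Hypothetical in-memory version of the maxTokens sync above; reading and
// writing the JSON files is omitted so the core logic is visible.
function syncMaxTokens(config, supportedModels) {
	for (const role of ['main', 'research', 'fallback']) {
		const entry = config.models?.[role];
		if (!entry) continue;
		const models = supportedModels[entry.provider] || [];
		const match = models.find((m) => m.id === entry.modelId);
		if (match && match.max_tokens) {
			entry.maxTokens = match.max_tokens; // overwrite stale value
		}
	}
	return config;
}

const config = {
	models: { main: { provider: 'anthropic', modelId: 'claude-x', maxTokens: 1000 } }
};
const supported = { anthropic: [{ id: 'claude-x', max_tokens: 65536 }] };
console.log(syncMaxTokens(config, supported).models.main.maxTokens); // 65536
```

Unknown providers or model ids are simply skipped, matching the real script's "update only when found" behavior.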

View File

@@ -64,6 +64,51 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {
 	return undefined;
 }
+// --- Tag-Aware Path Resolution Utility ---
+/**
+ * Slugifies a tag name to be filesystem-safe
+ * @param {string} tagName - The tag name to slugify
+ * @returns {string} Slugified tag name safe for filesystem use
+ */
+function slugifyTagForFilePath(tagName) {
+	if (!tagName || typeof tagName !== 'string') {
+		return 'unknown-tag';
+	}
+	// Replace invalid filesystem characters with hyphens and clean up
+	return tagName
+		.replace(/[^a-zA-Z0-9_-]/g, '-') // Replace invalid chars with hyphens
+		.replace(/^-+|-+$/g, '') // Remove leading/trailing hyphens
+		.replace(/-+/g, '-') // Collapse multiple hyphens
+		.toLowerCase() // Convert to lowercase
+		.substring(0, 50); // Limit length to prevent overly long filenames
+}
+/**
+ * Resolves a file path to be tag-aware, following the pattern used by other commands.
+ * For non-master tags, appends _slugified-tagname before the file extension.
+ * @param {string} basePath - The base file path (e.g., '.taskmaster/reports/task-complexity-report.json')
+ * @param {string|null} tag - The tag name (null, undefined, or 'master' uses base path)
+ * @param {string} [projectRoot='.'] - The project root directory
+ * @returns {string} The resolved file path
+ */
+function getTagAwareFilePath(basePath, tag, projectRoot = '.') {
+	// Use path.parse and format for clean tag insertion
+	const parsedPath = path.parse(basePath);
+	if (!tag || tag === 'master') {
+		return path.join(projectRoot, basePath);
+	}
+	// Slugify the tag for filesystem safety
+	const slugifiedTag = slugifyTagForFilePath(tag);
+	// Append slugified tag before file extension
+	parsedPath.base = `${parsedPath.name}_${slugifiedTag}${parsedPath.ext}`;
+	const relativePath = path.format(parsedPath);
+	return path.join(projectRoot, relativePath);
+}
 // --- Project Root Finding Utility ---
 /**
  * Recursively searches upwards for project root starting from a given directory.
@@ -967,6 +1012,21 @@ function truncate(text, maxLength) {
 	return `${text.slice(0, maxLength - 3)}...`;
 }
+/**
+ * Checks if array or object are empty
+ * @param {*} value - The value to check
+ * @returns {boolean} True if empty, false otherwise
+ */
+function isEmpty(value) {
+	if (Array.isArray(value)) {
+		return value.length === 0;
+	} else if (typeof value === 'object' && value !== null) {
+		return Object.keys(value).length === 0;
+	}
+	return false; // Not an array or object, or is null
+}
 /**
  * Find cycles in a dependency graph using DFS
  * @param {string} subtaskId - Current subtask ID
@@ -1328,6 +1388,7 @@ export {
 	formatTaskId,
 	findTaskById,
 	truncate,
+	isEmpty,
 	findCycles,
 	toKebabCase,
 	detectCamelCaseFlags,
@@ -1338,6 +1399,8 @@ export {
 	addComplexityToTask,
 	resolveEnvVariable,
 	findProjectRoot,
+	getTagAwareFilePath,
+	slugifyTagForFilePath,
 	aggregateTelemetry,
 	getCurrentTag,
 	resolveTag,
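To see the naming pattern these utilities produce (e.g. the `task-complexity-report_feature-branch.json` file mentioned in the changeset), here is a standalone mirror of the two functions above — renamed `slugifyTag`/`tagAwarePath` to make clear it is an illustrative copy, not an import from `utils.js`:

```javascript
import path from 'path';

// Standalone mirror of slugifyTagForFilePath / getTagAwareFilePath,
// shown to illustrate how tag-specific file names are derived.
function slugifyTag(tagName) {
	if (!tagName || typeof tagName !== 'string') return 'unknown-tag';
	return tagName
		.replace(/[^a-zA-Z0-9_-]/g, '-') // invalid filesystem chars -> hyphens
		.replace(/^-+|-+$/g, '') // trim leading/trailing hyphens
		.replace(/-+/g, '-') // collapse runs of hyphens
		.toLowerCase()
		.substring(0, 50);
}

function tagAwarePath(basePath, tag, projectRoot = '.') {
	if (!tag || tag === 'master') return path.join(projectRoot, basePath);
	const parsed = path.parse(basePath);
	// path.format prefers `base` over name/ext, so overwriting it is enough
	parsed.base = `${parsed.name}_${slugifyTag(tag)}${parsed.ext}`;
	return path.join(projectRoot, path.format(parsed));
}

console.log(tagAwarePath('.taskmaster/reports/task-complexity-report.json', 'feature/Expand!'));
// → .taskmaster/reports/task-complexity-report_feature-expand.json
```

The `master` tag deliberately maps to the unsuffixed base path, which keeps existing single-tag projects working unchanged.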

View File

@@ -1,4 +1,4 @@
-import { generateText, streamText, generateObject } from 'ai';
+import { generateObject, generateText, streamText } from 'ai';
import { log } from '../../scripts/modules/index.js';
/**
@@ -109,7 +109,7 @@ export class BaseAIProvider {
`Generating ${this.name} text with model: ${params.modelId}`
);
-const client = this.getClient(params);
+const client = await this.getClient(params);
const result = await generateText({
model: client(params.modelId),
messages: params.messages,
@@ -145,7 +145,7 @@ export class BaseAIProvider {
log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);
-const client = this.getClient(params);
+const client = await this.getClient(params);
const stream = await streamText({
model: client(params.modelId),
messages: params.messages,
@@ -184,7 +184,7 @@ export class BaseAIProvider {
`Generating ${this.name} object ('${params.objectName}') with model: ${params.modelId}`
);
-const client = this.getClient(params);
+const client = await this.getClient(params);
const result = await generateObject({
model: client(params.modelId),
messages: params.messages,
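The repeated one-word change above (`await this.getClient(params)`) is what lets subclasses make `getClient` asynchronous. A minimal sketch of why the `await` matters (class and method names here are illustrative, not from the codebase):

```javascript
// Sketch: once getClient() may be async (e.g. it dynamically imports an SDK),
// call sites must await it, or they receive a pending Promise instead of a client.
class SketchProvider {
	async getClient() {
		// Stand-in for: const mod = await import('some-provider-sdk');
		return (modelId) => ({ modelId });
	}

	async generateText(modelId) {
		const client = await this.getClient(); // without await, client(...) would throw
		return client(modelId);
	}
}

new SketchProvider().generateText('m1').then((m) => console.log(m.modelId)); // m1
```

Synchronous subclasses are unaffected: `await` on a non-Promise value simply passes it through.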

View File

@@ -7,6 +7,7 @@
import { createClaudeCode } from './custom-sdk/claude-code/index.js';
import { BaseAIProvider } from './base-provider.js';
+import { getClaudeCodeSettingsForCommand } from '../../scripts/modules/config-manager.js';
export class ClaudeCodeProvider extends BaseAIProvider {
constructor() {
@@ -26,6 +27,7 @@ export class ClaudeCodeProvider extends BaseAIProvider {
/**
 * Creates and returns a Claude Code client instance.
 * @param {object} params - Parameters for client initialization
+ * @param {string} [params.commandName] - Name of the command invoking the service
 * @param {string} [params.baseURL] - Optional custom API endpoint (not used by Claude Code)
 * @returns {Function} Claude Code client function
 * @throws {Error} If initialization fails
@@ -35,10 +37,7 @@ export class ClaudeCodeProvider extends BaseAIProvider {
// Claude Code doesn't use API keys or base URLs
// Just return the provider factory
return createClaudeCode({
-defaultSettings: {
-// Add any default settings if needed
-// These can be overridden per request
-}
+defaultSettings: getClaudeCodeSettingsForCommand(params?.commandName)
});
} catch (error) {
this.handleError('client initialization', error);

View File

@@ -0,0 +1,656 @@
/**
* src/ai-providers/gemini-cli.js
*
* Implementation for interacting with Gemini models via Gemini CLI
* using the ai-sdk-provider-gemini-cli package.
*/
import { generateObject, generateText, streamText } from 'ai';
import { parse } from 'jsonc-parser';
import { BaseAIProvider } from './base-provider.js';
import { log } from '../../scripts/modules/index.js';
let createGeminiProvider;
async function loadGeminiCliModule() {
if (!createGeminiProvider) {
try {
const mod = await import('ai-sdk-provider-gemini-cli');
createGeminiProvider = mod.createGeminiProvider;
} catch (err) {
throw new Error(
"Gemini CLI SDK is not installed. Please install 'ai-sdk-provider-gemini-cli' to use the gemini-cli provider."
);
}
}
}
export class GeminiCliProvider extends BaseAIProvider {
constructor() {
super();
this.name = 'Gemini CLI';
}
/**
* Override validateAuth to handle Gemini CLI authentication options
* @param {object} params - Parameters to validate
*/
validateAuth(params) {
// Gemini CLI is designed to use pre-configured OAuth authentication
// Users choose gemini-cli specifically to leverage their existing
// gemini auth login credentials, not to use API keys.
// We support API keys for compatibility, but the expected usage
// is through CLI authentication (no API key required).
// No validation needed - the SDK will handle auth internally
}
/**
* Creates and returns a Gemini CLI client instance.
* @param {object} params - Parameters for client initialization
* @param {string} [params.apiKey] - Optional Gemini API key (rarely used with gemini-cli)
* @param {string} [params.baseURL] - Optional custom API endpoint
* @returns {Promise<Function>} Gemini CLI client function
* @throws {Error} If initialization fails
*/
async getClient(params) {
try {
// Load the Gemini CLI module dynamically
await loadGeminiCliModule();
// Primary use case: Use existing gemini CLI authentication
// Secondary use case: Direct API key (for compatibility)
let authOptions = {};
if (params.apiKey && params.apiKey !== 'gemini-cli-no-key-required') {
// API key provided - use it for compatibility
authOptions = {
authType: 'api-key',
apiKey: params.apiKey
};
} else {
// Expected case: Use gemini CLI authentication
// Requires: gemini auth login (pre-configured)
authOptions = {
authType: 'oauth-personal'
};
}
// Add baseURL if provided (for custom endpoints)
if (params.baseURL) {
authOptions.baseURL = params.baseURL;
}
// Create and return the provider
return createGeminiProvider(authOptions);
} catch (error) {
this.handleError('client initialization', error);
}
}
/**
* Extracts system messages from the messages array and returns them separately.
* This is needed because ai-sdk-provider-gemini-cli expects system prompts as a separate parameter.
* @param {Array} messages - Array of message objects
* @param {Object} options - Options for system prompt enhancement
* @param {boolean} options.enforceJsonOutput - Whether to add JSON enforcement to system prompt
* @returns {Object} - {systemPrompt: string|undefined, messages: Array}
*/
_extractSystemMessage(messages, options = {}) {
if (!messages || !Array.isArray(messages)) {
return { systemPrompt: undefined, messages: messages || [] };
}
const systemMessages = messages.filter((msg) => msg.role === 'system');
const nonSystemMessages = messages.filter((msg) => msg.role !== 'system');
// Combine multiple system messages if present
let systemPrompt =
systemMessages.length > 0
? systemMessages.map((msg) => msg.content).join('\n\n')
: undefined;
// Add Gemini CLI specific JSON enforcement if requested
if (options.enforceJsonOutput) {
const jsonEnforcement = this._getJsonEnforcementPrompt();
systemPrompt = systemPrompt
? `${systemPrompt}\n\n${jsonEnforcement}`
: jsonEnforcement;
}
return { systemPrompt, messages: nonSystemMessages };
}
/**
* Gets a Gemini CLI specific system prompt to enforce strict JSON output
* @returns {string} JSON enforcement system prompt
*/
_getJsonEnforcementPrompt() {
return `CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown formatting, code block markers, or conversational phrases like "Here is" or "Of course". Your entire response must be parseable JSON that starts with { or [ and ends with } or ]. No exceptions.`;
}
/**
* Checks if a string is valid JSON
* @param {string} text - Text to validate
* @returns {boolean} True if valid JSON
*/
_isValidJson(text) {
if (!text || typeof text !== 'string') {
return false;
}
try {
JSON.parse(text.trim());
return true;
} catch {
return false;
}
}
/**
* Detects if the user prompt is requesting JSON output
* @param {Array} messages - Array of message objects
* @returns {boolean} True if JSON output is likely expected
*/
_detectJsonRequest(messages) {
const userMessages = messages.filter((msg) => msg.role === 'user');
const combinedText = userMessages
.map((msg) => msg.content)
.join(' ')
.toLowerCase();
// Look for indicators that JSON output is expected
const jsonIndicators = [
'json',
'respond only with',
'return only',
'output only',
'format:',
'structure:',
'schema:',
'{"',
'[{',
'subtasks',
'array',
'object'
];
return jsonIndicators.some((indicator) => combinedText.includes(indicator));
}
/**
* Simplifies complex prompts for gemini-cli to improve JSON output compliance
* @param {Array} messages - Array of message objects
* @returns {Array} Simplified messages array
*/
_simplifyJsonPrompts(messages) {
// First, check if this is an expand-task operation by looking at the system message
const systemMsg = messages.find((m) => m.role === 'system');
const isExpandTask =
systemMsg &&
systemMsg.content.includes(
'You are an AI assistant helping with task breakdown. Generate exactly'
);
if (!isExpandTask) {
return messages; // Not an expand task, return unchanged
}
// Extract subtask count from system message
const subtaskCountMatch = systemMsg.content.match(
/Generate exactly (\d+) subtasks/
);
const subtaskCount = subtaskCountMatch ? subtaskCountMatch[1] : '10';
log(
'debug',
`${this.name} detected expand-task operation, simplifying for ${subtaskCount} subtasks`
);
return messages.map((msg) => {
if (msg.role !== 'user') {
return msg;
}
// For expand-task user messages, create a much simpler, more direct prompt
// that doesn't depend on specific task content
const simplifiedPrompt = `Generate exactly ${subtaskCount} subtasks in the following JSON format.
CRITICAL INSTRUCTION: You must respond with ONLY valid JSON. No explanatory text, no "Here is", no "Of course", no markdown - just the JSON object.
Required JSON structure:
{
"subtasks": [
{
"id": 1,
"title": "Specific actionable task title",
"description": "Clear task description",
"dependencies": [],
"details": "Implementation details and guidance",
"testStrategy": "Testing approach"
}
]
}
Generate ${subtaskCount} subtasks based on the original task context. Return ONLY the JSON object.`;
log(
'debug',
`${this.name} simplified user prompt for better JSON compliance`
);
return { ...msg, content: simplifiedPrompt };
});
}
/**
* Extract JSON from Gemini's response using a tolerant parser.
*
* Optimized approach that progressively tries different parsing strategies:
* 1. Direct parsing after cleanup
* 2. Smart boundary detection with single-pass analysis
* 3. Limited character-by-character fallback for edge cases
*
* @param {string} text - Raw text which may contain JSON
* @returns {string} A valid JSON string if extraction succeeds, otherwise the original text
*/
extractJson(text) {
if (!text || typeof text !== 'string') {
return text;
}
let content = text.trim();
// Early exit for very short content
if (content.length < 2) {
return text;
}
// Strip common wrappers in a single pass
content = content
// Remove markdown fences
.replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
// Remove variable declarations
.replace(/^\s*(?:const|let|var)\s+\w+\s*=\s*([\s\S]*?)(?:;|\s*)$/i, '$1')
// Remove common prefixes
.replace(/^(?:Here's|The)\s+(?:the\s+)?JSON.*?[:]\s*/i, '')
.trim();
// Find the first JSON-like structure
const firstObj = content.indexOf('{');
const firstArr = content.indexOf('[');
if (firstObj === -1 && firstArr === -1) {
return text;
}
const start =
firstArr === -1
? firstObj
: firstObj === -1
? firstArr
: Math.min(firstObj, firstArr);
content = content.slice(start);
// Optimized parsing function with error collection
const tryParse = (value) => {
if (!value || value.length < 2) return undefined;
const errors = [];
try {
const result = parse(value, errors, {
allowTrailingComma: true,
allowEmptyContent: false
});
if (errors.length === 0 && result !== undefined) {
return JSON.stringify(result, null, 2);
}
} catch {
// Parsing failed completely
}
return undefined;
};
// Try parsing the full content first
const fullParse = tryParse(content);
if (fullParse !== undefined) {
return fullParse;
}
// Smart boundary detection - single pass with optimizations
const openChar = content[0];
const closeChar = openChar === '{' ? '}' : ']';
let depth = 0;
let inString = false;
let escapeNext = false;
let lastValidEnd = -1;
// Single-pass boundary detection with early termination
for (let i = 0; i < content.length && i < 10000; i++) {
// Limit scan for performance
const char = content[i];
if (escapeNext) {
escapeNext = false;
continue;
}
if (char === '\\') {
escapeNext = true;
continue;
}
if (char === '"') {
inString = !inString;
continue;
}
if (inString) continue;
if (char === openChar) {
depth++;
} else if (char === closeChar) {
depth--;
if (depth === 0) {
lastValidEnd = i + 1;
// Try parsing immediately on first valid boundary
const candidate = content.slice(0, lastValidEnd);
const parsed = tryParse(candidate);
if (parsed !== undefined) {
return parsed;
}
}
}
}
// If we found valid boundaries but parsing failed, try limited fallback
if (lastValidEnd > 0) {
const maxAttempts = Math.min(5, Math.floor(lastValidEnd / 100)); // Limit attempts
for (let i = 0; i < maxAttempts; i++) {
const testEnd = Math.max(
lastValidEnd - i * 50,
Math.floor(lastValidEnd * 0.8)
);
const candidate = content.slice(0, testEnd);
const parsed = tryParse(candidate);
if (parsed !== undefined) {
return parsed;
}
}
}
return text;
}
/**
* Generates text using Gemini CLI model
* Overrides base implementation to properly handle system messages and enforce JSON output when needed
*/
async generateText(params) {
try {
this.validateParams(params);
this.validateMessages(params.messages);
log(
'debug',
`Generating ${this.name} text with model: ${params.modelId}`
);
// Detect if JSON output is expected and enforce it for better gemini-cli compatibility
const enforceJsonOutput = this._detectJsonRequest(params.messages);
// Debug logging to understand what's happening
log('debug', `${this.name} JSON detection analysis:`, {
enforceJsonOutput,
messageCount: params.messages.length,
messages: params.messages.map((msg) => ({
role: msg.role,
contentPreview: msg.content
? msg.content.substring(0, 200) + '...'
: 'empty'
}))
});
if (enforceJsonOutput) {
log(
'debug',
`${this.name} detected JSON request - applying strict JSON enforcement system prompt`
);
}
// For gemini-cli, simplify complex prompts before processing
let processedMessages = params.messages;
if (enforceJsonOutput) {
processedMessages = this._simplifyJsonPrompts(params.messages);
}
// Extract system messages for separate handling with optional JSON enforcement
const { systemPrompt, messages } = this._extractSystemMessage(
processedMessages,
{ enforceJsonOutput }
);
// Debug the final system prompt being sent
log('debug', `${this.name} final system prompt:`, {
systemPromptLength: systemPrompt ? systemPrompt.length : 0,
systemPromptPreview: systemPrompt
? systemPrompt.substring(0, 300) + '...'
: 'none',
finalMessageCount: messages.length
});
const client = await this.getClient(params);
const result = await generateText({
model: client(params.modelId),
system: systemPrompt,
messages: messages,
maxTokens: params.maxTokens,
temperature: params.temperature
});
// If we detected a JSON request and gemini-cli returned conversational text,
// attempt to extract JSON from the response
let finalText = result.text;
if (enforceJsonOutput && result.text && !this._isValidJson(result.text)) {
log(
'debug',
`${this.name} response appears conversational, attempting JSON extraction`
);
// Log first 1000 chars of the response to see what Gemini actually returned
log('debug', `${this.name} raw response preview:`, {
responseLength: result.text.length,
responseStart: result.text.substring(0, 1000)
});
const extractedJson = this.extractJson(result.text);
if (this._isValidJson(extractedJson)) {
log(
'debug',
`${this.name} successfully extracted JSON from conversational response`
);
finalText = extractedJson;
} else {
log(
'debug',
`${this.name} JSON extraction failed, returning original response`
);
// Log what extraction returned to debug why it failed
log('debug', `${this.name} extraction result preview:`, {
extractedLength: extractedJson ? extractedJson.length : 0,
extractedStart: extractedJson
? extractedJson.substring(0, 500)
: 'null'
});
}
}
log(
'debug',
`${this.name} generateText completed successfully for model: ${params.modelId}`
);
return {
text: finalText,
usage: {
inputTokens: result.usage?.promptTokens,
outputTokens: result.usage?.completionTokens,
totalTokens: result.usage?.totalTokens
}
};
} catch (error) {
this.handleError('text generation', error);
}
}
/**
* Streams text using Gemini CLI model
* Overrides base implementation to properly handle system messages and enforce JSON output when needed
*/
async streamText(params) {
try {
this.validateParams(params);
this.validateMessages(params.messages);
log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);
// Detect if JSON output is expected and enforce it for better gemini-cli compatibility
const enforceJsonOutput = this._detectJsonRequest(params.messages);
// Debug logging to understand what's happening
log('debug', `${this.name} JSON detection analysis:`, {
enforceJsonOutput,
messageCount: params.messages.length,
messages: params.messages.map((msg) => ({
role: msg.role,
contentPreview: msg.content
? msg.content.substring(0, 200) + '...'
: 'empty'
}))
});
if (enforceJsonOutput) {
log(
'debug',
`${this.name} detected JSON request - applying strict JSON enforcement system prompt`
);
}
// Extract system messages for separate handling with optional JSON enforcement
const { systemPrompt, messages } = this._extractSystemMessage(
params.messages,
{ enforceJsonOutput }
);
const client = await this.getClient(params);
const stream = await streamText({
model: client(params.modelId),
system: systemPrompt,
messages: messages,
maxTokens: params.maxTokens,
temperature: params.temperature
});
log(
'debug',
`${this.name} streamText initiated successfully for model: ${params.modelId}`
);
// Note: For streaming, we can't intercept and modify the response in real-time
// The JSON extraction would need to happen on the consuming side
return stream;
} catch (error) {
this.handleError('text streaming', error);
}
}
/**
* Generates a structured object using Gemini CLI model
* Overrides base implementation to handle Gemini-specific JSON formatting issues and system messages
*/
async generateObject(params) {
try {
// First try the standard generateObject from base class
return await super.generateObject(params);
} catch (error) {
// If it's a JSON parsing error, try to extract and parse JSON manually
if (error.message?.includes('JSON') || error.message?.includes('parse')) {
log(
'debug',
`Gemini CLI generateObject failed with parsing error, attempting manual extraction`
);
try {
// Validate params first
this.validateParams(params);
this.validateMessages(params.messages);
if (!params.schema) {
throw new Error('Schema is required for object generation');
}
if (!params.objectName) {
throw new Error('Object name is required for object generation');
}
// Extract system messages for separate handling with JSON enforcement
const { systemPrompt, messages } = this._extractSystemMessage(
params.messages,
{ enforceJsonOutput: true }
);
// Call generateObject directly with our client
const client = await this.getClient(params);
const result = await generateObject({
model: client(params.modelId),
system: systemPrompt,
messages: messages,
schema: params.schema,
mode: 'json', // Use json mode instead of auto for Gemini
maxTokens: params.maxTokens,
temperature: params.temperature
});
// If we get rawResponse text, try to extract JSON from it
if (result.rawResponse?.text && !result.object) {
const extractedJson = this.extractJson(result.rawResponse.text);
try {
result.object = JSON.parse(extractedJson);
} catch (parseError) {
log(
'error',
`Failed to parse extracted JSON: ${parseError.message}`
);
log(
'debug',
`Extracted JSON: ${extractedJson.substring(0, 500)}...`
);
throw new Error(
`Gemini CLI returned invalid JSON that could not be parsed: ${parseError.message}`
);
}
}
return {
object: result.object,
usage: {
inputTokens: result.usage?.promptTokens,
outputTokens: result.usage?.completionTokens,
totalTokens: result.usage?.totalTokens
}
};
} catch (retryError) {
log(
'error',
`Gemini CLI manual JSON extraction failed: ${retryError.message}`
);
// Re-throw the original error with more context
throw new Error(
`${this.name} failed to generate valid JSON object: ${error.message}`
);
}
}
// For non-parsing errors, just re-throw
throw error;
}
}
}
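The boundary-detection pass inside `extractJson` above can be reduced to the following self-contained sketch. The `extractFirstJson` name is hypothetical; the real method additionally strips markdown fences and variable declarations, and parses with the tolerant `jsonc-parser` rather than plain `JSON.parse`:

```javascript
// Walk the text, track brace depth outside string literals, and attempt a
// parse each time depth returns to zero (the same strategy as extractJson).
function extractFirstJson(text) {
	const start = text.search(/[{[]/);
	if (start === -1) return text; // no JSON-like structure at all
	const open = text[start];
	const close = open === '{' ? '}' : ']';
	let depth = 0;
	let inString = false;
	let escape = false;
	for (let i = start; i < text.length; i++) {
		const ch = text[i];
		if (escape) { escape = false; continue; }
		if (ch === '\\') { escape = true; continue; }
		if (ch === '"') { inString = !inString; continue; }
		if (inString) continue; // braces inside strings don't count
		if (ch === open) depth++;
		else if (ch === close && --depth === 0) {
			const candidate = text.slice(start, i + 1);
			try {
				return JSON.stringify(JSON.parse(candidate), null, 2);
			} catch {
				// keep scanning for a later valid boundary
			}
		}
	}
	return text; // fall back to the original text, like extractJson does
}

console.log(extractFirstJson('Here is the JSON: {"a": 1} hope it helps'));
// {
//   "a": 1
// }
```

The string/escape tracking is what keeps a `}` inside a string value (e.g. `{"s": "}"}`) from being mistaken for the closing boundary.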

View File

@@ -14,3 +14,4 @@ export { BedrockAIProvider } from './bedrock.js';
export { AzureProvider } from './azure.js';
export { VertexAIProvider } from './google-vertex.js';
export { ClaudeCodeProvider } from './claude-code.js';
+export { GeminiCliProvider } from './gemini-cli.js';

src/constants/commands.js (new file)
View File

@@ -0,0 +1,17 @@
/**
* Command related constants
* Defines which commands trigger AI processing
*/
// Command names that trigger AI processing
export const AI_COMMAND_NAMES = [
'add-task',
'analyze-complexity',
'expand-task',
'parse-prd',
'research',
'research-save',
'update-subtask',
'update-task',
'update-tasks'
];
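A minimal sketch of how a gatekeeper might consult this list before doing AI provider setup (the `isAICommand` helper is illustrative, not part of the codebase):

```javascript
// Command names that trigger AI processing (copied from the constant above).
const AI_COMMAND_NAMES = [
	'add-task',
	'analyze-complexity',
	'expand-task',
	'parse-prd',
	'research',
	'research-save',
	'update-subtask',
	'update-task',
	'update-tasks'
];

// Hypothetical helper: decide whether a command needs an AI provider configured.
function isAICommand(commandName) {
	return AI_COMMAND_NAMES.includes(commandName);
}

console.log(isAICommand('expand-task')); // true
console.log(isAICommand('list')); // false
```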

View File

@@ -20,7 +20,8 @@ export const CUSTOM_PROVIDERS = {
BEDROCK: 'bedrock',
OPENROUTER: 'openrouter',
OLLAMA: 'ollama',
-CLAUDE_CODE: 'claude-code'
+CLAUDE_CODE: 'claude-code',
+GEMINI_CLI: 'gemini-cli'
};
// Custom providers array (for backward compatibility and iteration)

View File

@@ -25,7 +25,7 @@ function formatJSONWithTabs(obj) {
}
// Structure matches project conventions (see scripts/init.js)
-export function setupMCPConfiguration(projectDir, mcpConfigPath) {
+export function setupMCPConfiguration(projectRoot, mcpConfigPath) {
// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
if (!mcpConfigPath) {
log(
@@ -36,7 +36,7 @@ export function setupMCPConfiguration(projectDir, mcpConfigPath) {
}
// Build the full path to the MCP config file
-const mcpPath = path.join(projectDir, mcpConfigPath);
+const mcpPath = path.join(projectRoot, mcpConfigPath);
const configDir = path.dirname(mcpPath);
log('info', `Setting up MCP configuration at ${mcpPath}...`);
@@ -140,11 +140,11 @@ export function setupMCPConfiguration(projectDir, mcpConfigPath) {
/**
 * Remove Task Master MCP server configuration from an existing mcp.json file
 * Only removes Task Master entries, preserving other MCP servers
- * @param {string} projectDir - Target project directory
+ * @param {string} projectRoot - Target project directory
 * @param {string} mcpConfigPath - Relative path to MCP config file (e.g., '.cursor/mcp.json')
 * @returns {Object} Result object with success status and details
 */
-export function removeTaskMasterMCPConfiguration(projectDir, mcpConfigPath) {
+export function removeTaskMasterMCPConfiguration(projectRoot, mcpConfigPath) {
// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
if (!mcpConfigPath) {
return {
@@ -156,7 +156,7 @@ export function removeTaskMasterMCPConfiguration(projectDir, mcpConfigPath) {
};
}
-const mcpPath = path.join(projectDir, mcpConfigPath);
+const mcpPath = path.join(projectRoot, mcpConfigPath);
let result = {
success: false,

View File

@@ -170,7 +170,7 @@ function validateInputs(targetPath, content, storeTasksInGit) {
 */
function createNewGitignoreFile(targetPath, templateLines, log) {
try {
-fs.writeFileSync(targetPath, templateLines.join('\n'));
+fs.writeFileSync(targetPath, templateLines.join('\n') + '\n');
if (typeof log === 'function') {
log('success', `Created ${targetPath} with full template`);
}
@@ -223,7 +223,7 @@ function mergeWithExistingFile(
finalLines.push(...buildTaskFilesSection(storeTasksInGit));
// Write result
-fs.writeFileSync(targetPath, finalLines.join('\n'));
+fs.writeFileSync(targetPath, finalLines.join('\n') + '\n');
if (typeof log === 'function') {
const hasNewContent =
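For context, the one-character-per-call change above matters because `join` does not terminate the final line, and POSIX tools (and `git diff`, which flags "\ No newline at end of file") expect text files to end with a newline:

```javascript
// join alone leaves the last entry unterminated; appending '\n' fixes that.
const templateLines = ['node_modules/', 'dist/', '.env'];

const withoutTrailing = templateLines.join('\n');
const withTrailing = templateLines.join('\n') + '\n';

console.log(JSON.stringify(withoutTrailing)); // "node_modules/\ndist/\n.env"
console.log(JSON.stringify(withTrailing)); // "node_modules/\ndist/\n.env\n"
```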

View File

@@ -25,6 +25,9 @@ import { getLoggerOrDefault } from './logger-utils.js';
export function normalizeProjectRoot(projectRoot) {
if (!projectRoot) return projectRoot;
+// Ensure it's a string
+projectRoot = String(projectRoot);
// Split the path into segments
const segments = projectRoot.split(path.sep);

View File

@@ -198,7 +198,7 @@ export function convertRuleToProfileRule(sourcePath, targetPath, profile) {
/**
 * Convert all Cursor rules to profile rules for a specific profile
 */
-export function convertAllRulesToProfileRules(projectDir, profile) {
+export function convertAllRulesToProfileRules(projectRoot, profile) {
// Handle simple profiles (Claude, Codex) that just copy files to root
const isSimpleProfile = Object.keys(profile.fileMap).length === 0;
if (isSimpleProfile) {
@@ -208,7 +208,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
const assetsDir = path.join(__dirname, '..', '..', 'assets');
if (typeof profile.onPostConvertRulesProfile === 'function') {
-profile.onPostConvertRulesProfile(projectDir, assetsDir);
+profile.onPostConvertRulesProfile(projectRoot, assetsDir);
}
return { success: 1, failed: 0 };
}
@@ -216,7 +216,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const sourceDir = path.join(__dirname, '..', '..', 'assets', 'rules');
-const targetDir = path.join(projectDir, profile.rulesDir);
+const targetDir = path.join(projectRoot, profile.rulesDir);
// Ensure target directory exists
if (!fs.existsSync(targetDir)) {
@@ -225,7 +225,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
// Setup MCP configuration if enabled
if (profile.mcpConfig !== false) {
-setupMCPConfiguration(projectDir, profile.mcpConfigPath);
+setupMCPConfiguration(projectRoot, profile.mcpConfigPath);
}
let success = 0;
@@ -286,7 +286,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
// Call post-processing hook if defined (e.g., for Roo's rules-*mode* folders)
if (typeof profile.onPostConvertRulesProfile === 'function') {
const assetsDir = path.join(__dirname, '..', '..', 'assets');
-profile.onPostConvertRulesProfile(projectDir, assetsDir);
+profile.onPostConvertRulesProfile(projectRoot, assetsDir);
}
return { success, failed };
@@ -294,13 +294,13 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
/**
 * Remove only Task Master specific files from a profile, leaving other existing rules intact
- * @param {string} projectDir - Target project directory
+ * @param {string} projectRoot - Target project directory
 * @param {Object} profile - Profile configuration
 * @returns {Object} Result object
 */
-export function removeProfileRules(projectDir, profile) {
+export function removeProfileRules(projectRoot, profile) {
-const targetDir = path.join(projectDir, profile.rulesDir);
-const profileDir = path.join(projectDir, profile.profileDir);
+const targetDir = path.join(projectRoot, profile.rulesDir);
+const profileDir = path.join(projectRoot, profile.profileDir);
const result = {
profileName: profile.profileName,
@@ -320,12 +320,12 @@ export function removeProfileRules(projectDir, profile) {
if (isSimpleProfile) {
// For simple profiles, just call their removal hook and return
if (typeof profile.onRemoveRulesProfile === 'function') {
-profile.onRemoveRulesProfile(projectDir);
+profile.onRemoveRulesProfile(projectRoot);
}
result.success = true;
log(
'debug',
-`[Rule Transformer] Successfully removed ${profile.profileName} files from ${projectDir}`
+`[Rule Transformer] Successfully removed ${profile.profileName} files from ${projectRoot}`
);
return result;
}
@@ -418,7 +418,7 @@ export function removeProfileRules(projectDir, profile) {
// 2. Handle MCP configuration - only remove Task Master, preserve other servers
if (profile.mcpConfig !== false) {
result.mcpResult = removeTaskMasterMCPConfiguration(
-projectDir,
+projectRoot,
profile.mcpConfigPath
);
if (result.mcpResult.hasOtherServers) {
@@ -432,7 +432,7 @@ export function removeProfileRules(projectDir, profile) {
// 3. Call removal hook if defined (e.g., Roo's custom cleanup)
if (typeof profile.onRemoveRulesProfile === 'function') {
-profile.onRemoveRulesProfile(projectDir);
+profile.onRemoveRulesProfile(projectRoot);
}
// 4. Only remove profile directory if:
@@ -490,7 +490,7 @@ export function removeProfileRules(projectDir, profile) {
result.success = true;
log(
'debug',
-`[Rule Transformer] Successfully removed ${profile.profileName} Task Master files from ${projectDir}`
+`[Rule Transformer] Successfully removed ${profile.profileName} Task Master files from ${projectRoot}`
);
} catch (error) {
result.error = error.message;

View File

@@ -0,0 +1,649 @@
import { jest } from '@jest/globals';
// Mock the ai module
jest.unstable_mockModule('ai', () => ({
generateObject: jest.fn(),
generateText: jest.fn(),
streamText: jest.fn()
}));
// Mock the gemini-cli SDK module
jest.unstable_mockModule('ai-sdk-provider-gemini-cli', () => ({
createGeminiProvider: jest.fn((options) => {
const provider = (modelId, settings) => ({
// Mock language model
id: modelId,
settings,
authOptions: options
});
provider.languageModel = jest.fn((id, settings) => ({ id, settings }));
provider.chat = provider.languageModel;
return provider;
})
}));
// Mock the base provider
jest.unstable_mockModule('../../../src/ai-providers/base-provider.js', () => ({
BaseAIProvider: class {
constructor() {
this.name = 'Base Provider';
}
handleError(context, error) {
throw error;
}
validateParams(params) {
// Basic validation
if (!params.modelId) {
throw new Error('Model ID is required');
}
}
validateMessages(messages) {
if (!messages || !Array.isArray(messages)) {
throw new Error('Invalid messages array');
}
}
async generateObject(params) {
// Mock implementation that can be overridden
throw new Error('Mock base generateObject error');
}
}
}));
// Mock the log module
jest.unstable_mockModule('../../../scripts/modules/index.js', () => ({
log: jest.fn()
}));
// Import after mocking
const { GeminiCliProvider } = await import(
'../../../src/ai-providers/gemini-cli.js'
);
const { createGeminiProvider } = await import('ai-sdk-provider-gemini-cli');
const { generateObject, generateText, streamText } = await import('ai');
const { log } = await import('../../../scripts/modules/index.js');
describe('GeminiCliProvider', () => {
let provider;
let consoleLogSpy;
beforeEach(() => {
provider = new GeminiCliProvider();
jest.clearAllMocks();
consoleLogSpy = jest.spyOn(console, 'log').mockImplementation();
});
afterEach(() => {
consoleLogSpy.mockRestore();
});
describe('constructor', () => {
it('should set the provider name to Gemini CLI', () => {
expect(provider.name).toBe('Gemini CLI');
});
});
describe('validateAuth', () => {
it('should not throw an error when API key is provided', () => {
expect(() => provider.validateAuth({ apiKey: 'test-key' })).not.toThrow();
expect(consoleLogSpy).not.toHaveBeenCalled();
});
it('should not require API key and should not log messages', () => {
expect(() => provider.validateAuth({})).not.toThrow();
expect(consoleLogSpy).not.toHaveBeenCalled();
});
it('should not require any parameters', () => {
expect(() => provider.validateAuth()).not.toThrow();
expect(consoleLogSpy).not.toHaveBeenCalled();
});
});
describe('getClient', () => {
it('should return a gemini client with API key auth when apiKey is provided', async () => {
const client = await provider.getClient({ apiKey: 'test-api-key' });
expect(client).toBeDefined();
expect(typeof client).toBe('function');
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'api-key',
apiKey: 'test-api-key'
});
});
it('should return a gemini client with OAuth auth when no apiKey is provided', async () => {
const client = await provider.getClient({});
expect(client).toBeDefined();
expect(typeof client).toBe('function');
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'oauth-personal'
});
});
it('should include baseURL when provided', async () => {
const client = await provider.getClient({
apiKey: 'test-key',
baseURL: 'https://custom-endpoint.com'
});
expect(client).toBeDefined();
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'api-key',
apiKey: 'test-key',
baseURL: 'https://custom-endpoint.com'
});
});
it('should have languageModel and chat methods', async () => {
const client = await provider.getClient({ apiKey: 'test-key' });
expect(client.languageModel).toBeDefined();
expect(client.chat).toBeDefined();
expect(client.chat).toBe(client.languageModel);
});
});
describe('_extractSystemMessage', () => {
it('should extract single system message', () => {
const messages = [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello' }
];
const result = provider._extractSystemMessage(messages);
expect(result.systemPrompt).toBe('You are a helpful assistant');
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
});
it('should combine multiple system messages', () => {
const messages = [
{ role: 'system', content: 'You are helpful' },
{ role: 'system', content: 'Be concise' },
{ role: 'user', content: 'Hello' }
];
const result = provider._extractSystemMessage(messages);
expect(result.systemPrompt).toBe('You are helpful\n\nBe concise');
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
});
it('should handle messages without system prompts', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' }
];
const result = provider._extractSystemMessage(messages);
expect(result.systemPrompt).toBeUndefined();
expect(result.messages).toEqual(messages);
});
it('should handle empty or invalid input', () => {
expect(provider._extractSystemMessage([])).toEqual({
systemPrompt: undefined,
messages: []
});
expect(provider._extractSystemMessage(null)).toEqual({
systemPrompt: undefined,
messages: []
});
expect(provider._extractSystemMessage(undefined)).toEqual({
systemPrompt: undefined,
messages: []
});
});
it('should add JSON enforcement when enforceJsonOutput is true', () => {
const messages = [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello' }
];
const result = provider._extractSystemMessage(messages, {
enforceJsonOutput: true
});
expect(result.systemPrompt).toContain('You are a helpful assistant');
expect(result.systemPrompt).toContain(
'CRITICAL: You MUST respond with ONLY valid JSON'
);
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
});
it('should add JSON enforcement with no existing system message', () => {
const messages = [{ role: 'user', content: 'Return JSON format' }];
const result = provider._extractSystemMessage(messages, {
enforceJsonOutput: true
});
expect(result.systemPrompt).toBe(
'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown formatting, code block markers, or conversational phrases like "Here is" or "Of course". Your entire response must be parseable JSON that starts with { or [ and ends with } or ]. No exceptions.'
);
expect(result.messages).toEqual([
{ role: 'user', content: 'Return JSON format' }
]);
});
});
describe('_detectJsonRequest', () => {
it('should detect JSON requests from user messages', () => {
const messages = [
{
role: 'user',
content: 'Please return JSON format with subtasks array'
}
];
expect(provider._detectJsonRequest(messages)).toBe(true);
});
it('should detect various JSON indicators', () => {
const testCases = [
'respond only with valid JSON',
'return JSON format',
'output schema: {"test": true}',
'format: [{"id": 1}]',
'Please return subtasks in array format',
'Return an object with properties'
];
testCases.forEach((content) => {
const messages = [{ role: 'user', content }];
expect(provider._detectJsonRequest(messages)).toBe(true);
});
});
it('should not detect JSON requests for regular conversation', () => {
const messages = [{ role: 'user', content: 'Hello, how are you today?' }];
expect(provider._detectJsonRequest(messages)).toBe(false);
});
it('should handle multiple user messages', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: 'Now please return JSON format' }
];
expect(provider._detectJsonRequest(messages)).toBe(true);
});
});
describe('_getJsonEnforcementPrompt', () => {
it('should return strict JSON enforcement prompt', () => {
const prompt = provider._getJsonEnforcementPrompt();
expect(prompt).toContain('CRITICAL');
expect(prompt).toContain('ONLY valid JSON');
expect(prompt).toContain('No exceptions');
});
});
describe('_isValidJson', () => {
it('should return true for valid JSON objects', () => {
expect(provider._isValidJson('{"test": true}')).toBe(true);
expect(provider._isValidJson('{"subtasks": [{"id": 1}]}')).toBe(true);
});
it('should return true for valid JSON arrays', () => {
expect(provider._isValidJson('[1, 2, 3]')).toBe(true);
expect(provider._isValidJson('[{"id": 1}, {"id": 2}]')).toBe(true);
});
it('should return false for invalid JSON', () => {
expect(provider._isValidJson('Of course. Here is...')).toBe(false);
expect(provider._isValidJson('{"invalid": json}')).toBe(false);
expect(provider._isValidJson('not json at all')).toBe(false);
});
it('should handle edge cases', () => {
expect(provider._isValidJson('')).toBe(false);
expect(provider._isValidJson(null)).toBe(false);
expect(provider._isValidJson(undefined)).toBe(false);
expect(provider._isValidJson(' {"test": true} ')).toBe(true); // with whitespace
});
});
describe('extractJson', () => {
it('should extract JSON from markdown code blocks', () => {
const input = '```json\n{"subtasks": [{"id": 1}]}\n```';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should extract JSON with explanatory text', () => {
const input = 'Here\'s the JSON response:\n{"subtasks": [{"id": 1}]}';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should handle variable declarations', () => {
const input = 'const result = {"subtasks": [{"id": 1}]};';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should handle trailing commas with jsonc-parser', () => {
const input = '{"subtasks": [{"id": 1,}],}';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should handle arrays', () => {
const input = 'The result is: [1, 2, 3]';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual([1, 2, 3]);
});
it('should handle nested objects with proper bracket matching', () => {
const input =
'Response: {"outer": {"inner": {"value": "test"}}} extra text';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ outer: { inner: { value: 'test' } } });
});
it('should handle escaped quotes in strings', () => {
const input = '{"message": "He said \\"hello\\" to me"}';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ message: 'He said "hello" to me' });
});
it('should return original text if no JSON found', () => {
const input = 'No JSON here';
expect(provider.extractJson(input)).toBe(input);
});
it('should handle null or non-string input', () => {
expect(provider.extractJson(null)).toBe(null);
expect(provider.extractJson(undefined)).toBe(undefined);
expect(provider.extractJson(123)).toBe(123);
});
it('should handle partial JSON by finding valid boundaries', () => {
const input = '{"valid": true, "partial": "incomplete';
// Should return original text since no valid JSON can be extracted
expect(provider.extractJson(input)).toBe(input);
});
it('should handle performance edge cases with large text', () => {
// Test with large text that has JSON at the end
const largePrefix = 'This is a very long explanation. '.repeat(1000);
const json = '{"result": "success"}';
const input = largePrefix + json;
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ result: 'success' });
});
it('should handle early termination for very large invalid content', () => {
// Test that it doesn't hang on very large content without JSON
const largeText = 'No JSON here. '.repeat(2000);
const result = provider.extractJson(largeText);
expect(result).toBe(largeText);
});
});
describe('generateObject', () => {
const mockParams = {
modelId: 'gemini-2.0-flash-exp',
apiKey: 'test-key',
messages: [{ role: 'user', content: 'Test message' }],
schema: { type: 'object', properties: {} },
objectName: 'testObject'
};
beforeEach(() => {
jest.clearAllMocks();
});
it('should handle JSON parsing errors by attempting manual extraction', async () => {
// Mock the parent generateObject to throw a JSON parsing error
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(new Error('Failed to parse JSON response'));
// Mock generateObject from ai module to return text with JSON
generateObject.mockResolvedValueOnce({
rawResponse: {
text: 'Here is the JSON:\n```json\n{"subtasks": [{"id": 1}]}\n```'
},
object: null,
usage: { promptTokens: 10, completionTokens: 20, totalTokens: 30 }
});
const result = await provider.generateObject(mockParams);
expect(log).toHaveBeenCalledWith(
'debug',
expect.stringContaining('attempting manual extraction')
);
expect(generateObject).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp',
authOptions: expect.objectContaining({
authType: 'api-key',
apiKey: 'test-key'
})
}),
messages: mockParams.messages,
schema: mockParams.schema,
mode: 'json', // Should use json mode for Gemini
system: expect.stringContaining(
'CRITICAL: You MUST respond with ONLY valid JSON'
),
maxTokens: undefined,
temperature: undefined
});
expect(result.object).toEqual({ subtasks: [{ id: 1 }] });
});
it('should throw error if manual extraction also fails', async () => {
// Mock parent to throw JSON error
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(new Error('Failed to parse JSON'));
// Mock generateObject to return unparseable text
generateObject.mockResolvedValueOnce({
rawResponse: { text: 'Not valid JSON at all' },
object: null
});
await expect(provider.generateObject(mockParams)).rejects.toThrow(
'Gemini CLI failed to generate valid JSON object: Failed to parse JSON'
);
});
it('should pass through non-JSON errors unchanged', async () => {
const otherError = new Error('Network error');
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(otherError);
await expect(provider.generateObject(mockParams)).rejects.toThrow(
'Network error'
);
expect(generateObject).not.toHaveBeenCalled();
});
it('should handle successful response from parent', async () => {
const mockResult = {
object: { test: 'data' },
usage: { inputTokens: 5, outputTokens: 10, totalTokens: 15 }
};
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockResolvedValueOnce(mockResult);
const result = await provider.generateObject(mockParams);
expect(result).toEqual(mockResult);
expect(generateObject).not.toHaveBeenCalled();
});
});
describe('system message support', () => {
const mockParams = {
modelId: 'gemini-2.0-flash-exp',
apiKey: 'test-key',
messages: [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello' }
],
maxTokens: 100,
temperature: 0.7
};
describe('generateText with system messages', () => {
beforeEach(() => {
jest.clearAllMocks();
});
it('should pass system prompt separately to AI SDK', async () => {
const { generateText } = await import('ai');
generateText.mockResolvedValueOnce({
text: 'Hello! How can I help you?',
usage: { promptTokens: 10, completionTokens: 8, totalTokens: 18 }
});
const result = await provider.generateText(mockParams);
expect(generateText).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: 'You are a helpful assistant',
messages: [{ role: 'user', content: 'Hello' }],
maxTokens: 100,
temperature: 0.7
});
expect(result.text).toBe('Hello! How can I help you?');
});
it('should handle messages without system prompt', async () => {
const { generateText } = await import('ai');
const paramsNoSystem = {
...mockParams,
messages: [{ role: 'user', content: 'Hello' }]
};
generateText.mockResolvedValueOnce({
text: 'Hi there!',
usage: { promptTokens: 5, completionTokens: 3, totalTokens: 8 }
});
await provider.generateText(paramsNoSystem);
expect(generateText).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: undefined,
messages: [{ role: 'user', content: 'Hello' }],
maxTokens: 100,
temperature: 0.7
});
});
});
describe('streamText with system messages', () => {
it('should pass system prompt separately to AI SDK', async () => {
const { streamText } = await import('ai');
const mockStream = { stream: 'mock-stream' };
streamText.mockResolvedValueOnce(mockStream);
const result = await provider.streamText(mockParams);
expect(streamText).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: 'You are a helpful assistant',
messages: [{ role: 'user', content: 'Hello' }],
maxTokens: 100,
temperature: 0.7
});
expect(result).toBe(mockStream);
});
});
describe('generateObject with system messages', () => {
const mockObjectParams = {
...mockParams,
schema: { type: 'object', properties: {} },
objectName: 'testObject'
};
it('should include system prompt in fallback generateObject call', async () => {
// Mock parent to throw JSON error
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(new Error('Failed to parse JSON'));
// Mock direct generateObject call
generateObject.mockResolvedValueOnce({
object: { result: 'success' },
usage: { promptTokens: 15, completionTokens: 10, totalTokens: 25 }
});
const result = await provider.generateObject(mockObjectParams);
expect(generateObject).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: expect.stringContaining('You are a helpful assistant'),
messages: [{ role: 'user', content: 'Hello' }],
schema: mockObjectParams.schema,
mode: 'json',
maxTokens: 100,
temperature: 0.7
});
expect(result.object).toEqual({ result: 'success' });
});
});
});
// Note: Error handling for module loading is tested in integration tests
// since dynamic imports are difficult to mock properly in unit tests
describe('authentication scenarios', () => {
it('should use api-key auth type with API key', async () => {
await provider.getClient({ apiKey: 'gemini-test-key' });
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'api-key',
apiKey: 'gemini-test-key'
});
});
it('should use oauth-personal auth type without API key', async () => {
await provider.getClient({});
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'oauth-personal'
});
});
it('should handle empty string API key as no API key', async () => {
await provider.getClient({ apiKey: '' });
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'oauth-personal'
});
});
});
});


@@ -8,6 +8,7 @@ const mockGetResearchModelId = jest.fn();
const mockGetFallbackProvider = jest.fn();
const mockGetFallbackModelId = jest.fn();
const mockGetParametersForRole = jest.fn();
+const mockGetResponseLanguage = jest.fn();
const mockGetUserId = jest.fn();
const mockGetDebugFlag = jest.fn();
const mockIsApiKeySet = jest.fn();
@@ -98,6 +99,7 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
	getFallbackMaxTokens: mockGetFallbackMaxTokens,
	getFallbackTemperature: mockGetFallbackTemperature,
	getParametersForRole: mockGetParametersForRole,
+	getResponseLanguage: mockGetResponseLanguage,
	getUserId: mockGetUserId,
	getDebugFlag: mockGetDebugFlag,
	getBaseUrlForRole: mockGetBaseUrlForRole,
@@ -117,7 +119,10 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
	getBedrockBaseURL: mockGetBedrockBaseURL,
	getVertexProjectId: mockGetVertexProjectId,
	getVertexLocation: mockGetVertexLocation,
-	getMcpApiKeyStatus: mockGetMcpApiKeyStatus
+	getMcpApiKeyStatus: mockGetMcpApiKeyStatus,
+	// Providers without API keys
+	providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli']
}));
// Mock AI Provider Classes with proper methods
@@ -185,6 +190,11 @@ jest.unstable_mockModule('../../src/ai-providers/index.js', () => ({
		generateText: jest.fn(),
		streamText: jest.fn(),
		generateObject: jest.fn()
+	})),
+	GeminiCliProvider: jest.fn(() => ({
+		generateText: jest.fn(),
+		streamText: jest.fn(),
+		generateObject: jest.fn()
	}))
}));
@@ -269,6 +279,7 @@ describe('Unified AI Services', () => {
		if (role === 'fallback') return { maxTokens: 150, temperature: 0.6 };
		return { maxTokens: 100, temperature: 0.5 }; // Default
	});
+	mockGetResponseLanguage.mockReturnValue('English');
	mockResolveEnvVariable.mockImplementation((key) => {
		if (key === 'ANTHROPIC_API_KEY') return 'mock-anthropic-key';
		if (key === 'PERPLEXITY_API_KEY') return 'mock-perplexity-key';
@@ -455,6 +466,68 @@ describe('Unified AI Services', () => {
		expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(1);
	});
test('should use configured responseLanguage in system prompt', async () => {
mockGetResponseLanguage.mockReturnValue('中文');
mockAnthropicProvider.generateText.mockResolvedValue('中文回复');
const params = {
role: 'main',
systemPrompt: 'You are an assistant',
prompt: 'Hello'
};
await generateTextService(params);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledWith(
expect.objectContaining({
messages: [
{
role: 'system',
content: expect.stringContaining('Always respond in 中文')
},
{ role: 'user', content: 'Hello' }
]
})
);
expect(mockGetResponseLanguage).toHaveBeenCalledWith(fakeProjectRoot);
});
test('should pass custom projectRoot to getResponseLanguage', async () => {
const customRoot = '/custom/project/root';
mockGetResponseLanguage.mockReturnValue('Español');
mockAnthropicProvider.generateText.mockResolvedValue(
'Respuesta en Español'
);
const params = {
role: 'main',
systemPrompt: 'You are an assistant',
prompt: 'Hello',
projectRoot: customRoot
};
await generateTextService(params);
expect(mockGetResponseLanguage).toHaveBeenCalledWith(customRoot);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledWith(
expect.objectContaining({
messages: [
{
role: 'system',
content: expect.stringContaining('Always respond in Español')
},
{ role: 'user', content: 'Hello' }
]
})
);
});
// Add more tests for edge cases:
// - Missing API keys (should throw from _resolveApiKey)
// - Unsupported provider configured (should skip and log)
// - Missing provider/model config for a role (should skip and log)
// - Missing prompt
// - Different initial roles (research, fallback)
// - generateObjectService (mock schema, check object result)
// - streamTextService (more complex to test, might need stream helpers)
	test('should skip provider with missing API key and try next in fallback sequence', async () => {
		// Setup isApiKeySet to return false for anthropic but true for perplexity
		mockIsApiKeySet.mockImplementation((provider, session, root) => {


@@ -48,11 +48,14 @@ const mockConsole = {
};
global.console = mockConsole;
+// --- Define Mock Function Instances ---
+const mockFindConfigPath = jest.fn(() => null); // Default to null, can be overridden in tests
// Mock path-utils to prevent config file path discovery and logging
jest.mock('../../src/utils/path-utils.js', () => ({
	__esModule: true,
	findProjectRoot: jest.fn(() => '/mock/project'),
-	findConfigPath: jest.fn(() => null), // Always return null to prevent config discovery
+	findConfigPath: mockFindConfigPath, // Use the mock function instance
	findTasksPath: jest.fn(() => '/mock/tasks.json'),
	findComplexityReportPath: jest.fn(() => null),
	resolveTasksOutputPath: jest.fn(() => '/mock/tasks.json'),
@@ -136,12 +139,15 @@ const DEFAULT_CONFIG = {
	global: {
		logLevel: 'info',
		debug: false,
+		defaultNumTasks: 10,
		defaultSubtasks: 5,
		defaultPriority: 'medium',
		projectName: 'Task Master',
		ollamaBaseURL: 'http://localhost:11434/api',
-		bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
-	}
+		bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com',
+		responseLanguage: 'English'
+	},
+	claudeCode: {}
};
// Other test data (VALID_CUSTOM_CONFIG, PARTIAL_CONFIG, INVALID_PROVIDER_CONFIG)
@@ -195,6 +201,61 @@ const INVALID_PROVIDER_CONFIG = {
	}
};
// Claude Code test data
const VALID_CLAUDE_CODE_CONFIG = {
maxTurns: 5,
customSystemPrompt: 'You are a helpful coding assistant',
appendSystemPrompt: 'Always follow best practices',
permissionMode: 'acceptEdits',
allowedTools: ['Read', 'LS', 'Edit'],
disallowedTools: ['Write'],
mcpServers: {
'test-server': {
type: 'stdio',
command: 'node',
args: ['server.js'],
env: { NODE_ENV: 'test' }
}
},
commandSpecific: {
'add-task': {
maxTurns: 3,
permissionMode: 'plan'
},
research: {
customSystemPrompt: 'You are a research assistant'
}
}
};
const INVALID_CLAUDE_CODE_CONFIG = {
maxTurns: 'invalid', // Should be number
permissionMode: 'invalid-mode', // Invalid enum value
allowedTools: 'not-an-array', // Should be array
mcpServers: {
'invalid-server': {
type: 'invalid-type', // Invalid enum value
url: 'not-a-valid-url' // Invalid URL format
}
},
commandSpecific: {
'invalid-command': {
// Invalid command name
maxTurns: -1 // Invalid negative number
}
}
};
const PARTIAL_CLAUDE_CODE_CONFIG = {
maxTurns: 10,
permissionMode: 'default',
commandSpecific: {
'expand-task': {
customSystemPrompt: 'Focus on task breakdown'
}
}
};
// Define spies globally to be restored in afterAll
let consoleErrorSpy;
let consoleWarnSpy;
@@ -220,6 +281,7 @@ beforeEach(() => {
	// Reset the external mock instances for utils
	mockFindProjectRoot.mockReset();
	mockLog.mockReset();
+	mockFindConfigPath.mockReset();
	// --- Set up spies ON the imported 'fs' mock ---
	fsExistsSyncSpy = jest.spyOn(fsMocked, 'existsSync');
@@ -228,6 +290,7 @@ beforeEach(() => {
	// --- Default Mock Implementations ---
	mockFindProjectRoot.mockReturnValue(MOCK_PROJECT_ROOT); // Default for utils.findProjectRoot
+	mockFindConfigPath.mockReturnValue(null); // Default to no config file found
	fsExistsSyncSpy.mockReturnValue(true); // Assume files exist by default
	// Default readFileSync: Return REAL models content, mocked config, or throw error
@@ -325,6 +388,162 @@ describe('Validation Functions', () => {
	});
});
// --- Claude Code Validation Tests ---
describe('Claude Code Validation', () => {
test('validateClaudeCodeSettings should return valid settings for correct input', () => {
const result = configManager.validateClaudeCodeSettings(
VALID_CLAUDE_CODE_CONFIG
);
expect(result).toEqual(VALID_CLAUDE_CODE_CONFIG);
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
test('validateClaudeCodeSettings should return empty object for invalid input', () => {
const result = configManager.validateClaudeCodeSettings(
INVALID_CLAUDE_CODE_CONFIG
);
expect(result).toEqual({});
expect(consoleWarnSpy).toHaveBeenCalledWith(
expect.stringContaining('Warning: Invalid Claude Code settings in config')
);
});
test('validateClaudeCodeSettings should handle partial valid configuration', () => {
const result = configManager.validateClaudeCodeSettings(
PARTIAL_CLAUDE_CODE_CONFIG
);
expect(result).toEqual(PARTIAL_CLAUDE_CODE_CONFIG);
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
test('validateClaudeCodeSettings should return empty object for empty input', () => {
const result = configManager.validateClaudeCodeSettings({});
expect(result).toEqual({});
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
test('validateClaudeCodeSettings should handle null/undefined input', () => {
expect(configManager.validateClaudeCodeSettings(null)).toEqual({});
expect(configManager.validateClaudeCodeSettings(undefined)).toEqual({});
expect(consoleWarnSpy).toHaveBeenCalledTimes(2);
});
});
// --- Claude Code Getter Tests ---
describe('Claude Code Getter Functions', () => {
test('getClaudeCodeSettings should return default empty object when no config exists', () => {
// No config file exists, should return empty object
fsExistsSyncSpy.mockReturnValue(false);
const settings = configManager.getClaudeCodeSettings(MOCK_PROJECT_ROOT);
expect(settings).toEqual({});
});
test('getClaudeCodeSettings should return merged settings from config file', () => {
// Config file with Claude Code settings
const configWithClaudeCode = {
...VALID_CUSTOM_CONFIG,
claudeCode: VALID_CLAUDE_CODE_CONFIG
};
// Mock findConfigPath to return the mock config path
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(configWithClaudeCode);
if (path.basename(filePath) === 'supported-models.json') {
return JSON.stringify({
openai: [{ id: 'gpt-4o' }],
google: [{ id: 'gemini-1.5-pro-latest' }],
anthropic: [
{ id: 'claude-3-opus-20240229' },
{ id: 'claude-3-7-sonnet-20250219' },
{ id: 'claude-3-5-sonnet' }
],
perplexity: [{ id: 'sonar-pro' }],
ollama: [],
openrouter: []
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
const settings = configManager.getClaudeCodeSettings(
MOCK_PROJECT_ROOT,
true
); // Force reload
expect(settings).toEqual(VALID_CLAUDE_CODE_CONFIG);
});
test('getClaudeCodeSettingsForCommand should return command-specific settings', () => {
// Config with command-specific settings
const configWithClaudeCode = {
...VALID_CUSTOM_CONFIG,
claudeCode: VALID_CLAUDE_CODE_CONFIG
};
// Mock findConfigPath to return the mock config path
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (path.basename(filePath) === 'supported-models.json') return '{}';
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(configWithClaudeCode);
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
const settings = configManager.getClaudeCodeSettingsForCommand(
'add-task',
MOCK_PROJECT_ROOT,
true
); // Force reload
// Should merge global settings with command-specific settings
const expectedSettings = {
...VALID_CLAUDE_CODE_CONFIG,
...VALID_CLAUDE_CODE_CONFIG.commandSpecific['add-task']
};
expect(settings).toEqual(expectedSettings);
});
test('getClaudeCodeSettingsForCommand should return global settings for unknown command', () => {
// Config with Claude Code settings
const configWithClaudeCode = {
...VALID_CUSTOM_CONFIG,
claudeCode: PARTIAL_CLAUDE_CODE_CONFIG
};
// Mock findConfigPath to return the mock config path
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (path.basename(filePath) === 'supported-models.json') return '{}';
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(configWithClaudeCode);
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
const settings = configManager.getClaudeCodeSettingsForCommand(
'unknown-command',
MOCK_PROJECT_ROOT,
true
); // Force reload
// Should return global settings only
expect(settings).toEqual(PARTIAL_CLAUDE_CODE_CONFIG);
});
});
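The merge asserted in the command-specific test above — command settings spread over global ones — can be sketched in isolation. This is an illustrative reimplementation under assumed setting names (`maxTurns`, `allowedTools`), not the real `getClaudeCodeSettingsForCommand`:

```javascript
// Illustrative sketch of spread-based settings merging: later spreads win,
// so command-specific keys override global ones. Not the real implementation.
function mergeCommandSettings(globalSettings, command) {
	const commandSpecific = globalSettings.commandSpecific?.[command] ?? {};
	return { ...globalSettings, ...commandSpecific };
}

// Hypothetical settings shape for demonstration only.
const settings = {
	maxTurns: 3,
	allowedTools: ['Read'],
	commandSpecific: {
		'add-task': { maxTurns: 10 }
	}
};

const merged = mergeCommandSettings(settings, 'add-task');
// merged.maxTurns is 10 (command-specific wins); allowedTools stays ['Read'].
// An unknown command falls back to the global settings unchanged.
```

This mirrors the `{ ...global, ...commandSpecific['add-task'] }` expectation built in the test, including the fallback path exercised by the `unknown-command` test.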
// --- getConfig Tests ---
describe('getConfig Tests', () => {
	test('should return default config if .taskmasterconfig does not exist', () => {
@@ -409,7 +628,11 @@ describe('getConfig Tests', () => {
				...VALID_CUSTOM_CONFIG.models.fallback
			}
		},
		global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global },
		claudeCode: {
			...DEFAULT_CONFIG.claudeCode,
			...VALID_CUSTOM_CONFIG.claudeCode
		}
	};
	expect(config).toEqual(expectedMergedConfig);
	expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
@@ -447,7 +670,11 @@ describe('getConfig Tests', () => {
			research: { ...DEFAULT_CONFIG.models.research },
			fallback: { ...DEFAULT_CONFIG.models.fallback }
		},
		global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global },
		claudeCode: {
			...DEFAULT_CONFIG.claudeCode,
			...VALID_CUSTOM_CONFIG.claudeCode
		}
	};
	expect(config).toEqual(expectedMergedConfig);
	expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8');
@@ -551,7 +778,11 @@ describe('getConfig Tests', () => {
		},
		fallback: { ...DEFAULT_CONFIG.models.fallback }
	},
	global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global },
	claudeCode: {
		...DEFAULT_CONFIG.claudeCode,
		...VALID_CUSTOM_CONFIG.claudeCode
	}
};
expect(config).toEqual(expectedMergedConfig);
});
@@ -684,6 +915,82 @@ describe('Getter Functions', () => {
	expect(logLevel).toBe(VALID_CUSTOM_CONFIG.global.logLevel);
});
test('getResponseLanguage should return responseLanguage from config', () => {
// Arrange
// Prepare a config object with responseLanguage property for this test
const configWithLanguage = JSON.stringify({
models: {
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
},
global: {
projectName: 'Test Project',
responseLanguage: '中文'
}
});
// Set up fs.readFileSync to return our test config
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return configWithLanguage;
}
if (path.basename(filePath) === 'supported-models.json') {
return JSON.stringify({
openai: [{ id: 'gpt-4-turbo' }]
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Ensure getConfig returns new values instead of cached ones
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act
const responseLanguage =
configManager.getResponseLanguage(MOCK_PROJECT_ROOT);
// Assert
expect(responseLanguage).toBe('中文');
});
test('getResponseLanguage should fall back to the default "English" when responseLanguage is not in config', () => {
// Arrange
const configWithoutLanguage = JSON.stringify({
models: {
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
},
global: {
projectName: 'Test Project'
// No responseLanguage property
}
});
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return configWithoutLanguage;
}
if (path.basename(filePath) === 'supported-models.json') {
return JSON.stringify({
openai: [{ id: 'gpt-4-turbo' }]
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Ensure getConfig returns new values instead of cached ones
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act
const responseLanguage =
configManager.getResponseLanguage(MOCK_PROJECT_ROOT);
// Assert
expect(responseLanguage).toBe('English');
});
	// Add more tests for other getters (getResearchProvider, getProjectName, etc.)
});
@@ -738,5 +1045,116 @@ describe('getAllProviders', () => {
// Add tests for getParametersForRole if needed
// --- defaultNumTasks Tests ---
describe('Configuration Getters', () => {
test('getDefaultNumTasks should return default value when config is valid', () => {
// Arrange: Mock fs.readFileSync to return valid config when called with the expected path
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {
defaultNumTasks: 15
}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(15);
});
test('getDefaultNumTasks should return fallback when config value is invalid', () => {
// Arrange: Mock fs.readFileSync to return invalid config
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {
defaultNumTasks: 'invalid'
}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
});
test('getDefaultNumTasks should return fallback when config value is missing', () => {
// Arrange: Mock fs.readFileSync to return config without defaultNumTasks
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
});
test('getDefaultNumTasks should handle non-existent config file', () => {
// Arrange: Mock file not existing
fsExistsSyncSpy.mockReturnValue(false);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
});
test('getDefaultNumTasks should accept explicit project root', () => {
// Arrange: Mock fs.readFileSync to return valid config
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {
defaultNumTasks: 20
}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit project root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(20);
});
});
// Note: Tests for setMainModel, setResearchModel were removed as the functions were removed in the implementation.
// If similar setter functions exist, add tests for them following the writeConfig pattern.


@@ -179,7 +179,8 @@ logs
# Task files
# tasks.json
# tasks/
`
);
expect(mockLog).toHaveBeenCalledWith(
	'success',
@@ -200,7 +201,8 @@ logs
# Task files
tasks.json
tasks/
`
);
expect(mockLog).toHaveBeenCalledWith(
	'success',
@@ -432,7 +434,8 @@ tasks/ `;
	const writtenContent = writeFileSyncSpy.mock.calls[0][1];
	expect(writtenContent).toBe(`# Task files
# tasks.json
# tasks/
`);
	});
});
});
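The change in these hunks moves the closing backtick of the expected template literal onto its own line, so the expected string now ends with a trailing newline. A quick standalone sketch (plain Node, no Jest) of why that matters for exact string comparison:

```javascript
// Template literals capture line breaks exactly as written, so moving the
// closing backtick to its own line appends a trailing '\n' to the string.
const withoutTrailingNewline = `# Task files
# tasks.json
# tasks/`;

const withTrailingNewline = `# Task files
# tasks.json
# tasks/
`;

// The two strings differ only by the final newline, which is enough to make
// an exact-equality assertion such as Jest's toBe() fail.
console.log(withoutTrailingNewline === withTrailingNewline); // false
console.log(withTrailingNewline.endsWith('\n')); // true
```

This is why the test expectations had to be updated in lockstep with the generated `.gitignore` content.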


@@ -0,0 +1,528 @@
/**
* Tests for the remove-task MCP tool
*
* Note: This test does NOT test the actual implementation. It tests that:
* 1. The tool is registered correctly with the correct parameters
* 2. Arguments are passed correctly to removeTaskDirect
* 3. Error handling works as expected
* 4. Tag parameter is properly handled and passed through
*
* We do NOT import the real implementation - everything is mocked
*/
import { jest } from '@jest/globals';
// Mock EVERYTHING
const mockRemoveTaskDirect = jest.fn();
jest.mock('../../../../mcp-server/src/core/task-master-core.js', () => ({
removeTaskDirect: mockRemoveTaskDirect
}));
const mockHandleApiResult = jest.fn((result) => result);
const mockWithNormalizedProjectRoot = jest.fn((fn) => fn);
const mockCreateErrorResponse = jest.fn((msg) => ({
success: false,
error: { code: 'ERROR', message: msg }
}));
const mockFindTasksPath = jest.fn(() => '/mock/project/tasks.json');
jest.mock('../../../../mcp-server/src/tools/utils.js', () => ({
handleApiResult: mockHandleApiResult,
createErrorResponse: mockCreateErrorResponse,
withNormalizedProjectRoot: mockWithNormalizedProjectRoot
}));
jest.mock('../../../../mcp-server/src/core/utils/path-utils.js', () => ({
findTasksPath: mockFindTasksPath
}));
// Mock the z object from zod
const mockZod = {
object: jest.fn(() => mockZod),
string: jest.fn(() => mockZod),
boolean: jest.fn(() => mockZod),
optional: jest.fn(() => mockZod),
describe: jest.fn(() => mockZod),
_def: {
shape: () => ({
id: {},
file: {},
projectRoot: {},
confirm: {},
tag: {}
})
}
};
jest.mock('zod', () => ({
z: mockZod
}));
// DO NOT import the real module - create a fake implementation
// This is the fake implementation of registerRemoveTaskTool
const registerRemoveTaskTool = (server) => {
// Create simplified version of the tool config
const toolConfig = {
name: 'remove_task',
description: 'Remove a task or subtask permanently from the tasks list',
parameters: mockZod,
// Create a simplified mock of the execute function
execute: mockWithNormalizedProjectRoot(async (args, context) => {
const { log, session } = context;
try {
log.info && log.info(`Removing task(s) with ID(s): ${args.id}`);
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = mockFindTasksPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
} catch (error) {
log.error && log.error(`Error finding tasks.json: ${error.message}`);
return mockCreateErrorResponse(
`Failed to find tasks.json: ${error.message}`
);
}
log.info && log.info(`Using tasks file path: ${tasksJsonPath}`);
const result = await mockRemoveTaskDirect(
{
tasksJsonPath: tasksJsonPath,
id: args.id,
projectRoot: args.projectRoot,
tag: args.tag
},
log,
{ session }
);
if (result.success) {
log.info && log.info(`Successfully removed task: ${args.id}`);
} else {
log.error &&
log.error(`Failed to remove task: ${result.error.message}`);
}
return mockHandleApiResult(
result,
log,
'Error removing task',
undefined,
args.projectRoot
);
} catch (error) {
log.error && log.error(`Error in remove-task tool: ${error.message}`);
return mockCreateErrorResponse(error.message);
}
})
};
// Register the tool with the server
server.addTool(toolConfig);
};
describe('MCP Tool: remove-task', () => {
// Create mock server
let mockServer;
let executeFunction;
// Create mock logger
const mockLogger = {
debug: jest.fn(),
info: jest.fn(),
warn: jest.fn(),
error: jest.fn()
};
// Test data
const validArgs = {
id: '5',
projectRoot: '/mock/project/root',
file: '/mock/project/tasks.json',
confirm: true,
tag: 'feature-branch'
};
const multipleTaskArgs = {
id: '5,6.1,7',
projectRoot: '/mock/project/root',
tag: 'master'
};
// Standard responses
const successResponse = {
success: true,
data: {
totalTasks: 1,
successful: 1,
failed: 0,
removedTasks: [
{
id: 5,
title: 'Removed Task',
status: 'pending'
}
],
messages: ["Successfully removed task 5 from tag 'feature-branch'"],
errors: [],
tasksPath: '/mock/project/tasks.json',
tag: 'feature-branch'
}
};
const multipleTasksSuccessResponse = {
success: true,
data: {
totalTasks: 3,
successful: 3,
failed: 0,
removedTasks: [
{ id: 5, title: 'Task 5', status: 'pending' },
{ id: 1, title: 'Subtask 6.1', status: 'done', parentTaskId: 6 },
{ id: 7, title: 'Task 7', status: 'in-progress' }
],
messages: [
"Successfully removed task 5 from tag 'master'",
"Successfully removed subtask 6.1 from tag 'master'",
"Successfully removed task 7 from tag 'master'"
],
errors: [],
tasksPath: '/mock/project/tasks.json',
tag: 'master'
}
};
const errorResponse = {
success: false,
error: {
code: 'INVALID_TASK_ID',
message: "The following tasks were not found in tag 'feature-branch': 999"
}
};
const pathErrorResponse = {
success: false,
error: {
code: 'PATH_ERROR',
message: 'Failed to find tasks.json: No tasks.json found'
}
};
beforeEach(() => {
// Reset all mocks
jest.clearAllMocks();
// Create mock server
mockServer = {
addTool: jest.fn((config) => {
executeFunction = config.execute;
})
};
// Setup default successful response
mockRemoveTaskDirect.mockResolvedValue(successResponse);
mockFindTasksPath.mockReturnValue('/mock/project/tasks.json');
// Register the tool
registerRemoveTaskTool(mockServer);
});
test('should register the tool correctly', () => {
// Verify tool was registered
expect(mockServer.addTool).toHaveBeenCalledWith(
expect.objectContaining({
name: 'remove_task',
description: 'Remove a task or subtask permanently from the tasks list',
parameters: expect.any(Object),
execute: expect.any(Function)
})
);
// Verify the tool config was passed
const toolConfig = mockServer.addTool.mock.calls[0][0];
expect(toolConfig).toHaveProperty('parameters');
expect(toolConfig).toHaveProperty('execute');
});
test('should execute the tool with valid parameters including tag', async () => {
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify findTasksPath was called with correct arguments
expect(mockFindTasksPath).toHaveBeenCalledWith(
{
projectRoot: validArgs.projectRoot,
file: validArgs.file
},
mockLogger
);
// Verify removeTaskDirect was called with correct arguments including tag
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
tasksJsonPath: '/mock/project/tasks.json',
id: validArgs.id,
projectRoot: validArgs.projectRoot,
tag: validArgs.tag // This is the key test - tag parameter should be passed through
}),
mockLogger,
{
session: mockContext.session
}
);
// Verify handleApiResult was called
expect(mockHandleApiResult).toHaveBeenCalledWith(
successResponse,
mockLogger,
'Error removing task',
undefined,
validArgs.projectRoot
);
});
test('should handle multiple task IDs with tag context', async () => {
// Setup multiple tasks response
mockRemoveTaskDirect.mockResolvedValueOnce(multipleTasksSuccessResponse);
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(multipleTaskArgs, mockContext);
// Verify removeTaskDirect was called with comma-separated IDs and tag
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
id: '5,6.1,7',
tag: 'master'
}),
mockLogger,
expect.any(Object)
);
// Verify successful handling of multiple tasks
expect(mockHandleApiResult).toHaveBeenCalledWith(
multipleTasksSuccessResponse,
mockLogger,
'Error removing task',
undefined,
multipleTaskArgs.projectRoot
);
});
test('should handle missing tag parameter (defaults to current tag)', async () => {
const argsWithoutTag = {
id: '5',
projectRoot: '/mock/project/root'
};
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(argsWithoutTag, mockContext);
// Verify removeTaskDirect was called with undefined tag (should default to current tag)
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
id: '5',
projectRoot: '/mock/project/root',
tag: undefined // Should be undefined when not provided
}),
mockLogger,
expect.any(Object)
);
});
test('should handle errors from removeTaskDirect', async () => {
// Setup error response
mockRemoveTaskDirect.mockResolvedValueOnce(errorResponse);
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify removeTaskDirect was called
expect(mockRemoveTaskDirect).toHaveBeenCalled();
// Verify error logging
expect(mockLogger.error).toHaveBeenCalledWith(
"Failed to remove task: The following tasks were not found in tag 'feature-branch': 999"
);
// Verify handleApiResult was called with error response
expect(mockHandleApiResult).toHaveBeenCalledWith(
errorResponse,
mockLogger,
'Error removing task',
undefined,
validArgs.projectRoot
);
});
test('should handle path finding errors', async () => {
// Setup path finding error
mockFindTasksPath.mockImplementationOnce(() => {
throw new Error('No tasks.json found');
});
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
const result = await executeFunction(validArgs, mockContext);
// Verify error logging
expect(mockLogger.error).toHaveBeenCalledWith(
'Error finding tasks.json: No tasks.json found'
);
// Verify error response was returned
expect(mockCreateErrorResponse).toHaveBeenCalledWith(
'Failed to find tasks.json: No tasks.json found'
);
// Verify removeTaskDirect was NOT called
expect(mockRemoveTaskDirect).not.toHaveBeenCalled();
});
test('should handle unexpected errors in execute function', async () => {
// Setup unexpected error
mockRemoveTaskDirect.mockImplementationOnce(() => {
throw new Error('Unexpected error');
});
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify error logging
expect(mockLogger.error).toHaveBeenCalledWith(
'Error in remove-task tool: Unexpected error'
);
// Verify error response was returned
expect(mockCreateErrorResponse).toHaveBeenCalledWith('Unexpected error');
});
test('should properly handle withNormalizedProjectRoot wrapper', () => {
// Verify that withNormalizedProjectRoot was called with the execute function
expect(mockWithNormalizedProjectRoot).toHaveBeenCalledWith(
expect.any(Function)
);
});
test('should log appropriate info messages for successful operations', async () => {
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify appropriate logging
expect(mockLogger.info).toHaveBeenCalledWith(
'Removing task(s) with ID(s): 5'
);
expect(mockLogger.info).toHaveBeenCalledWith(
'Using tasks file path: /mock/project/tasks.json'
);
expect(mockLogger.info).toHaveBeenCalledWith(
'Successfully removed task: 5'
);
});
test('should handle subtask removal with proper tag context', async () => {
const subtaskArgs = {
id: '5.2',
projectRoot: '/mock/project/root',
tag: 'feature-branch'
};
const subtaskSuccessResponse = {
success: true,
data: {
totalTasks: 1,
successful: 1,
failed: 0,
removedTasks: [
{
id: 2,
title: 'Removed Subtask',
status: 'pending',
parentTaskId: 5
}
],
messages: [
"Successfully removed subtask 5.2 from tag 'feature-branch'"
],
errors: [],
tasksPath: '/mock/project/tasks.json',
tag: 'feature-branch'
}
};
mockRemoveTaskDirect.mockResolvedValueOnce(subtaskSuccessResponse);
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(subtaskArgs, mockContext);
// Verify removeTaskDirect was called with subtask ID and tag
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
id: '5.2',
tag: 'feature-branch'
}),
mockLogger,
expect.any(Object)
);
// Verify successful handling
expect(mockHandleApiResult).toHaveBeenCalledWith(
subtaskSuccessResponse,
mockLogger,
'Error removing task',
undefined,
subtaskArgs.projectRoot
);
});
});


@@ -0,0 +1,190 @@
/**
* Unit test to ensure fixDependenciesCommand writes JSON with the correct
* projectRoot and tag arguments so that tag data is preserved.
*/
import { jest } from '@jest/globals';
// Mock process.exit to prevent test termination
const mockProcessExit = jest.fn();
const originalExit = process.exit;
process.exit = mockProcessExit;
// Mock utils.js BEFORE importing the module under test
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
readJSON: jest.fn(),
writeJSON: jest.fn(),
log: jest.fn(),
findProjectRoot: jest.fn(() => '/mock/project/root'),
getCurrentTag: jest.fn(() => 'master'),
taskExists: jest.fn(() => true),
formatTaskId: jest.fn((id) => id),
findCycles: jest.fn(() => []),
isSilentMode: jest.fn(() => true),
resolveTag: jest.fn(() => 'master'),
getTasksForTag: jest.fn(() => []),
setTasksForTag: jest.fn(),
enableSilentMode: jest.fn(),
disableSilentMode: jest.fn()
}));
// Mock ui.js
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
displayBanner: jest.fn()
}));
// Mock task-manager.js
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager.js',
() => ({
generateTaskFiles: jest.fn()
})
);
// Mock external libraries
jest.unstable_mockModule('chalk', () => ({
default: {
green: jest.fn((text) => text),
cyan: jest.fn((text) => text),
bold: jest.fn((text) => text)
}
}));
jest.unstable_mockModule('boxen', () => ({
default: jest.fn((text) => text)
}));
// Import the mocked modules
const { readJSON, writeJSON, log, taskExists } = await import(
'../../../../../scripts/modules/utils.js'
);
// Import the module under test
const { fixDependenciesCommand } = await import(
'../../../../../scripts/modules/dependency-manager.js'
);
describe('fixDependenciesCommand tag preservation', () => {
beforeEach(() => {
jest.clearAllMocks();
mockProcessExit.mockClear();
});
afterAll(() => {
// Restore original process.exit
process.exit = originalExit;
});
it('calls writeJSON with projectRoot and tag parameters when changes are made', async () => {
const tasksPath = '/mock/tasks.json';
const projectRoot = '/mock/project/root';
const tag = 'master';
// Mock data WITH dependency issues to trigger writeJSON
const tasksDataWithIssues = {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [999] // Non-existent dependency to trigger fix
},
{
id: 2,
title: 'Task 2',
dependencies: []
}
],
tag: 'master',
_rawTaggedData: {
master: {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [999]
}
]
}
}
};
readJSON.mockReturnValue(tasksDataWithIssues);
taskExists.mockReturnValue(false); // Make dependency invalid to trigger fix
await fixDependenciesCommand(tasksPath, {
context: { projectRoot, tag }
});
// Verify readJSON was called with correct parameters
expect(readJSON).toHaveBeenCalledWith(tasksPath, projectRoot, tag);
// Verify writeJSON was called (should be triggered by removing invalid dependency)
expect(writeJSON).toHaveBeenCalled();
// Check the writeJSON call parameters
const writeJSONCalls = writeJSON.mock.calls;
const lastWriteCall = writeJSONCalls[writeJSONCalls.length - 1];
const [calledPath, _data, calledProjectRoot, calledTag] = lastWriteCall;
expect(calledPath).toBe(tasksPath);
expect(calledProjectRoot).toBe(projectRoot);
expect(calledTag).toBe(tag);
// Verify process.exit was NOT called (meaning the function succeeded)
expect(mockProcessExit).not.toHaveBeenCalled();
});
it('does not call writeJSON when no changes are needed', async () => {
const tasksPath = '/mock/tasks.json';
const projectRoot = '/mock/project/root';
const tag = 'master';
// Mock data WITHOUT dependency issues (no changes needed)
const cleanTasksData = {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [] // Clean, no issues
}
],
tag: 'master'
};
readJSON.mockReturnValue(cleanTasksData);
taskExists.mockReturnValue(true); // All dependencies exist
await fixDependenciesCommand(tasksPath, {
context: { projectRoot, tag }
});
// Verify readJSON was called
expect(readJSON).toHaveBeenCalledWith(tasksPath, projectRoot, tag);
// Verify writeJSON was NOT called (no changes needed)
expect(writeJSON).not.toHaveBeenCalled();
// Verify process.exit was NOT called
expect(mockProcessExit).not.toHaveBeenCalled();
});
it('handles early exit when no valid tasks found', async () => {
const tasksPath = '/mock/tasks.json';
// Mock invalid data to trigger early exit
readJSON.mockReturnValue(null);
await fixDependenciesCommand(tasksPath, {
context: { projectRoot: '/mock', tag: 'master' }
});
// Verify readJSON was called
expect(readJSON).toHaveBeenCalled();
// Verify writeJSON was NOT called (early exit)
expect(writeJSON).not.toHaveBeenCalled();
// Verify process.exit WAS called due to invalid data
expect(mockProcessExit).toHaveBeenCalledWith(1);
});
});
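The behavior these tests exercise — dropping dependencies that reference non-existent task IDs, and only writing the file back when something actually changed — can be sketched in plain JavaScript. This is an illustrative reimplementation for clarity, not the actual `fixDependenciesCommand` code:

```javascript
// Illustrative sketch of the dependency-fix behavior the tests above verify:
// any dependency ID that does not match an existing task is removed, and the
// caller is told whether anything changed (so writeJSON is only called when
// a fix was applied). NOT the real fixDependenciesCommand implementation.
function fixInvalidDependencies(tasks) {
	const validIds = new Set(tasks.map((t) => t.id));
	let changed = false;
	const fixed = tasks.map((task) => {
		const deps = (task.dependencies || []).filter((dep) => validIds.has(dep));
		if (deps.length !== (task.dependencies || []).length) changed = true;
		return { ...task, dependencies: deps };
	});
	return { fixed, changed };
}

const tasks = [
	{ id: 1, title: 'Task 1', dependencies: [999] }, // 999 does not exist
	{ id: 2, title: 'Task 2', dependencies: [1] }
];
const { fixed, changed } = fixInvalidDependencies(tasks);
// changed is true: task 1's invalid dependency 999 is dropped; task 2 keeps [1].
```

The second test above (`does not call writeJSON when no changes are needed`) corresponds to the `changed === false` path of this sketch.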
