Compare commits

..

83 Commits

Author SHA1 Message Date
Ralph Khreish
a673df87bc fix: issues with release
Fix remove-task bug with MCP
Fix response-language reading from the old .taskmaster config file
2025-07-03 14:22:16 +03:00
Geoff Hammond
f7fbdd6755 Feat: Implemented advanced settings for Claude Code AI provider (#872)
* Feat: Implemented advanced settings for Claude Code AI provider

- Added new 'claudeCode' property to default config
- Added getters and validation functions to 'config-manager.js'
- Added new 'isEmpty' utility to 'utils.js'
- Added new constants file 'commands.js' for AI_COMMAND_NAMES
- Updated Claude Code AI provider to use new config functions
- Updated 'claude-code-usage.md' documentation
- Added 'config-manager.test.js' tests to cover new settings

* chore: run format

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-02 22:43:46 +02:00
shenysun
c99df64f65 feat: Support custom response language (#510)
* feat: Support custom response language

* fix: Add default values for response language in config-manager.js

* chore: Update configuration file and add default response language settings

* feat: Support MCP/CLI custom response language

* chore: Update test comments to English for consistency

* docs: Auto-update and format models.md

* chore: fix format

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-02 22:35:49 +02:00
Geoff Hammond
5eafc5ea11 Feat: Added automatic determination of task number based on complexity (#884)
- Added 'defaultNumTasks: 10' to default config, now used in 'parse-prd'
- Adjusted 'parse-prd' and 'expand-task' to:
  - Accept a 'numTasks' value of 0
  - Updated tool and command descriptions
  - Updated prompts to 'an appropriate number of' when value is 0
- Updated 'README-task-master.md' and 'command-reference.md' docs
- Added more tests for: 'parse-prd', 'expand-task' and 'config-manager'
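The "numTasks value of 0" behavior above can be sketched as a small prompt-wording switch. This is a minimal illustration with hypothetical function names, not the actual parse-prd source; only the phrase "an appropriate number of" and the 0-means-automatic rule come from the commit message.

```javascript
// Hypothetical sketch: when numTasks is 0, the prompt asks the model to
// choose a task count based on complexity instead of a fixed number.
function taskCountPhrase(numTasks) {
	return numTasks > 0 ? `${numTasks}` : 'an appropriate number of';
}

function buildParsePrdPrompt(numTasks) {
	return `Generate ${taskCountPhrase(numTasks)} top-level tasks from the PRD.`;
}
```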

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-02 22:12:27 +02:00
github-actions[bot]
a33d6ecfeb docs: Auto-update and format models.md 2025-07-02 19:46:30 +00:00
Ben Vargas
dd96f51179 feat: Add gemini-cli provider integration for Task Master (#897)
* feat: Add gemini-cli provider integration for Task Master

This commit adds comprehensive support for the Gemini CLI provider, enabling users
to leverage Google's Gemini models through OAuth authentication via the gemini CLI
tool. This integration provides a seamless experience for users who prefer using
their existing Google account authentication rather than managing API keys.

## Implementation Details

### Provider Class (`src/ai-providers/gemini-cli.js`)
- Created GeminiCliProvider extending BaseAIProvider
- Implements dual authentication support:
  - Primary: OAuth authentication via `gemini auth login` (authType: 'oauth-personal')
  - Secondary: API key authentication for compatibility (authType: 'api-key')
- Uses the npm package `ai-sdk-provider-gemini-cli` (v0.0.3) for SDK integration
- Properly handles authentication validation without console output
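The dual authentication described above can be sketched as a simple resolver. The authType strings 'oauth-personal' and 'api-key' come from the commit message; the function shape is an illustrative assumption, not the provider's actual code.

```javascript
// Sketch: OAuth (via `gemini auth login` credentials) is the primary path;
// an explicit API key falls back to key-based auth for compatibility.
function resolveGeminiAuth(apiKey) {
	if (apiKey) {
		return { authType: 'api-key', apiKey };
	}
	return { authType: 'oauth-personal' }; // uses existing gemini CLI login
}
```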

### Model Configuration (`scripts/modules/supported-models.json`)
- Added two Gemini models with accurate specifications:
  - gemini-2.5-pro: 72% SWE score, 65,536 max output tokens
  - gemini-2.5-flash: 71% SWE score, 65,536 max output tokens
- Both models support main, fallback, and research roles
- Configured with zero cost (free tier)

### System Integration
- Registered provider in PROVIDERS map (`scripts/modules/ai-services-unified.js`)
- Added to OPTIONAL_AUTH_PROVIDERS set for flexible authentication
- Added GEMINI_CLI constant to provider constants (`src/constants/providers.js`)
- Exported GeminiCliProvider from index (`src/ai-providers/index.js`)

### Command Line Support (`scripts/modules/commands.js`)
- Added --gemini-cli flag to models command for provider hint
- Integrated into model selection logic (setModel function)
- Updated error messages to include gemini-cli in provider list
- Removed unrelated azure/vertex changes to maintain PR focus

### Documentation (`docs/providers/gemini-cli.md`)
- Comprehensive provider documentation emphasizing OAuth-first approach
- Clear explanation of why users would choose gemini-cli over standard google provider
- Detailed installation, authentication, and configuration instructions
- Troubleshooting section with common issues and solutions

### Testing (`tests/unit/ai-providers/gemini-cli.test.js`)
- Complete test suite with 12 tests covering all functionality
- Tests for both OAuth and API key authentication paths
- Error handling and edge case coverage
- Updated mocks in ai-services-unified.test.js for integration testing

## Key Design Decisions

1. **OAuth-First Design**: The provider assumes users want to leverage their existing
   `gemini auth login` credentials, making this the default authentication method.

2. **Authentication Type Mapping**: Discovered through testing that the SDK expects:
   - 'oauth-personal' for OAuth/CLI authentication (not 'gemini-cli' or 'oauth')
   - 'api-key' for API key authentication (not 'gemini-api-key')

3. **Silent Operation**: Removed console.log statements from validateAuth to match
   the pattern used by other providers like claude-code.

4. **Limited Model Support**: Only gemini-2.5-pro and gemini-2.5-flash are available
   through the CLI, as confirmed by the package author.

## Usage

```bash
# Install gemini CLI globally
npm install -g @google/gemini-cli

# Authenticate with Google account
gemini auth login

# Configure Task Master to use gemini-cli
task-master models --set-main gemini-2.5-pro --gemini-cli

# Use Task Master normally
task-master new "Create a REST API endpoint"
```

## Dependencies
- Added `ai-sdk-provider-gemini-cli@^0.0.3` to package.json
- This package wraps the Google Gemini CLI Core functionality for Vercel AI SDK

## Testing
All tests pass (613 total), including the new gemini-cli provider tests.
Code has been formatted with biome to maintain consistency.

This implementation provides a clean, well-tested integration that follows Task Master's
existing patterns while offering users a convenient way to use Gemini models with their
existing Google authentication.

* feat: implement lazy loading for gemini-cli provider

- Move ai-sdk-provider-gemini-cli to optionalDependencies
- Implement dynamic import with loadGeminiCliModule() function
- Make getClient() async to support lazy loading
- Update base-provider to handle async getClient() calls
- Update tests to handle async getClient() method

This allows the application to start without the gemini-cli package
installed, only loading it when actually needed.
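The lazy-loading approach above can be sketched as follows. `loadGeminiCliModule` is named in the commit; the promise caching and the injectable importer are illustrative assumptions. The idea is that the optional package is resolved at most once, and a missing package produces an actionable error at call time instead of a startup failure.

```javascript
// Cache the dynamic import so the optional dependency is resolved once.
let geminiCliModulePromise = null;

function loadGeminiCliModule(importer = () => import('ai-sdk-provider-gemini-cli')) {
	if (!geminiCliModulePromise) {
		geminiCliModulePromise = importer().catch(() => {
			geminiCliModulePromise = null; // allow a retry after the user installs it
			throw new Error(
				"Optional dependency 'ai-sdk-provider-gemini-cli' is not installed. " +
					'Run: npm install ai-sdk-provider-gemini-cli'
			);
		});
	}
	return geminiCliModulePromise;
}
```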

* feat(gemini-cli): replace regex-based JSON extraction with jsonc-parser

- Add jsonc-parser dependency for robust JSON parsing
- Replace simple regex approach with progressive parsing strategy:
  1. Direct parsing after cleanup
  2. Smart boundary detection with single-pass analysis
  3. Limited fallback for edge cases
- Optimize performance with early termination and strategic sampling
- Add comprehensive tests for variable declarations, trailing commas,
  escaped quotes, nested objects, and performance edge cases
- Improve reliability for complex JSON structures that Gemini commonly produces
- Fix code formatting with biome

This addresses JSON parsing failures in generateObject operations while
maintaining backward compatibility and significantly improving performance
for large responses.
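The progressive parsing strategy above relies on jsonc-parser; this self-contained sketch shows only the first two steps (direct parse, then smart boundary detection) using plain `JSON.parse`, so the fallback behavior for trailing commas and comments is omitted.

```javascript
// Sketch of progressive JSON extraction from a conversational response:
// try a direct parse first, then extract the outermost object span.
function extractJson(text) {
	const trimmed = text.trim();
	try {
		return JSON.parse(trimmed); // fast path: response is already clean JSON
	} catch {
		// Boundary detection: widest span from first '{' to last '}'
		const start = trimmed.indexOf('{');
		const end = trimmed.lastIndexOf('}');
		if (start === -1 || end <= start) return null;
		try {
			return JSON.parse(trimmed.slice(start, end + 1));
		} catch {
			return null; // real implementation would try jsonc-parser here
		}
	}
}
```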

* fix: update package-lock.json and fix formatting for CI/CD

- Add jsonc-parser to package-lock.json for proper npm ci compatibility
- Fix biome formatting issues in gemini-cli provider and tests
- Ensure all CI/CD checks pass

* feat(gemini-cli): implement comprehensive JSON output reliability system

- Add automatic JSON request detection via content analysis patterns
- Implement task-specific prompt simplification for improved AI compliance
- Add strict JSON enforcement through enhanced system prompts
- Implement response interception with intelligent JSON extraction fallback
- Add comprehensive test coverage for all new JSON handling methods
- Move debug logging to appropriate level for clean user experience

This multi-layered approach addresses gemini-cli's conversational response
tendencies, ensuring reliable structured JSON output for task expansion
operations. Achieves 100% success rate in end-to-end testing while
maintaining full backward compatibility with existing functionality.

Technical implementation includes:
• JSON detection via user message content analysis
• Expand-task prompt simplification with cleaner instructions
• System prompt enhancement with strict JSON enforcement
• Response processing with jsonc-parser-based extraction
• Comprehensive unit test coverage for edge cases
• Debug-level logging to prevent user interface clutter

Resolves: gemini-cli JSON formatting inconsistencies
Tested: All 46 test suites pass, formatting verified

* chore: add changeset for gemini-cli provider implementation

Adds minor version bump for comprehensive gemini-cli provider with:
- Lazy loading and optional dependency management
- Advanced JSON parsing with jsonc-parser
- Multi-layer reliability system for structured output
- Complete test coverage and CI/CD compliance

* refactor: consolidate optional auth provider logic

- Add gemini-cli to existing providersWithoutApiKeys array in config-manager
- Export providersWithoutApiKeys for reuse across modules
- Remove duplicate OPTIONAL_AUTH_PROVIDERS Set from ai-services-unified
- Update ai-services-unified to import and use centralized array
- Fix Jest mock to include new providersWithoutApiKeys export

This eliminates code duplication and provides a single source of truth
for which providers support optional authentication, addressing PR
reviewer feedback about existing similar functionality in src/constants.
2025-07-02 21:46:19 +02:00
Parthy
2852149a47 fix: Critical writeJSON Context Fixes - Prevent Tag Corruption (#910)
* feat(tasks): Fix critical tag corruption bug in task management

- Fixed missing context parameters in writeJSON calls across add-task, remove-task, and add-subtask functions
- Added projectRoot and tag parameters to prevent data corruption in multi-tag environments
- Re-enabled generateTaskFiles calls to ensure markdown files are updated after operations
- Enhanced add_subtask MCP tool with tag parameter support
- Refactored addSubtaskDirect function to properly pass context to core logic
- Streamlined codebase by removing deprecated functionality

This resolves the critical bug where task operations in one tag context would corrupt or delete tasks from other tags in tasks.json.
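The corruption mode being fixed can be illustrated with a minimal sketch (hypothetical names throughout): tasks.json holds one task list per tag, so a write performed in one tag's context must merge that tag's slice back into the whole file rather than replacing the file with the slice alone — which is why every writeJSON call needs the projectRoot and tag context.

```javascript
// Illustrative merge: preserve all other tags when writing one tag's tasks.
function writeTaggedTasks(allTags, tag, tasks) {
	// allTags is the parsed tasks.json: { master: { tasks: [...] }, ... }
	return { ...allTags, [tag]: { ...allTags[tag], tasks } };
}
```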

* feat(task-manager): Enhance addSubtask with current tag support

- Added `getCurrentTag` utility to retrieve the current tag context for task operations.
- Updated `addSubtask` to use the current tag when reading and writing tasks, ensuring proper context handling.
- Refactored tests to accommodate changes in the `addSubtask` function, ensuring accurate mock implementations and expectations.
- Cleaned up test cases for better readability and maintainability.

This improves task management by preventing tag-related data corruption and enhances the overall functionality of the task manager.

* feat(remove-task): Add tag support for task removal and enhance error handling

- Introduced `tag` parameter in `removeTaskDirect` to specify context for task operations, improving multi-tag support.
- Updated logging to include tag context in messages for better traceability.
- Refactored task removal logic to streamline the process and improve error reporting.
- Added comprehensive unit tests to validate tag handling and ensure robust error management.

This enhancement prevents task data corruption across different tags and improves the overall reliability of the task management system.

* feat(add-task): Add projectRoot and tag parameters to addTask tests

- Updated `addTask` unit tests to include `projectRoot` and `tag` parameters for better context handling.
- Enhanced test cases to ensure accurate expectations and improve overall test coverage.

This change aligns with recent enhancements in task management, ensuring consistency across task operations.

* feat(set-task-status): Add tag parameter support and enhance task status handling

- Introduced `tag` parameter in `setTaskStatusDirect` and related functions to improve context management in multi-tag environments.
- Updated `writeJSON` calls to ensure task data integrity across different tags.
- Enhanced unit tests to validate tag preservation during task status updates, ensuring robust functionality.

This change aligns with recent improvements in task management, preventing data corruption and enhancing overall reliability.

* feat(tag-management): Enhance writeJSON calls to preserve tag context

- Updated `writeJSON` calls in `createTag`, `deleteTag`, `renameTag`, `copyTag`, and `enhanceTagsWithMetadata` to include `projectRoot` for better context management and to prevent tag corruption.
- Added comprehensive unit tests for tag management functions to ensure data integrity and proper tag handling during operations.

This change improves the reliability of tag management by ensuring that operations do not corrupt existing tags and maintains the overall structure of the task data.

* feat(expand-task): Update writeJSON to include projectRoot and tag context

- Modified `writeJSON` call in `expandTaskDirect` to pass `projectRoot` and `tag` parameters, ensuring proper context management when saving tasks.json.
- This change aligns with recent enhancements in task management, preventing potential data corruption and improving overall reliability.

* feat(fix-dependencies): Add projectRoot and tag parameters for enhanced context management

- Updated `fixDependenciesDirect` and `registerFixDependenciesTool` to include `projectRoot` and `tag` parameters, improving context handling during dependency fixes.
- Introduced a new unit test for `fixDependenciesCommand` to ensure proper preservation of projectRoot and tag data in JSON outputs.

This change enhances the reliability of dependency management by ensuring that context is maintained across operations, preventing potential data issues.

* fix(context): propagate projectRoot and tag through dependency, expansion, status-update and tag-management commands to prevent cross-tag data corruption

* test(fix-dependencies): Enhance unit tests for fixDependenciesCommand

- Refactored tests to use unstable mocks for utils, ui, and task-manager modules, improving isolation and reliability.
- Added checks for process.exit to ensure proper handling of invalid data scenarios.
- Updated test cases to verify writeJSON calls with projectRoot and tag parameters, ensuring accurate context preservation during dependency fixes.

This change strengthens the test suite for dependency management, ensuring robust functionality and preventing potential data issues.

* chore(plan): remove outdated fix plan for `writeJSON` context parameters
2025-07-02 21:45:10 +02:00
Parthy
43e0025f4c fix: prevent tag corruption in bulk updates (#856)
* fix(task-manager): prevent tag corruption in bulk updates and add tag preservation test

- Fix writeJSON call in scripts/modules/task-manager/update-tasks.js (line 469) to include projectRoot and tag parameters.
- Ensure tagged task lists maintain data integrity during bulk updates, preventing task disappearance in tagged contexts.
- Update MCP tools to properly pass tag context through the call chain.
- Introduce a comprehensive test case to verify that all tags are preserved when updating tasks, covering both master and feature-branch scenarios.

Addresses an issue where bulk updates could corrupt tasks.json in tagged task list structures, reinforcing task management robustness.

* style(tests): format task data in update-tasks test
2025-07-02 12:53:12 +02:00
Parthy
598e687067 fix: use tag-specific complexity reports (#857)
* fix(expand-task): Use tag-specific complexity reports

- Add getTagAwareFilePath utility function to resolve tag-specific file paths
- Update expandTask to use tag-aware complexity report paths
- Fix issue where expand-task always used default complexity report
- Add comprehensive tests for getTagAwareFilePath utility
- Ensure proper handling of file extensions and directory structures

Fixes #850: Expanding tasks not using tag-specific complexity reports

The expandTask function now correctly uses complexity reports specific
to the current tag context (e.g., task-complexity-report_feature-branch.json)
instead of always using the default task-complexity-report.json file.

This enables proper task expansion behavior when working with multiple
tag contexts, ensuring complexity analysis is tag-specific and accurate.
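The tag-aware path resolution above can be sketched with `path.parse()`/`path.format()`, which the later refactor commit says the utility uses. The `slugifyTagForFilePath` behavior is an assumption beyond what the commits state; the example filename matches the one given above.

```javascript
import path from 'node:path';

// Make a tag name safe to embed in a filename (assumed behavior).
function slugifyTagForFilePath(tag) {
	return tag.toLowerCase().replace(/[^a-z0-9]+/g, '-');
}

// Append the tag before the extension: report.json -> report_feature-branch.json
function getTagAwareFilePath(basePath, tag) {
	if (!tag || tag === 'master') return basePath; // default report for master
	const { dir, name, ext } = path.parse(basePath);
	return path.format({ dir, name: `${name}_${slugifyTagForFilePath(tag)}`, ext });
}
```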

* chore: Add changeset for tag-specific complexity reports fix

* test(expand-task): Add tests for tag-specific complexity report integration

- Introduced a new test suite for verifying the integration of tag-specific complexity reports in the expandTask function.
- Added a test case to ensure the correct complexity report is used when available for a specific tag.
- Mocked file system interactions to simulate the presence of tag-specific complexity reports.

This enhances the test coverage for task expansion behavior, ensuring it accurately reflects the complexity analysis based on the current tag context.

* refactor(task-manager): unify and simplify tag-aware file path logic and tests

- Reformatted imports and cleaned up comments in test files for readability
- Centralized mocks: moved getTagAwareFilePath & slugifyTagForFilePath
  mocks to setup.js for consistency and maintainability
- Simplified utils/getTagAwareFilePath: replaced manual parsing with
  path.parse() & path.format(); improved extension handling
- Enhanced test mocks for path.parse, path.format & reset path.join
  in beforeEach to avoid interference
- All tests now pass consistently; no change in functionality
2025-07-02 12:52:45 +02:00
Shandy Hermawan
f38abd6843 fix: Subtask generation fails on gemini-2.5-pro (#852)
* fix: clarify details format in task expansion prompt

* chore: add changeset
2025-07-02 07:16:09 +02:00
Joe Danziger
24e9206da0 Fix rules command to use reliable project root detection like other commands (#908)
* update/fix projectRoot call for consistency

* internal naming consistency

* add changeset
2025-07-02 07:05:30 +02:00
Ofer Shaal
8d9fcf2064 Fix/spelling mistakes (#876)
* docs: Auto-update and format models.md

* fix: correct typos in documentation for parse-prd and taskmaster commands

- Updated the `parse-prd` documentation to fix the spelling of "multiple."
- Clarified the description of the `id` parameter in the `taskmaster` documentation to ensure proper syntax and readability.

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-01 11:44:44 +02:00
Joe Danziger
56a415ef79 fix: Ensure projectRoot is a string (potential WSL fix) (#892)
* ensure projectRoot is a string

* add changeset
2025-07-01 10:55:48 +02:00
github-actions[bot]
f081bba83c docs: Auto-update and format models.md 2025-06-25 13:49:17 +00:00
Nicholas Spalding
6fd5e23396 Support for Additional Anthropic Models on Bedrock (#870)
* Add additional Anthropic Models for Bedrock

* Update Models Docs from `scripts/modules/supported-models.json`

* feat(models): add additional Bedrock supported models
2025-06-25 15:49:02 +02:00
Joe Danziger
e4456b11bc fix: .gitignore missing trailing newline during project initialization (#855) 2025-06-24 07:42:23 +03:00
github-actions[bot]
295087a5b8 docs: Auto-update and format models.md 2025-06-23 06:13:28 +00:00
Ralph Khreish
5f2b7323ad Merge pull request #849 from eyaltoledano/chore/update.next.june
Chore: rebase next after Release 0.18.0
2025-06-23 09:13:08 +03:00
Ralph Khreish
9ddc521757 chore: fix CI and weird conflicts 2025-06-23 09:11:30 +03:00
Ralph Khreish
e7087cf88f chore: fix format 2025-06-23 09:08:34 +03:00
Ralph Khreish
08f86f19c3 Merge remote-tracking branch 'origin/next' into chore/update.next.june 2025-06-23 09:06:52 +03:00
Joe Danziger
f272748965 Default to Cursor profile for MCP init when no rules specified (#846) 2025-06-23 08:57:42 +03:00
Ralph Khreish
15e15a1f17 chore: format fix 2025-06-23 08:57:42 +03:00
github-actions[bot]
3a30e9acd4 Version Packages 2025-06-23 08:57:42 +03:00
Ralph Khreish
15286c029d feat: make more compatible with "o" family models (#839) 2025-06-23 08:57:39 +03:00
neno
c39e5158b4 feat: Claude Code slash commands for Task Master (#774)
* Fix Cursor deeplink installation with copy-paste instructions (#723)

* fix: expand-task (#755)

* docs: Update o3 model price (#751)

* docs: Auto-update and format models.md

* docs: Auto-update and format models.md

* feat: Add Claude Code task master commands

Adds Task Master slash commands for Claude Code under /project:tm/ namespace

---------

Co-authored-by: Joe Danziger <joe@ticc.net>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
2025-06-23 08:57:38 +03:00
github-actions[bot]
4bda8f4d76 chore: rc version bump 2025-06-23 08:57:36 +03:00
Joe Danziger
49976e864b Call rules interactive setup during init (#833) 2025-06-23 08:57:34 +03:00
Joe Danziger
30b873a7da store tasks in git by default (#835) 2025-06-23 08:57:34 +03:00
Ralph Khreish
ab37859a7e fix: update task by id (#834) 2025-06-23 08:57:32 +03:00
Joe Danziger
e704ba12fd feat: Enhanced project initialization with Git worktree detection (#743)
* Fix Cursor deeplink installation with copy-paste instructions (#723)

* detect git worktree

* add changeset

* add aliases and git flags

* add changeset

* rename and update test

* add store tasks in git functionality

* update changeset

* fix newline

* remove unused import

* update command wording

* update command option text
2025-06-23 08:56:43 +03:00
Joe Danziger
64b2d8f79e Rename Roo Code "Boomerang" role to "Orchestrator" (#831) 2025-06-23 08:56:42 +03:00
Ralph Khreish
bbb4bbcc11 Feature/compatibleapisupport (#830)
* add compatible platform api support

* Adjust the code according to the suggestions

* Fully revised as requested: restored all required checks, improved compatibility, and converted all comments to English.

* feat: Add support for compatible API endpoints via baseURL

* chore: Add changeset for compatible API support

* chore: cleanup

* chore: improve changeset

* fix: package-lock.json

* fix: package-lock.json

---------

Co-authored-by: He-Xun <1226807142@qq.com>
2025-06-23 08:56:39 +03:00
Ben Vargas
8e38348203 chore: add changeset for Claude Code provider feature 2025-06-23 08:56:36 +03:00
Ben Vargas
01b651bddc revert: remove maxTokens update functionality from init
This functionality was out of scope for the Claude Code provider PR.
The automatic updating of maxTokens values in config.json during
initialization is a general improvement that should be in a separate PR.

Additionally, Claude Code ignores maxTokens and temperature parameters
anyway, making this change irrelevant for the Claude Code integration.

Removed:
- scripts/modules/update-config-tokens.js
- Import and usage in scripts/init.js
2025-06-23 08:56:34 +03:00
Ben Vargas
0840ad8316 feat: make @anthropic-ai/claude-code an optional dependency
This change makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality.

Changes:
- Added @anthropic-ai/claude-code to optionalDependencies in package.json
- Implemented lazy loading in language-model.js to only import the SDK when actually used
- Updated documentation to explain the optional installation requirement
- Applied formatting fixes to ensure code consistency

Benefits:
- Users without Claude Code subscriptions don't need to install the dependency
- Reduces package size for users who don't use Claude Code
- Prevents installation failures if the package is unavailable
- Provides clear error messages when the package is needed but not installed

The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present.
2025-06-23 08:56:30 +03:00
Ben Vargas
5c726dc542 feat: add Claude Code provider support
Implements Claude Code as a new AI provider that uses the Claude Code CLI
without requiring API keys. This enables users to leverage Claude models
through their local Claude Code installation.

Key changes:
- Add complete AI SDK v1 implementation for Claude Code provider
  - Custom SDK with streaming/non-streaming support
  - Session management for conversation continuity
  - JSON extraction for object generation mode
  - Support for advanced settings (maxTurns, allowedTools, etc.)

- Integrate Claude Code into Task Master's provider system
  - Update ai-services-unified.js to handle keyless authentication
  - Add provider to supported-models.json with opus/sonnet models
  - Ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000)

- Fix maxTokens configuration issue
  - Add max_tokens property to getAvailableModels() output
  - Update setModel() to properly handle claude-code models
  - Create update-config-tokens.js utility for init process

- Add comprehensive documentation
  - User guide with configuration examples
  - Advanced settings explanation and future integration options

The implementation maintains full backward compatibility with existing
providers while adding seamless Claude Code support to all Task Master
commands.
2025-06-23 08:56:28 +03:00
ejones40
21d988691b Add pyproject.toml as project root marker (#804)
* feat: Add pyproject.toml as project root marker

- Added 'pyproject.toml' to the project markers array in findProjectRoot()
- Enables Task Master to recognize Python projects using pyproject.toml
- Improves project root detection for modern Python development workflows
- Maintains compatibility with existing Node.js and Git-based detection
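Marker-based root detection of this kind can be sketched as a walk up the directory tree. `findProjectRoot` is named in the commit; the exact marker list beyond `pyproject.toml`, `.git`, and `package.json` is an assumption.

```javascript
import fs from 'node:fs';
import path from 'node:path';

// Assumed marker set; the commit only confirms pyproject.toml was added.
const PROJECT_MARKERS = ['.git', 'package.json', 'pyproject.toml', '.taskmaster'];

function findProjectRoot(startDir) {
	let dir = path.resolve(startDir);
	for (;;) {
		if (PROJECT_MARKERS.some((m) => fs.existsSync(path.join(dir, m)))) {
			return dir; // first ancestor containing any marker wins
		}
		const parent = path.dirname(dir);
		if (parent === dir) return null; // reached filesystem root, no marker
		dir = parent;
	}
}
```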

* chore: add changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-06-23 08:56:26 +03:00
Ralph Khreish
21839b1cd6 Fix/expand command tag corruption (#827)
* fix(expand): Fix tag corruption in expand command

- Fix tag parameter passing through MCP expand-task flow
- Add tag parameter to direct function and tool registration
- Fix contextGatherer method name from _buildDependencyContext to _buildDependencyGraphs
- Add comprehensive test coverage for tag handling in expand-task
- Ensures tagged task structure is preserved during expansion
- Prevents corruption when tag is undefined

Fixes expand command causing tag corruption in tagged task lists. All existing tests pass and new test coverage added.

* test(e2e): Add comprehensive tag-aware expand testing to verify tag corruption fix

- Add new test section for feature-expand tag creation and testing
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact and feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/tasks/tasks.json location
- Fix config file check to use .taskmaster/config.json instead of .taskmasterconfig
- All tag corruption verification tests pass successfully in E2E test

* fix(changeset): Update E2E test improvements changeset to properly reflect tag corruption fix verification

* chore(changeset): combine duplicate changesets for expand tag corruption fix

Merge eighty-breads-wonder.md into bright-llamas-enter.md to consolidate
the expand command fix and its comprehensive E2E testing enhancements
into a single changeset entry.

* Delete .changeset/eighty-breads-wonder.md

* Version Packages

* chore: fix package.json

* fix(expand): Enhance context handling in expandAllTasks function
- Added `tag` to context destructuring for better context management.
- Updated `readJSON` call to include `contextTag` for improved data integrity.
- Ensured the correct tag is passed during task expansion to prevent tag corruption.

---------

Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-23 08:56:26 +03:00
Ralph Khreish
6160089b8e fix(bedrock): improve AWS credential handling and add model definitions (#826)
* fix(bedrock): improve AWS credential handling and add model definitions

- Change error to warning when AWS credentials are missing in environment
- Allow fallback to system configuration (aws config files or instance profiles)
- Remove hardcoded region and profile parameters in Bedrock client
- Add Claude 3.7 Sonnet and DeepSeek R1 model definitions for Bedrock
- Update config manager to properly handle Bedrock provider

* chore: cleanup and format and small refactor

---------

Co-authored-by: Ray Krueger <raykrueger@gmail.com>
2025-06-23 08:56:18 +03:00
Nathan Marley
82bb50619f fix: switch to ESM export to avoid mixed format (#633)
* fix: switch to ESM export to avoid mixed format

The CLI entrypoint was using `module.exports` alongside ESM `import` statements,
resulting in an invalid mixed module format. Replaced the CommonJS export with
a proper ESM `export` to maintain consistency and prevent module resolution issues.

* chore: add changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-06-23 08:56:06 +03:00
Ralph Khreish
898f15e699 fix: providers config for azure, bedrock, and vertex (#822)
* fix: providers config for azure, bedrock, and vertex

* chore: improve changelog

* chore: fix CI
2025-06-23 08:56:02 +03:00
Joe Danziger
1a157567dc feat: Flexible brand rules management (#460)
* chore(docs): update docs and rules related to model management.

* feat(ai): Add OpenRouter AI provider support

Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider). This allows users to configure and utilize models available through the OpenRouter platform.

- Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject).

- Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic.

- Verified config-manager.js handles OpenRouter API key checks correctly.

- Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or MCP models tool. Requires OPENROUTER_API_KEY.

- Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails due to lack of underlying tool support in the selected model/provider endpoint.

* feat(cli): Add --status/-s filter flag to show command and get-task MCP tool

Implements the ability to filter subtasks displayed by the `task-master show <id>` command using the `--status` (or `-s`) flag. This is also available in the MCP context.

- Modified `commands.js` to add the `--status` option to the `show` command definition.

- Updated `utils.js` (`findTaskById`) to handle the filtering logic and return original subtask counts/arrays when filtering.

- Updated `ui.js` (`displayTaskById`) to use the filtered subtasks for the table, display a summary line when filtering, and use the original subtask list for the progress bar calculation.

- Updated MCP `get_task` tool and `showTaskDirect` function to accept and pass the `status` parameter.

- Added changeset entry.
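The filtering behavior described above — show only matching subtasks while keeping the original list for the progress bar — can be sketched roughly as follows. This is a hedged illustration, not the actual `findTaskById` code; the shape of the task object is assumed.

```javascript
// Sketch of status filtering for `show --status`: return the matching
// subtasks while preserving the original count for progress calculations.
function filterSubtasksByStatus(task, status) {
	const original = task.subtasks ?? [];
	const filtered = status
		? original.filter((st) => st.status === status)
		: original;
	// The caller renders `filtered` in the table but uses `originalCount`
	// (and the unfiltered list) for the progress bar.
	return { filtered, originalCount: original.length };
}
```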

* fix(tasks): Improve next task logic to be subtask-aware

* fix(tasks): Enable removing multiple tasks/subtasks via comma-separated IDs

- Refactors the core `removeTask` function (`task-manager/remove-task.js`) to accept and iterate over comma-separated task/subtask IDs.

- Updates dependency cleanup and file regeneration logic to run once after processing all specified IDs.

- Adjusts the `remove-task` CLI command (`commands.js`) description and confirmation prompt to handle multiple IDs correctly.

- Fixes a bug in the CLI confirmation prompt where task/subtask titles were not being displayed correctly.

- Updates the `remove_task` MCP tool description to reflect the new multi-ID capability.

This addresses the previously known issue where only the first ID in a comma-separated list was processed.

Closes #140
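The comma-separated ID handling described above amounts to splitting the argument into individual IDs before iterating. A minimal sketch (the helper name is hypothetical; the real logic lives in `task-manager/remove-task.js`):

```javascript
// Split a comma-separated ID argument ("3,4.1,5") into individual
// task/subtask IDs, tolerating stray whitespace and empty entries.
function parseTaskIds(idArg) {
	return idArg
		.split(',')
		.map((id) => id.trim())
		.filter(Boolean); // drop empty strings from trailing/double commas
}
```

The core function would then iterate over this array, removing each task, and run dependency cleanup and file regeneration once at the end.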

* Update README.md (#342)

* Update Discord badge (#337)

* refactor(init): Improve robustness and dependencies; Update template deps for AI SDKs; Silence npm install in MCP; Improve conditional model setup logic; Refactor init.js flags; Tweak Getting Started text; Fix MCP server launch command; Update default model in config template

* Refactor: Improve MCP logging, update E2E & tests

Refactors MCP server logging and updates testing infrastructure.

- MCP Server:

  - Replaced manual logger wrappers with centralized `createLogWrapper` utility.

  - Updated direct function calls to use `{ session, mcpLog }` context.

  - Removed deprecated `model` parameter from analyze, expand-all, expand-task tools.

  - Adjusted MCP tool import paths and parameter descriptions.

- Documentation:

  - Modified `docs/configuration.md`.

  - Modified `docs/tutorial.md`.

- Testing:

  - E2E Script (`run_e2e.sh`):

    - Removed `set -e`.

    - Added LLM analysis function (`analyze_log_with_llm`) & integration.

    - Adjusted test run directory creation timing.

    - Added debug echo statements.

  - Deleted Unit Tests: Removed `ai-client-factory.test.js`, `ai-client-utils.test.js`, `ai-services.test.js`.

  - Modified Fixtures: Updated `scripts/task-complexity-report.json`.

- Dev Scripts:

  - Modified `scripts/dev.js`.

* chore(tests): Passes tests for merge candidate
- Adjusted the interactive model default choice to be 'no change' instead of 'cancel setup'
- E2E script has been perfected and works as designed provided there are all provider API keys .env in the root
- Fixes the entire test suite to make sure it passes with the new architecture.
- Fixes dependency command to properly show there is a validation failure if there is one.
- Refactored config-manager.test.js mocking strategy and fixed assertions to read the real supported-models.json
- Fixed rule-transformer.test.js assertion syntax and transformation logic adjusting replacement for search which was too broad.
- Skip unstable tests in utils.test.js (log, readJSON, writeJSON error paths) due to SIGABRT crash. These tests trigger a native crash (SIGABRT), likely stemming from a conflict between internal chalk usage within the functions and Jest's test environment, possibly related to ESM module handling.

* chore(wtf): removes chai. not sure how that even made it in here. also removes duplicate test in scripts/.

* fix: ensure API key detection properly reads .env in MCP context

Problem:
- Task Master model configuration wasn't properly checking for API keys in the project's .env file when running through MCP
- The isApiKeySet function was only checking session.env and process.env but not inspecting the .env file directly
- This caused incorrect API key status reporting in MCP tools even when keys were properly set in .env

Solution:
- Modified resolveEnvVariable function in utils.js to properly read from .env file at projectRoot
- Updated isApiKeySet to correctly pass projectRoot to resolveEnvVariable
- Enhanced the key detection logic to have consistent behavior between CLI and MCP contexts
- Maintains the correct precedence: session.env → .env file → process.env

Testing:
- Verified working correctly with both MCP and CLI tools
- API keys properly detected in .env file in both contexts
- Deleted .cursor/mcp.json to confirm introspection of .env as fallback works
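The precedence described above (session.env → .env file → process.env) can be sketched as below. Function and parameter names follow the commit; the .env parsing details are assumptions, not the project's actual implementation.

```javascript
import fs from 'node:fs';
import path from 'node:path';

// Naive .env parser (assumption: simple KEY=value lines, optional quotes).
function parseDotEnv(text) {
	const out = {};
	for (const line of text.split('\n')) {
		const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
		if (m) out[m[1]] = m[2].replace(/^["']|["']$/g, '');
	}
	return out;
}

function resolveEnvVariable(key, session, projectRoot) {
	// 1. Session environment takes precedence (MCP context).
	if (session?.env?.[key]) return session.env[key];
	// 2. Fall back to the .env file at the project root.
	if (projectRoot) {
		const envPath = path.join(projectRoot, '.env');
		if (fs.existsSync(envPath)) {
			const vars = parseDotEnv(fs.readFileSync(envPath, 'utf8'));
			if (vars[key]) return vars[key];
		}
	}
	// 3. Finally, the process environment (CLI context).
	return process.env[key];
}
```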

* fix(update): pass projectRoot through update command flow

Modified ai-services-unified.js, update.js tool, and update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context.

* fix(analyze-complexity): pass projectRoot through analyze-complexity flow

Modified analyze-task-complexity.js core function, direct function, and analyze.js tool to correctly pass projectRoot. Fixed import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity.

* fix(add-task): pass projectRoot and fix logging/refs

Modified add-task core, direct function, and tool to pass projectRoot for .env API key fallback. Fixed logFn reference error and removed deprecated reportProgress call in core addTask function. Verified working.

* fix(parse-prd): pass projectRoot and fix schema/logging

Modified parse-prd core, direct function, and tool to pass projectRoot for .env API key fallback. Corrected Zod schema used in generateObjectService call. Fixed logFn reference error in core parsePRD. Updated unit test mock for utils.js.

* fix(update-task): pass projectRoot and adjust parsing

Modified update-task-by-id core, direct function, and tool to pass projectRoot. Reverted parsing logic in core function to prioritize `{...}` extraction, resolving parsing errors. Fixed ReferenceError by correctly destructuring projectRoot.

* fix(update-subtask): pass projectRoot and allow updating done subtasks

Modified update-subtask-by-id core, direct function, and tool to pass projectRoot for .env API key fallback. Removed check preventing appending details to completed subtasks.

* fix(mcp, expand): pass projectRoot through expand/expand-all flows

Problem: expand_task & expand_all MCP tools failed with .env keys due to missing projectRoot propagation for API key resolution. Also fixed a ReferenceError: wasSilent is not defined in expandTaskDirect.

Solution: Modified core logic, direct functions, and MCP tools for expand-task and expand-all to correctly destructure projectRoot from arguments and pass it down through the context object to the AI service call (generateTextService). Fixed wasSilent scope in expandTaskDirect.

Verification: Tested expand_task successfully in MCP using .env keys. Reviewed expand_all flow for correct projectRoot propagation.

* chore: prettier

* fix(expand-all): add projectRoot to expandAllTasksDirect invocation.

* fix(update-tasks): Improve AI response parsing for 'update' command

Refactors the JSON array parsing logic used by the update-tasks flow.

The previous logic primarily relied on extracting content from markdown
code blocks (json or javascript), which proved brittle when the AI
response included comments or non-JSON text within the block, leading to
parsing errors for the update command.

This change modifies the parsing strategy to first attempt extracting
content directly between the outermost '[' and ']' brackets. This is
more robust as it targets the expected array structure directly. If
bracket extraction fails, it falls back to looking for a strict json
code block, then prefix stripping, before attempting a raw parse.

This approach aligns with the successful parsing strategy already used
for single-object responses and resolves the parsing errors previously
observed with the update command.
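The parsing order described above can be sketched as a cascade of strategies. This is a hedged illustration of the approach, not the project's exact code; the function name is hypothetical.

```javascript
// Try parsing strategies in the order the commit describes:
// 1) outermost [...] slice, 2) ```json code block, 3) prefix stripping,
// 4) raw parse as a last resort.
function parseTaskArray(text) {
	const attempts = [];
	const start = text.indexOf('[');
	const end = text.lastIndexOf(']');
	if (start !== -1 && end > start) attempts.push(text.slice(start, end + 1));
	const block = text.match(/```json\s*([\s\S]*?)```/);
	if (block) attempts.push(block[1]);
	attempts.push(text.replace(/^[^[]*/, '')); // strip any non-array prefix
	attempts.push(text); // raw parse
	for (const candidate of attempts) {
		try {
			const parsed = JSON.parse(candidate);
			if (Array.isArray(parsed)) return parsed;
		} catch {
			// fall through to the next strategy
		}
	}
	throw new Error('No JSON array found in AI response');
}
```

Bracket extraction comes first because it survives comments or chatter inside a code fence, which is exactly what broke the old fence-first logic.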

* refactor(mcp): introduce withNormalizedProjectRoot HOF for path normalization

Added HOF to mcp tools utils to normalize projectRoot from args/session. Refactored get-task tool to use HOF. Updated relevant documentation.

* refactor(mcp): apply withNormalizedProjectRoot HOF to update tool

Problem: The update MCP tool previously handled project root acquisition and path resolution within its execute method, leading to potential inconsistencies and repetition.

Solution: Refactored the update tool to utilize the new withNormalizedProjectRoot Higher-Order Function (HOF).

Specific Changes:
- Imported the withNormalizedProjectRoot HOF.
- Updated the Zod schema for the projectRoot parameter to be optional, as the HOF handles deriving it from the session if not provided.
- Wrapped the entire execute function body with the withNormalizedProjectRoot HOF.
- Removed the manual project root resolution call from within the execute function body.
- Destructured projectRoot from the args object received by the wrapped execute function, ensuring it's the normalized path provided by the HOF.
- Used the normalized projectRoot variable for path resolution and when passing arguments onward.

This change standardizes project root handling for the update tool, simplifies its execute method, and ensures consistent path normalization. This serves as the pattern for refactoring other MCP tools.
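The HOF pattern described here can be sketched roughly as follows. The fallback order for deriving the root (explicit arg, then session, then cwd) is an assumption; only the wrapping-and-normalizing behavior is stated in the commits.

```javascript
import path from 'node:path';

// Wrap a tool's execute function so it always receives a normalized,
// absolute projectRoot, regardless of platform or how the root arrived.
function withNormalizedProjectRoot(execute) {
	return async (args, context) => {
		const raw = args.projectRoot ?? context?.session?.projectRoot ?? process.cwd();
		const projectRoot = path.resolve(raw); // absolute, platform-normalized
		return execute({ ...args, projectRoot }, context);
	};
}

// Example: a wrapped execute that just echoes the root it received.
const wrappedExecute = withNormalizedProjectRoot(async ({ projectRoot }) => projectRoot);
```

Because normalization happens in one place, each tool's execute body can simply destructure `projectRoot` and trust it.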

* fix: apply to all tools withNormalizedProjectRoot to fix projectRoot issues for linux and windows

* fix: add rest of tools that need wrapper

* chore: cleanup tools to stop using rootFolder and remove unused imports

* chore: more cleanup

* refactor: Improve update-subtask, consolidate utils, update config

This commit introduces several improvements and refactorings across MCP tools, core logic, and configuration.

**Major Changes:**

1.  **Refactor updateSubtaskById:**
    - Switched from generateTextService to generateObjectService for structured AI responses, using a Zod schema (subtaskSchema) for validation.
    - Revised prompts to have the AI generate relevant content based on user request and context (parent/sibling tasks), while explicitly preventing AI from handling timestamp/tag formatting.
    - Implemented **local timestamp generation (new Date().toISOString()) and formatting** (using <info added on ...> tags) within the function *after* receiving the AI response. This ensures reliable and correctly formatted details are appended.
    - Corrected logic to append only the locally formatted, AI-generated content block to the existing subtask.details.

2.  **Consolidate MCP Utilities:**
    - Moved/consolidated the withNormalizedProjectRoot HOF into mcp-server/src/tools/utils.js.
    - Updated MCP tools (like update-subtask.js) to import withNormalizedProjectRoot from the new location.

3.  **Refactor Project Initialization:**
    - Deleted the redundant mcp-server/src/core/direct-functions/initialize-project-direct.js file.
    - Updated mcp-server/src/core/task-master-core.js to import initializeProjectDirect from its correct location (./direct-functions/initialize-project.js).

**Other Changes:**

-   Updated .taskmasterconfig fallback model to claude-3-7-sonnet-20250219.
-   Clarified model cost representation in the models tool description (taskmaster.mdc and mcp-server/src/tools/models.js).
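The local timestamp formatting described in point 1 might look roughly like the sketch below. The exact wrapper format is an assumption inferred from the commit's "<info added on ...>" wording; only the fact that the timestamp is generated locally, after the AI response, is stated above.

```javascript
// Wrap AI-generated content in locally generated timestamp tags before
// appending it to subtask.details. The AI never produces the timestamp.
function formatAppendedDetails(aiContent) {
	const ts = new Date().toISOString(); // generated locally, never by the AI
	return `<info added on ${ts}>\n${aiContent.trim()}\n</info added on ${ts}>`;
}
```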

* fix: displayBanner logging when silentMode is active (#385)

* fix: improve error handling, test options, and model configuration

- Enhance error validation in parse-prd.js and update-tasks.js
- Fix bug where mcpLog was incorrectly passed as logWrapper
- Improve error messages and response formatting
- Add --skip-verification flag to E2E tests
- Update MCP server config that ships with init to match new API key structure
- Fix task force/append handling in parse-prd command
- Increase column width in update-tasks display

* chore: fixes parse prd to show loading indicator in cli.

* fix(parse-prd): suggested fix for mcpLog was incorrect. reverting to my previously working code.

* chore(init): No longer ships the README with task-master init (commented out for now). The init sequence now checks for task-master-ai instead of task-master-mcp, which should prevent it from needlessly adding a second task-master-mcp server entry to mcp.json, which a ton of people probably ran into.

* chore: restores 3.7 sonnet as the main role.

* fix(add/remove-dependency): dependency mcp tools were failing due to hard-coded tasks path in generate task files.

* chore: removes tasks json backup that was temporarily created.

* fix(next): adjusts mcp tool response to correctly return the next task/subtask. Also adds nextSteps to the next task response.

* chore: prettier

* chore: readme typos

* fix(config): restores sonnet 3.7 as default main role.

* Version Packages

* hotfix: move production package to "dependencies" (#399)

* Version Packages

* Fix: issues with 0.13.0 not working (#402)

* Exit prerelease mode and version packages

* hotfix: move production package to "dependencies"

* Enter prerelease mode and version packages

* Enter prerelease mode and version packages

* chore: cleanup

* chore: improve pre.json and add pre-release workflow

* chore: fix package.json

* chore: cleanup

* chore: improve pre-release workflow

* chore: allow github actions to commit

* extract fileMap and conversionConfig into brand profile

* extract into brand profile

* add windsurf profile

* add remove brand rules function

* fix regex

* add rules command to add/remove rules for a specific brand

* fix post processing for roo

* allow multiples

* add cursor profile

* update test for new structure

* move rules to assets

* use assets/rules for rules files

* use standardized setupMCP function

* fix formatting

* fix formatting

* add logging

* fix escapes

* default to cursor

* allow init with certain rulesets; no more .windsurfrules

* update docs

* update log msg

* fix formatting

* keep mdc extension for cursor

* don't rewrite .mdc to .md inside the files

* fix roo init (add modes)

* fix cursor init (don't use roo transformation by default)

* use more generic function names

* update docs

* fix formatting

* update function names

* add changeset

* add rules to mcp initialize project

* register tool with mcp server

* update docs

* add integration test

* fix cursor initialization

* rule selection

* fix formatting

* fix MCP - remove yes flag

* add import

* update roo tests

* add/update tests

* remove test

* add rules command test

* update MCP responses, centralize rules profiles & helpers

* fix logging and MCP response messages

* fix formatting

* incorrect test

* fix tests

* update fileMap

* fix file extension transformations

* fix formatting

* add rules command test

* test already covered

* fix formatting

* move renaming logic into profiles

* make sure dir is deleted (DS_Store)

* add confirmation for rules removal

* add force flag for rules remove

* use force flag for test

* remove yes parameter

* fix formatting

* import brand profiles from rule-transformer.js

* update comment

* add interactive rules setup

* optimize

* only copy rules specifically listed in fileMap

* update comment

* add cline profile

* add brandDir to remove ambiguity and support Cline

* specify whether to create mcp config and filename

* add mcpConfigName value for path

* fix formatting

* remove rules just for this repository - only include rules to be distributed

* update error message

* update "brand rules" to "rules"

* update to minor

* remove comment

* remove comments

* move to /src/utils

* optimize imports

* move rules-setup.js to /src/utils

* move rule-transformer.js to /src/utils

* move confirmation to /src/ui/confirm.js

* default to all rules

* use profile js for mcp config settings

* only run rules interactive setup if not provided via command line

* update comments

* initialize with all brands if nothing specified

* update var name

* clean up

* enumerate brands for brand rules

* update instructions

* add test to check for brand profiles

* fix quotes

* update semantics and terminology from 'brand rules' to 'rules profiles'

* fix formatting

* fix formatting

* update function name and remove copying of cursor rules, now handled by rules transformer

* update comment

* rename to mcp-config-setup.js

* use enums for rules actions

* add aggregate reporting for rules add command

* add missing log message

* use simpler path

* use base profile with modifications for each brand

* use displayName and don't select any defaults in setup

* add confirmation if removing ALL rules profiles, and add --force flag on rules remove

* Use profile-detection instead of rules-detection

* add newline at end of mcp config

* add proper formatting for mcp.json

* update rules

* update rules

* update rules

* add checks for other rules and other profile folder items before removing

* update confirmation for rules remove

* update docs

* update changeset

* fix for filepath at bottom of rule

* Update cline profile and add test; adjust other rules tests

* update changeset

* update changeset

* clarify init for all profiles if not specified

* update rule text

* revert text

* use "rule profiles" instead of "rules profiles"

* use standard tool mappings for windsurf

* add Trae support

* update changeset

* update wording

* update to 'rule profile'

* remove unneeded exports to optimize loc

* combine to /src/utils/profiles.js; add codex and claude code profiles

* rename function and add boxen

* add claude and codex integration tests

* organize tests into profiles folder

* mock fs for transformer tests

* update UI

* add cline and trae integration tests

* update test

* update function name

* update formatting

* Update change set with new profiles

* move profile integration tests to subdirectory

* properly create temp directories in /tmp folder

* fix formatting

* use taskmaster subfolder for the 2 TM rules

* update wording

* ensure subdirectory exists

* update rules from next

* update from next

* update taskmaster rule

* add details on new rules command and init

* fix mcp init

* fix MCP path to assets

* remove duplication

* remove duplication

* MCP server path fixes for rules command

* fix for CLI roo rules add/remove

* update tests

* fix formatting

* fix pattern for interactive rule profiles setup

* restore comments

* restore comments

* restore comments

* remove unused import, fix quotes

* add missing integration tests

* add VS Code profile and tests

* update docs and rules to include vscode profile

* add rules subdirectory support per-profile

* move profiles to /src

* fix formatting

* rename to remove ambiguity

* use --setup for rules interactive setup

* Fix Cursor deeplink installation with copy-paste instructions (#723)

* change roo boomerang to orchestrator; update tests that don't use modes

* fix newline

* chore: cleanup

---------

Co-authored-by: Eyal Toledano <eyal@microangel.so>
Co-authored-by: Yuval <yuvalbl@users.noreply.github.com>
Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com>
Co-authored-by: Eyal Toledano <eutait@gmail.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-23 08:54:29 +03:00
Riccardo (Ricky) Esclapon
eb8a3a85a1 Update SWE scores (#657) 2025-06-23 08:52:35 +03:00
Joe Danziger
59a4ec9e1a Default to Cursor profile for MCP init when no rules specified (#846) 2025-06-22 21:24:09 +02:00
Ralph Khreish
403d7b00ca chore: format fix 2025-06-21 23:35:23 +03:00
Ralph Khreish
b78614b44e Merge branch 'main' into next 2025-06-21 23:02:17 +03:00
github-actions[bot]
19d795d63f Version Packages 2025-06-21 23:01:03 +03:00
github-actions[bot]
07ec89ab17 docs: Auto-update and format models.md 2025-06-21 19:50:31 +00:00
Jitesh Thakur
eaa7f24280 docs: Add comprehensive Azure OpenAI configuration documentation (#837)
* docs: Add comprehensive Azure OpenAI configuration documentation

- Add detailed Azure OpenAI configuration section with prerequisites, authentication, and setup options
- Include both global and per-model baseURL configuration examples
- Add comprehensive troubleshooting guide for common Azure OpenAI issues
- Update environment variables section with Azure OpenAI examples
- Add Azure OpenAI models to all model tables (Main, Research, Fallback)
- Include prominent Azure configuration example in main documentation
- Fix azureBaseURL format to use correct Azure OpenAI endpoint structure

Addresses common Azure OpenAI setup challenges and provides clear guidance for new users.

* refactor: Move Azure models from docs/models.md to scripts/modules/supported-models.json

- Remove Azure model entries from documentation tables
- Add Azure provider section to supported-models.json with gpt-4o, gpt-4o-mini, and gpt-4-1
- Maintain consistency with existing model configuration structure
2025-06-21 21:50:20 +02:00
github-actions[bot]
b3d43c5992 docs: Auto-update and format models.md 2025-06-21 19:50:10 +00:00
Ralph Khreish
c5de4f8b68 feat: make more compatible with "o" family models (#839) 2025-06-21 21:50:00 +02:00
neno
b9299c5af0 feat: Claude Code slash commands for Task Master (#774)
* Fix Cursor deeplink installation with copy-paste instructions (#723)

* fix: expand-task (#755)

* docs: Update o3 model price (#751)

* docs: Auto-update and format models.md

* docs: Auto-update and format models.md

* feat: Add Claude Code task master commands

Adds Task Master slash commands for Claude Code under /project:tm/ namespace

---------

Co-authored-by: Joe Danziger <joe@ticc.net>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
2025-06-21 20:48:20 +02:00
github-actions[bot]
122a0465d8 chore: rc version bump 2025-06-21 02:43:13 +00:00
Joe Danziger
cf2c06697a Call rules interactive setup during init (#833) 2025-06-20 22:05:25 +02:00
Joe Danziger
727f1ec4eb store tasks in git by default (#835) 2025-06-20 18:49:38 +02:00
Ralph Khreish
648353794e fix: update task by id (#834) 2025-06-20 18:11:17 +02:00
Joe Danziger
a2a3229fd0 feat: Enhanced project initialization with Git worktree detection (#743)
* Fix Cursor deeplink installation with copy-paste instructions (#723)

* detect git worktree

* add changeset

* add aliases and git flags

* add changeset

* rename and update test

* add store tasks in git functionality

* update changeset

* fix newline

* remove unused import

* update command wording

* update command option text
2025-06-20 17:58:50 +02:00
Joe Danziger
b592dff8bc Rename Roo Code "Boomerang" role to "Orchestrator" (#831) 2025-06-20 17:20:14 +03:00
Ralph Khreish
e9d1bc2385 Feature/compatibleapisupport (#830)
* add compatible platform api support

* Adjust the code according to the suggestions

* Fully revised as requested: restored all required checks, improved compatibility, and converted all comments to English.

* feat: Add support for compatible API endpoints via baseURL

* chore: Add changeset for compatible API support

* chore: cleanup

* chore: improve changeset

* fix: package-lock.json

* fix: package-lock.json

---------

Co-authored-by: He-Xun <1226807142@qq.com>
2025-06-20 16:18:03 +02:00
V4G4X
030694bb96 readme: add troubleshooting note for MCP tools not working 2025-06-20 15:28:00 +02:00
github-actions[bot]
3e0f696c49 docs: Auto-update and format models.md 2025-06-20 13:25:33 +00:00
Ben Vargas
4b0c9d9af6 chore: add changeset for Claude Code provider feature 2025-06-20 16:25:22 +03:00
Ben Vargas
3fa91f56e5 fix(models): add missing --claude-code flag to models command
The models command was missing the --claude-code provider flag, preventing users from setting Claude Code models via CLI. While the backend already supported claude-code as a provider hint, there was no command-line flag to trigger it.

Changes:
- Added --claude-code option to models command alongside existing provider flags
- Updated provider flags validation to include claudeCode option
- Added claude-code to providerHint logic for all three model roles (main, research, fallback)
- Updated error message to include --claude-code in list of mutually exclusive flags
- Added example usage in help text

This allows users to properly set Claude Code models using commands like:
  task-master models --set-main sonnet --claude-code
  task-master models --set-main opus --claude-code

Without this flag, users would get "Model ID not found" errors when trying to set claude-code models, as the system couldn't determine the correct provider for generic model names like "sonnet" or "opus".
2025-06-20 16:25:22 +03:00
Ben Vargas
e69ac5d5cf style: apply biome formatting to test files 2025-06-20 16:25:22 +03:00
Ben Vargas
c60c9354a4 docs: add Claude Code support information to README
- Added Claude Code to the list of supported providers in Requirements section
- Noted that Claude Code requires no API key but needs Claude Code CLI
- Added example of configuring claude-code/sonnet model
- Created dedicated Claude Code Support section with key information
- Added link to detailed Claude Code setup documentation

This ensures users are aware of the Claude Code option as a no-API-key
alternative for using Claude models.
2025-06-20 16:25:22 +03:00
Ben Vargas
30b895be2c revert: remove maxTokens update functionality from init
This functionality was out of scope for the Claude Code provider PR.
The automatic updating of maxTokens values in config.json during
initialization is a general improvement that should be in a separate PR.

Additionally, Claude Code ignores maxTokens and temperature parameters
anyway, making this change irrelevant for the Claude Code integration.

Removed:
- scripts/modules/update-config-tokens.js
- Import and usage in scripts/init.js
2025-06-20 16:25:22 +03:00
Ben Vargas
9995075093 test: add comprehensive tests for ClaudeCodeProvider
Addresses code review feedback about missing automated tests for the ClaudeCodeProvider.

## Changes

- Added unit tests for ClaudeCodeProvider class covering constructor, validateAuth, and getClient methods
- Added unit tests for ClaudeCodeLanguageModel testing lazy loading behavior and error handling
- Added integration tests verifying optional dependency behavior when @anthropic-ai/claude-code is not installed

## Test Coverage

1. **Unit Tests**:
   - ClaudeCodeProvider: Basic functionality, no API key requirement, client creation
   - ClaudeCodeLanguageModel: Model initialization, lazy loading, error messages, warning generation

2. **Integration Tests**:
   - Optional dependency behavior when package is not installed
   - Clear error messages for users about missing package
   - Provider instantiation works but usage fails gracefully

All tests pass and provide comprehensive coverage for the claude-code provider implementation.
2025-06-20 16:25:22 +03:00
Ben Vargas
b62cb1bbe7 feat: make @anthropic-ai/claude-code an optional dependency
This change makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality.

Changes:
- Added @anthropic-ai/claude-code to optionalDependencies in package.json
- Implemented lazy loading in language-model.js to only import the SDK when actually used
- Updated documentation to explain the optional installation requirement
- Applied formatting fixes to ensure code consistency

Benefits:
- Users without Claude Code subscriptions don't need to install the dependency
- Reduces package size for users who don't use Claude Code
- Prevents installation failures if the package is unavailable
- Provides clear error messages when the package is needed but not installed

The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present.
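A hedged sketch of that lazy-loading pattern: the optional SDK is imported only when generation is actually attempted, so the provider can be instantiated without the package installed, and a clear error surfaces only on first use. The helper name and error text are illustrative.

```javascript
// Cache the dynamic import so the optional SDK is loaded at most once,
// and only when doGenerate()/doStream() actually needs it.
let sdkPromise = null;

function loadClaudeCodeSdk() {
	if (!sdkPromise) {
		sdkPromise = import('@anthropic-ai/claude-code').catch(() => {
			throw new Error(
				"Optional dependency '@anthropic-ai/claude-code' is not installed. " +
					'Install it with: npm install @anthropic-ai/claude-code'
			);
		});
	}
	return sdkPromise;
}
```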
2025-06-20 16:25:22 +03:00
Ben Vargas
7defcba465 fix(docs): correct invalid commands in claude-code usage examples
- Remove non-existent 'do', 'estimate', and 'analyze' commands
- Replace with actual Task Master commands: next, show, set-status
- Use correct syntax for parse-prd and analyze-complexity
2025-06-20 16:25:22 +03:00
Ben Vargas
3e838ed34b feat: add Claude Code provider support
Implements Claude Code as a new AI provider that uses the Claude Code CLI
without requiring API keys. This enables users to leverage Claude models
through their local Claude Code installation.

Key changes:
- Add complete AI SDK v1 implementation for Claude Code provider
  - Custom SDK with streaming/non-streaming support
  - Session management for conversation continuity
  - JSON extraction for object generation mode
  - Support for advanced settings (maxTurns, allowedTools, etc.)

- Integrate Claude Code into Task Master's provider system
  - Update ai-services-unified.js to handle keyless authentication
  - Add provider to supported-models.json with opus/sonnet models
  - Ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000)

- Fix maxTokens configuration issue
  - Add max_tokens property to getAvailableModels() output
  - Update setModel() to properly handle claude-code models
  - Create update-config-tokens.js utility for init process

- Add comprehensive documentation
  - User guide with configuration examples
  - Advanced settings explanation and future integration options

The implementation maintains full backward compatibility with existing
providers while adding seamless Claude Code support to all Task Master
commands.
2025-06-20 16:25:22 +03:00
ejones40
1b8c320c57 Add pyproject.toml as project root marker (#804)
* feat: Add pyproject.toml as project root marker
- Added 'pyproject.toml' to the project markers array in findProjectRoot()
- Enables Task Master to recognize Python projects using pyproject.toml
- Improves project root detection for modern Python development workflows
- Maintains compatibility with existing Node.js and Git-based detection

* chore: add changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-06-20 15:15:13 +02:00
Ralph Khreish
5da5b59bde Fix/expand command tag corruption (#827)
* fix(expand): Fix tag corruption in expand command
- Fix tag parameter passing through MCP expand-task flow
- Add tag parameter to direct function and tool registration
- Fix contextGatherer method name from _buildDependencyContext to _buildDependencyGraphs
- Add comprehensive test coverage for tag handling in expand-task
- Ensures tagged task structure is preserved during expansion
- Prevents corruption when tag is undefined

Fixes expand command causing tag corruption in tagged task lists. All existing tests pass and new test coverage added.

* test(e2e): Add comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact and feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/tasks/tasks.json location
- Fix config file check to use .taskmaster/config.json instead of .taskmasterconfig
- All tag corruption verification tests pass successfully in E2E test

* fix(changeset): Update E2E test improvements changeset to properly reflect tag corruption fix verification

* chore(changeset): combine duplicate changesets for expand tag corruption fix

Merge eighty-breads-wonder.md into bright-llamas-enter.md to consolidate
the expand command fix and its comprehensive E2E testing enhancements
into a single changeset entry.

* Delete .changeset/eighty-breads-wonder.md

* Version Packages

* chore: fix package.json

* fix(expand): Enhance context handling in expandAllTasks function
- Added `tag` to context destructuring for better context management.
- Updated `readJSON` call to include `contextTag` for improved data integrity.
- Ensured the correct tag is passed during task expansion to prevent tag corruption.

---------

Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-20 15:12:40 +02:00
Ralph Khreish
04f44a2d3d chore: fix package.json 2025-06-20 16:10:52 +03:00
github-actions[bot]
36fe838fd5 Version Packages 2025-06-20 16:10:52 +03:00
github-actions[bot]
415b1835d4 docs: Auto-update and format models.md 2025-06-20 13:05:31 +00:00
Ralph Khreish
78112277b3 fix(bedrock): improve AWS credential handling and add model definitions (#826)
* fix(bedrock): improve AWS credential handling and add model definitions

- Change error to warning when AWS credentials are missing in environment
- Allow fallback to system configuration (aws config files or instance profiles)
- Remove hardcoded region and profile parameters in Bedrock client
- Add Claude 3.7 Sonnet and DeepSeek R1 model definitions for Bedrock
- Update config manager to properly handle Bedrock provider

* chore: cleanup and format and small refactor

---------

Co-authored-by: Ray Krueger <raykrueger@gmail.com>
2025-06-20 15:05:20 +02:00
Ralph Khreish
2bb4260966 fix: Fix external provider support (#726) 2025-06-20 14:59:53 +02:00
Nathan Marley
3a2325a963 fix: switch to ESM export to avoid mixed format (#633)
* fix: switch to ESM export to avoid mixed format

The CLI entrypoint was using `module.exports` alongside ESM `import` statements,
resulting in an invalid mixed module format. Replaced the CommonJS export with
a proper ESM `export` to maintain consistency and prevent module resolution issues.

* chore: add changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-06-20 14:12:36 +02:00
Ralph Khreish
1bd6d4f246 fix: providers config for azure, bedrock, and vertex (#822)
* fix: providers config for azure, bedrock, and vertex

* chore: improve changelog

* chore: fix CI
2025-06-20 13:13:53 +02:00
Joe Danziger
a09a2d0967 feat: Flexible brand rules management (#460)
* chore(docs): update docs and rules related to model management.

* feat(ai): Add OpenRouter AI provider support

Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider). This allows users to configure and utilize models available through the OpenRouter platform.

- Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject).

- Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic.

- Verified config-manager.js handles OpenRouter API key checks correctly.

- Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or MCP models tool. Requires OPENROUTER_API_KEY.

- Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails due to lack of underlying tool support in the selected model/provider endpoint.

* feat(cli): Add --status/-s filter flag to show command and get-task MCP tool

Implements the ability to filter subtasks displayed by the `task-master show <id>` command using the `--status` (or `-s`) flag. This is also available in the MCP context.

- Modified `commands.js` to add the `--status` option to the `show` command definition.

- Updated `utils.js` (`findTaskById`) to handle the filtering logic and return original subtask counts/arrays when filtering.

- Updated `ui.js` (`displayTaskById`) to use the filtered subtasks for the table, display a summary line when filtering, and use the original subtask list for the progress bar calculation.

- Updated MCP `get_task` tool and `showTaskDirect` function to accept and pass the `status` parameter.

- Added changeset entry.

* fix(tasks): Improve next task logic to be subtask-aware

* fix(tasks): Enable removing multiple tasks/subtasks via comma-separated IDs

- Refactors the core `removeTask` function (`task-manager/remove-task.js`) to accept and iterate over comma-separated task/subtask IDs.

- Updates dependency cleanup and file regeneration logic to run once after processing all specified IDs.

- Adjusts the `remove-task` CLI command (`commands.js`) description and confirmation prompt to handle multiple IDs correctly.

- Fixes a bug in the CLI confirmation prompt where task/subtask titles were not being displayed correctly.

- Updates the `remove_task` MCP tool description to reflect the new multi-ID capability.

This addresses the previously known issue where only the first ID in a comma-separated list was processed.

Closes #140

* Update README.md (#342)

* Update Discord badge (#337)

* refactor(init): Improve robustness and dependencies; Update template deps for AI SDKs; Silence npm install in MCP; Improve conditional model setup logic; Refactor init.js flags; Tweak Getting Started text; Fix MCP server launch command; Update default model in config template

* Refactor: Improve MCP logging, update E2E & tests

Refactors MCP server logging and updates testing infrastructure.

- MCP Server:

  - Replaced manual logger wrappers with centralized `createLogWrapper` utility.

  - Updated direct function calls to use `{ session, mcpLog }` context.

  - Removed deprecated `model` parameter from analyze, expand-all, expand-task tools.

  - Adjusted MCP tool import paths and parameter descriptions.

- Documentation:

  - Modified `docs/configuration.md`.

  - Modified `docs/tutorial.md`.

- Testing:

  - E2E Script (`run_e2e.sh`):

    - Removed `set -e`.

    - Added LLM analysis function (`analyze_log_with_llm`) & integration.

    - Adjusted test run directory creation timing.

    - Added debug echo statements.

  - Deleted Unit Tests: Removed `ai-client-factory.test.js`, `ai-client-utils.test.js`, `ai-services.test.js`.

  - Modified Fixtures: Updated `scripts/task-complexity-report.json`.

- Dev Scripts:

  - Modified `scripts/dev.js`.

* chore(tests): Passes tests for merge candidate
- Adjusted the interactive model default choice to be 'no change' instead of 'cancel setup'
- E2E script has been perfected and works as designed provided there are all provider API keys .env in the root
- Fixes the entire test suite to make sure it passes with the new architecture.
- Fixes dependency command to properly show there is a validation failure if there is one.
- Refactored config-manager.test.js mocking strategy and fixed assertions to read the real supported-models.json
- Fixed rule-transformer.test.js assertion syntax and transformation logic adjusting replacement for search which was too broad.
- Skip unstable tests in utils.test.js (log, readJSON, writeJSON error paths) due to SIGABRT crash. These tests trigger a native crash (SIGABRT), likely stemming from a conflict between internal chalk usage within the functions and Jest's test environment, possibly related to ESM module handling.

* chore(wtf): removes chai. not sure how that even made it in here. also removes duplicate test in scripts/.

* fix: ensure API key detection properly reads .env in MCP context

Problem:
- Task Master model configuration wasn't properly checking for API keys in the project's .env file when running through MCP
- The isApiKeySet function was only checking session.env and process.env but not inspecting the .env file directly
- This caused incorrect API key status reporting in MCP tools even when keys were properly set in .env

Solution:
- Modified resolveEnvVariable function in utils.js to properly read from .env file at projectRoot
- Updated isApiKeySet to correctly pass projectRoot to resolveEnvVariable
- Enhanced the key detection logic to have consistent behavior between CLI and MCP contexts
- Maintains the correct precedence: session.env → .env file → process.env

Testing:
- Verified working correctly with both MCP and CLI tools
- API keys properly detected in .env file in both contexts
- Deleted .cursor/mcp.json to confirm introspection of .env as fallback works

* fix(update): pass projectRoot through update command flow

Modified ai-services-unified.js, update.js tool, and update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context.

* fix(analyze-complexity): pass projectRoot through analyze-complexity flow

Modified analyze-task-complexity.js core function, direct function, and analyze.js tool to correctly pass projectRoot. Fixed import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity.

* fix(add-task): pass projectRoot and fix logging/refs

Modified add-task core, direct function, and tool to pass projectRoot for .env API key fallback. Fixed logFn reference error and removed deprecated reportProgress call in core addTask function. Verified working.

* fix(parse-prd): pass projectRoot and fix schema/logging

Modified parse-prd core, direct function, and tool to pass projectRoot for .env API key fallback. Corrected Zod schema used in generateObjectService call. Fixed logFn reference error in core parsePRD. Updated unit test mock for utils.js.

* fix(update-task): pass projectRoot and adjust parsing

Modified update-task-by-id core, direct function, and tool to pass projectRoot. Reverted parsing logic in core function to prioritize `{...}` extraction, resolving parsing errors. Fixed ReferenceError by correctly destructuring projectRoot.

* fix(update-subtask): pass projectRoot and allow updating done subtasks

Modified update-subtask-by-id core, direct function, and tool to pass projectRoot for .env API key fallback. Removed check preventing appending details to completed subtasks.

* fix(mcp, expand): pass projectRoot through expand/expand-all flows

Problem: expand_task & expand_all MCP tools failed with .env keys due to missing projectRoot propagation for API key resolution. Also fixed a ReferenceError: wasSilent is not defined in expandTaskDirect.

Solution: Modified core logic, direct functions, and MCP tools for expand-task and expand-all to correctly destructure projectRoot from arguments and pass it down through the context object to the AI service call (generateTextService). Fixed wasSilent scope in expandTaskDirect.

Verification: Tested expand_task successfully in MCP using .env keys. Reviewed expand_all flow for correct projectRoot propagation.

* chore: prettier

* fix(expand-all): add projectRoot to expandAllTasksDirect invocation.

* fix(update-tasks): Improve AI response parsing for 'update' command

Refactors the JSON array parsing logic within the update-tasks flow.

The previous logic primarily relied on extracting content from markdown
code blocks (json or javascript), which proved brittle when the AI
response included comments or non-JSON text within the block, leading to
parsing errors for the update command.

This change modifies the parsing strategy to first attempt extracting
content directly between the outermost '[' and ']' brackets. This is
more robust as it targets the expected array structure directly. If
bracket extraction fails, it falls back to looking for a strict json
code block, then prefix stripping, before attempting a raw parse.

This approach aligns with the successful parsing strategy used for
single-object responses elsewhere in the codebase and resolves the
parsing errors previously observed with the update command.
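
The fallback chain described in this commit could look roughly like the following sketch (hypothetical code, assuming a plain-text AI response):

```javascript
// Hypothetical sketch of the parsing order: outermost-bracket extraction first,
// then a strict fenced json block (the fence marker is built dynamically so this
// example's own code fence stays intact), then prefix stripping, then a raw parse.
const FENCE = '`'.repeat(3);

function parseTaskArray(text) {
  const attempts = [];

  // 1. Content between the outermost '[' and ']'
  const start = text.indexOf('[');
  const end = text.lastIndexOf(']');
  if (start !== -1 && end > start) attempts.push(text.slice(start, end + 1));

  // 2. A strict fenced json code block
  const block = text.match(new RegExp(FENCE + 'json\\s*([\\s\\S]*?)' + FENCE));
  if (block) attempts.push(block[1]);

  // 3. Strip any leading non-JSON prefix, then 4. try the raw text
  attempts.push(text.replace(/^[^[{]*/, ''), text);

  for (const candidate of attempts) {
    try {
      const parsed = JSON.parse(candidate);
      if (Array.isArray(parsed)) return parsed;
    } catch {
      // fall through to the next strategy
    }
  }
  throw new Error('Could not parse a JSON array from the AI response');
}
```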

* refactor(mcp): introduce withNormalizedProjectRoot HOF for path normalization

Added HOF to mcp tools utils to normalize projectRoot from args/session. Refactored get-task tool to use HOF. Updated relevant documentation.

* refactor(mcp): apply withNormalizedProjectRoot HOF to update tool

Problem: The update MCP tool previously handled project root acquisition and path resolution within its execute method, leading to potential inconsistencies and repetition.

Solution: Refactored the update tool to utilize the new withNormalizedProjectRoot Higher-Order Function (HOF) from the MCP tool utilities.

Specific Changes:
- Imported the withNormalizedProjectRoot HOF.
- Updated the Zod schema for the projectRoot parameter to be optional, as the HOF handles deriving it from the session if not provided.
- Wrapped the entire execute function body with the withNormalizedProjectRoot HOF.
- Removed the manual project root resolution call from within the execute function body.
- Destructured projectRoot from the args object received by the wrapped execute function, ensuring it's the normalized path provided by the HOF.
- Used the normalized projectRoot variable when resolving file paths and when passing arguments to the direct function.

This change standardizes project root handling for the update tool, simplifies its execute method, and ensures consistent path normalization. This serves as the pattern for refactoring other MCP tools.

* fix: apply to all tools withNormalizedProjectRoot to fix projectRoot issues for linux and windows

* fix: add rest of tools that need wrapper

* chore: cleanup tools to stop using rootFolder and remove unused imports

* chore: more cleanup

* refactor: Improve update-subtask, consolidate utils, update config

This commit introduces several improvements and refactorings across MCP tools, core logic, and configuration.

**Major Changes:**

1.  **Refactor updateSubtaskById:**
    - Switched from generateTextService to generateObjectService for structured AI responses, using a Zod schema (subtaskSchema) for validation.
    - Revised prompts to have the AI generate relevant content based on user request and context (parent/sibling tasks), while explicitly preventing AI from handling timestamp/tag formatting.
    - Implemented **local timestamp generation (new Date().toISOString()) and formatting** (using <info added on ...> tags) within the function *after* receiving the AI response. This ensures reliable and correctly formatted details are appended.
    - Corrected logic to append only the locally formatted, AI-generated content block to the existing subtask.details.

2.  **Consolidate MCP Utilities:**
    - Moved/consolidated the withNormalizedProjectRoot HOF into mcp-server/src/tools/utils.js.
    - Updated MCP tools (like update-subtask.js) to import withNormalizedProjectRoot from the new location.

3.  **Refactor Project Initialization:**
    - Deleted the redundant mcp-server/src/core/direct-functions/initialize-project-direct.js file.
    - Updated mcp-server/src/core/task-master-core.js to import initializeProjectDirect from its correct location (./direct-functions/initialize-project.js).

**Other Changes:**

-   Updated .taskmasterconfig fallback model to claude-3-7-sonnet-20250219.
-   Clarified model cost representation in the models tool description (taskmaster.mdc and mcp-server/src/tools/models.js).
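
The local timestamp formatting described in point 1 could be sketched like this (hypothetical; the exact tag syntax is illustrative, not copied from the source):

```javascript
// Hypothetical sketch: wrap AI-generated content in locally generated
// <info added on ...> markers, then append it to the subtask's details.
function appendGeneratedDetails(subtask, aiContent) {
  const timestamp = new Date().toISOString(); // generated locally, never by the AI
  const block = `<info added on ${timestamp}>\n${aiContent.trim()}\n</info added on ${timestamp}>`;
  subtask.details = subtask.details ? `${subtask.details}\n${block}` : block;
  return subtask;
}
```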

* fix: displayBanner logging when silentMode is active (#385)

* fix: improve error handling, test options, and model configuration

- Enhance error validation in parse-prd.js and update-tasks.js
- Fix bug where mcpLog was incorrectly passed as logWrapper
- Improve error messages and response formatting
- Add --skip-verification flag to E2E tests
- Update MCP server config that ships with init to match new API key structure
- Fix task force/append handling in parse-prd command
- Increase column width in update-tasks display

* chore: fixes parse prd to show loading indicator in cli.

* fix(parse-prd): suggested fix for mcpLog was incorrect. reverting to my previously working code.

* chore(init): No longer ships readme with task-master init (commented out for now). No longer looks for task-master-mcp; instead checks for task-master-ai. This should prevent the init sequence from needlessly adding a duplicate task-master-mcp server entry to mcp.json, which a ton of people probably ran into.

* chore: restores 3.7 sonnet as the main role.

* fix(add/remove-dependency): dependency mcp tools were failing due to hard-coded tasks path in generate task files.

* chore: removes tasks json backup that was temporarily created.

* fix(next): adjusts mcp tool response to correctly return the next task/subtask. Also adds nextSteps to the next task response.

* chore: prettier

* chore: readme typos

* fix(config): restores sonnet 3.7 as default main role.

* Version Packages

* hotfix: move production package to "dependencies" (#399)

* Version Packages

* Fix: issues with 0.13.0 not working (#402)

* Exit prerelease mode and version packages

* hotfix: move production package to "dependencies"

* Enter prerelease mode and version packages

* Enter prerelease mode and version packages

* chore: cleanup

* chore: improve pre.json and add pre-release workflow

* chore: fix package.json

* chore: cleanup

* chore: improve pre-release workflow

* chore: allow github actions to commit

* extract fileMap and conversionConfig into brand profile

* extract into brand profile

* add windsurf profile

* add remove brand rules function

* fix regex

* add rules command to add/remove rules for a specific brand

* fix post processing for roo

* allow multiples

* add cursor profile

* update test for new structure

* move rules to assets

* use assets/rules for rules files

* use standardized setupMCP function

* fix formatting

* fix formatting

* add logging

* fix escapes

* default to cursor

* allow init with certain rulesets; no more .windsurfrules

* update docs

* update log msg

* fix formatting

* keep mdc extension for cursor

* don't rewrite .mdc to .md inside the files

* fix roo init (add modes)

* fix cursor init (don't use roo transformation by default)

* use more generic function names

* update docs

* fix formatting

* update function names

* add changeset

* add rules to mcp initialize project

* register tool with mcp server

* update docs

* add integration test

* fix cursor initialization

* rule selection

* fix formatting

* fix MCP - remove yes flag

* add import

* update roo tests

* add/update tests

* remove test

* add rules command test

* update MCP responses, centralize rules profiles & helpers

* fix logging and MCP response messages

* fix formatting

* incorrect test

* fix tests

* update fileMap

* fix file extension transformations

* fix formatting

* add rules command test

* test already covered

* fix formatting

* move renaming logic into profiles

* make sure dir is deleted (DS_Store)

* add confirmation for rules removal

* add force flag for rules remove

* use force flag for test

* remove yes parameter

* fix formatting

* import brand profiles from rule-transformer.js

* update comment

* add interactive rules setup

* optimize

* only copy rules specifically listed in fileMap

* update comment

* add cline profile

* add brandDir to remove ambiguity and support Cline

* specify whether to create mcp config and filename

* add mcpConfigName value for path

* fix formatting

* remove rules just for this repository - only include rules to be distributed

* update error message

* update "brand rules" to "rules"

* update to minor

* remove comment

* remove comments

* move to /src/utils

* optimize imports

* move rules-setup.js to /src/utils

* move rule-transformer.js to /src/utils

* move confirmation to /src/ui/confirm.js

* default to all rules

* use profile js for mcp config settings

* only run rules interactive setup if not provided via command line

* update comments

* initialize with all brands if nothing specified

* update var name

* clean up

* enumerate brands for brand rules

* update instructions

* add test to check for brand profiles

* fix quotes

* update semantics and terminology from 'brand rules' to 'rules profiles'

* fix formatting

* fix formatting

* update function name and remove copying of cursor rules, now handled by rules transformer

* update comment

* rename to mcp-config-setup.js

* use enums for rules actions

* add aggregate reporting for rules add command

* add missing log message

* use simpler path

* use base profile with modifications for each brand

* use displayName and don't select any defaults in setup

* add confirmation if removing ALL rules profiles, and add --force flag on rules remove

* Use profile-detection instead of rules-detection

* add newline at end of mcp config

* add proper formatting for mcp.json

* update rules

* update rules

* update rules

* add checks for other rules and other profile folder items before removing

* update confirmation for rules remove

* update docs

* update changeset

* fix for filepath at bottom of rule

* Update cline profile and add test; adjust other rules tests

* update changeset

* update changeset

* clarify init for all profiles if not specified

* update rule text

* revert text

* use "rule profiles" instead of "rules profiles"

* use standard tool mappings for windsurf

* add Trae support

* update changeset

* update wording

* update to 'rule profile'

* remove unneeded exports to optimize loc

* combine to /src/utils/profiles.js; add codex and claude code profiles

* rename function and add boxen

* add claude and codex integration tests

* organize tests into profiles folder

* mock fs for transformer tests

* update UI

* add cline and trae integration tests

* update test

* update function name

* update formatting

* Update change set with new profiles

* move profile integration tests to subdirectory

* properly create temp directories in /tmp folder

* fix formatting

* use taskmaster subfolder for the 2 TM rules

* update wording

* ensure subdirectory exists

* update rules from next

* update from next

* update taskmaster rule

* add details on new rules command and init

* fix mcp init

* fix MCP path to assets

* remove duplication

* remove duplication

* MCP server path fixes for rules command

* fix for CLI roo rules add/remove

* update tests

* fix formatting

* fix pattern for interactive rule profiles setup

* restore comments

* restore comments

* restore comments

* remove unused import, fix quotes

* add missing integration tests

* add VS Code profile and tests

* update docs and rules to include vscode profile

* add rules subdirectory support per-profile

* move profiles to /src

* fix formatting

* rename to remove ambiguity

* use --setup for rules interactive setup

* Fix Cursor deeplink installation with copy-paste instructions (#723)

* change roo boomerang to orchestrator; update tests that don't use modes

* fix newline

* chore: cleanup

---------

Co-authored-by: Eyal Toledano <eyal@microangel.so>
Co-authored-by: Yuval <yuvalbl@users.noreply.github.com>
Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com>
Co-authored-by: Eyal Toledano <eutait@gmail.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-20 10:09:36 +02:00
github-actions[bot]
02e0db09df docs: Auto-update and format models.md 2025-06-20 07:59:03 +00:00
Riccardo (Ricky) Esclapon
3bcce8d70e Update SWE scores (#657) 2025-06-20 09:58:53 +02:00
111 changed files with 8827 additions and 2344 deletions


@@ -0,0 +1,12 @@
---
"task-master-ai": patch
---
Fix expand command preserving tagged task structure and preventing data corruption
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Ensure projectRoot is a string (potential WSL fix)


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix Cursor deeplink installation by providing copy-paste instructions for GitHub compatibility


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix bulk update tag corruption in tagged task lists


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Include additional Anthropic models running on Bedrock in what is supported


@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---
Fix expand-task to use tag-specific complexity reports
The expand-task function now correctly uses complexity reports specific to the current tag context (e.g., task-complexity-report_feature-branch.json) instead of always using the default task-complexity-report.json file. This enables proper task expansion behavior when working with multiple tag contexts.


@@ -0,0 +1,8 @@
---
"task-master-ai": minor
---
Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
- For example:
- `OPENAI_BASE_URL`
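
A base-URL override of this kind could be resolved along these lines (hypothetical sketch; the real lookup lives in the provider configuration code):

```javascript
// Hypothetical sketch: map a provider name to its <PROVIDER>_BASE_URL
// environment variable, e.g. "openai" → OPENAI_BASE_URL.
function resolveBaseUrl(provider, defaults = {}, env = process.env) {
  const key = `${provider.toUpperCase().replace(/-/g, '_')}_BASE_URL`;
  return env[key] ?? defaults[provider];
}
```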


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Call rules interactive setup during init


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Update o3 model price


@@ -0,0 +1,17 @@
---
'task-master-ai': minor
---
Added comprehensive rule profile management:
**New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
**Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
**Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
**Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
**Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
**Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.
This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.
- Resolves #338


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix .gitignore missing trailing newline during project initialization


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Default to Cursor profile for MCP init when no rules specified


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Improves Amazon Bedrock support


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Adds support for gemini-cli as a provider, enabling free or subscription use through Google Accounts and paid Gemini Cloud Assist (GCA) subscriptions.


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix issues with task creation/update where subtasks were being created with id: <parent_task>.<subtask> instead of just id: <subtask>


@@ -0,0 +1,8 @@
---
"task-master-ai": patch
---
Fixes issue with expand CLI command "Complexity report not found"
- Closes #735
- Closes #728


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix data corruption issues by ensuring project root and tag information is properly passed through all command operations


@@ -0,0 +1,10 @@
---
"task-master-ai": minor
---
Make task-master more compatible with the "o" family models of OpenAI
Now works well with:
- o3
- o3-mini
- etc.

.changeset/pre.json

@@ -0,0 +1,23 @@
{
"mode": "exit",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.17.1"
},
"changesets": [
"bright-llamas-enter",
"huge-moose-prove",
"icy-dryers-hunt",
"lemon-deer-hide",
"modern-cats-pick",
"nasty-berries-tan",
"shy-groups-fly",
"sour-lions-check",
"spicy-teams-travel",
"stale-cameras-sin",
"swift-squids-sip",
"tiny-dogs-change",
"vast-plants-exist",
"wet-berries-dress"
]
}


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Add better support for python projects by adding `pyproject.toml` as a projectRoot marker


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Added option for the AI to determine the number of tasks required based entirely on complexity


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Store tasks in Git by default


@@ -0,0 +1,11 @@
---
"task-master-ai": patch
---
Improve provider validation system with clean constants structure
- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix weird `task-master init` bug when using in certain environments


@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Add advanced settings for Claude Code AI Provider


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Rename Roo Code Boomerang role to Orchestrator


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
fixes a critical issue where subtask generation fails on gemini-2.5-pro unless explicitly prompted to return the 'details' field as a string, not an object


@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Support custom response language


@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Improve mcp keys check in cursor


@@ -0,0 +1,22 @@
---
"task-master-ai": minor
---
- **Git Worktree Detection:**
- Now properly skips Git initialization when inside existing Git worktree
- Prevents accidental nested repository creation
- **Flag System Overhaul:**
- `--git`/`--no-git` controls repository initialization
- `--aliases`/`--no-aliases` consistently manages shell alias creation
- `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
- `--dry-run` accurately previews all initialization behaviors
- **GitTasks Functionality:**
- New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
- New `--no-git-tasks` flag excludes task files from Git (default behavior)
- Supports both CLI and MCP interfaces with proper parameter passing
**Implementation Details:**
- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734


@@ -0,0 +1,22 @@
---
"task-master-ai": minor
---
Add Claude Code provider support
Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.
Key features:
- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
- Lazy loading ensures the provider only loads when requested
- Full integration with existing Task Master commands and workflows
- Comprehensive test coverage for reliability
- New --claude-code flag for the models command
Users can now configure Claude Code models with:
task-master models --set-main sonnet --claude-code
task-master models --set-research opus --claude-code
The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix rules command to use reliable project root detection like other commands

View File

@@ -153,7 +153,7 @@ When users initialize Taskmaster on existing projects:
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master
The parse-prd's `--append` flag enables the user to parse multple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
### Workflow Transition Examples

View File

@@ -272,7 +272,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using `all`.) (CLI: `-i, --id <ids>`)
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

View File

@@ -29,6 +29,8 @@
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"userId": "1234567890",
"azureBaseURL": "https://your-endpoint.azure.com/",
"defaultTag": "master"
}
"defaultTag": "master",
"responseLanguage": "English"
},
"claudeCode": {}
}

View File

@@ -219,6 +219,110 @@
- [#789](https://github.com/eyaltoledano/claude-task-master/pull/789) [`8cde6c2`](https://github.com/eyaltoledano/claude-task-master/commit/8cde6c27087f401d085fe267091ae75334309d96) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix contextGatherer bug when adding a task `Cannot read properties of undefined (reading 'forEach')`
## 0.18.0-rc.0
### Minor Changes
- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
- For example:
- `OPENAI_BASE_URL`
- [#460](https://github.com/eyaltoledano/claude-task-master/pull/460) [`a09a2d0`](https://github.com/eyaltoledano/claude-task-master/commit/a09a2d0967a10276623e3f3ead3ed577c15ce62f) Thanks [@joedanz](https://github.com/joedanz)! - Added comprehensive rule profile management:
**New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
**Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
**Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
**Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
**Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
**Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
**Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.
This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.
- Resolves #338
- [#804](https://github.com/eyaltoledano/claude-task-master/pull/804) [`1b8c320`](https://github.com/eyaltoledano/claude-task-master/commit/1b8c320c570473082f1eb4bf9628bff66e799092) Thanks [@ejones40](https://github.com/ejones40)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker
- [#743](https://github.com/eyaltoledano/claude-task-master/pull/743) [`a2a3229`](https://github.com/eyaltoledano/claude-task-master/commit/a2a3229fd01e24a5838f11a3938a77250101e184) Thanks [@joedanz](https://github.com/joedanz)! - - **Git Worktree Detection:**
- Now properly skips Git initialization when inside existing Git worktree
- Prevents accidental nested repository creation
- **Flag System Overhaul:**
- `--git`/`--no-git` controls repository initialization
- `--aliases`/`--no-aliases` consistently manages shell alias creation
- `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
- `--dry-run` accurately previews all initialization behaviors
- **GitTasks Functionality:**
- New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
- New `--no-git-tasks` flag excludes task files from Git (default behavior)
- Supports both CLI and MCP interfaces with proper parameter passing
**Implementation Details:**
- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734
- [#829](https://github.com/eyaltoledano/claude-task-master/pull/829) [`4b0c9d9`](https://github.com/eyaltoledano/claude-task-master/commit/4b0c9d9af62d00359fca3f43283cf33223d410bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code provider support
Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.
Key features:
- New claude-code provider with support for opus and sonnet models
- No API key required - uses local Claude Code CLI installation
- Optional dependency - won't affect users who don't need Claude Code
- Lazy loading ensures the provider only loads when requested
- Full integration with existing Task Master commands and workflows
- Comprehensive test coverage for reliability
- New --claude-code flag for the models command
Users can now configure Claude Code models with:
task-master models --set-main sonnet --claude-code
task-master models --set-research opus --claude-code
The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.
### Patch Changes
- [#827](https://github.com/eyaltoledano/claude-task-master/pull/827) [`5da5b59`](https://github.com/eyaltoledano/claude-task-master/commit/5da5b59bdeeb634dcb3adc7a9bc0fc37e004fa0c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected
- [#833](https://github.com/eyaltoledano/claude-task-master/pull/833) [`cf2c066`](https://github.com/eyaltoledano/claude-task-master/commit/cf2c06697a0b5b952fb6ca4b3c923e9892604d08) Thanks [@joedanz](https://github.com/joedanz)! - Call rules interactive setup during init
- [#826](https://github.com/eyaltoledano/claude-task-master/pull/826) [`7811227`](https://github.com/eyaltoledano/claude-task-master/commit/78112277b3caa4539e6e29805341a944799fb0e7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improves Amazon Bedrock support
- [#834](https://github.com/eyaltoledano/claude-task-master/pull/834) [`6483537`](https://github.com/eyaltoledano/claude-task-master/commit/648353794eb60d11ffceda87370a321ad310fbd7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with task creation/update where subtasks are being created like id: <parent_task>.<subtask> instead of just id: <subtask>
- [#835](https://github.com/eyaltoledano/claude-task-master/pull/835) [`727f1ec`](https://github.com/eyaltoledano/claude-task-master/commit/727f1ec4ebcbdd82547784c4c113b666af7e122e) Thanks [@joedanz](https://github.com/joedanz)! - Store tasks in Git by default
- [#822](https://github.com/eyaltoledano/claude-task-master/pull/822) [`1bd6d4f`](https://github.com/eyaltoledano/claude-task-master/commit/1bd6d4f2468070690e152e6e63e15a57bc550d90) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure
- **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
- **Improved search UX**: Integrated search for better model discovery with real-time filtering
- **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`
- [#633](https://github.com/eyaltoledano/claude-task-master/pull/633) [`3a2325a`](https://github.com/eyaltoledano/claude-task-master/commit/3a2325a963fed82377ab52546eedcbfebf507a7e) Thanks [@nmarley](https://github.com/nmarley)! - Fix weird `task-master init` bug when using in certain environments
- [#831](https://github.com/eyaltoledano/claude-task-master/pull/831) [`b592dff`](https://github.com/eyaltoledano/claude-task-master/commit/b592dff8bc5c5d7966843fceaa0adf4570934336) Thanks [@joedanz](https://github.com/joedanz)! - Rename Roo Code Boomerang role to Orchestrator
- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve mcp keys check in cursor
## 0.17.1
### Patch Changes
- [#789](https://github.com/eyaltoledano/claude-task-master/pull/789) [`8cde6c2`](https://github.com/eyaltoledano/claude-task-master/commit/8cde6c27087f401d085fe267091ae75334309d96) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix contextGatherer bug when adding a task `Cannot read properties of undefined (reading 'forEach')`
## 0.17.0
### Minor Changes

View File

@@ -323,8 +323,11 @@ Here's a comprehensive reference of all available commands:
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>
# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10
# Limit the number of tasks generated (default is 10)
task-master parse-prd <prd-file.txt> --num-tasks=5
# Allow Task Master to determine the number of tasks based on complexity
task-master parse-prd <prd-file.txt> --num-tasks=0
```
### List Tasks
@@ -397,6 +400,9 @@ When marking a task as "done", all of its subtasks will automatically be marked
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>
# Expand a task with a dynamic number of subtasks (ignoring complexity report)
task-master expand --id=<id> --num=0
# Expand with additional context
task-master expand --id=<id> --prompt="<context>"

View File

@@ -3,7 +3,7 @@
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"maxTokens": 100000,
"temperature": 0.2
},
"research": {
@@ -14,9 +14,9 @@
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20240620",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 8192,
"temperature": 0.1
"temperature": 0.2
}
},
"global": {
@@ -28,6 +28,7 @@
"defaultTag": "master",
"ollamaBaseURL": "http://localhost:11434/api",
"azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com"
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"responseLanguage": "English"
}
}

View File

@@ -153,7 +153,7 @@ When users initialize Taskmaster on existing projects:
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master
The parse-prd's `--append` flag enables the user to parse multple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail.
### Workflow Transition Examples

View File

@@ -271,7 +271,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using `all`.) (CLI: `-i, --id <ids>`)
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

View File

@@ -8,8 +8,11 @@ Here's a comprehensive reference of all available commands:
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>
# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10
# Limit the number of tasks generated (default is 10)
task-master parse-prd <prd-file.txt> --num-tasks=5
# Allow Task Master to determine the number of tasks based on complexity
task-master parse-prd <prd-file.txt> --num-tasks=0
```
## List Tasks
@@ -128,6 +131,9 @@ When marking a task as "done", all of its subtasks will automatically be marked
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>
# Expand a task with a dynamic number of subtasks (ignoring complexity report)
task-master expand --id=<id> --num=0
# Expand with additional context
task-master expand --id=<id> --prompt="<context>"

View File

@@ -36,6 +36,7 @@ Taskmaster uses two primary methods for configuration:
"global": {
"logLevel": "info",
"debug": false,
"defaultNumTasks": 10,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"defaultTag": "master",
@@ -43,7 +44,8 @@ Taskmaster uses two primary methods for configuration:
"ollamaBaseURL": "http://localhost:11434/api",
"azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
"vertexProjectId": "your-gcp-project-id",
"vertexLocation": "us-central1"
"vertexLocation": "us-central1",
"responseLanguage": "English"
}
}
```

View File

@@ -64,100 +64,81 @@ task-master set-status --id=task-001 --status=in-progress
```bash
npm install @anthropic-ai/claude-code
```
3. No API key is required in your environment variables or MCP configuration
3. Run Claude Code for the first time and authenticate with your Anthropic account:
```bash
claude
```
4. No API key is required in your environment variables or MCP configuration
## Advanced Settings
The Claude Code SDK supports additional settings that provide fine-grained control over Claude's behavior. While these settings are implemented in the underlying SDK (`src/ai-providers/custom-sdk/claude-code/`), they are not currently exposed through Task Master's standard API due to architectural constraints.
The Claude Code SDK supports additional settings that provide fine-grained control over Claude's behavior. These settings are implemented in the underlying SDK (`src/ai-providers/custom-sdk/claude-code/`), and can be managed through Task Master's configuration file.
### Supported Settings
### Advanced Settings Usage
To configure settings for Claude Code, edit your `.taskmaster/config.json`:
The Claude Code settings can be specified globally in the `claudeCode` section of the config, or on a per-command basis in the `commandSpecific` section:
```javascript
const settings = {
// Maximum conversation turns Claude can make in a single request
maxTurns: 5,
// Custom system prompt to override Claude Code's default behavior
customSystemPrompt: "You are a helpful assistant focused on code quality",
// Permission mode for file system operations
permissionMode: 'default', // Options: 'default', 'restricted', 'permissive'
// Explicitly allow only certain tools
allowedTools: ['Read', 'LS'], // Claude can only read files and list directories
// Explicitly disallow certain tools
disallowedTools: ['Write', 'Edit'], // Prevent Claude from modifying files
// MCP servers for additional tool integrations
mcpServers: []
};
```
{
// "models" and "global" config...
### Current Limitations
"claudeCode": {
// Maximum conversation turns Claude can make in a single request
"maxTurns": 5,
// Custom system prompt to override Claude Code's default behavior
"customSystemPrompt": "You are a helpful assistant focused on code quality",
Task Master uses a standardized `BaseAIProvider` interface that only passes through common parameters (modelId, messages, maxTokens, temperature) to maintain consistency across all providers. The Claude Code advanced settings are implemented in the SDK but not accessible through Task Master's high-level commands.
// Append additional content to the system prompt
"appendSystemPrompt": "Always follow coding best practices",
// Permission mode for file system operations
"permissionMode": "default", // Options: "default", "acceptEdits", "plan", "bypassPermissions"
// Explicitly allow only certain tools
"allowedTools": ["Read", "LS"], // Claude can only read files and list directories
// Explicitly disallow certain tools
"disallowedTools": ["Write", "Edit"], // Prevent Claude from modifying files
// MCP servers for additional tool integrations
"mcpServers": {
"mcp-server-name": {
"command": "npx",
"args": ["-y", "mcp-serve"],
"env": {
// ...
}
}
}
},
### Future Integration Options
For developers who need to use these advanced settings, there are three potential approaches:
#### Option 1: Extend BaseAIProvider
Modify the core Task Master architecture to support provider-specific settings:
```javascript
// In BaseAIProvider
const result = await generateText({
model: client(params.modelId),
messages: params.messages,
maxTokens: params.maxTokens,
temperature: params.temperature,
...params.providerSettings // New: pass through provider-specific settings
});
```
#### Option 2: Override Methods in ClaudeCodeProvider
Create custom implementations that extract and use Claude-specific settings:
```javascript
// In ClaudeCodeProvider
async generateText(params) {
const { maxTurns, allowedTools, disallowedTools, ...baseParams } = params;
const client = this.getClient({
...baseParams,
settings: { maxTurns, allowedTools, disallowedTools }
});
// Continue with generation...
// Command-specific settings override global settings
"commandSpecific": {
"parse-prd": {
// Settings specific to the 'parse-prd' command
"maxTurns": 10,
"customSystemPrompt": "You are a task breakdown specialist"
},
"analyze-complexity": {
// Settings specific to the 'analyze-complexity' command
"maxTurns": 3,
"appendSystemPrompt": "Focus on identifying bottlenecks"
}
}
}
```
#### Option 3: Direct SDK Usage
For immediate access to advanced features, developers can use the Claude Code SDK directly:
```javascript
import { createClaudeCode } from 'task-master-ai/ai-providers/custom-sdk/claude-code';
const claude = createClaudeCode({
defaultSettings: {
maxTurns: 5,
allowedTools: ['Read', 'LS'],
disallowedTools: ['Write', 'Edit']
}
});
const model = claude('sonnet');
const result = await generateText({
model,
messages: [{ role: 'user', content: 'Analyze this code...' }]
});
```
- For a full list of Claude Code settings, see the [Claude Code Settings documentation](https://docs.anthropic.com/en/docs/claude-code/settings).
- For a full list of AI-powered command names, see this file: `src/constants/commands.js`
### Why These Settings Matter
- **maxTurns**: Useful for complex refactoring tasks that require multiple iterations
- **customSystemPrompt**: Allows specializing Claude for specific domains or coding standards
- **appendSystemPrompt**: Useful for enforcing coding standards or providing additional context
- **permissionMode**: Critical for security in production environments
- **allowedTools/disallowedTools**: Enable read-only analysis modes or restrict access to sensitive operations
- **mcpServers**: Future extensibility for custom tool integrations
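As a concrete illustration of how these settings combine, here is a minimal sketch of a `.taskmaster/config.json` fragment for a read-only analysis setup (the tool names and command name follow the examples above; treat the exact values as assumptions, not a verified configuration):

```json
{
  "claudeCode": {
    "maxTurns": 3,
    "permissionMode": "default",
    "allowedTools": ["Read", "LS"],
    "disallowedTools": ["Write", "Edit"]
  },
  "commandSpecific": {
    "analyze-complexity": {
      "appendSystemPrompt": "Report findings only; do not propose file edits"
    }
  }
}
```

Because command-specific settings override the global `claudeCode` block, the `analyze-complexity` run keeps the read-only tool restrictions while adding its own prompt suffix.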

View File

@@ -1,10 +1,17 @@
# Available Models as of June 21, 2025
# Available Models as of July 2, 2025
## Main Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
@@ -67,29 +74,46 @@
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
## Research Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | -------------------------- | --------- | ---------- | ----------- |
| bedrock | us.deepseek.r1-v1:0 | — | 1.35 | 5.4 |
| openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
| openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | -------------------------------------------- | --------- | ---------- | ----------- |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| bedrock     | us.deepseek.r1-v1:0                          | —         | 1.35       | 5.4         |
| openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
| openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
## Fallback Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
@@ -141,3 +165,5 @@
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |

View File

@@ -0,0 +1,169 @@
# Gemini CLI Provider
The Gemini CLI provider allows you to use Google's Gemini models through the Gemini CLI tool, leveraging your existing Gemini subscription and OAuth authentication.
## Why Use Gemini CLI?
The primary benefit of using the `gemini-cli` provider is to leverage your existing Gemini Pro subscription or OAuth authentication configured through the Gemini CLI. This is ideal for users who:
- Have an active Gemini subscription
- Want to use OAuth authentication instead of managing API keys
- Have already configured authentication via `gemini auth login`
## Installation
The provider is already included in Task Master. However, you need to install the Gemini CLI tool:
```bash
# Install gemini CLI globally
npm install -g @google/gemini-cli
```
## Authentication
### Primary Method: CLI Authentication (Recommended)
The Gemini CLI provider is designed to use your pre-configured OAuth authentication:
```bash
# Authenticate with your Google account
gemini auth login
```
This will open a browser window for OAuth authentication. Once authenticated, Task Master will automatically use these credentials when you select the `gemini-cli` provider.
### Alternative Method: API Key
While the primary use case is OAuth authentication, you can also use an API key if needed:
```bash
export GEMINI_API_KEY="your-gemini-api-key"
```
**Note:** If you want to use API keys, consider using the standard `google` provider instead, as `gemini-cli` is specifically designed for OAuth/subscription users.
## Configuration
Configure `gemini-cli` as a provider using the Task Master models command:
```bash
# Set gemini-cli as your main provider with gemini-2.5-pro
task-master models --set-main gemini-2.5-pro --gemini-cli
# Or use the faster gemini-2.5-flash model
task-master models --set-main gemini-2.5-flash --gemini-cli
```
You can also manually edit your `.taskmaster/config/providers.json`:
```json
{
"main": {
"provider": "gemini-cli",
"model": "gemini-2.5-flash"
}
}
```
### Available Models
The gemini-cli provider supports only two models:
- `gemini-2.5-pro` - High performance model (1M token context window, 65,536 max output tokens)
- `gemini-2.5-flash` - Fast, efficient model (1M token context window, 65,536 max output tokens)
## Usage Examples
### Basic Usage
Once authenticated with `gemini auth login` and configured, simply use Task Master as normal:
```bash
# The provider will automatically use your OAuth credentials
task-master new "Create a hello world function"
```
### With Specific Parameters
Configure model parameters in your providers.json:
```json
{
"main": {
"provider": "gemini-cli",
"model": "gemini-2.5-pro",
"parameters": {
"maxTokens": 65536,
"temperature": 0.7
}
}
}
```
### As Fallback Provider
Use gemini-cli as a fallback when your primary provider is unavailable:
```json
{
"main": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-latest"
},
"fallback": {
"provider": "gemini-cli",
"model": "gemini-2.5-flash"
}
}
```
## Troubleshooting
### "Authentication failed" Error
If you get an authentication error:
1. **Primary solution**: Run `gemini auth login` to authenticate with your Google account
2. **Check authentication status**: Run `gemini auth status` to verify you're logged in
3. **If using API key** (not recommended): Ensure `GEMINI_API_KEY` is set correctly
### "Model not found" Error
The gemini-cli provider only supports two models:
- `gemini-2.5-pro`
- `gemini-2.5-flash`
If you need other Gemini models, use the standard `google` provider with an API key instead.
### Gemini CLI Not Found
If you get a "gemini: command not found" error:
```bash
# Install the Gemini CLI globally
npm install -g @google/gemini-cli
# Verify installation
gemini --version
```
### Custom Endpoints
Custom endpoints can be configured if needed:
```json
{
"main": {
"provider": "gemini-cli",
"model": "gemini-2.5-pro",
"baseURL": "https://custom-endpoint.example.com"
}
}
```
## Important Notes
- **OAuth vs API Key**: This provider is specifically designed for users who want to use OAuth authentication via `gemini auth login`. If you prefer using API keys, consider using the standard `google` provider instead.
- **Limited Model Support**: Only `gemini-2.5-pro` and `gemini-2.5-flash` are available through gemini-cli.
- **Subscription Benefits**: Using OAuth authentication allows you to leverage any subscription benefits associated with your Google account.
- The provider uses the `ai-sdk-provider-gemini-cli` npm package internally.
- Supports all standard Task Master features: text generation, streaming, and structured object generation.

View File

@@ -20,6 +20,8 @@ import {
* @param {string} [args.status] - Status for new subtask (default: 'pending')
* @param {string} [args.dependencies] - Comma-separated list of dependency IDs
* @param {boolean} [args.skipGenerate] - Skip regenerating task files
* @param {string} [args.projectRoot] - Project root directory
* @param {string} [args.tag] - Tag for the task
* @param {Object} log - Logger object
* @returns {Promise<{success: boolean, data?: Object, error?: string}>}
*/
@@ -34,7 +36,9 @@ export async function addSubtaskDirect(args, log) {
details,
status,
dependencies: dependenciesStr,
skipGenerate
skipGenerate,
projectRoot,
tag
} = args;
try {
log.info(`Adding subtask with args: ${JSON.stringify(args)}`);
@@ -96,6 +100,8 @@ export async function addSubtaskDirect(args, log) {
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
const context = { projectRoot, tag };
// Case 1: Convert existing task to subtask
if (existingTaskId) {
log.info(`Converting task ${existingTaskId} to a subtask of ${parentId}`);
@@ -104,7 +110,8 @@ export async function addSubtaskDirect(args, log) {
parentId,
existingTaskId,
null,
generateFiles
generateFiles,
context
);
// Restore normal logging
@@ -135,7 +142,8 @@ export async function addSubtaskDirect(args, log) {
parentId,
null,
newSubtaskData,
generateFiles
generateFiles,
context
);
// Restore normal logging

View File

@@ -171,8 +171,8 @@ export async function expandTaskDirect(args, log, context = {}) {
task.subtasks = [];
}
// Save tasks.json with potentially empty subtasks array
writeJSON(tasksPath, data);
// Save tasks.json with potentially empty subtasks array and proper context
writeJSON(tasksPath, data, projectRoot, tag);
// Create logger wrapper using the utility
const mcpLog = createLogWrapper(log);

View File

@@ -13,12 +13,14 @@ import fs from 'fs';
* Fix invalid dependencies in tasks.json automatically
* @param {Object} args - Function arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {string} args.projectRoot - Project root directory
* @param {string} args.tag - Tag for the project
* @param {Object} log - Logger object
* @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
*/
export async function fixDependenciesDirect(args, log) {
// Destructure expected args
const { tasksJsonPath } = args;
const { tasksJsonPath, projectRoot, tag } = args;
try {
log.info(`Fixing invalid dependencies in tasks: ${tasksJsonPath}`);
@@ -51,8 +53,10 @@ export async function fixDependenciesDirect(args, log) {
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
// Call the original command function using the provided path
await fixDependenciesCommand(tasksPath);
// Call the original command function using the provided path and proper context
await fixDependenciesCommand(tasksPath, {
context: { projectRoot, tag }
});
// Restore normal logging
disableSilentMode();
@@ -61,7 +65,8 @@ export async function fixDependenciesDirect(args, log) {
success: true,
data: {
message: 'Dependencies fixed successfully',
tasksPath
tasksPath,
tag: tag || 'master'
}
};
} catch (error) {

View File

@@ -72,15 +72,16 @@ export async function initializeProjectDirect(args, log, context = {}) {
yes: true // Force yes mode
};
// Handle rules option just like CLI
// Handle rules option with MCP-specific defaults
if (Array.isArray(args.rules) && args.rules.length > 0) {
options.rules = args.rules;
options.rulesExplicitlyProvided = true;
log.info(`Including rules: ${args.rules.join(', ')}`);
} else {
options.rules = RULE_PROFILES;
log.info(
`No rule profiles specified, defaulting to: ${RULE_PROFILES.join(', ')}`
);
// For MCP initialization, default to Cursor profile only
options.rules = ['cursor'];
options.rulesExplicitlyProvided = true;
log.info(`No rule profiles specified, defaulting to: Cursor`);
}
log.info(`Initializing project with options: ${JSON.stringify(options)}`);

View File

@@ -109,7 +109,7 @@ export async function parsePRDDirect(args, log, context = {}) {
if (numTasksArg) {
numTasks =
typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
if (Number.isNaN(numTasks) || numTasks <= 0) {
if (Number.isNaN(numTasks) || numTasks < 0) {
// Ensure non-negative number (0 is a valid sentinel meaning auto-determine)
numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
logWrapper.warn(

View File

@@ -20,12 +20,13 @@ import {
* @param {Object} args - Command arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {string} args.id - The ID(s) of the task(s) or subtask(s) to remove (comma-separated for multiple).
* @param {string} [args.tag] - Tag context to operate on (defaults to current active tag).
* @param {Object} log - Logger object
* @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string } }
*/
export async function removeTaskDirect(args, log, context = {}) {
// Destructure expected args
const { tasksJsonPath, id, projectRoot } = args;
const { tasksJsonPath, id, projectRoot, tag } = args;
const { session } = context;
try {
// Check if tasksJsonPath was provided
@@ -56,17 +57,17 @@ export async function removeTaskDirect(args, log, context = {}) {
const taskIdArray = id.split(',').map((taskId) => taskId.trim());
log.info(
`Removing ${taskIdArray.length} task(s) with ID(s): ${taskIdArray.join(', ')} from ${tasksJsonPath}`
`Removing ${taskIdArray.length} task(s) with ID(s): ${taskIdArray.join(', ')} from ${tasksJsonPath}${tag ? ` in tag '${tag}'` : ''}`
);
// Validate all task IDs exist before proceeding
const data = readJSON(tasksJsonPath, projectRoot);
const data = readJSON(tasksJsonPath, projectRoot, tag);
if (!data || !data.tasks) {
return {
success: false,
error: {
code: 'INVALID_TASKS_FILE',
message: `No valid tasks found in ${tasksJsonPath}`
message: `No valid tasks found in ${tasksJsonPath}${tag ? ` for tag '${tag}'` : ''}`
}
};
}
@@ -80,71 +81,49 @@ export async function removeTaskDirect(args, log, context = {}) {
success: false,
error: {
code: 'INVALID_TASK_ID',
message: `The following tasks were not found: ${invalidTasks.join(', ')}`
message: `The following tasks were not found${tag ? ` in tag '${tag}'` : ''}: ${invalidTasks.join(', ')}`
}
};
}
// Remove tasks one by one
const results = [];
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
try {
for (const taskId of taskIdArray) {
try {
const result = await removeTask(tasksJsonPath, taskId);
results.push({
taskId,
success: true,
message: result.message,
removedTask: result.removedTask
});
log.info(`Successfully removed task: ${taskId}`);
} catch (error) {
results.push({
taskId,
success: false,
error: error.message
});
log.error(`Error removing task ${taskId}: ${error.message}`);
}
// Call removeTask with proper context including tag
const result = await removeTask(tasksJsonPath, id, {
projectRoot,
tag
});
if (!result.success) {
return {
success: false,
error: {
code: 'REMOVE_TASK_ERROR',
message: result.error || 'Failed to remove tasks'
}
};
}
log.info(`Successfully removed ${result.removedTasks.length} task(s)`);
return {
success: true,
data: {
totalTasks: taskIdArray.length,
successful: result.removedTasks.length,
failed: taskIdArray.length - result.removedTasks.length,
removedTasks: result.removedTasks,
message: result.message,
tasksPath: tasksJsonPath,
tag: data.tag || tag || 'master'
}
};
} finally {
// Restore normal logging
disableSilentMode();
}
// Check if all tasks were successfully removed
const successfulRemovals = results.filter((r) => r.success);
const failedRemovals = results.filter((r) => !r.success);
if (successfulRemovals.length === 0) {
// All removals failed
return {
success: false,
error: {
code: 'REMOVE_TASK_ERROR',
message: 'Failed to remove any tasks',
details: failedRemovals
.map((r) => `${r.taskId}: ${r.error}`)
.join('; ')
}
};
}
// At least some tasks were removed successfully
return {
success: true,
data: {
totalTasks: taskIdArray.length,
successful: successfulRemovals.length,
failed: failedRemovals.length,
results: results,
tasksPath: tasksJsonPath
}
};
} catch (error) {
// Ensure silent mode is disabled even if an outer error occurs
disableSilentMode();

View File

@@ -0,0 +1,40 @@
/**
* response-language.js
* Direct function for managing response language via MCP
*/
import { setResponseLanguage } from '../../../../scripts/modules/task-manager.js';
import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
export async function responseLanguageDirect(args, log, context = {}) {
const { projectRoot, language } = args;
const mcpLog = createLogWrapper(log);
log.info(
`Executing response-language_direct with args: ${JSON.stringify(args)}`
);
log.info(`Using project root: ${projectRoot}`);
try {
enableSilentMode();
return setResponseLanguage(language, {
mcpLog,
projectRoot
});
} catch (error) {
return {
success: false,
error: {
code: 'DIRECT_FUNCTION_ERROR',
message: error.message,
details: error.stack
}
};
} finally {
disableSilentMode();
}
}

View File

@@ -20,7 +20,8 @@ import { nextTaskDirect } from './next-task.js';
*/
export async function setTaskStatusDirect(args, log, context = {}) {
// Destructure expected args, including the resolved tasksJsonPath and projectRoot
const { tasksJsonPath, id, status, complexityReportPath, projectRoot } = args;
const { tasksJsonPath, id, status, complexityReportPath, projectRoot, tag } =
args;
const { session } = context;
try {
log.info(`Setting task status with args: ${JSON.stringify(args)}`);
@@ -69,11 +70,17 @@ export async function setTaskStatusDirect(args, log, context = {}) {
enableSilentMode(); // Enable silent mode before calling core function
try {
// Call the core function
await setTaskStatus(tasksPath, taskId, newStatus, {
mcpLog: log,
projectRoot,
session
});
await setTaskStatus(
tasksPath,
taskId,
newStatus,
{
mcpLog: log,
projectRoot,
session
},
tag
);
log.info(`Successfully set task ${taskId} status to ${newStatus}`);

View File

@@ -21,7 +21,7 @@ import {
*/
export async function updateTasksDirect(args, log, context = {}) {
const { session } = context;
const { from, prompt, research, tasksJsonPath, projectRoot } = args;
const { from, prompt, research, tasksJsonPath, projectRoot, tag } = args;
// Create the standard logger wrapper
const logWrapper = createLogWrapper(log);
@@ -75,7 +75,8 @@ export async function updateTasksDirect(args, log, context = {}) {
{
session,
mcpLog: logWrapper,
projectRoot
projectRoot,
tag
},
'json'
);

View File

@@ -52,6 +52,7 @@ export function registerAddSubtaskTool(server) {
.describe(
'Absolute path to the tasks file (default: tasks/tasks.json)'
),
tag: z.string().optional().describe('Tag context to operate on'),
skipGenerate: z
.boolean()
.optional()
@@ -89,7 +90,8 @@ export function registerAddSubtaskTool(server) {
status: args.status,
dependencies: args.dependencies,
skipGenerate: args.skipGenerate,
projectRoot: args.projectRoot
projectRoot: args.projectRoot,
tag: args.tag
},
log,
{ session }

View File

@@ -24,7 +24,8 @@ export function registerFixDependenciesTool(server) {
file: z.string().optional().describe('Absolute path to the tasks file'),
projectRoot: z
.string()
.describe('The directory of the project. Must be an absolute path.')
.describe('The directory of the project. Must be an absolute path.'),
tag: z.string().optional().describe('Tag context to operate on')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
@@ -46,7 +47,9 @@ export function registerFixDependenciesTool(server) {
const result = await fixDependenciesDirect(
{
tasksJsonPath: tasksJsonPath
tasksJsonPath: tasksJsonPath,
projectRoot: args.projectRoot,
tag: args.tag
},
log
);

View File

@@ -29,6 +29,7 @@ import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResponseLanguageTool } from './response-language.js';
import { registerAddTagTool } from './add-tag.js';
import { registerDeleteTagTool } from './delete-tag.js';
import { registerListTagsTool } from './list-tags.js';
@@ -83,6 +84,7 @@ export function registerTaskMasterTools(server) {
registerRemoveDependencyTool(server);
registerValidateDependenciesTool(server);
registerFixDependenciesTool(server);
registerResponseLanguageTool(server);
// Group 7: Tag Management
registerListTagsTool(server);

View File

@@ -51,7 +51,7 @@ export function registerInitializeProjectTool(server) {
.array(z.enum(RULE_PROFILES))
.optional()
.describe(
`List of rule profiles to include at initialization. If omitted, defaults to all available profiles. Available options: ${RULE_PROFILES.join(', ')}`
`List of rule profiles to include at initialization. If omitted, defaults to Cursor profile only. Available options: ${RULE_PROFILES.join(', ')}`
)
}),
execute: withNormalizedProjectRoot(async (args, context) => {

View File

@@ -43,7 +43,7 @@ export function registerParsePRDTool(server) {
.string()
.optional()
.describe(
'Approximate number of top-level tasks to generate (default: 10). As the agent, if you have enough information, ensure to enter a number of tasks that would logically scale with project complexity. Avoid entering numbers above 50 due to context window limitations.'
'Approximate number of top-level tasks to generate (default: 10). As the agent, if you have enough information, ensure to enter a number of tasks that would logically scale with project complexity. Setting to 0 will allow Taskmaster to determine the appropriate number of tasks based on the complexity of the PRD. Avoid entering numbers above 50 due to context window limitations.'
),
force: z
.boolean()

View File

@@ -33,7 +33,13 @@ export function registerRemoveTaskTool(server) {
confirm: z
.boolean()
.optional()
.describe('Whether to skip confirmation prompt (default: false)')
.describe('Whether to skip confirmation prompt (default: false)'),
tag: z
.string()
.optional()
.describe(
'Specify which tag context to operate on. Defaults to the current active tag.'
)
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
@@ -59,7 +65,8 @@ export function registerRemoveTaskTool(server) {
{
tasksJsonPath: tasksJsonPath,
id: args.id,
projectRoot: args.projectRoot
projectRoot: args.projectRoot,
tag: args.tag
},
log,
{ session }

View File

@@ -0,0 +1,46 @@
import { z } from 'zod';
import {
createErrorResponse,
handleApiResult,
withNormalizedProjectRoot
} from './utils.js';
import { responseLanguageDirect } from '../core/direct-functions/response-language.js';
export function registerResponseLanguageTool(server) {
server.addTool({
name: 'response-language',
description: 'Get or set the response language for the project',
parameters: z.object({
projectRoot: z
.string()
.describe(
'The root directory for the project. ALWAYS SET THIS TO THE PROJECT ROOT DIRECTORY. IF NOT SET, THE TOOL WILL NOT WORK.'
),
language: z
.string()
.describe(
'The response language to set, e.g. "中文", "English" or "español".'
)
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
log.info(
`Executing response-language tool with args: ${JSON.stringify(args)}`
);
const result = await responseLanguageDirect(
{
...args,
projectRoot: args.projectRoot
},
log,
{ session }
);
return handleApiResult(result, log, 'Error setting response language');
} catch (error) {
log.error(`Error in response-language tool: ${error.message}`);
return createErrorResponse(error.message);
}
})
});
}

View File

@@ -47,7 +47,8 @@ export function registerSetTaskStatusTool(server) {
),
projectRoot: z
.string()
.describe('The directory of the project. Must be an absolute path.')
.describe('The directory of the project. Must be an absolute path.'),
tag: z.string().optional().describe('Optional tag context to operate on')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
@@ -86,7 +87,8 @@ export function registerSetTaskStatusTool(server) {
id: args.id,
status: args.status,
complexityReportPath,
projectRoot: args.projectRoot
projectRoot: args.projectRoot,
tag: args.tag
},
log,
{ session }

View File

@@ -43,11 +43,12 @@ export function registerUpdateTool(server) {
.optional()
.describe(
'The directory of the project. (Optional, usually from session)'
)
),
tag: z.string().optional().describe('Tag context to operate on')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
const toolName = 'update';
const { from, prompt, research, file, projectRoot } = args;
const { from, prompt, research, file, projectRoot, tag } = args;
try {
log.info(
@@ -71,7 +72,8 @@ export function registerUpdateTool(server) {
from: from,
prompt: prompt,
research: research,
projectRoot: projectRoot
projectRoot: projectRoot,
tag: tag
},
log,
{ session }

package-lock.json (generated): 5516 lines changed

File diff suppressed because it is too large

View File

@@ -68,6 +68,7 @@
"gradient-string": "^3.0.0",
"helmet": "^8.1.0",
"inquirer": "^12.5.0",
"jsonc-parser": "^3.3.1",
"jsonwebtoken": "^9.0.2",
"lru-cache": "^10.2.0",
"ollama-ai-provider": "^1.2.0",
@@ -77,7 +78,8 @@
"zod": "^3.23.8"
},
"optionalDependencies": {
"@anthropic-ai/claude-code": "^1.0.25"
"@anthropic-ai/claude-code": "^1.0.25",
"ai-sdk-provider-gemini-cli": "^0.0.3"
},
"engines": {
"node": ">=18.0.0"

View File

@@ -30,6 +30,7 @@ import {
convertAllRulesToProfileRules,
getRulesProfile
} from '../src/utils/rule-transformer.js';
import { updateConfigMaxTokens } from './modules/update-config-tokens.js';
import { execSync } from 'child_process';
import {
@@ -623,6 +624,14 @@ function createProjectStructure(
}
);
// Update config.json with correct maxTokens values from supported-models.json
const configPath = path.join(targetDir, TASKMASTER_CONFIG_FILE);
if (updateConfigMaxTokens(configPath)) {
log('info', 'Updated config with correct maxTokens values');
} else {
log('warn', 'Could not update maxTokens in config');
}
// Copy .gitignore with GitTasks preference
try {
const gitignoreTemplatePath = path.join(
@@ -757,6 +766,44 @@ function createProjectStructure(
}
// =====================================
// === Add Response Language Step ===
if (!isSilentMode() && !dryRun && !options?.yes) {
console.log(
boxen(chalk.cyan('Configuring Response Language...'), {
padding: 0.5,
margin: { top: 1, bottom: 0.5 },
borderStyle: 'round',
borderColor: 'blue'
})
);
log(
'info',
'Running interactive response language setup. Please input your preferred language.'
);
try {
execSync('npx task-master lang --setup', {
stdio: 'inherit',
cwd: targetDir
});
log('success', 'Response Language configured.');
} catch (error) {
log('error', 'Failed to configure response language:', error.message);
log('warn', 'You may need to run "task-master lang --setup" manually.');
}
} else if (isSilentMode() && !dryRun) {
log(
'info',
'Skipping interactive response language setup in silent (MCP) mode.'
);
log(
'warn',
'Please configure response language using "task-master models --set-response-language" or the "models" MCP tool.'
);
} else if (dryRun) {
log('info', 'DRY RUN: Skipping interactive response language setup.');
}
// =====================================
// === Add Model Configuration Step ===
if (!isSilentMode() && !dryRun && !options?.yes) {
console.log(

View File

@@ -15,6 +15,7 @@ import {
getFallbackProvider,
getFallbackModelId,
getParametersForRole,
getResponseLanguage,
getUserId,
MODEL_MAP,
getDebugFlag,
@@ -24,7 +25,8 @@ import {
getAzureBaseURL,
getBedrockBaseURL,
getVertexProjectId,
getVertexLocation
getVertexLocation,
providersWithoutApiKeys
} from './config-manager.js';
import {
log,
@@ -45,7 +47,8 @@ import {
BedrockAIProvider,
AzureProvider,
VertexAIProvider,
ClaudeCodeProvider
ClaudeCodeProvider,
GeminiCliProvider
} from '../../src/ai-providers/index.js';
// Create provider instances
@@ -60,7 +63,8 @@ const PROVIDERS = {
bedrock: new BedrockAIProvider(),
azure: new AzureProvider(),
vertex: new VertexAIProvider(),
'claude-code': new ClaudeCodeProvider()
'claude-code': new ClaudeCodeProvider(),
'gemini-cli': new GeminiCliProvider()
};
// Helper function to get cost for a specific model
@@ -232,6 +236,12 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
return 'claude-code-no-key-required';
}
// Gemini CLI can work without an API key (uses CLI auth)
if (providerName === 'gemini-cli') {
const apiKey = resolveEnvVariable('GEMINI_API_KEY', session, projectRoot);
return apiKey || 'gemini-cli-no-key-required';
}
const keyMap = {
openai: 'OPENAI_API_KEY',
anthropic: 'ANTHROPIC_API_KEY',
@@ -244,7 +254,8 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
ollama: 'OLLAMA_API_KEY',
bedrock: 'AWS_ACCESS_KEY_ID',
vertex: 'GOOGLE_API_KEY',
'claude-code': 'CLAUDE_CODE_API_KEY' // Not actually used, but included for consistency
'claude-code': 'CLAUDE_CODE_API_KEY', // Not actually used, but included for consistency
'gemini-cli': 'GEMINI_API_KEY'
};
const envVarName = keyMap[providerName];
@@ -257,7 +268,7 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
const apiKey = resolveEnvVariable(envVarName, session, projectRoot);
// Special handling for providers that can use alternative auth
if (providerName === 'ollama' || providerName === 'bedrock') {
if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
return apiKey || null;
}
@@ -457,7 +468,7 @@ async function _unifiedServiceRunner(serviceType, params) {
}
// Check API key if needed
if (providerName?.toLowerCase() !== 'ollama') {
if (!providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
log(
'warn',
@@ -541,9 +552,12 @@ async function _unifiedServiceRunner(serviceType, params) {
}
const messages = [];
if (systemPrompt) {
messages.push({ role: 'system', content: systemPrompt });
}
const responseLanguage = getResponseLanguage(effectiveProjectRoot);
const systemPromptWithLanguage = `${systemPrompt} \n\n Always respond in ${responseLanguage}.`;
messages.push({
role: 'system',
content: systemPromptWithLanguage.trim()
});
// IN THE FUTURE WHEN DOING CONTEXT IMPROVEMENTS
// {

View File

@@ -42,7 +42,8 @@ import {
findTaskById,
taskExists,
moveTask,
migrateProject
migrateProject,
setResponseLanguage
} from './task-manager.js';
import {
@@ -69,7 +70,9 @@ import {
ConfigurationError,
isConfigFilePresent,
getAvailableModels,
getBaseUrlForRole
getBaseUrlForRole,
getDefaultNumTasks,
getDefaultSubtasks
} from './config-manager.js';
import { CUSTOM_PROVIDERS } from '../../src/constants/providers.js';
@@ -803,7 +806,11 @@ function registerCommands(programInstance) {
'Path to the PRD file (alternative to positional argument)'
)
.option('-o, --output <file>', 'Output file path', TASKMASTER_TASKS_FILE)
.option('-n, --num-tasks <number>', 'Number of tasks to generate', '10')
.option(
'-n, --num-tasks <number>',
'Number of tasks to generate',
getDefaultNumTasks()
)
.option('-f, --force', 'Skip confirmation when overwriting existing tasks')
.option(
'--append',
@@ -3421,6 +3428,10 @@ ${result.result}
'--vertex',
'Allow setting a custom Vertex AI model ID (use with --set-*) '
)
.option(
'--gemini-cli',
'Allow setting a Gemini CLI model ID (use with --set-*)'
)
.addHelpText(
'after',
`
@@ -3435,6 +3446,7 @@ Examples:
$ task-master models --set-main sonnet --claude-code # Set Claude Code model for main role
$ task-master models --set-main gpt-4o --azure # Set custom Azure OpenAI model for main role
$ task-master models --set-main claude-3-5-sonnet@20241022 --vertex # Set custom Vertex AI model for main role
$ task-master models --set-main gemini-2.5-pro --gemini-cli # Set Gemini CLI model for main role
$ task-master models --setup # Run interactive setup`
)
.action(async (options) => {
@@ -3448,12 +3460,13 @@ Examples:
options.openrouter,
options.ollama,
options.bedrock,
options.claudeCode
options.claudeCode,
options.geminiCli
].filter(Boolean).length;
if (providerFlags > 1) {
console.error(
chalk.red(
'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code) simultaneously.'
'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code, --gemini-cli) simultaneously.'
)
);
process.exit(1);
@@ -3497,7 +3510,9 @@ Examples:
? 'bedrock'
: options.claudeCode
? 'claude-code'
: undefined
: options.geminiCli
? 'gemini-cli'
: undefined
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
@@ -3521,7 +3536,9 @@ Examples:
? 'bedrock'
: options.claudeCode
? 'claude-code'
: undefined
: options.geminiCli
? 'gemini-cli'
: undefined
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
@@ -3547,7 +3564,9 @@ Examples:
? 'bedrock'
: options.claudeCode
? 'claude-code'
: undefined
: options.geminiCli
? 'gemini-cli'
: undefined
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
@@ -3643,6 +3662,63 @@ Examples:
return; // Stop execution here
});
// response-language command
programInstance
.command('lang')
.description('Manage response language settings')
.option('--response <response_language>', 'Set the response language')
.option('--setup', 'Run interactive setup to configure response language')
.action(async (options) => {
const projectRoot = findProjectRoot(); // Find project root for context
const { response, setup } = options;
console.log(
chalk.blue('Response language set to:', JSON.stringify(options))
);
let responseLanguage = response || 'English';
if (setup) {
console.log(
chalk.blue('Starting interactive response language setup...')
);
try {
const userResponse = await inquirer.prompt([
{
type: 'input',
name: 'responseLanguage',
message: 'Input your preferred response language',
default: 'English'
}
]);
console.log(
chalk.blue(
'Response language set to:',
userResponse.responseLanguage
)
);
responseLanguage = userResponse.responseLanguage;
} catch (setupError) {
console.error(
chalk.red('\nInteractive setup failed unexpectedly:'),
setupError.message
);
}
}
const result = setResponseLanguage(responseLanguage, {
projectRoot
});
if (result.success) {
console.log(chalk.green(`${result.data.message}`));
} else {
console.error(
chalk.red(
`❌ Error setting response language: ${result.error.message}`
)
);
}
});
// move-task command
programInstance
.command('move')
@@ -3810,7 +3886,11 @@ Examples:
$ task-master rules --${RULES_SETUP_ACTION} # Interactive setup to select rule profiles`
)
.action(async (action, profiles, options) => {
const projectDir = process.cwd();
const projectRoot = findProjectRoot();
if (!projectRoot) {
console.error(chalk.red('Error: Could not find project root.'));
process.exit(1);
}
/**
* 'task-master rules --setup' action:
@@ -3857,7 +3937,7 @@ Examples:
const profileConfig = getRulesProfile(profile);
const addResult = convertAllRulesToProfileRules(
projectDir,
projectRoot,
profileConfig
);
@@ -3903,8 +3983,8 @@ Examples:
let confirmed = true;
if (!options.force) {
// Check if this removal would leave no profiles remaining
if (wouldRemovalLeaveNoProfiles(projectDir, expandedProfiles)) {
const installedProfiles = getInstalledProfiles(projectDir);
if (wouldRemovalLeaveNoProfiles(projectRoot, expandedProfiles)) {
const installedProfiles = getInstalledProfiles(projectRoot);
confirmed = await confirmRemoveAllRemainingProfiles(
expandedProfiles,
installedProfiles
@@ -3934,12 +4014,12 @@ Examples:
if (action === RULES_ACTIONS.ADD) {
console.log(chalk.blue(`Adding rules for profile: ${profile}...`));
const addResult = convertAllRulesToProfileRules(
projectDir,
projectRoot,
profileConfig
);
if (typeof profileConfig.onAddRulesProfile === 'function') {
const assetsDir = path.join(process.cwd(), 'assets');
profileConfig.onAddRulesProfile(projectDir, assetsDir);
const assetsDir = path.join(projectRoot, 'assets');
profileConfig.onAddRulesProfile(projectRoot, assetsDir);
}
console.log(
chalk.blue(`Completed adding rules for profile: ${profile}`)
@@ -3955,7 +4035,7 @@ Examples:
console.log(chalk.green(generateProfileSummary(profile, addResult)));
} else if (action === RULES_ACTIONS.REMOVE) {
console.log(chalk.blue(`Removing rules for profile: ${profile}...`));
const result = removeProfileRules(projectDir, profileConfig);
const result = removeProfileRules(projectRoot, profileConfig);
removalResults.push(result);
console.log(
chalk.green(generateProfileRemovalSummary(profile, result))

View File

@@ -1,8 +1,9 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { z } from 'zod';
import { fileURLToPath } from 'url';
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
import { log, findProjectRoot, resolveEnvVariable, isEmpty } from './utils.js';
import { LEGACY_CONFIG_FILE } from '../../src/constants/paths.js';
import { findConfigPath } from '../../src/utils/path-utils.js';
import {
@@ -11,6 +12,7 @@ import {
CUSTOM_PROVIDERS_ARRAY,
ALL_PROVIDERS
} from '../../src/constants/providers.js';
import { AI_COMMAND_NAMES } from '../../src/constants/commands.js';
// Calculate __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
@@ -61,12 +63,15 @@ const DEFAULTS = {
global: {
logLevel: 'info',
debug: false,
defaultNumTasks: 10,
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseURL: 'http://localhost:11434/api',
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
}
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com',
responseLanguage: 'English'
},
claudeCode: {}
};
// --- Internal Config Loading ---
@@ -127,7 +132,8 @@ function _loadAndValidateConfig(explicitRoot = null) {
? { ...defaults.models.fallback, ...parsedConfig.models.fallback }
: { ...defaults.models.fallback }
},
global: { ...defaults.global, ...parsedConfig?.global }
global: { ...defaults.global, ...parsedConfig?.global },
claudeCode: { ...defaults.claudeCode, ...parsedConfig?.claudeCode }
};
configSource = `file (${configPath})`; // Update source info
@@ -170,6 +176,9 @@ function _loadAndValidateConfig(explicitRoot = null) {
config.models.fallback.provider = undefined;
config.models.fallback.modelId = undefined;
}
if (config.claudeCode && !isEmpty(config.claudeCode)) {
config.claudeCode = validateClaudeCodeSettings(config.claudeCode);
}
} catch (error) {
// Use console.error for actual errors during parsing
console.error(
@@ -277,6 +286,83 @@ function validateProviderModelCombination(providerName, modelId) {
);
}
/**
* Validates Claude Code AI provider custom settings
* @param {object} settings The settings to validate
* @returns {object} The validated settings
*/
function validateClaudeCodeSettings(settings) {
// Define the base settings schema without commandSpecific first
const BaseSettingsSchema = z.object({
maxTurns: z.number().int().positive().optional(),
customSystemPrompt: z.string().optional(),
appendSystemPrompt: z.string().optional(),
permissionMode: z
.enum(['default', 'acceptEdits', 'plan', 'bypassPermissions'])
.optional(),
allowedTools: z.array(z.string()).optional(),
disallowedTools: z.array(z.string()).optional(),
mcpServers: z
.record(
z.string(),
z.object({
type: z.enum(['stdio', 'sse']).optional(),
command: z.string(),
args: z.array(z.string()).optional(),
env: z.record(z.string()).optional(),
url: z.string().url().optional(),
headers: z.record(z.string()).optional()
})
)
.optional()
});
// Define CommandSpecificSchema using the base schema
const CommandSpecificSchema = z.record(
z.enum(AI_COMMAND_NAMES),
BaseSettingsSchema
);
// Define the full settings schema with commandSpecific
const SettingsSchema = BaseSettingsSchema.extend({
commandSpecific: CommandSpecificSchema.optional()
});
let validatedSettings = {};
try {
validatedSettings = SettingsSchema.parse(settings);
} catch (error) {
console.warn(
chalk.yellow(
`Warning: Invalid Claude Code settings in config: ${error.message}. Falling back to default.`
)
);
validatedSettings = {};
}
return validatedSettings;
}
// --- Claude Code Settings Getters ---
function getClaudeCodeSettings(explicitRoot = null, forceReload = false) {
const config = getConfig(explicitRoot, forceReload);
// Ensure Claude Code defaults are applied if Claude Code section is missing
return { ...DEFAULTS.claudeCode, ...(config?.claudeCode || {}) };
}
function getClaudeCodeSettingsForCommand(
commandName,
explicitRoot = null,
forceReload = false
) {
const settings = getClaudeCodeSettings(explicitRoot, forceReload);
const commandSpecific = settings?.commandSpecific || {};
return { ...settings, ...commandSpecific[commandName] };
}
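The command-specific lookup is a shallow merge: per-command values win over the top-level Claude Code settings, and everything else passes through unchanged. A minimal standalone sketch of that precedence (not importing `config-manager.js`; the `parse-prd` override is hypothetical):

```javascript
// Sketch of the merge precedence used by getClaudeCodeSettingsForCommand.
// Per-command overrides replace top-level values of the same name.
function settingsForCommand(settings, commandName) {
	const commandSpecific = settings?.commandSpecific || {};
	return { ...settings, ...commandSpecific[commandName] };
}

const claudeCode = {
	maxTurns: 5,
	permissionMode: 'default',
	commandSpecific: {
		'parse-prd': { maxTurns: 10 } // hypothetical per-command override
	}
};

const merged = settingsForCommand(claudeCode, 'parse-prd');
// maxTurns comes from the override, permissionMode from the top level
```

Note the merged result still carries the `commandSpecific` key, mirroring the behavior in the diff.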
// --- Role-Specific Getters ---
function getModelConfigForRole(role, explicitRoot = null) {
@@ -424,6 +510,11 @@ function getVertexLocation(explicitRoot = null) {
return getGlobalConfig(explicitRoot).vertexLocation || 'us-central1';
}
function getResponseLanguage(explicitRoot = null) {
// Directly return value from config
return getGlobalConfig(explicitRoot).responseLanguage;
}
/**
* Gets model parameters (maxTokens, temperature) for a specific role,
* considering model-specific overrides from supported-models.json.
@@ -500,7 +591,8 @@ function isApiKeySet(providerName, session = null, projectRoot = null) {
// Providers that don't require API keys for authentication
const providersWithoutApiKeys = [
CUSTOM_PROVIDERS.OLLAMA,
CUSTOM_PROVIDERS.BEDROCK
CUSTOM_PROVIDERS.BEDROCK,
CUSTOM_PROVIDERS.GEMINI_CLI
];
if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
@@ -794,15 +886,26 @@ function getBaseUrlForRole(role, explicitRoot = null) {
return undefined;
}
// Export the providers without API keys array for use in other modules
export const providersWithoutApiKeys = [
CUSTOM_PROVIDERS.OLLAMA,
CUSTOM_PROVIDERS.BEDROCK,
CUSTOM_PROVIDERS.GEMINI_CLI
];
export {
// Core config access
getConfig,
writeConfig,
ConfigurationError,
isConfigFilePresent,
// Claude Code settings
getClaudeCodeSettings,
getClaudeCodeSettingsForCommand,
// Validation
validateProvider,
validateProviderModelCombination,
validateClaudeCodeSettings,
VALIDATED_PROVIDERS,
CUSTOM_PROVIDERS,
ALL_PROVIDERS,
@@ -832,6 +935,7 @@ export {
getOllamaBaseURL,
getAzureBaseURL,
getBedrockBaseURL,
getResponseLanguage,
getParametersForRole,
getUserId,
// API Key Checkers (still relevant)

View File

@@ -1,16 +1,64 @@
{
"bedrock": [
{
"id": "us.anthropic.claude-3-haiku-20240307-v1:0",
"swe_score": 0.4,
"cost_per_1m_tokens": { "input": 0.25, "output": 1.25 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "us.anthropic.claude-3-opus-20240229-v1:0",
"swe_score": 0.725,
"cost_per_1m_tokens": { "input": 15, "output": 75 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
"swe_score": 0.49,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
"swe_score": 0.49,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
"swe_score": 0.623,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback"],
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
},
{
"id": "us.anthropic.claude-3-5-haiku-20241022-v1:0",
"swe_score": 0.4,
"cost_per_1m_tokens": { "input": 0.8, "output": 4 },
"allowed_roles": ["main", "fallback"]
},
{
"id": "us.anthropic.claude-opus-4-20250514-v1:0",
"swe_score": 0.725,
"cost_per_1m_tokens": { "input": 15, "output": 75 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
"swe_score": 0.727,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.deepseek.r1-v1:0",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 1.35, "output": 5.4 },
"cost_per_1m_tokens": {
"input": 1.35,
"output": 5.4
},
"allowed_roles": ["research"],
"max_tokens": 65536
}
@@ -648,16 +696,44 @@
{
"id": "opus",
"swe_score": 0.725,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32000
},
{
"id": "sonnet",
"swe_score": 0.727,
"cost_per_1m_tokens": { "input": 0, "output": 0 },
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 64000
}
],
"gemini-cli": [
{
"id": "gemini-2.5-pro",
"swe_score": 0.72,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
},
{
"id": "gemini-2.5-flash",
"swe_score": 0.71,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
}
]
}

View File

@@ -23,10 +23,12 @@ import updateSubtaskById from './task-manager/update-subtask-by-id.js';
import removeTask from './task-manager/remove-task.js';
import taskExists from './task-manager/task-exists.js';
import isTaskDependentOn from './task-manager/is-task-dependent.js';
import setResponseLanguage from './task-manager/response-language.js';
import moveTask from './task-manager/move-task.js';
import { migrateProject } from './task-manager/migrate.js';
import { performResearch } from './task-manager/research.js';
import { readComplexityReport } from './utils.js';
// Export task manager functions
export {
parsePRD,
@@ -49,6 +51,7 @@ export {
findTaskById,
taskExists,
isTaskDependentOn,
setResponseLanguage,
moveTask,
readComplexityReport,
migrateProject,

View File

@@ -1,6 +1,6 @@
import path from 'path';
import { log, readJSON, writeJSON } from '../utils.js';
import { log, readJSON, writeJSON, getCurrentTag } from '../utils.js';
import { isTaskDependentOn } from '../task-manager.js';
import generateTaskFiles from './generate-task-files.js';
@@ -25,8 +25,10 @@ async function addSubtask(
try {
log('info', `Adding subtask to parent task ${parentId}...`);
const currentTag =
context.tag || getCurrentTag(context.projectRoot) || 'master';
// Read the existing tasks with proper context
const data = readJSON(tasksPath, context.projectRoot, context.tag);
const data = readJSON(tasksPath, context.projectRoot, currentTag);
if (!data || !data.tasks) {
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
}
@@ -137,12 +139,12 @@ async function addSubtask(
}
// Write the updated tasks back to the file with proper context
writeJSON(tasksPath, data, context.projectRoot, context.tag);
writeJSON(tasksPath, data, context.projectRoot, currentTag);
// Generate task files if requested
if (generateFiles) {
log('info', 'Regenerating task files...');
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), context);
await generateTaskFiles(tasksPath, path.dirname(tasksPath), context);
}
return newSubtask;

View File

@@ -28,6 +28,7 @@ import {
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
import ContextGatherer from '../utils/contextGatherer.js';
import generateTaskFiles from './generate-task-files.js';
// Define Zod schema for the expected AI output object
const AiTaskDataSchema = z.object({
@@ -553,18 +554,18 @@ async function addTask(
report('DEBUG: Writing tasks.json...', 'debug');
// Write the updated raw data back to the file
// The writeJSON function will automatically filter out _rawTaggedData
writeJSON(tasksPath, rawData);
writeJSON(tasksPath, rawData, projectRoot, targetTag);
report('DEBUG: tasks.json written.', 'debug');
// Generate markdown task files
// report('Generating task files...', 'info');
// report('DEBUG: Calling generateTaskFiles...', 'debug');
// // Pass mcpLog if available to generateTaskFiles
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
// projectRoot,
// tag: targetTag
// });
// report('DEBUG: generateTaskFiles finished.', 'debug');
report('Generating task files...', 'info');
report('DEBUG: Calling generateTaskFiles...', 'debug');
// Pass mcpLog if available to generateTaskFiles
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
projectRoot,
tag: targetTag
});
report('DEBUG: generateTaskFiles finished.', 'debug');
// Show success message - only for text output (CLI)
if (outputFormat === 'text') {

View File

@@ -2,7 +2,13 @@ import fs from 'fs';
import path from 'path';
import { z } from 'zod';
import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';
import {
log,
readJSON,
writeJSON,
isSilentMode,
getTagAwareFilePath
} from '../utils.js';
import {
startLoadingIndicator,
@@ -61,7 +67,7 @@ const subtaskWrapperSchema = z.object({
*/
function generateMainSystemPrompt(subtaskCount) {
return `You are an AI assistant helping with task breakdown for software development.
You need to break down a high-level task into ${subtaskCount} specific subtasks that can be implemented one by one.
You need to break down a high-level task into ${subtaskCount > 0 ? subtaskCount : 'an appropriate number of'} specific subtasks that can be implemented one by one.
Subtasks should:
1. Be specific and actionable implementation steps
@@ -76,7 +82,7 @@ For each subtask, provide:
- title: Clear, specific title
- description: Detailed description
- dependencies: Array of prerequisite subtask IDs (use the new sequential IDs)
- details: Implementation details
- details: Implementation details; the output should be a string
- testStrategy: Optional testing approach
@@ -111,11 +117,11 @@ function generateMainUserPrompt(
"details": "Implementation guidance",
"testStrategy": "Optional testing approach"
},
// ... (repeat for a total of ${subtaskCount} subtasks with sequential IDs)
// ... (repeat for ${subtaskCount ? 'a total of ' + subtaskCount : 'each of the'} subtasks with sequential IDs)
]
}`;
return `Break down this task into exactly ${subtaskCount} specific subtasks:
return `Break down this task into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks:
Task ID: ${task.id}
Title: ${task.title}
@@ -159,7 +165,7 @@ function generateResearchUserPrompt(
]
}`;
return `Analyze the following task and break it down into exactly ${subtaskCount} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
return `Analyze the following task and break it down into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
Parent Task:
ID: ${task.id}
@@ -497,9 +503,18 @@ async function expandTask(
let complexityReasoningContext = '';
let systemPrompt; // Declare systemPrompt here
const complexityReportPath = path.join(projectRoot, COMPLEXITY_REPORT_FILE);
// Use tag-aware complexity report path
const complexityReportPath = getTagAwareFilePath(
COMPLEXITY_REPORT_FILE,
tag,
projectRoot
);
let taskAnalysis = null;
logger.info(
`Looking for complexity report at: ${complexityReportPath}${tag && tag !== 'master' ? ` (tag-specific for '${tag}')` : ''}`
);
try {
if (fs.existsSync(complexityReportPath)) {
const complexityReport = readJSON(complexityReportPath);
@@ -531,7 +546,7 @@ async function expandTask(
// Determine final subtask count
const explicitNumSubtasks = parseInt(numSubtasks, 10);
if (!Number.isNaN(explicitNumSubtasks) && explicitNumSubtasks > 0) {
if (!Number.isNaN(explicitNumSubtasks) && explicitNumSubtasks >= 0) {
finalSubtaskCount = explicitNumSubtasks;
logger.info(
`Using explicitly provided subtask count: ${finalSubtaskCount}`
@@ -545,7 +560,7 @@ async function expandTask(
finalSubtaskCount = getDefaultSubtasks(session);
logger.info(`Using default number of subtasks: ${finalSubtaskCount}`);
}
if (Number.isNaN(finalSubtaskCount) || finalSubtaskCount <= 0) {
if (Number.isNaN(finalSubtaskCount) || finalSubtaskCount < 0) {
logger.warn(
`Invalid subtask count determined (${finalSubtaskCount}), defaulting to 3.`
);
@@ -566,7 +581,7 @@ async function expandTask(
}
// --- Use Simplified System Prompt for Report Prompts ---
systemPrompt = `You are an AI assistant helping with task breakdown. Generate exactly ${finalSubtaskCount} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
systemPrompt = `You are an AI assistant helping with task breakdown. Generate ${finalSubtaskCount > 0 ? 'exactly ' + finalSubtaskCount : 'an appropriate number of'} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
logger.info(
`Using expansion prompt from complexity report and simplified system prompt for task ${task.id}.`
);
@@ -608,7 +623,7 @@ async function expandTask(
let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator(
`Generating ${finalSubtaskCount} subtasks...\n`
`Generating ${finalSubtaskCount || 'appropriate number of'} subtasks...\n`
);
}
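The `subtaskCount > 0 ? … : 'an appropriate number of'` ternary now appears in several prompt builders in this file. A small helper (hypothetical, not part of this diff) would keep the phrasing consistent across the main, user, and research prompts:

```javascript
// Hypothetical helper for the repeated count phrase in prompt templates.
// exact=true yields "exactly N"; a count of 0 defers the decision to the model.
function subtaskCountPhrase(count, exact = true) {
	if (Number.isInteger(count) && count > 0) {
		return exact ? `exactly ${count}` : String(count);
	}
	return 'an appropriate number of';
}

// e.g. `Break down this task into ${subtaskCountPhrase(0)} specific subtasks:`
```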

View File

@@ -1,5 +1,5 @@
import path from 'path';
import fs from 'fs';
import chalk from 'chalk';
import { log, readJSON } from '../utils.js';

View File

@@ -523,6 +523,24 @@ async function setModel(role, modelId, options = {}) {
determinedProvider = CUSTOM_PROVIDERS.VERTEX;
warningMessage = `Warning: Custom Vertex AI model '${modelId}' set. Please ensure the model is valid and accessible in your Google Cloud project.`;
report('warn', warningMessage);
} else if (providerHint === CUSTOM_PROVIDERS.GEMINI_CLI) {
// Gemini CLI provider - check if model exists in our list
determinedProvider = CUSTOM_PROVIDERS.GEMINI_CLI;
// Re-find modelData specifically for gemini-cli provider
const geminiCliModels = availableModels.filter(
(m) => m.provider === 'gemini-cli'
);
const geminiCliModelData = geminiCliModels.find(
(m) => m.id === modelId
);
if (geminiCliModelData) {
// Update modelData to the found gemini-cli model
modelData = geminiCliModelData;
report('info', `Setting Gemini CLI model '${modelId}'.`);
} else {
warningMessage = `Warning: Gemini CLI model '${modelId}' not found in supported models. Setting without validation.`;
report('warn', warningMessage);
}
} else {
// Invalid provider hint - should not happen with our constants
throw new Error(`Invalid provider hint received: ${providerHint}`);

View File

@@ -188,7 +188,7 @@ Your task breakdown should incorporate this research, resulting in more detailed
// Base system prompt for PRD parsing
const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.${researchPromptAddition}
Analyze the provided PRD content and generate approximately ${numTasks} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
Analyze the provided PRD content and generate ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
Each task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.
Assign sequential IDs starting from ${nextId}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.
Set status to 'pending', dependencies to an empty array [], and priority to 'medium' initially for all tasks.
@@ -207,7 +207,7 @@ Each task should follow this JSON structure:
}
Guidelines:
1. Unless complexity warrants otherwise, create exactly ${numTasks} tasks, numbered sequentially starting from ${nextId}
1. ${numTasks > 0 ? 'Unless complexity warrants otherwise' : 'Depending on the complexity'}, create ${numTasks > 0 ? 'exactly ' + numTasks : 'an appropriate number of'} tasks, numbered sequentially starting from ${nextId}
2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards
3. Order tasks logically - consider dependencies and implementation sequence
4. Early tasks should focus on setup, core functionality first, then advanced features
@@ -220,7 +220,7 @@ Guidelines:
11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches${research ? '\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research' : ''}`;
// Build user prompt with PRD content
const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
const userPrompt = `Here's the Product Requirements Document (PRD) to break down into ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
Return your response in this format:
{
@@ -235,7 +235,7 @@ Guidelines:
],
"metadata": {
"projectName": "PRD Implementation",
"totalTasks": ${numTasks},
"totalTasks": {number of tasks},
"sourceFile": "${prdPath}",
"generatedAt": "YYYY-MM-DD"
}

View File

@@ -1,7 +1,6 @@
import fs from 'fs';
import path from 'path';
import { log, readJSON, writeJSON } from '../utils.js';
import * as fs from 'fs';
import { readJSON, writeJSON, log, findTaskById } from '../utils.js';
import generateTaskFiles from './generate-task-files.js';
import taskExists from './task-exists.js';
@@ -172,7 +171,7 @@ async function removeTask(tasksPath, taskIds, context = {}) {
}
// Save the updated raw data structure
writeJSON(tasksPath, fullTaggedData);
writeJSON(tasksPath, fullTaggedData, projectRoot, currentTag);
// Delete task files AFTER saving tasks.json
for (const taskIdNum of tasksToDeleteFiles) {
@@ -195,10 +194,10 @@ async function removeTask(tasksPath, taskIds, context = {}) {
// Generate updated task files ONCE, with context
try {
// await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
// projectRoot,
// tag: currentTag
// });
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
projectRoot,
tag: currentTag
});
results.messages.push('Task files regenerated successfully.');
} catch (genError) {
const genErrMsg = `Failed to regenerate task files: ${genError.message}`;

View File

@@ -0,0 +1,89 @@
import {
getConfig,
isConfigFilePresent,
writeConfig
} from '../config-manager.js';
import { findConfigPath } from '../../../src/utils/path-utils.js';
import { log } from '../utils.js';
function setResponseLanguage(lang, options = {}) {
const { mcpLog, projectRoot } = options;
const report = (level, ...args) => {
if (mcpLog && typeof mcpLog[level] === 'function') {
mcpLog[level](...args);
}
};
// Use centralized config path finding instead of hardcoded path
const configPath = findConfigPath(null, { projectRoot });
const configExists = isConfigFilePresent(projectRoot);
log(
'debug',
`Checking for config file using findConfigPath, found: ${configPath}`
);
log(
'debug',
`Checking config file using isConfigFilePresent(), exists: ${configExists}`
);
if (!configExists) {
return {
success: false,
error: {
code: 'CONFIG_MISSING',
message:
'The configuration file is missing. Run "task-master models --setup" to create it.'
}
};
}
// Validate response language
if (typeof lang !== 'string' || lang.trim() === '') {
return {
success: false,
error: {
code: 'INVALID_RESPONSE_LANGUAGE',
message: `Invalid response language: ${lang}. Must be a non-empty string.`
}
};
}
try {
const currentConfig = getConfig(projectRoot);
currentConfig.global.responseLanguage = lang;
const writeResult = writeConfig(currentConfig, projectRoot);
if (!writeResult) {
return {
success: false,
error: {
code: 'WRITE_ERROR',
message: 'Error writing updated configuration to configuration file'
}
};
}
const successMessage = `Successfully set response language to: ${lang}`;
report('info', successMessage);
return {
success: true,
data: {
responseLanguage: lang,
message: successMessage
}
};
} catch (error) {
report('error', `Error setting response language: ${error.message}`);
return {
success: false,
error: {
code: 'SET_RESPONSE_LANGUAGE_ERROR',
message: error.message
}
};
}
}
export default setResponseLanguage;
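Callers branch on the returned `{ success, data | error }` envelope rather than catching exceptions. The validation path can be sketched standalone (the write path needs a real config file, so only the envelope shape is shown here):

```javascript
// Sketch of the result envelope returned by setResponseLanguage.
// Real writes go through getConfig/writeConfig; this shows only validation.
function validateLanguage(lang) {
	if (typeof lang !== 'string' || lang.trim() === '') {
		return {
			success: false,
			error: {
				code: 'INVALID_RESPONSE_LANGUAGE',
				message: `Invalid response language: ${lang}. Must be a non-empty string.`
			}
		};
	}
	return { success: true, data: { responseLanguage: lang } };
}

// validateLanguage('')   -> { success: false, error: { code: 'INVALID_RESPONSE_LANGUAGE', ... } }
// validateLanguage('fr') -> { success: true, data: { responseLanguage: 'fr' } }
```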

View File

@@ -132,7 +132,7 @@ async function setTaskStatus(
// Write the updated raw data back to the file
// The writeJSON function will automatically filter out _rawTaggedData
writeJSON(tasksPath, rawData);
writeJSON(tasksPath, rawData, options.projectRoot, currentTag);
// Validate dependencies after status update
log('info', 'Validating dependencies after status update...');

View File

@@ -145,8 +145,8 @@ async function createTag(
}
}
// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);
logFn.success(`Successfully created tag "${tagName}"`);
@@ -365,8 +365,8 @@ async function deleteTag(
}
}
// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);
logFn.success(`Successfully deleted tag "${tagName}"`);
@@ -485,7 +485,7 @@ async function enhanceTagsWithMetadata(tasksPath, rawData, context = {}) {
cleanData[key] = value;
}
}
writeJSON(tasksPath, cleanData);
writeJSON(tasksPath, cleanData, context.projectRoot);
}
} catch (error) {
// Don't throw - just log and continue
@@ -905,8 +905,8 @@ async function renameTag(
}
}
// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);
// Get task count
const tasks = getTasksForTag(rawData, newName);
@@ -1062,8 +1062,8 @@ async function copyTag(
}
}
// Write the clean data back to file
writeJSON(tasksPath, cleanData);
// Write the clean data back to file with proper context to avoid tag corruption
writeJSON(tasksPath, cleanData, projectRoot);
logFn.success(
`Successfully copied tag from "${sourceName}" to "${targetName}"`

View File

@@ -9,7 +9,8 @@ import {
readJSON,
writeJSON,
truncate,
isSilentMode
isSilentMode,
getCurrentTag
} from '../utils.js';
import {
@@ -222,6 +223,7 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
* @param {Object} [context.session] - Session object from MCP server.
* @param {Object} [context.mcpLog] - MCP logger object.
* @param {string} [outputFormat='text'] - Output format ('text' or 'json').
* @param {string} [tag=null] - Tag associated with the tasks.
*/
async function updateTasks(
tasksPath,
@@ -231,7 +233,7 @@ async function updateTasks(
context = {},
outputFormat = 'text' // Default to text for CLI
) {
const { session, mcpLog, projectRoot: providedProjectRoot } = context;
const { session, mcpLog, projectRoot: providedProjectRoot, tag } = context;
// Use mcpLog if available, otherwise use the imported consoleLog function
const logFn = mcpLog || consoleLog;
// Flag to easily check which logger type we have
@@ -255,8 +257,11 @@ async function updateTasks(
throw new Error('Could not determine project root directory');
}
// --- Task Loading/Filtering (Unchanged) ---
const data = readJSON(tasksPath, projectRoot);
// Determine the current tag - prioritize explicit tag, then context.tag, then current tag
const currentTag = tag || getCurrentTag(projectRoot) || 'master';
// --- Task Loading/Filtering (Updated to pass projectRoot and tag) ---
const data = readJSON(tasksPath, projectRoot, currentTag);
if (!data || !data.tasks)
throw new Error(`No valid tasks found in ${tasksPath}`);
const tasksToUpdate = data.tasks.filter(
@@ -428,7 +433,7 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
isMCP
);
// --- Update Tasks Data (Unchanged) ---
// --- Update Tasks Data (Updated writeJSON call) ---
if (!Array.isArray(parsedUpdatedTasks)) {
// Should be caught by parser, but extra check
throw new Error(
@@ -467,7 +472,8 @@ The changes described in the prompt should be applied to ALL tasks in the list.`
`Applied updates to ${actualUpdateCount} tasks in the dataset.`
);
writeJSON(tasksPath, data);
// Fix: Pass projectRoot and currentTag to writeJSON
writeJSON(tasksPath, data, projectRoot, currentTag);
if (isMCP)
logFn.info(
`Successfully updated ${actualUpdateCount} tasks in ${tasksPath}`

View File

@@ -0,0 +1,57 @@
/**
* update-config-tokens.js
* Updates config.json with correct maxTokens values from supported-models.json
*/
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { dirname } from 'path';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
/**
* Updates the config file with correct maxTokens values from supported-models.json
* @param {string} configPath - Path to the config.json file to update
* @returns {boolean} True if successful, false otherwise
*/
export function updateConfigMaxTokens(configPath) {
try {
// Load supported models
const supportedModelsPath = path.join(__dirname, 'supported-models.json');
const supportedModels = JSON.parse(
fs.readFileSync(supportedModelsPath, 'utf-8')
);
// Load config
const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
// Update each role's maxTokens if the model exists in supported-models.json
const roles = ['main', 'research', 'fallback'];
for (const role of roles) {
if (config.models && config.models[role]) {
const provider = config.models[role].provider;
const modelId = config.models[role].modelId;
// Find the model in supported models
if (supportedModels[provider]) {
const modelData = supportedModels[provider].find(
(m) => m.id === modelId
);
if (modelData && modelData.max_tokens) {
config.models[role].maxTokens = modelData.max_tokens;
}
}
}
}
// Write back the updated config
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
return true;
} catch (error) {
console.error('Error updating config maxTokens:', error.message);
return false;
}
}

View File

@@ -64,6 +64,51 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {
return undefined;
}
// --- Tag-Aware Path Resolution Utility ---
/**
* Slugifies a tag name to be filesystem-safe
* @param {string} tagName - The tag name to slugify
* @returns {string} Slugified tag name safe for filesystem use
*/
function slugifyTagForFilePath(tagName) {
if (!tagName || typeof tagName !== 'string') {
return 'unknown-tag';
}
// Replace invalid filesystem characters with hyphens and clean up
return tagName
.replace(/[^a-zA-Z0-9_-]/g, '-') // Replace invalid chars with hyphens
.replace(/^-+|-+$/g, '') // Remove leading/trailing hyphens
.replace(/-+/g, '-') // Collapse multiple hyphens
.toLowerCase() // Convert to lowercase
.substring(0, 50); // Limit length to prevent overly long filenames
}
/**
* Resolves a file path to be tag-aware, following the pattern used by other commands.
* For non-master tags, appends _slugified-tagname before the file extension.
* @param {string} basePath - The base file path (e.g., '.taskmaster/reports/task-complexity-report.json')
* @param {string|null} tag - The tag name (null, undefined, or 'master' uses base path)
* @param {string} [projectRoot='.'] - The project root directory
* @returns {string} The resolved file path
*/
function getTagAwareFilePath(basePath, tag, projectRoot = '.') {
// Use path.parse and format for clean tag insertion
const parsedPath = path.parse(basePath);
if (!tag || tag === 'master') {
return path.join(projectRoot, basePath);
}
// Slugify the tag for filesystem safety
const slugifiedTag = slugifyTagForFilePath(tag);
// Append slugified tag before file extension
parsedPath.base = `${parsedPath.name}_${slugifiedTag}${parsedPath.ext}`;
const relativePath = path.format(parsedPath);
return path.join(projectRoot, relativePath);
}
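Assuming the slugify rules above, the resolved paths look like this (a standalone sketch reproducing both utilities so it runs on its own):

```javascript
import path from 'path';

// Standalone reproduction of slugifyTagForFilePath + getTagAwareFilePath.
function slugifyTagForFilePath(tagName) {
	if (!tagName || typeof tagName !== 'string') return 'unknown-tag';
	return tagName
		.replace(/[^a-zA-Z0-9_-]/g, '-') // replace invalid chars with hyphens
		.replace(/^-+|-+$/g, '') // trim leading/trailing hyphens
		.replace(/-+/g, '-') // collapse runs of hyphens
		.toLowerCase()
		.substring(0, 50);
}

function getTagAwareFilePath(basePath, tag, projectRoot = '.') {
	const parsedPath = path.parse(basePath);
	if (!tag || tag === 'master') return path.join(projectRoot, basePath);
	parsedPath.base = `${parsedPath.name}_${slugifyTagForFilePath(tag)}${parsedPath.ext}`;
	return path.join(projectRoot, path.format(parsedPath));
}

// master (or no tag) keeps the base path:
//   .taskmaster/reports/task-complexity-report.json
// a tag like 'feature/auth' is slugified and inserted before the extension:
//   .taskmaster/reports/task-complexity-report_feature-auth.json
```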
// --- Project Root Finding Utility ---
/**
* Recursively searches upwards for project root starting from a given directory.
@@ -967,6 +1012,21 @@ function truncate(text, maxLength) {
return `${text.slice(0, maxLength - 3)}...`;
}
/**
* Checks if an array or object is empty
* @param {*} value - The value to check
* @returns {boolean} True if empty, false otherwise
*/
function isEmpty(value) {
if (Array.isArray(value)) {
return value.length === 0;
} else if (typeof value === 'object' && value !== null) {
return Object.keys(value).length === 0;
}
return false; // Not an array or object, or is null
}
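Note the deliberate asymmetry: non-container values (strings, numbers, `null`) report `false`, since the check targets arrays and plain objects only. Reproducing the utility above to show its behavior:

```javascript
// Behavior of isEmpty for the value kinds it distinguishes.
function isEmpty(value) {
	if (Array.isArray(value)) {
		return value.length === 0;
	} else if (typeof value === 'object' && value !== null) {
		return Object.keys(value).length === 0;
	}
	return false; // not an array or object, or is null
}

// isEmpty([])   -> true
// isEmpty({})   -> true
// isEmpty([1])  -> false
// isEmpty('')   -> false  (strings are not containers here)
// isEmpty(null) -> false
```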
/**
* Find cycles in a dependency graph using DFS
* @param {string} subtaskId - Current subtask ID
@@ -1328,6 +1388,7 @@ export {
formatTaskId,
findTaskById,
truncate,
isEmpty,
findCycles,
toKebabCase,
detectCamelCaseFlags,
@@ -1338,6 +1399,8 @@ export {
addComplexityToTask,
resolveEnvVariable,
findProjectRoot,
getTagAwareFilePath,
slugifyTagForFilePath,
aggregateTelemetry,
getCurrentTag,
resolveTag,

View File

@@ -1,4 +1,4 @@
import { generateText, streamText, generateObject } from 'ai';
import { generateObject, generateText, streamText } from 'ai';
import { log } from '../../scripts/modules/index.js';
/**
@@ -109,7 +109,7 @@ export class BaseAIProvider {
`Generating ${this.name} text with model: ${params.modelId}`
);
const client = this.getClient(params);
const client = await this.getClient(params);
const result = await generateText({
model: client(params.modelId),
messages: params.messages,
@@ -145,7 +145,7 @@ export class BaseAIProvider {
log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);
const client = this.getClient(params);
const client = await this.getClient(params);
const stream = await streamText({
model: client(params.modelId),
messages: params.messages,
@@ -184,7 +184,7 @@ export class BaseAIProvider {
`Generating ${this.name} object ('${params.objectName}') with model: ${params.modelId}`
);
const client = this.getClient(params);
const client = await this.getClient(params);
const result = await generateObject({
model: client(params.modelId),
messages: params.messages,

View File

@@ -7,6 +7,7 @@
import { createClaudeCode } from './custom-sdk/claude-code/index.js';
import { BaseAIProvider } from './base-provider.js';
import { getClaudeCodeSettingsForCommand } from '../../scripts/modules/config-manager.js';
export class ClaudeCodeProvider extends BaseAIProvider {
constructor() {
@@ -26,6 +27,7 @@ export class ClaudeCodeProvider extends BaseAIProvider {
/**
* Creates and returns a Claude Code client instance.
* @param {object} params - Parameters for client initialization
* @param {string} [params.commandName] - Name of the command invoking the service
* @param {string} [params.baseURL] - Optional custom API endpoint (not used by Claude Code)
* @returns {Function} Claude Code client function
* @throws {Error} If initialization fails
@@ -35,10 +37,7 @@ export class ClaudeCodeProvider extends BaseAIProvider {
// Claude Code doesn't use API keys or base URLs
// Just return the provider factory
return createClaudeCode({
defaultSettings: {
// Add any default settings if needed
// These can be overridden per request
}
defaultSettings: getClaudeCodeSettingsForCommand(params?.commandName)
});
} catch (error) {
this.handleError('client initialization', error);

View File

@@ -0,0 +1,656 @@
/**
* src/ai-providers/gemini-cli.js
*
* Implementation for interacting with Gemini models via Gemini CLI
* using the ai-sdk-provider-gemini-cli package.
*/
import { generateObject, generateText, streamText } from 'ai';
import { parse } from 'jsonc-parser';
import { BaseAIProvider } from './base-provider.js';
import { log } from '../../scripts/modules/index.js';
let createGeminiProvider;
async function loadGeminiCliModule() {
if (!createGeminiProvider) {
try {
const mod = await import('ai-sdk-provider-gemini-cli');
createGeminiProvider = mod.createGeminiProvider;
} catch (err) {
throw new Error(
"Gemini CLI SDK is not installed. Please install 'ai-sdk-provider-gemini-cli' to use the gemini-cli provider."
);
}
}
}
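The lazy-import guard above is a standard optional-dependency pattern: import once, cache the factory, and fail with an install hint instead of a raw module-resolution error. A minimal standalone sketch (the injectable `importer` parameter is illustrative, added here only so the pattern can be exercised without the real `ai-sdk-provider-gemini-cli` package):

```javascript
// Memoized loader for an optional provider SDK. The `importer` argument is a
// hypothetical injection point for testing; the real code calls import() directly.
function makeOptionalLoader(moduleName, importer = (name) => import(name)) {
	let cached;
	return async function load() {
		if (cached) return cached;
		try {
			const mod = await importer(moduleName);
			cached = mod.createGeminiProvider;
		} catch {
			// Surface a friendly install hint instead of a resolution error.
			throw new Error(
				`'${moduleName}' is not installed. Install it to use this provider.`
			);
		}
		return cached;
	};
}
```

Because the result is cached, repeated calls resolve to the same factory and the dynamic import runs at most once.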
export class GeminiCliProvider extends BaseAIProvider {
constructor() {
super();
this.name = 'Gemini CLI';
}
/**
* Override validateAuth to handle Gemini CLI authentication options
* @param {object} params - Parameters to validate
*/
validateAuth(params) {
// Gemini CLI is designed to use pre-configured OAuth authentication
// Users choose gemini-cli specifically to leverage their existing
// gemini auth login credentials, not to use API keys.
// We support API keys for compatibility, but the expected usage
// is through CLI authentication (no API key required).
// No validation needed - the SDK will handle auth internally
}
/**
* Creates and returns a Gemini CLI client instance.
* @param {object} params - Parameters for client initialization
* @param {string} [params.apiKey] - Optional Gemini API key (rarely used with gemini-cli)
* @param {string} [params.baseURL] - Optional custom API endpoint
* @returns {Promise<Function>} Gemini CLI client function
* @throws {Error} If initialization fails
*/
async getClient(params) {
try {
// Load the Gemini CLI module dynamically
await loadGeminiCliModule();
// Primary use case: Use existing gemini CLI authentication
// Secondary use case: Direct API key (for compatibility)
let authOptions = {};
if (params.apiKey && params.apiKey !== 'gemini-cli-no-key-required') {
// API key provided - use it for compatibility
authOptions = {
authType: 'api-key',
apiKey: params.apiKey
};
} else {
// Expected case: Use gemini CLI authentication
// Requires: gemini auth login (pre-configured)
authOptions = {
authType: 'oauth-personal'
};
}
// Add baseURL if provided (for custom endpoints)
if (params.baseURL) {
authOptions.baseURL = params.baseURL;
}
// Create and return the provider
return createGeminiProvider(authOptions);
} catch (error) {
this.handleError('client initialization', error);
}
}
/**
* Extracts system messages from the messages array and returns them separately.
* This is needed because ai-sdk-provider-gemini-cli expects system prompts as a separate parameter.
* @param {Array} messages - Array of message objects
* @param {Object} options - Options for system prompt enhancement
* @param {boolean} options.enforceJsonOutput - Whether to add JSON enforcement to system prompt
* @returns {Object} - {systemPrompt: string|undefined, messages: Array}
*/
_extractSystemMessage(messages, options = {}) {
if (!messages || !Array.isArray(messages)) {
return { systemPrompt: undefined, messages: messages || [] };
}
const systemMessages = messages.filter((msg) => msg.role === 'system');
const nonSystemMessages = messages.filter((msg) => msg.role !== 'system');
// Combine multiple system messages if present
let systemPrompt =
systemMessages.length > 0
? systemMessages.map((msg) => msg.content).join('\n\n')
: undefined;
// Add Gemini CLI specific JSON enforcement if requested
if (options.enforceJsonOutput) {
const jsonEnforcement = this._getJsonEnforcementPrompt();
systemPrompt = systemPrompt
? `${systemPrompt}\n\n${jsonEnforcement}`
: jsonEnforcement;
}
return { systemPrompt, messages: nonSystemMessages };
}
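The split performed by `_extractSystemMessage` can be sketched on its own; this simplified version mirrors the filtering and joining behavior but omits the JSON-enforcement option:

```javascript
// Standalone sketch: separate system messages from the rest of a chat
// transcript, joining multiple system prompts with a blank line.
function splitSystemMessages(messages) {
	const system = messages.filter((m) => m.role === 'system');
	const rest = messages.filter((m) => m.role !== 'system');
	return {
		systemPrompt:
			system.length > 0
				? system.map((m) => m.content).join('\n\n')
				: undefined,
		messages: rest
	};
}
```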
/**
* Gets a Gemini CLI specific system prompt to enforce strict JSON output
* @returns {string} JSON enforcement system prompt
*/
_getJsonEnforcementPrompt() {
return `CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown formatting, code block markers, or conversational phrases like "Here is" or "Of course". Your entire response must be parseable JSON that starts with { or [ and ends with } or ]. No exceptions.`;
}
/**
* Checks if a string is valid JSON
* @param {string} text - Text to validate
* @returns {boolean} True if valid JSON
*/
_isValidJson(text) {
if (!text || typeof text !== 'string') {
return false;
}
try {
JSON.parse(text.trim());
return true;
} catch {
return false;
}
}
/**
* Detects if the user prompt is requesting JSON output
* @param {Array} messages - Array of message objects
* @returns {boolean} True if JSON output is likely expected
*/
_detectJsonRequest(messages) {
const userMessages = messages.filter((msg) => msg.role === 'user');
const combinedText = userMessages
.map((msg) => msg.content)
.join(' ')
.toLowerCase();
// Look for indicators that JSON output is expected
const jsonIndicators = [
'json',
'respond only with',
'return only',
'output only',
'format:',
'structure:',
'schema:',
'{"',
'[{',
'subtasks',
'array',
'object'
];
return jsonIndicators.some((indicator) => combinedText.includes(indicator));
}
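Stripped to its core, `_detectJsonRequest` is a keyword heuristic over the lowercased user messages; a self-contained sketch with an abbreviated indicator list:

```javascript
// Abbreviated version of the JSON-request heuristic above: scan user
// messages for JSON-ish keywords. The indicator list here is a subset.
const JSON_INDICATORS = ['json', 'respond only with', 'schema:', '{"', '[{', 'subtasks'];

function looksLikeJsonRequest(messages) {
	const text = messages
		.filter((m) => m.role === 'user')
		.map((m) => m.content)
		.join(' ')
		.toLowerCase();
	return JSON_INDICATORS.some((indicator) => text.includes(indicator));
}
```

Like any keyword heuristic, this can false-positive on conversational prompts that merely mention "json", which is why the provider still validates the response before trusting it.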
/**
* Simplifies complex prompts for gemini-cli to improve JSON output compliance
* @param {Array} messages - Array of message objects
* @returns {Array} Simplified messages array
*/
_simplifyJsonPrompts(messages) {
// First, check if this is an expand-task operation by looking at the system message
const systemMsg = messages.find((m) => m.role === 'system');
const isExpandTask =
systemMsg &&
systemMsg.content.includes(
'You are an AI assistant helping with task breakdown. Generate exactly'
);
if (!isExpandTask) {
return messages; // Not an expand task, return unchanged
}
// Extract subtask count from system message
const subtaskCountMatch = systemMsg.content.match(
/Generate exactly (\d+) subtasks/
);
const subtaskCount = subtaskCountMatch ? subtaskCountMatch[1] : '10';
log(
'debug',
`${this.name} detected expand-task operation, simplifying for ${subtaskCount} subtasks`
);
return messages.map((msg) => {
if (msg.role !== 'user') {
return msg;
}
// For expand-task user messages, create a much simpler, more direct prompt
// that doesn't depend on specific task content
const simplifiedPrompt = `Generate exactly ${subtaskCount} subtasks in the following JSON format.
CRITICAL INSTRUCTION: You must respond with ONLY valid JSON. No explanatory text, no "Here is", no "Of course", no markdown - just the JSON object.
Required JSON structure:
{
"subtasks": [
{
"id": 1,
"title": "Specific actionable task title",
"description": "Clear task description",
"dependencies": [],
"details": "Implementation details and guidance",
"testStrategy": "Testing approach"
}
]
}
Generate ${subtaskCount} subtasks based on the original task context. Return ONLY the JSON object.`;
log(
'debug',
`${this.name} simplified user prompt for better JSON compliance`
);
return { ...msg, content: simplifiedPrompt };
});
}
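The subtask count is recovered from the system prompt with a single regex and falls back to `'10'` when no match is found; isolated, the extraction looks like this:

```javascript
// Standalone sketch of the subtask-count extraction used in
// _simplifyJsonPrompts, with the same string fallback of '10'.
function subtaskCountFrom(systemContent) {
	const match = systemContent.match(/Generate exactly (\d+) subtasks/);
	return match ? match[1] : '10';
}
```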
/**
* Extract JSON from Gemini's response using a tolerant parser.
*
* Optimized approach that progressively tries different parsing strategies:
* 1. Direct parsing after cleanup
* 2. Smart boundary detection with single-pass analysis
* 3. Limited character-by-character fallback for edge cases
*
* @param {string} text - Raw text which may contain JSON
* @returns {string} A valid JSON string if extraction succeeds, otherwise the original text
*/
extractJson(text) {
if (!text || typeof text !== 'string') {
return text;
}
let content = text.trim();
// Early exit for very short content
if (content.length < 2) {
return text;
}
// Strip common wrappers in a single pass
content = content
// Remove markdown fences
.replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
// Remove variable declarations
.replace(/^\s*(?:const|let|var)\s+\w+\s*=\s*([\s\S]*?)(?:;|\s*)$/i, '$1')
// Remove common prefixes
.replace(/^(?:Here's|The)\s+(?:the\s+)?JSON.*?[:]\s*/i, '')
.trim();
// Find the first JSON-like structure
const firstObj = content.indexOf('{');
const firstArr = content.indexOf('[');
if (firstObj === -1 && firstArr === -1) {
return text;
}
const start =
firstArr === -1
? firstObj
: firstObj === -1
? firstArr
: Math.min(firstObj, firstArr);
content = content.slice(start);
// Optimized parsing function with error collection
const tryParse = (value) => {
if (!value || value.length < 2) return undefined;
const errors = [];
try {
const result = parse(value, errors, {
allowTrailingComma: true,
allowEmptyContent: false
});
if (errors.length === 0 && result !== undefined) {
return JSON.stringify(result, null, 2);
}
} catch {
// Parsing failed completely
}
return undefined;
};
// Try parsing the full content first
const fullParse = tryParse(content);
if (fullParse !== undefined) {
return fullParse;
}
// Smart boundary detection - single pass with optimizations
const openChar = content[0];
const closeChar = openChar === '{' ? '}' : ']';
let depth = 0;
let inString = false;
let escapeNext = false;
let lastValidEnd = -1;
// Single-pass boundary detection with early termination
for (let i = 0; i < content.length && i < 10000; i++) {
// Limit scan for performance
const char = content[i];
if (escapeNext) {
escapeNext = false;
continue;
}
if (char === '\\') {
escapeNext = true;
continue;
}
if (char === '"') {
inString = !inString;
continue;
}
if (inString) continue;
if (char === openChar) {
depth++;
} else if (char === closeChar) {
depth--;
if (depth === 0) {
lastValidEnd = i + 1;
// Try parsing immediately on first valid boundary
const candidate = content.slice(0, lastValidEnd);
const parsed = tryParse(candidate);
if (parsed !== undefined) {
return parsed;
}
}
}
}
// If we found valid boundaries but parsing failed, try limited fallback
if (lastValidEnd > 0) {
const maxAttempts = Math.min(5, Math.floor(lastValidEnd / 100)); // Limit attempts
for (let i = 0; i < maxAttempts; i++) {
const testEnd = Math.max(
lastValidEnd - i * 50,
Math.floor(lastValidEnd * 0.8)
);
const candidate = content.slice(0, testEnd);
const parsed = tryParse(candidate);
if (parsed !== undefined) {
return parsed;
}
}
}
return text;
}
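The boundary-scan strategy in `extractJson` can be illustrated with a strict-parser sketch; this simplified version uses `JSON.parse` instead of the tolerant `jsonc-parser` and skips the markdown-fence and prefix stripping the real method performs:

```javascript
// Simplified sketch of extractJson's single-pass boundary scan: find the
// first { or [, track depth while respecting strings and escapes, and try
// to parse the first balanced candidate.
function extractJsonLoose(text) {
	const start = text.search(/[{[]/);
	if (start === -1) return text;
	const open = text[start];
	const close = open === '{' ? '}' : ']';
	let depth = 0;
	let inString = false;
	let escaped = false;
	for (let i = start; i < text.length; i++) {
		const ch = text[i];
		if (escaped) { escaped = false; continue; }
		if (ch === '\\') { escaped = true; continue; }
		if (ch === '"') { inString = !inString; continue; }
		if (inString) continue;
		if (ch === open) depth++;
		else if (ch === close && --depth === 0) {
			const candidate = text.slice(start, i + 1);
			try {
				return JSON.stringify(JSON.parse(candidate), null, 2);
			} catch {
				return text; // balanced boundary found, but not valid JSON
			}
		}
	}
	return text; // no balanced structure; return input unchanged
}
```

The production method layers a tolerant parser, multiple fallback boundaries, and a scan limit on top of this same core idea.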
/**
* Generates text using Gemini CLI model
* Overrides base implementation to properly handle system messages and enforce JSON output when needed
*/
async generateText(params) {
try {
this.validateParams(params);
this.validateMessages(params.messages);
log(
'debug',
`Generating ${this.name} text with model: ${params.modelId}`
);
// Detect if JSON output is expected and enforce it for better gemini-cli compatibility
const enforceJsonOutput = this._detectJsonRequest(params.messages);
// Debug logging to understand what's happening
log('debug', `${this.name} JSON detection analysis:`, {
enforceJsonOutput,
messageCount: params.messages.length,
messages: params.messages.map((msg) => ({
role: msg.role,
contentPreview: msg.content
? msg.content.substring(0, 200) + '...'
: 'empty'
}))
});
if (enforceJsonOutput) {
log(
'debug',
`${this.name} detected JSON request - applying strict JSON enforcement system prompt`
);
}
// For gemini-cli, simplify complex prompts before processing
let processedMessages = params.messages;
if (enforceJsonOutput) {
processedMessages = this._simplifyJsonPrompts(params.messages);
}
// Extract system messages for separate handling with optional JSON enforcement
const { systemPrompt, messages } = this._extractSystemMessage(
processedMessages,
{ enforceJsonOutput }
);
// Debug the final system prompt being sent
log('debug', `${this.name} final system prompt:`, {
systemPromptLength: systemPrompt ? systemPrompt.length : 0,
systemPromptPreview: systemPrompt
? systemPrompt.substring(0, 300) + '...'
: 'none',
finalMessageCount: messages.length
});
const client = await this.getClient(params);
const result = await generateText({
model: client(params.modelId),
system: systemPrompt,
messages: messages,
maxTokens: params.maxTokens,
temperature: params.temperature
});
// If we detected a JSON request and gemini-cli returned conversational text,
// attempt to extract JSON from the response
let finalText = result.text;
if (enforceJsonOutput && result.text && !this._isValidJson(result.text)) {
log(
'debug',
`${this.name} response appears conversational, attempting JSON extraction`
);
// Log first 1000 chars of the response to see what Gemini actually returned
log('debug', `${this.name} raw response preview:`, {
responseLength: result.text.length,
responseStart: result.text.substring(0, 1000)
});
const extractedJson = this.extractJson(result.text);
if (this._isValidJson(extractedJson)) {
log(
'debug',
`${this.name} successfully extracted JSON from conversational response`
);
finalText = extractedJson;
} else {
log(
'debug',
`${this.name} JSON extraction failed, returning original response`
);
// Log what extraction returned to debug why it failed
log('debug', `${this.name} extraction result preview:`, {
extractedLength: extractedJson ? extractedJson.length : 0,
extractedStart: extractedJson
? extractedJson.substring(0, 500)
: 'null'
});
}
}
log(
'debug',
`${this.name} generateText completed successfully for model: ${params.modelId}`
);
return {
text: finalText,
usage: {
inputTokens: result.usage?.promptTokens,
outputTokens: result.usage?.completionTokens,
totalTokens: result.usage?.totalTokens
}
};
} catch (error) {
this.handleError('text generation', error);
}
}
/**
* Streams text using Gemini CLI model
* Overrides base implementation to properly handle system messages and enforce JSON output when needed
*/
async streamText(params) {
try {
this.validateParams(params);
this.validateMessages(params.messages);
log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);
// Detect if JSON output is expected and enforce it for better gemini-cli compatibility
const enforceJsonOutput = this._detectJsonRequest(params.messages);
// Debug logging to understand what's happening
log('debug', `${this.name} JSON detection analysis:`, {
enforceJsonOutput,
messageCount: params.messages.length,
messages: params.messages.map((msg) => ({
role: msg.role,
contentPreview: msg.content
? msg.content.substring(0, 200) + '...'
: 'empty'
}))
});
if (enforceJsonOutput) {
log(
'debug',
`${this.name} detected JSON request - applying strict JSON enforcement system prompt`
);
}
// Extract system messages for separate handling with optional JSON enforcement
const { systemPrompt, messages } = this._extractSystemMessage(
params.messages,
{ enforceJsonOutput }
);
const client = await this.getClient(params);
const stream = await streamText({
model: client(params.modelId),
system: systemPrompt,
messages: messages,
maxTokens: params.maxTokens,
temperature: params.temperature
});
log(
'debug',
`${this.name} streamText initiated successfully for model: ${params.modelId}`
);
// Note: For streaming, we can't intercept and modify the response in real-time
// The JSON extraction would need to happen on the consuming side
return stream;
} catch (error) {
this.handleError('text streaming', error);
}
}
/**
* Generates a structured object using Gemini CLI model
* Overrides base implementation to handle Gemini-specific JSON formatting issues and system messages
*/
async generateObject(params) {
try {
// First try the standard generateObject from base class
return await super.generateObject(params);
} catch (error) {
// If it's a JSON parsing error, try to extract and parse JSON manually
if (error.message?.includes('JSON') || error.message?.includes('parse')) {
log(
'debug',
`Gemini CLI generateObject failed with parsing error, attempting manual extraction`
);
try {
// Validate params first
this.validateParams(params);
this.validateMessages(params.messages);
if (!params.schema) {
throw new Error('Schema is required for object generation');
}
if (!params.objectName) {
throw new Error('Object name is required for object generation');
}
// Extract system messages for separate handling with JSON enforcement
const { systemPrompt, messages } = this._extractSystemMessage(
params.messages,
{ enforceJsonOutput: true }
);
// Call generateObject directly with our client
const client = await this.getClient(params);
const result = await generateObject({
model: client(params.modelId),
system: systemPrompt,
messages: messages,
schema: params.schema,
mode: 'json', // Use json mode instead of auto for Gemini
maxTokens: params.maxTokens,
temperature: params.temperature
});
// If we get rawResponse text, try to extract JSON from it
if (result.rawResponse?.text && !result.object) {
const extractedJson = this.extractJson(result.rawResponse.text);
try {
result.object = JSON.parse(extractedJson);
} catch (parseError) {
log(
'error',
`Failed to parse extracted JSON: ${parseError.message}`
);
log(
'debug',
`Extracted JSON: ${extractedJson.substring(0, 500)}...`
);
throw new Error(
`Gemini CLI returned invalid JSON that could not be parsed: ${parseError.message}`
);
}
}
return {
object: result.object,
usage: {
inputTokens: result.usage?.promptTokens,
outputTokens: result.usage?.completionTokens,
totalTokens: result.usage?.totalTokens
}
};
} catch (retryError) {
log(
'error',
`Gemini CLI manual JSON extraction failed: ${retryError.message}`
);
// Re-throw the original error with more context
throw new Error(
`${this.name} failed to generate valid JSON object: ${error.message}`
);
}
}
// For non-parsing errors, just re-throw
throw error;
}
}
}

View File

@@ -14,3 +14,4 @@ export { BedrockAIProvider } from './bedrock.js';
export { AzureProvider } from './azure.js';
export { VertexAIProvider } from './google-vertex.js';
export { ClaudeCodeProvider } from './claude-code.js';
export { GeminiCliProvider } from './gemini-cli.js';

src/constants/commands.js
View File

@@ -0,0 +1,17 @@
/**
* Command related constants
* Defines which commands trigger AI processing
*/
// Command names that trigger AI processing
export const AI_COMMAND_NAMES = [
'add-task',
'analyze-complexity',
'expand-task',
'parse-prd',
'research',
'research-save',
'update-subtask',
'update-task',
'update-tasks'
];
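One plausible use of such a constant is a simple membership check before applying AI-specific settings; the `isAiCommand` helper below is illustrative, not part of this PR (the array is duplicated so the snippet is self-contained):

```javascript
// Hypothetical guard gating AI-specific behavior on the command name.
const AI_COMMAND_NAMES = [
	'add-task',
	'analyze-complexity',
	'expand-task',
	'parse-prd',
	'research',
	'research-save',
	'update-subtask',
	'update-task',
	'update-tasks'
];

function isAiCommand(commandName) {
	return AI_COMMAND_NAMES.includes(commandName);
}
```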

View File

@@ -20,7 +20,8 @@ export const CUSTOM_PROVIDERS = {
BEDROCK: 'bedrock',
OPENROUTER: 'openrouter',
OLLAMA: 'ollama',
CLAUDE_CODE: 'claude-code'
CLAUDE_CODE: 'claude-code',
GEMINI_CLI: 'gemini-cli'
};
// Custom providers array (for backward compatibility and iteration)

View File

@@ -25,7 +25,7 @@ function formatJSONWithTabs(obj) {
}
// Structure matches project conventions (see scripts/init.js)
export function setupMCPConfiguration(projectDir, mcpConfigPath) {
export function setupMCPConfiguration(projectRoot, mcpConfigPath) {
// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
if (!mcpConfigPath) {
log(
@@ -36,7 +36,7 @@ export function setupMCPConfiguration(projectDir, mcpConfigPath) {
}
// Build the full path to the MCP config file
const mcpPath = path.join(projectDir, mcpConfigPath);
const mcpPath = path.join(projectRoot, mcpConfigPath);
const configDir = path.dirname(mcpPath);
log('info', `Setting up MCP configuration at ${mcpPath}...`);
@@ -140,11 +140,11 @@ export function setupMCPConfiguration(projectDir, mcpConfigPath) {
/**
* Remove Task Master MCP server configuration from an existing mcp.json file
* Only removes Task Master entries, preserving other MCP servers
* @param {string} projectDir - Target project directory
* @param {string} projectRoot - Target project directory
* @param {string} mcpConfigPath - Relative path to MCP config file (e.g., '.cursor/mcp.json')
* @returns {Object} Result object with success status and details
*/
export function removeTaskMasterMCPConfiguration(projectDir, mcpConfigPath) {
export function removeTaskMasterMCPConfiguration(projectRoot, mcpConfigPath) {
// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
if (!mcpConfigPath) {
return {
@@ -156,7 +156,7 @@ export function removeTaskMasterMCPConfiguration(projectDir, mcpConfigPath) {
};
}
const mcpPath = path.join(projectDir, mcpConfigPath);
const mcpPath = path.join(projectRoot, mcpConfigPath);
let result = {
success: false,

View File

@@ -170,7 +170,7 @@ function validateInputs(targetPath, content, storeTasksInGit) {
*/
function createNewGitignoreFile(targetPath, templateLines, log) {
try {
fs.writeFileSync(targetPath, templateLines.join('\n'));
fs.writeFileSync(targetPath, templateLines.join('\n') + '\n');
if (typeof log === 'function') {
log('success', `Created ${targetPath} with full template`);
}
@@ -223,7 +223,7 @@ function mergeWithExistingFile(
finalLines.push(...buildTaskFilesSection(storeTasksInGit));
// Write result
fs.writeFileSync(targetPath, finalLines.join('\n'));
fs.writeFileSync(targetPath, finalLines.join('\n') + '\n');
if (typeof log === 'function') {
const hasNewContent =

View File

@@ -25,6 +25,9 @@ import { getLoggerOrDefault } from './logger-utils.js';
export function normalizeProjectRoot(projectRoot) {
if (!projectRoot) return projectRoot;
// Ensure it's a string
projectRoot = String(projectRoot);
// Split the path into segments
const segments = projectRoot.split(path.sep);

View File

@@ -198,7 +198,7 @@ export function convertRuleToProfileRule(sourcePath, targetPath, profile) {
/**
* Convert all Cursor rules to profile rules for a specific profile
*/
export function convertAllRulesToProfileRules(projectDir, profile) {
export function convertAllRulesToProfileRules(projectRoot, profile) {
// Handle simple profiles (Claude, Codex) that just copy files to root
const isSimpleProfile = Object.keys(profile.fileMap).length === 0;
if (isSimpleProfile) {
@@ -208,7 +208,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
const assetsDir = path.join(__dirname, '..', '..', 'assets');
if (typeof profile.onPostConvertRulesProfile === 'function') {
profile.onPostConvertRulesProfile(projectDir, assetsDir);
profile.onPostConvertRulesProfile(projectRoot, assetsDir);
}
return { success: 1, failed: 0 };
}
@@ -216,7 +216,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const sourceDir = path.join(__dirname, '..', '..', 'assets', 'rules');
const targetDir = path.join(projectDir, profile.rulesDir);
const targetDir = path.join(projectRoot, profile.rulesDir);
// Ensure target directory exists
if (!fs.existsSync(targetDir)) {
@@ -225,7 +225,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
// Setup MCP configuration if enabled
if (profile.mcpConfig !== false) {
setupMCPConfiguration(projectDir, profile.mcpConfigPath);
setupMCPConfiguration(projectRoot, profile.mcpConfigPath);
}
let success = 0;
@@ -286,7 +286,7 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
// Call post-processing hook if defined (e.g., for Roo's rules-*mode* folders)
if (typeof profile.onPostConvertRulesProfile === 'function') {
const assetsDir = path.join(__dirname, '..', '..', 'assets');
profile.onPostConvertRulesProfile(projectDir, assetsDir);
profile.onPostConvertRulesProfile(projectRoot, assetsDir);
}
return { success, failed };
@@ -294,13 +294,13 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
/**
* Remove only Task Master specific files from a profile, leaving other existing rules intact
* @param {string} projectDir - Target project directory
* @param {string} projectRoot - Target project directory
* @param {Object} profile - Profile configuration
* @returns {Object} Result object
*/
export function removeProfileRules(projectDir, profile) {
const targetDir = path.join(projectDir, profile.rulesDir);
const profileDir = path.join(projectDir, profile.profileDir);
export function removeProfileRules(projectRoot, profile) {
const targetDir = path.join(projectRoot, profile.rulesDir);
const profileDir = path.join(projectRoot, profile.profileDir);
const result = {
profileName: profile.profileName,
@@ -320,12 +320,12 @@ export function removeProfileRules(projectDir, profile) {
if (isSimpleProfile) {
// For simple profiles, just call their removal hook and return
if (typeof profile.onRemoveRulesProfile === 'function') {
profile.onRemoveRulesProfile(projectDir);
profile.onRemoveRulesProfile(projectRoot);
}
result.success = true;
log(
'debug',
`[Rule Transformer] Successfully removed ${profile.profileName} files from ${projectDir}`
`[Rule Transformer] Successfully removed ${profile.profileName} files from ${projectRoot}`
);
return result;
}
@@ -418,7 +418,7 @@ export function removeProfileRules(projectDir, profile) {
// 2. Handle MCP configuration - only remove Task Master, preserve other servers
if (profile.mcpConfig !== false) {
result.mcpResult = removeTaskMasterMCPConfiguration(
projectDir,
projectRoot,
profile.mcpConfigPath
);
if (result.mcpResult.hasOtherServers) {
@@ -432,7 +432,7 @@ export function removeProfileRules(projectDir, profile) {
// 3. Call removal hook if defined (e.g., Roo's custom cleanup)
if (typeof profile.onRemoveRulesProfile === 'function') {
profile.onRemoveRulesProfile(projectDir);
profile.onRemoveRulesProfile(projectRoot);
}
// 4. Only remove profile directory if:
@@ -490,7 +490,7 @@ export function removeProfileRules(projectDir, profile) {
result.success = true;
log(
'debug',
`[Rule Transformer] Successfully removed ${profile.profileName} Task Master files from ${projectDir}`
`[Rule Transformer] Successfully removed ${profile.profileName} Task Master files from ${projectRoot}`
);
} catch (error) {
result.error = error.message;

View File

@@ -0,0 +1,649 @@
import { jest } from '@jest/globals';
// Mock the ai module
jest.unstable_mockModule('ai', () => ({
generateObject: jest.fn(),
generateText: jest.fn(),
streamText: jest.fn()
}));
// Mock the gemini-cli SDK module
jest.unstable_mockModule('ai-sdk-provider-gemini-cli', () => ({
createGeminiProvider: jest.fn((options) => {
const provider = (modelId, settings) => ({
// Mock language model
id: modelId,
settings,
authOptions: options
});
provider.languageModel = jest.fn((id, settings) => ({ id, settings }));
provider.chat = provider.languageModel;
return provider;
})
}));
// Mock the base provider
jest.unstable_mockModule('../../../src/ai-providers/base-provider.js', () => ({
BaseAIProvider: class {
constructor() {
this.name = 'Base Provider';
}
handleError(context, error) {
throw error;
}
validateParams(params) {
// Basic validation
if (!params.modelId) {
throw new Error('Model ID is required');
}
}
validateMessages(messages) {
if (!messages || !Array.isArray(messages)) {
throw new Error('Invalid messages array');
}
}
async generateObject(params) {
// Mock implementation that can be overridden
throw new Error('Mock base generateObject error');
}
}
}));
// Mock the log module
jest.unstable_mockModule('../../../scripts/modules/index.js', () => ({
log: jest.fn()
}));
// Import after mocking
const { GeminiCliProvider } = await import(
'../../../src/ai-providers/gemini-cli.js'
);
const { createGeminiProvider } = await import('ai-sdk-provider-gemini-cli');
const { generateObject, generateText, streamText } = await import('ai');
const { log } = await import('../../../scripts/modules/index.js');
describe('GeminiCliProvider', () => {
let provider;
let consoleLogSpy;
beforeEach(() => {
provider = new GeminiCliProvider();
jest.clearAllMocks();
consoleLogSpy = jest.spyOn(console, 'log').mockImplementation();
});
afterEach(() => {
consoleLogSpy.mockRestore();
});
describe('constructor', () => {
it('should set the provider name to Gemini CLI', () => {
expect(provider.name).toBe('Gemini CLI');
});
});
describe('validateAuth', () => {
it('should not throw an error when API key is provided', () => {
expect(() => provider.validateAuth({ apiKey: 'test-key' })).not.toThrow();
expect(consoleLogSpy).not.toHaveBeenCalled();
});
it('should not require API key and should not log messages', () => {
expect(() => provider.validateAuth({})).not.toThrow();
expect(consoleLogSpy).not.toHaveBeenCalled();
});
it('should not require any parameters', () => {
expect(() => provider.validateAuth()).not.toThrow();
expect(consoleLogSpy).not.toHaveBeenCalled();
});
});
describe('getClient', () => {
it('should return a gemini client with API key auth when apiKey is provided', async () => {
const client = await provider.getClient({ apiKey: 'test-api-key' });
expect(client).toBeDefined();
expect(typeof client).toBe('function');
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'api-key',
apiKey: 'test-api-key'
});
});
it('should return a gemini client with OAuth auth when no apiKey is provided', async () => {
const client = await provider.getClient({});
expect(client).toBeDefined();
expect(typeof client).toBe('function');
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'oauth-personal'
});
});
it('should include baseURL when provided', async () => {
const client = await provider.getClient({
apiKey: 'test-key',
baseURL: 'https://custom-endpoint.com'
});
expect(client).toBeDefined();
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'api-key',
apiKey: 'test-key',
baseURL: 'https://custom-endpoint.com'
});
});
it('should have languageModel and chat methods', async () => {
const client = await provider.getClient({ apiKey: 'test-key' });
expect(client.languageModel).toBeDefined();
expect(client.chat).toBeDefined();
expect(client.chat).toBe(client.languageModel);
});
});
describe('_extractSystemMessage', () => {
it('should extract single system message', () => {
const messages = [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello' }
];
const result = provider._extractSystemMessage(messages);
expect(result.systemPrompt).toBe('You are a helpful assistant');
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
});
it('should combine multiple system messages', () => {
const messages = [
{ role: 'system', content: 'You are helpful' },
{ role: 'system', content: 'Be concise' },
{ role: 'user', content: 'Hello' }
];
const result = provider._extractSystemMessage(messages);
expect(result.systemPrompt).toBe('You are helpful\n\nBe concise');
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
});
it('should handle messages without system prompts', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' }
];
const result = provider._extractSystemMessage(messages);
expect(result.systemPrompt).toBeUndefined();
expect(result.messages).toEqual(messages);
});
it('should handle empty or invalid input', () => {
expect(provider._extractSystemMessage([])).toEqual({
systemPrompt: undefined,
messages: []
});
expect(provider._extractSystemMessage(null)).toEqual({
systemPrompt: undefined,
messages: []
});
expect(provider._extractSystemMessage(undefined)).toEqual({
systemPrompt: undefined,
messages: []
});
});
it('should add JSON enforcement when enforceJsonOutput is true', () => {
const messages = [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello' }
];
const result = provider._extractSystemMessage(messages, {
enforceJsonOutput: true
});
expect(result.systemPrompt).toContain('You are a helpful assistant');
expect(result.systemPrompt).toContain(
'CRITICAL: You MUST respond with ONLY valid JSON'
);
expect(result.messages).toEqual([{ role: 'user', content: 'Hello' }]);
});
it('should add JSON enforcement with no existing system message', () => {
const messages = [{ role: 'user', content: 'Return JSON format' }];
const result = provider._extractSystemMessage(messages, {
enforceJsonOutput: true
});
expect(result.systemPrompt).toBe(
'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown formatting, code block markers, or conversational phrases like "Here is" or "Of course". Your entire response must be parseable JSON that starts with { or [ and ends with } or ]. No exceptions.'
);
expect(result.messages).toEqual([
{ role: 'user', content: 'Return JSON format' }
]);
});
});
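The behavior pinned down by the `_extractSystemMessage` tests above can be summarized in a short sketch. This is an illustration of the contract the tests encode, not the provider's actual code; the enforcement text is abbreviated and the function name is an assumption:

```javascript
// Hedged sketch of the behavior the tests above specify: system messages are
// pulled out, joined with blank lines, and optionally suffixed with a
// JSON-enforcement instruction (abbreviated here).
function extractSystemMessage(messages, { enforceJsonOutput = false } = {}) {
	const msgs = Array.isArray(messages) ? messages : [];
	const systemParts = msgs
		.filter((m) => m.role === 'system')
		.map((m) => m.content);
	const rest = msgs.filter((m) => m.role !== 'system');
	if (enforceJsonOutput) {
		// Abbreviated stand-in for the full enforcement prompt
		systemParts.push('CRITICAL: You MUST respond with ONLY valid JSON.');
	}
	return {
		systemPrompt: systemParts.length ? systemParts.join('\n\n') : undefined,
		messages: rest
	};
}
```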
describe('_detectJsonRequest', () => {
it('should detect JSON requests from user messages', () => {
const messages = [
{
role: 'user',
content: 'Please return JSON format with subtasks array'
}
];
expect(provider._detectJsonRequest(messages)).toBe(true);
});
it('should detect various JSON indicators', () => {
const testCases = [
'respond only with valid JSON',
'return JSON format',
'output schema: {"test": true}',
'format: [{"id": 1}]',
'Please return subtasks in array format',
'Return an object with properties'
];
testCases.forEach((content) => {
const messages = [{ role: 'user', content }];
expect(provider._detectJsonRequest(messages)).toBe(true);
});
});
it('should not detect JSON requests for regular conversation', () => {
const messages = [{ role: 'user', content: 'Hello, how are you today?' }];
expect(provider._detectJsonRequest(messages)).toBe(false);
});
it('should handle multiple user messages', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: 'Now please return JSON format' }
];
expect(provider._detectJsonRequest(messages)).toBe(true);
});
});
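The `_detectJsonRequest` cases above can be satisfied by a simple keyword scan over user messages. This is an illustrative sketch only; the indicator list is an assumption chosen to cover exactly the test inputs shown, and the real implementation may differ:

```javascript
// Hypothetical keyword heuristics covering the test cases above.
function detectJsonRequest(messages) {
	const indicators = [
		/json/i, // "return JSON format", "valid JSON"
		/\bschema\b/i, // 'output schema: {"test": true}'
		/format:\s*[\[{]/i, // 'format: [{"id": 1}]'
		/\barray format\b/i, // "subtasks in array format"
		/\bobject with\b/i // "an object with properties"
	];
	return (messages || [])
		.filter((m) => m.role === 'user')
		.some((m) => indicators.some((re) => re.test(m.content)));
}
```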
describe('_getJsonEnforcementPrompt', () => {
it('should return strict JSON enforcement prompt', () => {
const prompt = provider._getJsonEnforcementPrompt();
expect(prompt).toContain('CRITICAL');
expect(prompt).toContain('ONLY valid JSON');
expect(prompt).toContain('No exceptions');
});
});
describe('_isValidJson', () => {
it('should return true for valid JSON objects', () => {
expect(provider._isValidJson('{"test": true}')).toBe(true);
expect(provider._isValidJson('{"subtasks": [{"id": 1}]}')).toBe(true);
});
it('should return true for valid JSON arrays', () => {
expect(provider._isValidJson('[1, 2, 3]')).toBe(true);
expect(provider._isValidJson('[{"id": 1}, {"id": 2}]')).toBe(true);
});
it('should return false for invalid JSON', () => {
expect(provider._isValidJson('Of course. Here is...')).toBe(false);
expect(provider._isValidJson('{"invalid": json}')).toBe(false);
expect(provider._isValidJson('not json at all')).toBe(false);
});
it('should handle edge cases', () => {
expect(provider._isValidJson('')).toBe(false);
expect(provider._isValidJson(null)).toBe(false);
expect(provider._isValidJson(undefined)).toBe(false);
expect(provider._isValidJson(' {"test": true} ')).toBe(true); // with whitespace
});
});
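The `_isValidJson` tests above are consistent with a thin `JSON.parse` wrapper. A minimal sketch (not the provider's actual code):

```javascript
// Hypothetical minimal implementation consistent with the tests above:
// guard against non-strings and empty input, then try a real parse.
function isValidJson(text) {
	if (typeof text !== 'string' || text.trim() === '') return false;
	try {
		JSON.parse(text); // tolerates surrounding whitespace
		return true;
	} catch {
		return false;
	}
}
```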
describe('extractJson', () => {
it('should extract JSON from markdown code blocks', () => {
const input = '```json\n{"subtasks": [{"id": 1}]}\n```';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should extract JSON with explanatory text', () => {
const input = 'Here\'s the JSON response:\n{"subtasks": [{"id": 1}]}';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should handle variable declarations', () => {
const input = 'const result = {"subtasks": [{"id": 1}]};';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should handle trailing commas with jsonc-parser', () => {
const input = '{"subtasks": [{"id": 1,}],}';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ subtasks: [{ id: 1 }] });
});
it('should handle arrays', () => {
const input = 'The result is: [1, 2, 3]';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual([1, 2, 3]);
});
it('should handle nested objects with proper bracket matching', () => {
const input =
'Response: {"outer": {"inner": {"value": "test"}}} extra text';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ outer: { inner: { value: 'test' } } });
});
it('should handle escaped quotes in strings', () => {
const input = '{"message": "He said \\"hello\\" to me"}';
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ message: 'He said "hello" to me' });
});
it('should return original text if no JSON found', () => {
const input = 'No JSON here';
expect(provider.extractJson(input)).toBe(input);
});
it('should handle null or non-string input', () => {
expect(provider.extractJson(null)).toBe(null);
expect(provider.extractJson(undefined)).toBe(undefined);
expect(provider.extractJson(123)).toBe(123);
});
it('should handle partial JSON by finding valid boundaries', () => {
const input = '{"valid": true, "partial": "incomplete';
// Should return original text since no valid JSON can be extracted
expect(provider.extractJson(input)).toBe(input);
});
it('should handle performance edge cases with large text', () => {
// Test with large text that has JSON at the end
const largePrefix = 'This is a very long explanation. '.repeat(1000);
const json = '{"result": "success"}';
const input = largePrefix + json;
const result = provider.extractJson(input);
const parsed = JSON.parse(result);
expect(parsed).toEqual({ result: 'success' });
});
it('should handle early termination for very large invalid content', () => {
// Test that it doesn't hang on very large content without JSON
const largeText = 'No JSON here. '.repeat(2000);
const result = provider.extractJson(largeText);
expect(result).toBe(largeText);
});
});
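One path `extractJson` is tested on above (JSON embedded in explanatory text) can be sketched as a span extraction from the first opening brace or bracket to the last closing one. This is a simplification; the real implementation also handles markdown fences, variable declarations, trailing commas via jsonc-parser, proper bracket matching, and early termination on large inputs:

```javascript
// Simplified sketch of the "JSON with explanatory text" case: slice from the
// first opening brace/bracket to the last closing one and try to parse it,
// falling back to the original text when nothing parses.
function extractJsonSpan(text) {
	if (typeof text !== 'string') return text;
	const start = text.search(/[\[{]/);
	if (start === -1) return text;
	const end = Math.max(text.lastIndexOf('}'), text.lastIndexOf(']'));
	if (end <= start) return text;
	const candidate = text.slice(start, end + 1);
	try {
		JSON.parse(candidate);
		return candidate;
	} catch {
		return text;
	}
}
```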
describe('generateObject', () => {
const mockParams = {
modelId: 'gemini-2.0-flash-exp',
apiKey: 'test-key',
messages: [{ role: 'user', content: 'Test message' }],
schema: { type: 'object', properties: {} },
objectName: 'testObject'
};
beforeEach(() => {
jest.clearAllMocks();
});
it('should handle JSON parsing errors by attempting manual extraction', async () => {
// Mock the parent generateObject to throw a JSON parsing error
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(new Error('Failed to parse JSON response'));
// Mock generateObject from ai module to return text with JSON
generateObject.mockResolvedValueOnce({
rawResponse: {
text: 'Here is the JSON:\n```json\n{"subtasks": [{"id": 1}]}\n```'
},
object: null,
usage: { promptTokens: 10, completionTokens: 20, totalTokens: 30 }
});
const result = await provider.generateObject(mockParams);
expect(log).toHaveBeenCalledWith(
'debug',
expect.stringContaining('attempting manual extraction')
);
expect(generateObject).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp',
authOptions: expect.objectContaining({
authType: 'api-key',
apiKey: 'test-key'
})
}),
messages: mockParams.messages,
schema: mockParams.schema,
mode: 'json', // Should use json mode for Gemini
system: expect.stringContaining(
'CRITICAL: You MUST respond with ONLY valid JSON'
),
maxTokens: undefined,
temperature: undefined
});
expect(result.object).toEqual({ subtasks: [{ id: 1 }] });
});
it('should throw error if manual extraction also fails', async () => {
// Mock parent to throw JSON error
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(new Error('Failed to parse JSON'));
// Mock generateObject to return unparseable text
generateObject.mockResolvedValueOnce({
rawResponse: { text: 'Not valid JSON at all' },
object: null
});
await expect(provider.generateObject(mockParams)).rejects.toThrow(
'Gemini CLI failed to generate valid JSON object: Failed to parse JSON'
);
});
it('should pass through non-JSON errors unchanged', async () => {
const otherError = new Error('Network error');
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(otherError);
await expect(provider.generateObject(mockParams)).rejects.toThrow(
'Network error'
);
expect(generateObject).not.toHaveBeenCalled();
});
it('should handle successful response from parent', async () => {
const mockResult = {
object: { test: 'data' },
usage: { inputTokens: 5, outputTokens: 10, totalTokens: 15 }
};
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockResolvedValueOnce(mockResult);
const result = await provider.generateObject(mockParams);
expect(result).toEqual(mockResult);
expect(generateObject).not.toHaveBeenCalled();
});
});
describe('system message support', () => {
const mockParams = {
modelId: 'gemini-2.0-flash-exp',
apiKey: 'test-key',
messages: [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello' }
],
maxTokens: 100,
temperature: 0.7
};
describe('generateText with system messages', () => {
beforeEach(() => {
jest.clearAllMocks();
});
it('should pass system prompt separately to AI SDK', async () => {
const { generateText } = await import('ai');
generateText.mockResolvedValueOnce({
text: 'Hello! How can I help you?',
usage: { promptTokens: 10, completionTokens: 8, totalTokens: 18 }
});
const result = await provider.generateText(mockParams);
expect(generateText).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: 'You are a helpful assistant',
messages: [{ role: 'user', content: 'Hello' }],
maxTokens: 100,
temperature: 0.7
});
expect(result.text).toBe('Hello! How can I help you?');
});
it('should handle messages without system prompt', async () => {
const { generateText } = await import('ai');
const paramsNoSystem = {
...mockParams,
messages: [{ role: 'user', content: 'Hello' }]
};
generateText.mockResolvedValueOnce({
text: 'Hi there!',
usage: { promptTokens: 5, completionTokens: 3, totalTokens: 8 }
});
await provider.generateText(paramsNoSystem);
expect(generateText).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: undefined,
messages: [{ role: 'user', content: 'Hello' }],
maxTokens: 100,
temperature: 0.7
});
});
});
describe('streamText with system messages', () => {
it('should pass system prompt separately to AI SDK', async () => {
const { streamText } = await import('ai');
const mockStream = { stream: 'mock-stream' };
streamText.mockResolvedValueOnce(mockStream);
const result = await provider.streamText(mockParams);
expect(streamText).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: 'You are a helpful assistant',
messages: [{ role: 'user', content: 'Hello' }],
maxTokens: 100,
temperature: 0.7
});
expect(result).toBe(mockStream);
});
});
describe('generateObject with system messages', () => {
const mockObjectParams = {
...mockParams,
schema: { type: 'object', properties: {} },
objectName: 'testObject'
};
it('should include system prompt in fallback generateObject call', async () => {
// Mock parent to throw JSON error
jest
.spyOn(
Object.getPrototypeOf(Object.getPrototypeOf(provider)),
'generateObject'
)
.mockRejectedValueOnce(new Error('Failed to parse JSON'));
// Mock direct generateObject call
generateObject.mockResolvedValueOnce({
object: { result: 'success' },
usage: { promptTokens: 15, completionTokens: 10, totalTokens: 25 }
});
const result = await provider.generateObject(mockObjectParams);
expect(generateObject).toHaveBeenCalledWith({
model: expect.objectContaining({
id: 'gemini-2.0-flash-exp'
}),
system: expect.stringContaining('You are a helpful assistant'),
messages: [{ role: 'user', content: 'Hello' }],
schema: mockObjectParams.schema,
mode: 'json',
maxTokens: 100,
temperature: 0.7
});
expect(result.object).toEqual({ result: 'success' });
});
});
});
// Note: Error handling for module loading is tested in integration tests
// since dynamic imports are difficult to mock properly in unit tests
describe('authentication scenarios', () => {
it('should use api-key auth type with API key', async () => {
await provider.getClient({ apiKey: 'gemini-test-key' });
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'api-key',
apiKey: 'gemini-test-key'
});
});
it('should use oauth-personal auth type without API key', async () => {
await provider.getClient({});
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'oauth-personal'
});
});
it('should handle empty string API key as no API key', async () => {
await provider.getClient({ apiKey: '' });
expect(createGeminiProvider).toHaveBeenCalledWith({
authType: 'oauth-personal'
});
});
});
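The authentication tests above encode a simple selection rule: a non-empty API key selects `api-key` auth, while a missing or empty key falls back to OAuth. A sketch of that rule (function and option names are assumptions for illustration):

```javascript
// Hedged sketch of the auth-selection rule the tests above assert on.
function resolveAuthOptions({ apiKey } = {}) {
	// An empty string is falsy, so it falls through to oauth-personal,
	// matching the "empty string API key" test case.
	return apiKey
		? { authType: 'api-key', apiKey }
		: { authType: 'oauth-personal' };
}
```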
});

View File

@@ -8,6 +8,7 @@ const mockGetResearchModelId = jest.fn();
const mockGetFallbackProvider = jest.fn();
const mockGetFallbackModelId = jest.fn();
const mockGetParametersForRole = jest.fn();
const mockGetResponseLanguage = jest.fn();
const mockGetUserId = jest.fn();
const mockGetDebugFlag = jest.fn();
const mockIsApiKeySet = jest.fn();
@@ -98,6 +99,7 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
getFallbackMaxTokens: mockGetFallbackMaxTokens,
getFallbackTemperature: mockGetFallbackTemperature,
getParametersForRole: mockGetParametersForRole,
getResponseLanguage: mockGetResponseLanguage,
getUserId: mockGetUserId,
getDebugFlag: mockGetDebugFlag,
getBaseUrlForRole: mockGetBaseUrlForRole,
@@ -117,7 +119,10 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
getBedrockBaseURL: mockGetBedrockBaseURL,
getVertexProjectId: mockGetVertexProjectId,
getVertexLocation: mockGetVertexLocation,
getMcpApiKeyStatus: mockGetMcpApiKeyStatus
getMcpApiKeyStatus: mockGetMcpApiKeyStatus,
// Providers without API keys
providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli']
}));
// Mock AI Provider Classes with proper methods
@@ -185,6 +190,11 @@ jest.unstable_mockModule('../../src/ai-providers/index.js', () => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
})),
GeminiCliProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
}))
}));
@@ -269,6 +279,7 @@ describe('Unified AI Services', () => {
if (role === 'fallback') return { maxTokens: 150, temperature: 0.6 };
return { maxTokens: 100, temperature: 0.5 }; // Default
});
mockGetResponseLanguage.mockReturnValue('English');
mockResolveEnvVariable.mockImplementation((key) => {
if (key === 'ANTHROPIC_API_KEY') return 'mock-anthropic-key';
if (key === 'PERPLEXITY_API_KEY') return 'mock-perplexity-key';
@@ -455,6 +466,68 @@ describe('Unified AI Services', () => {
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(1);
});
test('should use configured responseLanguage in system prompt', async () => {
mockGetResponseLanguage.mockReturnValue('中文');
mockAnthropicProvider.generateText.mockResolvedValue('中文回复');
const params = {
role: 'main',
systemPrompt: 'You are an assistant',
prompt: 'Hello'
};
await generateTextService(params);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledWith(
expect.objectContaining({
messages: [
{
role: 'system',
content: expect.stringContaining('Always respond in 中文')
},
{ role: 'user', content: 'Hello' }
]
})
);
expect(mockGetResponseLanguage).toHaveBeenCalledWith(fakeProjectRoot);
});
test('should pass custom projectRoot to getResponseLanguage', async () => {
const customRoot = '/custom/project/root';
mockGetResponseLanguage.mockReturnValue('Español');
mockAnthropicProvider.generateText.mockResolvedValue(
'Respuesta en Español'
);
const params = {
role: 'main',
systemPrompt: 'You are an assistant',
prompt: 'Hello',
projectRoot: customRoot
};
await generateTextService(params);
expect(mockGetResponseLanguage).toHaveBeenCalledWith(customRoot);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledWith(
expect.objectContaining({
messages: [
{
role: 'system',
content: expect.stringContaining('Always respond in Español')
},
{ role: 'user', content: 'Hello' }
]
})
);
});
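The two responseLanguage tests above assert that the configured language is injected into the system message as an "Always respond in …" instruction. A sketch of that injection step (the exact surrounding wording is an assumption; the tests only pin down the quoted fragment):

```javascript
// Hedged sketch of the language-injection step the tests above assert on.
function buildSystemPrompt(basePrompt, responseLanguage) {
	return `${basePrompt}\n\nAlways respond in ${responseLanguage}`;
}
```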
// Add more tests for edge cases:
// - Missing API keys (should throw from _resolveApiKey)
// - Unsupported provider configured (should skip and log)
// - Missing provider/model config for a role (should skip and log)
// - Missing prompt
// - Different initial roles (research, fallback)
// - generateObjectService (mock schema, check object result)
// - streamTextService (more complex to test, might need stream helpers)
test('should skip provider with missing API key and try next in fallback sequence', async () => {
// Setup isApiKeySet to return false for anthropic but true for perplexity
mockIsApiKeySet.mockImplementation((provider, session, root) => {

View File

@@ -48,11 +48,14 @@ const mockConsole = {
};
global.console = mockConsole;
// --- Define Mock Function Instances ---
const mockFindConfigPath = jest.fn(() => null); // Default to null, can be overridden in tests
// Mock path-utils to prevent config file path discovery and logging
jest.mock('../../src/utils/path-utils.js', () => ({
__esModule: true,
findProjectRoot: jest.fn(() => '/mock/project'),
findConfigPath: jest.fn(() => null), // Always return null to prevent config discovery
findConfigPath: mockFindConfigPath, // Use the mock function instance
findTasksPath: jest.fn(() => '/mock/tasks.json'),
findComplexityReportPath: jest.fn(() => null),
resolveTasksOutputPath: jest.fn(() => '/mock/tasks.json'),
@@ -136,12 +139,15 @@ const DEFAULT_CONFIG = {
global: {
logLevel: 'info',
debug: false,
defaultNumTasks: 10,
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseURL: 'http://localhost:11434/api',
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com'
}
bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com',
responseLanguage: 'English'
},
claudeCode: {}
};
// Other test data (VALID_CUSTOM_CONFIG, PARTIAL_CONFIG, INVALID_PROVIDER_CONFIG)
@@ -195,6 +201,61 @@ const INVALID_PROVIDER_CONFIG = {
}
};
// Claude Code test data
const VALID_CLAUDE_CODE_CONFIG = {
maxTurns: 5,
customSystemPrompt: 'You are a helpful coding assistant',
appendSystemPrompt: 'Always follow best practices',
permissionMode: 'acceptEdits',
allowedTools: ['Read', 'LS', 'Edit'],
disallowedTools: ['Write'],
mcpServers: {
'test-server': {
type: 'stdio',
command: 'node',
args: ['server.js'],
env: { NODE_ENV: 'test' }
}
},
commandSpecific: {
'add-task': {
maxTurns: 3,
permissionMode: 'plan'
},
research: {
customSystemPrompt: 'You are a research assistant'
}
}
};
const INVALID_CLAUDE_CODE_CONFIG = {
maxTurns: 'invalid', // Should be number
permissionMode: 'invalid-mode', // Invalid enum value
allowedTools: 'not-an-array', // Should be array
mcpServers: {
'invalid-server': {
type: 'invalid-type', // Invalid enum value
url: 'not-a-valid-url' // Invalid URL format
}
},
commandSpecific: {
'invalid-command': {
// Invalid command name
maxTurns: -1 // Invalid negative number
}
}
};
const PARTIAL_CLAUDE_CODE_CONFIG = {
maxTurns: 10,
permissionMode: 'default',
commandSpecific: {
'expand-task': {
customSystemPrompt: 'Focus on task breakdown'
}
}
};
// Define spies globally to be restored in afterAll
let consoleErrorSpy;
let consoleWarnSpy;
@@ -220,6 +281,7 @@ beforeEach(() => {
// Reset the external mock instances for utils
mockFindProjectRoot.mockReset();
mockLog.mockReset();
mockFindConfigPath.mockReset();
// --- Set up spies ON the imported 'fs' mock ---
fsExistsSyncSpy = jest.spyOn(fsMocked, 'existsSync');
@@ -228,6 +290,7 @@ beforeEach(() => {
// --- Default Mock Implementations ---
mockFindProjectRoot.mockReturnValue(MOCK_PROJECT_ROOT); // Default for utils.findProjectRoot
mockFindConfigPath.mockReturnValue(null); // Default to no config file found
fsExistsSyncSpy.mockReturnValue(true); // Assume files exist by default
// Default readFileSync: Return REAL models content, mocked config, or throw error
@@ -325,6 +388,162 @@ describe('Validation Functions', () => {
});
});
// --- Claude Code Validation Tests ---
describe('Claude Code Validation', () => {
test('validateClaudeCodeSettings should return valid settings for correct input', () => {
const result = configManager.validateClaudeCodeSettings(
VALID_CLAUDE_CODE_CONFIG
);
expect(result).toEqual(VALID_CLAUDE_CODE_CONFIG);
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
test('validateClaudeCodeSettings should return empty object for invalid input', () => {
const result = configManager.validateClaudeCodeSettings(
INVALID_CLAUDE_CODE_CONFIG
);
expect(result).toEqual({});
expect(consoleWarnSpy).toHaveBeenCalledWith(
expect.stringContaining('Warning: Invalid Claude Code settings in config')
);
});
test('validateClaudeCodeSettings should handle partial valid configuration', () => {
const result = configManager.validateClaudeCodeSettings(
PARTIAL_CLAUDE_CODE_CONFIG
);
expect(result).toEqual(PARTIAL_CLAUDE_CODE_CONFIG);
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
test('validateClaudeCodeSettings should return empty object for empty input', () => {
const result = configManager.validateClaudeCodeSettings({});
expect(result).toEqual({});
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
test('validateClaudeCodeSettings should handle null/undefined input', () => {
expect(configManager.validateClaudeCodeSettings(null)).toEqual({});
expect(configManager.validateClaudeCodeSettings(undefined)).toEqual({});
expect(consoleWarnSpy).toHaveBeenCalledTimes(2);
});
});
// --- Claude Code Getter Tests ---
describe('Claude Code Getter Functions', () => {
test('getClaudeCodeSettings should return default empty object when no config exists', () => {
// No config file exists, should return empty object
fsExistsSyncSpy.mockReturnValue(false);
const settings = configManager.getClaudeCodeSettings(MOCK_PROJECT_ROOT);
expect(settings).toEqual({});
});
test('getClaudeCodeSettings should return merged settings from config file', () => {
// Config file with Claude Code settings
const configWithClaudeCode = {
...VALID_CUSTOM_CONFIG,
claudeCode: VALID_CLAUDE_CODE_CONFIG
};
// Mock findConfigPath to return the mock config path
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(configWithClaudeCode);
if (path.basename(filePath) === 'supported-models.json') {
return JSON.stringify({
openai: [{ id: 'gpt-4o' }],
google: [{ id: 'gemini-1.5-pro-latest' }],
anthropic: [
{ id: 'claude-3-opus-20240229' },
{ id: 'claude-3-7-sonnet-20250219' },
{ id: 'claude-3-5-sonnet' }
],
perplexity: [{ id: 'sonar-pro' }],
ollama: [],
openrouter: []
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
const settings = configManager.getClaudeCodeSettings(
MOCK_PROJECT_ROOT,
true
); // Force reload
expect(settings).toEqual(VALID_CLAUDE_CODE_CONFIG);
});
test('getClaudeCodeSettingsForCommand should return command-specific settings', () => {
// Config with command-specific settings
const configWithClaudeCode = {
...VALID_CUSTOM_CONFIG,
claudeCode: VALID_CLAUDE_CODE_CONFIG
};
// Mock findConfigPath to return the mock config path
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (path.basename(filePath) === 'supported-models.json') return '{}';
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(configWithClaudeCode);
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
const settings = configManager.getClaudeCodeSettingsForCommand(
'add-task',
MOCK_PROJECT_ROOT,
true
); // Force reload
// Should merge global settings with command-specific settings
const expectedSettings = {
...VALID_CLAUDE_CODE_CONFIG,
...VALID_CLAUDE_CODE_CONFIG.commandSpecific['add-task']
};
expect(settings).toEqual(expectedSettings);
});
test('getClaudeCodeSettingsForCommand should return global settings for unknown command', () => {
// Config with Claude Code settings
const configWithClaudeCode = {
...VALID_CUSTOM_CONFIG,
claudeCode: PARTIAL_CLAUDE_CODE_CONFIG
};
// Mock findConfigPath to return the mock config path
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH);
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (path.basename(filePath) === 'supported-models.json') return '{}';
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(configWithClaudeCode);
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
const settings = configManager.getClaudeCodeSettingsForCommand(
'unknown-command',
MOCK_PROJECT_ROOT,
true
); // Force reload
// Should return global settings only
expect(settings).toEqual(PARTIAL_CLAUDE_CODE_CONFIG);
});
});
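The `getClaudeCodeSettingsForCommand` tests above pin down a merge rule: command-specific settings override the globals, and an unknown command gets the global settings unchanged. A sketch of that rule (the function name and exact merge mechanics are assumptions about config-manager internals):

```javascript
// Hedged sketch of the merge rule asserted above: spread the full settings
// object, then overlay the matching commandSpecific entry, if any.
function settingsForCommand(settings, commandName) {
	const specific = (settings && settings.commandSpecific) || {};
	return { ...settings, ...(specific[commandName] || {}) };
}
```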
// --- getConfig Tests ---
describe('getConfig Tests', () => {
test('should return default config if .taskmasterconfig does not exist', () => {
@@ -409,7 +628,11 @@ describe('getConfig Tests', () => {
...VALID_CUSTOM_CONFIG.models.fallback
}
},
global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global }
global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global },
claudeCode: {
...DEFAULT_CONFIG.claudeCode,
...VALID_CUSTOM_CONFIG.claudeCode
}
};
expect(config).toEqual(expectedMergedConfig);
expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
@@ -447,7 +670,11 @@ describe('getConfig Tests', () => {
research: { ...DEFAULT_CONFIG.models.research },
fallback: { ...DEFAULT_CONFIG.models.fallback }
},
global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global }
global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global },
claudeCode: {
...DEFAULT_CONFIG.claudeCode,
...VALID_CUSTOM_CONFIG.claudeCode
}
};
expect(config).toEqual(expectedMergedConfig);
expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8');
@@ -551,7 +778,11 @@ describe('getConfig Tests', () => {
},
fallback: { ...DEFAULT_CONFIG.models.fallback }
},
global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global }
global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global },
claudeCode: {
...DEFAULT_CONFIG.claudeCode,
...VALID_CUSTOM_CONFIG.claudeCode
}
};
expect(config).toEqual(expectedMergedConfig);
});
@@ -684,6 +915,82 @@ describe('Getter Functions', () => {
expect(logLevel).toBe(VALID_CUSTOM_CONFIG.global.logLevel);
});
test('getResponseLanguage should return responseLanguage from config', () => {
// Arrange
// Prepare a config object with responseLanguage property for this test
const configWithLanguage = JSON.stringify({
models: {
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
},
global: {
projectName: 'Test Project',
responseLanguage: '中文'
}
});
// Set up fs.readFileSync to return our test config
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return configWithLanguage;
}
if (path.basename(filePath) === 'supported-models.json') {
return JSON.stringify({
openai: [{ id: 'gpt-4-turbo' }]
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Ensure getConfig returns new values instead of cached ones
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act
const responseLanguage =
configManager.getResponseLanguage(MOCK_PROJECT_ROOT);
// Assert
expect(responseLanguage).toBe('中文');
});
test('getResponseLanguage should fall back to the default English when responseLanguage is not in config', () => {
// Arrange
const configWithoutLanguage = JSON.stringify({
models: {
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
},
global: {
projectName: 'Test Project'
// No responseLanguage property
}
});
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return configWithoutLanguage;
}
if (path.basename(filePath) === 'supported-models.json') {
return JSON.stringify({
openai: [{ id: 'gpt-4-turbo' }]
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Ensure getConfig returns new values instead of cached ones
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act
const responseLanguage =
configManager.getResponseLanguage(MOCK_PROJECT_ROOT);
// Assert
expect(responseLanguage).toBe('English');
});
// Add more tests for other getters (getResearchProvider, getProjectName, etc.)
});
@@ -738,5 +1045,116 @@ describe('getAllProviders', () => {
// Add tests for getParametersForRole if needed
// --- defaultNumTasks Tests ---
describe('Configuration Getters', () => {
test('getDefaultNumTasks should return configured value when config is valid', () => {
// Arrange: Mock fs.readFileSync to return valid config when called with the expected path
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {
defaultNumTasks: 15
}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(15);
});
test('getDefaultNumTasks should return fallback when config value is invalid', () => {
// Arrange: Mock fs.readFileSync to return invalid config
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {
defaultNumTasks: 'invalid'
}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
});
test('getDefaultNumTasks should return fallback when config value is missing', () => {
// Arrange: Mock fs.readFileSync to return config without defaultNumTasks
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
});
test('getDefaultNumTasks should handle non-existent config file', () => {
// Arrange: Mock file not existing
fsExistsSyncSpy.mockReturnValue(false);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks
});
test('getDefaultNumTasks should accept explicit project root', () => {
// Arrange: Mock fs.readFileSync to return valid config
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) {
return JSON.stringify({
global: {
defaultNumTasks: 20
}
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// Force reload to clear cache
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Act: Call getDefaultNumTasks with explicit project root
const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT);
// Assert
expect(result).toBe(20);
});
});
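The `getDefaultNumTasks` tests above all exercise a getter-with-fallback pattern: return the configured value when it is valid, else fall back to the default of 10. An illustrative sketch (the names and the exact validity check are assumptions about config-manager internals):

```javascript
// Illustrative getter-with-fallback pattern the tests above pin down.
const DEFAULTS = { global: { defaultNumTasks: 10 } };

function getDefaultNumTasks(config) {
	const value = config && config.global && config.global.defaultNumTasks;
	// Non-numeric or missing values fall back to the built-in default.
	return Number.isInteger(value) ? value : DEFAULTS.global.defaultNumTasks;
}
```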
// Note: Tests for setMainModel, setResearchModel were removed as the functions were removed in the implementation.
// If similar setter functions exist, add tests for them following the writeConfig pattern.

View File

@@ -179,7 +179,8 @@ logs
 # Task files
 # tasks.json
-# tasks/ `
+# tasks/
+`
 );
 expect(mockLog).toHaveBeenCalledWith(
 'success',
@@ -200,7 +201,8 @@ logs
 # Task files
 tasks.json
-tasks/ `
+tasks/
+`
 );
 expect(mockLog).toHaveBeenCalledWith(
 'success',
@@ -432,7 +434,8 @@ tasks/ `;
 const writtenContent = writeFileSyncSpy.mock.calls[0][1];
 expect(writtenContent).toBe(`# Task files
 # tasks.json
-# tasks/ `);
+# tasks/
+`);
 });
 });
 });


@@ -0,0 +1,528 @@
/**
* Tests for the remove-task MCP tool
*
* Note: This test does NOT test the actual implementation. It tests that:
* 1. The tool is registered correctly with the correct parameters
* 2. Arguments are passed correctly to removeTaskDirect
* 3. Error handling works as expected
* 4. Tag parameter is properly handled and passed through
*
* We do NOT import the real implementation - everything is mocked
*/
import { jest } from '@jest/globals';
// Mock EVERYTHING
const mockRemoveTaskDirect = jest.fn();
jest.mock('../../../../mcp-server/src/core/task-master-core.js', () => ({
removeTaskDirect: mockRemoveTaskDirect
}));
const mockHandleApiResult = jest.fn((result) => result);
const mockWithNormalizedProjectRoot = jest.fn((fn) => fn);
const mockCreateErrorResponse = jest.fn((msg) => ({
success: false,
error: { code: 'ERROR', message: msg }
}));
const mockFindTasksPath = jest.fn(() => '/mock/project/tasks.json');
jest.mock('../../../../mcp-server/src/tools/utils.js', () => ({
handleApiResult: mockHandleApiResult,
createErrorResponse: mockCreateErrorResponse,
withNormalizedProjectRoot: mockWithNormalizedProjectRoot
}));
jest.mock('../../../../mcp-server/src/core/utils/path-utils.js', () => ({
findTasksPath: mockFindTasksPath
}));
// Mock the z object from zod
const mockZod = {
object: jest.fn(() => mockZod),
string: jest.fn(() => mockZod),
boolean: jest.fn(() => mockZod),
optional: jest.fn(() => mockZod),
describe: jest.fn(() => mockZod),
_def: {
shape: () => ({
id: {},
file: {},
projectRoot: {},
confirm: {},
tag: {}
})
}
};
jest.mock('zod', () => ({
z: mockZod
}));
// DO NOT import the real module - create a fake implementation
// This is the fake implementation of registerRemoveTaskTool
const registerRemoveTaskTool = (server) => {
// Create simplified version of the tool config
const toolConfig = {
name: 'remove_task',
description: 'Remove a task or subtask permanently from the tasks list',
parameters: mockZod,
// Create a simplified mock of the execute function
execute: mockWithNormalizedProjectRoot(async (args, context) => {
const { log, session } = context;
try {
log.info && log.info(`Removing task(s) with ID(s): ${args.id}`);
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = mockFindTasksPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
} catch (error) {
log.error && log.error(`Error finding tasks.json: ${error.message}`);
return mockCreateErrorResponse(
`Failed to find tasks.json: ${error.message}`
);
}
log.info && log.info(`Using tasks file path: ${tasksJsonPath}`);
const result = await mockRemoveTaskDirect(
{
tasksJsonPath: tasksJsonPath,
id: args.id,
projectRoot: args.projectRoot,
tag: args.tag
},
log,
{ session }
);
if (result.success) {
log.info && log.info(`Successfully removed task: ${args.id}`);
} else {
log.error &&
log.error(`Failed to remove task: ${result.error.message}`);
}
return mockHandleApiResult(
result,
log,
'Error removing task',
undefined,
args.projectRoot
);
} catch (error) {
log.error && log.error(`Error in remove-task tool: ${error.message}`);
return mockCreateErrorResponse(error.message);
}
})
};
// Register the tool with the server
server.addTool(toolConfig);
};
describe('MCP Tool: remove-task', () => {
// Create mock server
let mockServer;
let executeFunction;
// Create mock logger
const mockLogger = {
debug: jest.fn(),
info: jest.fn(),
warn: jest.fn(),
error: jest.fn()
};
// Test data
const validArgs = {
id: '5',
projectRoot: '/mock/project/root',
file: '/mock/project/tasks.json',
confirm: true,
tag: 'feature-branch'
};
const multipleTaskArgs = {
id: '5,6.1,7',
projectRoot: '/mock/project/root',
tag: 'master'
};
// Standard responses
const successResponse = {
success: true,
data: {
totalTasks: 1,
successful: 1,
failed: 0,
removedTasks: [
{
id: 5,
title: 'Removed Task',
status: 'pending'
}
],
messages: ["Successfully removed task 5 from tag 'feature-branch'"],
errors: [],
tasksPath: '/mock/project/tasks.json',
tag: 'feature-branch'
}
};
const multipleTasksSuccessResponse = {
success: true,
data: {
totalTasks: 3,
successful: 3,
failed: 0,
removedTasks: [
{ id: 5, title: 'Task 5', status: 'pending' },
{ id: 1, title: 'Subtask 6.1', status: 'done', parentTaskId: 6 },
{ id: 7, title: 'Task 7', status: 'in-progress' }
],
messages: [
"Successfully removed task 5 from tag 'master'",
"Successfully removed subtask 6.1 from tag 'master'",
"Successfully removed task 7 from tag 'master'"
],
errors: [],
tasksPath: '/mock/project/tasks.json',
tag: 'master'
}
};
const errorResponse = {
success: false,
error: {
code: 'INVALID_TASK_ID',
message: "The following tasks were not found in tag 'feature-branch': 999"
}
};
const pathErrorResponse = {
success: false,
error: {
code: 'PATH_ERROR',
message: 'Failed to find tasks.json: No tasks.json found'
}
};
beforeEach(() => {
// Reset all mocks
jest.clearAllMocks();
// Create mock server
mockServer = {
addTool: jest.fn((config) => {
executeFunction = config.execute;
})
};
// Setup default successful response
mockRemoveTaskDirect.mockResolvedValue(successResponse);
mockFindTasksPath.mockReturnValue('/mock/project/tasks.json');
// Register the tool
registerRemoveTaskTool(mockServer);
});
test('should register the tool correctly', () => {
// Verify tool was registered
expect(mockServer.addTool).toHaveBeenCalledWith(
expect.objectContaining({
name: 'remove_task',
description: 'Remove a task or subtask permanently from the tasks list',
parameters: expect.any(Object),
execute: expect.any(Function)
})
);
// Verify the tool config was passed
const toolConfig = mockServer.addTool.mock.calls[0][0];
expect(toolConfig).toHaveProperty('parameters');
expect(toolConfig).toHaveProperty('execute');
});
test('should execute the tool with valid parameters including tag', async () => {
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify findTasksPath was called with correct arguments
expect(mockFindTasksPath).toHaveBeenCalledWith(
{
projectRoot: validArgs.projectRoot,
file: validArgs.file
},
mockLogger
);
// Verify removeTaskDirect was called with correct arguments including tag
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
tasksJsonPath: '/mock/project/tasks.json',
id: validArgs.id,
projectRoot: validArgs.projectRoot,
tag: validArgs.tag // This is the key test - tag parameter should be passed through
}),
mockLogger,
{
session: mockContext.session
}
);
// Verify handleApiResult was called
expect(mockHandleApiResult).toHaveBeenCalledWith(
successResponse,
mockLogger,
'Error removing task',
undefined,
validArgs.projectRoot
);
});
test('should handle multiple task IDs with tag context', async () => {
// Setup multiple tasks response
mockRemoveTaskDirect.mockResolvedValueOnce(multipleTasksSuccessResponse);
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(multipleTaskArgs, mockContext);
// Verify removeTaskDirect was called with comma-separated IDs and tag
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
id: '5,6.1,7',
tag: 'master'
}),
mockLogger,
expect.any(Object)
);
// Verify successful handling of multiple tasks
expect(mockHandleApiResult).toHaveBeenCalledWith(
multipleTasksSuccessResponse,
mockLogger,
'Error removing task',
undefined,
multipleTaskArgs.projectRoot
);
});
test('should handle missing tag parameter (defaults to current tag)', async () => {
const argsWithoutTag = {
id: '5',
projectRoot: '/mock/project/root'
};
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(argsWithoutTag, mockContext);
// Verify removeTaskDirect was called with undefined tag (should default to current tag)
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
id: '5',
projectRoot: '/mock/project/root',
tag: undefined // Should be undefined when not provided
}),
mockLogger,
expect.any(Object)
);
});
test('should handle errors from removeTaskDirect', async () => {
// Setup error response
mockRemoveTaskDirect.mockResolvedValueOnce(errorResponse);
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify removeTaskDirect was called
expect(mockRemoveTaskDirect).toHaveBeenCalled();
// Verify error logging
expect(mockLogger.error).toHaveBeenCalledWith(
"Failed to remove task: The following tasks were not found in tag 'feature-branch': 999"
);
// Verify handleApiResult was called with error response
expect(mockHandleApiResult).toHaveBeenCalledWith(
errorResponse,
mockLogger,
'Error removing task',
undefined,
validArgs.projectRoot
);
});
test('should handle path finding errors', async () => {
// Setup path finding error
mockFindTasksPath.mockImplementationOnce(() => {
throw new Error('No tasks.json found');
});
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
const result = await executeFunction(validArgs, mockContext);
// Verify error logging
expect(mockLogger.error).toHaveBeenCalledWith(
'Error finding tasks.json: No tasks.json found'
);
// Verify error response was returned
expect(mockCreateErrorResponse).toHaveBeenCalledWith(
'Failed to find tasks.json: No tasks.json found'
);
// Verify removeTaskDirect was NOT called
expect(mockRemoveTaskDirect).not.toHaveBeenCalled();
});
test('should handle unexpected errors in execute function', async () => {
// Setup unexpected error
mockRemoveTaskDirect.mockImplementationOnce(() => {
throw new Error('Unexpected error');
});
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify error logging
expect(mockLogger.error).toHaveBeenCalledWith(
'Error in remove-task tool: Unexpected error'
);
// Verify error response was returned
expect(mockCreateErrorResponse).toHaveBeenCalledWith('Unexpected error');
});
test('should properly handle withNormalizedProjectRoot wrapper', () => {
// Verify that withNormalizedProjectRoot was called with the execute function
expect(mockWithNormalizedProjectRoot).toHaveBeenCalledWith(
expect.any(Function)
);
});
test('should log appropriate info messages for successful operations', async () => {
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(validArgs, mockContext);
// Verify appropriate logging
expect(mockLogger.info).toHaveBeenCalledWith(
'Removing task(s) with ID(s): 5'
);
expect(mockLogger.info).toHaveBeenCalledWith(
'Using tasks file path: /mock/project/tasks.json'
);
expect(mockLogger.info).toHaveBeenCalledWith(
'Successfully removed task: 5'
);
});
test('should handle subtask removal with proper tag context', async () => {
const subtaskArgs = {
id: '5.2',
projectRoot: '/mock/project/root',
tag: 'feature-branch'
};
const subtaskSuccessResponse = {
success: true,
data: {
totalTasks: 1,
successful: 1,
failed: 0,
removedTasks: [
{
id: 2,
title: 'Removed Subtask',
status: 'pending',
parentTaskId: 5
}
],
messages: [
"Successfully removed subtask 5.2 from tag 'feature-branch'"
],
errors: [],
tasksPath: '/mock/project/tasks.json',
tag: 'feature-branch'
}
};
mockRemoveTaskDirect.mockResolvedValueOnce(subtaskSuccessResponse);
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
await executeFunction(subtaskArgs, mockContext);
// Verify removeTaskDirect was called with subtask ID and tag
expect(mockRemoveTaskDirect).toHaveBeenCalledWith(
expect.objectContaining({
id: '5.2',
tag: 'feature-branch'
}),
mockLogger,
expect.any(Object)
);
// Verify successful handling
expect(mockHandleApiResult).toHaveBeenCalledWith(
subtaskSuccessResponse,
mockLogger,
'Error removing task',
undefined,
subtaskArgs.projectRoot
);
});
});


@@ -0,0 +1,190 @@
/**
* Unit test to ensure fixDependenciesCommand writes JSON with the correct
* projectRoot and tag arguments so that tag data is preserved.
*/
import { jest } from '@jest/globals';
// Mock process.exit to prevent test termination
const mockProcessExit = jest.fn();
const originalExit = process.exit;
process.exit = mockProcessExit;
// Mock utils.js BEFORE importing the module under test
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
readJSON: jest.fn(),
writeJSON: jest.fn(),
log: jest.fn(),
findProjectRoot: jest.fn(() => '/mock/project/root'),
getCurrentTag: jest.fn(() => 'master'),
taskExists: jest.fn(() => true),
formatTaskId: jest.fn((id) => id),
findCycles: jest.fn(() => []),
isSilentMode: jest.fn(() => true),
resolveTag: jest.fn(() => 'master'),
getTasksForTag: jest.fn(() => []),
setTasksForTag: jest.fn(),
enableSilentMode: jest.fn(),
disableSilentMode: jest.fn()
}));
// Mock ui.js
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
displayBanner: jest.fn()
}));
// Mock task-manager.js
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager.js',
() => ({
generateTaskFiles: jest.fn()
})
);
// Mock external libraries
jest.unstable_mockModule('chalk', () => ({
default: {
green: jest.fn((text) => text),
cyan: jest.fn((text) => text),
bold: jest.fn((text) => text)
}
}));
jest.unstable_mockModule('boxen', () => ({
default: jest.fn((text) => text)
}));
// Import the mocked modules
const { readJSON, writeJSON, log, taskExists } = await import(
'../../../../../scripts/modules/utils.js'
);
// Import the module under test
const { fixDependenciesCommand } = await import(
'../../../../../scripts/modules/dependency-manager.js'
);
describe('fixDependenciesCommand tag preservation', () => {
beforeEach(() => {
jest.clearAllMocks();
mockProcessExit.mockClear();
});
afterAll(() => {
// Restore original process.exit
process.exit = originalExit;
});
it('calls writeJSON with projectRoot and tag parameters when changes are made', async () => {
const tasksPath = '/mock/tasks.json';
const projectRoot = '/mock/project/root';
const tag = 'master';
// Mock data WITH dependency issues to trigger writeJSON
const tasksDataWithIssues = {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [999] // Non-existent dependency to trigger fix
},
{
id: 2,
title: 'Task 2',
dependencies: []
}
],
tag: 'master',
_rawTaggedData: {
master: {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [999]
}
]
}
}
};
readJSON.mockReturnValue(tasksDataWithIssues);
taskExists.mockReturnValue(false); // Make dependency invalid to trigger fix
await fixDependenciesCommand(tasksPath, {
context: { projectRoot, tag }
});
// Verify readJSON was called with correct parameters
expect(readJSON).toHaveBeenCalledWith(tasksPath, projectRoot, tag);
// Verify writeJSON was called (should be triggered by removing invalid dependency)
expect(writeJSON).toHaveBeenCalled();
// Check the writeJSON call parameters
const writeJSONCalls = writeJSON.mock.calls;
const lastWriteCall = writeJSONCalls[writeJSONCalls.length - 1];
const [calledPath, _data, calledProjectRoot, calledTag] = lastWriteCall;
expect(calledPath).toBe(tasksPath);
expect(calledProjectRoot).toBe(projectRoot);
expect(calledTag).toBe(tag);
// Verify process.exit was NOT called (meaning the function succeeded)
expect(mockProcessExit).not.toHaveBeenCalled();
});
it('does not call writeJSON when no changes are needed', async () => {
const tasksPath = '/mock/tasks.json';
const projectRoot = '/mock/project/root';
const tag = 'master';
// Mock data WITHOUT dependency issues (no changes needed)
const cleanTasksData = {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [] // Clean, no issues
}
],
tag: 'master'
};
readJSON.mockReturnValue(cleanTasksData);
taskExists.mockReturnValue(true); // All dependencies exist
await fixDependenciesCommand(tasksPath, {
context: { projectRoot, tag }
});
// Verify readJSON was called
expect(readJSON).toHaveBeenCalledWith(tasksPath, projectRoot, tag);
// Verify writeJSON was NOT called (no changes needed)
expect(writeJSON).not.toHaveBeenCalled();
// Verify process.exit was NOT called
expect(mockProcessExit).not.toHaveBeenCalled();
});
it('handles early exit when no valid tasks found', async () => {
const tasksPath = '/mock/tasks.json';
// Mock invalid data to trigger early exit
readJSON.mockReturnValue(null);
await fixDependenciesCommand(tasksPath, {
context: { projectRoot: '/mock', tag: 'master' }
});
// Verify readJSON was called
expect(readJSON).toHaveBeenCalled();
// Verify writeJSON was NOT called (early exit)
expect(writeJSON).not.toHaveBeenCalled();
// Verify process.exit WAS called due to invalid data
expect(mockProcessExit).toHaveBeenCalledWith(1);
});
});
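The behavior these tests pin down, dropping dependencies that point at nonexistent tasks, can be sketched in isolation. This is a hypothetical sketch; the real `fixDependenciesCommand` in `dependency-manager.js` also handles cycles, subtask IDs, tag data, and reporting:

```javascript
// Hypothetical sketch of the core fix: remove dependency IDs that do not
// refer to any existing task. Returns whether anything changed, which is
// what gates the writeJSON call asserted in the tests above.
function fixInvalidDependencies(tasks) {
  const ids = new Set(tasks.map((t) => t.id));
  let changed = false;
  for (const task of tasks) {
    const deps = task.dependencies ?? [];
    const valid = deps.filter((dep) => ids.has(dep));
    if (valid.length !== deps.length) {
      task.dependencies = valid;
      changed = true;
    }
  }
  return changed;
}

const tasks = [
  { id: 1, title: 'Task 1', dependencies: [999] }, // 999 does not exist
  { id: 2, title: 'Task 2', dependencies: [1] }
];
console.log(fixInvalidDependencies(tasks)); // true -> a write is needed
console.log(tasks[0].dependencies); // []
console.log(fixInvalidDependencies(tasks)); // false -> no write needed
```

The true/false return mirrors the two test cases: data with a dangling `999` dependency triggers a write, while clean data leaves `writeJSON` uncalled.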

Some files were not shown because too many files have changed in this diff.