* feat: support MCP sampling
* support provider registry
* use standard config options for MCP provider
* update fastmcp to support passing params to requestSampling
* move key name definition to base provider
* moved check for required api key to provider class
* remove unused code
* more cleanup
* more cleanup
* refactor provider
* remove not needed files
* more cleanup
* more cleanup
* more cleanup
* update docs
* fix tests
* add tests
* format fix
* clean files
* merge fixes
* format fix
* feat: add support for MCP Sampling as AI provider
* initial mcp ai sdk
* fix references to old provider
* update models
* lint
* fix gemini-cli conflicts
* ran format
* Update src/provider-registry/index.js
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
* fix circular dependency
Circular Dependency Issue ✅ FIXED
Root Cause: BaseAIProvider was importing from index.js, which pulls in commands.js and other modules that eventually import back into the AI providers.
Solution: Changed the imports to use direct paths so the cycle is broken (see the sketch below):
- Updated base-provider.js to import log directly from utils.js
- Updated gemini-cli.js to import log directly from utils.js
Result: Fixed 11 failing tests in mcp-provider.test.js
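For illustration, a minimal sketch of the import change in base-provider.js (the relative paths and the log call signature are assumptions based on the description above, not the exact diff):

```js
// Before: pulling `log` through the barrel file created the cycle
// (index.js -> commands.js -> ... -> ai-providers -> base-provider.js -> index.js).
// import { log } from '../../scripts/modules/index.js';

// After: import the logger straight from its own module so no cycle can form.
import { log } from '../../scripts/modules/utils.js';

export class BaseAIProvider {
  generateText(params) {
    // Illustrative only; the real provider work happens in the subclasses.
    log('debug', `Generating text with ${this.constructor.name}`);
  }
}
```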
* fix gemini test
* fix(claude-code): recover from CLI JSON truncation bug (#913) (#920)
Gracefully handle SyntaxError thrown by @anthropic-ai/claude-code when the CLI truncates large JSON outputs (4–16 kB cut-offs).

Key points:
- Detect JSON parse error + existing buffered text in both doGenerate() and doStream() code paths.
- Convert the failure into a recoverable 'truncated' finish state and push a provider warning.
- Allows Task Master to continue parsing long PRDs / expand-task operations instead of crashing.

A patch changeset (.changeset/claude-code-json-truncation.md) is included for the next release.

Ref: eyaltoledano/claude-task-master#913
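A hedged sketch of that recovery idea (the helper name, the warning shape, and the buffered-text handling here are illustrative, not the actual patch):

```js
// Illustrative recovery path for a truncated claude-code CLI JSON payload.
function parseCliOutput(bufferedText, warnings) {
  try {
    return { message: JSON.parse(bufferedText), finishReason: 'stop' };
  } catch (err) {
    // Only recover from a parse failure when output was actually buffered;
    // anything else should still surface as a hard error.
    if (!(err instanceof SyntaxError) || bufferedText.length === 0) throw err;

    warnings.push({
      type: 'other',
      message: 'claude-code CLI truncated its JSON output; returning partial text.'
    });

    // Downgrade to a recoverable finish state instead of crashing the run.
    return { message: { text: bufferedText }, finishReason: 'truncated' };
  }
}
```

The same guard applies in both the doGenerate() and doStream() paths, so long parse-prd and expand-task runs degrade to partial output rather than an unhandled exception.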
* docs: fix gemini-cli authentication documentation (#923)
Remove erroneous 'gemini auth login' command references and replace them with the correct 'gemini' command authentication flow. Update the documentation to reflect the proper OAuth setup process via the gemini CLI's interactive interface.
* fix tests
* fix: update ai-sdk-provider-gemini-cli to 0.0.4 for improved authentication (#932)
- Fixed authentication compatibility issues with Google auth
- Added support for 'api-key' auth type alongside 'gemini-api-key' (see the sketch after this list)
- Resolved "Unsupported authType: undefined" runtime errors
- Updated @google/gemini-cli-core dependency to 0.1.9
- Improved documentation and removed invalid auth references
- Maintained backward compatibility while enhancing type validation
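For context, a hypothetical sketch of how the auth-type selection could be wired up in Task Master's gemini-cli provider wrapper. The 'api-key' and 'gemini-api-key' values come from the changelog above; the helper name, the 'oauth-personal' fallback value, and the overall shape are assumptions, not the package's documented API:

```js
// Hypothetical helper: pick the auth settings handed to the gemini-cli provider.
function resolveGeminiAuth(apiKey) {
  if (apiKey) {
    // 0.0.4 accepts 'api-key' in addition to the older 'gemini-api-key' value.
    return { authType: 'api-key', apiKey };
  }
  // With no key configured, fall back to the OAuth login performed through the
  // interactive `gemini` CLI (see the docs fix above); value name assumed.
  return { authType: 'oauth-personal' };
}

// Example: prefer an explicit key from the environment, otherwise use OAuth.
const authSettings = resolveGeminiAuth(process.env.GEMINI_API_KEY);
```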
* call logging directly
We need to patch upstream fastmcp to allow easier access to its logger, and bootstrap the Task Master MCP logger to use the fastmcp logger, which today is only exposed inside the tool handlers.
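A rough sketch of the limitation being described: fastmcp only hands out its logger through a tool's execute context, so any Task Master-wide MCP logger has to be bootstrapped from there. The bindMcpLogger bridge below is hypothetical; the FastMCP/addTool usage is meant to mirror fastmcp's tool API:

```js
import { FastMCP } from 'fastmcp';
import { z } from 'zod';

// Hypothetical bridge: capture the per-request fastmcp logger so other
// Task Master MCP modules can log through it instead of falling back to console.
let mcpLog = null;
export const bindMcpLogger = (log) => {
  mcpLog = log;
};

const server = new FastMCP({ name: 'task-master-ai', version: '1.0.0' });

server.addTool({
  name: 'get_tasks',
  description: 'List tasks',
  parameters: z.object({}),
  execute: async (_args, context) => {
    // context.log is currently the only place fastmcp exposes its logger.
    bindMcpLogger(context.log);
    context.log.info('listing tasks');
    return 'ok';
  }
});

// server.start({ transportType: 'stdio' }); // started elsewhere in the real server
```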
* fix tests
* removing logs until we figure out how to pass mcp logger
* format
* fix tests
* format
* clean up
* cleanup
* readme fix
---------
Co-authored-by: Oren Melamed <oren.m@gloat.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Ben Vargas <ben@vargas.com>
* fix: claude-4 not having the right max_tokens
* feat: add bedrock support
* chore: fix package-lock.json
* fix: rename baseUrl to baseURL
* feat: add azure support
* fix: final touches of azure integration
* feat: add google vertex provider
* chore: fix tests and refactor task-manager.test.js
* chore: move task 92 to 94
This commit introduces a standardized pattern for capturing and propagating AI usage telemetry (cost, tokens, model used) across the Task Master stack and applies it to the 'add-task' functionality.
Key changes include:
- **Telemetry Pattern Definition:**
- Added a new telemetry rule defining the integration pattern for core logic, direct functions, MCP tools, and CLI commands.
- Updated the related rules to reference the new telemetry rule.
- **Core Telemetry Implementation:**
- Refactored the unified AI service to generate and return a telemetry object alongside the main AI result.
- Fixed an MCP server startup crash by removing a redundant local load of the model definitions and instead reusing the already-imported data for cost calculations.
- Added cost, token usage, and model information to the telemetry object.
- **'add-task' Integration:**
- Modified the core add-task logic to receive the telemetry data from the AI service, return it, and call the new UI display function for CLI output.
- Updated the direct function to receive the telemetry data from the core function and include it in the payload of its response.
- Ensured the MCP tool correctly passes the telemetry data through in its response.
- Updated the CLI command to pass context correctly to the core function and rely on it for CLI telemetry display.
- **UI Enhancement:**
- Added a function to the UI module that shows telemetry details in the CLI.
- **Project Management:**
- Added subtasks 77.6 through 77.12 to track the rollout of this telemetry pattern to the other AI-powered commands.
This establishes the foundation for tracking AI usage across the application.
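As a rough sketch of the pattern described above (the stub helpers and exact field names are illustrative, not the implementation):

```js
// Illustrative stubs standing in for the real provider call and cost lookup.
async function callProvider(params) {
  return { text: '...', usage: { promptTokens: 0, completionTokens: 0 } };
}
function computeCost(modelId, usage) {
  return 0; // the real code looks prices up in the supported-models data
}

// Unified AI service: return the AI result plus a telemetry object.
async function generateTextService(params) {
  const { text, usage } = await callProvider(params);
  const telemetryData = {
    modelUsed: params.modelId,
    inputTokens: usage.promptTokens,
    outputTokens: usage.completionTokens,
    totalCost: computeCost(params.modelId, usage)
  };
  // Core command logic (e.g. add-task) returns telemetryData to its direct
  // function, which places it in the MCP response payload; the CLI path
  // instead renders it with the new UI summary function.
  return { mainResult: text, telemetryData };
}
```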
Resolves persistent 404 'Not Found' errors when calling Anthropic models via the Vercel AI SDK. The primary issue was likely related to incorrect or missing API headers.
- Refactors the Anthropic provider (src/ai-providers/anthropic.js) to use the standard 'anthropic-version' header instead of potentially outdated/incorrect beta headers when creating the client instance (see the sketch after this list).
- Updates the default fallback model ID in .taskmasterconfig to 'claude-3-5-sonnet-20241022'.
- Fixes the interactive model setup (task-master models --setup) in scripts/modules/commands.js to correctly filter and default the main model selection.
- Improves the cost display in the 'task-master models' command output to explicitly show 'Free' for models with zero cost.
- Updates description for the 'id' parameter in the 'set_task_status' MCP tool definition for clarity.
- Updates list of models and costs
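For illustration, the header change amounts to something like the following when creating the client (a sketch assuming @ai-sdk/anthropic's createAnthropic options; '2023-06-01' is the current stable Anthropic API version string, not necessarily the exact value in the change):

```js
import { createAnthropic } from '@ai-sdk/anthropic';

// Send the standard versioning header instead of ad-hoc beta headers.
const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  headers: {
    'anthropic-version': '2023-06-01'
  }
});

// The model id then matches the new fallback default in .taskmasterconfig.
const model = anthropic('claude-3-5-sonnet-20241022');
```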
- Unified Service: Introduced 'scripts/modules/ai-services-unified.js' to centralize AI interactions using provider modules ('src/ai-providers/') and the Vercel AI SDK.
- Provider Modules: Implemented 'anthropic.js' and 'perplexity.js' wrappers for the Vercel AI SDK.
- 'updateSubtaskById' Fix: Refactored the AI call within 'updateSubtaskById' to use 'generateTextService' from the unified layer, resolving runtime errors related to parameter passing and streaming. This serves as the pattern for refactoring other AI calls in 'scripts/modules/task-manager/'.
- Task Status: Marked Subtask 61.19 as 'done'.
- Rules: Added new 'ai-services.mdc' rule.
This centralizes AI logic, replacing previous direct SDK calls and custom implementations. API keys are resolved via 'resolveEnvVariable' within the service layer. The refactoring of 'updateSubtaskById' establishes the standard approach for migrating other AI-dependent functions in the task manager module to use the unified service.
Relates to Task 61.
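A condensed sketch of the call shape this refactor standardizes on (the helper name, import path, parameter names, and returned fields are illustrative; generateTextService is the unified-layer entry point named above):

```js
import { generateTextService } from './ai-services-unified.js';

// Pattern established by the updateSubtaskById refactor and intended for the
// other AI-dependent task-manager functions: no direct SDK calls or streaming
// plumbing in command code, just one call into the unified service layer.
async function appendSubtaskNotes(subtask, userPrompt) {
  const { mainResult } = await generateTextService({
    role: 'main', // provider/model and API key resolved inside the service layer
    prompt: userPrompt,
    systemPrompt: 'You update subtask implementation notes.'
  });
  return { ...subtask, details: `${subtask.details ?? ''}\n${mainResult}` };
}
```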