# Available Models as of July 8, 2025
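Task Master assigns one model to each of three roles, and the tables below list the supported options for each: a **main** model for everyday task generation, a **research** model for research-backed operations, and a **fallback** model used when the main model fails. A dash indicates that no value is published for that field. The sketch below shows how a role assignment could look in the project configuration; the file path and key names are assumptions about Task Master's config format rather than a definitive schema, so verify them with `task-master models` in your installation.

```jsonc
// .taskmaster/config.json (illustrative sketch; path and key names are assumptions)
{
  "models": {
    // Pick any provider / model id pair from the matching table below.
    "main":     { "provider": "anthropic",  "modelId": "claude-sonnet-4-20250514" },
    "research": { "provider": "perplexity", "modelId": "sonar-pro" },
    "fallback": { "provider": "anthropic",  "modelId": "claude-3-7-sonnet-20250219" }
  }
}
```

In practice this file is normally managed through the CLI rather than edited by hand.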

## Main Models

| Provider | Model Name | SWE Score | Input Cost ($/1M tokens) | Output Cost ($/1M tokens) |
|----------|------------|-----------|--------------------------|---------------------------|
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
| azure | gpt-4o | 0.332 | 2.5 | 10 |
| azure | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| azure | gpt-4-1 | - | 2 | 10 |
| openai | gpt-4o | 0.332 | 2.5 | 10 |
| openai | o1 | 0.489 | 15 | 60 |
| openai | o3 | 0.5 | 2 | 8 |
| openai | o3-mini | 0.493 | 1.1 | 4.4 |
| openai | o4-mini | 0.45 | 1.1 | 4.4 |
| openai | o1-mini | 0.4 | 1.1 | 4.4 |
| openai | o1-pro | - | 150 | 600 |
| openai | gpt-4-5-preview | 0.38 | 75 | 150 |
| openai | gpt-4-1-mini | - | 0.4 | 1.6 |
| openai | gpt-4-1-nano | - | 0.1 | 0.4 |
| openai | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| google | gemini-2.5-pro-preview-05-06 | 0.638 | - | - |
| google | gemini-2.5-pro-preview-03-25 | 0.638 | - | - |
| google | gemini-2.5-flash-preview-04-17 | 0.604 | - | - |
| google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
| google | gemini-2.0-flash-lite | - | - | - |
| perplexity | sonar-pro | - | 3 | 15 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | - | 3 | 15 |
| xai | grok-3-fast | - | 5 | 25 |
| mcp | mcp-sampling | - | 0 | 0 |
| ollama | devstral:latest | - | 0 | 0 |
| ollama | qwen3:latest | - | 0 | 0 |
| ollama | qwen3:14b | - | 0 | 0 |
| ollama | qwen3:32b | - | 0 | 0 |
| ollama | mistral-small3.1:latest | - | 0 | 0 |
| ollama | llama3.3:latest | - | 0 | 0 |
| ollama | phi4:latest | - | 0 | 0 |
| openrouter | google/gemini-2.5-flash-preview-05-20 | - | 0.15 | 0.6 |
| openrouter | google/gemini-2.5-flash-preview-05-20:thinking | - | 0.15 | 3.5 |
| openrouter | google/gemini-2.5-pro-exp-03-25 | - | 0 | 0 |
| openrouter | deepseek/deepseek-chat-v3-0324:free | - | 0 | 0 |
| openrouter | deepseek/deepseek-chat-v3-0324 | - | 0.27 | 1.1 |
| openrouter | openai/gpt-4.1 | - | 2 | 8 |
| openrouter | openai/gpt-4.1-mini | - | 0.4 | 1.6 |
| openrouter | openai/gpt-4.1-nano | - | 0.1 | 0.4 |
| openrouter | openai/o3 | - | 10 | 40 |
| openrouter | openai/codex-mini | - | 1.5 | 6 |
| openrouter | openai/gpt-4o-mini | - | 0.15 | 0.6 |
| openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
| openrouter | openai/o4-mini-high | - | 1.1 | 4.4 |
| openrouter | openai/o1-pro | - | 150 | 600 |
| openrouter | meta-llama/llama-3.3-70b-instruct | - | 120 | 600 |
| openrouter | meta-llama/llama-4-maverick | - | 0.18 | 0.6 |
| openrouter | meta-llama/llama-4-scout | - | 0.08 | 0.3 |
| openrouter | qwen/qwen-max | - | 1.6 | 6.4 |
| openrouter | qwen/qwen-turbo | - | 0.05 | 0.2 |
| openrouter | qwen/qwen3-235b-a22b | - | 0.14 | 2 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | - | 0 | 0 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct | - | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | - | 0.03 | 0.07 |
| openrouter | thudm/glm-4-32b:free | - | 0 | 0 |
| groq | llama-3.3-70b-versatile | 0.55 | 0.59 | 0.79 |
| groq | llama-3.1-8b-instant | 0.32 | 0.05 | 0.08 |
| groq | llama-4-scout | 0.45 | 0.11 | 0.34 |
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | mixtral-8x7b-32768 | 0.35 | 0.24 | 0.24 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| groq | gemma2-9b-it | 0.3 | 0.2 | 0.2 |
| groq | whisper-large-v3 | - | 0.11 | 0 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |

## Research Models

| Provider | Model Name | SWE Score | Input Cost ($/1M tokens) | Output Cost ($/1M tokens) |
|----------|------------|-----------|--------------------------|---------------------------|
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| bedrock | us.deepseek.r1-v1:0 | - | 1.35 | 5.4 |
| openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
| openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
| perplexity | sonar-pro | - | 3 | 15 |
| perplexity | sonar | - | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | - | 3 | 15 |
| xai | grok-3-fast | - | 5 | 25 |
| groq | llama-3.3-70b-versatile | 0.55 | 0.59 | 0.79 |
| groq | llama-4-scout | 0.45 | 0.11 | 0.34 |
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| mcp | mcp-sampling | - | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |

## Fallback Models

| Provider | Model Name | SWE Score | Input Cost ($/1M tokens) | Output Cost ($/1M tokens) |
|----------|------------|-----------|--------------------------|---------------------------|
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
| azure | gpt-4o | 0.332 | 2.5 | 10 |
| azure | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| azure | gpt-4-1 | - | 2 | 10 |
| openai | gpt-4o | 0.332 | 2.5 | 10 |
| openai | o3 | 0.5 | 2 | 8 |
| openai | o4-mini | 0.45 | 1.1 | 4.4 |
| google | gemini-2.5-pro-preview-05-06 | 0.638 | - | - |
| google | gemini-2.5-pro-preview-03-25 | 0.638 | - | - |
| google | gemini-2.5-flash-preview-04-17 | 0.604 | - | - |
| google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
| google | gemini-2.0-flash-lite | - | - | - |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | - | 3 | 15 |
| xai | grok-3-fast | - | 5 | 25 |
| mcp | mcp-sampling | - | 0 | 0 |
| ollama | devstral:latest | - | 0 | 0 |
| ollama | qwen3:latest | - | 0 | 0 |
| ollama | qwen3:14b | - | 0 | 0 |
| ollama | qwen3:32b | - | 0 | 0 |
| ollama | mistral-small3.1:latest | - | 0 | 0 |
| ollama | llama3.3:latest | - | 0 | 0 |
| ollama | phi4:latest | - | 0 | 0 |
| openrouter | google/gemini-2.5-flash-preview-05-20 | - | 0.15 | 0.6 |
| openrouter | google/gemini-2.5-flash-preview-05-20:thinking | - | 0.15 | 3.5 |
| openrouter | google/gemini-2.5-pro-exp-03-25 | - | 0 | 0 |
| openrouter | deepseek/deepseek-chat-v3-0324:free | - | 0 | 0 |
| openrouter | openai/gpt-4.1 | - | 2 | 8 |
| openrouter | openai/gpt-4.1-mini | - | 0.4 | 1.6 |
| openrouter | openai/gpt-4.1-nano | - | 0.1 | 0.4 |
| openrouter | openai/o3 | - | 10 | 40 |
| openrouter | openai/codex-mini | - | 1.5 | 6 |
| openrouter | openai/gpt-4o-mini | - | 0.15 | 0.6 |
| openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
| openrouter | openai/o4-mini-high | - | 1.1 | 4.4 |
| openrouter | openai/o1-pro | - | 150 | 600 |
| openrouter | meta-llama/llama-3.3-70b-instruct | - | 120 | 600 |
| openrouter | meta-llama/llama-4-maverick | - | 0.18 | 0.6 |
| openrouter | meta-llama/llama-4-scout | - | 0.08 | 0.3 |
| openrouter | qwen/qwen-max | - | 1.6 | 6.4 |
| openrouter | qwen/qwen-turbo | - | 0.05 | 0.2 |
| openrouter | qwen/qwen3-235b-a22b | - | 0.14 | 2 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | - | 0 | 0 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct | - | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | - | 0.03 | 0.07 |
| openrouter | thudm/glm-4-32b:free | - | 0 | 0 |
| groq | llama-3.3-70b-versatile | 0.55 | 0.59 | 0.79 |
| groq | llama-3.1-8b-instant | 0.32 | 0.05 | 0.08 |
| groq | llama-4-scout | 0.45 | 0.11 | 0.34 |
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | mixtral-8x7b-32768 | 0.35 | 0.24 | 0.24 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | gemma2-9b-it | 0.3 | 0.2 | 0.2 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
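
Models can be assigned to a role from the CLI, for example `task-master models --set-main claude-sonnet-4-20250514` (with analogous `--set-research` and `--set-fallback` flags) or interactively via `task-master models --setup`; the exact flag names here are assumptions based on the project's CLI conventions and may differ between versions, so check `task-master models --help`.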